Science.gov

Sample records for adaptive method based

  1. Adaptive Discontinuous Galerkin Methods in Multiwavelets Bases

    SciTech Connect

    Archibald, Richard K; Fann, George I; Shelton Jr, William Allison

    2011-01-01

    We use a multiwavelet basis with the Discontinuous Galerkin (DG) method to produce a multi-scale DG method. We apply this Multiwavelet DG method to convection and convection-diffusion problems in multiple dimensions. Merging the DG method with multiwavelets allows the adaptivity in the DG method to be resolved through manipulation of multiwavelet coefficients rather than grid manipulation. Additionally, the Multiwavelet DG method is tested on non-linear equations in one dimension and on the cubed sphere.

  2. Adaptive Kernel Based Machine Learning Methods

    DTIC Science & Technology

    2012-10-15

    ...multiscale collocation method with a matrix compression strategy to discretize the system of integral equations and then use the multilevel... augmentation method to solve the resulting discrete system. A priori and a posteriori parameter choice strategies are developed for these methods. The... performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed...

  3. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE PAGES

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  4. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    SciTech Connect

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  5. Adaptive Set-Based Methods for Association Testing.

    PubMed

    Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo

    2016-02-01

    With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test.
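
    The ARTP procedure described above can be summarized in a short, self-contained sketch. The Python code below is a minimal illustration of the two-layer permutation scheme behind ARTP (rank truncated product statistics over several candidate truncation points, with the minimum per-point p-value recalibrated against the same permutations); it is not code from the cited study, and the truncation points, synthetic p-values, and function name are illustrative assumptions.

        import numpy as np

        def artp_pvalue(p_obs, p_perm, truncation_points=(1, 5, 10)):
            """Adaptive rank truncated product (ARTP) set-level test via permutations.

            p_obs  : (m,) SNP-level p-values for the observed phenotype.
            p_perm : (B, m) SNP-level p-values for B phenotype permutations.
            """
            B, m = p_perm.shape
            ks = [k for k in truncation_points if k <= m]

            def rtp(p, k):
                # Rank truncated product: product of the k smallest p-values,
                # computed on the -log scale (larger = stronger evidence).
                return -np.log(np.sort(p)[:k]).sum()

            w_obs = np.array([rtp(p_obs, k) for k in ks])                  # (K,)
            w_perm = np.array([[rtp(p_perm[b], k) for k in ks]
                               for b in range(B)])                         # (B, K)

            # p-value per truncation point for the observed data ...
            p_obs_k = (1.0 + (w_perm >= w_obs).sum(axis=0)) / (B + 1.0)
            # ... and for every permutation (rank within the permutation null).
            p_perm_k = np.array([(w_perm >= w_perm[b]).sum(axis=0) / B
                                 for b in range(B)])

            # ARTP statistic: best p-value over truncation points, recalibrated
            # against the permutation distribution of the same minimum.
            return (1.0 + (p_perm_k.min(axis=1) <= p_obs_k.min()).sum()) / (B + 1.0)

        # Synthetic example: 20 SNPs (3 associated), 999 permutations.
        rng = np.random.default_rng(0)
        p_obs = np.concatenate([rng.uniform(0, 0.01, 3), rng.uniform(0, 1, 17)])
        p_perm = rng.uniform(0, 1, size=(999, 20))
        print(artp_pvalue(p_obs, p_perm))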

  6. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  7. Adaptive optics image restoration algorithm based on wavefront reconstruction and adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen

    2016-11-01

    To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method. First, wavefront reconstruction using Zernike polynomials is used to obtain an initial estimate of the point spread function (PSF). Then, we develop the proposed iterative solution for AO image restoration, addressing the joint deconvolution problem. Image restoration experiments are performed to verify the restoration effect of the proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measures (for a real AO image) from our algorithm are increased by 36.92% and 27.44%, respectively, the computation time is decreased by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.

  8. Adaptive Algebraic Multigrid Methods

    SciTech Connect

    Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J

    2004-04-09

    Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.

  9. Method for reducing the drag of blunt-based vehicles by adaptively increasing forebody roughness

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A. (Inventor); Saltzman, Edwin J. (Inventor); Moes, Timothy R. (Inventor); Iliff, Kenneth W. (Inventor)

    2005-01-01

    A method for reducing drag upon a blunt-based vehicle by adaptively increasing forebody roughness to increase drag at the roughened area of the forebody, which results in a decrease in drag at the base of this vehicle, and in total vehicle drag.

  10. Search Control Algorithm Based on Random Step Size Hill-Climbing Method for Adaptive PMD Compensation

    NASA Astrophysics Data System (ADS)

    Tanizawa, Ken; Hirose, Akira

    Adaptive polarization mode dispersion (PMD) compensation is required for the speed-up and advancement of present optical communications. The combination of a tunable PMD compensator and its adaptive control method achieves adaptive PMD compensation. In this paper, we report an effective search control algorithm for the feedback control of the PMD compensator. The algorithm is based on the hill-climbing method; however, the step size changes randomly to prevent the convergence from being trapped at a local maximum or on a flat region, unlike in the conventional hill-climbing method. The random step sizes are drawn from Gaussian probability density functions. We conducted transmission simulations at 160 Gb/s, and the results show that the proposed method provides better compensator control than the conventional hill-climbing method.
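
    As a rough illustration of the control idea, the following Python sketch implements hill climbing in which each trial step is drawn from a Gaussian probability density function and is kept only when the monitored feedback improves. It is not the authors' implementation; the feedback signal is abstracted as a generic callable, and the step deviation, bounds, and iteration count are assumed values.

        import numpy as np

        def random_step_hill_climb(feedback, x0, sigma=0.05, n_iter=500,
                                   bounds=(0.0, 1.0), rng=None):
            """Hill climbing with a Gaussian-distributed random step size.

            feedback : callable returning the monitored quality signal to maximize
                       (in adaptive PMD compensation this would come from the
                       feedback monitor of the compensator).
            x0       : initial control vector of the compensator.
            """
            rng = np.random.default_rng() if rng is None else rng
            x = np.asarray(x0, dtype=float)
            best = feedback(x)
            for _ in range(n_iter):
                # Random step drawn from a Gaussian PDF instead of a fixed step,
                # which helps escape local maxima and flat regions of the feedback.
                cand = np.clip(x + rng.normal(0.0, sigma, size=x.shape), *bounds)
                val = feedback(cand)
                if val > best:      # keep the move only if the feedback improves
                    x, best = cand, val
            return x, best

        # Toy usage: maximize a smooth two-dimensional surrogate feedback signal.
        f = lambda v: -np.sum((v - 0.7) ** 2)
        print(random_step_hill_climb(f, x0=[0.2, 0.2], rng=np.random.default_rng(1)))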

  11. Adaptive remeshing method in 2D based on refinement and coarsening techniques

    NASA Astrophysics Data System (ADS)

    Giraud-Moreau, L.; Borouchaki, H.; Cherouat, A.

    2007-04-01

    The analysis of mechanical structures using the Finite Element Method, in the framework of large elastoplastic strains, needs frequent remeshing of the deformed domain during computation. Remeshing is necessary for two main reasons, the large geometric distortion of finite elements and the adaptation of the mesh size to the physical behavior of the solution. This paper presents an adaptive remeshing method to remesh a mechanical structure in two dimensions subjected to large elastoplastic deformations with damage. The proposed remeshing technique includes adaptive refinement and coarsening procedures, based on geometrical and physical criteria. The proposed method has been integrated in a computational environment using the ABAQUS solver. Numerical examples show the efficiency of the proposed approach.

  12. Adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients

    PubMed Central

    Xia, Kelin; Zhan, Meng; Wan, Decheng; Wei, Guo-Wei

    2011-01-01

    Mesh deformation methods are a versatile strategy for solving partial differential equations (PDEs) with a vast variety of practical applications. However, these methods break down for elliptic PDEs with discontinuous coefficients, namely, elliptic interface problems. For this class of problems, additional interface jump conditions are required to maintain the well-posedness of the governing equation. Consequently, in order to achieve high accuracy and high-order convergence, additional numerical algorithms are required to enforce the interface jump conditions in solving elliptic interface problems. The present work introduces an interface-technique-based adaptively deformed mesh strategy for resolving elliptic interface problems. We take advantage of the high accuracy, flexibility, and robustness of the matched interface and boundary (MIB) method to construct an adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients. The proposed method generates deformed meshes in the physical domain and solves the transformed governing equations in the computational domain, which maintains regular Cartesian meshes. The mesh deformation is realized by a mesh transformation PDE, which controls the mesh redistribution by a source term. The source term consists of a monitor function, which builds in mesh contraction rules. Both interface-geometry-based and solution-gradient-based deformed meshes are constructed to reduce the L∞ and L2 errors in solving elliptic interface problems. The proposed adaptively deformed mesh based interface method is extensively validated by many numerical experiments. Numerical results indicate that the adaptively deformed mesh based interface method outperforms the original MIB method for dealing with elliptic interface problems. PMID:22586356
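
    The monitor-function idea that drives the mesh redistribution can be illustrated in one dimension with a few lines of Python. The sketch below equidistributes an assumed monitor function so that nodes contract near a sharp interface; it is only a 1-D illustration of the general principle, not the MIB-based transformation PDE used in the paper.

        import numpy as np

        def equidistribute(x_uniform, monitor, n_nodes=41):
            """Redistribute mesh nodes so that the monitor function is
            equidistributed: nodes cluster where the monitor is large."""
            x = np.asarray(x_uniform, float)
            m = monitor(x)
            # Cumulative "mass" of the monitor via the trapezoidal rule.
            mass = np.concatenate([[0.0],
                                   np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(x))])
            mass /= mass[-1]
            # Invert the cumulative distribution: equal mass between nodes.
            return np.interp(np.linspace(0.0, 1.0, n_nodes), mass, x)

        # Example: cluster nodes around an interface at x = 0.5 using a
        # gradient-like monitor with a floor so some nodes remain everywhere.
        xu = np.linspace(0.0, 1.0, 401)
        monitor = lambda x: 1.0 + 50.0 * np.exp(-((x - 0.5) / 0.02) ** 2)
        x_adapted = equidistribute(xu, monitor)
        print(np.diff(x_adapted).min(), np.diff(x_adapted).max())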

  13. Phylogeny-based comparative methods question the adaptive nature of sporophytic specializations in mosses.

    PubMed

    Huttunen, Sanna; Olsson, Sanna; Buchbender, Volker; Enroth, Johannes; Hedenäs, Lars; Quandt, Dietmar

    2012-01-01

    Adaptive evolution has often been proposed to explain correlations between habitats and certain phenotypes. In mosses, a high frequency of species with specialized sporophytic traits in exposed or epiphytic habitats was suggested, already 100 years ago, to be a result of adaptation. We tested this hypothesis by contrasting phylogenetic and morphological data from two moss families, Neckeraceae and Lembophyllaceae, both of which show parallel shifts to a specialized morphology and to exposed epiphytic or epilithic habitats. Phylogeny-based tests for correlated evolution revealed that the evolution of four sporophytic traits is correlated with a habitat shift. For three of them, evolutionary rates of dual character-state changes suggest that habitat shifts appear prior to changes in morphology. This suggests that they could have evolved as adaptations to new habitats. Regarding the fourth correlated trait, the specialized morphology had already evolved before the habitat shift. In addition, several other specialized "epiphytic" traits show no correlation with a habitat shift. Besides adaptive diversification, other processes thus also affect the match between phenotype and environment. Several potential factors, such as complex genetic and developmental pathways yielding the same phenotypes, differences in strength of selection, or constraints in phenotypic evolution, may lead to an inability of phylogeny-based comparative methods to detect potential adaptations.

  14. Phylogeny-Based Comparative Methods Question the Adaptive Nature of Sporophytic Specializations in Mosses

    PubMed Central

    Huttunen, Sanna; Olsson, Sanna; Buchbender, Volker; Enroth, Johannes; Hedenäs, Lars; Quandt, Dietmar

    2012-01-01

    Adaptive evolution has often been proposed to explain correlations between habitats and certain phenotypes. In mosses, a high frequency of species with specialized sporophytic traits in exposed or epiphytic habitats was suggested, already 100 years ago, to be a result of adaptation. We tested this hypothesis by contrasting phylogenetic and morphological data from two moss families, Neckeraceae and Lembophyllaceae, both of which show parallel shifts to a specialized morphology and to exposed epiphytic or epilithic habitats. Phylogeny-based tests for correlated evolution revealed that the evolution of four sporophytic traits is correlated with a habitat shift. For three of them, evolutionary rates of dual character-state changes suggest that habitat shifts appear prior to changes in morphology. This suggests that they could have evolved as adaptations to new habitats. Regarding the fourth correlated trait, the specialized morphology had already evolved before the habitat shift. In addition, several other specialized “epiphytic” traits show no correlation with a habitat shift. Besides adaptive diversification, other processes thus also affect the match between phenotype and environment. Several potential factors, such as complex genetic and developmental pathways yielding the same phenotypes, differences in strength of selection, or constraints in phenotypic evolution, may lead to an inability of phylogeny-based comparative methods to detect potential adaptations. PMID:23118967

  15. Fuzzy physical programming for Space Manoeuvre Vehicles trajectory optimization based on hp-adaptive pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chai, Runqi; Savvaris, Al; Tsourdos, Antonios

    2016-06-01

    In this paper, a fuzzy physical programming (FPP) method is introduced for solving the multi-objective Space Manoeuvre Vehicle (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem is transformed into a nonlinear programming (NLP) problem. According to the mission requirements, solutions were calculated for each single-objective scenario. To obtain a compromise solution for each target, the fuzzy physical programming (FPP) model is proposed. The preference function is established with consideration of the fuzzy factor of the system, such that a proper compromise trajectory can be acquired. In addition, NSGA-II is tested to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible in dealing with the multi-objective skip trajectory optimization for the SMV.

  16. Investigation of self-adaptive LED surgical lighting based on entropy contrast enhancing method

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Wang, Huihui; Zhang, Yaqin; Shen, Junfei; Wu, Rengmao; Zheng, Zhenrong; Li, Haifeng; Liu, Xu

    2014-05-01

    An investigation was performed to explore the possibility of enhancing contrast by varying the spectral distribution (SPD) of surgical lighting. Illumination scenes with different SPDs were generated by combining a self-adaptive white-light optimization method with an LED ceiling system; images of a biological sample were taken by a CCD camera and then processed by an 'entropy'-based contrast evaluation model proposed specifically for surgical settings. Compared with neutral-white-LED-based and traditional algorithm-based image enhancing methods, the illumination-based enhancing method shows better contrast enhancement, improving the average contrast value by about 9% and 6%, respectively. This low-cost method is simple and practicable, and thus may provide an alternative solution to expensive visual-enhancement medical instruments.
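
    The kind of entropy measure such a contrast evaluation model can build on is shown in the short Python sketch below: the Shannon entropy of the intensity histogram, which grows as the gray levels spread out. This is a generic illustration only; the paper's exact evaluation model and its parameters are not reproduced here.

        import numpy as np

        def image_entropy(gray, n_bins=256):
            """Shannon entropy (bits) of a grayscale image's intensity histogram.

            Higher entropy means intensities occupy more gray levels, which is
            commonly used as a proxy for richer contrast and detail."""
            hist, _ = np.histogram(gray.ravel(), bins=n_bins, range=(0, 255))
            p = hist.astype(float)
            p = p[p > 0] / p.sum()
            return float(-(p * np.log2(p)).sum())

        # Compare a low-contrast image with a contrast-stretched version of it.
        rng = np.random.default_rng(1)
        img = rng.normal(128, 10, size=(64, 64)).clip(0, 255)
        stretched = (img - img.min()) / (img.max() - img.min()) * 255.0
        print(image_entropy(img), image_entropy(stretched))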

  17. Adaptive stochastic resonance method for impact signal detection based on sliding window

    NASA Astrophysics Data System (ADS)

    Li, Jimeng; Chen, Xuefeng; He, Zhengjia

    2013-04-01

    To address outstanding problems in impact signal detection using stochastic resonance (SR) in the fault diagnosis of rotating machinery, such as the selection of a measurement index for SR and the detection of impact signals with different impact amplitudes, the present study proposes an adaptive SR method for impact signal detection based on a sliding window, derived from an analysis of the SR characteristics of impact signals. This method not only achieves optimal selection of the system parameters by means of a weighted kurtosis index constructed from the kurtosis index and the correlation coefficient, but also achieves detection of weak impact signals through a data-segmentation algorithm based on a sliding window, even when the differences between impact amplitudes are large. The algorithm flow of the adaptive SR method is given, and the effectiveness of the method has been verified by comparing the proposed method with the traditional SR method in simulation experiments. Finally, the proposed method has been applied to gearbox fault diagnosis in a hot strip finishing mill, in which two local faults located on the pinion were identified successfully. It can therefore be concluded that the proposed method is of great practical value in engineering.
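
    A minimal Python sketch of the parameter-tuning idea is given below: a bistable SR system is driven by the signal, and the system parameters are chosen to maximize a weighted kurtosis index. Here the index is taken as the output kurtosis weighted by the absolute correlation with the input, which is one plausible construction and not necessarily the exact index of the paper; the Euler integration, parameter grids, and toy impact signal are likewise assumptions.

        import numpy as np
        from scipy.stats import kurtosis, pearsonr

        def bistable_sr(signal, fs, a, b):
            """Euler integration of the bistable SR system dx/dt = a*x - b*x^3 + s(t)."""
            dt, x = 1.0 / fs, np.zeros_like(signal)
            for i in range(1, len(signal)):
                xp = x[i - 1]
                x[i] = xp + dt * (a * xp - b * xp ** 3 + signal[i - 1])
            return x

        def weighted_kurtosis_search(signal, fs, a_grid, b_grid):
            """Grid search of (a, b) maximizing kurtosis(output) * |corr(output, input)|."""
            best = (None, None, -np.inf)
            for a in a_grid:
                for b in b_grid:
                    y = bistable_sr(signal, fs, a, b)
                    idx = kurtosis(y, fisher=False) * abs(pearsonr(y, signal)[0])
                    if idx > best[2]:
                        best = (a, b, idx)
            return best

        # Toy usage: a repetitive impact train buried in noise.
        fs = 2000
        t = np.arange(0, 1.0, 1.0 / fs)
        impacts = sum(np.exp(-200.0 * np.clip(t - t0, 0, None)) * (t >= t0) *
                      np.sin(2 * np.pi * 300 * t) for t0 in (0.2, 0.5, 0.8))
        sig = impacts + 0.8 * np.random.default_rng(0).normal(size=t.size)
        print(weighted_kurtosis_search(sig, fs, a_grid=[0.1, 0.5, 1.0], b_grid=[0.5, 1.0, 2.0]))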

  18. Adaptive circle-ellipse fitting method for estimating tree diameter based on single terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Bu, Guochao; Wang, Pei

    2016-04-01

    Terrestrial laser scanning (TLS) has been used to extract accurate forest biophysical parameters for inventory purposes. The diameter at breast height (DBH) is a key parameter for individual trees because it has the potential for modeling the height, volume, biomass, and carbon sequestration potential of the tree based on empirical allometric scaling equations. In order to extract the DBH from the single-scan data of TLS automatically and accurately within a certain range, we propose an adaptive circle-ellipse fitting method based on the point cloud transect. This proposed method can correct the error caused by the simple circle fitting method when a tree is slanted. A slanted tree is detected by the circle-ellipse fitting analysis, and the corresponding slant angle is found based on the ellipse fitting result. With this information, the DBH of the trees can be recalculated based on reslicing the point cloud data at breast height. Artificial stem data simulated by a cylindrical model of leaning trees and scanning data acquired with the RIEGL VZ-400 were used to test the proposed adaptive fitting method. The results show that the proposed method can detect the trees and accurately estimate the DBH for leaning trees.
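
    The circle-fitting stage that underlies the DBH estimate can be illustrated with an algebraic (Kasa) least-squares fit, sketched below in Python. This is only the basic circle fit, not the authors' full adaptive circle-ellipse procedure with slant correction; the toy stem slice and noise level are assumed.

        import numpy as np

        def fit_circle(points):
            """Algebraic least-squares (Kasa) circle fit.

            points : (n, 2) array of x, y coordinates from a horizontal stem slice.
            Returns (xc, yc, r); the DBH estimate is then 2 * r."""
            x, y = points[:, 0], points[:, 1]
            # Solve x^2 + y^2 = 2*xc*x + 2*yc*y + c in the least-squares sense.
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            b = x ** 2 + y ** 2
            (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
            return xc, yc, np.sqrt(c + xc ** 2 + yc ** 2)

        # Toy usage: noisy points on one side of a 0.30 m diameter stem
        # (a single scan only sees the side facing the scanner).
        rng = np.random.default_rng(2)
        theta = rng.uniform(0.0, np.pi, 200)
        pts = np.column_stack([1.0 + 0.15 * np.cos(theta),
                               2.0 + 0.15 * np.sin(theta)])
        pts += rng.normal(0.0, 0.003, pts.shape)
        print(fit_circle(pts))   # approximately (1.0, 2.0, 0.15)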

  19. An adaptive filter-based method for robust, automatic detection and frequency estimation of whistles.

    PubMed

    Johansson, A Torbjorn; White, Paul R

    2011-08-01

    This paper proposes an adaptive filter-based method for detection and frequency estimation of whistle calls, such as the calls of birds and marine mammals, which are typically analyzed in the time-frequency domain using a spectrogram. The approach taken here is based on adaptive notch filtering, which is an established technique for frequency tracking. For application to automatic whistle processing, methods for detection and improved frequency tracking through frequency crossings as well as interfering transients are developed and coupled to the frequency tracker. Background noise estimation and compensation is accomplished using order statistics and pre-whitening. Using simulated signals as well as recorded calls of marine mammals and a human whistled speech utterance, it is shown that the proposed method can detect more simultaneous whistles than two competing spectrogram-based methods while not reporting any false alarms on the example datasets. In one example, it extracts complete 1.4 and 1.8 s bottlenose dolphin whistles successfully through frequency crossings. The method performs detection and estimates frequency tracks even at high sweep rates. The algorithm is also shown to be effective on human whistled utterances.
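
    To give a concrete picture of notch-filter-based frequency tracking, the Python sketch below places a second-order constrained notch at candidate frequencies around the previous estimate and, block by block, keeps the frequency that minimizes the residual energy. This block-wise search is a simplified stand-in for the recursive adaptive notch filtering of the paper and omits its detection, frequency-crossing, and noise-compensation logic; the pole radius, block length, and search range are assumed values.

        import numpy as np
        from scipy.signal import lfilter

        def notch_energy(x, f, fs, rho=0.98):
            """Output energy of the constrained notch
            H(z) = (1 - 2cos(w)z^-1 + z^-2) / (1 - 2*rho*cos(w)z^-1 + rho^2*z^-2)."""
            w = 2.0 * np.pi * f / fs
            b = [1.0, -2.0 * np.cos(w), 1.0]
            a = [1.0, -2.0 * rho * np.cos(w), rho ** 2]
            return float(np.sum(lfilter(b, a, x) ** 2))

        def track_whistle(x, fs, f0, block=512, search_hz=200.0, n_cand=41):
            """Block-wise tracker: keep the notch frequency (near the previous
            estimate) that minimizes the residual energy of each block."""
            track, f_prev = [], f0
            for start in range(0, len(x) - block + 1, block):
                seg = x[start:start + block]
                cands = np.linspace(max(f_prev - search_hz, 10.0),
                                    f_prev + search_hz, n_cand)
                f_prev = cands[np.argmin([notch_energy(seg, f, fs) for f in cands])]
                track.append(f_prev)
            return np.array(track)

        # Toy usage: a 2 kHz -> 4 kHz chirp ("whistle") in white noise.
        fs = 48000
        t = np.arange(fs) / fs
        x = np.sin(2 * np.pi * (2000.0 * t + 1000.0 * t ** 2))
        x = x + 0.5 * np.random.default_rng(3).normal(size=t.size)
        print(track_whistle(x, fs, f0=2000.0)[:5])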

  20. Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding

    NASA Astrophysics Data System (ADS)

    Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz

    1997-10-01

    An efficient image compression technique, intended especially for medical applications, is presented. Dyadic wavelet decomposition using Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of wavelet coefficients. Threshold selection and uniform quantization are based on a spatial variance estimate built on the lowest-frequency subband data set. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning, and arithmetic coding are applied for efficient lossless data coding. The presented compression method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency. Specifically, our method has efficiency similar to SPIHT for MR image compression, slightly better for CT images, and significantly better for US image compression. Thus the compression efficiency of the presented method is competitive with the best algorithms published in the literature across diverse classes of medical images.

  1. Improved methods in neural network-based adaptive output feedback control, with applications to flight control

    NASA Astrophysics Data System (ADS)

    Kim, Nakwan

    Utilizing the universal approximation property of neural networks, we develop several novel approaches to neural network-based adaptive output feedback control of nonlinear systems, and illustrate these approaches for several flight control applications. In particular, we address the problem of non-affine systems and eliminate the fixed point assumption present in earlier work. All of the stability proofs are carried out in a form that eliminates an algebraic loop in the neural network implementation. An approximate input/output feedback linearizing controller is augmented with a neural network using input/output sequences of the uncertain system. These approaches permit adaptation to both parametric uncertainty and unmodeled dynamics. All physical systems also have control position and rate limits, which may either deteriorate performance or cause instability for a sufficiently high control bandwidth. Here we apply a method for protecting an adaptive process from the effects of input saturation and time delays, known as "pseudo control hedging". This method was originally developed for the state feedback case, and we provide a stability analysis that extends its domain of applicability to the case of output feedback. The approach is illustrated by the design of a pitch-attitude flight control system for a linearized model of an R-50 experimental helicopter, and by the design of a pitch-rate control system for a 58-state model of a flexible aircraft consisting of rigid body dynamics coupled with actuator and flexible modes. A new approach to augmentation of an existing linear controller is introduced. It is especially useful when there is limited information concerning the plant model, and the existing controller. The approach is applied to the design of an adaptive autopilot for a guided munition. Design of a neural network adaptive control that ensures asymptotically stable tracking performance is also addressed.

  2. An adaptive block-based fusion method with LUE-SSIM for multi-focus images

    NASA Astrophysics Data System (ADS)

    Zheng, Jianing; Guo, Yongcai; Huang, Yukun

    2016-09-01

    Because of the lenses' limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem. Block-based multi-focus image fusion methods, however, often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed; it utilizes the characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm with LUE-SSIM as the objective function is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus-blurred images. In addition, a multi-focus image fusion experiment is carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing the blocking artifacts of the fused image, and it effectively preserves the undistorted-edge details in the in-focus regions of the source images.

  3. Parallel level-set methods on adaptive tree-based grids

    NASA Astrophysics Data System (ADS)

    Mirzadeh, Mohammad; Guittet, Arthur; Burstedde, Carsten; Gibou, Frederic

    2016-10-01

    We present scalable algorithms for the level-set method on dynamic, adaptive Quadtree and Octree Cartesian grids. The algorithms are fully parallelized and implemented using the MPI standard and the open-source p4est library. We solve the level set equation with a semi-Lagrangian method which, similar to its serial implementation, is free of any time-step restrictions. This is achieved by introducing a scalable global interpolation scheme on adaptive tree-based grids. Moreover, we present a simple parallel reinitialization scheme using the pseudo-time transient formulation. Both parallel algorithms scale on the Stampede supercomputer, where we are currently using up to 4096 CPU cores, the limit of our current account. Finally, a relevant application of the algorithms is presented in modeling a crystallization phenomenon by solving a Stefan problem, illustrating a level of detail that would be impossible to achieve without a parallel adaptive strategy. We believe that the algorithms presented in this article will be of interest and useful to researchers working with the level-set framework and modeling multi-scale physics in general.

  4. Comparative adaptation accuracy of acrylic denture bases evaluated by two different methods.

    PubMed

    Lee, Chung-Jae; Bok, Sung-Bem; Bae, Ji-Young; Lee, Hae-Hyoung

    2010-08-01

    This study examined the adaptation accuracy of acrylic denture bases processed using a fluid-resin technique (PERform), injection-molding techniques (SR-Ivocap, Success, Mak Press), and two compression-molding techniques. The adaptation accuracy was measured primarily by the posterior border gaps at the mid-palatal area using a microscope, and subsequently by weighing the impression material placed between the denture base and the master cast, using hand-mixed and automixed silicone. The correlation between the data measured using these two test methods was examined. PERform and Mak Press produced significantly smaller maximum palatal gap dimensions than the other groups (p<0.05). Mak Press also showed a significantly smaller weight of automixed silicone material than the other groups (p<0.05), while SR-Ivocap and Success showed adaptation accuracy similar to that of the compression-molding dentures. The correlation between the magnitude of the posterior border gap and the weight of the silicone impression material was affected by either the material or the mixing variables.

  5. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method

    PubMed Central

    Tuta, Jure; Juric, Matjaz B.

    2016-01-01

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments—some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive thus maintenance free and based on Wi-Fi only. We have employed two well-known propagation models—free space path loss and ITU models—which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2–3 and 3–4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements. PMID:27929453
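
    A stripped-down Python sketch of model-based Wi-Fi localization is given below: a log-distance path-loss model predicts the RSSI from each access point, and a grid search returns the position whose predictions best match the measurements. The paper's extended free-space and ITU models, wall attenuation, and self-calibration are not reproduced; the transmit power, path-loss exponent, room extent, and AP layout here are assumptions.

        import numpy as np

        def rssi_model(d, rssi_1m=-40.0, n=3.0):
            """Log-distance path-loss model: predicted RSSI (dBm) at distance d (m),
            given the RSSI at 1 m and a path-loss exponent n (both assumed)."""
            return rssi_1m - 10.0 * n * np.log10(np.maximum(d, 0.1))

        def locate(ap_xy, rssi_meas, extent=(0.0, 20.0, 0.0, 15.0), step=0.25):
            """Grid search for the position whose predicted RSSIs best match the
            measurements (least squares over all access points)."""
            xs = np.arange(extent[0], extent[1] + step, step)
            ys = np.arange(extent[2], extent[3] + step, step)
            gx, gy = np.meshgrid(xs, ys)
            cost = np.zeros_like(gx)
            for (ax, ay), r in zip(ap_xy, rssi_meas):
                cost += (rssi_model(np.hypot(gx - ax, gy - ay)) - r) ** 2
            i, j = np.unravel_index(np.argmin(cost), cost.shape)
            return gx[i, j], gy[i, j]

        # Toy usage: three access points, terminal actually at (6, 4).
        aps = [(0.0, 0.0), (20.0, 0.0), (10.0, 15.0)]
        true_pos = np.array([6.0, 4.0])
        rng = np.random.default_rng(4)
        meas = [rssi_model(np.hypot(*(true_pos - np.array(p)))) + rng.normal(0, 1)
                for p in aps]
        print(locate(aps, meas))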

  6. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2016-12-06

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments-some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive thus maintenance free and based on Wi-Fi only. We have employed two well-known propagation models-free space path loss and ITU models-which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2-3 and 3-4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements.

  7. Patched based methods for adaptive mesh refinement solutions of partial differential equations

    SciTech Connect

    Saltzman, J.

    1997-09-02

    This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which was "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs, with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes; these schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundary and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods, and elliptically constrained flow solution methods with diffusion.

  8. Spatial-light-modulator-based adaptive optical system for the use of multiple phase retrieval methods.

    PubMed

    Lingel, Christian; Haist, Tobias; Osten, Wolfgang

    2016-12-20

    We propose an adaptive optical setup using a spatial light modulator (SLM), which is suitable to perform different phase retrieval methods with varying optical features and without mechanical movement. By this approach, it is possible to test many different phase retrieval methods and their parameters (optical and algorithmic) using one stable setup and without hardware adaptation. We show exemplary results for the well-known transport of intensity equation (TIE) method and a new iterative adaptive phase retrieval method, where the object phase is canceled by an inverse phase written into part of the SLM. The measurement results are compared to white light interferometric measurements.

  9. Tensor Product Model Transformation Based Adaptive Integral-Sliding Mode Controller: Equivalent Control Method

    PubMed Central

    Zhao, Guoliang; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of proposed controllers consists in having a dynamical adaptive control gain to establish a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model. PMID:24453897

  10. Tensor product model transformation based adaptive integral-sliding mode controller: equivalent control method.

    PubMed

    Zhao, Guoliang; Sun, Kaibiao; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of proposed controllers consists in having a dynamical adaptive control gain to establish a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model.

  11. An adaptive distance-based group contribution method for thermodynamic property prediction.

    PubMed

    He, Tanjin; Li, Shuang; Chi, Yawei; Zhang, Hong-Bo; Wang, Zhi; Yang, Bin; He, Xin; You, Xiaoqing

    2016-09-14

    In the search for an accurate yet inexpensive method to predict thermodynamic properties of large hydrocarbon molecules, we have developed an automatic and adaptive distance-based group contribution (DBGC) method. The method characterizes the group interaction within a molecule with an exponential decay function of the group-to-group distance, defined as the number of bonds between the groups. A database containing the molecular bonding information and the standard enthalpy of formation (Hf,298K) for alkanes, alkenes, and their radicals at the M06-2X/def2-TZVP//B3LYP/6-31G(d) level of theory was constructed. Multiple linear regression (MLR) and artificial neural network (ANN) fitting were used to obtain the contributions from individual groups and group interactions for further predictions. Compared with the conventional group additivity (GA) method, the DBGC method predicts Hf,298K for alkanes more accurately using the same training sets. Particularly for some highly branched large hydrocarbons, the discrepancy with the literature data is smaller for the DBGC method than the conventional GA method. When extended to other molecular classes, including alkenes and radicals, the overall accuracy level of this new method is still satisfactory.
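
    The core idea, weighting each group-group interaction by an exponential decay of the bond-count distance before a linear fit, can be sketched in a few lines of Python. The decay constant, group definitions, and the n-butane toy example below are assumptions for illustration; the published DBGC model, its database, and its ANN variant are not reproduced.

        import numpy as np
        from itertools import combinations_with_replacement

        def dbgc_features(group_counts, pair_distances, groups, alpha=1.0):
            """DBGC-style feature vector for one molecule.

            group_counts   : dict group -> occurrence count (first-order terms).
            pair_distances : dict (group_i, group_j) -> list of bond-count
                             distances for every such pair in the molecule.
            Each interaction is weighted by exp(-alpha * distance), so nearby
            groups contribute more strongly (alpha is an assumed constant)."""
            feats = [group_counts.get(g, 0) for g in groups]
            for gi, gj in combinations_with_replacement(groups, 2):
                dists = list(pair_distances.get((gi, gj), []))
                if gi != gj:
                    dists += pair_distances.get((gj, gi), [])
                feats.append(sum(np.exp(-alpha * d) for d in dists))
            return np.array(feats, float)

        # Toy usage: n-butane, CH3-CH2-CH2-CH3, with groups CH3 and CH2.
        groups = ["CH3", "CH2"]
        counts = {"CH3": 2, "CH2": 2}
        dists = {("CH3", "CH3"): [3], ("CH3", "CH2"): [1, 1, 2, 2], ("CH2", "CH2"): [1]}
        x = dbgc_features(counts, dists, groups)
        # Enthalpies of formation would then be fitted over a training set as
        # y ~ X @ beta via multiple linear regression (np.linalg.lstsq).
        print(x)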

  12. CNOP-based sensitive areas identification for tropical cyclone adaptive observations with PCAGA method

    NASA Astrophysics Data System (ADS)

    Zhang, Lin-Lin; Yuan, Shi-Jin; Mu, Bin; Zhou, Fei-Fan

    2017-02-01

    In this paper, conditional nonlinear optimal perturbation (CNOP) was investigated to identify sensitive areas for tropical cyclone adaptive observations with a principal component analysis based genetic algorithm (PCAGA) method, and two tropical cyclones, Fitow (2013) and Matmo (2014), were studied at a 120 km resolution using the fifth-generation Mesoscale Model (MM5). To verify the effectiveness of the PCAGA method, CNOPs were also calculated by an adjoint-based method as a benchmark for comparison of patterns, energies, and vertical distributions of temperature. Compared with the benchmark, the CNOPs obtained from PCAGA had similar patterns for Fitow and slightly different patterns for Matmo; the vertically integrated energies were located closer to the verification areas and the initial tropical cyclones. The experimental results also showed that the CNOPs of PCAGA had a more positive impact on the forecast improvement, which was gained from reducing the CNOPs over the whole domain containing the sensitive areas. Furthermore, the PCAGA program was executed 40 times for each case, and all the average benefits were larger than the benchmark, which also proves the validity and stability of the PCAGA method. All results showed that the PCAGA method can approximately solve the CNOP of complicated models without computing adjoint models and can obtain greater benefits by reducing the CNOPs over the whole domain.

  13. A texture-analysis-based design method for self-adaptive focus criterion function.

    PubMed

    Liang, Q; Qu, Y F

    2012-05-01

    Autofocusing (AF) criterion functions are critical to the performance of a passive autofocusing system in automatic video microscopy. Most of the autofocusing criterion functions proposed so far are dependent on the imaging system and on the image captured of the objective being focused or ranged. This dependence destabilizes the performance of the system when the criterion functions are applied to objectives with different characteristics. In this paper, a new design method for autofocusing criterion functions is introduced. This method enables the system to determine the texture directional information of the objective. Based on this information, the optimal focus criterion function specific to one texture direction is designed, avoiding the blind use of autofocusing functions that cannot perform well when applied to certain surfaces and can even lead to failure of the whole process. In this way, we improved the self-adaptability, robustness, reliability, and focusing accuracy of the algorithm. First, the grey-level co-occurrence matrices of real-time images are calculated in four directions. Next, the contrast values of the four matrices are computed and compared; the result reflects the directional information of the measured objective surfaces. Finally, with the directional information, an adaptive criterion function is constructed. To demonstrate the effectiveness of the new focus algorithm, we conducted experiments on different textured surfaces and compared the results with those obtained by existing algorithms. The proposed algorithm performs excellently with different measured objectives.
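
    The directional comparison at the heart of the method can be reproduced with a small pure-NumPy sketch: compute a grey-level co-occurrence matrix for each of the four directions and compare their contrast values. The quantization level, offsets, and striped toy image below are assumptions; the paper's criterion-function construction itself is not reimplemented.

        import numpy as np

        def glcm_contrast(gray, levels=16):
            """GLCM contrast in four directions (0, 45, 90, 135 degrees)."""
            q = np.clip((gray.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
            offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}
            rows, cols = q.shape
            contrasts = {}
            for ang, (dr, dc) in offsets.items():
                glcm = np.zeros((levels, levels), float)
                for r in range(rows):
                    for c in range(cols):
                        r2, c2 = r + dr, c + dc
                        if 0 <= r2 < rows and 0 <= c2 < cols:
                            glcm[q[r, c], q[r2, c2]] += 1.0
                glcm /= glcm.sum()
                i, j = np.indices(glcm.shape)
                contrasts[ang] = float(((i - j) ** 2 * glcm).sum())
            return contrasts

        # Toy usage: vertical stripes -> high contrast across the stripes (0 deg)
        # and zero contrast along them (90 deg), revealing the texture direction.
        img = np.tile(np.repeat([0, 255], 4), (32, 4))
        print(glcm_contrast(img))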

  14. Adaptive model-based control systems and methods for controlling a gas turbine

    NASA Technical Reports Server (NTRS)

    Brunell, Brent Jerome (Inventor); Mathews, Jr., Harry Kirk (Inventor); Kumar, Aditya (Inventor)

    2004-01-01

    Adaptive model-based control systems and methods are described so that performance and/or operability of a gas turbine in an aircraft engine, power plant, marine propulsion, or industrial application can be optimized under normal, deteriorated, faulted, failed and/or damaged operation. First, a model of each relevant system or component is created, and the model is adapted to the engine. Then, if/when deterioration, a fault, a failure or some kind of damage to an engine component or system is detected, that information is input to the model-based control as changes to the model, constraints, objective function, or other control parameters. With all the information about the engine condition, and state and directives on the control goals in terms of an objective function and constraints, the control then solves an optimization so the optimal control action can be determined and taken. This model and control may be updated in real-time to account for engine-to-engine variation, deterioration, damage, faults and/or failures using optimal corrective control action command(s).

  15. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    PubMed Central

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs that have high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  16. Structural break detection method based on the Adaptive Regression Splines technique

    NASA Astrophysics Data System (ADS)

    Kucharczyk, Daniel; Wyłomańska, Agnieszka; Zimroz, Radosław

    2017-04-01

    For many real data, a long-term observation consists of different processes that coexist or occur one after the other. Those processes very often exhibit different statistical properties, and thus the observed data should be segmented before further analysis. This problem arises in many applications, and therefore new segmentation techniques have appeared in the literature in recent years. In this paper we propose a new method of time series segmentation, i.e., extraction from the analysed vector of observations of homogeneous parts with similar behaviour. This method is based on the absolute deviation about the median of the signal and is an extension of previously proposed techniques also based on simple statistics. We introduce a method of structural break point detection based on the Adaptive Regression Splines technique, a form of regression analysis. Moreover, we also propose a statistical test which allows testing the hypothesis of behaviour related to different regimes. First, we apply the methodology to simulated signals with different distributions in order to show the effectiveness of the new technique. Next, in the application part, we analyse a real data set representing the vibration signal from a heavy-duty crusher used in a mineral processing plant.

  17. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    PubMed

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs that have high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method.

  18. Dynamic Adaptive Runtime Systems for Advanced Multipole Method-based Science Achievement

    NASA Astrophysics Data System (ADS)

    Debuhr, Jackson; Anderson, Matthew; Sterling, Thomas; Zhang, Bo

    2015-04-01

    Multipole methods are a key computational kernel for a large class of scientific applications spanning multiple disciplines. Yet many of these applications are strong scaling constrained when using conventional programming practices. Hardware parallelism continues to grow, emphasizing medium and fine-grained thread parallelism rather than the coarse-grained process parallelism favored by conventional programming practices. Emerging, dynamic task management execution models can go beyond these conventional practices to significantly improve both efficiency and scalability for algorithms like multipole methods which exhibit irregular and time-varying execution properties. We present a new scientific library, DASHMM, built on the ParalleX HPX-5 runtime system, which explores the use of dynamic adaptive runtime techniques to improve scalability and efficiency for multipole-method based scientific computing. DASHMM allows application scientists to rapidly create custom, scalable, and efficient multipole methods, especially targeting the Fast Multipole Method and the Barnes-Hut N-body algorithm. After a discussion of the system and its goals, some application examples will be presented.

  19. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213

  20. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.

  1. Novel synthetic index-based adaptive stochastic resonance method and its application in bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhou, Peng; Lu, Siliang; Liu, Fang; Liu, Yongbin; Li, Guihua; Zhao, Jiwen

    2017-03-01

    Stochastic resonance (SR), which is characterized by the fact that proper noise can be utilized to enhance weak periodic signals, has been widely applied in weak signal detection. SR is a nonlinear parameterized filter, and the output signal relies on the system parameters for the deterministic input signal. The most commonly used index for parameter tuning in the SR procedure is the signal-to-noise ratio (SNR). However, using the SNR index to evaluate the denoising effect of SR quantitatively is insufficient when the target signal frequency cannot be estimated accurately. To address this issue, six different indexes, namely, power spectral kurtosis of the SR output signal, correlation coefficient between the SR output and the original signal, peak SNR, structural similarity, root mean square error, and smoothness, are constructed in this study to measure the SR output quantitatively. These six quantitative indexes are fused into a new synthetic quantitative index (SQI) via a back propagation neural network to guide the adaptive parameter selection of the SR procedure. The index fusion procedure reduces the instability of each index and thus improves the robustness of parameter tuning. In addition, genetic algorithm is utilized to quickly select the optimal SR parameters. The efficiency of bearing fault diagnosis is thus further improved. The effectiveness and efficiency of the proposed SQI-based adaptive SR method for bearing fault diagnosis are verified through numerical and experiment analyses.

  2. Accelerated Adaptive Integration Method

    PubMed Central

    2015-01-01

    Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (Bromocyclohexane) and the more complex biomolecule Thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083

  3. The rejection of vibrations in adaptive optics systems using a DFT-based estimation method

    NASA Astrophysics Data System (ADS)

    Kania, Dariusz; Borkowski, Józef

    2016-04-01

    Adaptive optics systems are commonly used in many optical structures to reduce perturbations and to increase system performance. A problem in such systems is undesirable vibrations due to effects such as shaking of the whole structure or the tracking process. This paper presents a frequency, amplitude and phase estimation method for a multifrequency signal that can be used to reject these vibrations adaptively. The estimation method is based on the FFT procedure. The undesirable signals are usually exponentially damped harmonic oscillations. The estimation error depends on several parameters and consists of a systematic component and a random component. The systematic error depends on the signal phase, the number of samples N in a measurement window, the value of CiR (the number of signal periods in a measurement window), the THD value and the time window order H. The random error depends mainly on the noise variance and the SNR value. This paper presents research on the influence of the sinusoidal signal phase and on the estimation of the parameters of exponentially damped sinusoids. The error signals are periodic in shape, which is associated with the signal period and with the sliding measurement window. For CiR=1.6 and a damping ratio of 0.1%, the errors were on the order of 10^-5 Hz/Hz, 10^-4 V/V and 10^-4 rad for the frequency, amplitude and phase estimation, respectively. The information provided in this paper can be used to determine the approximate efficiency of the vibration elimination process before starting it.
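
    As a point of reference, a generic DFT-based single-tone estimator can be sketched as below: a windowed, zero-padded FFT locates the dominant peak, parabolic interpolation refines the frequency, and a least-squares projection recovers amplitude and phase. This is a common textbook construction under my own parameter choices, not the specific estimator analysed in the paper.

```python
import numpy as np

def estimate_tone(x, fs):
    """Estimate frequency, amplitude and phase of the dominant sinusoid in x."""
    n = len(x)
    w = np.hanning(n)
    spec = np.fft.rfft(x * w, 4 * n)              # zero-padding refines the peak search
    k = np.argmax(np.abs(spec[1:])) + 1           # skip the DC bin
    # parabolic interpolation of the log-magnitude around the peak
    a, b, c = np.log(np.abs(spec[k - 1:k + 2]))
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    freq = (k + delta) * fs / (4 * n)
    # refine amplitude/phase by projecting onto cos/sin at the estimated frequency
    t = np.arange(n) / fs
    basis = np.column_stack([np.cos(2 * np.pi * freq * t), np.sin(2 * np.pi * freq * t)])
    (ac, as_), *_ = np.linalg.lstsq(basis, x, rcond=None)
    return freq, np.hypot(ac, as_), np.arctan2(-as_, ac)

# illustrative check: fs = 1e4; t = np.arange(4096) / fs
# x = 0.3 * np.cos(2 * np.pi * 997 * t + 0.7) + 0.05 * np.random.randn(t.size)
# estimate_tone(x, fs) should return roughly (997, 0.3, 0.7)
```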

  4. Automatic barcode recognition method based on adaptive edge detection and a mapping model

    NASA Astrophysics Data System (ADS)

    Yang, Hua; Chen, Lianzheng; Chen, Yifan; Lee, Yong; Yin, Zhouping

    2016-09-01

    An adaptive edge detection and mapping (AEDM) algorithm is presented to address the challenging one-dimensional barcode recognition task in the presence of both image degradation and barcode shape deformation. AEDM is an edge detection-based method that has three consecutive phases. The first phase extracts the scan lines from a cropped image. The second phase detects the edge points in a scan line; the edge positions are taken to be the intersection points between a scan line and a corresponding well-designed reference line. The third phase adjusts the preliminary edge positions to more reasonable positions by employing prior information from the coding rules. A universal edge mapping model is thus established to obtain the coding positions of each edge in this phase, followed by a decoding procedure. The Levenberg-Marquardt method is utilized to solve this nonlinear model. The computational complexity and convergence analysis of AEDM are also provided. Several experiments were conducted to evaluate the performance of the AEDM algorithm. The results indicate that the efficient AEDM algorithm outperforms state-of-the-art methods and adequately addresses multiple issues, such as out-of-focus blur, nonlinear distortion, noise, nonlinear optical illumination, and situations that involve combinations of these issues.
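
    The scan-line edge detection step can be illustrated with a short sketch: a 1-D intensity profile is lightly smoothed, and bar/space transitions are located at gradient maxima above a contrast-adaptive threshold, with sub-pixel refinement by parabolic interpolation. This is a generic detector written for illustration; the reference-line intersection and the Levenberg-Marquardt mapping model of AEDM are not reproduced here.

```python
import numpy as np

def scanline_edges(scanline, k=0.5):
    """Locate bar/space transitions on a 1-D barcode scan line (generic detector)."""
    line = np.asarray(scanline, dtype=float)
    line = np.convolve(line, np.ones(3) / 3.0, mode="same")   # light denoising
    grad = np.gradient(line)
    thr = k * np.max(np.abs(grad))                            # adapts to local contrast
    edges = []
    for i in range(1, len(grad) - 1):
        g0, g1, g2 = abs(grad[i - 1]), abs(grad[i]), abs(grad[i + 1])
        if g1 >= thr and g1 >= g0 and g1 > g2:                # local gradient maximum
            denom = g0 - 2 * g1 + g2
            offset = 0.5 * (g0 - g2) / denom if denom != 0 else 0.0
            edges.append(i + offset)                          # sub-pixel edge position
    return np.array(edges)
```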

  5. Vibration-based structural health monitoring using adaptive statistical method under varying environmental condition

    NASA Astrophysics Data System (ADS)

    Jin, Seung-Seop; Jung, Hyung-Jo

    2014-03-01

    It is well known that the dynamic properties of a structure, such as its natural frequencies, depend not only on damage but also on environmental conditions (e.g., temperature). The variation in the dynamic characteristics of a structure due to environmental conditions may mask damage to the structure. Without taking changes in environmental conditions into account, false-positive or false-negative damage diagnoses may occur, making structural health monitoring unreliable. To address this problem, many researchers have constructed regression models that relate structural responses to environmental factors. The key to the success of this approach is the formulation of the input and output variables of the regression model so that it accounts for the environmental variations. However, it is quite challenging to determine in advance proper environmental variables and measurement locations that fully represent the relationship between the structural responses and the environmental variations. One alternative (i.e., novelty detection) is to remove the variations caused by environmental factors from the structural responses by using multivariate statistical analysis (e.g., principal component analysis (PCA), factor analysis, etc.). The success of this approach depends strongly on the accuracy of the description of the normal condition. Generally, there is no prior information on the normal condition during data acquisition, so the normal condition is determined subjectively, with human intervention. The proposed method is a novel adaptive multivariate statistical analysis for structural damage detection under environmental change. One advantage of this method is the ability of generative learning to capture the intrinsic characteristics of the normal condition. The proposed method is tested on numerically simulated data for a range of measurement noise levels under environmental variation. A comparative
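
    For orientation, the baseline novelty-detection idea mentioned above can be sketched as follows: principal components fitted on normal-condition features absorb the environmental variation, and the residual (Q-statistic) of a new sample flags potential damage. The class below is a plain PCA baseline with a simple 3-sigma limit, not the adaptive generative method proposed in the paper.

```python
import numpy as np

class PCANoveltyDetector:
    """Baseline PCA novelty detection for SHM features (e.g. natural frequencies)."""

    def __init__(self, n_components=2):
        self.n_components = n_components

    def fit(self, X):
        """X: (n_samples, n_features) measured under the normal condition."""
        self.mean_ = X.mean(axis=0)
        Xc = X - self.mean_
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        self.P_ = vt[:self.n_components].T        # retained environmental subspace
        q = self.score(X)
        self.threshold_ = q.mean() + 3.0 * q.std()  # simple 3-sigma control limit
        return self

    def score(self, X):
        """Q-statistic: squared norm of the residual outside the retained subspace."""
        Xc = X - self.mean_
        resid = Xc - Xc @ self.P_ @ self.P_.T
        return np.sum(resid ** 2, axis=1)

    def is_damaged(self, X):
        return self.score(X) > self.threshold_
```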

  6. Automatic off-body overset adaptive Cartesian mesh method based on an octree approach

    NASA Astrophysics Data System (ADS)

    Péron, Stéphanie; Benoit, Christophe

    2013-01-01

    This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, assuming each octree leaf node defines a structured Cartesian block. This makes it possible to take into account the large discrepancies in resolution between the different bodies involved in the simulation, with minimum memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first generates Adaptive Mesh Refinement (AMR) type grid systems, and the second generates abutting or minimally overlapping Cartesian grid sets. We also introduce an algorithm to control the number of points at each adaptation, which automatically determines relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing tip vortex computation assesses the capability of the method to capture the flow features accurately.

  7. Development and evaluation of a method of calibrating medical displays based on fixed adaptation

    SciTech Connect

    Sund, Patrik; Månsson, Lars Gunnar; Båth, Magnus

    2015-04-15

    Purpose: The purpose of this work was to develop and evaluate a new method for the calibration of medical displays that includes the effect of fixed adaptation, using equipment and luminance levels typical of a modern radiology department. Methods: Low-contrast sinusoidal test patterns were derived at nine luminance levels from 2 to 600 cd/m² and used in a two-alternative forced-choice observer study, where the adaptation level was fixed at the logarithmic average of 35 cd/m². The contrast sensitivity at each luminance level was derived by establishing a linear relationship between the ten pattern contrast levels used at every luminance level and a detectability index (d′) calculated from the fraction of correct responses. A Gaussian function was fitted to the data and normalized to the adaptation level. The corresponding equation was used in a display calibration method that included the grayscale standard display function (GSDF) but compensated for fixed adaptation. In the evaluation study, the contrast of circular objects with a fixed pixel contrast was displayed using both calibration methods and was rated on a five-grade scale. Results were calculated using a visual grading characteristics method. Error estimations in both observer studies were derived using a bootstrap method. Results: The contrast sensitivities for the darkest and brightest patterns, compared to the contrast sensitivity at the adaptation luminance, were 37% and 56%, respectively. The obtained Gaussian fit corresponded well with similar studies. The evaluation study showed a higher degree of equally distributed contrast throughout the luminance range with the calibration method compensated for fixed adaptation than with the GSDF. The two lowest scores for the GSDF were obtained for the darkest and brightest patterns. These scores were significantly lower than the lowest score obtained for the compensated GSDF. For the GSDF, the scores for all luminance levels were statistically

  8. Advances in Adaptive Control Methods

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2009-01-01

    This poster presentation describes recent advances in adaptive control technology developed by NASA. Optimal Control Modification is a novel adaptive law that can improve performance and robustness of adaptive control systems. A new technique has been developed to provide an analytical method for computing time delay stability margin for adaptive control systems.

  9. Parallel processing of Eulerian-Lagrangian, cell-based adaptive method for moving boundary problems

    NASA Astrophysics Data System (ADS)

    Kuan, Chih-Kuang

    In this study, issues and techniques related to the parallel processing of the Eulerian-Lagrangian method for multi-scale moving boundary computation are investigated. The scope of the study consists of the Eulerian approach for field equations, explicit interface tracking, Lagrangian interface modification and reconstruction algorithms, and a cell-based unstructured adaptive mesh refinement (AMR) in a distributed-memory computation framework. We decomposed the Eulerian domain spatially along with AMR to balance the computational load of solving the field equations, which is the primary cost of the entire solver. The Lagrangian domain is partitioned based on marker vicinities with respect to the Eulerian partitions to minimize inter-processor communication. Overall, the performance of an Eulerian task peaks at 10,000-20,000 cells per processor, and this is the upper bound of the performance of the Eulerian-Lagrangian method. Moreover, the load imbalance of the Lagrangian task is not as influential as the communication overhead of the Eulerian-Lagrangian tasks on the overall performance. To assess the parallel processing capabilities, a high Weber number drop collision is simulated. The high convective-to-viscous length scale ratios result in disparate length scale distributions; together with the moving and topologically irregular interfaces, the computational tasks require adaptive, temporally and spatially resolved treatment. The techniques presented enable us to perform original studies to meet such computational requirements. Coalescence, stretching, and break-up of satellite droplets due to the interfacial instability are observed in the current study, and the history of interface evolution is in good agreement with the experimental data. The competing mechanisms of the primary and secondary droplet break-up, along with the gas-liquid interfacial dynamics, are systematically investigated. This study shows that Rayleigh-Taylor instability on the edge of an extruding sheet

  10. Method for Reducing the Drag of Blunt-Based Vehicles by Adaptively Increasing Forebody Roughness

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A. (Inventor); Saltzman, Edwin J. (Inventor); Moes, Timothy R. (Inventor); Iliff, Kenneth W. (Inventor)

    2005-01-01

    A method for reducing drag upon a blunt-based vehicle by adaptively increasing forebody roughness to increase drag at the roughened area of the forebody, which results in a decrease in drag at the base of this vehicle, and in total vehicle drag.

  11. Comparing Computer-Adaptive and Curriculum-Based Measurement Methods of Assessment

    ERIC Educational Resources Information Center

    Shapiro, Edward S.; Gebhardt, Sarah N.

    2012-01-01

    This article reported the concurrent, predictive, and diagnostic accuracy of a computer-adaptive test (CAT) and curriculum-based measurements (CBM; both computation and concepts/application measures) for universal screening in mathematics among students in first through fourth grade. Correlational analyses indicated moderate to strong…

  12. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  13. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Astrophysics Data System (ADS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-11-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  14. A new finite element method for solving compressible Navier-Stokes equations based on an operator splitting method and h-p adaptivity

    NASA Technical Reports Server (NTRS)

    Demkowicz, L.; Oden, J. T.; Rachowicz, W.

    1990-01-01

    A new finite element method for solving the compressible Navier-Stokes equations is proposed. The method is based on a version of Strang's operator splitting and an h-p adaptive finite element approximation in space. This paper contains the formulation of the method with a detailed discussion of boundary conditions, a sample adaptive strategy, and numerical examples involving compressible viscous flow over a flat plate with Reynolds numbers Re = 1000 and Re = 10,000.

  15. Refinement trajectory and determination of eigenstates by a wavelet based adaptive method

    SciTech Connect

    Pipek, Janos; Nagy, Szilvia

    2006-11-07

    The detail structure of the wave function is analyzed at various refinement levels using the methods of wavelet analysis. The eigenvalue problem of a model system is solved in granular Hilbert spaces, and the trajectory of the eigenstates is traced in terms of the resolution. An adaptive method is developed for identifying the fine structure localization regions, where further refinement of the wave function is necessary.

  16. A Digitalized Gyroscope System Based on a Modified Adaptive Control Method.

    PubMed

    Xia, Dunzhu; Hu, Yiwei; Ni, Peizhen

    2016-03-04

    In this work we investigate the possibility of applying an adaptive control algorithm to Micro-Electro-Mechanical System (MEMS) gyroscopes. By comparing the gyroscope working conditions with the reference model, the adaptive control method can provide online estimation of the key parameters and a proper control strategy for the system. The digital second-order oscillators in the reference model are replaced by two phase-locked loops (PLLs) to achieve steadier amplitude and frequency control. The adaptive law is modified to satisfy the condition of unequal coupling stiffness and coupling damping coefficients. The rotation mode of the gyroscope system is considered in our work, and a rotation elimination section is added to the digitalized system. Before implementing the algorithm on the hardware platform, different simulations are conducted to ensure the algorithm can meet the requirements of the angular rate sensor, and some of the key adaptive law coefficients are optimized. The coupling components are detected and suppressed, respectively, and the Lyapunov criterion is applied to prove the stability of the system. The modified adaptive control algorithm is verified on a digitalized gyroscope system; the control system is realized in the digital domain with the application of a Field Programmable Gate Array (FPGA). Key structural parameters are measured and compared with the estimation results, validating that the algorithm is feasible in this setup. Extra gyroscopes are used in repeated experiments to demonstrate the general applicability of the algorithm.

  17. A Digitalized Gyroscope System Based on a Modified Adaptive Control Method

    PubMed Central

    Xia, Dunzhu; Hu, Yiwei; Ni, Peizhen

    2016-01-01

    In this work we investigate the possibility of applying an adaptive control algorithm to Micro-Electro-Mechanical System (MEMS) gyroscopes. By comparing the gyroscope working conditions with the reference model, the adaptive control method can provide online estimation of the key parameters and a proper control strategy for the system. The digital second-order oscillators in the reference model are replaced by two phase-locked loops (PLLs) to achieve steadier amplitude and frequency control. The adaptive law is modified to satisfy the condition of unequal coupling stiffness and coupling damping coefficients. The rotation mode of the gyroscope system is considered in our work, and a rotation elimination section is added to the digitalized system. Before implementing the algorithm on the hardware platform, different simulations are conducted to ensure the algorithm can meet the requirements of the angular rate sensor, and some of the key adaptive law coefficients are optimized. The coupling components are detected and suppressed, respectively, and the Lyapunov criterion is applied to prove the stability of the system. The modified adaptive control algorithm is verified on a digitalized gyroscope system; the control system is realized in the digital domain with the application of a Field Programmable Gate Array (FPGA). Key structural parameters are measured and compared with the estimation results, validating that the algorithm is feasible in this setup. Extra gyroscopes are used in repeated experiments to demonstrate the general applicability of the algorithm. PMID:26959019

  18. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.

  19. Method of adaptive artificial viscosity

    NASA Astrophysics Data System (ADS)

    Popov, I. V.; Fryazinov, I. V.

    2011-09-01

    A new finite-difference method for the numerical solution of the gas dynamics equations is proposed. The method is a uniform, monotone finite-difference scheme with second-order approximation in time and space outside the regions of shock and compression waves. It is based on introducing an adaptive artificial viscosity (AAV) into the gas dynamics equations. In this paper, the method is analyzed for 2D geometry. Test computations of the motion of contact discontinuities and shock waves and of the breakup of discontinuities are demonstrated.

  20. A novel adaptive 3D medical image interpolation method based on shape

    NASA Astrophysics Data System (ADS)

    Chen, Jiaxin; Ma, Wei

    2013-03-01

    Interpolation between cross-sections is one of the key steps in medical visualization. To address the fuzzy boundaries and the large amount of computation caused by traditional interpolation, a novel adaptive 3D medical image interpolation method is proposed in this paper. Firstly, the contour is obtained by edge interpolation, and the corresponding points are found according to the relation between the contour and the points on the original images. Secondly, the algorithm utilizes volume relativity to obtain the best point pair with the adaptive methods. Finally, the grey value of the interpolated pixel is obtained by matching-point interpolation. The experimental results show that the method presented in this paper not only meets the interpolation accuracy requirements but can also be used effectively in 3D medical image reconstruction.

  1. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, the Stabilized Conservative Level Set (SCLS) method uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the direction normal to the interface, thus preserving the conservative level set properties, while away from the interface the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to the need for a finer resolution in the vicinity of the interface compared with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt to steep gradients in the solution while retaining a predetermined order of accuracy.

  2. Three-dimensional multi bioluminescent sources reconstruction based on adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Ma, Xibo; Tian, Jie; Zhang, Bo; Zhang, Xing; Xue, Zhenwen; Dong, Di; Han, Dong

    2011-03-01

    Among the many optical molecular imaging modalities, bioluminescence imaging (BLI) is increasingly widely applied in tumor detection and in the evaluation of pharmacodynamics, toxicity, and pharmacokinetics because of its noninvasive molecular- and cellular-level detection ability, high sensitivity, and low cost in comparison with other imaging technologies. However, BLI cannot present the accurate location and intensity of internal bioluminescence sources, such as those in the bone, liver, or lung. Bioluminescent tomography (BLT) shows its advantage in determining the bioluminescence source distribution inside a small animal or phantom. Considering the deficiency of the two-dimensional imaging modality, we developed three-dimensional tomography to reconstruct the bioluminescence source distribution in transgenic mOC-Luc mouse bone from boundary measured data. In this paper, to study osteocalcin (OC) accumulation in transgenic mOC-Luc mouse bone, a BLT reconstruction method based on a multilevel adaptive finite element (FEM) algorithm was used for localizing and quantifying multiple bioluminescence sources. Optical and anatomical information of the tissues is incorporated as a priori knowledge in this method, which can reduce the ill-posedness of BLT. The data were acquired by the dual-modality BLT and micro-CT prototype system that we developed. Through temperature control and absolute intensity calibration, a relatively accurate intensity can be calculated. The location of the OC accumulation was reconstructed, which was coherent with the principle of bone differentiation. This result was also verified by an ex vivo experiment in a black 96-well plate using the BLI system and a chemiluminescence apparatus.

  3. An adaptive line enhancement method for UWB proximity fuze signal processing based on correlation matrix estimation with time delay factor

    NASA Astrophysics Data System (ADS)

    Li, Meng; Huang, Zhonghua

    2016-10-01

    Signal processing for an ultra-wideband radio fuze receiver involves several challenges: it requires high real-time performance, the output signal is mixed with broadband noise, and the signal-to-noise ratio (SNR) decreases with increasing detection range. The adaptive line enhancement method is used to filter the output signal of the ultra-wideband radio fuze receiver, thereby suppressing the wideband noise in the receiver output and extracting the target characteristic signal. The filter input correlation matrix estimation algorithm is based on the delay factor of an adaptive line enhancer. The proposed adaptive algorithm was used to filter and reduce noise in the output signal from the fuze receiver. Simulation results showed that the SNR of the output signal after adaptive noise reduction was improved by about 20 dB, compared with an improvement of around 10 dB after finite impulse response (FIR) filtering.
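
    For context, the core of a textbook LMS adaptive line enhancer is sketched below: a delayed copy of the input decorrelates the broadband noise, so the filter output retains the narrowband (periodic) component while the prediction error carries the noise. The filter length, delay and step size are illustrative; the correlation-matrix estimation with a time-delay factor developed in the paper is not reproduced.

```python
import numpy as np

def adaptive_line_enhancer(x, delay=16, taps=64, mu=1e-3):
    """Classic LMS adaptive line enhancer.
    mu must be small relative to the input power for the update to stay stable."""
    x = np.asarray(x, dtype=float)
    w = np.zeros(taps)
    y = np.zeros_like(x)
    for n in range(delay + taps, len(x)):
        u = x[n - delay - taps:n - delay][::-1]   # delayed reference vector
        y[n] = w @ u                              # enhanced (narrowband) output
        e = x[n] - y[n]                           # broadband prediction error
        w += 2.0 * mu * e * u                     # LMS weight update
    return y
```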

  4. An adaptive segment method for smoothing lidar signal based on noise estimation

    NASA Astrophysics Data System (ADS)

    Wang, Yuzhao; Luo, Pingping

    2014-10-01

    An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise level is defined as 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows in the signals are fixed; the ASSM instead creates different end points for different signals, so the smoothing windows can be set adaptively. The window length is set to half of each segment, and the average smoothing method is then applied within the segments. An iterative process is required to reduce the end-point aberration effect of the average smoothing method, and two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than in the frequency domain, which means that frequency-domain disturbances are avoided. In the experimental work, a lidar echo was simulated, assumed to be created by a space-borne lidar (e.g. CALIOP), and white Gaussian noise was added to the echo to represent the random noise resulting from the environment and the detector. The ASSM was applied to the noisy echo to filter the noise. In the test, N was set to 3 and two iterations were used. The results show that the signal can be smoothed adaptively by the ASSM, although N and the number of iterations may need to be optimized when the ASSM is applied to a different lidar.
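
    Because the abstract spells out the segmentation rule, a minimal sketch is easy to give: break points are placed where adjacent samples differ by more than 3Nσ, and each segment is average-smoothed with a window of half its length, repeated for a couple of iterations. End-point handling and other details below are my own simple choices, not the paper's.

```python
import numpy as np

def assm(signal, sigma, N=3, iterations=2):
    """Adaptive segmentation smoothing sketch (sigma = background noise std)."""
    x = np.asarray(signal, dtype=float)
    jumps = np.where(np.abs(np.diff(x)) > 3 * N * sigma)[0] + 1   # segment end points
    bounds = np.concatenate(([0], jumps, [len(x)]))
    out = x.copy()
    for _ in range(iterations):                     # iterate to reduce end-point artefacts
        for a, b in zip(bounds[:-1], bounds[1:]):
            seg = out[a:b]
            win = max(1, len(seg) // 2)             # window = half the segment length
            kernel = np.ones(win) / win
            out[a:b] = np.convolve(seg, kernel, mode="same")
    return out
```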

  5. Comparing model-based adaptive LMS filters and a model-free hysteresis loop analysis method for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Zhou, Cong; Chase, J. Geoffrey; Rodgers, Geoffrey W.; Xu, Chao

    2017-02-01

    The model-free hysteresis loop analysis (HLA) method for structural health monitoring (SHM) has significant advantages over traditional model-based SHM methods, which require a suitable baseline model to represent the actual system response. This paper provides a unique validation against both an experimental reinforced concrete (RC) building and a calibrated numerical model to delineate the capability of the model-free HLA method and the adaptive least mean squares (LMS) model-based method in detecting, localizing and quantifying damage that may not be visible or observable in the overall structural response. The results clearly show that the model-free HLA method is capable of adapting to changes in how structures transfer load or demand across structural elements over time and over multiple events of different size. However, the adaptive LMS model-based method presented an image of a greater spread of lesser damage over time and story when the baseline model is not well defined. Finally, the two algorithms are tested on a simpler steel structure with typical hysteretic behaviour to quantify the impact of the mismatch between the baseline model used for identification and the actual response. The overall results highlight the need for model-based methods to have an appropriate model that can capture the observed response in order to yield accurate results, even in small events where the structure remains linear.

  6. A vertical parallax reduction method for stereoscopic video based on adaptive interpolation

    NASA Astrophysics Data System (ADS)

    Li, Qingyu; Zhao, Yan

    2016-10-01

    The existence of vertical parallax is the main factor affecting the viewing comfort of stereo video, and visual fatigue is gaining widespread attention with the booming development of 3D stereoscopic video technology. In order to reduce the vertical parallax without affecting the horizontal parallax, a self-adaptive image scaling algorithm is proposed which uses the edge characteristics efficiently. In addition, the nonlinear Levenberg-Marquardt (L-M) algorithm is introduced in this paper to improve the accuracy of the transformation matrix. Firstly, the self-adaptive scaling algorithm is used for interpolation of the original image: when a pixel of the original image lies in an edge area, the interpolation is implemented adaptively along the edge direction obtained by the Sobel operator. Secondly, the SIFT algorithm, which is invariant to scaling, rotation and affine transformation, is used to detect matching feature points in the binocular images. Then, according to the coordinates of the matching points, the transformation matrix that reduces the vertical parallax is calculated using the Levenberg-Marquardt algorithm. Finally, the transformation matrix is applied to the target image to calculate the new coordinate position of each pixel of the view image. The experimental results show that, compared with a method that reduces the vertical parallax using a linear algorithm to compute the two-dimensional projective transformation, the proposed method clearly improves the vertical parallax reduction. At the same time, after vertical parallax reduction, the horizontal parallax of the proposed method remains closer to that of the original image. Therefore, the proposed method can optimize the vertical parallax reduction.

  7. Overcoming the Curse of Dimension: Methods Based on Sparse Representation and Adaptive Sampling

    DTIC Science & Technology

    2011-02-28

    The available record text is a fragment of the project report. It notes that part of the work was carried out with the joint post-doctoral researcher Haijun Yu and lists related publications: "…multiscale modeling using sparse representation", Comm. Comp. Phys., 4(5), pp. 1025–1033 (2008); X. Zhou, W. Ren and W. E, "Adaptive minimum…action method for the study of rare events", J. Chem. Phys., 128, 2008; X. Wan, X. Zhou and W. E, "Noise-induced transitions in the Kuramoto-Sivashinsky equation", preprint, submitted.

  8. Thickness-based adaptive mesh refinement methods for multi-phase flow simulations with thin regions

    SciTech Connect

    Chen, Xiaodong; Yang, Vigor

    2014-07-15

    In numerical simulations of multi-scale, multi-phase flows, grid refinement is required to resolve regions with small scales. A notable example is liquid-jet atomization and subsequent droplet dynamics. It is essential to characterize the detailed flow physics with variable length scales with high fidelity, in order to elucidate the underlying mechanisms. In this paper, two thickness-based mesh refinement schemes are developed based on distance- and topology-oriented criteria, for thin regions with a confining wall/plane of symmetry and for general situations, respectively. Both techniques are implemented in a general framework with a volume-of-fluid formulation and an adaptive-mesh-refinement capability. The distance-oriented technique compares the ratio of an interfacial cell's size to the distance between the mass center of the cell and a reference plane against a critical value. The topology-oriented technique is developed from digital topology theories to handle more general conditions. The requirement for interfacial mesh refinement can be detected swiftly, without the need for thickness information, equation solving, variable averaging or mesh repairing. The mesh refinement level increases smoothly on demand in thin regions. The schemes have been verified and validated against several benchmark cases to demonstrate their effectiveness and robustness. These include the dynamics of colliding droplets, droplet motions in a microchannel, and atomization of liquid impinging jets. Overall, the thickness-based refinement technique provides highly adaptive meshes for problems with thin regions in an efficient and fully automatic manner.
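
    The distance-oriented criterion quoted above reduces to a one-line test, sketched below under an assumed critical ratio; the topology-oriented variant based on digital topology is not reproduced.

```python
import numpy as np

def needs_refinement(cell_size, cell_center, plane_point, plane_normal, critical_ratio=0.25):
    """Flag an interfacial cell for refinement when the ratio of its size to the
    distance from its mass center to a reference plane (wall or symmetry plane)
    exceeds a critical value.  The critical_ratio value is illustrative."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    distance = abs(np.dot(np.asarray(cell_center, dtype=float) - np.asarray(plane_point, dtype=float), n))
    return cell_size / max(distance, 1e-12) > critical_ratio

# e.g. needs_refinement(0.01, cell_center=(0.1, 0.02, 0.0),
#                       plane_point=(0.0, 0.0, 0.0), plane_normal=(0.0, 1.0, 0.0))
```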

  9. Parallel multilevel adaptive methods

    NASA Technical Reports Server (NTRS)

    Dowell, B.; Govett, M.; Mccormick, S.; Quinlan, D.

    1989-01-01

    The progress of a project for the design and analysis of a multilevel adaptive algorithm (AFAC/HM/) targeted for the Navier-Stokes Computer is discussed. The results of initial timing tests of AFAC, coupled with multigrid and an efficient load balancer, on a 16-node Intel iPSC/2 hypercube are presented.

  10. An improved human visual system based reversible data hiding method using adaptive histogram modification

    NASA Astrophysics Data System (ADS)

    Hong, Wien; Chen, Tung-Shou; Wu, Mei-Chen

    2013-03-01

    Jung et al. (IEEE Signal Processing Letters, 18(2), 95, 2011) proposed a reversible data hiding method that considers the human visual system (HVS). They employed the mean of the visited neighboring pixels to predict the current pixel value and estimated the just noticeable difference (JND) of the current pixel. Message bits are then embedded by adjusting the embedding level according to the calculated JND. Jung et al.'s method achieved excellent image quality; however, the embedding algorithm they used may result in over-modification of pixel values and a large location map, which may deteriorate the image quality and decrease the pure payload. The proposed method exploits the nearest neighboring pixels to predict the visited pixel value and to estimate the corresponding JND. The cover pixels are preprocessed adaptively to reduce the size of the location map. We also employ an embedding-level selection mechanism to prevent near-saturated pixels from being over-modified. Experimental results show that the image quality of the proposed method is higher than that of Jung et al.'s method, and the payload can also be increased due to the reduction of the location map.

  11. A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov-Maxwell system

    SciTech Connect

    Besse, Nicolas Latu, Guillaume Ghizzo, Alain Sonnendruecker, Eric Bertrand, Pierre

    2008-08-10

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and of the regularity of the distribution function. The multiscale expansion of the distribution function therefore yields a sparse representation of the data and thus saves memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. The interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wave breaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase in the total number of points of the phase-space grid as the filaments get finer as time goes on. The adaptive method could be more useful in cases where these thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to

  12. A Time-Adaptive Integrator Based on Radau Methods for Advection Diffusion Reaction PDEs

    NASA Astrophysics Data System (ADS)

    Gonzalez-Pinto, S.; Perez-Rodriguez, S.

    2009-09-01

    The numerical integration of time-dependent PDEs, especially of advection-diffusion-reaction type, for two and three spatial variables (in short, 2D and 3D problems) in the MoL framework is considered. The spatial discretization uses finite differences, and the time integration is carried out by means of the L-stable, third-order formula known as the two-stage Radau IIA method. The main point in solving the resulting large-dimensional ODEs is not to iterate the stage values of the Radau method to convergence (the convergence is very slow on the stiff components), but to perform only a few iterations and to take the last computed stage value as the advancing solution. The iterations are carried out by using Approximate Matrix Factorization (AMF) coupled to a Newton-type iteration (SNI) as indicated in [5], which results in an acceptably cheap iteration, similar to the Alternating Direction Implicit (ADI) methods of Peaceman and Rachford (1955). Some stability results for the whole (AMF)-(SNI) process and a local error estimate for adaptive time integration are also given. Numerical results on two standard PDEs are presented and some conclusions about our method and other well-known solvers are drawn.

  13. A New Sparse Adaptive Channel Estimation Method Based on Compressive Sensing for FBMC/OQAM Transmission Network.

    PubMed

    Wang, Han; Du, Wencai; Xu, Lingwei

    2016-06-24

    The conventional channel estimation methods based on a preamble for filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) systems in mobile-to-mobile sensor networks are inefficient. By utilizing the intrinsic sparsity of wireless channels, channel estimation is treated as a compressive sensing (CS) problem to improve the estimation performance. In this paper, an Adaptive Regularized Compressive Sampling Matching Pursuit (ARCoSaMP) algorithm is proposed. Unlike previous greedy algorithms, the new algorithm can achieve accurate reconstruction by choosing the support set adaptively and by exploiting a regularization process, which performs a second selection of atoms in the support set even though the sparsity of the channel is unknown. Simulation results show that CS-based methods obtain a significant channel estimation performance improvement compared to conventional preamble-based methods. The proposed ARCoSaMP algorithm outperforms the conventional sparse adaptive matching pursuit (SAMP) algorithm, and provides even more interesting results than the most advanced greedy compressive sampling matching pursuit (CoSaMP) algorithm, without prior knowledge of the channel sparsity.
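
    To make the family of greedy pursuits discussed here concrete, a plain CoSaMP recovery loop is sketched below for y = Ax with a known sparsity level s; the adaptive support selection and the regularization step that distinguish ARCoSaMP are not reproduced, and the parameter choices are illustrative.

```python
import numpy as np

def cosamp(A, y, s, iters=20, tol=1e-6):
    """Plain CoSaMP for y = A x with an s-sparse x (reference implementation sketch)."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()
    for _ in range(iters):
        proxy = A.T @ r
        omega = np.argsort(np.abs(proxy))[-2 * s:]           # 2s strongest correlations
        support = np.union1d(omega, np.flatnonzero(x))        # merge with current support
        b = np.zeros(n)
        b[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        keep = np.argsort(np.abs(b))[-s:]                     # prune back to s entries
        x = np.zeros(n)
        x[keep] = b[keep]
        r = y - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(y):
            break
    return x
```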

  14. A New Sparse Adaptive Channel Estimation Method Based on Compressive Sensing for FBMC/OQAM Transmission Network

    PubMed Central

    Wang, Han; Du, Wencai; Xu, Lingwei

    2016-01-01

    The conventional channel estimation methods based on a preamble for filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) systems in mobile-to-mobile sensor networks are inefficient. By utilizing the intrinsic sparsity of wireless channels, channel estimation is treated as a compressive sensing (CS) problem to improve the estimation performance. In this paper, an Adaptive Regularized Compressive Sampling Matching Pursuit (ARCoSaMP) algorithm is proposed. Unlike previous greedy algorithms, the new algorithm can achieve accurate reconstruction by choosing the support set adaptively and by exploiting a regularization process, which performs a second selection of atoms in the support set even though the sparsity of the channel is unknown. Simulation results show that CS-based methods obtain a significant channel estimation performance improvement compared to conventional preamble-based methods. The proposed ARCoSaMP algorithm outperforms the conventional sparse adaptive matching pursuit (SAMP) algorithm, and provides even more interesting results than the most advanced greedy compressive sampling matching pursuit (CoSaMP) algorithm, without prior knowledge of the channel sparsity. PMID:27347967

  15. Detection of neuronal spikes using an adaptive threshold based on the max-min spread sorting method.

    PubMed

    Chan, Hsiao-Lung; Lin, Ming-An; Wu, Tony; Lee, Shih-Tseng; Tsai, Yu-Tai; Chao, Pei-Kuang

    2008-07-15

    Neuronal spike information can be used to correlate neuronal activity with various stimuli, to find target neural areas for deep brain stimulation, and to decode intended motor commands for a brain-machine interface. Typically, spike detection is performed using adaptive thresholds determined by the running root-mean-square (RMS) value of the signal, yet such conventional detection methods are susceptible to threshold fluctuations caused by neuronal spike intensity. In the present study we propose a novel adaptive threshold based on the max-min spread sorting method. On microelectrode recording signals and on simulated signals with Gaussian and colored noise, the novel method had the smallest threshold variation and similar or better spike detection performance than the RMS-based method and other improved methods. Moreover, the detection method described in this paper uses reduced features of the raw signal to determine the threshold, giving a simple data manipulation that is beneficial for reducing the computational load when dealing with very large amounts of data (such as multi-electrode recordings).
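
    For reference, the conventional running-RMS baseline that the paper improves on can be sketched as below; the max-min spread sorting threshold itself is not described in enough detail here to reproduce, so the window length and threshold factor are illustrative assumptions.

```python
import numpy as np

def detect_spikes(x, fs, k=4.0, window_s=0.5, refractory_s=0.001):
    """Running-RMS threshold spike detector; returns sample indices of crossings."""
    x = np.asarray(x, dtype=float)
    win = int(window_s * fs)
    spikes, last = [], -np.inf
    for start in range(0, len(x), win):
        seg = x[start:start + win]
        thr = k * np.sqrt(np.mean(seg ** 2))        # adaptive threshold from local RMS
        above = np.flatnonzero(np.abs(seg) > thr)
        for i in above:
            t = start + i
            if t - last > refractory_s * fs:        # enforce a refractory period
                spikes.append(t)
                last = t
    return np.array(spikes)
```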

  16. Microwave medical imaging based on sparsity and an iterative method with adaptive thresholding.

    PubMed

    Azghani, Masoumeh; Kosmas, Panagiotis; Marvasti, Farokh

    2015-02-01

    We propose a new image recovery method to improve the resolution in microwave imaging applications. Scattered field data obtained from a simplified breast model with closely located targets is used to formulate an electromagnetic inverse scattering problem, which is then solved using the Distorted Born Iterative Method (DBIM). At each iteration of the DBIM method, an underdetermined set of linear equations is solved using our proposed sparse recovery algorithm, IMATCS. Our results demonstrate the ability of the proposed method to recover small targets in cases where traditional DBIM approaches fail. Furthermore, in order to regularize the sparse recovery algorithm, we propose a novel L2-based approach and prove its convergence. The simulation results indicate that the L2-regularized method improves the robustness of the algorithm against the ill-posed conditions of the EM inverse scattering problem. Finally, we demonstrate that the regularized IMATCS-DBIM approach leads to fast, accurate and stable reconstructions of highly dense breast compositions.
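
    The sparse-recovery step can be illustrated with a generic iterative method with adaptive thresholding (the IMAT family that IMATCS builds on): a gradient step on the data-fit term is followed by hard thresholding with an exponentially decaying level. The step size and threshold schedule below are illustrative, and the L2 regularization proposed in the paper is not included.

```python
import numpy as np

def imat(A, y, iters=100, lam=0.2, beta=2.0, alpha=0.05):
    """Iterative method with adaptive (decaying) hard thresholding for y = A x.
    lam should be below 2 / ||A||_2**2 for the gradient step to remain stable."""
    x = np.zeros(A.shape[1])
    for k in range(iters):
        x = x + lam * A.T @ (y - A @ x)        # gradient step on the data term
        thr = beta * np.exp(-alpha * k)        # threshold shrinks as iterations proceed
        x[np.abs(x) < thr] = 0.0               # hard thresholding promotes sparsity
    return x
```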

  17. Parallel adaptive mesh refinement method based on WENO finite difference scheme for the simulation of multi-dimensional detonation

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang

    2015-10-01

    For the numerical simulation of detonation, the computational cost of using uniform meshes is large due to the vast separation in both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper proposes an AMR method with high-order accuracy for the numerical investigation of multi-dimensional detonation. A well-designed AMR method based on a finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes and makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high-order accuracy, high-order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balanced parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that AMR&WENO is accurate and has high resolution. Moreover, we evaluate and compare the performance of the uniform-mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide further insight into the high performance of the parallel AMR&WENO method.

  18. New method adaptive to geospatial information acquisition and share based on grid

    NASA Astrophysics Data System (ADS)

    Fu, Yingchun; Yuan, Xiuxiao

    2005-11-01

    It is difficult and time-consuming to acquire and share multi-source geospatial information in a grid computing environment, especially for data with different geo-reference benchmarks. Although middleware for data format transformation has been applied by many grid applications and GIS software systems, it remains difficult to carry out on-demand spatial data assembly among various geo-reference benchmarks because of the complex computation of rigorous coordinate transformation models. To address this problem, an efficient hierarchical quadtree structure referred to as multi-level grids is designed and coded to express the multi-scale global geo-space. A geospatial object located in a certain cell of the multi-level grids can be expressed as an increment value that is relative to the grid central point and is constant across different geo-reference benchmarks. A mediator responsible for the geo-reference transformation function with multi-level grids has been developed and aligned with grid services. With the help of the mediator, a map or the spatial data sets returned by a query from individual sources with different geo-references can be merged into a uniform composite result. Instead of requiring complex data pre-processing prior to integration, the introduced method is well suited to integration with grid-enabled services.

  19. Total enthalpy-based lattice Boltzmann method with adaptive mesh refinement for solid-liquid phase change

    NASA Astrophysics Data System (ADS)

    Huang, Rongzong; Wu, Huiying

    2016-06-01

    A total enthalpy-based lattice Boltzmann (LB) method with adaptive mesh refinement (AMR) is developed in this paper to efficiently simulate solid-liquid phase change problems, where variables vary significantly near the phase interface and thus a finer grid is required. For the total enthalpy-based LB method, the velocity field is solved by an incompressible LB model with a multiple-relaxation-time (MRT) collision scheme, and the temperature field is solved by a total enthalpy-based MRT LB model with the phase interface effects considered and the deviation term eliminated. With the kinetic assumption that the density distribution function for the solid phase is at the equilibrium state, a volumetric LB scheme is proposed to accurately realize the nonslip velocity condition on the diffusive phase interface and in the solid phase. Compared with previous schemes, this scheme can avoid nonphysical flow in the solid phase. The AMR approach is developed based on multiblock grids. An indicator function is introduced to control the adaptive generation of multiblock grids, which guarantees the existence of an overlap area between adjacent blocks for information exchange. Since MRT collision schemes are used, the information exchange is carried out directly in the moment space. Numerical tests are first performed to validate the strict satisfaction of the nonslip velocity condition, and then melting problems in a square cavity with different Prandtl numbers and Rayleigh numbers are simulated, which demonstrates that the present method can handle solid-liquid phase change problems with high efficiency and accuracy.

  20. An Adaptive Multi-Sensor Data Fusion Method Based on Deep Convolutional Neural Networks for Fault Diagnosis of Planetary Gearbox

    PubMed Central

    Jing, Luyang; Wang, Taiyong; Zhao, Ming; Wang, Peng

    2017-01-01

    A fault diagnosis approach based on multi-sensor data fusion is a promising tool to deal with complicated damage detection problems of mechanical systems. Nevertheless, this approach suffers from two challenges, which are (1) the feature extraction from various types of sensory data and (2) the selection of a suitable fusion level. It is usually difficult to choose an optimal feature or fusion level for a specific fault diagnosis task, and extensive domain expertise and human labor are also highly required during these selections. To address these two challenges, we propose an adaptive multi-sensor data fusion method based on deep convolutional neural networks (DCNN) for fault diagnosis. The proposed method can learn features from raw data and optimize a combination of different fusion levels adaptively to satisfy the requirements of any fault diagnosis task. The proposed method is tested through a planetary gearbox test rig. Handcrafted features, manually selected fusion levels, single-sensor data, and two traditional intelligent models, back-propagation neural networks (BPNN) and a support vector machine (SVM), are used as comparisons in the experiment. The results demonstrate that the proposed method is able to detect the conditions of the planetary gearbox effectively with the best diagnosis accuracy among all comparative methods in the experiment. PMID:28230767

  1. An Adaptive Multi-Sensor Data Fusion Method Based on Deep Convolutional Neural Networks for Fault Diagnosis of Planetary Gearbox.

    PubMed

    Jing, Luyang; Wang, Taiyong; Zhao, Ming; Wang, Peng

    2017-02-21

    A fault diagnosis approach based on multi-sensor data fusion is a promising tool to deal with complicated damage detection problems of mechanical systems. Nevertheless, this approach suffers from two challenges, which are (1) the feature extraction from various types of sensory data and (2) the selection of a suitable fusion level. It is usually difficult to choose an optimal feature or fusion level for a specific fault diagnosis task, and extensive domain expertise and human labor are also highly required during these selections. To address these two challenges, we propose an adaptive multi-sensor data fusion method based on deep convolutional neural networks (DCNN) for fault diagnosis. The proposed method can learn features from raw data and optimize a combination of different fusion levels adaptively to satisfy the requirements of any fault diagnosis task. The proposed method is tested through a planetary gearbox test rig. Handcrafted features, manually selected fusion levels, single-sensor data, and two traditional intelligent models, back-propagation neural networks (BPNN) and a support vector machine (SVM), are used as comparisons in the experiment. The results demonstrate that the proposed method is able to detect the conditions of the planetary gearbox effectively with the best diagnosis accuracy among all comparative methods in the experiment.
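
    As a minimal illustration of data-level fusion with a 1-D CNN, the sketch below stacks raw signals from several sensors as input channels of a small convolutional classifier (PyTorch). The layer sizes are illustrative, the signal length is assumed to be divisible by 16, and the adaptive combination of fusion levels that is central to the paper is not reproduced.

```python
import torch
import torch.nn as nn

class FusionDCNN(nn.Module):
    """Data-level fusion sketch: raw multi-sensor signals as input channels of a 1-D CNN."""
    def __init__(self, n_sensors=4, n_classes=5, signal_len=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_sensors, 16, kernel_size=15, padding=7), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (signal_len // 16), n_classes)

    def forward(self, x):                 # x: (batch, n_sensors, signal_len)
        z = self.features(x)
        return self.classifier(z.flatten(1))

# e.g. logits = FusionDCNN()(torch.randn(8, 4, 1024))  -> shape (8, 5)
```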

  2. Exploring adaptations to climate change with stakeholders: A participatory method to design grassland-based farming systems.

    PubMed

    Sautier, Marion; Piquet, Mathilde; Duru, Michel; Martin-Clouaire, Roger

    2017-05-15

    Research is expected to produce knowledge, methods and tools that enhance stakeholders' adaptive capacity by helping them to anticipate and cope with the effects of climate change at their own level. Farmers face substantial challenges from climate change, from changes in average temperatures and the precipitation regime to increased variability of weather conditions and of the frequency of extreme events. Such changes can have dramatic consequences for many types of agricultural production systems, such as grassland-based livestock systems, for which climate change influences the seasonality and productivity of fodder production. We present a participatory design method called FARMORE (FARM-Oriented REdesign) that allows farmers to design and evaluate adaptations of livestock systems to future climatic conditions. It explicitly considers three climate features in the design and evaluation processes: climate change, climate variability and the limited predictability of weather. FARMORE consists of a sequence of three workshops for which a pre-existing game-like platform was adapted. Year-round forage production and animal feeding requirements must be assembled by participants with a computerized support system. In workshop 1, farmers aim to produce a configuration that satisfies an average future weather scenario. In workshop 2, they refine or revise the previous configuration by considering a sample of the between-year variability of weather. In workshop 3, they explicitly take the limited predictability of weather into account. We present the practical aspects of the method based on four case studies involving twelve farmers from Aveyron (France), and illustrate it through an in-depth description of one of these case studies with three dairy farmers. The case studies show and discuss how the workshop sequencing (1) supports a design process that progressively accommodates the complexity of real management contexts by enlarging considerations of climate change

  3. Case-based reactive navigation: a method for on-line selection and adaptation of reactive robotic control parameters.

    PubMed

    Ram, A; Arkin, R C; Moorman, K; Clark, R J

    1997-01-01

    We present a new line of research investigating on-line adaptive reactive control mechanisms for autonomous intelligent agents. We discuss a case-based method for dynamic selection and modification of behavior assemblages for a navigational system. The case-based reasoning module is designed as an addition to a traditional reactive control system, and provides more flexible performance in novel environments without extensive high level reasoning that would otherwise slow the system down. The method is implemented in the ACBARR (case-based reactive robotic) system and evaluated through empirical simulation of the system on several different environments, including "box canyon" environments known to be problematic for reactive control systems in general.

  4. Feedback in Videogame-Based Adaptive Training

    ERIC Educational Resources Information Center

    Rivera, Iris Daliz

    2010-01-01

    The field of training has been changing rapidly due to advances in technology such as videogame-based adaptive training. Videogame-based adaptive training has provided flexibility and adaptability for training in cost-effective ways. Although this method of training may have many benefits for the trainee, current research has not kept up to pace…

  5. Goal-based h-adaptivity of the 1-D diamond difference discrete ordinate method

    NASA Astrophysics Data System (ADS)

    Jeffers, R. S.; Kópházi, J.; Eaton, M. D.; Févotte, F.; Hülsemann, F.; Ragusa, J.

    2017-04-01

    The quantity of interest (QoI) associated with the solution of a partial differential equation (PDE) is not, in general, the solution itself, but a functional of the solution. Dual weighted residual (DWR) error estimators are one way of estimating the error in the QoI resulting from the discretisation of the PDE. This paper aims to provide an estimate of the error in the QoI due to the spatial discretisation, where the discretisation scheme being used is the diamond difference (DD) method in space and the discrete ordinates (SN) method in angle. The QoIs are detector reaction rates for 1-D fixed-source neutron transport problems and the eigenvalue (Keff) for 1-D criticality problems, respectively. Local values of the DWR over individual cells are used as error indicators for goal-based mesh refinement, which aims to give an optimal mesh for a given QoI.

  6. Scatter-plot-based method for noise characteristics evaluation in remote sensing images using adaptive image clustering procedure

    NASA Astrophysics Data System (ADS)

    Abramova, Victoriya V.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2016-10-01

    Several modifications of the scatter-plot-based method for mixed noise parameter estimation are proposed. The modifications relate to the image segmentation stage and are intended to adaptively separate image blocks into clusters, taking into account image peculiarities, and to choose the required number of clusters. A comparative performance analysis of the proposed modifications is performed for images from the TID2008 database. It is shown that the best estimation accuracy is provided by a method with automatic determination of the required number of clusters followed by block separation into clusters using the k-means method. This modification improves the accuracy of noise characteristics estimation by up to 5% for both signal-independent and signal-dependent noise components in comparison with the basic method. Results for real-life data are also presented.
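
    As a rough illustration of the scatter-plot idea, the sketch below computes per-block local means and variances, clusters the resulting (mean, variance) points, and fits the mixed-noise model var = sigma_add^2 + k*mean through robust cluster representatives. The block size, cluster count, and the use of scikit-learn's k-means in place of the adaptive clustering procedure are all assumptions for illustration.

      # Sketch of scatter-plot noise estimation: per-block (mean, variance) points
      # are clustered and a mixed-noise model  var = sigma_add^2 + k * mean  is
      # fitted through robust per-cluster representatives.
      import numpy as np
      from sklearn.cluster import KMeans

      def block_stats(img, b=8):
          h, w = img.shape
          means, varis = [], []
          for i in range(0, h - b + 1, b):
              for j in range(0, w - b + 1, b):
                  blk = img[i:i + b, j:j + b]
                  means.append(blk.mean())
                  varis.append(blk.var(ddof=1))
          return np.array(means), np.array(varis)

      def estimate_mixed_noise(img, n_clusters=8):
          means, varis = block_stats(img)
          labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
              np.column_stack([means, varis]))
          # one robust representative (mean, variance) point per cluster
          pts = np.array([[np.median(means[labels == c]), np.median(varis[labels == c])]
                          for c in range(n_clusters)])
          k, sigma2_add = np.polyfit(pts[:, 0], pts[:, 1], 1)  # slope = signal-dependent part
          return sigma2_add, k

      rng = np.random.default_rng(0)
      clean = np.kron(rng.uniform(50.0, 200.0, (8, 8)), np.ones((32, 32)))  # piecewise-flat image
      noisy = clean + rng.normal(0.0, np.sqrt(4.0 + 0.1 * clean))
      print(estimate_mixed_noise(noisy))   # should come out close to (4.0, 0.1)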

  7. SU-D-207-04: GPU-Based 4D Cone-Beam CT Reconstruction Using Adaptive Meshing Method

    SciTech Connect

    Zhong, Z; Gu, X; Iyengar, P; Mao, W; Wang, J; Guo, X

    2015-06-15

    Purpose: Due to the limited number of projections at each phase, the image quality of a four-dimensional cone-beam CT (4D-CBCT) is often degraded, which decreases the accuracy of subsequent motion modeling. One of the promising methods is the simultaneous motion estimation and image reconstruction (SMEIR) approach. The objective of this work is to enhance the computational speed of the SMEIR algorithm using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the tetrahedral mesh based on the features of a reference phase of the 4D-CBCT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. After the mesh generation, the updated motion model and the other phases of the 4D-CBCT can be obtained by matching the 4D-CBCT projection images at each phase with the corresponding forward projections of the deformed reference phase. The entire 4D-CBCT reconstruction process is implemented on a GPU, significantly increasing computational efficiency through massive parallelism. Results: A 4D XCAT digital phantom was used to test the proposed mesh-based image reconstruction algorithm. The reconstructed images show that both bone structures and the interior of the lung are well preserved and that the tumor position is captured accurately. Compared to the previous voxel-based CPU implementation of SMEIR, the proposed method is about 157 times faster for reconstructing a 10-phase 4D-CBCT with dimensions 256×256×150. Conclusion: The GPU-based parallel 4D-CBCT reconstruction method uses the feature-based mesh for estimating the motion model and produces image quality equivalent to the previous voxel-based SMEIR approach, with significantly improved computational speed.

  8. An adaptive Gaussian process-based method for efficient Bayesian experimental design in groundwater contaminant source identification problems: ADAPTIVE GAUSSIAN PROCESS-BASED INVERSION

    SciTech Connect

    Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng

    2016-08-01

    Surrogate models are commonly used in Bayesian approaches such as Markov chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimation of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or by implementing MCMC in a two-stage manner. Since two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the measurement information is incorporated, a locally accurate approximation of the original model can be adaptively constructed at low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimate of the approximation error, which can be incorporated in the Bayesian formula to avoid an over-confident estimate of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing estimation accuracy, the new approach achieves a speed-up of about 200 times compared to our previous work using two-stage MCMC.
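
    The key point, that the GP predictive variance can be folded into the likelihood so that surrogate error inflates uncertainty rather than biasing the posterior, can be sketched as below. The scalar forward model, prior, proposal width, and refinement schedule are toy assumptions, not the groundwater transport model or design criterion of the paper.

      # Sketch of GP-surrogate MCMC in which the surrogate's predictive variance is
      # added to the data-noise variance.  Forward model and priors are toy stand-ins.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def forward(theta):                       # "expensive" model (toy scalar example)
          return np.sin(3.0 * theta) + 0.5 * theta

      rng = np.random.default_rng(1)
      theta_true, sigma_obs = 0.8, 0.05
      y_obs = forward(theta_true) + rng.normal(0.0, sigma_obs)

      X = rng.uniform(-2.0, 2.0, (6, 1))        # small initial design
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-8).fit(X, forward(X[:, 0]))

      def log_post(theta):
          mu, std = gp.predict(np.array([[theta]]), return_std=True)
          var = sigma_obs**2 + std[0]**2        # GP error folded into the likelihood
          return -0.5 * (y_obs - mu[0])**2 / var - 0.5 * np.log(var) - 0.5 * theta**2 / 4.0

      theta, lp = 0.0, log_post(0.0)
      samples = []
      for it in range(2000):                    # random-walk Metropolis on the surrogate
          prop = theta + 0.3 * rng.normal()
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          samples.append(theta)
          if it % 500 == 499:                   # adaptive refinement near the posterior
              X = np.vstack([X, [[theta]]])
              gp.fit(X, forward(X[:, 0]))
              lp = log_post(theta)
      print(np.mean(samples[1000:]), theta_true)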

  9. Development and Evaluation of an E-Learning Course for Deaf and Hard of Hearing Based on the Advanced Adapted Pedagogical Index Method

    ERIC Educational Resources Information Center

    Debevc, Matjaž; Stjepanovic, Zoran; Holzinger, Andreas

    2014-01-01

    Web-based and adapted e-learning materials provide alternative methods of learning to those used in a traditional classroom. Within the study described in this article, deaf and hard of hearing people used an adaptive e-learning environment to improve their computer literacy. This environment included streaming video with sign language interpreter…

  10. SAR based adaptive GMTI

    NASA Astrophysics Data System (ADS)

    Vu, Duc; Guo, Bin; Xu, Luzhou; Li, Jian

    2010-04-01

    We consider ground moving target indication (GMTI) and target velocity estimation based on multi-channel synthetic aperture radar (SAR) images. By forming velocity versus cross-range images, we show that small moving targets can be detected even in the presence of strong stationary ground clutter. Moreover, the velocities of the moving targets can be estimated, and the misplaced moving targets can be placed back at their original locations based on the estimated velocities. Adaptive beamforming techniques, including Capon and the generalized likelihood ratio test (GLRT), are used to form velocity versus cross-range images for each range bin of interest. The velocity estimation ambiguities caused by the multi-channel array geometry are analyzed. We also demonstrate the effectiveness of our approaches using the Air Force Research Laboratory (AFRL) publicly released Gotcha SAR-based GMTI data set.
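
    A bare-bones illustration of the Capon scan used to build one column of a velocity image is sketched below: for each hypothesized velocity, the adaptive power is 1 / (a(v)^H R^{-1} a(v)). The linear phase-versus-velocity steering model, channel count, and clutter/target amplitudes are simplified assumptions rather than the Gotcha geometry.

      # Sketch of Capon (MVDR) scanning for one range/cross-range cell:
      # power(v) = 1 / (a(v)^H R^{-1} a(v)), with a stationary clutter ridge at v = 0.
      import numpy as np

      rng = np.random.default_rng(2)
      M, N = 8, 200                             # channels, slow-time snapshots
      v_true = 0.35                             # normalized target velocity

      def steering(v):                          # phase ramp across channels ~ velocity
          return np.exp(2j * np.pi * v * np.arange(M)) / np.sqrt(M)

      clutter = steering(0.0)[:, None] * (5.0 * (rng.normal(size=N) + 1j * rng.normal(size=N)))
      target  = steering(v_true)[:, None] * (1.0 * (rng.normal(size=N) + 1j * rng.normal(size=N)))
      noise   = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
      snap = clutter + target + noise

      R = snap @ snap.conj().T / N + 1e-3 * np.eye(M)   # diagonally loaded sample covariance
      Rinv = np.linalg.inv(R)

      v_grid = np.linspace(-0.5, 0.5, 201)
      power = np.array([1.0 / np.real(steering(v).conj() @ Rinv @ steering(v)) for v in v_grid])
      mask = np.abs(v_grid) > 0.05              # ignore the stationary-clutter ridge
      print("estimated target velocity:", v_grid[mask][np.argmax(power[mask])])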

  11. Improving the performance of lesion-based computer-aided detection schemes of breast masses using a case-based adaptive cueing method

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Qian, Wei; Zheng, Bin

    2016-03-01

    Current commercialized CAD schemes have high false-positive (FP) detection rates and also correlate highly with radiologists in positive lesion detection. Thus, we recently investigated a new approach to improve the efficacy of applying CAD to assist radiologists in reading and interpreting screening mammograms. Namely, we developed a new global-feature-based CAD approach/scheme that can cue a warning on cases with a high risk of being positive. In this study, we investigate the possibility of fusing global-feature or case-based scores with the local or lesion-based CAD scores using an adaptive cueing method. We hypothesize that the information from global feature extraction (features extracted from the whole breast regions) is different from and can provide supplementary information to the locally extracted features (computed from the segmented lesion regions only). On a large and diverse full-field digital mammography (FFDM) testing dataset with 785 cases (347 negative and 438 cancer cases with masses only), we ran our lesion-based and case-based CAD schemes "as is" on the whole dataset. To assess the supplementary information provided by the global features, we used an adaptive cueing method to adaptively adjust the original CAD-generated detection score (Sorg) of a detected suspicious mass region based on the computed case-based score (Scase) of the case associated with this detected region. Using the adaptive cueing method, better sensitivity results were obtained at lower FP rates (≤1 FP per image). Namely, sensitivity increases (in the FROC curves) of up to 6.7% and 8.2% were obtained for the ROI-based and case-based results, respectively.
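
    The adjustment rule itself is not spelled out in the abstract; a minimal, hypothetical version of the idea, in which the case score shifts every region score of that case toward or away from the decision threshold, is sketched below. The blending weight, the 0.5 pivot, and the clipping range are illustrative choices only.

      # Hypothetical adaptive-cueing rule: raise region scores in high-risk cases,
      # lower them in low-risk cases.  Weight and clipping are illustrative.
      import numpy as np

      def adaptive_cueing(s_org, s_case, weight=0.3):
          """Adjust lesion-based scores S_org using the case-based score S_case in [0, 1]."""
          s_org = np.asarray(s_org, dtype=float)
          adjusted = s_org + weight * (s_case - 0.5)
          return np.clip(adjusted, 0.0, 1.0)

      # three detected regions in one case whose global score flags it as high risk
      print(adaptive_cueing([0.42, 0.55, 0.30], s_case=0.85))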

  12. An adaptive level set method

    SciTech Connect

    Milne, Roger Brent

    1995-12-01

    This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.

  13. Particle System Based Adaptive Sampling on Spherical Parameter Space to Improve the MDL Method for Construction of Statistical Shape Models

    PubMed Central

    Zhou, Xiangrong; Hirano, Yasushi; Tachibana, Rie; Hara, Takeshi; Kido, Shoji; Fujita, Hiroshi

    2013-01-01

    Minimum description length (MDL) based group-wise registration was a state-of-the-art method to determine the corresponding points of 3D shapes for the construction of statistical shape models (SSMs). However, it suffered from the problem that determined corresponding points did not uniformly spread on original shapes, since corresponding points were obtained by uniformly sampling the aligned shape on the parameterized space of unit sphere. We proposed a particle-system based method to obtain adaptive sampling positions on the unit sphere to resolve this problem. Here, a set of particles was placed on the unit sphere to construct a particle system whose energy was related to the distortions of parameterized meshes. By minimizing this energy, each particle was moved on the unit sphere. When the system became steady, particles were treated as vertices to build a spherical mesh, which was then relaxed to slightly adjust vertices to obtain optimal sampling-positions. We used 47 cases of (left and right) lungs and 50 cases of livers, (left and right) kidneys, and spleens for evaluations. Experiments showed that the proposed method was able to resolve the problem of the original MDL method, and the proposed method performed better in the generalization and specificity tests. PMID:23861721
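
    The relaxation loop at the heart of the particle system can be illustrated with a simplified, uniform repulsive energy (the paper weights the energy by parameterization distortion, which is not modeled here): particles are pushed apart and re-projected onto the unit sphere at every step.

      # Sketch of particle relaxation on the unit sphere with a uniform 1/d^2
      # repulsion; step size and iteration count are illustrative.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 100
      p = rng.normal(size=(n, 3))
      p /= np.linalg.norm(p, axis=1, keepdims=True)          # random start on the unit sphere

      for _ in range(500):
          diff = p[:, None, :] - p[None, :, :]                # pairwise difference vectors
          dist = np.linalg.norm(diff, axis=2) + np.eye(n)     # +I avoids division by zero
          force = (diff / dist[:, :, None] ** 3).sum(axis=1)  # repulsive 1/d^2 interactions
          p += 1e-3 * force                                   # small relaxation step
          p /= np.linalg.norm(p, axis=1, keepdims=True)       # project back onto the sphere

      # spread of nearest-neighbour distances shrinks as the sampling becomes uniform
      d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=2) + 2.0 * np.eye(n)
      nn = d.min(axis=1)
      print(nn.std() / nn.mean())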

  14. Study adaptation, design, and methods of a web-based PTSD intervention for women Veterans.

    PubMed

    Lehavot, Keren; Litz, Brett; Millard, Steven P; Hamilton, Alison B; Sadler, Anne; Simpson, Tracy

    2017-02-01

    Women Veterans are a rapidly growing population with a high risk of exposure to potentially traumatizing events and PTSD diagnoses. Despite the dissemination of evidence-based treatments for PTSD in the VA, most women Veteran VA users underutilize these treatments. Web-based PTSD treatment has the potential to reach and engage women Veterans with PTSD who do not receive treatment in VA settings. Our objective is to modify and evaluate Delivery of Self Training and Education for Stressful Situations (DESTRESS), a web-based cognitive-behavioral intervention for PTSD, to target PTSD symptoms among women Veterans. The specific aims are to: (1) obtain feedback about DESTRESS, particularly on its relevance and sensitivity to women, using semi-structured interviews with expert clinicians and women Veterans with PTSD, and make modifications based on this feedback; (2) conduct a pilot study to finalize study procedures and make further refinements to the intervention; and (3) conduct a randomized clinical trial (RCT) evaluating a revised, telephone-assisted DESTRESS compared to telephone monitoring only. We describe the results from the first two aims, and the study design and procedures for the ongoing RCT. This line of research has the potential to result in a gender-sensitive, empirically based, online treatment option for women Veterans with PTSD.

  15. Ontology-Based Adaptive Dynamic e-Learning Map Planning Method for Conceptual Knowledge Learning

    ERIC Educational Resources Information Center

    Chen, Tsung-Yi; Chu, Hui-Chuan; Chen, Yuh-Min; Su, Kuan-Chun

    2016-01-01

    E-learning improves the shareability and reusability of knowledge, and surpasses the constraints of time and space to achieve remote asynchronous learning. Since the depth of learning content often varies, it is thus often difficult to adjust materials based on the individual levels of learners. Therefore, this study develops an ontology-based…

  16. Novel image fusion method based on adaptive pulse coupled neural network and discrete multi-parameter fractional random transform

    NASA Astrophysics Data System (ADS)

    Lang, Jun; Hao, Zhengchao

    2014-01-01

    In this paper, we first propose the discrete multi-parameter fractional random transform (DMPFRNT), which makes the spectrum distributed randomly and uniformly. We then introduce this new spectral transform into the image fusion field and present a new approach for remote sensing image fusion, which utilizes both an adaptive pulse coupled neural network (PCNN) and the discrete multi-parameter fractional random transform in order to meet the requirements of both high spatial resolution and low spectral distortion. In the proposed scheme, the multi-spectral (MS) and panchromatic (Pan) images are converted into the discrete multi-parameter fractional random transform domain, respectively. In the DMPFRNT spectrum domain, the high amplitude spectrum (HAS) and low amplitude spectrum (LAS) components carry different information from the original images. We take full advantage of the synchronization pulse issuance characteristics of the PCNN to extract the HAS and LAS components properly, yielding PCNN ignition mapping images that can be used to determine the fusion parameters. In the fusion process, the local standard deviation of the amplitude spectrum is chosen as the linking strength of the pulse coupled neural network. Numerical simulations demonstrate that the proposed method is more reliable than, and superior to, several existing methods based on Hue Saturation Intensity representation, Principal Component Analysis, the discrete fractional random transform, etc.

  17. Domain adaptive boosting method and its applications

    NASA Astrophysics Data System (ADS)

    Geng, Jie; Miao, Zhenjiang

    2015-03-01

    Differences in data distribution widely exist among datasets, i.e., domains. For many pattern recognition, natural language processing, and content-based analysis systems, a decrease in performance caused by the domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach, with extensions to cover the domain differences between the source and target domains. This approach contains two main stages: source-domain clustering and source-domain sample selection. By iteratively adding selected training samples from the source domain, the discrimination model is able to achieve better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for domains with large-scale samples and is easy to extend to multisource adaptation. We implement this method in three computer vision systems: a skin detection model for single images, a video concept detection model, and an object classification model. In the experiments, we compare the performance of several commonly used methods and the proposed DAB. Under most situations, the DAB is superior.

  18. A new algorithm for high-dimensional uncertainty quantification based on dimension-adaptive sparse grid approximation and reduced basis methods

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Quarteroni, Alfio

    2015-10-01

    In this work we develop an adaptive and reduced computational algorithm based on dimension-adaptive sparse grid approximation and reduced basis methods for solving high-dimensional uncertainty quantification (UQ) problems. In order to tackle the computational challenge of "curse of dimensionality" commonly faced by these problems, we employ a dimension-adaptive tensor-product algorithm [16] and propose a verified version to enable effective removal of the stagnation phenomenon besides automatically detecting the importance and interaction of different dimensions. To reduce the heavy computational cost of UQ problems modelled by partial differential equations (PDE), we adopt a weighted reduced basis method [7] and develop an adaptive greedy algorithm in combination with the previous verified algorithm for efficient construction of an accurate reduced basis approximation. The efficiency and accuracy of the proposed algorithm are demonstrated by several numerical experiments.

  19. A hybrid wavelet-based adaptive immersed boundary finite-difference lattice Boltzmann method for two-dimensional fluid-structure interaction

    NASA Astrophysics Data System (ADS)

    Cui, Xiongwei; Yao, Xiongliang; Wang, Zhikai; Liu, Minghao

    2017-03-01

    A second-generation wavelet-based adaptive finite-difference lattice Boltzmann method (FD-LBM) is developed in this paper. In this approach, the adaptive wavelet collocation method (AWCM) is, to the best of our knowledge, incorporated into the FD-LBM for the first time. According to a grid refinement criterion based on the wavelet amplitudes of the density distribution functions, an adaptive sparse grid is generated by the omission and addition of collocation points. On the sparse grid, finite differences are used to approximate the derivatives. To eliminate the special treatments needed when using FD-based derivative approximations near boundaries, the immersed boundary method (IBM) is also introduced into the FD-LBM. By using the adaptive technique, the adaptive code requires far fewer grid points than the uniform-mesh code; as a consequence, the computational efficiency is improved. To validate the proposed method, a series of test cases, including fixed-boundary and moving-boundary cases, is investigated. Good agreement between the present results and data reported in previous studies is obtained, which demonstrates the accuracy and effectiveness of the present AWCM-IB-LBM.

  20. Structured adaptive grid generation using algebraic methods

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.

    1993-01-01

    The accuracy of a numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively fine, computationally expensive grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach in which a function containing a measure of grid smoothness, orthogonality and volume variation is minimized by using a variational principle. This approach provides a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm in which the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring the points in large-error regions to attract other points and points in low-error regions to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three-step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial, step is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and last step is to re-evaluate the flow properties with an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration
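
    In one dimension, the equidistribution law underlying the first two steps can be written down in a few lines: a weight built from the solution gradient is integrated, and new grid points are placed at equal increments of that cumulative weight. The sketch below assumes an arc-length-type weight, which is one common choice but not necessarily the paper's.

      # Sketch of 1-D algebraic grid adaptation by equidistribution of a weight w(x).
      import numpy as np

      def equidistribute(x, u, alpha=1.0):
          w = np.sqrt(1.0 + alpha * np.gradient(u, x)**2)      # arc-length-type weight
          W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
          targets = np.linspace(0.0, W[-1], x.size)            # equal weight increments
          return np.interp(targets, W, x)                      # invert the cumulative map

      x = np.linspace(0.0, 1.0, 41)
      u = np.tanh(30.0 * (x - 0.5))                            # steep internal layer
      x_new = equidistribute(x, u)
      print(np.diff(x_new).min(), np.diff(x_new).max())        # cells cluster near x = 0.5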

  1. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using tree-like structure with tree roots starting at a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  2. Parallel adaptive wavelet collocation method for PDEs

    NASA Astrophysics Data System (ADS)

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using tree-like structure with tree roots starting at a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  3. An Atlas-Based Electron Density Mapping Method for Magnetic Resonance Imaging (MRI)-Alone Treatment Planning and Adaptive MRI-Based Prostate Radiation Therapy

    SciTech Connect

    Dowling, Jason A.; Lambert, Jonathan; Parker, Joel; Salvado, Olivier; Fripp, Jurgen; Capp, Anne; Wratten, Chris; Denham, James W.; Greer, Peter B.

    2012-05-01

    Purpose: Prostate radiation therapy dose planning directly on magnetic resonance imaging (MRI) scans would reduce costs and uncertainties due to multimodality image registration. Adaptive planning using a combined MRI-linear accelerator approach will also require dose calculations to be performed using MRI data. The aim of this work was to develop an atlas-based method to map realistic electron densities to MRI scans for dose calculations and digitally reconstructed radiograph (DRR) generation. Methods and Materials: Whole-pelvis MRI and CT scan data were collected from 39 prostate patients. Scans from 2 patients showed significantly different anatomy from that of the remaining patient population, and these patients were excluded. A whole-pelvis MRI atlas was generated based on the manually delineated MRI scans. In addition, a conjugate electron-density atlas was generated from the coregistered computed tomography (CT)-MRI scans. Pseudo-CT scans for each patient were automatically generated by global and nonrigid registration of the MRI atlas to the patient MRI scan, followed by application of the same transformations to the electron-density atlas. Comparisons were made between organ segmentations by using the Dice similarity coefficient (DSC) and point dose calculations for 26 patients on planning CT and pseudo-CT scans. Results: The agreement between pseudo-CT and planning CT was quantified by differences in the point dose at isocenter and distance to agreement in corresponding voxels. Dose differences were found to be less than 2%. Chi-squared values indicated that the planning CT and pseudo-CT dose distributions were equivalent. No significant differences (p > 0.9) were found between CT and pseudo-CT Hounsfield units for organs of interest. Mean ± standard deviation DSC scores for the atlas-based segmentation of the pelvic bones were 0.79 ± 0.12, 0.70 ± 0.14 for the prostate, 0.64 ± 0.16 for the bladder, and 0.63 ± 0.16 for the rectum

  4. Knowledge-based media adaptation

    NASA Astrophysics Data System (ADS)

    Leopold, Klaus; Jannach, Dietmar; Hellwagner, Hermann

    2004-10-01

    This paper introduces the principal approach and describes the basic architecture and current implementation of the knowledge-based multimedia adaptation framework we are currently developing. The framework can be used in Universal Multimedia Access scenarios, where multimedia content has to be adapted to specific usage environment parameters (network and client device capabilities, user preferences). Using knowledge-based techniques (state-space planning), the framework automatically computes an adaptation plan, i.e., a sequence of media conversion operations, to transform the multimedia resources to meet the client's requirements or constraints. The system takes as input standards-compliant descriptions of the content (using MPEG-7 metadata) and of the target usage environment (using MPEG-21 Digital Item Adaptation metadata) to derive start and goal states for the planning process, respectively. Furthermore, declarative descriptions of the conversion operations (such as available via software library functions) enable existing adaptation algorithms to be invoked without requiring programming effort. A running example in the paper illustrates the descriptors and techniques employed by the knowledge-based media adaptation system.
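
    The planning step can be pictured with a toy state-space search: each conversion operation has preconditions and effects on the content description, and a breadth-first search finds an adaptation plan that satisfies the usage-environment constraints. The operation names, properties, and BFS planner below are invented for illustration and are far simpler than an MPEG-7/MPEG-21-driven planner.

      # Toy state-space planner: operations map (preconditions -> effects) over a
      # content-description dict; BFS returns a sequence reaching the goal state.
      from collections import deque

      OPERATIONS = {
          "transcode_to_h264": ({"codec": "mpeg2"}, {"codec": "h264"}),
          "downscale_to_qvga": ({"resolution": "vga"}, {"resolution": "qvga"}),
          "drop_audio":        ({"audio": "yes"}, {"audio": "no"}),
      }

      def applicable(state, pre):
          return all(state.get(k) == v for k, v in pre.items())

      def plan(start, goal):
          queue = deque([(dict(start), [])])
          seen = {tuple(sorted(start.items()))}
          while queue:
              state, steps = queue.popleft()
              if all(state.get(k) == v for k, v in goal.items()):
                  return steps
              for name, (pre, post) in OPERATIONS.items():
                  if applicable(state, pre):
                      nxt = {**state, **post}
                      key = tuple(sorted(nxt.items()))
                      if key not in seen:
                          seen.add(key)
                          queue.append((nxt, steps + [name]))
          return None

      content = {"codec": "mpeg2", "resolution": "vga", "audio": "yes"}
      device  = {"codec": "h264", "resolution": "qvga"}
      print(plan(content, device))   # e.g. ['transcode_to_h264', 'downscale_to_qvga']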

  5. Adaptive envelope protection methods for aircraft

    NASA Astrophysics Data System (ADS)

    Unnikrishnan, Suraj

    Carefree handling refers to the ability of a pilot to operate an aircraft without the need to continuously monitor aircraft operating limits. At the heart of all carefree handling or maneuvering systems, also referred to as envelope protection systems, are algorithms and methods for predicting future limit violations. Recently, the envelope protection methods that have gained more acceptance translate limit proximity information to its equivalent in the control channel. Existing envelope protection algorithms either use a very small prediction horizon or are static methods with no capability to adapt to changes in system configuration. Adaptive approaches maximizing the prediction horizon, such as dynamic trim, are only applicable to steady-state-response critical limit parameters. In this thesis, a new adaptive envelope protection method is developed that is applicable to steady-state and transient response critical limit parameters. The approach is based upon devising the most aggressive optimal control profile to the limit boundary and using it to compute control limits. Pilot-in-the-loop evaluations of the proposed approach are conducted at the Georgia Tech Carefree Maneuver lab for transient longitudinal hub moment limit protection. Carefree maneuvering is the dual of carefree handling in the realm of autonomous Uninhabited Aerial Vehicles (UAVs). Designing a flight control system to fully and effectively utilize the operational flight envelope is very difficult. With the increasing role of and demands for extreme maneuverability there is a need for developing envelope protection methods for autonomous UAVs. In this thesis, a full-authority automatic envelope protection method is proposed for limit protection in UAVs. The approach uses an adaptive estimate of limit parameter dynamics and finite-time horizon predictions to detect impending limit boundary violations. Limit violations are prevented by treating the limit boundary as an obstacle and by correcting nominal control

  6. A new method based on Adaptive Discrete Wavelet Entropy Energy and Neural Network Classifier (ADWEENN) for recognition of urine cells from microscopic images independent of rotation and scaling.

    PubMed

    Avci, Derya; Leblebicioglu, Mehmet Kemal; Poyraz, Mustafa; Dogantekin, Esin

    2014-02-01

    Analysis and classification of urine cell counts has become an important topic for the medical diagnosis of some diseases. Therefore, in this study we propose a new technique based on adaptive discrete wavelet entropy energy and a neural network classifier (ADWEENN) for the recognition of urine cells from microscopic images, independent of rotation and scaling. Digital image processing methods such as noise reduction, contrast enhancement, segmentation, and morphological processing are used in the feature extraction stage of ADWEENN. Image processing and pattern recognition have come into prominence in recent years; image processing encompasses the operation and design of systems that recognize patterns in data sets. In the past, a major difficulty in the classification of microscopic images was the lack of adequate methods to characterize them. Multi-resolution image analysis methods such as Gabor filters and discrete wavelet decompositions have lately been shown to be superior to classic methods for analyzing these microscopic images. The ADWEENN method consists of four stages: preprocessing, feature extraction, classification, and testing. The discrete wavelet transform (DWT) together with adaptive wavelet entropy and energy is used for adaptive feature extraction to strengthen the salient features supplied to the artificial neural network (ANN) classifier. The efficiency of the developed ADWEENN method was tested, showing that an average recognition success of 97.58% was obtained.
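
    The wavelet entropy/energy feature stage can be sketched as follows: a discrete wavelet decomposition of a 1-D signal (standing in for a cell descriptor profile) is summarized by per-level energies and the entropy of their normalized distribution. PyWavelets is assumed available; the wavelet family, decomposition depth, synthetic input, and the absence of the paper's adaptive weighting are all illustrative simplifications.

      # Sketch of wavelet entropy/energy features (assumes the PyWavelets package).
      import numpy as np
      import pywt

      def wavelet_entropy_energy(signal, wavelet="db4", level=4):
          coeffs = pywt.wavedec(signal, wavelet, level=level)   # approximation + detail bands
          energies = np.array([np.sum(c**2) for c in coeffs])   # per-level energy
          p = energies / energies.sum()
          entropy = -np.sum(p * np.log(p + 1e-12))              # entropy of the energy distribution
          return np.concatenate([energies, [entropy]])          # feature vector for the ANN

      rng = np.random.default_rng(4)
      profile = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.2 * rng.normal(size=256)
      print(wavelet_entropy_energy(profile))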

  7. A Small Leak Detection Method Based on VMD Adaptive De-Noising and Ambiguity Correlation Classification Intended for Natural Gas Pipelines.

    PubMed

    Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo

    2016-12-13

    In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using the VMD, and numerous components were obtained. Based on the probability density function (PDF), an adaptive de-noising algorithm built on VMD is proposed for processing the noise components and reconstructing the de-noised components. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small pipeline leaks. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that the proposed method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieves better performance than support vector machine (SVM) and back-propagation neural network (BP) methods.

  8. A Small Leak Detection Method Based on VMD Adaptive De-Noising and Ambiguity Correlation Classification Intended for Natural Gas Pipelines

    PubMed Central

    Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo

    2016-01-01

    In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using the VMD, and numerous components were obtained. Based on the probability density function (PDF), an adaptive de-noising algorithm built on VMD is proposed for processing the noise components and reconstructing the de-noised components. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small pipeline leaks. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that the proposed method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieves better performance than support vector machine (SVM) and back-propagation neural network (BP) methods. PMID:27983577

  9. A composite control method based on the adaptive RBFNN feedback control and the ESO for two-axis inertially stabilized platforms.

    PubMed

    Lei, Xusheng; Zou, Ying; Dong, Fei

    2015-11-01

    Due to the nonlinearity and time variation of a two-axis inertially stabilized platform (ISP) system, conventional feedback control cannot be utilized directly. To realize control performance with fast dynamic response and high stabilization precision, the dynamic model of the ISP system is expected to match an ideal model which satisfies the desired control performance. Therefore, a composite control method based on adaptive radial basis function neural network (RBFNN) feedback control and an extended state observer (ESO) is proposed for the ISP. The adaptive RBFNN is proposed to generate the feedback control parameters online. Based on the state error information gathered during operation, the adaptive RBFNN can be constructed and optimized directly; therefore, no a priori training data are needed for the construction of the RBFNN. Furthermore, a linear second-order ESO is constructed to compensate for the composite disturbance. The asymptotic stability of the proposed control method has been proven by Lyapunov stability theory. The applicability of the proposed method is validated by a series of simulations and flight tests.
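
    The linear second-order ESO mentioned in the abstract can be written down compactly: for a plant y' = f(t) + b*u, the observer state z1 tracks y and z2 tracks the lumped disturbance f, which is then cancelled in the control law. The sketch below uses a toy sinusoidal disturbance, Euler integration, and bandwidth-parameterized gains; none of these values come from the paper's ISP.

      # Sketch of a linear second-order extended state observer (ESO) with
      # disturbance cancellation; gains and the disturbance are toy choices.
      import numpy as np

      dt, b = 0.001, 2.0
      beta1, beta2 = 2 * 100.0, 100.0**2        # observer bandwidth ~ 100 rad/s
      y, z1, z2 = 0.0, 0.0, 0.0
      log = []
      for k in range(5000):
          t = k * dt
          f = 1.5 * np.sin(2 * np.pi * t)       # unknown disturbance to be estimated
          u = -z2 / b                           # cancel the estimated disturbance
          y += dt * (f + b * u)                 # plant integration (Euler)
          e = z1 - y                            # observer update
          z1 += dt * (z2 - beta1 * e + b * u)
          z2 += dt * (-beta2 * e)
          log.append((f, z2))
      print(max(abs(fk - zk) for fk, zk in log[2000:]))   # small residual tracking error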

  10. Ensemble transform sensitivity method for adaptive observations

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Xie, Yuanfu; Wang, Hongli; Chen, Dehui; Toth, Zoltan

    2016-01-01

    The Ensemble Transform (ET) method has been shown to be useful in providing guidance for adaptive observation deployment. It predicts forecast error variance reduction for each possible deployment using its corresponding transformation matrix in an ensemble subspace. In this paper, a new ET-based sensitivity (ETS) method, which calculates the gradient of forecast error variance reduction in terms of analysis error variance reduction, is proposed to specify regions for possible adaptive observations. ETS is a first order approximation of the ET; it requires just one calculation of a transformation matrix, increasing computational efficiency (60%-80% reduction in computational cost). An explicit mathematical formulation of the ETS gradient is derived and described. Both the ET and ETS methods are applied to the Hurricane Irene (2011) case and a heavy rainfall case for comparison. The numerical results imply that the sensitive areas estimated by the ETS and ET are similar. However, ETS is much more efficient, particularly when the resolution is higher and the number of ensemble members is larger.

  11. A new orientation-adaptive interpolation method.

    PubMed

    Wang, Qing; Ward, Rabab Kreidieh

    2007-04-01

    We propose an isophote-oriented, orientation-adaptive interpolation method. The proposed method employs an interpolation kernel that adapts to the local orientation of isophotes, and the pixel values are obtained through an oriented, bilinear interpolation. We show that, by doing so, the curvature of the interpolated isophotes is reduced, and, thus, zigzagging artifacts are largely suppressed. Analysis and experiments show that images interpolated using the proposed method are visually pleasing and almost artifact free.

  12. The Method of Adaptive Comparative Judgement

    ERIC Educational Resources Information Center

    Pollitt, Alastair

    2012-01-01

    Adaptive Comparative Judgement (ACJ) is a modification of Thurstone's method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better.…

  13. An adaptive method with integration of multi-wavelet based features for unsupervised classification of SAR images

    NASA Astrophysics Data System (ADS)

    Chamundeeswari, V. V.; Singh, D.; Singh, K.

    2007-12-01

    In single-band, single-polarized synthetic aperture radar (SAR) images, the information is limited to intensity and texture only, and it is very difficult to interpret such SAR images without any a priori information. For unsupervised classification of SAR images, M-band wavelet decomposition is performed on the SAR image, and sub-band selection on the basis of energy levels is applied to improve the classification results, since sparse representation of sub-bands degrades classification performance. Textural features are then obtained from the selected sub-bands and integrated with intensity features. An adaptive neuro-fuzzy algorithm is used to improve computational efficiency by extracting significant features. K-means classification is performed on the extracted features and land features are labeled. This classification algorithm involves user-defined parameters. To remove this user dependency and to obtain the maximum achievable classification accuracy, an algorithm is developed in this paper that optimizes classification accuracy with respect to the parameters involved in the segmentation process. This is very helpful for developing an automated land-cover monitoring system with SAR, where the optimized parameters need to be identified only once and can then be applied to SAR imagery of the same scene obtained year after year. A single-band, single-polarized SAR image is classified into water, urban and vegetation areas using this method, and an overall classification accuracy in the range of 85.92%-93.70% is obtained by comparison with ground truth data.

  14. A Block-Structured Adaptive Mesh Refinement Technique with a Finite-Difference-Based Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Fakhari, Abbas; Lee, Taehun

    2013-11-01

    A novel adaptive mesh refinement (AMR) algorithm for the numerical solution of fluid flow problems is presented in this study. The proposed AMR algorithm can be used to solve partial differential equations including, but not limited to, the Navier-Stokes equations using an AMR technique. Here, the lattice Boltzmann method (LBM) is employed as a substitute of the nearly incompressible Navier-Stokes equations. Besides its simplicity, the proposed AMR algorithm is straightforward and yet efficient. The idea is to remove the need for a tree-type data structure by using the pointer attributes in a unique way, along with an appropriate adjustment of the child block's IDs, to determine the neighbors of a certain block. Thanks to the unique way of invoking pointers, there is no need to construct a quad-tree (in 2D) or oct-tree (in 3D) data structure for maintaining the connectivity data between different blocks. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with a clean and efficient algorithm that is easier to implement and use on parallel machines. Several benchmark studies are carried out to assess the accuracy and efficiency of the proposed AMR-LBM, including lid-driven cavity flow, vortex shedding past a square cylinder, and Kelvin-Helmholtz instability for single-phase and multiphase fluids.

  15. Developing a new case based computer-aided detection scheme and an adaptive cueing method to improve performance in detecting mammographic lesions

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Zheng, Bin

    2017-01-01

    The purpose of this study is to evaluate a new method to improve performance of computer-aided detection (CAD) schemes of screening mammograms with two approaches. In the first approach, we developed a new case based CAD scheme using a set of optimally selected global mammographic density, texture, spiculation, and structural similarity features computed from all four full-field digital mammography images of the craniocaudal (CC) and mediolateral oblique (MLO) views by using a modified fast and accurate sequential floating forward selection feature selection algorithm. Selected features were then applied to a ‘scoring fusion’ artificial neural network classification scheme to produce a final case based risk score. In the second approach, we combined the case based risk score with the conventional lesion based scores of a conventional lesion based CAD scheme using a new adaptive cueing method that is integrated with the case based risk scores. We evaluated our methods using a ten-fold cross-validation scheme on 924 cases (476 cancer and 448 recalled or negative), whereby each case had all four images from the CC and MLO views. The area under the receiver operating characteristic curve was AUC  =  0.793  ±  0.015 and the odds ratio monotonically increased from 1 to 37.21 as CAD-generated case based detection scores increased. Using the new adaptive cueing method, the region based and case based sensitivities of the conventional CAD scheme at a false positive rate of 0.71 per image increased by 2.4% and 0.8%, respectively. The study demonstrated that supplementary information can be derived by computing global mammographic density image features to improve CAD-cueing performance on the suspicious mammographic lesions.

  16. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
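
    The mechanics of per-block multipliers can be illustrated with a stripped-down encoder: each 8x8 block's DCT coefficients are quantized by the base matrix scaled by a block multiplier, and here the multiplier is simply raised where local contrast suggests masking. The flat quantization matrix, the contrast proxy, and the clipping range are illustrative assumptions, not the perceptual model or the optimization over multipliers described in the abstract.

      # Sketch of adaptive per-block quantization multipliers for 8x8 DCT blocks.
      import numpy as np
      from scipy.fft import dctn, idctn

      Q = np.full((8, 8), 16.0)                          # flat base quantization matrix

      def encode_block(block, m):
          return np.round(dctn(block, norm="ortho") / (Q * m))

      def decode_block(q, m):
          return idctn(q * (Q * m), norm="ortho")

      def block_multiplier(block):
          contrast = block.std() / 128.0                 # crude contrast-masking proxy
          return float(np.clip(1.0 + 4.0 * contrast, 1.0, 4.0))

      rng = np.random.default_rng(5)
      img = np.clip(128 + 40 * rng.normal(size=(64, 64)), 0, 255)
      recon = np.empty_like(img)
      for i in range(0, 64, 8):
          for j in range(0, 64, 8):
              blk = img[i:i+8, j:j+8] - 128.0
              m = block_multiplier(blk)                  # coarser quantization where masking is high
              recon[i:i+8, j:j+8] = decode_block(encode_block(blk, m), m) + 128.0
      print(np.abs(img - recon).mean())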

  17. An adaptive decorrelation method removes Illumina DNA base-calling errors caused by crosstalk between adjacent clusters.

    PubMed

    Wang, Bo; Wan, Lin; Wang, Anqi; Li, Lei M

    2017-02-20

    Base-calling accuracy is crucial for high-throughput DNA sequencing and downstream analysis such as read mapping and genome assembly. Accordingly, we made an endeavor to reduce DNA sequencing errors of Illumina systems by correcting three kinds of crosstalk in the cluster intensity data. We discovered that signal crosstalk between adjacent clusters accounts for a large portion of sequencing errors in Illumina systems, even after correcting color crosstalk caused by the overlap of dye emission spectra and phasing/pre-phasing caused by out-of-step nucleotide synthesis. Interestingly and importantly, spatial crosstalk between adjacent clusters is cluster-specific and often asymmetric, which cannot be corrected by existing deconvolution methods. Therefore, we introduce a novel mathematical method able to estimate and remove spatial crosstalk, thereby reducing base-calling errors by 44-69% at a given mapping rate for Illumina systems. Furthermore, the resolution gained from this work provides new room for higher throughput of DNA sequencing and of general measurement systems using fluorescence-based imaging technology. The resulting base-caller 3Dec is available for academic users at http://github.com/flishwnag/3dec. Not only does it reduce errors by 62.1% compared to the standard pipeline, but its implementation is also fast enough for daily sequencing.

  18. An adaptive decorrelation method removes Illumina DNA base-calling errors caused by crosstalk between adjacent clusters

    PubMed Central

    Wang, Bo; Wan, Lin; Wang, Anqi; Li, Lei M.

    2017-01-01

    Base-calling accuracy is crucial for high-throughput DNA sequencing and downstream analysis such as read mapping and genome assembly. Accordingly, we made an endeavor to reduce DNA sequencing errors of Illumina systems by correcting three kinds of crosstalk in the cluster intensity data. We discovered that signal crosstalk between adjacent clusters accounts for a large portion of sequencing errors in Illumina systems, even after correcting color crosstalk caused by the overlap of dye emission spectra and phasing/pre-phasing caused by out-of-step nucleotide synthesis. Interestingly and importantly, spatial crosstalk between adjacent clusters is cluster-specific and often asymmetric, which cannot be corrected by existing deconvolution methods. Therefore, we introduce a novel mathematical method able to estimate and remove spatial crosstalk, thereby reducing base-calling errors by 44–69% at a given mapping rate for Illumina systems. Furthermore, the resolution gained from this work provides new room for higher throughput of DNA sequencing and of general measurement systems using fluorescence-based imaging technology. The resulting base-caller 3Dec is available for academic users at http://github.com/flishwnag/3dec. Not only does it reduce errors by 62.1% compared to the standard pipeline, but its implementation is also fast enough for daily sequencing. PMID:28216647

  19. Adaptive Methods for Compressible Flow

    DTIC Science & Technology

    1994-03-01

    convergence while requiring little additional storage. In addition, multigrid can be used in conjunction with any convergent base scheme, provided adequate care is taken. Coarse grids can be obtained with the agglomeration technique, although care must be taken to ensure that the coarse grid operator is convergent on these grids; cells that intersect the surface require particular care.

  20. Adaptive filtering for the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Marié, Simon; Gloerfelt, Xavier

    2017-03-01

    In this study, a new selective filtering technique is proposed for the lattice Boltzmann method. This technique is based on an adaptive implementation of the selective filter coefficient σ. The proposed model makes this coefficient dependent on the shear stress in order to restrict the spatial filtering to sheared regions where numerical instabilities may occur. Different parameters are tested on 2D test cases sensitive to numerical stability and on a 3D decaying Taylor-Green vortex. The results are compared to the classical static filtering technique and to the use of a standard subgrid-scale model, and show significant improvements, in particular for low-order filters consistent with the LBM stencil.
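
    The adaptive idea reduces to making the filter strength a local function of shear, as in the 1-D sketch below, where a simple three-point high-pass is scaled by a shear-dependent σ field. The stencil, the linear σ mapping, and the reference shear are illustrative stand-ins for the paper's formulation.

      # Sketch of a shear-adaptive selective filter in 1-D: sigma grows with the
      # local shear, so smoothing acts mainly where gradients are large.
      import numpy as np

      def adaptive_filter(u, sigma_max=0.2, shear_ref=1.0):
          shear = np.abs(np.gradient(u))                    # local shear estimate
          sigma = sigma_max * np.minimum(shear / shear_ref, 1.0)
          highpass = np.zeros_like(u)
          highpass[1:-1] = 0.25 * (-u[:-2] + 2.0 * u[1:-1] - u[2:])
          return u - sigma * highpass                       # filter only sheared regions

      x = np.linspace(0, 2 * np.pi, 200)
      u = np.sin(x) + np.where(np.abs(x - np.pi) < 0.2, 0.3 * np.sin(40 * x), 0.0)
      u_f = adaptive_filter(u, shear_ref=np.abs(np.gradient(u)).max())
      print(np.abs(u - u_f).max())                          # change confined to the sheared zone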

  1. Hybrid Adaptive Ray-Moment Method (HARM2): A highly parallel method for radiation hydrodynamics on adaptive grids

    NASA Astrophysics Data System (ADS)

    Rosen, A. L.; Krumholz, M. R.; Oishi, J. S.; Lee, A. T.; Klein, R. I.

    2017-02-01

    We present a highly-parallel multi-frequency hybrid radiation hydrodynamics algorithm that combines a spatially-adaptive long characteristics method for the radiation field from point sources with a moment method that handles the diffuse radiation field produced by a volume-filling fluid. Our Hybrid Adaptive Ray-Moment Method (HARM2) operates on patch-based adaptive grids, is compatible with asynchronous time stepping, and works with any moment method. In comparison to previous long characteristics methods, we have greatly improved the parallel performance of the adaptive long-characteristics method by developing a new completely asynchronous and non-blocking communication algorithm. As a result of this improvement, our implementation achieves near-perfect scaling up to O(10³) processors on distributed memory machines. We present a series of tests to demonstrate the accuracy and performance of the method.

  2. Comparison of two adaptive temperature-based replica exchange methods applied to a sharp phase transition of protein unfolding-folding.

    PubMed

    Lee, Michael S; Olson, Mark A

    2011-06-28

    Temperature-based replica exchange (T-ReX) enhances sampling of molecular dynamics simulations by autonomously heating and cooling simulation clients via a Metropolis exchange criterion. A pathological case for T-ReX can occur when a change in state (e.g., folding to unfolding of a protein) has a large energetic difference over a short temperature interval leading to insufficient exchanges amongst replica clients near the transition temperature. One solution is to allow the temperature set to dynamically adapt in the temperature space, thereby enriching the population of clients near the transition temperature. In this work, we evaluated two approaches for adapting the temperature set: a method that equalizes exchange rates over all neighbor temperature pairs and a method that attempts to induce clients to visit all temperatures (dubbed "current maximization") by positioning many clients at or near the transition temperature. As a test case, we simulated the 57-residue SH3 domain of alpha-spectrin. Exchange rate equalization yielded the same unfolding-folding transition temperature as fixed-temperature ReX with much smoother convergence of this value. Surprisingly, the current maximization method yielded a significantly lower transition temperature, in close agreement with experimental observation, likely due to more extensive sampling of the transition state.
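
    The two ingredients, the Metropolis swap criterion and on-the-fly adjustment of the temperature ladder, can be sketched with a toy one-dimensional double-well "folding" coordinate as below. The energy function, adaptation gain, and update schedule are illustrative; the sketch implements rate equalization only, not the "current maximization" variant.

      # Sketch of T-ReX with exchange-rate equalization: swaps use
      # exp[(1/T_i - 1/T_j)(E_i - E_j)], and interior temperatures are nudged so
      # that neighbouring pairs accept at similar rates.
      import numpy as np

      rng = np.random.default_rng(6)
      def energy(x):                       # double well, sharp transition between basins
          return 4.0 * (x**2 - 1.0)**2

      T = np.geomspace(0.05, 2.0, 8)       # initial temperature ladder
      x = rng.normal(size=T.size)
      acc = np.zeros(T.size - 1); tries = np.zeros(T.size - 1)

      for sweep in range(20000):
          prop = x + 0.3 * rng.normal(size=x.size)            # per-replica Metropolis move
          dE = energy(prop) - energy(x)
          x = np.where(np.log(rng.random(x.size)) < -dE / T, prop, x)
          i = rng.integers(0, T.size - 1)                     # attempt one neighbour swap
          dlog = (1.0 / T[i] - 1.0 / T[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
          tries[i] += 1
          if np.log(rng.random()) < dlog:
              x[i], x[i + 1] = x[i + 1], x[i]
              acc[i] += 1
          if sweep % 2000 == 1999:                            # adapt interior temperatures
              rates = acc / np.maximum(tries, 1)
              for j in range(1, T.size - 1):
                  T[j] *= np.exp(0.1 * (rates[j - 1] - rates[j]))  # move toward the pair that accepts less
              acc[:] = 0; tries[:] = 0
      print(np.round(T, 3))                                   # adapted temperature ladder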

  3. Adaptive control of a Stewart platform-based manipulator

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Antrazi, Sami S.; Zhou, Zhen-Lei; Campbell, Charles E., Jr.

    1993-01-01

    A joint-space adaptive control scheme for controlling noncompliant motion of a Stewart platform-based manipulator (SPBM) was implemented in the Hardware Real-Time Emulator at Goddard Space Flight Center. The six-degree-of-freedom SPBM uses two platforms and six linear actuators driven by dc motors. The adaptive control scheme is based on proportional-derivative controllers whose gains are adjusted by an adaptation law based on model reference adaptive control and the Lyapunov direct method. It is concluded that the adaptive control scheme provides superior tracking capability as compared to fixed-gain controllers.

  4. Fast coeff_token decoding method and new memory architecture design for an efficient H.264/AVC context-based adaptive variable length coding decoder

    NASA Astrophysics Data System (ADS)

    Moon, Yong Ho; Yoon, Kun Su; Ha, Seok Wun

    2009-12-01

    A fast coeff_token decoding method based on a new memory architecture is proposed to implement an efficient context-based adaptive variable-length coding (CAVLC) decoder. The heavy memory access needed in CAVLC decoding is a significant issue in designing a real system, such as digital multimedia broadcasting players, portable media players, and mobile phones with video, because it results in high power consumption and operational delay. Recently, a new coeff_token variable-length decoding method was suggested to reduce memory accesses. However, it still requires a large portion of the total memory accesses in CAVLC decoding. In this work, an effective memory architecture is designed through careful examination of the codewords in the variable-length code tables. In addition, a novel fast decoding method is proposed to further reduce the memory accesses required for reconstructing the coeff_token element. Only one memory access is used for reconstructing each coeff_token element in the proposed method.

  5. Hysteresis compensation of the piezoelectric ceramic actuators-based tip/tilt mirror with a neural network method in adaptive optics

    NASA Astrophysics Data System (ADS)

    Wang, Chongchong; Wang, Yukun; Hu, Lifa; Wang, Shaoxin; Cao, Zhaoliang; Mu, Quanquan; Li, Dayu; Yang, Chengliang; Xuan, Li

    2016-05-01

    The intrinsic hysteresis nonlinearity of piezo-actuators can severely degrade the positioning accuracy of a tip-tilt mirror (TTM) in an adaptive optics system. This paper focuses on compensating this hysteresis nonlinearity by feed-forward linearization with an inverse hysteresis model. The inverse hysteresis model is based on the classical Preisach model, and a neural network (NN) is used to describe the hysteresis loop. In order to apply it in real-time adaptive correction, an analytical nonlinear function derived from the NN is introduced to compute the inverse hysteresis model output instead of the time-consuming NN simulation process. Experimental results show that the proposed method effectively linearizes the TTM behavior, with the static hysteresis nonlinearity of the TTM reduced from 15.6% to 1.4%. In addition, tip-tilt tracking experiments using the integrator with and without hysteresis compensation are conducted. The wavefront tip-tilt aberration rejection ability of the TTM control system is significantly improved, with the -3 dB error rejection bandwidth increasing from 46 to 62 Hz.

  6. An adaptive spectral/DG method for a reduced phase-space based level set approach to geometrical optics on curved elements

    NASA Astrophysics Data System (ADS)

    Cockburn, Bernardo; Kao, Chiu-Yen; Reitich, Fernando

    2014-02-01

We present an adaptive spectral/discontinuous Galerkin (DG) method on curved elements to simulate high-frequency wavefronts within a reduced phase-space formulation of geometrical optics. Following recent work, the approach is based on the use of level sets defined by functions satisfying the Liouville equations in reduced phase-space and, in particular, it relies on the smoothness of these functions to represent them by rapidly convergent spectral expansions in the phase variables. The resulting (hyperbolic) system of equations for the coefficients in these expansions are then amenable to a high-order accurate treatment via DG approximations. In the present work, we significantly expand on the applicability and efficiency of the approach by incorporating mechanisms that allow for its use in scattering simulations and for a reduced overall computational cost. With regard to the former, we demonstrate that the incorporation of curved elements is necessary to attain any kind of accuracy in calculations that involve scattering off non-flat interfaces. With regard to efficiency, on the other hand, we also show that the level-set formulation allows for a space p-adaptive scheme that under-resolves the level-set functions away from the wavefront without incurring a loss of accuracy in the approximation of its location. As we show, these improvements enable simulations that are beyond the capabilities of previous implementations of these numerical procedures.

  7. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way, by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm; the time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.

  8. Adaptive Method for Nonsmooth Nonnegative Matrix Factorization.

    PubMed

    Yang, Zuyuan; Xiang, Yong; Xie, Kan; Lai, Yue

    2017-04-01

Nonnegative matrix factorization (NMF) is an emerging tool for meaningful low-rank matrix representation. In NMF, explicit constraints are usually required so that NMF generates the desired products (or factorizations), especially when the products have significant sparseness features. It is known that the ability of NMF to learn sparse representations can be improved by embedding a smoothness factor between the products. Motivated by this result, we propose an adaptive nonsmooth NMF (Ans-NMF) method in this paper. In our method, the embedded factor is obtained using a data-related approach, so it matches well with the underlying products, implying superior faithfulness of the representations. In addition, because an adaptive selection scheme is applied to this factor, the sparseness of the products can be constrained separately, leading to wider applicability and interpretability. Furthermore, since the adaptive selection scheme is carried out by solving a series of typical linear programming problems, it can be easily implemented. Simulations using computer-generated data and real-world data show the advantages of the proposed Ans-NMF method over state-of-the-art methods.

  9. Laser Raman detection for oral cancer based on an adaptive Gaussian process classification method with posterior probabilities

    NASA Astrophysics Data System (ADS)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming

    2013-03-01

The existing methods for early and differential diagnosis of oral cancer are limited by unapparent early symptoms and imperfect imaging examination methods. In this paper, classification models of oral adenocarcinoma, carcinoma tissues, and a control group using just four features are established with a hybrid Gaussian process (HGP) classification algorithm that incorporates noise-reduction and posterior-probability mechanisms. HGP shows much better performance in the experimental results. During the experiments, oral tissues were divided into three groups, adenocarcinoma (n = 87), carcinoma (n = 100), and the control group (n = 134), and their spectral data were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. The results indicate that using HGP in laser Raman spectroscopy (LRS) detection analysis for the diagnosis of oral cancer gives accurate results, and the prospects for application are satisfactory.

  10. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, which makes the algorithm suitable for separating a pure ECG signal from noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated from the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated by an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results show that the proposed method performs better in denoising and QRS detection than major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.

  11. Wavelet methods in multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Helin, T.; Yudytskiy, M.

    2013-08-01

Next-generation ground-based telescopes rely heavily on adaptive optics to overcome the limitations imposed by atmospheric turbulence. In future adaptive optics modalities, such as multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on the locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on OCTOPUS, the official end-to-end simulation tool of the European Southern Observatory.

  12. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment and measuring an on-board processing system's sensitivity to that environmental condition. It is then determined whether to reconfigure the fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
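
    The claimed method reduces to a simple measure, assess, reconfigure loop. The following is a minimal sketch of that loop under assumed interfaces; the Processor class, mode names, and threshold are illustrative stand-ins, not the patented implementation.

    ```python
    # Minimal sketch of an environmentally adaptive fault-tolerance loop.
    # All names here (Processor, mode strings, threshold) are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Processor:
        sensitivity: float      # expected upsets per unit of environmental dose
        mode: str = "simplex"   # current fault-tolerance configuration

        def reconfigure(self, mode: str) -> None:
            self.mode = mode    # e.g. switch the redundancy/voting scheme

    def adapt_fault_tolerance(proc: Processor, measured_level: float,
                              threshold: float = 1.0) -> None:
        """Decide whether to reconfigure based on environment x device sensitivity."""
        expected_upset_rate = measured_level * proc.sensitivity
        if expected_upset_rate > threshold and proc.mode != "tmr":
            proc.reconfigure("tmr")      # add redundancy in a harsh environment
        elif expected_upset_rate <= threshold and proc.mode != "simplex":
            proc.reconfigure("simplex")  # relax redundancy to save resources

    cpu = Processor(sensitivity=0.4)
    for level in [0.5, 3.2, 0.8]:        # e.g. passing through a radiation belt
        adapt_fault_tolerance(cpu, level)
        print(level, cpu.mode)
    ```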

  13. Adaptive control of space based robot manipulators

    NASA Technical Reports Server (NTRS)

    Walker, Michael W.; Wee, Liang-Boon

    1991-01-01

For space-based robots in which the base is free to move, motion planning and control is complicated by uncertainties in the inertial properties of the manipulator and its load. A new adaptive control method is presented for space-based robots which achieves globally stable trajectory tracking in the presence of uncertainties in the inertial parameters of the system. The fifteen-degree-of-freedom system dynamics are partitioned into two parts: a nine-degree-of-freedom invertible portion and a six-degree-of-freedom noninvertible portion. The controller is then designed to achieve trajectory tracking of the invertible portion of the system, which consists of the manipulator joint positions and the orientation of the base. The motion of the noninvertible portion is bounded but unpredictable; this portion consists of the position of the robot's base and the position of the reaction wheel.

  14. Testlet-Based Multidimensional Adaptive Testing.

    PubMed

    Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen

    2016-01-01

    Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range.

  16. The model adaptive space shrinkage (MASS) approach: a new method for simultaneous variable selection and outlier detection based on model population analysis.

    PubMed

    Wen, Ming; Deng, Bai-Chuan; Cao, Dong-Sheng; Yun, Yong-Huan; Yang, Rui-Han; Lu, Hong-Mei; Liang, Yi-Zeng

    2016-10-07

Variable selection and outlier detection are important processes in chemical modeling, and they usually affect each other; the order in which they are performed also strongly affects the modeling results. Currently, many studies perform these processes separately and in different orders. In this study, we examined the interaction between outliers and variables and compared modeling procedures performed with different orders of variable selection and outlier detection. Because the order of outlier detection and variable selection can affect the interpretation of the model, it is difficult to decide which order is preferable when the predictabilities (prediction errors) of the different orders are relatively close. To address this problem, a simultaneous variable selection and outlier detection approach called Model Adaptive Space Shrinkage (MASS) was developed. The proposed approach is based on model population analysis (MPA). Through weighted binary matrix sampling (WBMS) from the model space, a large number of partial least squares (PLS) regression models are built, and the elite part of the models is selected to statistically reassign the weight of each variable and sample. The whole process is repeated until the weights of the variables and samples converge. Finally, MASS adaptively finds a high-performance model consisting of the optimized variable subset and sample subset; the combination of these two subsets can be considered the cleaned dataset used for chemical modeling. In the proposed approach, the problem of the order of variable selection and outlier detection is avoided. One near-infrared spectroscopy (NIR) dataset and one quantitative structure-activity relationship (QSAR) dataset were used to test the approach. The results demonstrate that MASS is a useful method for data cleaning before building a predictive model.

  17. Towards in silico oncology: adapting a four dimensional nephroblastoma treatment model to a clinical trial case based on multi-method sensitivity analysis.

    PubMed

    Georgiadi, Eleni Ch; Dionysiou, Dimitra D; Graf, Norbert; Stamatakos, Georgios S

    2012-11-01

In the past decades great progress in cancer research has been made, although medical treatment is still widely based on empirically established protocols that have many limitations. Computational models address such limitations by providing insight into the complex biological mechanisms of tumor progression. A set of clinically oriented, multiscale models of solid tumor dynamics has been developed by the In Silico Oncology Group (ISOG), Institute of Communication and Computer Systems (ICCS)-National Technical University of Athens (NTUA), to study cancer growth and response to treatment. Within this context, using certain representative parameter values, tumor growth and response have been modeled under a preoperative chemotherapy protocol in the framework of the SIOP 2001/GPOH clinical trial. A thorough cross-method sensitivity analysis of the model has been performed. Based on the sensitivity analysis results, a reasonable adaptation of the model parameter values to a real clinical case of bilateral nephroblastomatosis has been achieved. The analysis presented supports the potential of the model for the study, and eventually the future design, of personalized treatment schemes and/or schedules using data obtained from in vitro experiments and clinical studies.

  18. A Novel Method for Predicting Late Genitourinary Toxicity After Prostate Radiation Therapy and the Need for Age-Based Risk-Adapted Dose Constraints

    SciTech Connect

    Ahmed, Awad A.; Egleston, Brian; Alcantara, Pino; Li, Linna; Pollack, Alan; Horwitz, Eric M.; Buyyounouski, Mark K.

    2013-07-15

    Background: There are no well-established normal tissue sparing dose–volume histogram (DVH) criteria that limit the risk of urinary toxicity from prostate radiation therapy (RT). The aim of this study was to determine which criteria predict late toxicity among various DVH parameters when contouring the entire solid bladder and its contents versus the bladder wall. The area under the histogram curve (AUHC) was also analyzed. Methods and Materials: From 1993 to 2000, 503 men with prostate cancer received 3-dimensional conformal RT (median follow-up time, 71 months). The whole bladder and the bladder wall were contoured in all patients. The primary endpoint was grade ≥2 genitourinary (GU) toxicity occurring ≥3 months after completion of RT. Cox regressions of time to grade ≥2 toxicity were estimated separately for the entire bladder and bladder wall. Concordance probability estimates (CPE) assessed model discriminative ability. Before training the models, an external random test group of 100 men was set aside for testing. Separate analyses were performed based on the mean age (≤ 68 vs >68 years). Results: Age, pretreatment urinary symptoms, mean dose (entire bladder and bladder wall), and AUHC (entire bladder and bladder wall) were significant (P<.05) in multivariable analysis. Overall, bladder wall CPE values were higher than solid bladder values. The AUHC for bladder wall provided the greatest discrimination for late bladder toxicity when compared with alternative DVH points, with CPE values of 0.68 for age ≤68 years and 0.81 for age >68 years. Conclusion: The AUHC method based on bladder wall volumes was superior for predicting late GU toxicity. Age >68 years was associated with late grade ≥2 GU toxicity, which suggests that risk-adapted dose constraints based on age should be explored.

  19. Predictor-Based Model Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Lavretsky, Eugene; Gadient, Ross; Gregory, Irene M.

    2009-01-01

This paper is devoted to robust, Predictor-based Model Reference Adaptive Control (PMRAC) design. The proposed adaptive system is compared with the now-classical Model Reference Adaptive Control (MRAC) architecture. Simulation examples are presented. Numerical evidence indicates that the proposed PMRAC tracking architecture has better transient characteristics than MRAC. We present a state-predictor-based direct adaptive tracking design methodology for multi-input dynamical systems with partially known dynamics. Efficiency of the design is demonstrated using the short-period dynamics of an aircraft. Formal proof of the reported PMRAC benefits constitutes future research and will be reported elsewhere.

  20. An improved adaptive IHS method for image fusion

    NASA Astrophysics Data System (ADS)

    Wang, Ting

    2015-12-01

An improved adaptive intensity-hue-saturation (IHS) method for image fusion is proposed in this paper, building on the adaptive IHS (AIHS) method and its improved variant (IAIHS). In the improved method, the weighting matrix, which decides how much spatial detail from the panchromatic (Pan) image should be injected into the multispectral (MS) image, is defined on the basis of the linear relationship between the edges of the Pan and MS images. At the same time, a modulation parameter t is used to balance the spatial and spectral resolution of the fused image. Experiments showed that the improved method improves spectral quality while maintaining spatial resolution compared with the AIHS and IAIHS methods.
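
    As a rough illustration of the detail-injection idea (not the authors' exact AIHS/IAIHS weighting), the sketch below computes an intensity component from the upsampled MS bands and injects edge-weighted Pan detail, with t as the modulation parameter; the exponential edge weight and all constants are assumptions.

    ```python
    import numpy as np

    def adaptive_ihs_fuse(ms, pan, t=1.0, lam=1e-2):
        """IHS-style pan-sharpening sketch.

        ms  : (H, W, B) multispectral image upsampled to the Pan grid (float)
        pan : (H, W)    panchromatic image (float)
        t   : modulation parameter balancing spatial vs. spectral quality
        lam : controls how strongly Pan edges gate the injected detail (assumed form)
        """
        intensity = ms.mean(axis=2)                    # simple intensity component
        gy, gx = np.gradient(pan)
        edge = np.hypot(gx, gy)
        w = np.exp(-lam / (edge + 1e-12))              # inject more detail where edges are strong
        detail = t * w * (pan - intensity)
        return ms + detail[..., None]                  # same detail added to each band

    # Usage with stand-in data
    ms = np.random.rand(64, 64, 4)
    pan = np.random.rand(64, 64)
    fused = adaptive_ihs_fuse(ms, pan, t=0.8)
    ```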

  1. Workshop on adaptive grid methods for fusion plasmas

    SciTech Connect

    Wiley, J.C.

    1995-07-01

The author describes a general hp finite element method with adaptive grids. The code was based on the work of Oden, et al. The term hp refers to the method of spatial refinement (h) in conjunction with the order of the polynomials used as part of the finite element discretization (p). This finite element code appears to handle well the different mesh grid sizes occurring between abutted grids with different resolutions.

  2. Adaptive method with intercessory feedback control for an intelligent agent

    DOEpatents

    Goldsmith, Steven Y.

    2004-06-22

    An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.

  3. Adaptive Accommodation Control Method for Complex Assembly

    NASA Astrophysics Data System (ADS)

    Kang, Sungchul; Kim, Munsang; Park, Shinsuk

Robotic systems have been used to automate assembly tasks in manufacturing and in teleoperation. Conventional robotic systems, however, have been ineffective in controlling contact force in the multiple contact states of complex assembly that involves interactions between complex-shaped parts. Unlike robots, humans excel at complex assembly tasks by utilizing their intrinsic impedance, force and torque sensation, and tactile contact cues. By examining human behavior in assembling complex parts, this study proposes a novel geometry-independent control method for robotic assembly using an adaptive accommodation (or damping) algorithm. Two important conditions for complex assembly, target approachability and bounded contact force, can be met by the proposed control scheme. It generates target-approachable motion that leads the object closer to a desired target position while contact force is kept under a predetermined value. Experimental results from complex assembly tests have confirmed the feasibility and applicability of the proposed method.

  4. Adaptive control based on retrospective cost optimization

    NASA Astrophysics Data System (ADS)

    Santillo, Mario A.

    This dissertation studies adaptive control of multi-input, multi-output, linear, time-invariant, discrete-time systems that are possibly unstable and nonminimum phase. We consider both gradient-based adaptive control as well as retrospective-cost-based adaptive control. Retrospective cost optimization is a measure of performance at the current time based on a past window of data and without assumptions about the command or disturbance signals. In particular, retrospective cost optimization acts as an inner loop to the adaptive control algorithm by modifying the performance variables based on the difference between the actual past control inputs and the recomputed past control inputs based on the current control law. We develop adaptive control algorithms that are effective for systems that are nonminimum phase. We consider discrete-time adaptive control since these control laws can be implemented directly in embedded code without requiring an intermediate discretization step. Furthermore, the adaptive controllers in this dissertation are developed under minimal modeling assumptions. In particular, the adaptive controllers require knowledge of the sign of the high-frequency gain and a sufficient number of Markov parameters to approximate the nonminimum-phase zeros (if any). No additional modeling information is necessary. The adaptive controllers presented in this dissertation are developed for full-state-feedback stabilization, static-output-feedback stabilization, as well as dynamic compensation for stabilization, command following, disturbance rejection, and model reference adaptive control. Lyapunov-based stability and convergence proofs are provided for special cases. We present numerical examples to illustrate the algorithms' effectiveness in handling systems that are unstable and/or nonminimum phase and to provide insight into the modeling information required for controller implementation.

  5. Adapting implicit methods to parallel processors

    SciTech Connect

    Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D.

    1994-12-31

When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice (e.g., larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed-memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems, where it is common to distribute the grid points of the computational domain over the processors so as to maintain a relatively even workload per processor. This creates a problem at the locations in the domain where adjacent points are not on the same processor: for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this results in idle processors during part of the computation, and as the number of idle processors increases, the effective speed improvement from using a parallel processor decreases.

  6. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo

    2014-04-15

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
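
    A toy sketch of the filter-then-recalibrate idea in the claim is shown below; the quality criterion (finite, in-range values) and the model's "learned scope" (a mean/std envelope) are assumptions chosen only for illustration.

    ```python
    import numpy as np

    def filter_good_data(values, lo, hi):
        """Keep only samples that satisfy a simple data-quality criterion."""
        values = np.asarray(values, dtype=float)
        good = np.isfinite(values) & (values >= lo) & (values <= hi)
        return values[good]

    class NormalOperationModel:
        """Toy 'learned scope of normal operation': a mean/std envelope."""
        def __init__(self, mean, std):
            self.mean, self.std = mean, std

        def recalibrate(self, good_values, blend=0.2):
            # Blend statistics of the newly accepted data into the learned envelope.
            self.mean = (1 - blend) * self.mean + blend * good_values.mean()
            self.std = (1 - blend) * self.std + blend * good_values.std()

    raw = [20.1, 19.8, 999.0, 20.5, float("nan"), 21.0]   # sensor stream with bad samples
    good = filter_good_data(raw, lo=0.0, hi=100.0)
    model = NormalOperationModel(mean=20.0, std=0.5)
    model.recalibrate(good)
    print(model.mean, model.std)
    ```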

  7. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M

    2014-11-18

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  8. An Adaptive Cross-Architecture Combination Method for Graph Traversal

    SciTech Connect

    You, Yang; Song, Shuaiwen; Kerbyson, Darren J.

    2014-06-18

    Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
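
    To make the top-down/bottom-up combination concrete, here is a compact sketch of a direction-optimizing BFS; the switching rule shown is a simple frontier-size fraction, standing in for the regression-based predictor described in the abstract.

    ```python
    def hybrid_bfs(adj, source, alpha=0.05):
        """BFS that switches between top-down and bottom-up sweeps.

        adj   : list of adjacency lists
        alpha : frontier-size fraction that triggers the bottom-up sweep
                (a stand-in for the paper's regression-based switch point)
        """
        n = len(adj)
        parent = [-1] * n
        parent[source] = source
        frontier = [source]
        while frontier:
            if len(frontier) > alpha * n:
                # Bottom-up: every unvisited vertex looks for a parent in the frontier.
                in_frontier = [False] * n
                for v in frontier:
                    in_frontier[v] = True
                next_frontier = []
                for v in range(n):
                    if parent[v] == -1:
                        for u in adj[v]:
                            if in_frontier[u]:
                                parent[v] = u
                                next_frontier.append(v)
                                break
            else:
                # Top-down: frontier vertices push to their unvisited neighbours.
                next_frontier = []
                for u in frontier:
                    for v in adj[u]:
                        if parent[v] == -1:
                            parent[v] = u
                            next_frontier.append(v)
            frontier = next_frontier
        return parent

    # Small undirected example graph
    adj = [[1, 2], [0, 3], [0, 3], [1, 2, 4], [3]]
    print(hybrid_bfs(adj, 0))
    ```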

  9. Gradient-based adaptation of continuous dynamic model structures

    NASA Astrophysics Data System (ADS)

    La Cava, William G.; Danai, Kourosh

    2016-01-01

    A gradient-based method of symbolic adaptation is introduced for a class of continuous dynamic models. The proposed model structure adaptation method starts with the first-principles model of the system and adapts its structure after adjusting its individual components in symbolic form. A key contribution of this work is its introduction of the model's parameter sensitivity as the measure of symbolic changes to the model. This measure, which is essential to defining the structural sensitivity of the model, not only accommodates algebraic evaluation of candidate models in lieu of more computationally expensive simulation-based evaluation, but also makes possible the implementation of gradient-based optimisation in symbolic adaptation. The proposed method is applied to models of several virtual and real-world systems that demonstrate its potential utility.

  10. A feature extraction method of the particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization for Brillouin scattering spectra

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Zhao, Yu; Fu, Xinghu; Xu, Jinrui

    2016-10-01

A novel particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization is proposed for extracting the features of Brillouin scattering spectra. First, an adaptive inertia weight parameter for the velocity update is introduced into the basic particle swarm algorithm. Based on the current iteration number and the fitness value of each particle, the algorithm changes the weight coefficient and adjusts the speed at which particles search the space, enhancing the local optimization ability. Second, a logical self-mapping chaotic search is carried out within the particle swarm optimization algorithm, which helps the algorithm jump out of local optima. The novel algorithm is compared with the finite element analysis-Levenberg Marquardt algorithm, the particle swarm optimization-Levenberg Marquardt algorithm, and the basic particle swarm optimization algorithm by varying the linewidth, the signal-to-noise ratio (SNR), and the linear weight ratio of the Brillouin scattering spectra. The algorithm is then applied to feature extraction of Brillouin scattering spectra at different temperatures. The simulation analysis and experimental results show that this algorithm achieves a high fitting degree and a small Brillouin frequency shift error for different linewidths, SNRs, and linear weight ratios. Therefore, this algorithm can be applied to distributed optical fiber sensing systems based on Brillouin optical time domain reflectometry, effectively improving the accuracy of Brillouin frequency shift extraction.
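
    The sketch below illustrates the two ingredients named in the abstract, an adaptive inertia weight and a chaotic search around the global best, on a stand-in objective; the specific weight rule, the logistic-map perturbation, and all constants are assumptions rather than the authors' exact formulation.

    ```python
    import numpy as np

    def adaptive_chaos_pso(fitness, dim, bounds, n_particles=30, iters=200,
                           w_min=0.4, w_max=0.9, c1=2.0, c2=2.0, seed=0):
        """PSO with a fitness-adaptive inertia weight and a chaotic search step."""
        lo, hi = bounds
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros((n_particles, dim))
        pbest = x.copy()
        pbest_f = np.array([fitness(p) for p in x])
        g_idx = int(np.argmin(pbest_f))
        g, g_f = pbest[g_idx].copy(), pbest_f[g_idx]

        for it in range(iters):
            f = np.array([fitness(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            if pbest_f.min() < g_f:
                g_idx = int(np.argmin(pbest_f))
                g, g_f = pbest[g_idx].copy(), pbest_f[g_idx]

            # Adaptive inertia: particles already doing well explore less.
            f_avg, f_min = f.mean(), f.min()
            w = np.where(f <= f_avg,
                         w_min + (w_max - w_min) * (f - f_min) / (f_avg - f_min + 1e-12),
                         w_max)

            r1 = rng.random((n_particles, dim))
            r2 = rng.random((n_particles, dim))
            v = w[:, None] * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)

            # Chaotic local search (logistic map) around the global best.
            z = rng.random(dim)
            for _ in range(10):
                z = 4.0 * z * (1.0 - z)
                cand = np.clip(g + 0.1 * (hi - lo) * (z - 0.5), lo, hi)
                c_f = fitness(cand)
                if c_f < g_f:
                    g, g_f = cand.copy(), c_f
        return g, g_f

    # Stand-in objective (a quadratic bowl), not a Brillouin spectrum fit.
    best, best_f = adaptive_chaos_pso(lambda p: float(np.sum((p - 1.5) ** 2)),
                                      dim=3, bounds=(-5.0, 5.0))
    ```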

  11. Adaptable state based control system

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Dvorak, Daniel L. (Inventor); Gostelow, Kim P. (Inventor); Starbird, Thomas W. (Inventor); Gat, Erann (Inventor); Chien, Steve Ankuo (Inventor); Keller, Robert M. (Inventor)

    2004-01-01

An autonomous controller, comprising a state knowledge manager, a control executor, hardware proxies, and a statistical estimator, collaborates with a goal elaborator with which it shares common models of the behavior of the system and the controller. The elaborator uses the common models to generate, from temporally indeterminate sets of goals, executable goals to be executed by the controller. The controller may be updated to operate in a different system or environment than that for which it was originally designed by replacing the shared statistical models and by instantiating a new set of state variable objects derived from a state variable class. The adaptation of the controller does not require substantial modification of the goal elaborator for its application to the new system or environment.

  12. Advanced numerical methods in mesh generation and mesh adaptation

    SciTech Connect

    Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

    2010-01-01

Numerical solution of partial differential equations requires appropriate meshes, efficient solvers, and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task, and the task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to build simplicial meshes efficiently. First, the advancing front technique (AFT) is combined with incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an inaccessible CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology and improves the mesh quality significantly. The MBA method is also used to adapt the mesh to a problem solution in order to minimize the computational resources required for solving the problem, and it has a solid theoretical background. In the first two experiments, we consider convection-diffusion and elasticity problems and demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive, strongly anisotropic meshes. The key element of the MBA method is the construction of a tensor metric from hierarchical edge-based error estimates.

  13. Autonomous Organization-Based Adaptive Information Systems

    DTIC Science & Technology

    2005-01-01

intentional Multi-agent System (MAS) approach [10]. While these approaches are functional AIS systems, they lack the ability to reorganize and adapt... extended a multi-agent system with a self-reorganizing architecture to create an autonomous, adaptive information system. Design: Our organization-based... goals. An advantage of a multi-agent system using the organization-theoretic model is its extensibility. The practical, numerical limits to the

  14. Adaptive L₁/₂ shooting regularization method for survival analysis using gene expression data.

    PubMed

    Liu, Xiao-Ying; Liang, Yong; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak

    2013-01-01

A new adaptive L₁/₂ shooting regularization method for variable selection based on the Cox proportional hazards model is proposed. The adaptive L₁/₂ shooting algorithm is obtained by optimizing a reweighted iterative series of L₁ penalties together with a shooting strategy for the L₁/₂ penalty. Simulation results based on high-dimensional artificial data show that the adaptive L₁/₂ shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. Results from a real gene expression dataset (DLBCL) also indicate that the L₁/₂ regularization method performs competitively.
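
    A rough sketch of the reweighted shooting idea follows, illustrated on a least-squares loss rather than the Cox partial likelihood used in the paper; the L₁/₂ penalty is handled by iteratively reweighting an L₁ "shooting" (coordinate-descent) solver, and the weight formula and constants are assumptions.

    ```python
    import numpy as np

    def soft_threshold(z, gamma):
        return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

    def shooting_lasso(X, y, weights, n_iter=100):
        """Coordinate-descent ('shooting') solver for
        0.5*||y - X b||^2 + sum_j weights[j] * |b_j|."""
        n, p = X.shape
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        col_sq = (X ** 2).sum(axis=0)
        for _ in range(n_iter):
            for j in range(p):
                r_j = y - X @ b + X[:, j] * b[j]        # residual excluding feature j
                b[j] = soft_threshold(X[:, j] @ r_j, weights[j]) / (col_sq[j] + 1e-12)
        return b

    def adaptive_l_half_shooting(X, y, lam=1.0, rounds=5, eps=1e-3):
        """Approximate an L1/2 penalty via iteratively reweighted L1 shooting:
        weight_j = lam / (2*sqrt(|b_j|) + eps)."""
        b = np.linalg.lstsq(X, y, rcond=None)[0]        # initial estimate
        for _ in range(rounds):
            w = lam / (2.0 * np.sqrt(np.abs(b)) + eps)
            b = shooting_lasso(X, y, w)
        return b

    # Toy data: only the first two coefficients are truly nonzero.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((80, 10))
    y = X[:, 0] * 2.0 - X[:, 1] * 1.5 + 0.1 * rng.standard_normal(80)
    print(np.round(adaptive_l_half_shooting(X, y, lam=2.0), 2))
    ```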

  15. A simplified self-adaptive grid method, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, C.; Venkatapathy, E.

    1989-01-01

    The formulation of the Self-Adaptive Grid Evolution (SAGE) code, based on the work of Nakahashi and Deiwert, is described in the first section of this document. The second section is presented in the form of a user guide which explains the input and execution of the code, and provides many examples. Application of the SAGE code, by Ames Research Center and by others, in the solution of various flow problems has been an indication of the code's general utility and success. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for single, zonal, and multiple grids. Modifications to the methodology and the simplified input options make this current version a flexible and user-friendly code.

  16. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. The advantages and drawbacks of the classical adaptive algorithms (Capon, MUSIC, and Johnson) and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. Results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on the classical or fast projection algorithms is analyzed; it estimates the background using median filtering or the method of bilateral spatial contrast.
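
    The background-normalization step mentioned at the end can be sketched very simply: estimate the slowly varying background of a beam-power profile with a median filter and flag samples that exceed it by a margin. The window length and threshold below are illustrative, and this is not the authors' full processing chain.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def detect_peaks(power_db, bg_window=25, threshold_db=6.0):
        """Flag samples whose power exceeds a median-filtered background estimate."""
        background = median_filter(power_db, size=bg_window, mode="nearest")
        excess = power_db - background
        return np.flatnonzero(excess > threshold_db), excess

    # Toy example: two strong arrivals on a slowly varying noise floor.
    bearing = np.linspace(-90.0, 90.0, 181)
    floor = 10.0 + 3.0 * np.cos(np.radians(bearing))
    power = floor + np.random.default_rng(0).normal(0.0, 0.5, bearing.size)
    power[60] += 12.0
    power[130] += 9.0
    hits, _ = detect_peaks(power)
    print(bearing[hits])
    ```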

  17. Adaptive numerical methods for partial differential equations

    SciTech Connect

    Cololla, P.

    1995-07-01

This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.

  18. Space-time adaptive numerical methods for geophysical applications.

    PubMed

    Castro, C E; Käser, M; Toro, E F

    2009-11-28

    In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen locally adaptive such that the solution is evolved explicitly in time by an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost.

  19. An adaptive penalty method for DIRECT algorithm in engineering optimization

    NASA Astrophysics Data System (ADS)

    Vilaça, Rita; Rocha, Ana Maria A. C.

    2012-09-01

The most common approach for solving constrained optimization problems is based on penalty functions, where the constrained problem is transformed into a sequence of unconstrained problems by penalizing the objective function when constraints are violated. In this paper, we analyze the implementation of an adaptive penalty method within the DIRECT algorithm, in which the constraints that are more difficult to satisfy receive relatively higher penalty values. In order to assess the applicability and performance of the proposed method, some benchmark problems from engineering design optimization are considered.
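
    The idea of raising the penalty only on constraints that keep being violated can be sketched as an outer loop around any bound-constrained global solver; here differential evolution stands in for DIRECT, and the growth factor and tolerances are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def adaptive_penalty_minimize(f, constraints, bounds, rounds=8, grow=5.0):
        """Adaptive penalty sketch: constraints that stay violated get larger multipliers.

        f           : objective, f(x) -> float
        constraints : list of functions g_i with the convention g_i(x) <= 0
        bounds      : list of (lo, hi) per variable
        """
        mu = np.ones(len(constraints))
        x_best = None
        for _ in range(rounds):
            def penalized(x):
                viol = np.array([max(0.0, g(x)) for g in constraints])
                return f(x) + float(mu @ viol ** 2)
            res = differential_evolution(penalized, bounds, seed=0, tol=1e-6)
            x_best = res.x
            viol = np.array([max(0.0, g(x_best)) for g in constraints])
            if np.all(viol < 1e-6):
                break
            mu[viol >= 1e-6] *= grow        # penalize only the troublesome constraints
        return x_best

    # Toy problem: minimize (x-2)^2 + (y-2)^2 subject to x + y <= 3.
    sol = adaptive_penalty_minimize(
        lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2,
        [lambda x: x[0] + x[1] - 3.0],
        bounds=[(-5, 5), (-5, 5)],
    )
    print(np.round(sol, 3))   # expected near (1.5, 1.5)
    ```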

  20. Grid adaptation and remapping for arbitrary lagrangian eulerian (ALE) methods

    SciTech Connect

    Lapenta, G. M.

    2002-01-01

Methods to include automatic grid adaptation tools within the Arbitrary Lagrangian Eulerian (ALE) method are described. Two main developments are presented. First, a new grid adaptation approach is described, based on an automatic and accurate estimate of the local truncation error. Second, a new method to remap the information between two grids is presented, based on the MPDATA approach. The ALE method solves hyperbolic equations by splitting the operators into two phases. First, in the Lagrangian phase, the equations under consideration are written in a Lagrangian frame and discretized; the grid moves with the solution, the velocity of each node being the local fluid velocity. Second, in the Eulerian phase, a new grid is generated and the information is transferred to the new grid. The advantage of this second step is the possibility of avoiding the mesh distortion and tangling typical of pure Lagrangian methods. The Eulerian phase of the ALE method is the primary topic of the present communication. In the Eulerian phase two tasks need to be completed: first, a new grid needs to be created (referred to here as rezoning); second, the information is transferred from the grid available at the end of the Lagrangian phase to the new grid (referred to here as remapping). New techniques are presented for these two tasks of the Eulerian phase, rezoning and remapping.

  1. Developing new online calibration methods for multidimensional computerized adaptive testing.

    PubMed

    Chen, Ping; Wang, Chun; Xin, Tao; Chang, Hua-Hua

    2017-02-01

    Multidimensional computerized adaptive testing (MCAT) has received increasing attention over the past few years in educational measurement. Like all other formats of CAT, item replenishment is an essential part of MCAT for its item bank maintenance and management, which governs retiring overexposed or obsolete items over time and replacing them with new ones. Moreover, calibration precision of the new items will directly affect the estimation accuracy of examinees' ability vectors. In unidimensional CAT (UCAT) and cognitive diagnostic CAT, online calibration techniques have been developed to effectively calibrate new items. However, there has been very little discussion of online calibration in MCAT in the literature. Thus, this paper proposes new online calibration methods for MCAT based upon some popular methods used in UCAT. Three representative methods, Method A, the 'one EM cycle' method and the 'multiple EM cycles' method, are generalized to MCAT. Three simulation studies were conducted to compare the three new methods by manipulating three factors (test length, item bank design, and level of correlation between coordinate dimensions). The results showed that all the new methods were able to recover the item parameters accurately, and the adaptive online calibration designs showed some improvements compared to the random design under most conditions.

  2. Adaptation of fast marching methods to intracellular signaling

    NASA Astrophysics Data System (ADS)

    Chikando, Aristide C.; Kinser, Jason M.

    2006-02-01

    Imaging of signaling phenomena within the intracellular domain is a well studied field. Signaling is the process by which all living cells communicate with their environment and with each other. In the case of signaling calcium waves, numerous computational models based on solving homogeneous reaction diffusion equations have been developed. Typically, the reaction diffusion approach consists of solving systems of partial differential equations at each update step. The traditional methods used to solve these reaction diffusion equations are very computationally expensive since they must employ small time steps in order to reduce the computational error. The presented research suggests the application of fast marching methods to imaging signaling calcium waves, more specifically fertilization calcium waves, in Xenopus laevis eggs. The fast marching approach provides fast and efficient means of tracking the evolution of monotonically advancing fronts. A model that employs biophysical properties of intracellular calcium signaling, and adapts fast marching methods to tracking the propagation of signaling calcium waves is presented. The developed model is used to reproduce simulation results obtained with reaction diffusion based model. Results obtained with our model agree with both the results obtained with reaction diffusion based models, and confocal microscopy observations during in vivo experiments. The adaptation of fast marching methods to intracellular protein or macromolecule trafficking is also briefly explored.

  3. Block-based adaptive lifting schemes for multiband image compression

    NASA Astrophysics Data System (ADS)

    Masmoudi, Hela; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe

    2004-02-01

    In this paper, we are interested in designing lifting schemes adapted to the statistics of the wavelet coefficients of multiband images for compression applications. More precisely, nonseparable vector lifting schemes are used in order to capture simultaneously the spatial and the spectral redundancies. The underlying operators are then computed in order to minimize the entropy of the resulting multiresolution representation. To this respect, we have developed a new iterative block-based classification algorithm. Simulation tests carried out on remotely sensed multispectral images indicate that a substantial gain in terms of bit-rate is achieved by the proposed adaptive coding method w.r.t the non-adaptive one.
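
    For readers unfamiliar with lifting, the following sketch shows a single scalar predict/update lifting step on a 1-D signal with periodic boundaries; the paper's scheme is nonseparable, vector-valued, and adapted block-by-block, which this illustration does not attempt.

    ```python
    import numpy as np

    def lifting_forward(x):
        """One level of a linear-prediction lifting step (periodic boundaries)."""
        even, odd = x[0::2].astype(float), x[1::2].astype(float)
        # Predict: estimate each odd sample from its even neighbours; keep the residual.
        detail = odd - 0.5 * (even + np.roll(even, -1))
        # Update: adjust the even samples using the residuals.
        approx = even + 0.25 * (detail + np.roll(detail, 1))
        return approx, detail

    def lifting_inverse(approx, detail):
        # Reverse the update, then reverse the prediction.
        even = approx - 0.25 * (detail + np.roll(detail, 1))
        odd = detail + 0.5 * (even + np.roll(even, -1))
        x = np.empty(even.size + odd.size)
        x[0::2], x[1::2] = even, odd
        return x

    x = np.arange(16, dtype=float)
    a, d = lifting_forward(x)
    print(np.allclose(lifting_inverse(a, d), x))   # perfect reconstruction
    ```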

  4. A novel adaptive noise filtering method for SAR images

    NASA Astrophysics Data System (ADS)

    Li, Weibin; He, Mingyi

    2009-08-01

In most application situations, a signal or image is corrupted by additive noise. As a result there are many methods to remove additive noise, while few approaches work well for multiplicative noise. This paper presents an improved MAP-based filter for multiplicative noise using an adaptive-window denoising technique. A Gamma noise model is discussed, and a preprocessing technique to distinguish matured from un-matured pixels is applied to obtain an accurate estimate of the Equivalent Number of Looks. Adaptive local window growth and three different denoising strategies are then applied to smooth noise while preserving subtle information according to the local statistical features. Simulation results show that the performance is better than that of existing filters, and several image experiments demonstrate its theoretical performance.
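
    As a point of reference for the local-statistics idea (not the exact MAP filter with adaptive window growth proposed in the paper), a classic Lee-type despeckling filter with a fixed window can be written as follows; the ENL value and window size are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(img, window=7, enl=4.0):
        """Local-statistics despeckling for multiplicative (speckle) noise."""
        img = img.astype(float)
        mean = uniform_filter(img, window)
        mean_sq = uniform_filter(img ** 2, window)
        var = np.maximum(mean_sq - mean ** 2, 0.0)
        cv2_noise = 1.0 / enl                      # squared coefficient of variation of speckle
        cv2_local = var / (mean ** 2 + 1e-12)      # squared local coefficient of variation
        k = np.clip(1.0 - cv2_noise / (cv2_local + 1e-12), 0.0, 1.0)
        return mean + k * (img - mean)             # smooth flat areas, keep detail at edges

    # Toy speckled image: unit-mean Gamma speckle (4 looks) on a constant scene.
    rng = np.random.default_rng(0)
    speckled = 10.0 * rng.gamma(shape=4.0, scale=1.0 / 4.0, size=(64, 64))
    filtered = lee_filter(speckled, window=7, enl=4.0)
    ```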

  5. QPSO-based adaptive DNA computing algorithm.

    PubMed

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing that is a new computation model based on DNA molecules for information storage has been increasingly used for optimization and data analysis in recent years. However, DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for improvement of DNA computing is proposed. This new approach aims to perform DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). Some contributions provided by the proposed QPSO based on adaptive DNA computing algorithm are as follows: (1) parameters of population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function of DNA computing algorithm are simultaneously tuned for adaptive process, (2) adaptive algorithm is performed using QPSO algorithm for goal-driven progress, faster operation, and flexibility in data, and (3) numerical realization of DNA computing algorithm with proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach with comparative results. Experimental results obtained with Matlab and FPGA demonstrate ability to provide effective optimization, considerable convergence speed, and high accuracy according to DNA computing algorithm.

  6. A novel adaptive force control method for IPMC manipulation

    NASA Astrophysics Data System (ADS)

    Hao, Lina; Sun, Zhiyong; Li, Zhi; Su, Yunquan; Gao, Jianchao

    2012-07-01

IPMC is a type of electro-active polymer material, also called artificial muscle, which can generate a relatively large deformation under a relatively low input voltage (generally less than 5 V) and can operate in a water environment. Due to these advantages, IPMC can be used in many fields such as biomimetics, service robots, and bio-manipulation. Until now, most existing methods for IPMC manipulation have used displacement control rather than direct force control; however, under most conditions the success rate of manipulating tiny fragile objects is limited by the contact force, for example when an IPMC gripper is used to fix cells. Like most EAPs, IPMC exhibits creep: the generated force changes with time, and the creep model is influenced by changes in water content and other environmental factors, so a proper force control method is urgently needed. This paper presents a novel adaptive force control method (AIPOF control, adaptive integral periodic output feedback control) based on a creep model whose parameters are obtained with the FRLS on-line identification method. The AIPOF control method can achieve an arbitrary pole configuration as long as the plant is controllable and observable. This paper also designs POF and IPOF controllers to compare their test results. Simulations and experiments on micro-force tracking are carried out, with results confirming that the proposed control method is viable.

  7. Parameter testing for lattice filter based adaptive modal control systems

    NASA Technical Reports Server (NTRS)

    Sundararajan, N.; Williams, J. P.; Montgomery, R. C.

    1983-01-01

    For Large Space Structures (LSS), an adaptive control system is highly desirable. The present investigation is concerned with an 'indirect' adaptive control scheme wherein the system order, mode shapes, and modal amplitudes are estimated on-line using an identification scheme based on recursive, least-squares, lattice filters. Using the identified model parameters, a modal control law based on a pole-placement scheme with the objective of vibration suppression is employed. A method is presented for closed loop adaptive control of a flexible free-free beam. The adaptive control scheme consists of a two stage identification scheme working in series and a modal pole placement control scheme. The main conclusion from the current study is that the identified parameters cannot be directly used for controller design purposes.

  8. Optimal Hops-Based Adaptive Clustering Algorithm

    NASA Astrophysics Data System (ADS)

    Xuan, Xin; Chen, Jian; Zhen, Shanshan; Kuo, Yonghong

This paper proposes an optimal hops-based adaptive clustering algorithm (OHACA). The algorithm sets an energy selection threshold before clusters form, so that nodes with less energy are more likely to go to sleep immediately. In the setup phase, OHACA introduces an adaptive mechanism to adjust cluster heads and balance load, and optimal distance theory is applied to discover the practical optimal routing path that minimizes the total transmission energy. Simulation results show that OHACA prolongs the life of the network, improves the energy utilization rate, and transmits more data because of its energy balance.
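
    One toy round of the energy-threshold idea might look like the sketch below; the head-selection probability and all constants are stand-ins, not the published OHACA rules.

    ```python
    import random

    def cluster_round(node_energy, energy_threshold=0.2, head_fraction=0.1, seed=0):
        """node_energy: dict node_id -> residual energy in [0, 1]."""
        rng = random.Random(seed)
        awake = {n: e for n, e in node_energy.items() if e >= energy_threshold}
        asleep = [n for n in node_energy if n not in awake]
        total = sum(awake.values()) or 1.0
        # Nodes with more residual energy are more likely to become cluster heads.
        heads = [n for n, e in awake.items()
                 if rng.random() < head_fraction * len(awake) * e / total]
        members = [n for n in awake if n not in heads]
        return heads, members, asleep

    energies = {i: random.Random(i).random() for i in range(20)}
    heads, members, asleep = cluster_round(energies)
    print(len(heads), len(members), len(asleep))
    ```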

  9. A Method for Severely Constrained Item Selection in Adaptive Testing.

    ERIC Educational Resources Information Center

    Stocking, Martha L.; Swanson, Len

    1993-01-01

    A method is presented for incorporating a large number of constraints on adaptive item selection in the construction of computerized adaptive tests. The method, which emulates practices of expert test specialists, is illustrated for verbal and quantitative measures. Its foundation is application of a weighted deviations model and algorithm. (SLD)

  10. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  12. Adaptive method for electron bunch profile prediction

    NASA Astrophysics Data System (ADS)

    Scheinker, Alexander; Gessner, Spencer

    2015-10-01

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using matlab and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which are important for the plasma wakefield acceleration experiments being explored at FACET.
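
    The bounded-update, model-free tuning described here belongs to the family of extremum-seeking schemes. The sketch below shows one common discrete-time form on a stand-in cost (the real cost in the experiment was a simulated-versus-measured spectrum mismatch); the gains, dither frequencies, and cost are illustrative assumptions, not the machine settings.

    ```python
    import numpy as np

    def extremum_seeking_tune(cost, p0, iters=2000, dt=0.05, k=2.0, alpha=0.5):
        """Dither each parameter at its own frequency and feed the measured cost back
        into the dither phase; parameters then drift toward lower cost while each
        per-step update stays bounded by dt*sqrt(alpha*omega)."""
        p = np.asarray(p0, dtype=float)
        omega = 10.0 * (1.0 + np.arange(p.size) / p.size)   # distinct dither frequencies
        for n in range(iters):
            c = cost(p)                                      # measured cost at this step
            p = p + dt * np.sqrt(alpha * omega) * np.cos(omega * n * dt + k * c)
        return p

    # Stand-in cost: distance of the parameters from an unknown target vector.
    target = np.array([1.0, -2.0, 0.5])
    tuned = extremum_seeking_tune(lambda p: float(np.sum((p - target) ** 2)),
                                  p0=np.zeros(3))
    print(np.round(tuned, 2))
    ```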

  13. Adaptive finite element methods in electrochemistry.

    PubMed

    Gavaghan, David J; Gillow, Kathryn; Süli, Endre

    2006-12-05

    In this article, we review some of our previous work that considers the general problem of numerical simulation of the currents at microelectrodes using an adaptive finite element approach. Microelectrodes typically consist of an electrode embedded (or recessed) in an insulating material. For all such electrodes, numerical simulation is made difficult by the presence of a boundary singularity at the electrode edge (where the electrode meets the insulator), manifested by the large increase in the current density at this point, often referred to as the edge effect. Our approach to overcoming this problem has involved the derivation of an a posteriori bound on the error in the numerical approximation for the current that can be used to drive an adaptive mesh-generation algorithm, allowing calculation of the quantity of interest (the current) to within a prescribed tolerance. We illustrate the generic applicability of the approach by considering a broad range of steady-state applications of the technique.

  14. A decentralized adaptive robust method for chaos control.

    PubMed

    Kobravi, Hamid-Reza; Erfanian, Abbas

    2009-09-01

    This paper presents a control strategy, which is based on sliding mode control, adaptive control, and fuzzy logic system for controlling the chaotic dynamics. We consider this control paradigm in chaotic systems where the equations of motion are not known. The proposed control strategy is robust against the external noise disturbance and system parameter variations and can be used to convert the chaotic orbits not only to the desired periodic ones but also to any desired chaotic motions. Simulation results of controlling some typical higher order chaotic systems demonstrate the effectiveness of the proposed control method.

  15. A wavelet-based Projector Augmented-Wave (PAW) method: Reaching frozen-core all-electron precision with a systematic, adaptive and localized wavelet basis set

    NASA Astrophysics Data System (ADS)

    Rangel, T.; Caliste, D.; Genovese, L.; Torrent, M.

    2016-11-01

    We present a Projector Augmented-Wave (PAW) method based on a wavelet basis set. We implemented our wavelet-PAW method as a PAW library in the ABINIT package [http://www.abinit.org] and into BigDFT [http://www.bigdft.org]. We test our implementation in prototypical systems to illustrate the potential usage of our code. By using the wavelet-PAW method, we can simulate charged and special boundary condition systems with frozen-core all-electron precision. Furthermore, our work paves the way to large-scale and potentially order-N simulations within a PAW method.

  16. Adaptive methods, rolling contact, and nonclassical friction laws

    NASA Technical Reports Server (NTRS)

    Oden, J. T.

    1989-01-01

    Results and methods on three different areas of contemporary research are outlined. These include adaptive methods, the rolling contact problem for finite deformation of a hyperelastic or viscoelastic cylinder, and non-classical friction laws for modeling dynamic friction phenomena.

  17. Matched filter based iterative adaptive approach

    NASA Astrophysics Data System (ADS)

    Nepal, Ramesh; Zhang, Yan Rockee; Li, Zhengzheng; Blake, William

    2016-05-01

    Matched filter sidelobes arising from diversified LPI waveform design and limited sensor resolution are two important considerations in radars and active sensors in general. Matched filter sidelobes can mask weaker targets, and low sensor resolution not only causes a high margin of error but also limits sensing in target-rich environments. The improvement of both factors depends in part on the transmitted waveform and, consequently, on the pulse compression technique, so an adaptive pulse compression algorithm that mitigates these limitations is desired. A new Matched Filter based Iterative Adaptive Approach, MF-IAA, has been developed as an extension of the traditional Iterative Adaptive Approach (IAA). MF-IAA takes the matched filter output as its input, so that the Iterative Adaptive Approach can be implemented without disrupting the processing chain of the traditional matched filter. Like IAA, MF-IAA is a user-parameter-free, iterative, weighted-least-squares spectral identification algorithm. This work focuses on the implementation of MF-IAA. The feasibility of MF-IAA is studied using a realistic airborne radar simulator as well as actual measured airborne radar data. Its performance is measured with different test waveforms and different Signal-to-Noise Ratio (SNR) levels, and Range-Doppler super-resolution using MF-IAA is investigated. Sidelobe reduction as well as super-resolution enhancement is validated, and the robustness of MF-IAA with respect to different LPI waveforms and SNR levels is demonstrated.

  18. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.

  19. An Adaptive Discontinuous Galerkin Method for Modeling Atmospheric Convection (Preprint)

    DTIC Science & Technology

    2011-04-13

    Excerpt (sensitivity studies): One important question for any adaptive numerical model is how accurate the adaptive method is. A comparison criterion is defined and used for several sensitivity studies, which include a comparison between a simulation on an adaptive mesh and a simulation on a uniform mesh, and a sensitivity study concerning the size of the refinement region.

  20. Sinusoidal synthesis based adaptive tracking for rotating machinery fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    This paper presents a novel Sinusoidal Synthesis Based Adaptive Tracking (SSBAT) technique for vibration-based rotating machinery fault detection. The proposed SSBAT algorithm is an adaptive time-series technique that makes use of both frequency- and time-domain information of vibration signals. This information is incorporated in a time-varying dynamic model, and signal tracking is realized by applying adaptive sinusoidal synthesis to the vibration signal. A modified Least-Squares (LS) method is adopted to estimate the model parameters. In addition to tracking, the vibration synthesis model is used mainly as a linear time-varying predictor: the health condition of the rotating machine is monitored by checking the residual between the predicted and measured signals. The SSBAT method takes advantage of the sinusoidal nature of vibration signals and transfers the nonlinear problem into a linear adaptive problem in the time domain based on a state-space realization. It has a low computational burden and does not need a priori knowledge of the machine in the no-fault condition, which makes the algorithm well suited for on-line fault detection. The method is validated using both numerical simulation and practical application data, and the fault detection results are compared with the commonly adopted autoregressive (AR) and autoregressive Minimum Entropy Deconvolution (ARMED) methods to verify the feasibility and performance of SSBAT.
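
    As a rough illustration of the prediction-and-residual idea (not the authors' state-space SSBAT model), the sketch below fits known shaft harmonics to a sliding window by least squares, predicts the next sample, and monitors the prediction residual; the signal, frequencies, and fault signature are all invented.

      import numpy as np

      def sinusoid_lstsq_predict(window, freqs, fs, t_next):
          """Fit a sum of sinusoids at known frequencies to a data window by least squares
          and predict the value at relative time t_next."""
          n = window.size
          t = np.arange(n) / fs
          cols = [np.ones(n)]
          for f in freqs:
              cols += [np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
          A = np.column_stack(cols)
          coef, *_ = np.linalg.lstsq(A, window, rcond=None)
          row = [1.0]
          for f in freqs:
              row += [np.sin(2 * np.pi * f * t_next), np.cos(2 * np.pi * f * t_next)]
          return np.array(row) @ coef

      # Toy vibration: shaft harmonics plus noise, with a crude fault signature added later.
      np.random.seed(0)
      fs, f0 = 2000.0, 30.0
      t = np.arange(0, 1.0, 1 / fs)
      signal = np.sin(2*np.pi*f0*t) + 0.3*np.sin(2*np.pi*2*f0*t) + 0.05*np.random.randn(t.size)
      signal[t > 0.5] += 0.5 * (np.sin(2*np.pi*7*t[t > 0.5]) > 0.99)   # fault impulses

      win = 400
      residual = [abs(signal[k] - sinusoid_lstsq_predict(signal[k-win:k], [f0, 2*f0], fs, win/fs))
                  for k in range(win, signal.size)]
      half = signal.size // 2 - win
      print("mean residual healthy vs faulty:", np.mean(residual[:half]), np.mean(residual[half:]))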

  1. Adaptive skin detection based on online training

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Tang, Liang; Zhou, Jie; Rong, Gang

    2007-11-01

    Skin is a widely used cue for porn image classification. Most conventional methods are off-line training schemes: they use a fixed boundary to segment skin regions in the images and are effective only under restricted conditions, e.g., good lighting and a single ethnicity. This paper presents an adaptive online training scheme for skin detection which can handle these difficult cases. In our approach, skin detection is treated as a classification problem on a Gaussian mixture model. For each image, the human face is detected and the face color is used to establish a primary estimate of the skin color distribution. An adaptive online training algorithm is then used to find the real boundary between skin color and background color in the current image. Experimental results on 450 images showed that the proposed method is more robust in general situations than conventional ones.

  2. Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization

    NASA Astrophysics Data System (ADS)

    Kong, Weiwei; Lei, Yang; Zhao, Huaixun

    2014-11-01

    The fusion of visible-light and infrared images has been an active topic in both military and civilian areas, and a great many algorithms and techniques have been developed for it. This paper proposes a novel adaptive approach to this fusion problem that combines the multi-scale geometry analysis (MGA) of the non-subsampled shearlet transform (NSST) with fast non-negative matrix factorization (FNMF). Compared with other conventional MGA tools, NSST offers not only better feature-capturing capability but also much lower computational complexity. As a modified version of the classic NMF model, FNMF largely overcomes the local-optimum problem inherent in NMF. Furthermore, because FNMF has a simpler structure and requires far fewer iterations, the overall computational efficiency is enhanced, which is meaningful and promising for many real-time applications, especially military and medical technologies. Experimental results indicate that the proposed method is superior to other current popular methods in terms of both subjective visual quality and objective performance.

  3. An adaptive pattern based nonlinear PID controller.

    PubMed

    Segovia, Juan Pablo; Sbarbaro, Daniel; Ceballos, Eric

    2004-04-01

    This paper presents a nonlinear proportional-integral-derivative (PID) controller that combines a pattern-based adaptive algorithm, which addresses the problem of tuning the controller, with an associative memory that stores the parameters corresponding to different operating conditions. The simplicity of the algorithm enables its implementation in current programmable logic controller technology. Several real-time experiments, carried out in a pressurized tank, illustrate the performance of the proposed controller.

  4. Adaptable radiation monitoring system and method

    DOEpatents

    Archer, Daniel E.; Beauchamp, Brock R.; Mauger, G. Joseph; Nelson, Karl E.; Mercer, Michael B.; Pletcher, David C.; Riot, Vincent J.; Schek, James L.; Knapp, David A.

    2006-06-20

    A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.

  5. Adaptive computational methods for aerothermal heating analysis

    NASA Technical Reports Server (NTRS)

    Price, John M.; Oden, J. Tinsley

    1988-01-01

    The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations with concentration on shock and inviscid flow field capturing. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme are presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.

  6. An adaptive pseudospectral method for discontinuous problems

    NASA Technical Reports Server (NTRS)

    Augenbaum, Jeffrey M.

    1988-01-01

    The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep gradient solutions of hyperbolic pde's by approximation in suitably chosen coordinate systems. Continuous, high gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.

  7. Moving and adaptive grid methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Trepanier, Jean-Yves; Camarero, Ricardo

    1995-01-01

    This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.

  8. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

    An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.

  9. Adaptive regularization network based neural modeling paradigm for nonlinear adaptive estimation of cerebral evoked potentials.

    PubMed

    Zhang, Jian-Hua; Böhme, Johann F

    2007-11-01

    In this paper we report an adaptive regularization network (ARN) approach to realizing fast blind separation of cerebral evoked potentials (EPs) from background electroencephalogram (EEG) activity with no need to make any explicit assumption on the statistical (or deterministic) signal model. The ARNs are proposed to construct nonlinear EEG and EP signal models. A novel adaptive regularization training (ART) algorithm is proposed to improve the generalization performance of the ARN. Two adaptive neural modeling methods based on the ARN are developed and their implementation and performance analysis are also presented. The computer experiments using simulated and measured visual evoked potential (VEP) data have shown that the proposed ARN modeling paradigm yields computationally efficient and more accurate VEP signal estimation owing to its intrinsic model-free and nonlinear processing characteristics.

  10. A new adaptive time step method for unsteady flow simulations in a human lung.

    PubMed

    Fenández-Tena, Ana; Marcos, Alfonso C; Martínez, Cristina; Keith Walters, D

    2017-04-07

    The innovation presented is a method for adaptive time-stepping that allows clustering of time steps in portions of the cycle for which flow variables are rapidly changing, based on the concept of using a uniform step in a relevant dependent variable rather than a uniform step in the independent variable time. A user-defined function was developed to adapt the magnitude of the time step (adaptive time step) to a defined rate of change in inlet velocity. Quantitative comparison indicates that the new adaptive time stepping method significantly improves accuracy for simulations using an equivalent number of time steps per cycle.
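
    A minimal sketch of the idea, under the assumption that the step is chosen so the inlet velocity changes by a roughly constant amount per step; the function names, clamping limits, and breathing-like waveform are illustrative only.

      import numpy as np

      def adaptive_time_steps(inlet_velocity, t_end, du_target, dt_min, dt_max, dt0=None):
          """Choose each time step so the change in inlet velocity over the step is roughly
          du_target (a uniform step in the dependent variable), clamped to [dt_min, dt_max]."""
          times, dt = [0.0], dt0 if dt0 is not None else dt_max
          while times[-1] < t_end:
              t = times[-1]
              rate = abs(inlet_velocity(t + dt) - inlet_velocity(t)) / dt   # local |du/dt|
              dt = np.clip(du_target / rate, dt_min, dt_max) if rate > 0 else dt_max
              times.append(min(t + dt, t_end))
          return np.array(times)

      # Example: a breathing-like inlet velocity; steps cluster where the flow changes fastest.
      breath = lambda t: np.sin(2 * np.pi * t / 4.0)          # 4 s breathing cycle
      steps = adaptive_time_steps(breath, t_end=4.0, du_target=0.05, dt_min=1e-3, dt_max=0.5)
      print(len(steps), "steps; smallest dt =", np.diff(steps).min())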

  11. Decision-directed entropy-based adaptive filtering

    NASA Astrophysics Data System (ADS)

    Myler, Harley R.; Weeks, Arthur R.; Van Dyke-Lewis, Michelle

    1991-12-01

    A recurring problem in adaptive filtering is the selection of control measures for parameter modification. A number of methods reported thus far have used localized order statistics to adaptively adjust filter parameters, and the most effective techniques are based on edge detection as a decision mechanism that preserves edge information while noise is filtered. In general, decision-directed adaptive filters operate on a localized area within an image, using statistics of the area as a discrimination parameter. Typically, adaptive filters are based on pixel-to-pixel variations within a localized area that are due either to edges or to additive noise; in homogeneous areas, where variances are due to additive noise, the filter should operate to reduce the noise. Using an edge detection technique, a decision-directed adaptive filter can vary the filtering in proportion to the amount of edge information detected. We present an approach that uses an entropy measure on edges to differentiate between image variations due to edge information and those due to noise. The method computes the entropy over the spatial contour variations of edges in the window.
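
    One plausible reading of the entropy-on-edges idea is sketched below: the entropy of the local gradient-direction histogram is low for a coherent edge and high for noise, and the amount of smoothing is scaled accordingly. This is an illustrative interpretation with invented parameters, not the filter defined in the paper.

      import numpy as np

      def directional_entropy(gx, gy, nbins=8):
          """Entropy of the gradient-direction histogram: aligned directions (a real edge)
          give low entropy, random directions (noise) give high entropy."""
          theta = np.arctan2(gy, gx).ravel()
          hist, _ = np.histogram(theta, bins=nbins, range=(-np.pi, np.pi))
          p = hist / max(hist.sum(), 1)
          p = p[p > 0]
          return -(p * np.log(p)).sum() / np.log(nbins)        # normalized to [0, 1]

      def entropy_adaptive_filter(img, win=5):
          """Blend each pixel between its original value and the local mean, weighted by the
          local directional entropy: smooth heavily where variations look like noise, lightly
          where they look like a coherent edge."""
          gy, gx = np.gradient(img.astype(float))
          out = img.astype(float).copy()
          r = win // 2
          padded = np.pad(img.astype(float), r, mode="reflect")
          pgx, pgy = np.pad(gx, r, mode="reflect"), np.pad(gy, r, mode="reflect")
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  block = padded[i:i + win, j:j + win]
                  h = directional_entropy(pgx[i:i + win, j:j + win], pgy[i:i + win, j:j + win])
                  out[i, j] = (1 - h) * img[i, j] + h * block.mean()
          return out

      # Demo: a step edge plus Gaussian noise; noise is reduced while the edge is preserved.
      np.random.seed(0)
      img = np.zeros((64, 64)); img[:, 32:] = 1.0
      noisy = img + 0.2 * np.random.randn(64, 64)
      filtered = entropy_adaptive_filter(noisy)
      print("noise std before/after (flat region):", noisy[:, :20].std(), filtered[:, :20].std())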

  12. An adaptive stepsize method for the chemical Langevin equation.

    PubMed

    Ilie, Silvana; Teslya, Alexandra

    2012-05-14

    Mathematical and computational modeling are key tools in analyzing important biological processes in cells and living organisms. In particular, stochastic models are essential to accurately describe the cellular dynamics, when the assumption of the thermodynamic limit can no longer be applied. However, stochastic models are computationally much more challenging than the traditional deterministic models. Moreover, many biochemical systems arising in applications have multiple time-scales, which lead to mathematical stiffness. In this paper we investigate the numerical solution of a stochastic continuous model of well-stirred biochemical systems, the chemical Langevin equation. The chemical Langevin equation is a stochastic differential equation with multiplicative, non-commutative noise. We propose an adaptive stepsize algorithm for approximating the solution of models of biochemical systems in the Langevin regime, with small noise, based on estimates of the local error. The underlying numerical method is the Milstein scheme. The proposed adaptive method is tested on several examples arising in applications and it is shown to have improved efficiency and accuracy compared to the existing fixed stepsize schemes.
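
    A minimal sketch of an adaptive-stepsize Milstein integrator for a scalar SDE, using step doubling as the local error estimate; the drift, diffusion, tolerances, and step-control constants are assumptions for illustration, and the algorithm proposed in the paper may control the error differently.

      import numpy as np

      def milstein_step(x, dt, dw, a, b, db):
          return x + a(x) * dt + b(x) * dw + 0.5 * b(x) * db(x) * (dw * dw - dt)

      def adaptive_milstein(x0, t_end, a, b, db, tol=1e-3, dt0=1e-3,
                            dt_min=1e-6, dt_max=0.1, rng=None):
          """Adaptive-stepsize Milstein for dX = a dt + b dW: compare one full step against
          two half steps sharing the same Brownian increments and use the difference as a
          local error estimate to grow or shrink dt."""
          rng = rng or np.random.default_rng(0)
          t, x, dt = 0.0, x0, dt0
          path = [(t, x)]
          while t < t_end:
              dt = min(dt, t_end - t)
              dw1 = rng.normal(0.0, np.sqrt(dt / 2))
              dw2 = rng.normal(0.0, np.sqrt(dt / 2))
              x_full = milstein_step(x, dt, dw1 + dw2, a, b, db)
              x_half = milstein_step(milstein_step(x, dt / 2, dw1, a, b, db),
                                     dt / 2, dw2, a, b, db)
              err = abs(x_full - x_half)
              if err <= tol or dt <= dt_min:
                  t, x = t + dt, x_half
                  path.append((t, x))
                  dt = min(dt * 1.3, dt_max) if err < 0.5 * tol else dt
              else:
                  # A production code would reuse the rejected increments via a Brownian bridge.
                  dt = max(dt * 0.5, dt_min)
          return np.array(path)

      # Toy chemical-Langevin-like example: dX = k1*(K - X) dt + c*sqrt(max(X,0)) dW.
      k1, K, c = 2.0, 100.0, 0.5
      a = lambda x: k1 * (K - x)
      b = lambda x: c * np.sqrt(max(x, 0.0))
      db = lambda x: 0.0 if x <= 0 else c / (2.0 * np.sqrt(x))
      path = adaptive_milstein(50.0, 1.0, a, b, db)
      print("steps taken:", len(path) - 1, "final (t, X):", path[-1])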

  13. Surface estimation methods with phased-arrays for adaptive ultrasonic imaging in complex components

    NASA Astrophysics Data System (ADS)

    Robert, S.; Calmon, P.; Calvo, M.; Le Jeune, L.; Iakovleva, E.

    2015-03-01

    Immersion ultrasonic testing of structures with complex geometries may be significantly improved by using phased arrays and specific adaptive algorithms that allow flaws to be imaged under a complex and unknown interface. In this context, this paper presents a comparative study of the different Surface Estimation Methods (SEM) available in the CIVA software and used for adaptive imaging. These methods are based either on time-of-flight measurements or on image processing. We also introduce a generalized adaptive method in which flaws may be fully imaged with half-skip modes; in this method, both the surface and the back wall of a complex structure are estimated before the flaws are imaged.

  14. SU-E-J-153: MRI Based, Daily Adaptive Radiotherapy for Rectal Cancer: Contour Adaptation

    SciTech Connect

    Kleijnen, J; Burbach, M; Verbraeken, T; Weggers, R; Zoetelief, A; Reerink, O; Lagendijk, J; Raaymakers, B; Asselen, B

    2014-06-01

    Purpose: A major hurdle in adaptive radiotherapy is the adaptation of the planning MRI's delineations to the daily anatomy. We therefore investigate the accuracy of, and time needed for, online clinical target volume (CTV) adaptation by radiation therapists (RTTs), to be used in MRI-guided adaptive treatments on an MRI-Linac (MRL). Methods: Sixteen patients, diagnosed with early stage rectal cancer, underwent a T2-weighted MRI prior to each fraction of short-course radiotherapy, resulting in 4–5 scans per patient. On these scans, the CTV was delineated according to guidelines by an experienced radiation oncologist (RO) and considered to be the gold standard. For each patient, the first MRI was considered the planning MRI and matched on bony anatomy to the 3–4 daily MRIs. The planning MRI's CTV delineation was rigidly propagated to the daily MRI scans as a proposal for adaptation. Three RTTs in training started adapting the CTV to conform to the guidelines, after a two-hour training lecture and a two-patient (n=7) training set. To assess the inter-therapist variation, all three RTTs altered the delineations of 3 patients (n=12); one RTT altered the CTV delineations (n=53) of the remaining 11 patients. The time needed for adaptation of the CTV to the guidelines was recorded. As a measure of agreement, the conformity index (CI) was determined between the RTTs' delineations as a group, and Dice similarity coefficients were determined between the delineations of the RTT and the RO. Results: We found good agreement between the RTTs' and RO's delineations (average Dice=0.91, SD=0.03). Furthermore, the inter-observer agreement between the RTTs was high (average CI=0.94, SD=0.02). Adaptation time reduced from 10:33 min (SD=3:46) for the first ten delineations to 2:56 min (SD=1:06) for the last ten. Conclusion: Daily CTV adaptation by RTTs seems a feasible and safe way to introduce daily, online MRI-based plan adaptation for an MRL.

  15. Adaptive upscaling with the dual mesh method

    SciTech Connect

    Guerillot, D.; Verdiere, S.

    1997-08-01

    The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance the a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous medium and to an actual field case in South America.

  16. Adaptive grid methods for RLV environment assessment and nozzle analysis

    NASA Technical Reports Server (NTRS)

    Thornburg, Hugh J.

    1996-01-01

    ... forcing functions to attract/repel points in an elliptic system, or to trigger local refinement, based upon application of an equidistribution principle. The popularity of solution-adaptive techniques is growing in tandem with unstructured methods. The difficulty of precisely controlling mesh densities and orientations with current unstructured grid generation systems has driven the use of solution-adaptive meshing. Derivatives of density or pressure are widely used to construct such weight functions and have proven very successful for inviscid flows with shocks. However, less success has been realized for flowfields with viscous layers, vortices, or shocks of disparate strength. It is difficult to maintain the appropriate mesh point spacing in the various regions which require a fine spacing for adequate resolution; mesh points often migrate from important regions due to refinement of dominant features. An example of this is the well-known tendency of adaptive methods to increase the resolution of shocks in the flowfield around airfoils, but in the incorrect location due to inadequate resolution of the stagnation region. This problem has been the motivation for this research.

  17. Adaptive script based animations for intervention planning.

    PubMed

    Muehler, Konrad; Bade, Ragnar; Preim, Bernhard

    2006-01-01

    We describe scripting facilities to create medical animations for intervention planning based on medical volume data and derived segmentation information. A data-independent scripting language has been developed to separate animation scripts from the imaging data. The scripting facilities are adaptive and allow one script to be reused to create animations for many different patients. With expressive animations, we support the individual planning process, the preoperative documentation, and discussions between medical doctors, for example in a tumor board. We also discuss the enhancement of interactive explorations with animations generated on the fly.

  18. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion, and the maximum bit rate at which a sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The algorithm adaptively selects the image coding method, either CSI-based modified JPEG or standard JPEG, for a given target bit rate using the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.

  19. A method of camera calibration with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei

    2009-07-01

    In order to calculate the parameters of the camera correctly, we must determine accurate coordinates of certain points in the image plane. Corners are important features in 2D images: generally speaking, they are points of high curvature that lie at the junction of image regions of different brightness, and corner detection is therefore widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When using the SUSAN corner detection algorithm, we propose an approach to set the gray-difference threshold adaptively, which makes it possible to pick up the correct chessboard inner corners under all kinds of gray contrast. Experimental results show that the method is feasible.
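
    A rough sketch of the idea of making the SUSAN brightness-difference threshold adaptive: here the threshold is tied to the local intensity standard deviation, so low-contrast corners still respond. The specific rule, mask radius, and test image are assumptions, not the paper's formula.

      import numpy as np

      def susan_corners(img, t_scale=0.5, radius=3, geom_frac=0.5):
          """SUSAN-style corner response with an adaptively chosen brightness-difference
          threshold t = t_scale * (local intensity std) instead of a fixed value."""
          img = img.astype(float)
          h, w = img.shape
          ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          mask = (ys ** 2 + xs ** 2) <= radius ** 2          # circular USAN mask
          offsets = list(zip(ys[mask], xs[mask]))
          g = geom_frac * len(offsets)                       # geometric threshold
          response = np.zeros_like(img)
          for i in range(radius, h - radius):
              for j in range(radius, w - radius):
                  window = img[i - radius:i + radius + 1, j - radius:j + radius + 1]
                  t = t_scale * window.std() + 1e-9          # adaptive brightness threshold
                  usan = sum(abs(img[i + dy, j + dx] - img[i, j]) < t for dy, dx in offsets)
                  response[i, j] = g - usan if usan < g else 0.0
          return response

      # Synthetic low-contrast square: only its corners give a strong response.
      img = np.zeros((64, 64)); img[20:44, 20:44] = 0.2
      resp = susan_corners(img)
      print("strongest response at:", np.unravel_index(resp.argmax(), resp.shape))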

  20. Research on PGNAA adaptive analysis method with BP neural network

    NASA Astrophysics Data System (ADS)

    Peng, Ke-Xin; Yang, Jian-Bo; Tuo, Xian-Guo; Du, Hua; Zhang, Rui-Xue

    2016-11-01

    A new approach to the problem of spectral analysis in prompt gamma neutron activation analysis (PGNAA) is developed and demonstrated. It applies a BP neural network to PGNAA energy spectrum analysis based on Monte Carlo (MC) simulation. The main tasks are as follows: (1) completing the MC simulation of a PGNAA spectrum library, in which the mass fractions of the elements Si, Ca, and Fe are varied from 0.00 to 0.45 in steps of 0.05 and each sample is simulated using MCNP; (2) establishing the BP model for adaptive quantitative analysis of the PGNAA energy spectrum, by calculating the peak areas of eight characteristic gamma rays, corresponding to eight elements, in each of 1000 samples and in the standard sample; and (3) verifying the viability of the adaptive quantitative analysis algorithm using a further 68 samples. Results show that the precision of the element contents calculated with the neural network is significantly higher than that obtained with the MCLLS method.

  1. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high-performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.

  2. Modular adaptive implant based on smart materials.

    PubMed

    Bîzdoacă, N; Tarniţă, Daniela; Tarniţă, D N

    2008-01-01

    Applications of biological methods and systems found in nature to the study and design of engineering systems and modern technology are defined as bionics. The present paper describes a bionics application of shape memory alloy in the construction of an orthopedic implant. The main idea of this paper is the design of modular adaptive implants for fractured bones. To improve the efficiency of treatment, the implant has to protect the fractured bone during the healing period, taking over as much as possible of the usual daily load of the healthy bone. After a particular stage of the healing period has passed, the modularity of the implant allows the load to be gradually transferred to the bone, thereby assuring a gradual recovery of bone function. The adaptability of the design lies in the physician's ability to configure the implant to correspond to the patient's specific anatomy. Using realistic CT-based numerical bone models, mechanical simulations of different types of loading of fractured bones treated with the conventional method are presented. The results are discussed and conclusions are drawn.

  3. Adaptive RED algorithm based on minority game

    NASA Astrophysics Data System (ADS)

    Wei, Jiaolong; Lei, Ling; Qian, Jingjing

    2007-11-01

    As more applications appear and Internet technology develops, relying on end systems alone cannot satisfy the complex QoS demands of the network; router mechanisms must take part in protecting responsive flows from non-responsive ones. Routers mainly use active queue management (AQM) to avoid congestion. From the viewpoint of interaction between routers, the paper applies the minority game to describe the interaction of the users and observes its effect on the average queue length. Because the parameters α and β of ARED are hard to determine, adaptive RED based on the minority game models the interactions of the agents and adjusts the ARED parameters α and β toward their best values. Adaptive RED based on the minority game thus optimizes ARED and smooths the average queue length. In addition, this paper extends the network simulation platform NS by adding new elements. Simulations show that the new algorithm achieves the anticipated objectives.

  4. Adaptive DFT-based Interferometer Fringe Tracking

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) observatory at Mt. Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse.
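
    A minimal sketch of a single-bin (sliding-window) DFT phase estimate feeding a proportional piezo correction in a toy closed loop; the carrier frequency, wavelength, gain, and noise level are invented, and the real system tracks three baselines with optimized real-time code.

      import numpy as np

      def fringe_phase(scan, fringe_freq, sample_rate):
          """Phase of the fringe carrier in one scan, from a single-bin DFT."""
          n = scan.size
          t = np.arange(n) / sample_rate
          # Correlating with the complex carrier evaluates the DFT at fringe_freq only.
          z = np.sum((scan - scan.mean()) * np.exp(-2j * np.pi * fringe_freq * t))
          return np.angle(z)

      # Toy closed loop: fringe carrier at 5 kHz, residual OPD of 0.30 um, lambda = 1.65 um.
      np.random.seed(1)
      fs, f0, lam = 100e3, 5e3, 1.65e-6
      t = np.arange(512) / fs
      opd, correction, gain = 0.30e-6, 0.0, 0.7
      for step in range(5):
          scan = (np.cos(2 * np.pi * f0 * t + 2 * np.pi * (opd + correction) / lam)
                  + 0.1 * np.random.randn(t.size))
          phase = fringe_phase(scan, f0, fs)                     # measured fringe phase
          correction -= gain * phase / (2 * np.pi) * lam         # proportional piezo command
          print(f"step {step}: residual OPD = {(opd + correction) * 1e6:+.4f} um")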

  5. Fabrication Methods for Adaptive Deformable Mirrors

    NASA Technical Reports Server (NTRS)

    Toda, Risaku; White, Victor E.; Manohara, Harish; Patterson, Keith D.; Yamamoto, Namiko; Gdoutos, Eleftherios; Steeves, John B.; Daraio, Chiara; Pellegrino, Sergio

    2013-01-01

    Previously, it was difficult to fabricate deformable mirrors made with piezoelectric actuators, because numerous actuators need to be precisely assembled to control the surface shape of the mirror. Two approaches have been developed. Both approaches begin by depositing a stack of piezoelectric films and electrodes over a silicon wafer substrate. In the first approach, the silicon wafer is removed initially by plasma-based reactive ion etching (RIE) and non-plasma dry etching with xenon difluoride (XeF2). In the second approach, the actuator film stack is immersed in a liquid such as deionized water; because the adhesion between the actuator film stack and the substrate is relatively weak, simply seeping liquid between the film and the substrate gently releases the actuator film stack from the substrate. The deformable mirror contains multiple piezoelectric membrane layers as well as multiple electrode layers (some patterned and some unpatterned). For the piezoelectric layer, polyvinylidene fluoride (PVDF) or its co-polymer, poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)), is used. The surface of the mirror is coated with a reflective coating. The actuator film stack is fabricated on a silicon, or silicon-on-insulator (SOI), substrate by repeatedly spin-coating the PVDF or P(VDF-TrFE) solution and depositing patterned metal (electrode) layers. In the first approach, the actuator film stack is prepared on an SOI substrate. Then, the thick silicon (typically 500 microns thick and called the handle silicon) of the SOI wafer is etched by a deep reactive ion etching process tool (SF6-based plasma etching). This deep RIE stops at the middle SiO2 layer, which is then etched by either HF-based wet etching or a dry plasma etch. The thin silicon layer (generally called the device layer) of the SOI is removed by a XeF2 dry etch. This XeF2 etch is very gentle and extremely selective, so the released mirror membrane is not damaged. It is possible to replace SOI with silicon

  6. Sparse diffraction imaging method using an adaptive reweighting homotopy algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei; Qiu, Zhen

    2017-02-01

    Seismic diffractions carry valuable information from subsurface small-scale geologic discontinuities, such as faults, cavities and other features associated with hydrocarbon reservoirs. However, seismic imaging methods mainly use reflection theory for constructing imaging models, which imposes a smoothness constraint on the imaging conditions. In fact, diffractors account for only a small part of the distribution in an imaging model and possess discontinuous characteristics; in mathematics, this kind of phenomenon can be described by sparse optimization theory. Therefore, we propose a diffraction imaging method based on a sparsity-constrained model for studying diffractors. A reweighted L2-norm and L1-norm minimization model is investigated, where the L2 term requires a least-squares error between modeled diffractions and observed diffractions and the L1 term imposes sparsity on the solution. In order to solve this model efficiently, we use an adaptive reweighting homotopy algorithm that updates the solutions by tracking a path along inexpensive homotopy steps. Numerical examples and a field data application demonstrate the feasibility of the proposed method and show its significance for detecting small-scale discontinuities in a seismic section. The proposed method has the advantage of improving the focusing ability of diffractions and reducing migration artifacts.

  7. A locally adaptive kernel regression method for facies delineation

    NASA Astrophysics Data System (ADS)

    Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.

    2015-12-01

    Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data, to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run in a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method is demonstrated to significantly improve when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough-curve performance.
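
    A simplified sketch of a locally adaptive steering kernel classifier: each hard data point gets an anisotropic Gaussian kernel elongated along the principal direction of its nearest neighbours, and grid nodes are classified by a weighted vote. The neighbourhood rule, elongation factor, and synthetic aquifer are assumptions; the published estimator differs in its details.

      import numpy as np

      def local_steering_cov(points, idx, k=6, elong=4.0, base_scale=1.0):
          """Anisotropic covariance for data point idx: principal direction from the PCA of
          its k nearest neighbours, elongated along the direction of strongest alignment."""
          d = np.linalg.norm(points - points[idx], axis=1)
          nbrs = points[np.argsort(d)[1:k + 1]] - points[idx]
          _, _, vt = np.linalg.svd(nbrs, full_matrices=False)   # rows: principal directions
          scales = np.array([elong, 1.0]) * base_scale
          return (vt.T * scales ** 2) @ vt                       # elongated along vt[0]

      def steering_kernel_classify(points, labels, grid, k=6):
          """Weighted vote over hard data, each with its own locally adaptive Gaussian kernel."""
          covs = [np.linalg.inv(local_steering_cov(points, i, k)) for i in range(len(points))]
          classes = np.unique(labels)
          out = np.empty(len(grid), dtype=labels.dtype)
          for g, x in enumerate(grid):
              votes = dict.fromkeys(classes, 0.0)
              for i, p in enumerate(points):
                  r = x - p
                  votes[labels[i]] += np.exp(-0.5 * r @ covs[i] @ r)
              out[g] = max(votes, key=votes.get)
          return out

      # Synthetic aquifer: two facies separated by a gently dipping interface, sparse hard data.
      rng = np.random.default_rng(3)
      pts = rng.uniform(0, 10, size=(40, 2))
      labels = (pts[:, 1] > 5 + 0.3 * pts[:, 0]).astype(int)     # true facies boundary
      xs, ys = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
      grid = np.column_stack([xs.ravel(), ys.ravel()])
      pred = steering_kernel_classify(pts, labels, grid)
      truth = (grid[:, 1] > 5 + 0.3 * grid[:, 0]).astype(int)
      print("classification accuracy on the grid:", (pred == truth).mean())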

  8. Adaptive stellar spectral subclass classification based on Bayesian SVMs

    NASA Astrophysics Data System (ADS)

    Du, Changde; Luo, Ali; Yang, Haifeng

    2017-02-01

    Stellar spectral classification is one of the most fundamental tasks in survey astronomy. Many automated classification methods have been applied to spectral data. However, their main limitation is that the model parameters must be tuned repeatedly to deal with different data sets. In this paper, we utilize Bayesian support vector machines (BSVM) to classify the spectral subclass data. Based on Gibbs sampling, BSVM can infer all model parameters adaptively according to different data sets, which allows us to circumvent the time-consuming cross validation for the penalty parameter. We explored different normalization methods for stellar spectral data, and the best one is suggested in this study. Finally, experimental results on several stellar spectral subclass classification problems show that the BSVM model not only possesses good adaptability but also provides better prediction performance than traditional methods.

  9. Adaptive Optics for Ground-based Hypertelescopes

    NASA Astrophysics Data System (ADS)

    Labeyrie, Antoine; Borkowski, Virginie; Martinache, Franz; Arnold, Luc; Dejonghe, Julien; Riaud, Pierre; Lardière, Olivier; Gillet, Sophie

    Hypertelescopes, which may be considered as "exploded" versions of an OWL or other ELT, can in principle reach aperture sizes exceeding 1-10 kilometers. They utilize a multi-aperture diluted array and produce direct images through a densified exit pupil. Variants with a flat (the hypertelescope version of the Optical Very Large Array) or spherical (Arecibo-like CARLINA concept) site are studied. Adaptive optics is a major requirement for obtaining direct snapshot images at high resolution. Ways of adapting the Shack-Hartmann and curvature sensing methods for diluted apertures have been proposed. We explore the feasibility of applying 3D Fourier transforms to the dispersed images for extracting the path difference and phase information. With a spherical site, the numerous stars observable simultaneously at large angles can presumably help in the way of atmospheric tomography. Similar optics, equipped with a coronagraph, is proposed to NASA for the Terrestrial Planet Finder. The 3D Fourier transform algorithm also appears applicable in this case for fringe acquisition and π/100 phasing.

  10. Cartesian-cell based grid generation and adaptive mesh refinement

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1993-01-01

    Viewgraphs on Cartesian-cell based grid generation and adaptive mesh refinement are presented. Topics covered include: grid generation; cell cutting; data structures; flow solver formulation; adaptive mesh refinement; and viscous flow.

  11. Adaptive BCI based on software agents.

    PubMed

    Castillo-Garcia, Javier; Cotrina, Anibal; Benevides, Alessandro; Delisle-Rodriguez, Denis; Longo, Berthil; Caicedo, Eduardo; Ferreira, Andre; Bastos, Teodiano

    2014-01-01

    The selection of features is generally the most difficult part of modeling a BCI; therefore, time and effort are invested in individual feature selection prior to training on a data set. Another great difficulty in modeling the BCI topology is the variability of brain signals between users. What should this topology look like in order to implement a system that can be used by a large number of users with an optimal set of features? The proposal presented in this paper obtains feature reduction and classifier selection based on software agents. The software agents contain Genetic Algorithms (GA) and a cost function: the GA uses entropy and mutual information to choose the number of features, and a cost function is defined for the classifier selection. Success rate and Cohen's Kappa coefficient are used as parameters to evaluate classifier performance. The obtained results allow finding a topology, represented as a neural model, for an adaptive BCI in which the number of channels, the features, and the classifier are interrelated. The minimal subset of features and the optimal classifier were obtained with the adaptive BCI; only three EEG channels were needed to obtain a success rate of 93% on BCI Competition III data set IVa.

  12. Adjoint Methods for Guiding Adaptive Mesh Refinement in Tsunami Modeling

    NASA Astrophysics Data System (ADS)

    Davis, B. N.; LeVeque, R. J.

    2016-12-01

    One difficulty in developing numerical methods for tsunami modeling is the fact that solutions contain time-varying regions where much higher resolution is required than elsewhere in the domain, particularly when tracking a tsunami propagating across the ocean. The open source GeoClaw software deals with this issue by using block-structured adaptive mesh refinement to selectively refine around propagating waves. For problems where only a target area of the total solution is of interest (e.g., one coastal community), a method that allows identifying and refining the grid only in regions that influence this target area would significantly reduce the computational cost of finding a solution. In this work, we show that solving the time-dependent adjoint equation and using a suitable inner product with the forward solution allows more precise refinement of the relevant waves. We present the adjoint methodology first in one space dimension for illustration and in a broad context since it could also be used in other adaptive software, and potentially for other tsunami applications beyond adaptive refinement. We then show how this adjoint method has been integrated into the adaptive mesh refinement strategy of the open source GeoClaw software and present tsunami modeling results showing that the accuracy of the solution is maintained and the computational time required is significantly reduced through the integration of the adjoint method into adaptive mesh refinement.
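
    The core of the approach, flagging only cells where the forward solution overlaps the adjoint solution (the region that can still influence the target), can be sketched for 1-D constant-speed advection, where the adjoint is simply the target indicator transported backward in time; the tolerances and pulse shapes below are illustrative assumptions, not GeoClaw code.

      import numpy as np

      def flag_cells_adjoint(q_forward, q_adjoint, tol):
          """Flag cells where the forward solution overlaps the adjoint solution, i.e. where
          the present solution can still influence the quantity of interest in the target."""
          return np.abs(q_forward * q_adjoint) > tol

      # 1-D constant-speed advection toy problem with two pulses; only the pulse that will
      # reach the target interval by t_final should trigger refinement.
      nx, c, t_now, t_final = 400, 1.0, 0.3, 1.0
      x = np.linspace(0.0, 2.0, nx)
      dx = x[1] - x[0]
      q = np.exp(-200 * (x - 0.8) ** 2) + np.exp(-200 * (x - 1.5) ** 2)   # forward solution at t_now
      # Adjoint of "integral of q over [1.4, 1.6] at t_final": the target indicator transported
      # backward in time, which for constant advection is a shift by c * (t_final - t_now).
      target = ((x >= 1.4) & (x <= 1.6)).astype(float)
      q_adj = np.roll(target, -int(round(c * (t_final - t_now) / dx)))
      flags = flag_cells_adjoint(q, q_adj, tol=1e-3)
      print("cells flagged:", flags.sum(), "of", nx,
            "; flagged x-range:", x[flags].min(), "-", x[flags].max())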

  13. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  14. Studies of an Adaptive Kaczmarz Method for Electrical Impedance Imaging

    NASA Astrophysics Data System (ADS)

    Li, Taoran; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.

    2013-04-01

    We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term JᵀJ, which could be expensive in terms of memory storage in large scale problems, we propose to solve the inverse problem by adaptively updating both the optimal current pattern with improved distinguishability and the conductivity estimate at each iteration. With a novel subset scheme, the memory-efficient reconstruction algorithm which appropriately combines the optimal current pattern generation and the Kaczmarz method can produce accurate and stable solutions adaptively compared to traditional Kaczmarz and Gauss-Newton type methods. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results.
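
    For reference, a plain cyclic Kaczmarz sweep on a stand-in linearized problem is sketched below; the paper's adaptive version additionally regenerates optimal current patterns and uses a subset scheme on the EIT forward model, which is not reproduced here, and the toy sensitivity matrix is random.

      import numpy as np

      def kaczmarz(A, b, x0=None, sweeps=50, relax=1.0):
          """Classical Kaczmarz iteration: project the current estimate onto one measurement
          hyperplane at a time, cycling through the rows, so the system is solved without
          ever forming or inverting A^T A."""
          m, n = A.shape
          x = np.zeros(n) if x0 is None else x0.astype(float).copy()
          row_norm2 = np.einsum('ij,ij->i', A, A)
          for _ in range(sweeps):
              for i in range(m):
                  if row_norm2[i] > 0:
                      x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
          return x

      # Toy linearized "sensitivity" problem standing in for one EIT update step.
      rng = np.random.default_rng(0)
      A = rng.normal(size=(120, 60))          # sensitivity (Jacobian) matrix
      sigma_true = rng.normal(size=60)        # conductivity perturbation to recover
      b = A @ sigma_true + 0.01 * rng.normal(size=120)
      sigma_est = kaczmarz(A, b)
      print("relative error:", np.linalg.norm(sigma_est - sigma_true) / np.linalg.norm(sigma_true))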

  15. Adaptive Elastic Net for Generalized Methods of Moments.

    PubMed

    Caner, Mehmet; Zhang, Hao Helen

    2014-01-30

    Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique because the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity as well as collinearity among a large number of variables, and the redundant parameters are set to zero via a data-dependent technique. The method has the oracle property, meaning that the nonzero parameters are estimated with their standard limit distribution while the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.

  16. Evaluation of Adaptive Subdivision Method on Mobile Device

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Isa, Siti Aida Mohd; Rehman, Amjad; Saba, Tanzila

    2013-06-01

    Recently, there have been significant improvements in the capabilities of mobile devices, but rendering large 3D objects is still tedious because of the devices' constrained resources. To reduce storage requirements, a 3D object is simplified, but certain areas of curvature are compromised and the surface is no longer smooth. Therefore a method to smooth selected areas of curvature is implemented; one of the popular methods is the adaptive subdivision method. Experiments are performed using two data sets, with results based on processing time, rendering speed, and the appearance of the object on the devices. The results show a drop in frame-rate performance due to the increase in the number of triangles at each level of iteration, while the processing time for generating the new mesh also increases significantly. Because of the difference in screen size between the devices, the surface on the iPhone appears to have more triangles and to be more compact than the surface displayed on the iPad.

  17. Adaptive responses to antibody based therapy.

    PubMed

    Rodems, Tamara S; Iida, Mari; Brand, Toni M; Pearson, Hannah E; Orbuch, Rachel A; Flanigan, Bailey G; Wheeler, Deric L

    2016-02-01

    Receptor tyrosine kinases (RTKs) represent a large class of protein kinases that span the cellular membrane. There are 58 human RTKs identified which are grouped into 20 distinct families based upon their ligand binding, sequence homology and structure. They are controlled by ligand binding which activates intrinsic tyrosine-kinase activity. This activity leads to the phosphorylation of distinct tyrosines on the cytoplasmic tail, leading to the activation of cell signaling cascades. These signaling cascades ultimately regulate cellular proliferation, apoptosis, migration, survival and homeostasis of the cell. The vast majority of RTKs have been directly tied to the etiology and progression of cancer. Thus, using antibodies to target RTKs as a cancer therapeutic strategy has been intensely pursued. Although antibodies against the epidermal growth factor receptor (EGFR) and human epidermal growth factor receptor 2 (HER2) have shown promise in the clinical arena, the development of both intrinsic and acquired resistance to antibody-based therapies is now well appreciated. In this review we provide an overview of the RTK family, the biology of EGFR and HER2, as well as an in-depth review of the adaptive responses undertaken by cells in response to antibody based therapies directed against these receptors. A greater understanding of these mechanisms and their relevance in human models will lead to molecular insights in overcoming and circumventing resistance to antibody based therapy.

  18. Adaptive control with an expert system based supervisory level. Thesis

    NASA Technical Reports Server (NTRS)

    Sullivan, Gerald A.

    1991-01-01

    Adaptive control is presently one of the methods available which may be used to control plants with poorly modelled dynamics or time varying dynamics. Although many variations of adaptive controllers exist, a common characteristic of all adaptive control schemes, is that input/output measurements from the plant are used to adjust a control law in an on-line fashion. Ideally the adjustment mechanism of the adaptive controller is able to learn enough about the dynamics of the plant from input/output measurements to effectively control the plant. In practice, problems such as measurement noise, controller saturation, and incorrect model order, to name a few, may prevent proper adjustment of the controller and poor performance or instability result. In this work we set out to avoid the inadequacies of procedurally implemented safety nets, by introducing a two level control scheme in which an expert system based 'supervisor' at the upper level provides all the safety net functions for an adaptive controller at the lower level. The expert system is based on a shell called IPEX, (Interactive Process EXpert), that we developed specifically for the diagnosis and treatment of dynamic systems. Some of the more important functions that the IPEX system provides are: (1) temporal reasoning; (2) planning of diagnostic activities; and (3) interactive diagnosis. Also, because knowledge and control logic are separate, the incorporation of new diagnostic and treatment knowledge is relatively simple. We note that the flexibility available in the system to express diagnostic and treatment knowledge, allows much greater functionality than could ever be reasonably expected from procedural implementations of safety nets. The remainder of this chapter is divided into three sections. In section 1.1 we give a detailed review of the literature in the area of supervisory systems for adaptive controllers. In particular, we describe the evolution of safety nets from simple ad hoc techniques, up

  19. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    SciTech Connect

    Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.

    1998-12-10

    OAK-B135 Final Report: Symposium on Adaptive Methods for Partial Differential Equations. Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  20. Adaptive entropy-constrained discontinuous Galerkin method for simulation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Lv, Yu; Ihme, Matthias

    2015-11-01

    A robust and adaptive computational framework will be presented for high-fidelity simulations of turbulent flows based on the discontinuous Galerkin (DG) scheme. For this, an entropy-residual-based adaptation indicator is proposed to enable adaptation in polynomial and physical space. The performance and generality of this entropy-residual indicator are evaluated through direct comparisons with classical indicators. In addition, a dynamic load balancing procedure is developed to improve computational efficiency. The adaptive framework is tested on a series of turbulent test cases, including homogeneous isotropic turbulence, channel flow and flow over a cylinder. The accuracy, performance and scalability are assessed, and the benefit of this adaptive high-order method is discussed. Funding from the NSF CAREER award is gratefully acknowledged.

  1. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2002-10-19

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

  2. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2004-01-28

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

  3. Adaptive Estimation of Intravascular Shear Rate Based on Parameter Optimization

    NASA Astrophysics Data System (ADS)

    Nitta, Naotaka; Takeda, Naoto

    2008-05-01

    The relationships between the intravascular wall shear stress, which is governed by flow dynamics, and the progression of arteriosclerotic plaque have been clarified by various studies. Since the shear stress is determined by the viscosity coefficient and the shear rate, both factors must be estimated accurately. In this paper, an adaptive method for improving the accuracy of quantitative shear rate estimation was investigated. First, the parameter dependence of the estimated shear rate was investigated in terms of the differential window width and the number of averaged velocity profiles, based on simulation and experimental data, and then the shear rate calculation was optimized. The optimized results revealed that the proposed adaptive method of shear rate estimation was effective for improving the accuracy of the shear rate calculation.
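
    For readers who want a concrete picture of the estimation step, the following is a minimal Python sketch (not the authors' implementation) of estimating a shear rate profile by averaging several velocity profiles and fitting a local least-squares slope over a differential window; the window width and the number of averaged profiles are the two tuning parameters discussed in the abstract, and all function and variable names are illustrative.

        import numpy as np

        def estimate_shear_rate(velocity_profiles, dr, window=7):
            """Estimate the shear rate dv/dr from repeated velocity profiles.

            velocity_profiles : (n_profiles, n_points) array of velocities [m/s]
            dr                : radial sample spacing [m]
            window            : odd width of the differential (least-squares) window
            """
            v = np.mean(velocity_profiles, axis=0)        # averaging suppresses noise
            half = window // 2
            r_local = np.arange(-half, half + 1) * dr
            shear = np.full_like(v, np.nan)
            for i in range(half, len(v) - half):
                # slope of a local least-squares line = shear rate at sample i
                shear[i] = np.polyfit(r_local, v[i - half:i + half + 1], 1)[0]
            return shear

        # toy usage: noisy Poiseuille-like profile across a 2 mm vessel
        r = np.linspace(-1e-3, 1e-3, 101)
        true_v = 0.5 * (1.0 - (r / 1e-3) ** 2)
        profiles = true_v + 0.01 * np.random.randn(20, r.size)
        gamma = estimate_shear_rate(profiles, dr=r[1] - r[0], window=7)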

  4. Contrast-based sensorless adaptive optics for retinal imaging.

    PubMed

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew

    2015-09-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes.
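
    As a rough illustration of the idea (not the authors' metric or search strategy), the Python sketch below optimizes corrector mode coefficients by hill climbing on a simple contrast metric computed from the full-frame image; fake_camera, the normalized-variance metric, and the coordinate-wise search are all assumptions introduced for the example.

        import numpy as np

        def sharpness_metric(image):
            """Normalized intensity variance: a simple full-frame contrast metric."""
            image = image.astype(float)
            return image.var() / (image.mean() ** 2 + 1e-12)

        def sensorless_optimize(apply_modes_and_grab, n_modes, step=0.1, n_passes=3):
            """Coordinate-wise hill climbing over corrector mode coefficients.

            apply_modes_and_grab : callable(coeffs) -> 2-D image (hardware or simulation hook)
            """
            coeffs = np.zeros(n_modes)
            best = sharpness_metric(apply_modes_and_grab(coeffs))
            for _ in range(n_passes):
                for k in range(n_modes):
                    for delta in (+step, -step):
                        trial = coeffs.copy()
                        trial[k] += delta
                        score = sharpness_metric(apply_modes_and_grab(trial))
                        if score > best:
                            coeffs, best = trial, score
                step *= 0.5                               # refine the search each pass
            return coeffs, best

        # toy usage: a fake camera whose spot sharpens as the residual aberration shrinks
        rng = np.random.default_rng(0)
        hidden = rng.normal(scale=0.3, size=5)
        yy, xx = np.mgrid[-16:16, -16:16]
        def fake_camera(coeffs):
            width = 1.0 + 5.0 * np.sum((coeffs - hidden) ** 2)
            return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * width ** 2))

        coeffs, score = sensorless_optimize(fake_camera, n_modes=5)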

  5. Contrast-based sensorless adaptive optics for retinal imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T.O.; He, Zheng; Metha, Andrew

    2015-01-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes. PMID:26417525

  6. Speckle reduction in optical coherence tomography by adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun

    2015-12-01

    An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in the OCT image is investigated and measured. With the measured parameters, such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to OCT images of a volunteer's hand skin, and showed effective speckle noise reduction and image quality improvement. For comparison of image quality, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
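
    The core restoration step can be sketched in Python as follows; this is a generic gradient-descent total-variation denoiser in which the regularization weight is tied to the measured speckle statistics, a simplified stand-in for (not a reproduction of) the paper's adaptive scheme. The heuristic lam = k * noise_std and all names are assumptions.

        import numpy as np

        def tv_denoise(img, noise_std, n_iter=200, step=0.2, k=2.0, eps=1e-6):
            """Gradient descent on  0.5*||u - img||^2 + lam*TV(u),  with lam = k*noise_std."""
            u = img.astype(float).copy()
            lam = k * float(noise_std)                    # heavier smoothing for stronger speckle
            for _ in range(n_iter):
                ux = np.roll(u, -1, axis=1) - u           # forward differences
                uy = np.roll(u, -1, axis=0) - u
                mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
                px, py = ux / mag, uy / mag
                div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
                u -= step * ((u - img) - lam * div)       # descend the TV energy
            return u

        # toy usage: measure the noise level in a background patch, then restore
        rng = np.random.default_rng(1)
        clean = np.zeros((64, 64)); clean[20:44, 20:44] = 1.0
        noise = 0.2 * clean * rng.standard_normal(clean.shape) + 0.05 * rng.standard_normal(clean.shape)
        noisy = clean + noise
        noise_std = noisy[:15, :15].std()
        restored = tv_denoise(noisy, noise_std)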

  7. Adaptive computational methods for SSME internal flow analysis

    NASA Technical Reports Server (NTRS)

    Oden, J. T.

    1986-01-01

    Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving mesh methods) in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given and some pertinent conclusions of the study are listed.

  8. Adaptive clustering and adaptive weighting methods to detect disease associated rare variants.

    PubMed

    Sha, Qiuying; Wang, Shuaicheng; Zhang, Shuanglin

    2013-03-01

    Current statistical methods to test association between rare variants and phenotypes are essentially group-wise methods that collapse or aggregate all variants in a predefined group into a single variant. Compared with variant-by-variant methods, group-wise methods have their advantages. However, two factors may affect the power of these methods. One is that some of the causal variants may be protective. When both risk and protective variants are present, collapsing or aggregating all variants loses power because the effects of risk and protective variants counteract each other. The other is that not all variants in the group are causal; rather, a large proportion are believed to be neutral. When a large proportion of variants are neutral, collapsing or aggregating all variants may not be an optimal solution. We propose two alternative methods, the adaptive clustering (AC) method and the adaptive weighting (AW) method, aiming to test rare variant association in the presence of neutral and/or protective variants. Both AC and AW are applicable to quantitative as well as qualitative traits. Results of extensive simulation studies show that AC and AW have similar power, and both have clear advantages in power and computational efficiency compared with existing group-wise methods and existing data-driven methods that allow for neutral and protective variants. We recommend the AW method because it is computationally more efficient than the AC method.

  9. Analyzing Hedges in Verbal Communication: An Adaptation-Based Approach

    ERIC Educational Resources Information Center

    Wang, Yuling

    2010-01-01

    Based on Adaptation Theory, the article analyzes the production process of hedges. The procedure consists of the continuous making of choices in linguistic forms and communicative strategies. These choices are made just for adaptation to the contextual correlates. Besides, the adaptation process is dynamic, intentional and bidirectional.

  10. A new and efficient method to obtain benzalkonium chloride adapted cells of Listeria monocytogenes.

    PubMed

    Saá Ibusquiza, Paula; Herrera, Juan J R; Vázquez-Sánchez, Daniel; Parada, Adelaida; Cabo, Marta L

    2012-10-01

    A new method to obtain benzalkonium chloride (BAC)-adapted L. monocytogenes cells was developed. A factorial design was used to assess the effects of the inoculum size and BAC concentration on the adaptation (measured in terms of the lethal dose 50, LD50) of 6 strains of Listeria monocytogenes after only one exposure. The proposed method could be applied successfully to the L. monocytogenes strains with higher adaptive capacity to BAC. In those cases, a significant empirical equation was obtained, showing a positive effect of the inoculum size and a positive interaction between the effects of BAC and inoculum size on the level of adaptation achieved. However, a slight negative effect of the biocide itself was also significant. The proposed method improves on the classical method based on successive stationary-phase cultures in sublethal BAC concentrations because it is less time-consuming and more effective. For the laboratory strain L. monocytogenes 5873, applying the new procedure made it possible to increase BAC adaptation 3.69-fold in only 33 h, whereas with the classical procedure a 2.61-fold increase was reached after 5 days. Moreover, with the new method, the maximum level of adaptation was determined for all the strains, surprisingly reaching almost the same concentration of BAC (mg/l) for 5 out of 6 strains. Thus, a good reference for establishing the effective concentrations of biocides to ensure the maximum level of adaptation was also determined.

  11. Adaptive windowed range-constrained Otsu method using local information

    NASA Astrophysics Data System (ADS)

    Zheng, Jia; Zhang, Dinghua; Huang, Kuidong; Sun, Yuanxi; Tang, Shaojie

    2016-01-01

    An adaptive windowed range-constrained Otsu method using local information is proposed for improving the performance of image segmentation. First, the reasons why traditional thresholding methods do not perform well in the segmentation of complicated images are analyzed, and the influences of global and local thresholding on image segmentation are compared. Second, we propose two methods that adaptively change the size of the local window according to local information, and analyze their characteristics; the number of edge pixels in the local window of the binarized variance image is employed to adaptively change the local window size. Finally, the superiority of the proposed method over other methods, such as the range-constrained Otsu method, the active contour model, the double Otsu method, Bradley's method, and distance-regularized level set evolution, is demonstrated. The experiments validate that the proposed method keeps more details and achieves a much more satisfactory area overlap measure than the other conventional methods.
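
    As background for the non-adaptive core of the approach, here is a minimal Python sketch of Otsu thresholding applied inside a local window of fixed size; the adaptive window-size rule driven by edge-pixel counts, which is the paper's contribution, is not reproduced, and the window parameter is illustrative.

        import numpy as np

        def otsu_threshold(values, n_bins=256):
            """Classical Otsu: choose the threshold maximizing between-class variance."""
            hist, edges = np.histogram(values, bins=n_bins)
            p = hist.astype(float) / max(hist.sum(), 1)
            centers = 0.5 * (edges[:-1] + edges[1:])
            w0 = np.cumsum(p)                             # class-0 probability
            mu = np.cumsum(p * centers)                   # cumulative mean
            mu_t = mu[-1]
            with np.errstate(divide="ignore", invalid="ignore"):
                sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
            sigma_b[~np.isfinite(sigma_b)] = 0.0
            return centers[np.argmax(sigma_b)]

        def windowed_otsu(img, window=31):
            """Threshold each pixel with an Otsu threshold computed in its local window."""
            half = window // 2
            padded = np.pad(img, half, mode="reflect")
            out = np.zeros(img.shape, dtype=bool)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    patch = padded[i:i + window, j:j + window]
                    out[i, j] = img[i, j] > otsu_threshold(patch)
            return out

        # usage: binary = windowed_otsu(gray_image_as_float_array)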

  12. Adaptive DFT-Based Interferometer Fringe Tracking

    NASA Astrophysics Data System (ADS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) Observatory at Mount Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier-transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on offline data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse. One example of such an application might be to the field of thin-film measurement by ellipsometry, using a broadband light source and a Fourier-transform spectrometer to detect the resulting fringe patterns.
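
    The essential computation, a windowed DFT evaluated only at the fringe carrier frequency of each scan, can be sketched in Python as below; this is a generic single-bin sliding DFT phase tracker, not the IOTA real-time implementation, and the sampling parameters in the toy example are invented.

        import numpy as np

        def track_fringe_phase(signal, carrier_freq, fs, window=256, hop=64):
            """Track the phase (proportional to OPD drift) of a near-constant-frequency carrier.

            A Hann-windowed DFT is evaluated only at the carrier bin of each sliding
            window; unwrapping the resulting angles gives the slow phase drift.
            """
            win = np.hanning(window)
            phases = []
            for start in range(0, len(signal) - window + 1, hop):
                n = start + np.arange(window)             # absolute sample index
                probe = win * np.exp(-2j * np.pi * carrier_freq * n / fs)
                phases.append(np.angle(np.dot(probe, signal[start:start + window])))
            return np.unwrap(np.array(phases))

        # toy usage: a 5 kHz fringe carrier sampled at 100 kHz with a slow phase drift
        fs, f0 = 100e3, 5e3
        t = np.arange(20000) / fs
        drift = 0.5 * np.sin(2 * np.pi * 3.0 * t)         # slowly varying phase [rad]
        scan = np.cos(2 * np.pi * f0 * t + drift) + 0.1 * np.random.randn(t.size)
        opd_phase = track_fringe_phase(scan, f0, fs)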

  13. New developments in adaptive methods for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Oden, J. T.; Bass, Jon M.

    1990-01-01

    New developments in a posteriori error estimates, smart algorithms, and h- and h-p adaptive finite element methods are discussed in the context of two- and three-dimensional compressible and incompressible flow simulations. Applications to rotor-stator interaction, rotorcraft aerodynamics, shock and viscous boundary layer interaction and fluid-structure interaction problems are discussed.

  14. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
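
    Since the abstract is truncated, it may help to recall the basic (non-adaptive) Kalman filter that the report builds on; the Python sketch below implements the standard predict/update recursion for a linear-Gaussian state-space model, with a constant-velocity tracking example whose matrices are illustrative.

        import numpy as np

        def kalman_filter(y, F, H, Q, R, x0, P0):
            """Standard Kalman filter for x_t = F x_{t-1} + w,  y_t = H x_t + v."""
            x, P = np.array(x0, dtype=float), np.array(P0, dtype=float)
            estimates = []
            for obs in y:
                # predict
                x = F @ x
                P = F @ P @ F.T + Q
                # update
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
                x = x + K @ (np.atleast_1d(obs) - H @ x)
                P = (np.eye(len(x)) - K @ H) @ P
                estimates.append(x.copy())
            return np.array(estimates)

        # toy usage: track position and velocity from noisy position readings
        dt = 1.0
        F = np.array([[1.0, dt], [0.0, 1.0]])             # constant-velocity model
        H = np.array([[1.0, 0.0]])
        Q = 0.01 * np.eye(2)
        R = np.array([[1.0]])
        truth = 0.5 * np.arange(50)
        y = truth + np.random.randn(50)
        states = kalman_filter(y, F, H, Q, R, x0=[0.0, 0.0], P0=np.eye(2))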

  15. A Conditional Exposure Control Method for Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Nering, Michael L.; Roussos, Louis A.

    2009-01-01

    In computerized adaptive testing (CAT), ensuring the security of test items is a crucial practical consideration. A common approach to reducing item theft is to define maximum item exposure rates, i.e., to limit the proportion of examinees to whom a given item can be administered. Numerous methods for controlling exposure rates have been proposed…

  16. A Mixture Rasch Model-Based Computerized Adaptive Test for Latent Class Identification

    ERIC Educational Resources Information Center

    Jiao, Hong; Macready, George; Liu, Junhui; Cho, Youngmi

    2012-01-01

    This study explored a computerized adaptive test delivery algorithm for latent class identification based on the mixture Rasch model. Four item selection methods based on the Kullback-Leibler (KL) information were proposed and compared with the reversed and the adaptive KL information under simulated testing conditions. When item separation was…

  17. Goal-based angular adaptivity applied to a wavelet-based discretisation of the neutral particle transport equation

    SciTech Connect

    Goffin, Mark A.; Buchan, Andrew G.; Dargaville, Steven; Pain, Christopher C.; Smith, Paul N.; Smedley-Stevenson, Richard P.

    2015-01-15

    A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation. -- Highlights: •Wavelet angular discretisation used to solve transport equation. •Adaptive method developed for the wavelet discretisation. •Anisotropic angular resolution demonstrated through the adaptive method. •Adaptive method provides improvements in computational efficiency.

  18. Adaptive Neural Network Based Control of Noncanonical Nonlinear Systems.

    PubMed

    Zhang, Yanjun; Tao, Gang; Chen, Mou

    2016-09-01

    This paper presents a new study on the adaptive neural network-based control of a class of noncanonical nonlinear systems with large parametric uncertainties. Unlike commonly studied canonical form nonlinear systems whose neural network approximation system models have explicit relative degree structures, which can directly be used to derive parameterized controllers for adaptation, noncanonical form nonlinear systems usually do not have explicit relative degrees, and thus their approximation system models are also in noncanonical forms. It is well-known that the adaptive control of noncanonical form nonlinear systems involves the parameterization of system dynamics. As demonstrated in this paper, it is also the case for noncanonical neural network approximation system models. Effective control of such systems is an open research problem, especially in the presence of uncertain parameters. This paper shows that it is necessary to reparameterize such neural network system models for adaptive control design, and that such reparameterization can be realized using a relative degree formulation, a concept yet to be studied for general neural network system models. This paper then derives the parameterized controllers that guarantee closed-loop stability and asymptotic output tracking for noncanonical form neural network system models. An illustrative example is presented with the simulation results to demonstrate the control design procedure, and to verify the effectiveness of such a new design method.

  19. Visual-adaptation-mechanism based underwater object extraction

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Wang, Huibin; Xu, Lizhong; Shen, Jie

    2014-03-01

    Due to the major obstacles originating from the strong light absorption and scattering in a dynamic underwater environment, underwater optical information acquisition and processing suffer from effects such as limited range, non-uniform lighting, low contrast, and diminished colors, causing them to become a bottleneck for marine scientific research and projects. After studying and generalizing the underwater biological visual mechanism, we explore its advantages in light adaptation, which helps animals to precisely sense the underwater scene and recognize their prey or enemies. Then, aiming to transfer the significant advantage of the visual adaptation mechanism to underwater computer vision tasks, a novel knowledge-based information weighting fusion model is established for underwater object extraction. With this bionic model, dynamic adaptability is given to the underwater object extraction task, making it more robust to the variability of the optical properties in different environments. The capability of the proposed method to adapt to underwater optical environments is shown, and its superior performance for object extraction is demonstrated by comparison experiments.

  20. ICASE/LaRC Workshop on Adaptive Grid Methods

    NASA Technical Reports Server (NTRS)

    South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)

    1995-01-01

    Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.

  1. Self-Adaptive Filon's Integration Method and Its Application to Computing Synthetic Seismograms

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Ming; Chen, Xiao-Fei

    2001-03-01

    Based on the principle of the self-adaptive Simpson integration method, and by incorporating the `fifth-order' Filon's integration algorithm [Bull. Seism. Soc. Am. 73(1983)913], we have proposed a simple and efficient numerical integration method, i.e., the self-adaptive Filon's integration method (SAFIM), for computing synthetic seismograms at large epicentral distances. With numerical examples, we have demonstrated that the SAFIM is not only accurate but also very efficient. This new integration method is expected to be very useful in seismology, as well as in computing similar oscillatory integrals in other branches of physics.
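
    The self-adaptive ingredient can be illustrated with the classical recursive adaptive Simpson rule, shown below in Python for an oscillatory integrand of the kind met in wavenumber integrals; the fifth-order Filon treatment of the oscillatory factor, which is the paper's actual building block, is not reproduced, and the toy integrand is invented.

        import numpy as np

        def adaptive_simpson(f, a, b, tol=1e-8, max_depth=30):
            """Recursive adaptive Simpson quadrature with local error control."""
            def simpson(fa, fm, fb, a, b):
                return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

            def recurse(a, b, fa, fm, fb, whole, tol, depth):
                m = 0.5 * (a + b)
                flm, frm = f(0.5 * (a + m)), f(0.5 * (m + b))
                left = simpson(fa, flm, fm, a, m)
                right = simpson(fm, frm, fb, m, b)
                err = left + right - whole
                if depth >= max_depth or abs(err) < 15.0 * tol:
                    return left + right + err / 15.0      # Richardson correction
                return (recurse(a, m, fa, flm, fm, left, 0.5 * tol, depth + 1)
                        + recurse(m, b, fm, frm, fb, right, 0.5 * tol, depth + 1))

            fa, fb, fm = f(a), f(b), f(0.5 * (a + b))
            return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol, 0)

        # toy usage: a decaying oscillatory integrand, refined only where it oscillates hard
        value = adaptive_simpson(lambda k: np.exp(-0.1 * k) * np.cos(3.0 * k), 0.0, 50.0)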

  2. A Simulation Study of Methods for Assessing Differential Item Functioning in Computerized Adaptive Tests.

    ERIC Educational Resources Information Center

    Zwick, Rebecca; And Others

    1994-01-01

    Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel method of differential item functioning (DIF) analysis in computerized adaptive tests (CAT). Results indicate that CAT-based DIF procedures perform well and support the use of item response theory-based matching variables in DIF analysis. (SLD)

  3. Free energy calculations: an efficient adaptive biasing potential method.

    PubMed

    Dickson, Bradley M; Legoll, Frédéric; Lelièvre, Tony; Stoltz, Gabriel; Fleurat-Lessard, Paul

    2010-05-06

    We develop an efficient sampling and free energy calculation technique within the adaptive biasing potential (ABP) framework. By mollifying the density of states we obtain an approximate free energy and an adaptive bias potential that is computed directly from the population along the coordinates of the free energy. Because of the mollifier, the bias potential is "nonlocal", and its gradient admits a simple analytic expression. A single observation of the reaction coordinate can thus be used to update the approximate free energy at every point within a neighborhood of the observation. This greatly reduces the equilibration time of the adaptive bias potential. This approximation introduces two parameters: strength of mollification and the zero of energy of the bias potential. While we observe that the approximate free energy is a very good estimate of the actual free energy for a large range of mollification strength, we demonstrate that the errors associated with the mollification may be removed via deconvolution. The zero of energy of the bias potential, which is easy to choose, influences the speed of convergence but not the limiting accuracy. This method is simple to apply to free energy or mean force computation in multiple dimensions and does not involve second derivatives of the reaction coordinates, matrix manipulations nor on-the-fly adaptation of parameters. For the alanine dipeptide test case, the new method is found to gain as much as a factor of 10 in efficiency as compared to two basic implementations of the adaptive biasing force methods, and it is shown to be as efficient as well-tempered metadynamics with the postprocess deconvolution giving a clear advantage to the mollified density of states method.
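
    To make the bias-update idea concrete, here is a toy 1-D Python sketch in the same spirit: each observation of the reaction coordinate deposits a small Gaussian ("mollified") contribution onto a tabulated bias potential that then pushes the overdamped Langevin dynamics out of visited regions. This is a generic illustration of mollified, population-driven biasing, not the paper's ABP algorithm, and every parameter value is invented.

        import numpy as np

        def run_biased_sampler(force, beta=1.0, n_steps=100_000, dt=1e-3,
                               grid=None, sigma=0.15, rate=5e-4, seed=0):
            """Toy adaptive-bias sampler on a 1-D reaction coordinate (overdamped Langevin)."""
            rng = np.random.default_rng(seed)
            grid = np.linspace(-2.5, 2.5, 201) if grid is None else grid
            dx = grid[1] - grid[0]
            bias = np.zeros_like(grid)
            x = 0.0
            for _ in range(n_steps):
                # deposit a mollified count at the current coordinate value
                bias += rate * np.exp(-0.5 * ((grid - x) / sigma) ** 2)
                # biasing force = -d(bias)/dx from the tabulated grid (central difference)
                i = int(np.clip((x - grid[0]) / dx, 1, len(grid) - 2))
                f_bias = -(bias[i + 1] - bias[i - 1]) / (2.0 * dx)
                # Langevin step under the physical force plus the biasing force
                x += dt * (force(x) + f_bias) + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
                x = float(np.clip(x, grid[0], grid[-1]))
            # at convergence the accumulated bias mirrors the free energy (up to a constant)
            return grid, bias.max() - bias

        # toy usage: double-well potential U(x) = (x^2 - 1)^2, force = -dU/dx
        grid, free_energy_estimate = run_biased_sampler(lambda x: -4.0 * x * (x * x - 1.0))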

  4. Adaptive discrete cosine transform based image coding

    NASA Astrophysics Data System (ADS)

    Hu, Neng-Chung; Luoh, Shyan-Wen

    1996-04-01

    In this discrete cosine transform (DCT) based image coding, the DCT kernel matrix is decomposed into a product of two matrices. The first matrix is called the discrete cosine preprocessing transform (DCPT), whose kernels are plus or minus 1 or plus or minus one- half. The second matrix is the postprocessing stage treated as a correction stage that converts the DCPT to the DCT. On applying the DCPT to image coding, image blocks are processed by the DCPT, then a decision is made to determine whether the processed image blocks are inactive or active in the DCPT domain. If the processed image blocks are inactive, then the compactness of the processed image blocks is the same as that of the image blocks processed by the DCT. However, if the processed image blocks are active, a correction process is required; this is achieved by multiplying the processed image block by the postprocessing stage. As a result, this adaptive image coding achieves the same performance as the DCT image coding, and both the overall computation and the round-off error are reduced, because both the DCPT and the postprocessing stage can be implemented by distributed arithmetic or fast computation algorithms.

  5. Examining adaptations of evidence-based programs in natural contexts.

    PubMed

    Moore, Julia E; Bumbarger, Brian K; Cooper, Brittany Rhoades

    2013-06-01

    When evidence-based programs (EBPs) are scaled up in natural, or non-research, settings, adaptations are commonly made. Given the fidelity-versus-adaptation debate, theoretical rationales have been provided for the pros and cons of adaptations. Yet the basis of this debate is theoretical; thus, empirical evidence is needed to understand the types of adaptations made in natural settings. In the present study, we introduce a taxonomy for understanding adaptations. This taxonomy addresses several aspects of adaptations made to programs including the fit (philosophical or logistical), timing (proactive or reactive), and valence, or the degree to which the adaptations align with the program's goals and theory (positive, negative, or neutral). Self-reported qualitative data from communities delivering one of ten state-funded EBPs were coded based on the taxonomy constructs; additionally, quantitative data were used to examine the types and reasons for making adaptations under natural conditions. Forty-four percent of respondents reported making adaptations. Adaptations to the procedures, dosage, and content were cited most often. Lack of time, limited resources, and difficulty retaining participants were listed as the most common reasons for making adaptations. Most adaptations were made reactively, as a result of issues of logistical fit, and were not aligned with, or deviated from, the program's goals and theory.

  6. Adaptable Metadata Rich IO Methods for Portable High Performance IO

    SciTech Connect

    Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten

    2009-01-01

    Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as that provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small

  7. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.

  8. Adaptive Kaczmarz Method for Image Reconstruction in Electrical Impedance Tomography

    PubMed Central

    Li, Taoran; Kao, Tzu-Jen; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.

    2013-01-01

    We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term JᵀJ which could be expensive in terms of computation cost and memory in large scale problems, we propose solving the inverse problem by applying the optimal current patterns for distinguishing the actual conductivity from the conductivity estimate between each iteration of the block Kaczmarz algorithm. With a novel subset scheme, the memory-efficient reconstruction algorithm which appropriately combines the optimal current pattern generation with the Kaczmarz method can produce more accurate and stable solutions adaptively as compared to traditional Kaczmarz and Gauss-Newton type methods. Choices of initial current pattern estimates are discussed in the paper. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results. PMID:23718952
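
    For orientation, the underlying (non-adaptive) Kaczmarz iteration that the paper builds on is just a cyclic sequence of row projections; a minimal Python sketch for a generic linear system is given below. The optimal-current-pattern selection and the block/subset scheme that make the method adaptive are not reproduced.

        import numpy as np

        def kaczmarz(A, b, n_sweeps=50, x0=None):
            """Cyclic Kaczmarz: project the estimate onto one row's hyperplane at a time."""
            m, n = A.shape
            x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
            row_norms = np.einsum("ij,ij->i", A, A)       # squared norms of the rows
            for _ in range(n_sweeps):
                for i in range(m):
                    if row_norms[i] > 0.0:
                        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x

        # toy usage: a consistent overdetermined system
        rng = np.random.default_rng(0)
        A = rng.standard_normal((80, 20))
        x_true = rng.standard_normal(20)
        x_est = kaczmarz(A, A @ x_true)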

  9. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    SciTech Connect

    Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron

    1998-12-08

    Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  10. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction in the number of elements used and CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel

  11. Methods for prismatic/tetrahedral grid generation and adaptation

    NASA Technical Reports Server (NTRS)

    Kallinderis, Y.

    1995-01-01

    The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.

  12. Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.

    2008-01-01

    This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.

  13. Methods for prismatic/tetrahedral grid generation and adaptation

    NASA Astrophysics Data System (ADS)

    Kallinderis, Y.

    1995-10-01

    The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.

  14. Adaptive Training Considerations for Use in Simulation-Based Systems

    DTIC Science & Technology

    2010-09-01

    In this report, we examine theoretical and empirical papers that describe adaptive training (AT), an advanced training method, and the effectiveness of various instructional techniques and methods. In AT, some

  15. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  16. Adaptive color correction based on object color classification

    NASA Astrophysics Data System (ADS)

    Kotera, Hiroaki; Morimoto, Tetsuro; Yasue, Nobuyuki; Saito, Ryoichi

    1998-09-01

    An adaptive color management strategy depending on the image contents is proposed. A pictorial color image is classified into different object areas with clustered color distributions. Euclidean or Mahalanobis color distance measures, and a maximum likelihood method based on the Bayesian decision rule, are introduced for the classification. After the classification process, each cluster of pixels is projected onto the principal component space by the Hotelling transform, and color correction is performed so that the principal components of corresponding clustered color areas in the original and printed images match each other.
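
    The classification and projection steps mentioned in the abstract can be sketched generically in Python as below: pixels are assigned to the color cluster with the smallest Mahalanobis distance and each cluster is then projected onto its principal axes (Hotelling transform). The cluster statistics and the final matching/correction between original and printed clusters are not reproduced, and the synthetic data are invented.

        import numpy as np

        def mahalanobis_classify(pixels, means, covs):
            """Assign each color pixel to the cluster with the smallest Mahalanobis distance."""
            d2 = []
            for mu, cov in zip(means, covs):
                inv = np.linalg.inv(cov)
                diff = pixels - mu
                d2.append(np.einsum("ij,jk,ik->i", diff, inv, diff))
            return np.argmin(np.stack(d2, axis=1), axis=1)

        def hotelling_components(cluster_pixels):
            """Principal axes (Hotelling transform) of one cluster's color distribution."""
            cov = np.cov(cluster_pixels, rowvar=False)
            eigvals, eigvecs = np.linalg.eigh(cov)
            order = np.argsort(eigvals)[::-1]             # sort by decreasing variance
            return eigvals[order], eigvecs[:, order]

        # toy usage: two synthetic RGB clusters
        rng = np.random.default_rng(2)
        reds = rng.multivariate_normal([200, 60, 60], 40 * np.eye(3), 500)
        blues = rng.multivariate_normal([60, 60, 200], 40 * np.eye(3), 500)
        pixels = np.vstack([reds, blues])
        means = [reds.mean(axis=0), blues.mean(axis=0)]
        covs = [np.cov(reds, rowvar=False), np.cov(blues, rowvar=False)]
        labels = mahalanobis_classify(pixels, means, covs)
        variances, axes = hotelling_components(pixels[labels == 0])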

  17. Adaptive Rule Based Fetal QRS Complex Detection Using Hilbert Transform

    PubMed Central

    Ulusar, Umit D.; Govindan, R.B.; Wilson, James D.; Lowery, Curtis L.; Preissl, Hubert; Eswaran, Hari

    2010-01-01

    In this paper we introduce an adaptive rule-based QRS detection algorithm using the Hilbert transform (adHQRS) for fetal magnetocardiography processing. The Hilbert transform is used to combine multiple-channel measurements, and the adaptive rule-based decision process is used to eliminate spurious beats. The algorithm has been tested with a large number of datasets and promising results were obtained. PMID:19964648

  18. Adaptive rule based fetal QRS complex detection using Hilbert transform.

    PubMed

    Ulusar, Umit D; Govindan, R B; Wilson, James D; Lowery, Curtis L; Preissl, Hubert; Eswaran, Hari

    2009-01-01

    In this paper we introduce an adaptive rule-based QRS detection algorithm using the Hilbert transform (adHQRS) for fetal magnetocardiography processing. The Hilbert transform is used to combine multiple-channel measurements, and the adaptive rule-based decision process is used to eliminate spurious beats. The algorithm has been tested with a large number of datasets and promising results were obtained.
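
    To illustrate the two ingredients named in the abstract, the Python sketch below combines channels through their Hilbert-transform envelopes and then applies two simple rules (an adaptive amplitude threshold and a refractory period) to reject spurious beats; the actual adHQRS rule set is not reproduced, and all thresholds and parameter values here are invented.

        import numpy as np
        from scipy.signal import hilbert

        def detect_beats(channels, fs, refractory_s=0.15, k=3.0):
            """Rule-based beat detection on a combined multichannel Hilbert envelope.

            channels : (n_channels, n_samples) array
            """
            env = np.abs(hilbert(channels, axis=1)).sum(axis=0)   # combine channels
            mad = np.median(np.abs(env - np.median(env)))
            thresh = np.median(env) + k * mad                     # adaptive amplitude rule
            refractory = int(refractory_s * fs)                   # minimum beat spacing rule
            beats, last = [], -refractory
            for i in range(1, len(env) - 1):
                is_peak = env[i] > thresh and env[i] >= env[i - 1] and env[i] > env[i + 1]
                if is_peak and i - last >= refractory:
                    beats.append(i)
                    last = i
            return np.array(beats)

        # toy usage: two noisy channels containing a beat every 0.4 s
        fs = 1000
        t = np.arange(0, 10.0, 1.0 / fs)
        impulse = np.zeros_like(t)
        impulse[(np.arange(0.0, 10.0, 0.4) * fs).astype(int)] = 1.0
        pulse = np.exp(-0.5 * (np.arange(-50, 51) / 8.0) ** 2)
        beat_signal = np.convolve(impulse, pulse, mode="same")
        channels = np.vstack([beat_signal + 0.05 * np.random.randn(t.size) for _ in range(2)])
        beat_indices = detect_beats(channels, fs)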

  19. Turbulent Output-Based Anisotropic Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Carlson, Jan-Renee

    2010-01-01

    Controlling discretization error is a remaining challenge for computational fluid dynamics simulation. Grid adaptation is applied to reduce estimated discretization error in drag or pressure integral output functions. To enable application to high O(10^7) Reynolds number turbulent flows, a hybrid approach is utilized that freezes the near-wall boundary layer grids and adapts the grid away from the no slip boundaries. The hybrid approach is not applicable to problems with under resolved initial boundary layer grids, but is a powerful technique for problems with important off-body anisotropic features. Supersonic nozzle plume, turbulent flat plate, and shock-boundary layer interaction examples are presented with comparisons to experimental measurements of pressure and velocity. Adapted grids are produced that resolve off-body features in locations that are not known a priori.

  20. A self-adaptive-grid method with application to airfoil flow

    NASA Technical Reports Server (NTRS)

    Nakahashi, K.; Deiwert, G. S.

    1985-01-01

    A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
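
    A 1-D caricature of the spring analogy may be helpful: spring stiffness is taken from a user-supplied weight (a proxy for solution error), interior nodes are relaxed toward spring equilibrium, and spacings are clipped to user-specified minimum and maximum values. This Python sketch only illustrates the redistribution idea, not the paper's multidimensional, variational formulation; all parameter values are invented.

        import numpy as np

        def redistribute(x, weight, ds_min, ds_max, n_iter=200, relax=0.3):
            """1-D spring-analogy redistribution: stiffer springs (larger weight) contract more."""
            x = x.astype(float).copy()
            a, b = x[0], x[-1]
            for _ in range(n_iter):
                k = weight(0.5 * (x[:-1] + x[1:]))        # stiffness at interval midpoints
                for i in range(1, len(x) - 1):
                    x_eq = (k[i - 1] * x[i - 1] + k[i] * x[i + 1]) / (k[i - 1] + k[i])
                    x[i] += relax * (x_eq - x[i])         # relax node toward spring equilibrium
                ds = np.clip(np.diff(x), ds_min, ds_max)  # user-specified spacing bounds
                x = a + np.concatenate(([0.0], np.cumsum(ds)))
                x = a + (x - a) * (b - a) / (x[-1] - a)   # rescale so the endpoints stay fixed
            return x

        # toy usage: cluster grid points around a steep feature at x = 0.5
        x0 = np.linspace(0.0, 1.0, 41)
        w = lambda s: 1.0 + 50.0 * np.exp(-((s - 0.5) / 0.05) ** 2)
        x_adapted = redistribute(x0, w, ds_min=0.005, ds_max=0.08)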

  1. Passivity-Based Adaptive Hybrid Synchronization of a New Hyperchaotic System with Uncertain Parameters

    PubMed Central

    2012-01-01

    We investigate the adaptive hybrid synchronization problem for a new hyperchaotic system with uncertain parameters. Based on the passivity theory and the adaptive control theory, corresponding controllers and parameter estimation update laws are proposed to achieve hybrid synchronization between two identical uncertain hyperchaotic systems with different initial values, respectively. Numerical simulation indicates that the presented methods work effectively. PMID:23365538

  2. Passivity-based adaptive hybrid synchronization of a new hyperchaotic system with uncertain parameters.

    PubMed

    Zhou, Xiaobing; Fan, Zhangbiao; Zhou, Dongming; Cai, Xiaomei

    2012-01-01

    We investigate the adaptive hybrid synchronization problem for a new hyperchaotic system with uncertain parameters. Based on the passivity theory and the adaptive control theory, corresponding controllers and parameter estimation update laws are proposed to achieve hybrid synchronization between two identical uncertain hyperchaotic systems with different initial values, respectively. Numerical simulation indicates that the presented methods work effectively.

  3. Web-Based Adaptive Testing System (WATS) for Classifying Students Academic Ability

    ERIC Educational Resources Information Center

    Lee, Jaemu; Park, Sanghoon; Kim, Kwangho

    2012-01-01

    Computer Adaptive Testing (CAT) has been highlighted as a promising assessment method to fulfill two testing purposes: estimating student academic ability and classifying student academic level. In this paper, we introduce the Web-based Adaptive Testing System (WATS), developed to support a cost-effective assessment for classifying…

  4. Adaptive SVD-Based Digital Image Watermarking

    NASA Astrophysics Data System (ADS)

    Shirvanian, Maliheh; Torkamani Azar, Farah

    Digital data utilization, along with the increasing popularity of the Internet, has facilitated information sharing and distribution. However, such applications have also raised concerns about copyright issues and the unauthorized modification and distribution of digital data. Digital watermarking techniques, which are proposed to solve these problems, hide some information in digital media and extract it whenever needed to indicate the data owner. In this paper a new method of image watermarking based on the singular value decomposition (SVD) of images is proposed, which takes the human visual system into account prior to embedding the watermark by segmenting the original image into several blocks of different sizes, with higher block density at the edges of the image. In this way the original image quality is preserved in the watermarked image. Additional advantages of the proposed technique are the large watermark embedding capacity and the robustness of the method against different types of image manipulation techniques.
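
    The core singular-value embedding can be sketched in Python as below for a single block; the HVS-guided segmentation into variable-size blocks, which is the adaptive part of the proposed method, is not reproduced, and the embedding strength alpha and the data are illustrative.

        import numpy as np

        def svd_embed(block, watermark, alpha=0.05):
            """Embed a watermark into the singular values of one image block."""
            U, S, Vt = np.linalg.svd(block.astype(float), full_matrices=False)
            D = np.diag(S) + alpha * watermark            # perturb the singular-value matrix
            Uw, Sw, Vwt = np.linalg.svd(D, full_matrices=False)
            watermarked = U @ np.diag(Sw) @ Vt
            return watermarked, (Uw, Vwt, S)              # side information for extraction

        def svd_extract(watermarked, side_info, alpha=0.05):
            """Recover the watermark from a (possibly attacked) watermarked block."""
            Uw, Vwt, S = side_info
            _, Sw, _ = np.linalg.svd(watermarked.astype(float), full_matrices=False)
            D_est = Uw @ np.diag(Sw) @ Vwt
            return (D_est - np.diag(S)) / alpha

        # toy usage
        rng = np.random.default_rng(3)
        block = rng.integers(0, 256, size=(64, 64)).astype(float)
        mark = rng.standard_normal((64, 64))
        wm_block, side = svd_embed(block, mark)
        recovered = svd_extract(wm_block, side)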

  5. Investigation of the Multiple Method Adaptive Control (MMAC) method for flight control systems

    NASA Technical Reports Server (NTRS)

    Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.

    1979-01-01

    The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.

  6. A two-dimensional adaptive mesh generation method

    NASA Astrophysics Data System (ADS)

    Altas, Irfan; Stephenson, John W.

    1991-05-01

    The present two-dimensional adaptive mesh-generation method allows selective modification of a small portion of the mesh without affecting large areas of adjacent mesh points, and is applicable with or without boundary-fitted coordinate-generation procedures. Discretization of the differential equations, both with classical difference formulas designed for uniform meshes and with the present difference formulas, is illustrated by applying the method to the Hiemenz flow, for which the exact solution of the Navier-Stokes equations is known, as well as to a two-dimensional viscous internal flow problem.

  7. Adaptively wavelet-based image denoising algorithm with edge preserving

    NASA Astrophysics Data System (ADS)

    Tan, Yihua; Tian, Jinwen; Liu, Jian

    2006-02-01

    A new wavelet-based image denoising algorithm, which exploits the edge information hidden in the corrupted image, is presented. First, a Canny-like edge detector identifies the edges in each subband. Second, the wavelet coefficients in neighboring scales are multiplied to suppress the noise while magnifying the edge information, and the result is used to exclude fake edges; isolated edge pixels are also identified as noise. Unlike thresholding methods, we then apply a local window filter in the wavelet domain to remove noise, with the variance estimation elaborated to utilize the edge information. This method is adaptive to local image details and can achieve better performance than state-of-the-art methods.

  8. Adaptive P300 based control system

    PubMed Central

    Jin, Jing; Allison, Brendan Z.; Sellers, Eric W.; Brunner, Clemens; Horki, Petar; Wang, Xingyu; Neuper, Christa

    2015-01-01

    An adaptive P300 brain-computer interface (BCI) using a 12 × 7 matrix explored new paradigms to improve bit rate and accuracy. During online use, the system adaptively selects the number of flashes to average. Five different flash patterns were tested. The 19-flash paradigm represents the typical row/column presentation (i.e., 12 columns and 7 rows). The 9- and 14-flash A & B paradigms present all items of the 12 × 7 matrix three times using either nine or 14 flashes (instead of 19), decreasing the amount of time to present stimuli. Compared to 9-flash A, 9-flash B decreased the likelihood that neighboring items would flash when the target was not flashing, thereby reducing interference from items adjacent to targets. 14-flash A also reduced adjacent item interference and 14-flash B additionally eliminated successive (double) flashes of the same item. Results showed that accuracy and bit rate of the adaptive system were higher than the non-adaptive system. In addition, 9- and 14-flash B produced significantly higher performance than their respective A conditions. The results also show the trend that the 14-flash B paradigm was better than the 19-flash pattern for naïve users. PMID:21474877

  9. Adaptive Current Control Method for Hybrid Active Power Filter

    NASA Astrophysics Data System (ADS)

    Chau, Minh Thuyen

    2016-09-01

    This paper proposes an adaptive current control method for a Hybrid Active Power Filter (HAPF). It consists of a fuzzy-neural controller, an identification and prediction model, and a cost function. The fuzzy-neural controller parameters are adjusted according to a cost-function minimization criterion, so the proposed control method can track variations of the load harmonic currents online. Compared with a single fuzzy logic control method, the proposed control method offers better dynamic response, smaller steady-state compensation error, better online control capability and more effective harmonic cancellation. Simulation and experimental results have demonstrated the effectiveness of the proposed control method.

  10. Parallel, adaptive finite element methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.

  11. An extended framework for adaptive playback-based video summarization

    NASA Astrophysics Data System (ADS)

    Peker, Kadir A.; Divakaran, Ajay

    2003-11-01

    In our previous work, we described an adaptive fast playback framework for video summarization where we changed the playback rate using the motion activity feature so as to maintain a constant "pace." This method provides an effective way of skimming through video, especially when the motion is not too complex and the background is mostly still, such as in surveillance video. In this paper, we present an extended summarization framework that, in addition to motion activity, uses semantic cues such as face or skin color appearance, speech and music detection, or other domain dependent semantically significant events to control the playback rate. The semantic features we use are computationally inexpensive and can be computed in compressed domain, yet are robust, reliable, and have a wide range of applicability across different content types. The presented framework also allows for adaptive summaries based on preference, for example, to include more dramatic vs. action elements, or vice versa. The user can switch at any time between the skimming and the normal playback modes. The continuity of the video is preserved, and complete omission of segments that may be important to the user is avoided by using adaptive fast playback instead of skipping over long segments. The rule-set and the input parameters can be further modified to fit a certain domain or application. Our framework can be used by itself, or as a subsequent presentation stage for a summary produced by any other summarization technique that relies on generating a sub-set of the content.

  12. Wavelet-Based Adaptive Solvers on Multi-core Architectures for the Simulation of Complex Systems

    NASA Astrophysics Data System (ADS)

    Rossinelli, Diego; Bergdorf, Michael; Hejazialhosseini, Babak; Koumoutsakos, Petros

    We build wavelet-based adaptive numerical methods for the simulation of advection dominated flows that develop multiple spatial scales, with an emphasis on fluid mechanics problems. Wavelet based adaptivity is inherently sequential and in this work we demonstrate that these numerical methods can be implemented in software that is capable of harnessing the capabilities of multi-core architectures while maintaining their computational efficiency. Recent designs in frameworks for multi-core software development allow us to rethink parallelism as task-based, where parallel tasks are specified and automatically mapped into physical threads. This way of exposing parallelism enables the parallelization of algorithms that were considered inherently sequential, such as wavelet-based adaptive simulations. In this paper we present a framework that combines wavelet-based adaptivity with the task-based parallelism. We demonstrate good scaling performance obtained by simulating diverse physical systems on different multi-core and SMP architectures using up to 16 cores.

  13. A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES

    SciTech Connect

    Druckmueller, M.

    2013-08-15

    A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.

  14. An Adaptive Altitude Information Fusion Method for Autonomous Landing Processes of Small Unmanned Aerial Rotorcraft

    PubMed Central

    Lei, Xusheng; Li, Jingjing

    2012-01-01

    This paper presents an adaptive information fusion method to improve the accuracy and reliability of the altitude measurement information for small unmanned aerial rotorcraft during the landing process. Focusing on the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high frequency noise in the sensor output. Furthermore, to improve the altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate the measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is demonstrated by static tests, hovering flight and autonomous landing flight tests. PMID:23201993
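
    A minimal sketch of the adaptive idea: a scalar Kalman filter whose measurement-noise variance R is re-estimated online from a window of innovations (an innovation-based approximation in the spirit of the paper's maximum a posteriori estimator). The wavelet pre-filter and the full EKF of the altimeter fusion problem are omitted, and the constant-altitude model is an assumption for the demo.

```python
from collections import deque
import numpy as np

class AdaptiveKF1D:
    def __init__(self, q=0.01, r0=1.0, window=30):
        self.x, self.P, self.Q, self.R = 0.0, 1.0, q, r0
        self.innov = deque(maxlen=window)

    def update(self, z):
        P_pred = self.P + self.Q              # time update (constant-altitude model)
        nu = z - self.x                       # innovation
        S = P_pred + self.R
        K = P_pred / S
        self.x += K * nu                      # measurement update
        self.P = (1.0 - K) * P_pred
        self.innov.append(nu)                 # adapt R from sample innovation covariance
        c_nu = np.mean(np.square(self.innov))
        self.R = max(c_nu - P_pred, 1e-6)     # keep the estimated R positive
        return self.x
```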

  15. A comparison of locally adaptive multigrid methods: LDC, FAC and FIC

    NASA Technical Reports Server (NTRS)

    Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul

    1993-01-01

    This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.

  16. The block adaptive multigrid method applied to the solution of the Euler equations

    NASA Technical Reports Server (NTRS)

    Pantelelis, Nikos

    1993-01-01

    In the present study, a very fast and robust scheme for solving complex nonlinear systems of equations is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on the prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (an 18-fold acceleration of the solution) using one fourth of the volumes of a global grid with the same solution accuracy for two test cases.

  17. Adaptive Training for Voice Conversion Based on Eigenvoices

    NASA Astrophysics Data System (ADS)

    Ohtani, Yamato; Toda, Tomoki; Saruwatari, Hiroshi; Shikano, Kiyohiro

    In this paper, we describe a novel model training method for one-to-many eigenvoice conversion (EVC). One-to-many EVC is a technique for converting a specific source speaker's voice into an arbitrary target speaker's voice. An eigenvoice Gaussian mixture model (EV-GMM) is trained in advance using multiple parallel data sets consisting of utterance-pairs of the source speaker and many pre-stored target speakers. The EV-GMM can be adapted to new target speakers using only a few of their arbitrary utterances by estimating a small number of adaptive parameters. In the adaptation process, several parameters of the EV-GMM to be fixed for different target speakers strongly affect the conversion performance of the adapted model. In order to improve the conversion performance in one-to-many EVC, we propose an adaptive training method of the EV-GMM. In the proposed training method, both the fixed parameters and the adaptive parameters are optimized by maximizing a total likelihood function of the EV-GMMs adapted to individual pre-stored target speakers. We conducted objective and subjective evaluations to demonstrate the effectiveness of the proposed training method. The experimental results show that the proposed adaptive training yields significant quality improvements in the converted speech.

  18. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    NASA Astrophysics Data System (ADS)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, it usually requires a relatively large ensemble size to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos expansion (PCE) to approximate the original system, so that the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on numerical cases of unsaturated flow. It is shown that RAPCKF is more accurate than EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.

  19. Spatially adaptive block-based super-resolution.

    PubMed

    Su, Heng; Tang, Liang; Wu, Ying; Tretter, Daniel; Zhou, Jie

    2012-03-01

    Super-resolution technology provides an effective way to increase image resolution by incorporating additional information from successive input images or training samples. Various super-resolution algorithms have been proposed based on different assumptions, and their relative performances can differ in regions of different characteristics within a single image. Based on this observation, an adaptive algorithm is proposed in this paper to integrate a higher level image classification task and a lower level super-resolution process, in which we incorporate reconstruction-based super-resolution algorithms, single-image enhancement, and image/video classification into a single comprehensive framework. The target high-resolution image plane is divided into adaptive-sized blocks, and different suitable super-resolution algorithms are automatically selected for the blocks. Then, a deblocking process is applied to reduce block edge artifacts. A new benchmark is also utilized to measure the performance of super-resolution algorithms. Experimental results with real-life videos indicate encouraging improvements with our method.

  20. Planetary gearbox fault diagnosis using an adaptive stochastic resonance method

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Han, Dong; Lin, Jing; He, Zhengjia

    2013-07-01

    Planetary gearboxes are widely used in aerospace, automotive and heavy industry applications due to their large transmission ratio, strong load-bearing capacity and high transmission efficiency. Harsh operating conditions involving heavy duty and intensive impact loads may cause gear tooth damage such as fatigue cracks and missing teeth. The challenging issues in fault diagnosis of planetary gearboxes include selection of sensitive measurement locations, investigation of vibration transmission paths and weak feature extraction; one of them is how to effectively discover the weak characteristics of faulty components from noisy signals. To address this issue, an adaptive stochastic resonance (ASR) method is proposed in this paper. The ASR method utilizes the optimization ability of ant colony algorithms and adaptively realizes the optimal stochastic resonance system matching the input signals. Using the ASR method, the noise may be weakened and weak characteristics highlighted, so that the faults can be diagnosed accurately. A planetary gearbox test rig is established and experiments with sun gear faults, including a chipped tooth and a missing tooth, are conducted. Vibration signals are collected under loaded conditions at various motor speeds. The proposed method is used to process the collected signals, and the results of feature extraction and fault diagnosis demonstrate its effectiveness.
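
    A sketch of the stochastic-resonance step: the measured signal drives a bistable system dx/dt = a*x - b*x**3 + s(t), and (a, b) are tuned to maximise the spectral peak at an assumed-known fault characteristic frequency. The paper optimises with an ant colony algorithm; a plain grid search stands in for it here, and the Euler step and peak-SNR measure are illustrative simplifications.

```python
import numpy as np

def bistable_sr(signal, fs, a, b):
    x, dt = 0.0, 1.0 / fs
    out = np.empty(len(signal))
    for i, s in enumerate(signal):
        x += dt * (a * x - b * x ** 3 + s)    # Euler step of the bistable system
        out[i] = x
    return out

def peak_snr(y, fs, f0, bw=2.0):
    f = np.fft.rfftfreq(len(y), 1.0 / fs)
    p = np.abs(np.fft.rfft(y)) ** 2
    return p[np.abs(f - f0) < bw].max() / (np.median(p[1:]) + 1e-12)

def adapt_sr(signal, fs, f_fault, a_grid, b_grid):
    # grid search replaces the paper's ant colony optimiser
    return max(((a, b) for a in a_grid for b in b_grid),
               key=lambda ab: peak_snr(bistable_sr(signal, fs, *ab), fs, f_fault))
```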

  1. Adaptive control based on retrospective cost optimization

    NASA Technical Reports Server (NTRS)

    Santillo, Mario A. (Inventor); Bernstein, Dennis S. (Inventor)

    2012-01-01

    A discrete-time adaptive control law for stabilization, command following, and disturbance rejection that is effective for systems that are unstable, MIMO, and/or nonminimum phase. The adaptive control algorithm includes guidelines concerning the modeling information needed for implementation. This information includes the relative degree, the first nonzero Markov parameter, and the nonminimum-phase zeros. Except when the plant has nonminimum-phase zeros whose absolute value is less than the plant's spectral radius, the required zero information can be approximated by a sufficient number of Markov parameters. No additional information about the poles or zeros need be known. Numerical examples are presented to illustrate the algorithm's effectiveness in handling systems with errors in the required modeling data, unknown latency, sensor noise, and saturation.

  2. Adaptive robust controller based on integral sliding mode concept

    NASA Astrophysics Data System (ADS)

    Taleb, M.; Plestan, F.

    2016-09-01

    This paper proposes, for a class of uncertain nonlinear systems, an adaptive controller based on adaptive second-order sliding mode control and integral sliding mode control concepts. The adaptation strategy solves the problem of gain tuning and has the advantage of chattering reduction. Moreover, only limited information about the perturbation and uncertainties has to be known. The control is composed of two parts: an adaptive part whose objective is to reject the perturbation and system uncertainties, and a second part chosen such that the nominal part of the system is stabilised at zero. To illustrate the effectiveness of the proposed approach, it is applied to an academic example and simulation results are shown.

  3. Weighted adaptive threshold estimating method and its application to Satellite-to-Ground optical communications

    NASA Astrophysics Data System (ADS)

    Ran, Qiwen; Yang, Zhonghua; Ma, Jing; Tan, Liying; Liao, Huixi; Liu, Qingfeng

    2013-02-01

    In this paper, a weighted adaptive threshold estimating method is proposed to deal with long and deep channel fades in Satellite-to-Ground optical communications. Within the channel correlation interval, where adjacent signal samples are sufficiently correlated, the correlations of their change rates are described by weighted equations in Toeplitz-matrix form. As vital inputs to the proposed adaptive threshold estimator, the optimal values of the change rates are obtained by solving the weighted equation systems. The effect of channel fades and aberrant samples is mitigated by the joint use of the weighted equation systems and Kalman estimation. Based on channel data from star-observation trails, simulations are performed, and the numerical results show that the proposed method has better anti-fade performance than the D-value adaptive threshold estimating method in both weak and strong turbulence conditions.

  4. Adaptive User Model for Web-Based Learning Environment.

    ERIC Educational Resources Information Center

    Garofalakis, John; Sirmakessis, Spiros; Sakkopoulos, Evangelos; Tsakalidis, Athanasios

    This paper describes the design of an adaptive user model and its implementation in an advanced Web-based Virtual University environment that encompasses combined and synchronized adaptation between educational material and well-known communication facilities. The Virtual University environment has been implemented to support a postgraduate…

  5. Trust-Guided Behavior Adaptation Using Case-Based Reasoning

    DTIC Science & Technology

    2015-08-01

    trustworthiness and adapt its behavior accordingly. As behavior adaptation is performed, using case-based reasoning (CBR), information about the...complete set of rules for trustworthy behavior if the robot is expected to handle changes in teammates, environments, or mission contexts. The way

  6. Adaptive-Anisotropic Wavelet Collocation Method on general curvilinear coordinate systems

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2017-03-01

    A new general framework for an Adaptive-Anisotropic Wavelet Collocation Method (A-AWCM) for the solution of partial differential equations is developed. This proposed framework addresses two major shortcomings of existing wavelet-based adaptive numerical methodologies, namely the reliance on a rectangular domain and the "curse of anisotropy", i.e. drastic over-resolution of sheet- and filament-like features arising from the inability of the wavelet refinement mechanism to distinguish highly correlated directional information in the solution. The A-AWCM addresses both of these challenges by incorporating coordinate transforms into the Adaptive Wavelet Collocation Method for the solution of PDEs. The resulting integrated framework leverages the advantages of both the curvilinear anisotropic meshes and wavelet-based adaptive refinement in a complementary fashion, resulting in greatly reduced cost of resolution for anisotropic features. The proposed Adaptive-Anisotropic Wavelet Collocation Method retains the a priori error control of the solution and fully automated mesh refinement, while offering new abilities through the flexible mesh geometry, including body-fitting. The new A-AWCM is demonstrated for a variety of cases, including parabolic diffusion, acoustic scattering, and unsteady external flow.

  7. The SMART CLUSTER METHOD - adaptive earthquake cluster analysis and declustering

    NASA Astrophysics Data System (ADS)

    Schaefer, Andreas; Daniell, James; Wenzel, Friedemann

    2016-04-01

    Earthquake declustering is an essential part of almost any statistical analysis of the spatial and temporal properties of seismic activity, with typical applications including probabilistic seismic hazard assessments (PSHAs) and earthquake prediction methods. The nature of earthquake clusters and the subsequent declustering of earthquake catalogues play a crucial role in determining the magnitude-dependent earthquake return period and its spatial variation. Various methods have been developed by other researchers to address this issue, ranging in complexity from rather simple statistical window methods to complex epidemic models. This study introduces the smart cluster method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal identification. Hereby, an adaptive search algorithm for data point clusters is adopted: it uses the earthquake density in the spatio-temporal neighbourhood of each event to adjust the search properties. The identified clusters are subsequently analysed to determine directional anisotropy, focussing on a strong correlation along the rupture plane, and the search space is adjusted with respect to these directional properties. In the case of rapid subsequent ruptures like the 1992 Landers sequence or the 2010/2011 Darfield-Christchurch events, an adaptive classification procedure is applied to disassemble subsequent ruptures which may have been grouped into an individual cluster, using near-field searches, support vector machines and temporal splitting. The steering parameters of the search behaviour are linked to local earthquake properties like magnitude of completeness, earthquake density and Gutenberg-Richter parameters. The method is capable of identifying and classifying earthquake clusters in space and time. It is tested and validated using earthquake data from California and New Zealand. As a result of the cluster identification process, each event in

  8. Model-Based Nonrigid Motion Analysis Using Natural Feature Adaptive Mesh

    SciTech Connect

    Zhang, Y.; Goldgof, D.B.; Sarkar, S.; Tsap, L.V.

    2000-04-25

    The success of nonrigid motion analysis using a physical finite element model depends on the mesh that characterizes the object's geometric structure. We suggest a deformable mesh adapted to the natural features of images. The adaptive mesh requires far fewer nodes than the fixed mesh used in our previous work. We demonstrate the higher efficiency of the adaptive mesh in the context of estimating burn scar elasticity relative to normal skin elasticity from an observed 2D image sequence. Our results show that the scar assessment method based on the physical model with the natural-feature adaptive mesh can be applied to images that do not have artificial markers.

  9. An h-adaptive local discontinuous Galerkin method for the Navier-Stokes-Korteweg equations

    NASA Astrophysics Data System (ADS)

    Tian, Lulu; Xu, Yan; Kuerten, J. G. M.; van der Vegt, J. J. W.

    2016-08-01

    In this article, we develop a mesh adaptation algorithm for a local discontinuous Galerkin (LDG) discretization of the (non)-isothermal Navier-Stokes-Korteweg (NSK) equations modeling liquid-vapor flows with phase change. This work is a continuation of our previous research, where we proposed LDG discretizations for the (non)-isothermal NSK equations with a time-implicit Runge-Kutta method. To save computing time and to capture the thin interfaces more accurately, we extend the LDG discretization with a mesh adaptation method. Given the current adapted mesh, a criterion for selecting candidate elements for refinement and coarsening is adopted based on the locally largest value of the density gradient. A strategy to refine and coarsen the candidate elements is then provided. We emphasize that the adaptive LDG discretization is relatively simple and does not require additional stabilization. The use of a locally refined mesh in combination with an implicit Runge-Kutta time method is, however, non-trivial, but results in an efficient time integration method for the NSK equations. Computations, including cases with solid wall boundaries, are provided to demonstrate the accuracy, efficiency and capabilities of the adaptive LDG discretizations.

  10. Lens based adaptive optics scanning laser ophthalmoscope.

    PubMed

    Felberer, Franz; Kroisamer, Julia-Sophie; Hitzenberger, Christoph K; Pircher, Michael

    2012-07-30

    We present an alternative approach for an adaptive optics scanning laser ophthalmoscope (AO-SLO). In contrast to other commonly used AO-SLO instruments, the imaging optics consist of lenses. Images of the foveal region of five healthy volunteers are recorded, and the system is capable of resolving human foveal cones in three of the five volunteers. Additionally, we investigated the capability of the system to support larger scanning angles (up to 5°) on the retina. Finally, in order to demonstrate the performance of the instrument, images of rod photoreceptors are presented.

  11. Smoothed aggregation adaptive spectral element-based algebraic multigrid

    SciTech Connect

    2015-01-20

    SAAMGE provides parallel methods for building multilevel hierarchies and solvers that can be used for elliptic equations with highly heterogeneous coefficients. Additionally, hierarchy adaptation is implemented allowing solving multiple problems with close coefficients without rebuilding the hierarchy.

  12. Beam shaping for laser-based adaptive optics in astronomy.

    PubMed

    Béchet, Clémentine; Guesalaga, Andrés; Neichel, Benoit; Fesquet, Vincent; González-Núñez, Héctor; Zúñiga, Sebastián; Escarate, Pedro; Guzman, Dani

    2014-06-02

    The availability and performance of laser-based adaptive optics (AO) systems are strongly dependent on the power and quality of the laser beam before being projected to the sky. Frequent and time-consuming alignment procedures are usually required in the laser systems with free-space optics to optimize the beam. Despite these procedures, significant distortions of the laser beam have been observed during the first two years of operation of the Gemini South multi-conjugate adaptive optics system (GeMS). A beam shaping concept with two deformable mirrors is investigated in order to provide automated optimization of the laser quality for astronomical AO. This study aims at demonstrating the correction of quasi-static aberrations of the laser, in both amplitude and phase, testing a prototype of this two-deformable mirror concept on GeMS. The paper presents the results of the preparatory study before the experimental phase. An algorithm to control amplitude and phase correction, based on phase retrieval techniques, is presented with a novel unwrapping method. Its performance is assessed via numerical simulations, using aberrations measured at GeMS as reference. The results predict effective amplitude and phase correction of the laser distortions with about 120 actuators per mirror and a separation of 1.4 m between the mirrors. The spot size is estimated to be reduced by up to 15% thanks to the correction. In terms of AO noise level, this has the same benefit as increasing the photon flux by 40%.

  13. Kalman filter based control for Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Petit, Cyril; Quiros-Pacheco, Fernando; Conan, Jean-Marc; Kulcsár, Caroline; Raynaud, Henri-François; Fusco, Thierry

    2004-12-01

    Classical Adaptive Optics suffers from a limited corrected Field Of View. This drawback has led to the development of Multi-Conjugate Adaptive Optics (MCAO). While the first MCAO experimental set-ups are presently under construction, little attention has been paid to the control loop, which is however a key element in the optimization process, especially for MCAO systems. Different approaches have been proposed in recent articles for astronomical applications: simple integrator, Optimized Modal Gain Integrator and Kalman filtering. We study here Kalman filtering, which seems a very promising solution. Following the work of Brice Leroux, we focus on a frequential characterization of Kalman filters, computing a transfer matrix. The result brings much information about their behaviour and allows comparisons with classical controllers. It also appears that straightforward improvements of the system models can lead to the filtering of static aberrations and vibrations. Simulation results are proposed and analysed thanks to our frequential characterization. Related problems, such as model errors, aliasing-effect reduction, and the experimental implementation and testing of a Kalman filter control loop on a simplified MCAO experimental set-up, could then be discussed.

  14. Adaptive 2-D wavelet transform based on the lifting scheme with preserved vanishing moments.

    PubMed

    Vrankic, Miroslav; Sersic, Damir; Sucic, Victor

    2010-08-01

    In this paper, we propose novel adaptive wavelet filter bank structures based on the lifting scheme. The filter banks are nonseparable, based on quincunx sampling, with their properties being pixel-wise adapted according to the local image features. Despite being adaptive, the filter banks retain a desirable number of primal and dual vanishing moments. The adaptation is introduced in the predict stage of the filter bank with an adaptation region chosen independently for each pixel, based on the intersection of confidence intervals (ICI) rule. The image denoising results are presented for both synthetic and real-world images. It is shown that the obtained wavelet decompositions perform well, especially for synthetic images that contain periodic patterns, for which the proposed method outperforms the state of the art in image denoising.
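
    The intersection-of-confidence-intervals (ICI) rule that drives the per-pixel adaptation can be sketched in a few lines: among estimates computed with growing window sizes, keep the largest window whose confidence interval still overlaps all the previous ones. Here the "estimate" is a simple local mean of a 1-D signal standing in for the quincunx lifting predictor of the paper; the window set and the gamma factor are illustrative assumptions.

```python
import numpy as np

def ici_window(signal, idx, windows=(1, 2, 4, 8, 16), sigma=1.0, gamma=2.0):
    lo, hi = -np.inf, np.inf
    chosen = windows[0]
    for h in windows:
        seg = signal[max(idx - h, 0): idx + h + 1]
        est = seg.mean()
        half = gamma * sigma / np.sqrt(len(seg))   # CI half-width ~ sigma / sqrt(n)
        lo, hi = max(lo, est - half), min(hi, est + half)
        if lo > hi:                                # intervals no longer intersect: stop
            break
        chosen = h                                 # this window is still acceptable
    return chosen
```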

  15. Turbulence profiling methods applied to ESO's adaptive optics facility

    NASA Astrophysics Data System (ADS)

    Valenzuela, Javier; Béchet, Clémentine; Garcia-Rissmann, Aurea; Gonté, Frédéric; Kolb, Johann; Le Louarn, Miska; Neichel, Benoît; Madec, Pierre-Yves; Guesalaga, Andrés.

    2014-07-01

    Two algorithms were recently studied for C2n profiling from wide-field Adaptive Optics (AO) measurements on GeMS (Gemini Multi-Conjugate AO system). They both rely on the Slope Detection and Ranging (SLODAR) approach, using spatial covariances of the measurements issued from various wavefront sensors. The first algorithm estimates the C2n profile by applying the truncated least-squares inverse of a matrix modeling the response of slopes covariances to various turbulent layer heights. In the second method, the profile is estimated by deconvolution of these spatial cross-covariances of slopes. We compare these methods in the new configuration of ESO Adaptive Optics Facility (AOF), a high-order multiple laser system under integration. For this, we use measurements simulated by the AO cluster of ESO. The impact of the measurement noise and of the outer scale of the atmospheric turbulence is analyzed. The important influence of the outer scale on the results leads to the development of a new step for outer scale fitting included in each algorithm. This increases the reliability and robustness of the turbulence strength and profile estimations.

  16. Role-based adaptation for video conferencing in healthcare applications

    NASA Astrophysics Data System (ADS)

    Figuerola, Oscar; Kalva, Hari; Escudero, Antonio; Agarwal, Ankur

    2014-02-01

    A large number of health-related applications are being developed using web infrastructure. Video is increasingly used in healthcare applications to enable communications between patients and care providers. We present a video conferencing system designed for healthcare applications. In face of network congestion, the system uses role-based adaptation to ensure seamless service. A new web technology, WebRTC, is used to enable seamless conferencing applications. We present the video conferencing application and demonstrate the usefulness of role based adaptation.

  17. Adaptive eigenspace method for inverse scattering problems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nahum, Uri

    2017-02-01

    A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.

  18. A GPU-accelerated adaptive discontinuous Galerkin method for level set equation

    NASA Astrophysics Data System (ADS)

    Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.

    2016-01-01

    This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of two- and three-dimensional level set (LS) equation on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. Small global time step size resulting from the local adaptivity is avoided by local time-stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass conservative numerical scheme that preserves the simplicity of LS formulation is obtained. Efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.

  19. Three-dimensional self-adaptive grid method for complex flows

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Deiwert, George S.

    1988-01-01

    A self-adaptive grid procedure for efficient computation of three-dimensional complex flow fields is described. The method is based on variational principles to minimize the energy of a spring system analogy which redistributes the grid points. Grid control parameters are determined by specifying maximum and minimum grid spacing. Multidirectional adaptation is achieved by splitting the procedure into a sequence of successive applications of a unidirectional adaptation. One-sided, two-directional constraints for orthogonality and smoothness are used to enhance the efficiency of the method. Feasibility of the scheme is demonstrated by application to a multinozzle, afterbody, plume flow field. Application of the algorithm for initial grid generation is illustrated by constructing a three-dimensional grid about a bump-like geometry.
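
    A one-dimensional illustration of the spring-analogy idea: each interior grid node sits at the equilibrium of two springs whose stiffness grows with a solution-gradient weight, so points cluster where the solution varies rapidly. The variational formulation, multidirectional splitting and min/max spacing control of the paper are not reproduced; the weight function below is an assumption.

```python
import numpy as np

def spring_adapt(x, f, iters=200):
    """x: initial node coordinates, f: callable solution used to build the weight."""
    x = x.copy()
    for _ in range(iters):
        w = 1.0 + np.abs(np.gradient(f(x), x))          # nodal weight (spring stiffness)
        k = 0.5 * (w[:-1] + w[1:])                      # one spring per interval
        x[1:-1] = (k[:-1] * x[:-2] + k[1:] * x[2:]) / (k[:-1] + k[1:])
    return x

x0 = np.linspace(0.0, 1.0, 41)
x_adapted = spring_adapt(x0, lambda x: np.tanh(50 * (x - 0.5)))  # clusters near the layer
```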

  20. Framework for Instructional Technology: Methods of Implementing Adaptive Training and Education

    DTIC Science & Technology

    2014-01-01

    business, or the military. With Role Adaptation, trainees select their role (e.g., tank driver vs. tank gunner) and are then presented with different...one-size-fits-all, non-mastery based methods (for a review see Durlach & Ray, 2011). After conducting a meta-analysis of various tutoring methods... verbal), and/or to challenge or stimulate learners with above average aptitude. Multiple versions might also be created to suit students with

  1. An adaptive, formally second order accurate version of the immersed boundary method

    NASA Astrophysics Data System (ADS)

    Griffith, Boyce E.; Hornung, Richard D.; McQueen, David M.; Peskin, Charles S.

    2007-04-01

    Like many problems in biofluid mechanics, cardiac mechanics can be modeled as the dynamic interaction of a viscous incompressible fluid (the blood) and a (visco-)elastic structure (the muscular walls and the valves of the heart). The immersed boundary method is a mathematical formulation and numerical approach to such problems that was originally introduced to study blood flow through heart valves, and extensions of this work have yielded a three-dimensional model of the heart and great vessels. In the present work, we introduce a new adaptive version of the immersed boundary method. This adaptive scheme employs the same hierarchical structured grid approach (but a different numerical scheme) as the two-dimensional adaptive immersed boundary method of Roma et al. [A multilevel self adaptive version of the immersed boundary method, Ph.D. Thesis, Courant Institute of Mathematical Sciences, New York University, 1996; An adaptive version of the immersed boundary method, J. Comput. Phys. 153 (2) (1999) 509-534] and is based on a formally second order accurate (i.e., second order accurate for problems with sufficiently smooth solutions) version of the immersed boundary method that we have recently described [B.E. Griffith, C.S. Peskin, On the order of accuracy of the immersed boundary method: higher order convergence rates for sufficiently smooth problems, J. Comput. Phys. 208 (1) (2005) 75-105]. Actual second order convergence rates are obtained for both the uniform and adaptive methods by considering the interaction of a viscous incompressible flow and an anisotropic incompressible viscoelastic shell. We also present initial results from the application of this methodology to the three-dimensional simulation of blood flow in the heart and great vessels. The results obtained by the adaptive method show good qualitative agreement with simulation results obtained by earlier non-adaptive versions of the method, but the flow in the vicinity of the model heart valves

  2. Wavelet-based Multiresolution Particle Methods

    NASA Astrophysics Data System (ADS)

    Bergdorf, Michael; Koumoutsakos, Petros

    2006-03-01

    Particle methods offer a robust numerical tool for solving transport problems across disciplines, such as fluid dynamics, quantitative biology or computer graphics. Their strength lies in their stability, as they do not discretize the convection operator, and appealing numerical properties, such as small dissipation and dispersion errors. Many problems of interest are inherently multiscale, and their efficient solution requires either multiscale modeling approaches or spatially adaptive numerical schemes. We present a hybrid particle method that employs a multiresolution analysis to identify and adapt to small scales in the solution. The method combines the versatility and efficiency of grid-based Wavelet collocation methods while retaining the numerical properties and stability of particle methods. The accuracy and efficiency of this method is then assessed for transport and interface capturing problems in two and three dimensions, illustrating the capabilities and limitations of our approach.

  3. Adaptive control for solar energy based DC microgrid system development

    NASA Astrophysics Data System (ADS)

    Zhang, Qinhao

    During the upgrading of the current electric power grid, smarter, more robust and more reliable power systems integrated with distributed generation are expected to be developed. To realize these objectives, traditional control techniques are no longer effective in either stabilizing systems or delivering optimal and robust performance. Therefore, the development of advanced control methods has received increasing attention in power engineering. This work addresses two specific problems in the control of solar panel based microgrid systems. First, a new control scheme is proposed for the microgrid systems to achieve an optimal energy conversion ratio in the solar panels. The control system can optimize the efficiency of the maximum power point tracking (MPPT) algorithm by implementing two layers of adaptive control. Such a hierarchical control architecture has greatly improved the system performance, which is validated through both mathematical analysis and computer simulation. Second, in the development of the microgrid transmission system, the issues related to telecommunication delay and the negative incremental impedance of constant power loads (CPLs) are investigated. A reference model based method is proposed for pole and zero placement that addresses the challenges of the time delay and CPLs in closed-loop control. The effectiveness of the proposed modeling and control design methods is demonstrated in a simulation testbed. Practical aspects of the proposed methods for general microgrid systems are also discussed.
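
    A sketch of an adaptive maximum power point tracking loop of the kind the first control layer could implement: a perturb-and-observe update whose step size is scaled by the local power-voltage slope, so the reference moves quickly far from the maximum power point and settles near it. The panel model is not represented, read_panel() is a hypothetical measurement hook, and the supervisory second adaptation layer of the thesis is omitted.

```python
def adaptive_mppt(read_panel, v_ref=30.0, step0=0.5, k=0.05,
                  v_min=0.0, v_max=45.0, iters=200):
    """read_panel(v_ref) -> (measured voltage, measured power); hypothetical hook."""
    step = step0
    v_prev, p_prev = read_panel(v_ref)
    for _ in range(iters):
        v_ref = min(max(v_ref + step, v_min), v_max)   # perturb the voltage reference
        v, p = read_panel(v_ref)                       # observe the panel response
        dv, dp = v - v_prev, p - p_prev
        if dp < 0:
            step = -step                               # power dropped: reverse direction
        if abs(dv) > 1e-6:
            # adaptive magnitude: steep P-V slope (far from MPP) -> larger step
            step = (1 if step > 0 else -1) * max(k * abs(dp / dv), 1e-3)
        v_prev, p_prev = v, p
    return v_ref
```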

  4. Wavelet-Based Speech Enhancement Using Time-Frequency Adaptation

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Ching

    2003-12-01

    Wavelet denoising is commonly used for speech enhancement because of the simplicity of its implementation. However, conventional methods introduce musical residual noise when thresholding the background noise, and the unvoiced components of speech are often eliminated. In this paper, a novel wavelet coefficient threshold (WCT) algorithm based on time-frequency adaptation is proposed. In addition, an unvoiced speech enhancement algorithm is integrated into the system to improve the intelligibility of speech. The wavelet coefficient threshold of each subband is first temporally adjusted according to the value of the a posteriori signal-to-noise ratio (SNR). To prevent the degradation of unvoiced sounds in noise, the algorithm utilizes a simple speech/noise detector (SND) and further divides the speech signal into unvoiced and voiced sounds; appropriate wavelet thresholding is then applied according to the voiced/unvoiced (V/U) decision. Based on the masking properties of the human auditory system, a perceptual gain factor is incorporated into the wavelet thresholding to suppress musical residual noise. Simulation results show that the proposed method is capable of reducing noise with little speech degradation and that the overall performance is superior to several competitive methods.
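
    A sketch of the SNR-adapted threshold, assuming PyWavelets: each subband's threshold starts from a universal estimate and is scaled down when the a posteriori SNR of that subband is high, so speech-dominated bands are shrunk less. The voiced/unvoiced detector and perceptual gain factor of the paper are omitted, and the scaling law and db8 wavelet are illustrative assumptions.

```python
import numpy as np
import pywt

def snr_adaptive_denoise(x, noise_ref, wavelet="db8", level=5):
    cx = pywt.wavedec(x, wavelet, level=level)
    cn = pywt.wavedec(noise_ref, wavelet, level=level)   # noise-only reference segment
    out = [cx[0]]                                        # keep the approximation band
    for band, nband in zip(cx[1:], cn[1:]):
        sigma = np.std(nband)
        snr_post = max(np.mean(band ** 2) / (sigma ** 2 + 1e-12), 1.0)
        thr = sigma * np.sqrt(2 * np.log(len(band))) / np.sqrt(snr_post)
        out.append(pywt.threshold(band, thr, mode="soft"))
    return pywt.waverec(out, wavelet)
```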

  5. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Solution of the Euler Equations

    SciTech Connect

    Anderson, R W; Elliott, N S; Pember, R B

    2003-02-14

    A new method that combines staggered grid arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the methods are driven by the need to reconcile traditional AMR techniques with the staggered variables and moving, deforming meshes associated with Lagrange based ALE schemes. We develop interlevel solution transfer operators and interlevel boundary conditions first in the case of purely Lagrangian hydrodynamics, and then extend these ideas into an ALE method by developing adaptive extensions of elliptic mesh relaxation techniques. Conservation properties of the method are analyzed, and a series of test problem calculations are presented which demonstrate the utility and efficiency of the method.

  6. Adaptive iteration method for star centroid extraction under highly dynamic conditions

    NASA Astrophysics Data System (ADS)

    Gao, Yushan; Qin, Shiqiao; Wang, Xingshu

    2016-10-01

    Star centroiding accuracy decreases significantly when a star sensor works under highly dynamic conditions or when star images are corrupted by severe noise, reducing the output attitude precision. Herein, an adaptive iteration method is proposed to solve this problem. First, initial star centroids are predicted by the traditional method; then, based on the initial star centroids and the angular velocity of the star sensor, adaptive centroiding windows are generated to cover the star area, and an iterative method that optimizes the location of the centroiding window is used to obtain the final star spot extraction results. Simulation results show that, compared with the traditional star image restoration method and the Iteratively Weighted Center of Gravity method, the proposed algorithm maintains higher extraction accuracy as the rotation velocity or noise level increases.
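
    A minimal sketch of the iterative-window idea: the centroiding window is re-centred on the latest centre-of-gravity estimate until it converges, so a window initially misplaced by image motion still captures the star spot. The velocity-based prediction of the window position and size described in the abstract is not modelled, and the window is assumed to stay inside the image.

```python
import numpy as np

def iterative_centroid(img, x0, y0, half=5, iters=10, tol=0.05):
    x, y = float(x0), float(y0)
    for _ in range(iters):
        xi, yi = int(round(x)), int(round(y))
        win = img[yi - half: yi + half + 1, xi - half: xi + half + 1].astype(float)
        ys, xs = np.mgrid[yi - half: yi + half + 1, xi - half: xi + half + 1]
        m = win.sum()
        x_new, y_new = (xs * win).sum() / m, (ys * win).sum() / m   # centre of gravity
        if np.hypot(x_new - x, y_new - y) < tol:                    # window has converged
            return x_new, y_new
        x, y = x_new, y_new                                         # re-centre the window
    return x, y
```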

  7. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang

    2016-02-01

    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequency associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses
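
    A sketch of the demodulation chain the abstract describes, minus the adaptive parameter selection and the spectral-subtraction step: band-pass the signal with a Morlet-like (Gaussian-in-frequency) filter, take the Hilbert envelope, and inspect the envelope spectrum for the bearing fault characteristic frequency. The centre frequency and bandwidth are fixed here, whereas the paper optimises them from the production functions.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs, fc, bw):
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X = np.fft.rfft(x)
    X *= np.exp(-0.5 * ((f - fc) / bw) ** 2)        # Morlet-like band-pass in frequency
    band = np.fft.irfft(X, len(x))
    env = np.abs(hilbert(band))                     # amplitude envelope (demodulation)
    env -= env.mean()
    return f, np.abs(np.fft.rfft(env))              # envelope spectrum vs. frequency
```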

  8. Reduction in redundancy of multichannel telemetric information by the method of adaptive discretization with associative sorting

    NASA Technical Reports Server (NTRS)

    Kantor, A. V.; Timonin, V. G.; Azarova, Y. S.

    1974-01-01

    The method of adaptive discretization is the most promising for elimination of redundancy from telemetry messages characterized by signal shape. Adaptive discretization with associative sorting was considered as a way to avoid the shortcomings of adaptive discretization with buffer smoothing and adaptive discretization with logical switching in on-board information compression devices (OICD) in spacecraft. Mathematical investigations of OICD are presented.

  9. Robust image registration using adaptive coherent point drift method

    NASA Astrophysics Data System (ADS)

    Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong

    2016-04-01

    Coherent point drift (CPD) is a powerful registration method based on the Gaussian mixture model (GMM). However, it considers only the global spatial structure of the point sets, without other forms of additional attribute information, and the equivalent simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed to automatically determine the mixing parameters by embedding local attribute information of the features into the construction of the GMM. In addition, the weight parameter is treated as an unknown parameter and automatically determined within the expectation-maximization algorithm. In image registration applications, a block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. The experimental results on optical images and remote sensing images show that the proposed method can significantly improve the matching performance.

  10. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    NASA Technical Reports Server (NTRS)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.

  11. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  12. Nonlinear mode decomposition: a noise-robust, adaptive decomposition method.

    PubMed

    Iatsenko, Dmytro; McClintock, Peter V E; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool-nonlinear mode decomposition (NMD)-which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques-which, together with the adaptive choice of their parameters, make it extremely noise robust-and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  13. Simulation of nonpoint source contamination based on adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Kourakos, G.; Harter, T.

    2014-12-01

    Contamination of groundwater aquifers from nonpoint sources is a worldwide problem. Typical agricultural groundwater basins receive contamination from a large array (on the order of 10^5-10^6) of spatially and temporally heterogeneous sources such as fields, crops, dairies, etc., while the received contaminants emerge, with significantly uncertain time lags, at a large array of discharge surfaces such as public supply, domestic and irrigation wells and streams. To support decision making in such complex regimes, several approaches have been developed, which can be grouped into three categories: (i) index methods, (ii) regression methods and (iii) physically based methods. Among the three, physically based methods are considered more accurate, but at the cost of computational demand. In this work we present a physically based simulation framework which exploits the latest hardware and software developments to simulate large (>>1,000 km2) groundwater basins. First, we simulate groundwater flow using a sufficiently detailed mesh to capture the spatial heterogeneity. To achieve optimal mesh quality, we combine adaptive mesh refinement with the nonlinear solution for unconfined flow: starting from a coarse grid, the mesh is refined iteratively in the parts of the domain where the flow heterogeneity is higher, resulting in an optimal grid. Second, we simulate the nonpoint source pollution based on the detailed velocity field computed in the previous step. In our approach we use the streamline model, where the 3D transport problem is decomposed into multiple 1D transport problems. The proposed framework is applied to simulate nonpoint source pollution in the Central Valley aquifer system, California.

  14. An adaptive Cartesian grid generation method for Dirty geometry

    NASA Astrophysics Data System (ADS)

    Wang, Z. J.; Srinivasan, Kumar

    2002-07-01

    Traditional structured and unstructured grid generation methods need a water-tight boundary surface grid to start and are therefore named boundary-to-interior (B2I) approaches. Although these methods have achieved great success in fluid flow simulations, the grid generation process can still be very time consuming if non-water-tight geometries are given: significant user time can be spent repairing or cleaning a dirty geometry with cracks, overlaps or invalid manifolds before grid generation can take place. In this paper, we advocate a different approach to grid generation, namely the interior-to-boundary (I2B) approach. With an I2B approach, the computational grid is first generated inside the computational domain; this grid is then intelligently connected to the boundary, and the boundary grid is a result of this connection. A significant advantage of the I2B approach is that dirty geometries can be handled without cleaning or repairing, dramatically reducing grid generation time. An I2B adaptive Cartesian grid generation method is developed in this paper to handle dirty geometries without geometry repair. Compared with a B2I approach, the grid generation time with the I2B approach for a complex automotive engine can be reduced by three orders of magnitude.

  15. The PCNN adaptive segmentation algorithm based on visual perception

    NASA Astrophysics Data System (ADS)

    Zhao, Yanming

    To solve the adaptive parameter-determination problem of the pulse coupled neural network (PCNN) and to improve image segmentation results, a PCNN adaptive segmentation algorithm based on visual perception of information is proposed. Based on visually perceived image information and the Gabor mathematical model of the optic nerve cell receptive field, the algorithm adaptively determines the receptive field of each pixel of the image, and adaptively determines the network parameters W, M, and β of the PCNN from the Gabor model, which overcomes the parameter-determination problem of the traditional PCNN in the field of image segmentation. Experimental results show that the proposed algorithm improves the region connectivity and edge regularity of the segmented image, and also demonstrate the advantage of incorporating visual perception information into the PCNN for image segmentation.

  16. A forward method for optimal stochastic nonlinear and adaptive control

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    1988-01-01

    A computational approach is taken to solve the optimal nonlinear stochastic control problem. The approach is to systematically solve the stochastic dynamic programming equations forward in time, using a nested stochastic approximation technique. Although computationally intensive, this provides a straightforward numerical solution for this class of problems and provides an alternative to the usual dimensionality problem associated with solving the dynamic programming equations backward in time. It is shown that the cost degrades monotonically as the complexity of the algorithm is reduced. This provides a strategy for suboptimal control with clear performance/computation tradeoffs. A numerical study focusing on a generic optimal stochastic adaptive control example is included to demonstrate the feasibility of the method.

  17. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times

  18. Adaptive control system having hedge unit and related apparatus and methods

    NASA Technical Reports Server (NTRS)

    Johnson, Eric Norman (Inventor); Calise, Anthony J. (Inventor)

    2003-01-01

    The invention includes an adaptive control system used to control a plant. The adaptive control system includes a hedge unit that receives at least one control signal and a plant state signal. The hedge unit generates a hedge signal based on the control signal, the plant state signal, and a hedge model including a first model having one or more characteristics to which the adaptive control system is not to adapt, and a second model not having the characteristic(s) to which the adaptive control system is not to adapt. The hedge signal is used in the adaptive control system to remove the effect of the characteristic from a signal supplied to an adaptation law unit of the adaptive control system so that the adaptive control system does not adapt to the characteristic in controlling the plant.

  19. Adaptive control system having hedge unit and related apparatus and methods

    NASA Technical Reports Server (NTRS)

    Johnson, Eric Norman (Inventor); Calise, Anthony J. (Inventor)

    2007-01-01

    The invention includes an adaptive control system used to control a plant. The adaptive control system includes a hedge unit that receives at least one control signal and a plant state signal. The hedge unit generates a hedge signal based on the control signal, the plant state signal, and a hedge model including a first model having one or more characteristics to which the adaptive control system is not to adapt, and a second model not having the characteristic(s) to which the adaptive control system is not to adapt. The hedge signal is used in the adaptive control system to remove the effect of the characteristic from a signal supplied to an adaptation law unit of the adaptive control system so that the adaptive control system does not adapt to the characteristic in controlling the plant.

  20. Mixed Methods in Intervention Research: Theory to Adaptation

    ERIC Educational Resources Information Center

    Nastasi, Bonnie K.; Hitchcock, John; Sarkar, Sreeroopa; Burkholder, Gary; Varjas, Kristen; Jayasena, Asoka

    2007-01-01

    The purpose of this article is to demonstrate the application of mixed methods research designs to multiyear programmatic research and development projects whose goals include integration of cultural specificity when generating or translating evidence-based practices. The authors propose a set of five mixed methods designs related to different…

  1. Distributed adaptive simulation through standards-based integration of simulators and adaptive learning systems.

    PubMed

    Bergeron, Bryan; Cline, Andrew; Shipley, Jaime

    2012-01-01

    We have developed a distributed, standards-based architecture that enables simulation and simulator designers to leverage adaptive learning systems. Our approach, which incorporates an electronic competency record, open source LMS, and open source microcontroller hardware, is a low-cost, pragmatic option to integrating simulators with traditional courseware.

  2. A wavelet-optimized, very high order adaptive grid and order numerical method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data, differentiating this polynomial, and finally evaluating the polynomial at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or perhaps exponential polynomials. This paper begins with a comparison of such differencing operator constructions. Next, the issue of proper grids for high order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid. That is, at each level of refinement the grid is a Chebyshev grid and this grid is refined locally based on wavelet analysis.
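
    The construction described in the first sentence can be sketched in a few lines of NumPy: fit an algebraic polynomial through a stencil of data, differentiate it, and evaluate the derivative at the point of interest. The stencil and test function below are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def derivative_from_interpolation(x_stencil, f_stencil, x_eval, order=1):
        """Interpolate a polynomial through (x_stencil, f_stencil), differentiate
        it, and evaluate the derivative approximation at x_eval."""
        degree = len(x_stencil) - 1
        coeffs = np.polyfit(x_stencil, f_stencil, degree)   # algebraic polynomial
        dcoeffs = np.polyder(coeffs, m=order)               # differentiate
        return np.polyval(dcoeffs, x_eval)

    # Check on a small Chebyshev-like stencil: d/dx sin(x) at x = 0.3.
    nodes = 0.5 * np.cos(np.linspace(0, np.pi, 7)) + 0.3
    print(derivative_from_interpolation(nodes, np.sin(nodes), 0.3), np.cos(0.3))
    ```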

  3. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control to develop a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system is proposed in this paper. A new structure for the multi-objective transfer trajectory optimization model is established that divides the transfer trajectory into several segments and specifies whether invariant manifolds or low-thrust control dominates in each segment. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization agree with those obtained using direct multi-objective optimization methods, while the computational workload of the adaptive surrogate-based approach is only approximately 10% of that of direct multi-objective optimization. Furthermore, the Pareto points are generated approximately 8 times more efficiently than with direct multi-objective optimization. The proposed adaptive surrogate-based multi-objective optimization therefore offers clear advantages over direct multi-objective optimization methods.

  4. Adaptive projection method applied to three-dimensional ultrasonic focusing and steering through the ribs.

    PubMed

    Cochard, E; Aubry, J F; Tanter, M; Prada, C

    2011-08-01

    An adaptive projection method for ultrasonic focusing through the rib cage, with minimal energy deposition on the ribs, was evaluated experimentally in 3D geometry. Adaptive projection is based on decomposition of the time-reversal operator (DORT method) and projection on the "noise" subspace. It is shown that the 3D implementation of this method is straightforward and not more time-consuming than 2D. Comparisons are made between adaptive projection, spherical focusing, and a previously proposed time-reversal focusing method, by measuring pressure fields in the focal plane and rib region for the three methods. The ratio of the specific absorption rate at the focus over that at the ribs was found to be increased by a factor of up to eight, versus spherical emission. Beam steering out of the geometric focus was also investigated. For all configurations, projecting steered emissions was found to deposit less energy on the ribs than steering time-reversed emissions; thus the non-invasive method presented here is more efficient than state-of-the-art invasive techniques. In fact, this method could be used for real-time treatment, because a single acquisition of back-scattered echoes from the ribs is enough to treat a large volume around the focus, thanks to real-time projection of the steered beams.
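
    A minimal sketch of the linear-algebra core of such a projection step is given below: take the SVD of a measured transfer matrix, treat the dominant right-singular vectors as the "signal" subspace associated with the strong rib echoes, and project a desired emission vector onto its orthogonal complement. The matrix sizes and the number of signal vectors are assumptions for illustration, and this is not the authors' implementation.

    ```python
    import numpy as np

    def project_onto_noise_subspace(K, emission, n_signal):
        """Remove from `emission` its component in the span of the n_signal
        dominant right-singular vectors of the transfer matrix K."""
        _, _, Vh = np.linalg.svd(K)
        V_sig = Vh[:n_signal].conj().T                      # signal subspace basis
        projector = np.eye(K.shape[1]) - V_sig @ V_sig.conj().T
        return projector @ emission

    # Hypothetical usage with a random complex transfer matrix (receive x transmit).
    rng = np.random.default_rng(1)
    K = rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))
    focus_law = rng.standard_normal(128) + 1j * rng.standard_normal(128)
    safe_law = project_onto_noise_subspace(K, focus_law, n_signal=5)
    ```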

  5. Feature Selection for Natural Language Call Routing Based on Self-Adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Koromyslova, A.; Semenkina, M.; Sergienko, R.

    2017-02-01

    The text classification problem for natural language call routing is considered in this paper. Seven different term weighting methods were applied. As a dimensionality reduction method, feature selection based on a self-adaptive GA is considered. k-NN, linear SVM and ANN were used as classification algorithms. The tasks of the research are the following: to study text classification for natural language call routing with different term weighting methods and classification algorithms, and to investigate the feature selection method based on the self-adaptive GA. The numerical results showed that the most effective term weighting is TRR and the most effective classification algorithm is ANN. Feature selection with the self-adaptive GA provides an improvement in classification effectiveness and a significant dimensionality reduction with all term weighting methods and with all classification algorithms.

  6. Effect of Repeated Simulated Disinfections by Microwave Energy on the Complete Denture Base Adaptation

    PubMed Central

    Consani, Rafael L.X.; Iwasaki, Rose Y; Mesquita, Marcelo F; Mendes, Wilson B; Consani, Simonides

    2008-01-01

    This study evaluated the effect of repeated microwave disinfections on the adaptation of the maxillary denture base using 2 different flask closure methods. Twenty stone cast-wax base sets were prepared for flasking by the traditional clamp or RS system methods. Five bases for each method were submitted to 5 repeated simulated disinfections in a microwave oven at 650 W for 3 minutes. Control bases were not disinfected. Three transverse cuts were made through each stone cast-resin base set, corresponding to the canine, first molar, and posterior regions. Measurements were made using an optical micrometer at 5 points for each cut to determine base adaptation: left and right marginal limits of the flanges, left and right ridge crests, and midline. Results for base adaptation by flask closure method were: traditional clamp (non-disinfected = 0.21 ± 0.05 mm and disinfected = 0.22 ± 0.05 mm), and RS system (non-disinfected = 0.16 ± 0.05 mm and disinfected = 0.17 ± 0.04 mm). Collected data were submitted to ANOVA and the Tukey test (α=.05). Repeated simulated disinfections by microwave energy did not cause a deleterious effect on base adaptation when the traditional clamp and RS system flask closure methods were compared. PMID:19088884

  7. Adaptive Device Context Based Mobile Learning Systems

    ERIC Educational Resources Information Center

    Pu, Haitao; Lin, Jinjiao; Song, Yanwei; Liu, Fasheng

    2011-01-01

    Mobile learning is e-learning delivered through mobile computing devices, which represents the next stage of computer-aided, multi-media based learning. Therefore, mobile learning is transforming the way of traditional education. However, as most current e-learning systems and their contents are not suitable for mobile devices, an approach for…

  8. An Evidence-Based Public Health Approach to Climate Change Adaptation

    PubMed Central

    Eidson, Millicent; Tlumak, Jennifer E.; Raab, Kristin K.; Luber, George

    2014-01-01

    Background: Public health is committed to evidence-based practice, yet there has been minimal discussion of how to apply an evidence-based practice framework to climate change adaptation. Objectives: Our goal was to review the literature on evidence-based public health (EBPH), to determine whether it can be applied to climate change adaptation, and to consider how emphasizing evidence-based practice may influence research and practice decisions related to public health adaptation to climate change. Methods: We conducted a substantive review of EBPH, identified a consensus EBPH framework, and modified it to support an EBPH approach to climate change adaptation. We applied the framework to an example and considered implications for stakeholders. Discussion: A modified EBPH framework can accommodate the wide range of exposures, outcomes, and modes of inquiry associated with climate change adaptation and the variety of settings in which adaptation activities will be pursued. Several factors currently limit application of the framework, including a lack of higher-level evidence of intervention efficacy and a lack of guidelines for reporting climate change health impact projections. To enhance the evidence base, there must be increased attention to designing, evaluating, and reporting adaptation interventions; standardized health impact projection reporting; and increased attention to knowledge translation. This approach has implications for funders, researchers, journal editors, practitioners, and policy makers. Conclusions: The current approach to EBPH can, with modifications, support climate change adaptation activities, but there is little evidence regarding interventions and knowledge translation, and guidelines for projecting health impacts are lacking. Realizing the goal of an evidence-based approach will require systematic, coordinated efforts among various stakeholders. Citation: Hess JJ, Eidson M, Tlumak JE, Raab KK, Luber G. 2014. An evidence-based public

  9. Achieving Adaptability through Inquiry Based Learning

    DTIC Science & Technology

    2010-06-01

    knowledge. IBL is based on a different conception of learning, one traceable back to John Dewey (1910) and Jean Piaget (1972; von Glasersfeld, 1995) and...Dewey, 1910; Duffy 2009; Piaget, 1972; Schank, Fano, Bell, and Jona, 1993). If the learners are focused on figuring out what the instructor wants...errors or the inability to fully make sense of a situation provides the basis for learning (Piaget, 1973; Schank et al., 1993). Thus the errors

  10. Modeling of Rate-Dependent Hysteresis Using a GPO-Based Adaptive Filter.

    PubMed

    Zhang, Zhen; Ma, Yaopeng

    2016-02-06

    A novel generalized play operator-based (GPO-based) nonlinear adaptive filter is proposed to model rate-dependent hysteresis nonlinearity for smart actuators. In the proposed filter, the input signal vector consists of the output of a tapped delay line. GPOs with various thresholds are used to construct a nonlinear network and are connected with the input signals. The output signal of the filter is a linear combination of the GPO outputs. The least-mean-square (LMS) algorithm is used to adjust the weights of the nonlinear filter. The modeling results of four adaptive filter methods are compared: the GPO-based adaptive filter, Volterra filter, backlash filter and linear adaptive filter. Moreover, a phenomenological operator-based model, the rate-dependent generalized Prandtl-Ishlinskii (RDGPI) model, is compared to the proposed adaptive filter. The various rate-dependent modeling methods are applied to model the rate-dependent hysteresis of a giant magnetostrictive actuator (GMA). The modeling results show that the GPO-based adaptive filter can describe the rate-dependent hysteresis nonlinearity of the GMA more accurately and effectively.
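
    To make the filter structure concrete, the sketch below combines the outputs of classical play operators with an LMS weight update. It simplifies the design described above (no tapped delay line, ordinary rather than generalized play operators), and the thresholds, step size and test signal are assumptions.

    ```python
    import numpy as np

    def play_operator(x, r, y0=0.0):
        """Classical play (backlash) operator with threshold r."""
        y = np.empty(len(x))
        prev = y0
        for n, xn in enumerate(x):
            prev = max(xn - r, min(xn + r, prev))
            y[n] = prev
        return y

    def lms_play_filter(x, d, thresholds, mu=1e-2):
        """Weighted sum of play-operator outputs, with the weights adapted
        by LMS to track the desired (measured) hysteretic response d."""
        Phi = np.stack([play_operator(x, r) for r in thresholds], axis=1)
        w = np.zeros(len(thresholds))
        y_hat = np.zeros(len(d))
        for n in range(len(d)):
            y_hat[n] = Phi[n] @ w
            e = d[n] - y_hat[n]
            w += mu * e * Phi[n]                    # LMS weight update
        return y_hat, w

    # Hypothetical usage on a synthetic hysteretic response.
    t = np.linspace(0, 4 * np.pi, 2000)
    x = np.sin(t)
    d = play_operator(x, 0.3) + 0.5 * play_operator(x, 0.6)
    y_hat, w = lms_play_filter(x, d, thresholds=[0.1, 0.3, 0.6, 0.9])
    ```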

  11. Adaptive enriched Galerkin methods for miscible displacement problems with entropy residual stabilization

    NASA Astrophysics Data System (ADS)

    Lee, Sanghyun; Wheeler, Mary F.

    2017-02-01

    We present a novel approach to the simulation of miscible displacement by employing adaptive enriched Galerkin finite element methods (EG) coupled with entropy residual stabilization for transport. In particular, numerical simulations of viscous fingering instabilities in heterogeneous porous media and Hele-Shaw cells are illustrated. EG is formulated by enriching the conforming continuous Galerkin finite element method (CG) with piecewise constant functions. The method provides locally and globally conservative fluxes, which are crucial for coupled flow and transport problems. Moreover, EG has fewer degrees of freedom in comparison with discontinuous Galerkin (DG) and an efficient flow solver has been derived which allows for higher order schemes. Dynamic adaptive mesh refinement is applied in order to reduce computational costs for large-scale three dimensional applications. In addition, entropy residual based stabilization for high order EG transport systems prevents spurious oscillations. Numerical tests are presented to show the capabilities of EG applied to flow and transport.

  12. Method for removing tilt control in adaptive optics systems

    DOEpatents

    Salmon, Joseph Thaddeus

    1998-01-01

    A new adaptive optics system and method of operation, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^{-1} X^T) G (I - A)
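
    The gain modification in the equation above is a one-line projection in NumPy, assuming the gain matrix G, a tilt-mode matrix X with full column rank, and the matrix A are already available; the shapes in the usage stub are hypothetical.

    ```python
    import numpy as np

    def modified_gain(G, X, A):
        """Tilt-removed gain G' = (I - X (X^T X)^{-1} X^T) G (I - A)."""
        P_tilt = X @ np.linalg.solve(X.T @ X, X.T)      # projector onto tilt modes
        return (np.eye(G.shape[0]) - P_tilt) @ G @ (np.eye(G.shape[1]) - A)

    # Hypothetical shapes: 100 actuator commands, 200 gradient measurements.
    rng = np.random.default_rng(0)
    G = rng.standard_normal((100, 200))
    X = rng.standard_normal((100, 2))                   # tip/tilt influence columns
    A = 0.1 * np.eye(200)
    G_prime = modified_gain(G, X, A)
    ```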

  13. Method for removing tilt control in adaptive optics systems

    DOEpatents

    Salmon, J.T.

    1998-04-28

    A new adaptive optics system and method of operation are disclosed, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^{-1} X^T) G (I - A). 3 figs.

  14. Adapted G-mode Clustering Method applied to Asteroid Taxonomy

    NASA Astrophysics Data System (ADS)

    Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.

    2013-11-01

    The original G-mode is a clustering method developed by A. I. Gavrishin in the late 1960s for the geochemical classification of rocks, but it has also been applied to asteroid photometry, cosmic rays, lunar samples and planetary science spectroscopy data. In this work, we used an adapted version to classify the asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, previously written in FORTRAN 77, now implemented in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy played an important role in speeding up G-mode: scipy.spatial.distance.mahalanobis was chosen as the distance estimator and numpy.histogramdd was applied to find the initial seeds from which clusters evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.
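
    The two library calls named above can be illustrated on synthetic data as follows; the sample, bin count and membership cut-off are stand-in assumptions, and this is not the full G-mode algorithm.

    ```python
    import numpy as np
    from scipy.spatial.distance import mahalanobis

    # Synthetic 3-band "photometric" sample (stand-in data only).
    rng = np.random.default_rng(42)
    data = rng.multivariate_normal([0.0, 0.1, -0.05], 0.01 * np.eye(3), size=500)

    # numpy.histogramdd locates dense regions that can seed clusters.
    hist, edges = np.histogramdd(data, bins=8)
    seed_bin = np.unravel_index(np.argmax(hist), hist.shape)
    seed = np.array([0.5 * (edges[k][i] + edges[k][i + 1])
                     for k, i in enumerate(seed_bin)])      # densest-bin centre

    # scipy Mahalanobis distance of every point to that seed.
    VI = np.linalg.inv(np.cov(data, rowvar=False))
    dists = np.array([mahalanobis(p, seed, VI) for p in data])
    members = data[dists < 2.5]          # crude core selection around the seed
    ```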

  15. Adaptive neural network consensus based control of robot formations

    NASA Astrophysics Data System (ADS)

    Guzey, H. M.; Sarangapani, Jagannathan

    2013-05-01

    In this paper, an adaptive consensus-based formation control scheme is derived for mobile robots moving in a pre-defined formation when the full dynamics of the robots, including the inertia, Coriolis, and friction terms, are considered. It is shown that dynamic uncertainties of the robots can make the overall formation unstable when a traditional consensus scheme is utilized. In order to estimate the affine nonlinear robot dynamics, an NN-based adaptive scheme is utilized. In addition to this adaptive feedback control input, an additional control input is introduced based on the consensus approach to make the robots keep their desired formation. Subsequently, the outer consensus loop is redesigned for reduced communication. Lyapunov theory is used to show the stability of the overall system. Simulation results are included at the end.

  16. Adaptive Kalman filtering methods for tracking GPS signals in high noise/high dynamic environments

    NASA Astrophysics Data System (ADS)

    Zuo, Qiyao; Yuan, Hong; Lin, Baojun

    2007-11-01

    GPS C/A signal tracking algorithms have been developed based on adaptive Kalman filtering theory. In this research, an adaptive Kalman filter is used in place of the standard tracking loop filters. The goal is to improve estimation accuracy and tracking stability in high noise and high dynamic environments. The linear dynamics model and the measurement model are designed to estimate code phase, carrier phase, Doppler shift, and the rate of change of Doppler shift. Two adaptive algorithms are applied to improve the robustness and adaptability of the tracking: one is the Sage adaptive filtering approach and the other is the strong tracking method. Both the new algorithms and the conventional tracking loop have been tested using simulated data. In the simulation experiment, the highest jerk of the receiver is set to 10 G m/s³ with the lowest C/N0 at 30 dB-Hz. The results indicate that the Kalman filtering algorithms are more robust than the standard tracking loop, and the performance of the tracking loop using these algorithms is satisfactory in such extremely adverse circumstances.
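
    The sketch below shows the general idea of innovation-based adaptation of the measurement noise, in the spirit of the Sage approach, on a toy two-state (phase, frequency) model; the state model, window length and noise levels are illustrative assumptions rather than the carrier-tracking design of the paper.

    ```python
    import numpy as np

    def adaptive_kf(z, dt, q=1e-3, r0=1.0, window=30):
        """Two-state (phase, frequency) Kalman filter with a simple
        innovation-based adaptation of the measurement noise variance R."""
        F = np.array([[1.0, dt], [0.0, 1.0]])
        H = np.array([[1.0, 0.0]])
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        x, P, R = np.zeros(2), np.eye(2), r0
        innovations, estimates = [], []
        for zk in z:
            # predict
            x = F @ x
            P = F @ P @ F.T + Q
            # innovation and adaptive estimate of R from its recent variance
            nu = zk - (H @ x)[0]
            innovations.append(nu)
            if len(innovations) >= window:
                c_nu = np.mean(np.square(innovations[-window:]))
                R = max(c_nu - (H @ P @ H.T)[0, 0], 1e-6)   # keep R positive
            # update
            S = (H @ P @ H.T)[0, 0] + R
            K = (P @ H.T / S).ravel()
            x = x + K * nu
            P = (np.eye(2) - np.outer(K, H)) @ P
            estimates.append(x.copy())
        return np.array(estimates)

    # Hypothetical usage: noisy phase ramp (constant Doppler) measurements.
    dt = 1e-3
    t = np.arange(0, 1, dt)
    z = 2 * np.pi * 50 * t + np.random.default_rng(0).normal(0, 0.5, t.size)
    est = adaptive_kf(z, dt)
    ```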

  17. A Self-Adaptive Projection and Contraction Method for Linear Complementarity Problems

    SciTech Connect

    Liao, Lizhi; Wang, Shengli

    2003-10-15

    In this paper we develop a self-adaptive projection and contraction method for the linear complementarity problem (LCP). This method improves the practical performance of the modified projection and contraction method by adopting a self-adaptive technique. The global convergence of our new method is proved under mild assumptions. Our numerical tests clearly demonstrate the necessity and effectiveness of our proposed method.
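
    A minimal self-adaptive projection sketch for the LCP is shown below. It uses an extragradient-type step with a simple rule that shrinks or grows the step size from the observed residual ratio; this conveys the self-adaptive idea but is not the authors' exact projection and contraction update, and the test problem is hypothetical.

    ```python
    import numpy as np

    def solve_lcp(M, q, beta=1.0, nu=0.9, tol=1e-8, max_iter=5000):
        """Self-adaptive extragradient-type projection iteration for the LCP
        x >= 0, Mx + q >= 0, x'(Mx + q) = 0 (assumes M positive semidefinite)."""
        x = np.zeros_like(q, dtype=float)
        for _ in range(max_iter):
            Fx = M @ x + q
            y = np.maximum(x - beta * Fx, 0.0)        # projection onto x >= 0
            if np.linalg.norm(x - y) < tol:
                break
            Fy = M @ y + q
            # self-adaptive step size: keep beta*||F(x)-F(y)|| below nu*||x-y||
            ratio = beta * np.linalg.norm(Fx - Fy) / (np.linalg.norm(x - y) + 1e-30)
            if ratio > nu:
                beta *= 0.7
                continue                               # retry with a smaller step
            x = np.maximum(x - beta * Fy, 0.0)         # corrector step
            if ratio < 0.3:
                beta *= 1.3                            # let the step grow again
        return x

    # Hypothetical usage on a small positive definite LCP.
    M = np.array([[4.0, 1.0], [1.0, 3.0]])
    q = np.array([-1.0, -2.0])
    x = solve_lcp(M, q)
    print(x, M @ x + q)
    ```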

  18. An h-adaptive finite element method for turbulent heat transfer

    SciTech Connect

    Carrington, David B

    2009-01-01

    A two-equation turbulence closure model (k-ω) using an h-adaptive grid technique and the finite element method (FEM) has been developed to simulate low Mach number flow and heat transfer. These flows are applicable to many flows in engineering and the environmental sciences. Of particular interest in the engineering modeling areas are combustion, solidification, and heat exchanger design. Flows for indoor air quality modeling and atmospheric pollution transport are typical types of environmental flows modeled with this method. The numerical method is based on a hybrid finite element model using an equal-order projection process. The model includes thermal and species transport, localized mesh refinement (h-adaptive) and Petrov-Galerkin weighting for stabilizing the advection. This work develops the continuum model of a two-equation turbulence closure method. The fractional step solution method is stated along with the h-adaptive grid method (Carrington and Pepper, 2002). Solutions are presented for 2D flow over a backward-facing step.

  19. Adaptive conventional power system stabilizer based on artificial neural network

    SciTech Connect

    Kothari, M.L.; Segal, R.; Ghodki, B.K.

    1995-12-31

    This paper deals with an artificial neural network (ANN) based adaptive conventional power system stabilizer (PSS). The ANN comprises an input layer, a hidden layer and an output layer. The input vector to the ANN comprises real power (P) and reactive power (Q), while the output vector comprises optimum PSS parameters. A systematic approach for generating training set covering wide range of operating conditions, is presented. The ANN has been trained using back-propagation training algorithm. Investigations reveal that the dynamic performance of ANN based adaptive conventional PSS is quite insensitive to wide variations in loading conditions.

  20. HMM-Based Style Control for Expressive Speech Synthesis with Arbitrary Speaker's Voice Using Model Adaptation

    NASA Astrophysics Data System (ADS)

    Nose, Takashi; Tachibana, Makoto; Kobayashi, Takao

    This paper presents methods for controlling the intensity of emotional expressions and speaking styles of an arbitrary speaker's synthetic speech by using a small amount of his/her speech data in HMM-based speech synthesis. Model adaptation approaches are introduced into the style control technique based on the multiple-regression hidden semi-Markov model (MRHSMM). Two different approaches are proposed for training a target speaker's MRHSMMs. The first one is MRHSMM-based model adaptation in which the pretrained MRHSMM is adapted to the target speaker's model. For this purpose, we formulate the MLLR adaptation algorithm for the MRHSMM. The second method utilizes simultaneous adaptation of speaker and style from an average voice model to obtain the target speaker's style-dependent HSMMs which are used for the initialization of the MRHSMM. From the result of subjective evaluation using adaptation data of 50 sentences of each style, we show that the proposed methods outperform the conventional speaker-dependent model training when using the same size of speech data of the target speaker.

  1. Principles and Methods of Adapted Physical Education and Recreation.

    ERIC Educational Resources Information Center

    Arnheim, Daniel D.; And Others

    This text is designed for the elementary and secondary school physical educator and the recreation specialist in adapted physical education and, more specifically, as a text for college courses in adapted and corrective physical education and therapeutic recreation. The text is divided into four major divisions: scope, key teaching and therapy…

  2. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  3. Functional phase response curves: a method for understanding synchronization of adapting neurons.

    PubMed

    Cui, Jianxia; Canavier, Carmen C; Butera, Robert J

    2009-07-01

    Phase response curves (PRCs) for a single neuron are often used to predict the synchrony of mutually coupled neurons. Previous theoretical work on pulse-coupled oscillators used single-pulse perturbations. We propose an alternate method in which functional PRCs (fPRCs) are generated using a train of pulses applied at a fixed delay after each spike, with the PRC measured when the phasic relationship between the stimulus and the subsequent spike in the neuron has converged. The essential information is the dependence of the recovery time from pulse onset until the next spike as a function of the delay between the previous spike and the onset of the applied pulse. Experimental fPRCs in Aplysia pacemaker neurons were different from single-pulse PRCs, principally due to adaptation. In the biological neuron, convergence to the fully adapted recovery interval was slower at some phases than that at others because the change in the effective intrinsic period due to adaptation changes the effective phase resetting in a way that opposes and slows the effects of adaptation. The fPRCs for two isolated adapting model neurons were used to predict the existence and stability of 1:1 phase-locked network activity when the two neurons were coupled. A stability criterion was derived by linearizing a coupled map based on the fPRC and the existence and stability criteria were successfully tested in two-simulated-neuron networks with reciprocal inhibition or excitation. The fPRC is the first PRC-based tool that can account for adaptation in analyzing networks of neural oscillators.

  4. Functional Phase Response Curves: A Method for Understanding Synchronization of Adapting Neurons

    PubMed Central

    Cui, Jianxia; Canavier, Carmen C.; Butera, Robert J.

    2009-01-01

    Phase response curves (PRCs) for a single neuron are often used to predict the synchrony of mutually coupled neurons. Previous theoretical work on pulse-coupled oscillators used single-pulse perturbations. We propose an alternate method in which functional PRCs (fPRCs) are generated using a train of pulses applied at a fixed delay after each spike, with the PRC measured when the phasic relationship between the stimulus and the subsequent spike in the neuron has converged. The essential information is the dependence of the recovery time from pulse onset until the next spike as a function of the delay between the previous spike and the onset of the applied pulse. Experimental fPRCs in Aplysia pacemaker neurons were different from single-pulse PRCs, principally due to adaptation. In the biological neuron, convergence to the fully adapted recovery interval was slower at some phases than that at others because the change in the effective intrinsic period due to adaptation changes the effective phase resetting in a way that opposes and slows the effects of adaptation. The fPRCs for two isolated adapting model neurons were used to predict the existence and stability of 1:1 phase-locked network activity when the two neurons were coupled. A stability criterion was derived by linearizing a coupled map based on the fPRC and the existence and stability criteria were successfully tested in two-simulated-neuron networks with reciprocal inhibition or excitation. The fPRC is the first PRC-based tool that can account for adaptation in analyzing networks of neural oscillators. PMID:19420126

  5. An adaptive kernel smoothing method for classifying Austrosimulium tillyardianum (Diptera: Simuliidae) larval instars.

    PubMed

    Cen, Guanjun; Yu, Yonghao; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks' rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby's growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods.
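
    The following sketch shows a generic variable-bandwidth (Abramson-style) Gaussian kernel density estimate of the kind alluded to above: a pilot fixed-bandwidth estimate sets per-sample bandwidths so that sparse regions are smoothed more. The bandwidth rule, sensitivity exponent and synthetic measurements are assumptions, not the authors' selector.

    ```python
    import numpy as np

    def adaptive_kde(samples, grid, h=None, alpha=0.5):
        """Variable-bandwidth Gaussian KDE: a pilot fixed-bandwidth estimate
        sets local bandwidths (Abramson-style)."""
        x = np.asarray(samples, dtype=float)
        n = x.size
        if h is None:                                  # Silverman's rule for the pilot
            h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)
        pilot = np.mean(np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2), axis=1)
        pilot /= np.sqrt(2 * np.pi) * h                # pilot density at the samples
        g = np.exp(np.mean(np.log(pilot)))             # geometric mean
        local_h = h * (pilot / g) ** (-alpha)          # per-sample bandwidths
        u = (grid[:, None] - x[None, :]) / local_h[None, :]
        return np.mean(np.exp(-0.5 * u ** 2) / (np.sqrt(2 * np.pi) * local_h[None, :]),
                       axis=1)

    # Hypothetical usage: bimodal "head-capsule width" measurements (two instars).
    rng = np.random.default_rng(3)
    widths = np.concatenate([rng.normal(0.30, 0.02, 200), rng.normal(0.42, 0.03, 150)])
    grid = np.linspace(0.2, 0.55, 400)
    density = adaptive_kde(widths, grid)
    mask = (grid > 0.3) & (grid < 0.42)
    valley = grid[mask][np.argmin(density[mask])]      # candidate instar division
    ```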

  6. An Adaptive Kernel Smoothing Method for Classifying Austrosimulium tillyardianum (Diptera: Simuliidae) Larval Instars

    PubMed Central

    Cen, Guanjun; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks’ rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby’s growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods. PMID:26546689

  7. Millimetre Level Accuracy GNSS Positioning with the Blind Adaptive Beamforming Method in Interference Environments.

    PubMed

    Daneshmand, Saeed; Marathe, Thyagaraja; Lachapelle, Gérard

    2016-10-31

    The use of antenna arrays in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its superior capability to suppress both narrowband and wideband interference. However, the phase distortions resulting from array processing may limit the applicability of these methods for high precision applications using carrier phase based positioning techniques. This paper studies the phase distortions occurring with the adaptive blind beamforming method in which satellite angle of arrival (AoA) information is not employed in the optimization problem. To cater to non-stationary interference scenarios, the array weights of the adaptive beamformer are continuously updated. The effects of these continuous updates on the tracking parameters of a GNSS receiver are analyzed. The second part of this paper focuses on reducing the phase distortions during the blind beamforming process in order to allow the receiver to perform carrier phase based positioning by applying a constraint on the structure of the array configuration and by compensating the array uncertainties. Limitations of the previous methods are studied and a new method is proposed that keeps the simplicity of the blind beamformer structure and, at the same time, reduces tracking degradations while achieving millimetre level positioning accuracy in interference environments. To verify the applicability of the proposed method and analyze the degradations, array signals corresponding to the GPS L1 band are generated using a combination of hardware and software simulators. Furthermore, the amount of degradation and performance of the proposed method under different conditions are evaluated based on Monte Carlo simulations.

  8. Millimetre Level Accuracy GNSS Positioning with the Blind Adaptive Beamforming Method in Interference Environments

    PubMed Central

    Daneshmand, Saeed; Marathe, Thyagaraja; Lachapelle, Gérard

    2016-01-01

    The use of antenna arrays in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its superior capability to suppress both narrowband and wideband interference. However, the phase distortions resulting from array processing may limit the applicability of these methods for high precision applications using carrier phase based positioning techniques. This paper studies the phase distortions occurring with the adaptive blind beamforming method in which satellite angle of arrival (AoA) information is not employed in the optimization problem. To cater to non-stationary interference scenarios, the array weights of the adaptive beamformer are continuously updated. The effects of these continuous updates on the tracking parameters of a GNSS receiver are analyzed. The second part of this paper focuses on reducing the phase distortions during the blind beamforming process in order to allow the receiver to perform carrier phase based positioning by applying a constraint on the structure of the array configuration and by compensating the array uncertainties. Limitations of the previous methods are studied and a new method is proposed that keeps the simplicity of the blind beamformer structure and, at the same time, reduces tracking degradations while achieving millimetre level positioning accuracy in interference environments. To verify the applicability of the proposed method and analyze the degradations, array signals corresponding to the GPS L1 band are generated using a combination of hardware and software simulators. Furthermore, the amount of degradation and performance of the proposed method under different conditions are evaluated based on Monte Carlo simulations. PMID:27809252

  9. A hybrid method for optimization of the adaptive Goldstein filter

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

    2014-12-01

    The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is applied as a power of the filtering function; depending on its value, the considered areas are strongly or weakly filtered. Several variants have been developed to determine alpha adaptively using different indicators such as the coherence and the phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in the real world, and the optimal model to accurately determine the functional relationship between the indicators and alpha is also not clear. As a result, the filter always under- or over-filters and is rarely correct. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance than existing approaches.
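
    For reference, a compact patch-wise version of the underlying Goldstein filter with a coherence-driven alpha (the Baran-style rule alpha = 1 - mean coherence) is sketched below; it omits patch overlap and tapering and does not include the bias-corrected estimator or iteration proposed in the paper.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def goldstein_patch(ifg_patch, alpha):
        """Filter one complex interferogram patch in the frequency domain by
        weighting its spectrum with the (normalised) smoothed magnitude**alpha."""
        spec = np.fft.fft2(ifg_patch)
        smooth_mag = uniform_filter(np.abs(spec), size=3)
        weight = (smooth_mag / (smooth_mag.max() + 1e-12)) ** alpha
        return np.fft.ifft2(spec * weight)

    def adaptive_goldstein(ifg, coherence, patch=64):
        """Apply the filter patch-by-patch with alpha = 1 - mean coherence,
        so noisier (less coherent) areas are filtered more strongly."""
        out = np.zeros_like(ifg, dtype=complex)
        rows, cols = ifg.shape
        for r in range(0, rows, patch):
            for c in range(0, cols, patch):
                tile = ifg[r:r + patch, c:c + patch]
                alpha = 1.0 - float(np.mean(coherence[r:r + patch, c:c + patch]))
                out[r:r + patch, c:c + patch] = goldstein_patch(tile, max(alpha, 0.0))
        return out
    ```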

  10. A gradient-adaptive lattice-based complex adaptive notch filter

    NASA Astrophysics Data System (ADS)

    Zhu, Rui; Yang, Feiran; Yang, Jun

    2016-12-01

    This paper presents a new complex adaptive notch filter to estimate and track the frequency of a complex sinusoidal signal. The gradient-adaptive lattice structure instead of the traditional gradient one is adopted to accelerate the convergence rate. It is proved that the proposed algorithm results in unbiased estimations by using the ordinary differential equation approach. The closed-form expressions for the steady-state mean square error and the upper bound of step size are also derived. Simulations are conducted to validate the theoretical analysis and demonstrate that the proposed method generates considerably better convergence rates and tracking properties than existing methods, particularly in low signal-to-noise ratio environments.
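
    As a much simplified illustration of the estimation task (not the gradient-adaptive lattice algorithm of the paper), the sketch below tracks the frequency of a noisy complex sinusoid with a one-tap complex LMS predictor constrained to the unit circle; the step size and signal parameters are assumptions.

    ```python
    import numpy as np

    def track_frequency(x, mu=0.05):
        """Estimate the normalized frequency (rad/sample) of a complex sinusoid
        by adapting a one-tap predictor w ~ exp(j*omega) with a complex LMS rule."""
        w = 1.0 + 0j
        omega = np.zeros(len(x))
        for n in range(1, len(x)):
            e = x[n] - w * x[n - 1]            # one-step prediction error
            w += mu * e * np.conj(x[n - 1])    # complex LMS update
            w /= max(abs(w), 1e-12)            # keep the predictor on the unit circle
            omega[n] = np.angle(w)
        return omega

    # Hypothetical usage: noisy complex exponential at 0.3 rad/sample.
    rng = np.random.default_rng(7)
    n = np.arange(4000)
    x = np.exp(1j * 0.3 * n) + 0.3 * (rng.standard_normal(n.size)
                                      + 1j * rng.standard_normal(n.size))
    print(track_frequency(x)[-1])              # should settle near 0.3
    ```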

  11. Sensorless adaptive optics system based on image second moment measurements

    NASA Astrophysics Data System (ADS)

    Agbana, Temitope E.; Yang, Huizhen; Soloviev, Oleg; Vdovin, Gleb; Verhaegen, Michel

    2016-04-01

    This paper presents experimental results of a static aberration control algorithm based on the linear relation between the mean square of the aberration gradient and the second moment of the point spread function for the generation of the control signal input for a deformable mirror (DM). Results presented in the work of Yang et al. [1] suggested good feasibility of the method for correction of static aberration for point and extended sources. However, a practical realisation of the algorithm has not been demonstrated. The goal of this article is to check the method experimentally in the real conditions of the present noise, finite dynamic range of the imaging camera, and system misalignments. The experiments have shown a strong dependence of the linearity of the relationship on image noise and overall image intensity, which depends on the aberration level. Also, the restoration capability and the rate of convergence of the AO system for aberrations generated by the deformable mirror are experimentally investigated. The presented approach as well as the experimental results find practical application in the compensation of static aberration in adaptive microscopic imaging systems.
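
    A small helper for the metric driving the correction, the intensity-weighted second moment of the focal-plane image about its centroid, is sketched below; the simple background subtraction reflects the noise sensitivity reported above, and the clipping rule is an assumption.

    ```python
    import numpy as np

    def image_second_moment(img, background=0.0):
        """Intensity-weighted second moment of an image about its centroid,
        after removing a constant background estimate."""
        im = np.clip(np.asarray(img, dtype=float) - background, 0.0, None)
        total = im.sum()
        y, x = np.indices(im.shape)
        cx = (x * im).sum() / total
        cy = (y * im).sum() / total
        return (((x - cx) ** 2 + (y - cy) ** 2) * im).sum() / total

    # Hypothetical usage on a synthetic Gaussian spot (metric grows as it blurs).
    yy, xx = np.indices((128, 128))
    psf = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 5.0 ** 2))
    print(image_second_moment(psf))
    ```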

  12. Preliminary Exploration of Adaptive State Predictor Based Human Operator Modeling

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Gregory, Irene M.

    2012-01-01

    Control-theoretic modeling of the human operator dynamic behavior in manual control tasks has a long and rich history. In the last two decades, there has been a renewed interest in modeling the human operator. There has also been significant work on techniques used to identify the pilot model of a given structure. The purpose of this research is to attempt to go beyond pilot identification based on collected experimental data and to develop a predictor of pilot behavior. An experiment was conducted to quantify the effects of changing aircraft dynamics on an operator s ability to track a signal in order to eventually model a pilot adapting to changing aircraft dynamics. A gradient descent estimator and a least squares estimator with exponential forgetting used these data to predict pilot stick input. The results indicate that individual pilot characteristics and vehicle dynamics did not affect the accuracy of either estimator method to estimate pilot stick input. These methods also were able to predict pilot stick input during changing aircraft dynamics and they may have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot.
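
    A minimal sketch of a least squares estimator with exponential forgetting is given below, predicting the stick input from a short history of tracking errors; the regressor choice, forgetting factor and surrogate data are assumptions rather than the study's model structure.

    ```python
    import numpy as np

    class ForgettingRLS:
        """Recursive least squares with exponential forgetting factor lam."""
        def __init__(self, n_params, lam=0.98, delta=100.0):
            self.w = np.zeros(n_params)
            self.P = delta * np.eye(n_params)
            self.lam = lam

        def update(self, phi, y):
            # phi: regressor vector, y: observed output (e.g., pilot stick input)
            Pphi = self.P @ phi
            k = Pphi / (self.lam + phi @ Pphi)      # gain vector
            self.w += k * (y - self.w @ phi)
            self.P = (self.P - np.outer(k, Pphi)) / self.lam
            return self.w @ phi                      # one-step prediction

    # Hypothetical usage: predict stick input from the last 3 tracking errors.
    rng = np.random.default_rng(0)
    errors = rng.standard_normal(500)
    stick = np.convolve(errors, [0.5, 0.3, -0.2], mode="same")  # surrogate response
    rls = ForgettingRLS(n_params=3)
    for n in range(3, len(stick)):
        phi = errors[n - 3:n][::-1]
        rls.update(phi, stick[n])
    ```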

  13. Serial identification of EEG patterns using adaptive wavelet-based analysis

    NASA Astrophysics Data System (ADS)

    Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.

    2013-10-01

    The problem of recognizing specific oscillatory patterns in electroencephalograms with the continuous wavelet transform is discussed. Aiming to improve the abilities of wavelet-based tools, we propose a serial adaptive method for the sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables extraction of the most informative features of the recognized structures. Different ways of increasing the quality of pattern recognition within the proposed serial adaptive technique are considered.

  14. LDRD Final Report: Adaptive Methods for Laser Plasma Simulation

    SciTech Connect

    Dorr, M R; Garaizar, F X; Hittinger, J A

    2003-01-29

    The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are ''hydrodynamically large'', i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an

  15. Stability of a modified Peaceman-Rachford method for the paraxial Helmholtz equation on adaptive grids

    NASA Astrophysics Data System (ADS)

    Sheng, Qin; Sun, Hai-wei

    2016-11-01

    This study concerns the asymptotic stability of an eikonal, or ray, transformation based Peaceman-Rachford splitting method for solving the paraxial Helmholtz equation with high wave numbers. Arbitrary nonuniform grids are considered in transverse and beam propagation directions. The differential equation targeted has been used for modeling propagations of high intensity laser pulses over a long distance without diffractions. Self-focusing of high intensity beams may be balanced with the de-focusing effect of created ionized plasma channel in the situation, and applications of grid adaptations are frequently essential. It is shown rigorously that the fully discretized oscillation-free decomposition method on arbitrary adaptive grids is asymptotically stable with a stability index one. Simulation experiments are carried out to illustrate our concern and conclusions.

  16. Research on a Pulmonary Nodule Segmentation Method Combining Fast Self-Adaptive FCM and Classification

    PubMed Central

    Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai

    2015-01-01

    The key problem in computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues quickly and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and single neighboring pixels and the spatial similarity between central pixels and their neighborhood, and it effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms. PMID:25945120
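
    For context, a compact sketch of the standard fuzzy C-means iteration that such methods build on (without the enhanced spatial function described above) is given below; the cluster count, fuzzifier and synthetic intensity data are assumptions.

    ```python
    import numpy as np

    def fcm(pixels, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
        """Standard fuzzy C-means on a flat array of grayscale values: alternate
        between the membership update and the cluster-centre update."""
        rng = np.random.default_rng(seed)
        x = np.asarray(pixels, dtype=float).ravel()
        u = rng.random((n_clusters, x.size))
        u /= u.sum(axis=0)                             # memberships sum to 1
        for _ in range(max_iter):
            um = u ** m
            centers = (um @ x) / um.sum(axis=1)
            dist = np.abs(x[None, :] - centers[:, None]) + 1e-12
            new_u = 1.0 / (dist ** (2 / (m - 1)) *
                           np.sum(dist ** (-2 / (m - 1)), axis=0, keepdims=True))
            converged = np.max(np.abs(new_u - u)) < tol
            u = new_u
            if converged:
                break
        return centers, u

    # Hypothetical usage on a synthetic bimodal "CT slice" intensity sample.
    rng = np.random.default_rng(1)
    img = np.concatenate([rng.normal(-700, 30, 5000), rng.normal(40, 20, 5000)])
    centers, memberships = fcm(img, n_clusters=2)
    ```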

  17. An a posteriori-driven adaptive Mixed High-Order method with application to electrostatics

    NASA Astrophysics Data System (ADS)

    Di Pietro, Daniele A.; Specogna, Ruben

    2016-12-01

    In this work we propose an adaptive version of the recently introduced Mixed High-Order method and showcase its performance on a comprehensive set of academic and industrial problems in computational electromagnetism. The latter include, in particular, the numerical modeling of comb-drive and MEMS devices. Mesh adaptation is driven by newly derived, residual-based error estimators. The resulting method has several advantageous features: It supports fairly general meshes, it enables arbitrary approximation orders, and has a moderate computational cost thanks to hybridization and static condensation. The a posteriori-driven mesh refinement is shown to significantly enhance the performance on problems featuring singular solutions, allowing to fully exploit the high-order of approximation.

  18. Adapting Western research methods to indigenous ways of knowing.

    PubMed

    Simonds, Vanessa W; Christopher, Suzanne

    2013-12-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid.

  19. Evidence-Based Practice in Adapted Physical Education

    ERIC Educational Resources Information Center

    Jin, Jooyeon; Yun, Joonkoo

    2010-01-01

    Although implementation of evidence-based practice (EBP) has been strongly advocated by federal legislation as well as school districts in recent years, the concept has not been well accepted in adapted physical education (APE), perhaps due to a lack of understanding of the central notion of EBP. The purpose of this article is to discuss how APE…

  20. Adaptive Knowledge Management of Project-Based Learning

    ERIC Educational Resources Information Center

    Tilchin, Oleg; Kittany, Mohamed

    2016-01-01

    The goal of an approach to Adaptive Knowledge Management (AKM) of project-based learning (PBL) is to intensify subject study through guiding, inducing, and facilitating the development of knowledge, accountability skills, and collaborative skills of students. Knowledge development is attained by knowledge acquisition, knowledge sharing, and knowledge…

  1. Teaching a Biotechnology Curriculum Based on Adapted Primary Literature

    ERIC Educational Resources Information Center

    Falk, Hedda; Brill, Gilat; Yarden, Anat

    2008-01-01

    Adapted primary literature (APL) refers to an educational genre specifically designed to enable the use of research articles for learning biology in high school. The present investigation focuses on the pedagogical content knowledge (PCK) of four high-school biology teachers who enacted an APL-based curriculum in biotechnology. Using a…

  2. An Adaptive Evaluation Structure for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Welsh, William A.

    Adaptive Evaluation Structure (AES) is a set of linked computer programs designed to increase the effectiveness of interactive computer-assisted instruction at the college level. The package has four major features, the first of which is based on a prior cognitive inventory and on the accuracy and pace of student responses. AES adjusts materials…

  3. Feasibility of an online adaptive replanning method for cranial frameless intensity-modulated radiosurgery

    SciTech Connect

    Calvo, Juan Francisco; San José, Sol; Garrido, LLuís; Puertas, Enrique; Moragues, Sandra; Pozo, Miquel; Casals, Joan

    2013-10-01

    To introduce an approach for online adaptive replanning (i.e., dose-guided radiosurgery) in frameless stereotactic radiosurgery, when a 6-dimensional (6D) robotic couch is not available in the linear accelerator (linac). Cranial radiosurgical treatments are planned in our department using an intensity-modulated technique. Patients are immobilized using a thermoplastic mask. A cone-beam computed tomography (CBCT) scan is acquired after the initial laser-based patient setup (CBCT_setup). The online adaptive replanning procedure we propose consists of a 6D registration-based mapping of the reference plan onto the actual CBCT_setup, followed by a reoptimization of the beam fluences (“6D plan”) to achieve dosage similar to that originally intended, while the patient is lying on the linac couch and the original beam arrangement is kept. The performance of the proposed online adaptive method was retrospectively analyzed for 16 patients with 35 targets treated with a CBCT-based frameless intensity-modulated technique. A simulation of the reference plan mapped onto the actual CBCT_setup using only the 4 degrees of freedom supported by the linac couch was also generated for each case (4D plan). Target coverage (D99%) and conformity index values of the 6D and 4D plans were compared with the corresponding values of the reference plans. Although the 4D-based approach does not always assure target coverage (D99% between 72% and 103%), the proposed online adaptive method gave full coverage in all cases analyzed and a conformity index similar to that planned. The dose-guided radiosurgery approach is effective in assuring the dose coverage and conformity of an intracranial target volume, avoiding repeated “trial and error” repositioning of the patient inside the mask to remove pitch and roll errors when a robotic table is not available.

  4. [Novel method of noise power spectrum measurement for computed tomography images with adaptive iterative reconstruction method].

    PubMed

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Hara, Takanori; Terakawa, Shoichi; Yokomachi, Kazushi; Fujioka, Chikako; Kiguchi, Masao; Ishifuro, Minoru

    2012-01-01

    Adaptive iterative reconstruction techniques (IRs) can decrease image noise in computed tomography (CT) and are expected to contribute to reduction of the radiation dose. To evaluate the performance of IRs, the conventional two-dimensional (2D) noise power spectrum (NPS) is widely used. However, when an IR produces an NPS drop at all spatial frequencies (which resembles the NPS change caused by a dose increase), the conventional method cannot evaluate the noise property correctly because it does not reflect the volumetric nature of CT data. The purpose of our study was to develop a new method for NPS measurement that can be adapted to IRs. Our method utilizes thick multi-planar reconstruction (MPR) images. Thick images are generally made by averaging CT volume data in the direction perpendicular to an MPR plane (e.g., the z-direction for an axial MPR plane). By using this averaging as a cutter for the 3D NPS, we can obtain an adequate 2D extracted NPS (eNPS) from the 3D NPS. We applied this method to IR images generated with adaptive iterative dose reduction 3D (AIDR 3D, Toshiba) to investigate its validity. A water phantom 24 cm in diameter was scanned at 120 kV and 200 mAs with a 320-row CT (Aquilion ONE, Toshiba). The results showed that the adequate thickness of MPR images for eNPS was more than 25.0 mm. Our new NPS measurement method utilizing thick MPR images was accurate and effective for evaluating the noise reduction effects of IRs.
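
    A minimal sketch of the kind of measurement described follows: average consecutive axial slices to emulate a thick MPR image and compute an ensemble-averaged 2D NPS from mean-subtracted noise-only ROIs. The helper names, slice-averaging rule, and ROI handling are illustrative assumptions, not the authors' exact protocol.

```python
import numpy as np

def thick_mpr(volume, z_thickness_mm, slice_spacing_mm):
    """Average consecutive axial slices over z to emulate a thick MPR image stack."""
    n = max(1, int(round(z_thickness_mm / slice_spacing_mm)))
    n_thick = volume.shape[0] // n
    return volume[:n_thick * n].reshape(n_thick, n, *volume.shape[1:]).mean(axis=1)

def nps_2d(noise_rois, pixel_size_mm):
    """Ensemble-averaged 2-D noise power spectrum from square noise-only ROIs."""
    nps = np.zeros(noise_rois[0].shape)
    for roi in noise_rois:
        roi = roi - roi.mean()                    # remove the DC / mean level
        f = np.fft.fftshift(np.fft.fft2(roi))
        nps += np.abs(f) ** 2
    ny, nx = noise_rois[0].shape
    nps *= (pixel_size_mm ** 2) / (nx * ny * len(noise_rois))
    return nps
```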

  5. Self-adaptive image denoising based on bidimensional empirical mode decomposition (BEMD).

    PubMed

    Guo, Song; Luan, Fangjun; Song, Xiaoyu; Li, Changyou

    2014-01-01

    To better analyze images corrupted by Gaussian white noise, it is necessary to remove the noise before image processing. In this paper, we propose a self-adaptive image denoising method based on bidimensional empirical mode decomposition (BEMD). Firstly, a normal probability plot confirms that the 2D-IMFs of Gaussian white noise images decomposed by BEMD follow the normal distribution. Secondly, an energy estimation equation for the ith 2D-IMF (i = 2, 3, 4, …) is proposed by reference to that of the ith IMF (i = 2, 3, 4, …) obtained by empirical mode decomposition (EMD). Thirdly, the self-adaptive threshold of each 2D-IMF is calculated. Finally, the algorithm of the self-adaptive image denoising method based on BEMD is described. From a practical perspective, the method is applied to denoising magnetic resonance images (MRI) of the brain, and the results show that it has better denoising performance than other methods.
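
    The thresholding stage can be sketched as below, assuming the 2-D IMFs have already been produced by some BEMD implementation (the decomposition itself is not shown). The noise-energy model constants and the universal-threshold rule are the generic EMD-denoising recipe, used here as an assumption; the paper's exact energy estimation equation is not reproduced.

```python
import numpy as np

def denoise_from_imfs(imfs, residue):
    """Soft-threshold each 2-D IMF with an adaptively modeled noise energy.

    `imfs` is a list of 2-D arrays from a BEMD decomposition and `residue` is
    the final trend. The constants below (beta, rho) are commonly quoted
    EMD noise-model values and stand in for the paper's energy equation.
    """
    # noise energy of the first IMF, estimated robustly from its median
    e1 = (np.median(np.abs(imfs[0])) / 0.6745) ** 2
    beta, rho = 0.719, 2.01
    out = residue.astype(float).copy()
    for k, imf in enumerate(imfs, start=1):
        ek = e1 if k == 1 else (e1 / beta) * rho ** (-k)      # modeled noise energy of IMF k
        thr = np.sqrt(2.0 * ek * np.log(imf.size))            # universal threshold
        out += np.sign(imf) * np.maximum(np.abs(imf) - thr, 0.0)  # soft thresholding
    return out
```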

  6. Biological Bases for Radiation Adaptive Responses in the Lung

    SciTech Connect

    Scott, Bobby R.; Lin, Yong; Wilder, Julie; Belinsky, Steven

    2015-03-01

    Our main research objective was to determine the biological bases for low-dose, radiation-induced adaptive responses in the lung and to use the knowledge gained to produce an improved risk model for radiation-induced lung cancer that accounts for activated natural protection, genetic influences, and the role of epigenetic regulation (epiregulation). Currently, low-dose radiation risk assessment is based on the linear-no-threshold hypothesis, which is now known to be unsupported by a large volume of data.

  7. Searching for adaptive traits in genetic resources - phenology based approach

    NASA Astrophysics Data System (ADS)

    Bari, Abdallah

    2015-04-01

    Phenology is an important plant trait, not only for assessing and forecasting food production but also for searching genebanks for adaptive traits. Among the phenological parameters we have been considering in the search for such adaptive and rare traits are the onset (sowing period) and the seasonality (growing period). An application is currently being developed as part of the focused identification of germplasm strategy (FIGS) approach to use climatic data to identify crop growing seasons and characterize them in terms of onset and duration. These approximations of growing-period characteristics can then be used to estimate flowering and maturity dates for dryland crops, such as wheat, barley, faba bean, lentil, and chickpea, and to assess, among others, phenology-related traits such as days to heading [dhe] and grain filling period [gfp]. The approach followed here is based on first calculating long-term average daily temperatures by fitting a curve to the monthly data over days from the beginning of the year. Prior to the identification of these phenological stages, the onset is first extracted from onset integer raster GIS layers developed from a model of the growing period that considers both moisture and temperature limitations. The paper presents examples of real applications of the approach to search for rare and adaptive traits.
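
    The curve-fitting step described above could be sketched as follows: fit a simple annual cycle to twelve long-term monthly mean temperatures and evaluate it on every day of the year. The single-harmonic model, mid-month day positions, and initial guesses are illustrative assumptions, not the FIGS tooling's actual curve form.

```python
import numpy as np
from scipy.optimize import curve_fit

# approximate mid-month day-of-year positions for the 12 monthly means
MONTH_MIDPOINTS = np.array([15, 46, 74, 105, 135, 166, 196, 227, 258, 288, 319, 349])

def annual_cycle(day, mean, amplitude, phase):
    """One-harmonic model of the annual temperature cycle."""
    return mean + amplitude * np.cos(2.0 * np.pi * (day - phase) / 365.25)

def daily_from_monthly(monthly_means):
    """Fit the annual cycle to 12 long-term monthly means and return 365 daily values."""
    popt, _ = curve_fit(
        annual_cycle, MONTH_MIDPOINTS, monthly_means,
        p0=[np.mean(monthly_means), np.ptp(monthly_means) / 2.0, 200.0],
    )
    days = np.arange(1, 366)
    return annual_cycle(days, *popt)
```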

  8. A Comparison of a Brain-Based Adaptive System and a Manual Adaptable System for Invoking Automation

    NASA Technical Reports Server (NTRS)

    Bailey, Nathan R.; Scerbo, Mark W.; Freeman, Frederick G.; Mikulka, Peter J.; Scott, Lorissa A.

    2004-01-01

    Two experiments are presented that examine alternative methods for invoking automation. In each experiment, participants were asked to perform simultaneously a monitoring task and a resource management task as well as a tracking task that changed between automatic and manual modes. The monitoring task required participants to detect failures of an automated system to correct aberrant conditions under either high or low system reliability. Performance on each task was assessed as well as situation awareness and subjective workload. In the first experiment, half of the participants worked with a brain-based system that used their EEG signals to switch the tracking task between automatic and manual modes. The remaining participants were yoked to participants from the adaptive condition and received the same schedule of mode switches, but their EEG had no effect on the automation. Within each group, half of the participants were assigned to either the low or high reliability monitoring task. In addition, within each combination of automation invocation and system reliability, participants were separated into high and low complacency potential groups. The results revealed no significant effects of automation invocation on the performance measures; however, the high complacency individuals demonstrated better situation awareness when working with the adaptive automation system. The second experiment was the same as the first with one important exception. Automation was invoked manually. Thus, half of the participants pressed a button to invoke automation for 10 s. The remaining participants were yoked to participants from the adaptable condition and received the same schedule of mode switches, but they had no control over the automation. The results showed that participants who could invoke automation performed more poorly on the resource management task and reported higher levels of subjective workload. Further, those who invoked automation more frequently performed

  9. Method and system for spatial data input, manipulation and distribution via an adaptive wireless transceiver

    NASA Technical Reports Server (NTRS)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for spatial data input, manipulation, and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver for automatically and adaptively controlling wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both short and long distances. The wireless transceiver is automatically adaptive, and wireless devices can send and receive wireless digital and analog data from various sources rapidly, in real time, via available networks and network services.

  10. Failure of Anisotropic Unstructured Mesh Adaption Based on Multidimensional Residual Minimization

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Kleb, William L.

    2003-01-01

    An automated anisotropic unstructured mesh adaptation strategy is proposed, implemented, and assessed for the discretization of viscous flows. The adaption criterion is based upon the minimization of the residual fluctuations of a multidimensional upwind viscous flow solver. For scalar advection, this adaption strategy has been shown to use fewer grid points than gradient-based adaption, naturally aligning mesh edges with discontinuities and characteristic lines. The adaption utilizes a compact stencil and is local in scope, with four fundamental operations: point insertion, point deletion, edge swapping, and nodal displacement. Evaluation of the solution-adaptive strategy is performed for a two-dimensional blunt-body laminar wind tunnel case at Mach 10. The results demonstrate that the strategy suffers from a lack of robustness, particularly with regard to alignment of the bow shock in the vicinity of the stagnation streamline. In general, constraining the adaption to such a degree as to maintain robustness results in negligible improvement to the solution. Because the present method fails to consistently or significantly improve the flow solution, it is rejected in favor of simple uniform mesh refinement.

  11. Adaptive optics in digital micromirror based confocal microscopy

    NASA Astrophysics Data System (ADS)

    Pozzi, P.; Wilding, D.; Soloviev, O.; Vdovin, G.; Verhaegen, M.

    2016-03-01

    This proceeding reports early results in the development of a new technique for adaptive optics in confocal microscopy. The term adaptive optics refers to the branch of optics in which an active element in the optical system is used to correct inhomogeneities in the media through which light propagates. In its most classical form, mostly used in astronomical imaging, adaptive optics is achieved through a closed loop in which the actuators of a deformable mirror are driven by a wavefront sensor. This approach is severely limited in fluorescence microscopy, as the use of a wavefront sensor requires the presence of a bright, point-like source in the field of view, a condition rarely satisfied in microscopy samples. Previously reported approaches to adaptive optics in fluorescence microscopy are therefore limited to the inclusion of fluorescent microspheres in the sample, to use as bright stars for wavefront sensors, or to time-consuming sensorless optimization procedures requiring several seconds of optimization before the acquisition of a single image. We propose an alternative approach to the problem, implementing sensorless adaptive optics in a programmable array microscope. A programmable array microscope is a microscope based on a digital micromirror device, in which the single elements of the micromirror act both as point sources and as pinholes.

  12. A Novel Adaptive Frequency Estimation Algorithm Based on Interpolation FFT and Improved Adaptive Notch Filter

    NASA Astrophysics Data System (ADS)

    Shen, Ting-ao; Li, Hua-nan; Zhang, Qi-xin; Li, Ming

    2017-02-01

    The convergence rate and the continuous tracking precision are two main problems of the existing adaptive notch filter (ANF) for frequency tracking. To address them, the frequency is first detected by interpolation FFT, which overcomes the slow convergence of the ANF. Then, drawing on the idea of negative feedback, an evaluation factor is designed to monitor the ANF parameters and maintain continuously high frequency-tracking accuracy. On this basis, a novel adaptive frequency estimation algorithm based on interpolation FFT and an improved ANF is put forward. Its basic idea, specific measures, and implementation steps are described in detail. The proposed algorithm provides fast estimation of the signal frequency, higher accuracy, and better generality. Simulation results verified the superiority and validity of the proposed algorithm compared with the original algorithms.
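
    The coarse FFT-based initializer can be illustrated with a short sketch: locate the spectral peak and refine it by quadratic interpolation of the log-magnitude around the peak bin. This is a generic interpolation-FFT estimator offered as an assumption about the first stage; the improved adaptive notch filter refinement is not reproduced.

```python
import numpy as np

def interp_fft_frequency(x, fs):
    """Coarse frequency estimate of the dominant sinusoid via parabolic
    interpolation of the log-magnitude spectrum around the FFT peak bin.
    """
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n))) + 1e-12
    k = int(np.argmax(spec[1:-1])) + 1                 # peak bin, excluding DC and Nyquist
    a, b, c = np.log(spec[k - 1]), np.log(spec[k]), np.log(spec[k + 1])
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)          # sub-bin offset in [-0.5, 0.5]
    return (k + delta) * fs / n

# usage sketch: fs = 10_000.0; f_hat = interp_fft_frequency(signal, fs)
```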

  13. Grid generation and adaptation for the Direct Simulation Monte Carlo Method. [for complex flows past wedges and cones

    NASA Technical Reports Server (NTRS)

    Olynick, David P.; Hassan, H. A.; Moss, James N.

    1988-01-01

    A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.

  14. Adaptation of a-Stratified Method in Variable Length Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Wen, Jian-Bing; Chang, Hua-Hua; Hau, Kit-Tai

    Test security has often been a problem in computerized adaptive testing (CAT) because the traditional wisdom of item selection overly exposes high discrimination items. The a-stratified (STR) design advocated by H. Chang and his collaborators, which uses items of less discrimination in earlier stages of testing, has been shown to be very…

  15. Systems and Methods for Derivative-Free Adaptive Control

    NASA Technical Reports Server (NTRS)

    Yucelen, Tansel (Inventor); Kim, Kilsoo (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    An adaptive control system is disclosed. The control system can control uncertain dynamic systems. The control system can employ one or more derivative-free adaptive control architectures. The control system can further employ one or more derivative-free weight update laws. The derivative-free weight update laws can comprise a time-varying estimate of an ideal vector of weights. The control system of the present invention can therefore quickly stabilize systems that undergo sudden changes in dynamics, caused by, for example, sudden changes in weight. Embodiments of the present invention can also provide a less complex control system than existing adaptive control systems. The control system can control aircraft and other dynamic systems, such as, for example, those with non-minimum phase dynamics.

  16. Study of adaptive methods for data compression of scanner data

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.

  17. Development and implementation of a coupled computational muscle force optimization bone shape adaptation modeling method.

    PubMed

    Florio, C S

    2015-04-01

    Improved methods to analyze and compare the muscle-based influences that drive bone strength adaptation can aid in understanding the wide array of experimental observations about the effectiveness of various mechanical countermeasures to losses in bone strength that result from age, disuse, and reduced-gravity environments. The coupling of gradient-based and gradientless numerical optimization routines with finite element methods in this work results in a modeling technique that determines the individual magnitudes of the muscle forces acting in a multisegment musculoskeletal system and predicts the improvement in stress-state uniformity, and therefore strength, of a targeted bone through simulated local cortical material accretion and resorption. With a performance-based stopping criterion and no experimentally derived or system-specific parameters, and because it includes the direct and indirect effects of muscles attached to the targeted bone as well as to its neighbors, the method can consistently quantify shape and strength alterations resulting from a wide range of boundary conditions. As demonstrated in a representative parametric study, the developed technique effectively provides a clearer foundation for the study of the relationships between muscle forces and the induced changes in bone strength. Its use can lead to better control of such adaptive phenomena.

  18. Modeling Molecular Systems at Extreme Pressure by an Extension of the Polarizable Continuum Model (PCM) Based on the Symmetry-Adapted Cluster-Configuration Interaction (SAC-CI) Method: Confined Electronic Excited States of Furan as a Test Case.

    PubMed

    Fukuda, Ryoichi; Ehara, Masahiro; Cammi, Roberto

    2015-05-12

    Novel molecular photochemistry can be developed by combining high pressure and laser irradiation. For studying such high-pressure effects on confined electronic ground and excited states, we extend the PCM (polarizable continuum model) SAC (symmetry-adapted cluster) and SAC-CI (SAC-configuration interaction) methods to the PCM-XP (extreme pressure) framework. By using the PCM-XP SAC/SAC-CI method, molecular systems in various electronic states can be confined by polarizable media in a smooth and flexible way. The PCM-XP SAC/SAC-CI method is applied to a furan (C4H4O) molecule in cyclohexane at high pressure (1-60 GPa). The relationship between the calculated free energy and the cavity volume can be approximately represented with the Murnaghan equation of state. The excitation energies of furan in cyclohexane show blueshifts with increasing pressure, and the extents of the blueshifts depend significantly on the character of the excitations. Particularly large confinement effects are found in the Rydberg states. The energy ordering of the lowest Rydberg and valence states alters under high pressure. The pressure effects on the electronic structure may be classified into two contributions: a confinement of the molecular orbitals and a suppression of the mixing between the valence and Rydberg configurations. The valence or Rydberg character of an excited state is, therefore, enhanced under high pressure.
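
    For reference, a standard statement of the Murnaghan equation of state mentioned above is given below in LaTeX, with V0 the equilibrium volume, B0 the bulk modulus, and B0' its pressure derivative. The paper fits the analogous free-energy versus cavity-volume relation, which is not reproduced here.

```latex
% Murnaghan equation of state (standard pressure and energy forms)
p(V) = \frac{B_0}{B_0'}\left[\left(\frac{V_0}{V}\right)^{B_0'} - 1\right],
\qquad
E(V) = E_0 + \frac{B_0 V}{B_0'}\left[\frac{(V_0/V)^{B_0'}}{B_0' - 1} + 1\right]
       - \frac{B_0 V_0}{B_0' - 1}.
```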

  19. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g., a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  20. A high-throughput multiplex method adapted for GMO detection.

    PubMed

    Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique

    2008-12-24

    A high-throughput multiplex assay for the detection of genetically modified organisms (GMO) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") from taxon-specific endogenous reference genes, GMO constructs, screening targets, construct-specific and event-specific targets, and, finally, donor organisms. This assay avoids certain shortcomings of the multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.

  1. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods

    NASA Astrophysics Data System (ADS)

    Bause, M.; Knabner, P.

    2004-06-01

    We present adaptive mixed hybrid finite element discretizations of the Richards equation, a nonlinear parabolic partial differential equation modeling the flow of water into a variably saturated porous medium. The approach simultaneously constructs approximations of the flux and the pressure head in Raviart-Thomas spaces. The resulting nonlinear systems of equations are solved by a Newton method. For the linear problems of the Newton iteration a multigrid algorithm is used. We consider two different kinds of error indicators for space adaptive grid refinement: superconvergence and residual based indicators. They can be calculated easily by means of the available finite element approximations. This seems attractive for computations since no additional (sub-)problems have to be solved. Computational experiments conducted for realistic water table recharge problems illustrate the effectiveness and robustness of the approach.
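
    For context, the Richards equation referred to above is commonly written in the mixed water-content/pressure-head form shown below; this is the standard textbook statement, not a formula taken from the paper.

```latex
% Richards equation, mixed form: \theta water content, \psi pressure head,
% K hydraulic conductivity, z vertical coordinate
\frac{\partial \theta(\psi)}{\partial t}
  - \nabla \cdot \bigl( K(\theta(\psi)) \, \nabla (\psi + z) \bigr) = 0 .
```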

  2. Assessing Implementation Fidelity and Adaptation in a Community-Based Childhood Obesity Prevention Intervention

    ERIC Educational Resources Information Center

    Richards, Zoe; Kostadinov, Iordan; Jones, Michelle; Richard, Lucie; Cargo, Margaret

    2014-01-01

    Little research has assessed the fidelity, adaptation or integrity of activities implemented within community-based obesity prevention initiatives. To address this gap, a mixed-method process evaluation was undertaken in the context of the South Australian Obesity Prevention and Lifestyle (OPAL) initiative. An ecological coding procedure assessed…

  3. Adaptive reproducing kernel particle method for extraction of the cortical surface.

    PubMed

    Xu, Meihe; Thompson, Paul M; Toga, Arthur W

    2006-06-01

    We propose a novel adaptive approach based on the Reproducing Kernel Particle Method (RKPM) to extract the cortical surfaces of the brain from three-dimensional (3-D) magnetic resonance images (MRIs). To formulate the discrete equations of the deformable model, a flexible particle shape function is employed in the Galerkin approximation of the weak form of the equilibrium equations. The proposed support generation method ensures that the supports of all particles cover the entire computational domain. The deformable model is adaptively adjusted by dilating the shape function and by inserting or merging particles in high-curvature regions or regions stopped by the target boundary. The shape function of the particle, with a dilation parameter, is adaptively constructed in response to particle insertion or merging. The proposed method offers flexibility in representing highly convolved structures and in refining the deformable models. Self-intersection of the surface during evolution is prevented by tracing backward along the gradient descent direction from the crest interface of the distance field, which is computed by fast marching. These operations involve a significant computational cost. The initial model for the deformable surface is simple and requires no prior knowledge of the segmented structure. No specific template is required, e.g., an average cortical surface obtained from many subjects. The extracted cortical surface efficiently localizes the depths of the cerebral sulci, unlike some other active surface approaches that penalize regions of high curvature. Comparisons with manually segmented landmark data are provided to demonstrate the high accuracy of the proposed method. We also compare the proposed method to the finite element method and to a commonly used cortical surface extraction approach, the CRUISE method. We also show that the independence of the shape functions of the RKPM from the underlying mesh enhances the convergence speed of the deformable

  4. [Adaptive moving averaging based estimation of single event-related potentials].

    PubMed

    Qi, C; Liang, D; Jiang, X

    2001-03-01

    Event-related potentials (ERPs) are pertinent to medical research and clinical diagnosis. Estimation of single event-related potentials (sERP) is the objective of ERP processing. A new technique, an adaptive moving-averaging-based method for estimating sERP, is presented. After analyzing the zero-crossing properties of the background noise, the window length of the moving average is adaptively set according to the maximum width of the impulse noise for each recorded raw data set. Experiments with real recorded data demonstrate excellent sERP estimation performance, so the proposed method is well suited to sERP processing.
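
    One way to read the adaptive rule is sketched below: take the widest interval between zero crossings of a noise-only (pre-stimulus) segment as a proxy for the maximum impulse-noise width and use it as the moving-average window. The exact rule, the pre-stimulus segmentation, and the fallback window are assumptions for illustration.

```python
import numpy as np

def adaptive_window_from_noise(noise_segment):
    """Window length = widest gap between zero crossings of a noise-only segment."""
    signs = np.sign(noise_segment)
    crossings = np.where(np.diff(signs) != 0)[0]
    if crossings.size < 2:
        return 3                                   # fallback window (assumed)
    return int(max(3, np.diff(crossings).max()))

def moving_average(x, window):
    """Centered moving average with the adaptively chosen window length."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

# usage sketch, assuming `epoch` is one raw trial and `pre` its pre-stimulus length:
# window = adaptive_window_from_noise(epoch[:pre])
# serp_estimate = moving_average(epoch, window)
```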

  5. Anti-synchronization for stochastic memristor-based neural networks with non-modeled dynamics via adaptive control approach

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Li, Lixiang; Peng, Haipeng; Kurths, Jürgen; Xiao, Jinghua; Yang, Yixian

    2015-05-01

    In this paper, exponential anti-synchronization in mean square of an uncertain memristor-based neural network is studied. The uncertain terms include bounded non-modeled dynamics and stochastic perturbations. Based on differential inclusion theory, linear matrix inequalities, Gronwall's inequality, and an adaptive control technique, an adaptive controller with update laws is developed to realize the exponential anti-synchronization. The adaptive controller adjusts its own behavior to obtain the best performance as the environment changes, giving it the ability to adapt to environmental change. Furthermore, a numerical example is provided to validate the effectiveness of the proposed method.

  6. Inner string cementing adapter and method of use

    SciTech Connect

    Helms, L.C.

    1991-08-20

    This patent describes an inner string cementing adapter for use on a work string in a well casing having floating equipment therein. It comprises mandrel means for connecting to a lower end of the work string; and sealing means adjacent to the mandrel means for substantially flatly sealing against a surface of the floating equipment without engaging a central opening in the floating equipment.

  7. An adaptive precision gradient method for optimal control.

    NASA Technical Reports Server (NTRS)

    Klessig, R.; Polak, E.

    1973-01-01

    This paper presents a gradient algorithm for unconstrained optimal control problems. The algorithm is stated in terms of numerical integration formulas, the precision of which is controlled adaptively by a test that ensures convergence. Empirical results show that this algorithm is considerably faster than its fixed-precision counterpart.

  8. Output-based mesh adaptation for high order Navier-Stokes simulations on deformable domains

    NASA Astrophysics Data System (ADS)

    Kast, Steven M.; Fidkowski, Krzysztof J.

    2013-11-01

    We present an output-based mesh adaptation strategy for Navier-Stokes simulations on deforming domains. The equations are solved with an arbitrary Lagrangian-Eulerian (ALE) approach, using a discontinuous Galerkin finite-element discretization in both space and time. Discrete unsteady adjoint solutions, derived for both the state and the geometric conservation law, provide output error estimates and drive adaptation of the space-time mesh. Spatial adaptation consists of dynamic order increment or decrement on a fixed tessellation of the domain, while a combination of coarsening and refinement is used to provide an efficient time step distribution. Results from compressible Navier-Stokes simulations in both two and three dimensions demonstrate the accuracy and efficiency of the proposed approach. In particular, the method is shown to outperform other common adaptation strategies, which, while sometimes adequate for static problems, struggle in the presence of mesh motion.

  9. Spatially adaptive stochastic numerical methods for intrinsic fluctuations in reaction-diffusion systems

    SciTech Connect

    Atzberger, Paul J.

    2010-05-01

    Stochastic partial differential equations are introduced for the continuum concentration fields of reaction-diffusion systems. The stochastic partial differential equations account for fluctuations arising from the finite number of molecules which diffusively migrate and react. Spatially adaptive stochastic numerical methods are developed for approximation of the stochastic partial differential equations. The methods allow for adaptive meshes with multiple levels of resolution, Neumann and Dirichlet boundary conditions, and domains having geometries with curved boundaries. A key issue addressed by the methods is the formulation of consistent discretizations for the stochastic driving fields at coarse-refined interfaces of the mesh and at boundaries. Methods are also introduced for the efficient generation of the required stochastic driving fields on such meshes. As a demonstration of the methods, the role of fluctuations is investigated in a biological model for microorganism direction sensing based on concentration gradients. A mechanism for fluctuation-induced spatial pattern formation is also investigated. The discretization approaches introduced for SPDEs have the potential to be widely applicable in the development of numerical methods for the study of spatially extended stochastic systems.

  10. Fostering Healthy Futures for Teens: Adaptation of an Evidence-Based Program

    PubMed Central

    Taussig, Heather; Weiler, Lindsey; Rhodes, Tara; Hambrick, Erin; Wertheimer, Robyn; Fireman, Orah; Combs, Melody

    2015-01-01

    Objective This article describes the process of adapting and implementing a complex, multicomponent intervention for a new population. Specifically, the article delineates the development and implementation of the Fostering Healthy Futures for Teens (FHF-T) program, which is an adaptation and extension of the Fostering Healthy Futures® (FHF) preventive intervention. FHF is a 9-month mentoring and skills group program for 9 to 11 year olds recently placed in foster care. Following the designation of FHF as an evidence-based intervention, there was increasing demand for the program. However, the narrow population for which FHF had demonstrated efficacy limited broader implementation of the existing intervention. FHF-T was designed to extend the reach of the program by adapting the FHF intervention for adolescents in the early years of high school who have a history of out-of-home care. Specifically, this adaptation recognizes key developmental differences between preadolescent and adolescent populations. Method After designing a program model and adapting the program components, the FHF-T mentoring program was implemented with 42 youth over 2 program years. Results Of the teens who were offered the program, 75% chose to enroll, and 88% of those graduated 9 months later. Although the program evidenced high rates of uptake and participant satisfaction, some unexpected challenges were encountered that will need to be addressed in future iterations of the program. Conclusions Too often program adaptations are made without careful consideration of important contextual issues, and too infrequently, these adapted programs are studied. Our process of program adaptation with rigorous measurement of program implementation provides a useful model for other evidence-based programs seeking thoughtful adaptation. PMID:27019678

  11. Adaptive skin segmentation via feature-based face detection

    NASA Astrophysics Data System (ADS)

    Taylor, Michael J.; Morris, Tim

    2014-05-01

    Variations in illumination can have significant effects on the apparent colour of skin, which can be damaging to the efficacy of any colour-based segmentation approach. We attempt to overcome this issue by presenting a new adaptive approach, capable of generating skin colour models at run-time. Our approach adopts a Viola-Jones feature-based face detector, in a moderate-recall, high-precision configuration, to sample faces within an image, with an emphasis on avoiding potentially detrimental false positives. From these samples, we extract a set of pixels that are likely to be from skin regions, filter them according to their relative luma values in an attempt to eliminate typical non-skin facial features (eyes, mouths, nostrils, etc.), and hence establish a set of pixels that we can be confident represent skin. Using this representative set, we train a unimodal Gaussian function to model the skin colour in the given image in the normalised rg colour space - a combination of modelling approach and colour space that benefits us in a number of ways. A generated function can subsequently be applied to every pixel in the given image, and, hence, the probability that any given pixel represents skin can be determined. Segmentation of the skin, therefore, can be as simple as applying a binary threshold to the calculated probabilities. In this paper, we touch upon a number of existing approaches, describe the methods behind our new system, present the results of its application to arbitrary images of people with detectable faces, which we have found to be extremely encouraging, and investigate its potential to be used as part of real-time systems.
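
    The modelling step described above lends itself to a compact sketch: given RGB pixels sampled from detected face regions (the Viola-Jones detection and luma-based filtering stages are omitted here), fit a single Gaussian in the normalised rg space and score every pixel of the image. Function names, the regularisation constant, and the final threshold are illustrative assumptions.

```python
import numpy as np

def to_normalized_rg(rgb_pixels):
    """Convert Nx3 RGB samples to normalized rg chromaticity coordinates."""
    s = rgb_pixels.sum(axis=1, keepdims=True).astype(float) + 1e-6
    return rgb_pixels[:, :2] / s        # r = R/(R+G+B), g = G/(R+G+B)

def fit_skin_model(skin_rgb_samples):
    """Fit a unimodal Gaussian to skin-pixel samples in rg space."""
    rg = to_normalized_rg(skin_rgb_samples)
    mean = rg.mean(axis=0)
    cov = np.cov(rg, rowvar=False) + 1e-6 * np.eye(2)   # regularize the covariance
    return mean, np.linalg.inv(cov)

def skin_probability(image_rgb, mean, inv_cov):
    """Unnormalized Gaussian skin likelihood for every pixel of an HxWx3 image."""
    rg = to_normalized_rg(image_rgb.reshape(-1, 3).astype(float))
    d = rg - mean
    maha = np.einsum("ni,ij,nj->n", d, inv_cov, d)
    return np.exp(-0.5 * maha).reshape(image_rgb.shape[:2])

# segmentation is then a binary threshold, e.g. mask = skin_probability(...) > 0.5
```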

  12. A New Method to Cancel RFI---The Adaptive Filter

    NASA Astrophysics Data System (ADS)

    Bradley, R.; Barnbaum, C.

    1996-12-01

    An increasing amount of precious radio frequency spectrum in the VHF, UHF, and microwave bands is being utilized each year to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Some radio spectral lines of astronomical interest occur outside the protected radio astronomy bands and are unobservable due to heavy interference. Conventional approaches to dealing with RFI include legislation, notch filters, RF shielding, and post-processing techniques. Although these techniques are somewhat successful, each suffers from insufficient interference cancellation. One concept of interference excision that has not been used before in radio astronomy is adaptive interference cancellation. The concept of adaptive interference canceling was first introduced in the mid-1970s as a way to reduce unwanted noise in low-frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartment of automobiles. Only recently have high-speed digital filter chips made adaptive filtering possible in a bandwidth as large as a few megahertz, finally opening the door to astronomical uses. The system consists of two receivers: the main beam of the radio telescope receives the desired signal corrupted by RFI coming in through the sidelobes, and the reference antenna receives only the RFI. The reference antenna signal is processed using a digital adaptive filter and then subtracted from the signal in the main beam, thus producing the system output. The weights of the digital filter are adjusted by an algorithm that minimizes, in a least-squares sense, the power output of the system. Through an adaptive-iterative process, the interference canceler will lock onto the RFI and the filter will adjust itself to minimize the effect of the RFI at the system output. We are building a prototype 100 MHz receiver and will measure the cancellation
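
    The described prototype is a hardware system, but the two-receiver cancellation idea can be illustrated in software with a standard least-mean-squares (LMS) adaptive filter. Tap count and step size below are illustrative choices, not the prototype's actual parameters.

```python
import numpy as np

def lms_canceller(primary, reference, n_taps=32, mu=1e-3):
    """LMS adaptive canceller.

    `primary`   : main-beam samples (astronomy signal + RFI through the sidelobes)
    `reference` : RFI-only samples from the reference antenna
    Returns the system output (primary minus the adaptively filtered reference),
    which converges toward the RFI-free signal.
    """
    w = np.zeros(n_taps)
    out = np.zeros(len(primary), dtype=float)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]        # most recent reference samples
        y = w @ x                                # filter's estimate of the RFI
        e = primary[n] - y                       # error = cleaned output sample
        w += 2.0 * mu * e * x                    # LMS weight update
        out[n] = e
    return out
```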

  13. Adaptive gamma correction based on cumulative histogram for enhancing near-infrared images

    NASA Astrophysics Data System (ADS)

    Huang, Zhenghua; Zhang, Tianxu; Li, Qian; Fang, Hao

    2016-11-01

    Histogram-based methods have proven their ability in image enhancement. To improve low contrast while preserving details and high brightness in near-infrared images, a novel method called adaptive gamma correction based on cumulative histogram (AGCCH) is studied in this paper. This image enhancement method improves the contrast of local pixels through adaptive gamma correction (AGC), which is formed by incorporating a cumulative histogram or cumulative sub-histogram into the weighting distribution. Experimental results demonstrate, both qualitatively and quantitatively, that the proposed AGCCH method performs well in brightness preservation, contrast enhancement, and detail preservation, and that it is superior to previous state-of-the-art methods.
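
    A generic adaptive gamma correction driven by a weighted cumulative histogram can be sketched as below. This follows the common AGC recipe of setting the per-intensity gamma from the weighted CDF; the AGCCH paper's cumulative sub-histogram weighting is not reproduced, and `alpha` is an illustrative parameter.

```python
import numpy as np

def adaptive_gamma_correction(gray, alpha=1.0):
    """Gamma-correct an 8-bit grayscale image with an intensity-dependent gamma
    derived from the weighted cumulative histogram (generic AGC-style sketch).
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    pdf = hist / hist.sum()
    pdf_w = pdf.max() * (pdf / pdf.max()) ** alpha       # weighted distribution
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()
    levels = np.arange(256) / 255.0
    gamma = 1.0 - cdf_w                                  # adaptive, per-intensity gamma
    lut = np.clip(255.0 * levels ** gamma, 0, 255).astype(np.uint8)
    return lut[gray]                                     # apply lookup table
```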

  14. Vivid Motor Imagery as an Adaptation Method for Head Turns on a Short-Arm Centrifuge

    NASA Technical Reports Server (NTRS)

    Newby, N. J.; Mast, F. W.; Natapoff, A.; Paloski, W. H.

    2006-01-01

    from one another. For the perceived duration of sensations, the CG group again exhibited the least amount of adaptation. However, the rates of adaptation of the PA and the MA groups were indistinguishable, suggesting that the imagined pseudostimulus appeared to be just as effective a means of adaptation as the actual stimulus. The MA group's rate of adaptation to motion sickness symptoms was also comparable to the PA group. The use of vivid motor imagery may be an effective method for adapting to the illusory sensations and motion sickness symptoms produced by cross-coupled stimuli. For space-based AG applications, this technique may prove quite useful in retaining astronauts considered highly susceptible to motion sickness as it reduces the number of actual CCS required to attain adaptation.

  15. The use of the spectral method within the fast adaptive composite grid method

    SciTech Connect

    McKay, S.M.

    1994-12-31

    The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on grids with different discretizations and using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the ensuing accuracy from this hybrid method outside of the subdomain will be investigated.

  16. An adaptive grid/Navier-Stokes methodology for the calculation of nozzle afterbody base flows with a supersonic freestream

    NASA Technical Reports Server (NTRS)

    Williams, Morgan; Lim, Dennis; Ungewitter, Ronald

    1993-01-01

    This paper describes an adaptive grid method for base flows in a supersonic freestream. The method is based on the direct finite-difference statement of the equidistribution principle. The weighting factor is a combination of the Mach number, density, and velocity first-derivative gradients in the radial direction. Two key ideas of the method are to smooth the weighting factor by using a type of implicit smoothing and to allow boundary points to move in the grid adaptation process. An AGARD nozzle afterbody base flow configuration is used to demonstrate the performance of the adaptive grid methodology. Computed base pressures are compared to experimental data. The adapted grid solutions offer a dramatic improvement in base pressure prediction compared to solutions computed on a nonadapted grid. A total-variation-diminishing (TVD) Navier-Stokes scheme is used to solve the governing flow equations.

  17. The Pilates method and cardiorespiratory adaptation to training.

    PubMed

    Tinoco-Fernández, Maria; Jiménez-Martín, Miguel; Sánchez-Caravaca, M Angeles; Fernández-Pérez, Antonio M; Ramírez-Rodrigo, Jesús; Villaverde-Gutiérrez, Carmen

    2016-01-01

    Although all authors report beneficial health changes following training based on the Pilates method, no explicit analysis has been performed of its cardiorespiratory effects. The objective of this study was to evaluate possible changes in cardiorespiratory parameters with the Pilates method. A total of 45 university students aged 18-35 years (77.8% female and 22.2% male), who did not routinely practice physical exercise or sports, volunteered for the study and signed informed consent. The Pilates training was conducted over 10 weeks, with three 1-hour sessions per week. Physiological cardiorespiratory responses were assessed using a MasterScreen CPX apparatus. After the 10-week training, statistically significant improvements were observed in mean heart rate (135.4-124.2 beats/min), respiratory exchange ratio (1.1-0.9) and oxygen equivalent (30.7-27.6) values, among other spirometric parameters, in submaximal aerobic testing. These findings indicate that practice of the Pilates method has a positive influence on cardiorespiratory parameters in healthy adults who do not routinely practice physical exercise activities.

  18. Node Based Adaptive Sampling and Advanced AUV Capabilities

    DTIC Science & Technology

    2001-09-30

    is to develop and refine node based adaptive sampling and hovering technology using FAU Morpheus vehicle as a test platform. The former one is a...included two days of testing with a “dummy” vehicle followed by two days of testing with the real Morpheus . The initial tests were done with the dummy...vehicle because the Morpheus was unavailable for docking experiments at the time. These tests were conducted in order to get a better sense of

  19. Node Based Adaptive Sampling and Advanced AUV Capabilities

    DTIC Science & Technology

    2002-09-30

    is to develop and refine node based adaptive sampling and hovering technology using FAU Morpheus vehicle as a test platform. The former one is a...dummy” vehicle followed by two days of testing with the real Morpheus . The initial tests were done with the dummy vehicle because the Morpheus was... Morpheus when it became available. The dummy vehicle was constructed from empty Morpheus modules with weight placed inside each at a calculated

  20. Classical FEM-BEM coupling methods: nonlinearities, well-posedness, and adaptivity

    NASA Astrophysics Data System (ADS)

    Aurada, Markus; Feischl, Michael; Führer, Thomas; Karkulik, Michael; Melenk, Jens Markus; Praetorius, Dirk

    2013-04-01

    We consider a (possibly) nonlinear interface problem in 2D and 3D, which is solved by use of various adaptive FEM-BEM coupling strategies, namely the Johnson-Nédélec coupling, the Bielak-MacCamy coupling, and Costabel's symmetric coupling. We provide a framework to prove that the continuous as well as the discrete Galerkin solutions of these coupling methods additionally solve an appropriate operator equation with a Lipschitz continuous and strongly monotone operator. Therefore, the original coupling formulations are well-defined, and the Galerkin solutions are quasi-optimal in the sense of a Céa-type lemma. For the respective Galerkin discretizations with lowest-order polynomials, we provide reliable residual-based error estimators. Together with an estimator reduction property, we prove convergence of the adaptive FEM-BEM coupling methods. A key point for the proof of the estimator reduction is a set of novel inverse-type estimates for the involved boundary integral operators, which are advertised here. Numerical experiments conclude the work and compare the performance and effectivity of the three adaptive coupling procedures in the presence of generic singularities.

  1. An Adaptive Watershed Management Assessment Based on Watershed Investigation Data

    NASA Astrophysics Data System (ADS)

    Kang, Min Goo; Park, Seung Woo

    2015-05-01

    The aim of this study was to assess the states of watersheds in South Korea and to formulate new measures to improve identified inadequacies. The study focused on the watersheds of the Han River basin and adopted an adaptive watershed management framework. Using data collected during watershed investigation projects, we analyzed the management context of the study basin and identified weaknesses in water use management, flood management, and environmental and ecosystems management in the watersheds. In addition, we conducted an interview survey to obtain experts' opinions on the possible management of watersheds in the future. The results of the assessment show that effective management of the Han River basin requires adaptive watershed management, which includes stakeholders' participation and social learning. Urbanization was the key variable in watershed management of the study basin. The results provide strong guidance for future watershed management and suggest that nonstructural measures are preferred to improve the states of the watersheds and that consistent implementation of the measures can lead to successful watershed management. The results also reveal that governance is essential for adaptive watershed management in the study basin. A special ordinance is necessary to establish governance and aid social learning. Based on the findings, a management process is proposed to support new watershed management practices. The results will be of use to policy makers and practitioners who can implement the measures recommended here in the early stages of adaptive watershed management in the Han River basin. The measures can also be applied to other river basins.

  3. Adaptive correction method for an OCXO and investigation of analytical cumulative time error upper bound.

    PubMed

    Zhou, Hui; Kunz, Thomas; Schwartz, Howard

    2011-01-01

    Traditional oscillators used in the timing modules of CDMA and WiMAX base stations are large and expensive. Applying cheaper and smaller, albeit less accurate, oscillators in timing modules is an interesting research challenge. An adaptive control algorithm is presented to enhance such oscillators to meet the requirements of base stations during holdover mode. An oscillator frequency stability model is developed for the adaptive control algorithm. This model takes into account the control loop which creates the correction signal when the timing module is in locked mode. A recursive prediction error method is used to identify the system model parameters. Simulation results show that an oscillator enhanced by our adaptive control algorithm improves the oscillator performance significantly, compared with uncorrected oscillators. Our results also show the benefit of explicitly modeling the control loop. Finally, the cumulative time error upper bound of such enhanced oscillators is investigated analytically, and comparison results between the analytical and simulated upper bounds are provided. The results show that the analytical upper bound can serve as a practical guide for system designers.
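
    The identification step named above (a recursive prediction error method) is, for a linear-in-parameters model, the familiar recursive least-squares recursion sketched below. The regressor structure, forgetting factor, and initial covariance are illustrative assumptions; the oscillator's actual frequency-stability model is not specified here.

```python
import numpy as np

def rls_identify(phi_rows, y, lam=0.999):
    """Recursive least-squares / recursive prediction error identification of
    a model y[k] = phi[k]^T theta + e[k].

    `phi_rows` : (T, n) array of regressor vectors
    `y`        : (T,) array of measurements
    Returns the final parameter estimate theta.
    """
    n = phi_rows.shape[1]
    theta = np.zeros(n)
    P = 1e4 * np.eye(n)                          # large initial covariance
    for phi, yk in zip(phi_rows, y):
        k_gain = P @ phi / (lam + phi @ P @ phi)
        err = yk - phi @ theta                   # one-step prediction error
        theta = theta + k_gain * err
        P = (P - np.outer(k_gain, phi @ P)) / lam
    return theta
```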

  4. Two-layer and Adaptive Entropy Coding Algorithms for H.264-based Lossless Image Coding

    DTIC Science & Technology

    2008-04-01

    …context-based adaptive binary arithmetic coding (CABAC) [7] and context-based adaptive variable length coding (CAVLC) [3] should be adaptively adopted for advancing…

  5. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods for two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issues of adaptive finite element methods, and the new methodology is validated by computing demonstration problems and comparing the resulting stress intensity factors to analytical results.

  6. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    NASA Technical Reports Server (NTRS)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to other free surface methods are reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present results from more complex cases, including 3D drop breakup in an impulsively accelerated free stream and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.
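
    For reference, the level set formulation mentioned in section 2 represents the free surface as the zero contour of a scalar field phi that is advected with the flow; a standard statement (not specific to this paper's collocated discretization) is:

```latex
% Level set advection of the free surface \Gamma(t)
\frac{\partial \phi}{\partial t} + \mathbf{u} \cdot \nabla \phi = 0,
\qquad
\Gamma(t) = \{\, \mathbf{x} : \phi(\mathbf{x}, t) = 0 \,\},
```

    with phi periodically reinitialized toward a signed distance function so that |grad phi| is approximately 1 near the interface.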

  7. Adaptive Projection Subspace Dimension for the Thick-Restart Lanczos Method

    SciTech Connect

    Yamazaki, Ichitaro; Bai, Zhaojun; Simon, Horst; Wang, Lin-Wang; Wu, K.

    2008-10-01

    The Thick-Restart Lanczos (TRLan) method is an effective method for solving large-scale Hermitian eigenvalue problems. However, its performance strongly depends on the dimension of the projection subspace. In this paper, we propose an objective function to quantify the effectiveness of a chosen subspace dimension, and then introduce an adaptive scheme to dynamically adjust the dimension at each restart. An open-source software package, nu-TRLan, which implements the TRLan method with this adaptive projection subspace dimension is available in the public domain. The numerical results of synthetic eigenvalue problems are presented to demonstrate that nu-TRLan achieves speedups of between 0.9 and 5.1 over the static method using a default subspace dimension. To demonstrate the effectiveness of nu-TRLan in a real application, we apply it to the electronic structure calculations of quantum dots. We show that nu-TRLan can achieve speedups of greater than 1.69 over the state-of-the-art eigensolver for this application, which is based on the Conjugate Gradient method with a powerful preconditioner.

  8. A Parallel Adaptive Wavelet Method for the Simulation of Compressible Reacting Flows

    NASA Astrophysics Data System (ADS)

    Zikoski, Zachary; Paolucci, Samuel

    2011-11-01

    The Wavelet Adaptive Multiresolution Representation (WAMR) method provides a robust method for controlling spatial grid adaption--fine grid spacing in regions of a solution requiring high resolution (i.e. near steep gradients, singularities, or near-singularities) and using much coarser grid spacing where the solution is slowly varying. The sparse grids produced using the WAMR method exhibit very high compression ratios compared to uniform grids of equivalent resolution. Subsequently, a wide range of spatial scales often occurring in continuum physics models can be captured efficiently. Furthermore, the wavelet transform provides a direct measure of local error at each grid point, effectively producing automatically verified solutions. The algorithm is parallelized using an MPI-based domain decomposition approach suitable for a wide range of distributed-memory parallel architectures. The method is applied to the solution of the compressible, reactive Navier-Stokes equations and includes multi-component diffusive transport and chemical kinetics models. Results for the method's parallel performance are reported, and its effectiveness on several challenging compressible reacting flow problems is highlighted.
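
    The WAMR algorithm itself cannot be reproduced from this abstract; the sketch below only illustrates, under the assumption of a simple 1-D field and Haar wavelets, how detail coefficients can serve as local error indicators that flag cells for refinement. The function names, tolerance, and the Haar choice are illustrative.

```python
import numpy as np

def haar_details(u):
    """One level of the Haar transform: returns (smooth part, detail coefficients)."""
    u = u[: len(u) // 2 * 2]
    s = (u[0::2] + u[1::2]) / np.sqrt(2.0)
    d = (u[0::2] - u[1::2]) / np.sqrt(2.0)              # local error indicator
    return s, d

def flag_refinement(u, tol=1e-2, levels=3):
    """Flag cells whose detail coefficients exceed tol at any of the first `levels` levels."""
    flags = np.zeros(len(u), dtype=bool)
    s, stride = u.copy(), 2
    for _ in range(levels):
        s, d = haar_details(s)
        for i in np.nonzero(np.abs(d) > tol)[0]:
            flags[i * stride:(i + 1) * stride] = True   # refine the underlying fine cells
        stride *= 2
    return flags

x = np.linspace(0.0, 1.0, 256)
u = np.tanh((x - 0.5) / 0.02)                           # steep gradient near x = 0.5
flags = flag_refinement(u)
print("fraction of cells flagged for refinement:", round(float(flags.mean()), 3))
```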

  9. Prediction-based manufacturing center self-adaptive demand side energy optimization in cyber physical systems

    NASA Astrophysics Data System (ADS)

    Sun, Xinyao; Wang, Xue; Wu, Jiangwei; Liu, Youda

    2014-05-01

    Cyber physical systems (CPS) recently emerge as a new technology which can provide promising approaches to demand side management (DSM), an important capability in industrial power systems. Meanwhile, the manufacturing center is a typical industrial power subsystem with dozens of high energy consumption devices which have complex physical dynamics. DSM, integrated with CPS, is an effective methodology for solving energy optimization problems in the manufacturing center. This paper presents a prediction-based manufacturing center self-adaptive energy optimization method for demand side management in cyber physical systems. To gain prior knowledge of DSM operating results, a sparse Bayesian learning based componential forecasting method is introduced to predict 24-hour electric load levels for specific industrial areas in China. From this data, a pricing strategy is designed based on short-term load forecasting results. To minimize total energy costs while guaranteeing manufacturing center service quality, an adaptive demand side energy optimization algorithm is presented. The proposed scheme is tested in a machining center energy optimization experiment. An AMI sensing system is then used to measure the demand side energy consumption of the manufacturing center. Based on the data collected from the sensing system, the load prediction-based energy optimization scheme is implemented. By employing both the PSO and the CPSO method, the problem of DSM in the manufacturing center is solved. The results of the experiment show that the self-adaptive CPSO energy optimization method improves optimization results by 5% compared with the traditional PSO optimization method.
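
    The abstract names PSO and CPSO but gives no formulation. As a hedged illustration, the sketch below applies a generic particle swarm optimizer to a toy 24-hour scheduling problem with an invented time-of-use tariff and a penalty for missing a total-energy target; it is not the paper's demand-side model.

```python
import numpy as np

rng = np.random.default_rng(1)

hours = 24
price = 0.5 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, hours))   # toy time-of-use tariff
total_energy = 100.0                                            # kWh that must be scheduled

def cost(schedule):
    """Energy cost plus a quadratic penalty for missing the total-energy requirement."""
    penalty = 1e3 * (schedule.sum() - total_energy) ** 2
    return float(schedule @ price + penalty)

def pso(n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=10.0):
    x = rng.uniform(lo, hi, (n_particles, hours))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, cost(gbest)

schedule, best_cost = pso()
print("best cost:", round(best_cost, 2), "| scheduled energy:", round(float(schedule.sum()), 1))
```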

  10. A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures

    SciTech Connect

    Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George

    2012-01-01

    We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.

  11. Adaptive bit truncation and compensation method for EZW image coding

    NASA Astrophysics Data System (ADS)

    Dai, Sheng-Kui; Zhu, Guangxi; Wang, Yao

    2003-09-01

    The embedded zero-tree wavelet algorithm (EZW) is widely adopted to compress wavelet coefficients of images with the property that the bit stream can be truncated and produced anywhere. The lower bit planes of the wavelet coefficients are verified to be less important than the higher bit planes, and can therefore be truncated and not encoded. Based on experiments, a generalized function, which provides guidance for the EZW encoder to intelligently decide the number of low bit planes to be truncated, is deduced in this paper. In the EZW decoder, a simple method is presented to compensate for the truncated wavelet coefficients, which surprisingly enhances the quality of the reconstructed image while incurring scarcely any additional cost.
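
    The paper's deduced guiding function is not given in the record, but the underlying operations, dropping low bit planes and compensating at reconstruction, can be sketched directly. The example below assumes integer wavelet coefficients and simple midpoint compensation; it illustrates the general idea rather than the paper's method.

```python
import numpy as np

def truncate_bitplanes(coeffs, n_trunc):
    """Drop the lowest n_trunc bit planes of integer wavelet coefficients."""
    mags = np.abs(coeffs).astype(np.int64)
    signs = np.sign(coeffs).astype(np.int64)
    return signs * (mags >> n_trunc), signs

def compensate(truncated, signs, n_trunc):
    """Shift magnitudes back and add the midpoint of the lost range (for nonzero coefficients)."""
    mags = np.abs(truncated).astype(np.int64) << n_trunc
    mags[mags > 0] += (1 << n_trunc) // 2
    return signs * mags

rng = np.random.default_rng(0)
c = rng.integers(-255, 256, size=1000)          # stand-in for quantized wavelet coefficients
t, s = truncate_bitplanes(c, n_trunc=2)
plain = s * (np.abs(t) << 2)                    # naive reconstruction (no compensation)
c_hat = compensate(t, s, n_trunc=2)             # compensated reconstruction
print("mean abs error without compensation:", np.mean(np.abs(c - plain)))
print("mean abs error with compensation   :", np.mean(np.abs(c - c_hat)))
```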

  12. Method and apparatus for adaptive force and position control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1989-01-01

    The present invention discloses systematic methods and apparatus for the design of real-time controllers. Real-time adaptive force/position control is achieved with feedforward and feedback controllers; the feedforward controller is the inverse of the linearized model of robot dynamics and contains only proportional-double-derivative terms. The feedback controller, of the proportional-integral-derivative type, ensures that manipulator joints follow reference trajectories and achieves robust tracking of step-plus-exponential trajectories, all in real time. The adaptive controller includes adaptive force and position control within a hybrid control architecture. The adaptive force controller achieves tracking of desired force setpoints, and the adaptive position controller accomplishes tracking of desired position trajectories. Circuits in the adaptive feedback and feedforward controllers are varied by adaptation laws.
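
    A minimal single-joint sketch of the feedforward-plus-PID structure described above is given below; the double-integrator joint model, the gains, and the reference trajectory are illustrative assumptions and do not reproduce the patented adaptation laws.

```python
import numpy as np

# Single joint modeled as a double integrator: I * qddot = tau (illustrative only).
I_model = 0.5                                   # inertia used by the feedforward term
I_true = 0.6                                    # "real" plant inertia, to mimic model error
dt, T = 0.001, 5.0
Kp, Ki, Kd = 80.0, 40.0, 12.0

def reference(t):
    """Desired position, velocity and acceleration (smooth step-like trajectory)."""
    qd = 1.0 - np.exp(-3.0 * t)
    return qd, 3.0 * np.exp(-3.0 * t), -9.0 * np.exp(-3.0 * t)

q, q_dot, integ = 0.0, 0.0, 0.0
for step in range(int(T / dt)):
    t = step * dt
    qd, qd_dot, qd_ddot = reference(t)
    e, e_dot = qd - q, qd_dot - q_dot
    integ += e * dt
    tau_ff = I_model * qd_ddot                  # feedforward: inverse of the (linearized) model
    tau_fb = Kp * e + Ki * integ + Kd * e_dot   # PID feedback
    q_ddot = (tau_ff + tau_fb) / I_true         # plant response
    q_dot += q_ddot * dt
    q += q_dot * dt

print("final tracking error:", abs(reference(T)[0] - q))
```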

  13. Efficient reconstruction method for ground layer adaptive optics with mixed natural and laser guide stars.

    PubMed

    Wagner, Roland; Helin, Tapio; Obereder, Andreas; Ramlau, Ronny

    2016-02-20

    The imaging quality of modern ground-based telescopes such as the planned European Extremely Large Telescope is affected by atmospheric turbulence. In consequence, they heavily depend on stable and high-performance adaptive optics (AO) systems. Using measurements of incoming light from guide stars, an AO system compensates for the effects of turbulence by adjusting so-called deformable mirror(s) (DMs) in real time. In this paper, we introduce a novel reconstruction method for ground layer adaptive optics. In the literature, a common approach to this problem is to use Bayesian inference in order to model the specific noise structure appearing due to spot elongation. This approach leads to large coupled systems with high computational effort. Recently, fast solvers of linear order, i.e., with computational complexity O(n), where n is the number of DM actuators, have emerged. However, the quality of such methods typically degrades in low flux conditions. Our key contribution is to achieve the high quality of the standard Bayesian approach while at the same time maintaining the linear order speed of the recent solvers. Our method is based on performing a separate preprocessing step before applying the cumulative reconstructor (CuReD). The efficiency and performance of the new reconstructor are demonstrated using the OCTOPUS, the official end-to-end simulation environment of the ESO for extremely large telescopes. For more specific simulations we also use the MOST toolbox.

  14. Designing an Adaptive Web-Based Learning System Based on Students' Cognitive Styles Identified Online

    ERIC Educational Resources Information Center

    Lo, Jia-Jiunn; Chan, Ya-Chen; Yeh, Shiou-Wen

    2012-01-01

    This study developed an adaptive web-based learning system focusing on students' cognitive styles. The system is composed of a student model and an adaptation model. It collected students' browsing behaviors to update the student model for unobtrusively identifying student cognitive styles through a multi-layer feed-forward neural network (MLFF).…

  15. The adapted augmented Lagrangian method: a new method for the resolution of the mechanical frictional contact problem

    NASA Astrophysics Data System (ADS)

    Bussetta, Philippe; Marceau, Daniel; Ponthot, Jean-Philippe

    2012-02-01

    The aim of this work is to propose a new numerical method for solving the mechanical frictional contact problem in the general case of multi-bodies in a three dimensional space. This method is called the adapted augmented Lagrangian method (AALM) and can be used in a multi-physical context (such as thermo-electro-mechanical field problems). This paper presents this new method and its advantages over other classical methods such as the penalty method (PM), the adapted penalty method (APM) and the augmented Lagrangian method (ALM). In addition, the efficiency and the reliability of the AALM are demonstrated on some academic problems and an industrial thermo-electro-mechanical problem.

  16. Episodic memories predict adaptive value-based decision-making.

    PubMed

    Murty, Vishnu P; FeldmanHall, Oriel; Hunter, Lindsay E; Phelps, Elizabeth A; Davachi, Lila

    2016-05-01

    Prior research illustrates that memory can guide value-based decision-making. For example, previous work has implicated both working memory and procedural memory (i.e., reinforcement learning) in guiding choice. However, other types of memories, such as episodic memory, may also influence decision-making. Here we test the role for episodic memory, specifically item versus associative memory, in supporting value-based choice. Participants completed a task where they first learned the value associated with trial unique lotteries. After a short delay, they completed a decision-making task where they could choose to reengage with previously encountered lotteries, or new never before seen lotteries. Finally, participants completed a surprise memory test for the lotteries and their associated values. Results indicate that participants chose to reengage more often with lotteries that resulted in high versus low rewards. Critically, participants not only formed detailed, associative memories for the reward values coupled with individual lotteries, but also exhibited adaptive decision-making only when they had intact associative memory. We further found that the relationship between adaptive choice and associative memory generalized to more complex, ecologically valid choice behavior, such as social decision-making. However, individuals more strongly encode experiences of social violations, such as being treated unfairly, suggesting a bias for how individuals form associative memories within social contexts. Together, these findings provide an important integration of episodic memory and decision-making literatures to better understand key mechanisms supporting adaptive behavior.

  17. Adaptive directional lifting-based wavelet transform for image coding.

    PubMed

    Ding, Wenpeng; Wu, Feng; Wu, Xiaolin; Li, Shipeng; Li, Houqiang

    2007-02-01

    We present a novel 2-D wavelet transform scheme of adaptive directional lifting (ADL) in image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, ADL performs lifting-based prediction in local windows in the direction of high pixel correlation. Hence, it adapts far better to the image orientation features in local windows. The ADL transform is achieved by existing 1-D wavelets and is seamlessly integrated into the global wavelet transform. The predicting and updating signals of ADL can be derived even at the fractional pixel precision level to achieve high directional resolution, while still maintaining perfect reconstruction. To enhance the ADL performance, a rate-distortion optimized directional segmentation scheme is also proposed to form and code a hierarchical image partition adapting to local features. Experimental results show that the proposed ADL-based image coding technique outperforms JPEG 2000 in both PSNR and visual quality, with improvements of up to 2.0 dB on images with rich orientation features.
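
    The fractional-pixel directional prediction and the rate-distortion optimized segmentation are beyond a short example, but the core idea, choosing the lifting direction with the smaller prediction residual, can be sketched. The example below restricts the choice to horizontal versus vertical lifting on a small block and uses a 5/3-style lifting step with wrap-around edge handling; all of this is an illustrative simplification, not the ADL scheme itself.

```python
import numpy as np

def lift_1d(signal):
    """One 5/3-style lifting step: predict odd samples from even neighbors, then update."""
    even, odd = signal[0::2], signal[1::2]
    pred = 0.5 * (even[:len(odd)] + np.roll(even, -1)[:len(odd)])  # wrap-around at the edge
    detail = odd - pred                       # high-pass (prediction residual)
    approx = even[:len(odd)] + 0.5 * detail   # low-pass (update step)
    return approx, detail

def adaptive_direction_lift(block):
    """Choose row-wise or column-wise lifting, whichever leaves the smaller residual energy."""
    d_rows = np.concatenate([lift_1d(r)[1] for r in block])
    d_cols = np.concatenate([lift_1d(c)[1] for c in block.T])
    if np.sum(d_rows ** 2) <= np.sum(d_cols ** 2):
        return "horizontal", d_rows
    return "vertical", d_cols

rng = np.random.default_rng(0)
# Each row is the same sine: columns are nearly constant, so vertical correlation is high.
img = np.tile(np.sin(np.linspace(0, 4 * np.pi, 16)), (16, 1))
img += 0.01 * rng.standard_normal(img.shape)
direction, details = adaptive_direction_lift(img)
print("chosen lifting direction:", direction,
      "| residual energy:", round(float(np.sum(details ** 2)), 4))
```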

  18. Building Adaptive Capacity with the Delphi Method and Mediated Modeling for Water Quality and Climate Change Adaptation in Lake Champlain Basin

    NASA Astrophysics Data System (ADS)

    Coleman, S.; Hurley, S.; Koliba, C.; Zia, A.; Exler, S.

    2014-12-01

    Eutrophication and nutrient pollution of surface waters occur within complex governance, social, hydrologic and biophysical basin contexts. The pervasive and perennial nutrient pollution in Lake Champlain Basin, despite decades of efforts, exemplifies problems found across the world's surface waters. Stakeholders with diverse values, interests, and forms of explicit and tacit knowledge determine water quality impacts through land use, agricultural and water resource decisions. Uncertainty, ambiguity and dynamic feedback further complicate the ability to promote the continual provision of water quality and ecosystem services. Adaptive management of water resources and land use requires mechanisms to allow for learning and integration of new information over time. The transdisciplinary Research on Adaptation to Climate Change (RACC) team is working to build regional adaptive capacity in Lake Champlain Basin while studying and integrating governance, land use, hydrological, and biophysical systems to evaluate implications for adaptive management. The RACC team has engaged stakeholders through mediated modeling workshops, online forums, surveys, focus groups and interviews. In March 2014, CSS2CC.org, an interactive online forum to source and identify adaptive interventions from a group of stakeholders across sectors was launched. The forum, based on the Delphi Method, brings forward the collective wisdom of stakeholders and experts to identify potential interventions and governance designs in response to scientific uncertainty and ambiguity surrounding the effectiveness of any strategy, climate change impacts, and the social and natural systems governing water quality and eutrophication. A Mediated Modeling Workshop followed the forum in May 2014, where participants refined and identified plausible interventions under different governance, policy and resource scenarios. Results from the online forum and workshop can identify emerging consensus across scales and sectors

  19. MEMS-based extreme adaptive optics for planet detection

    SciTech Connect

    Macintosh, B A; Graham, J R; Oppenheimer, B; Poyneer, L; Sivaramakrishnan, A; Veran, J

    2005-11-18

    The next major step in the study of extrasolar planets will be the direct detection, resolved from their parent star, of a significant sample of Jupiter-like extrasolar giant planets. Such detection will open up new parts of the extrasolar planet distribution and allow spectroscopic characterization of the planets themselves. Detecting Jovian planets at 5-50 AU scale orbiting nearby stars requires adaptive optics systems and coronagraphs an order of magnitude more powerful than those available today--the realm of "Extreme" adaptive optics. We present the basic requirements and design for such a system, the Gemini Planet Imager (GPI). GPI will require a MEMS-based deformable mirror with good surface quality, 2-4 micron stroke (operated in tandem with a conventional low-order "woofer" mirror), and a fully-functional 48-actuator-diameter aperture.

  20. Fast complex memory polynomial-based adaptive digital predistorter

    NASA Astrophysics Data System (ADS)

    Singh Sappal, Amandeep; Singh Patterh, Manjeet; Sharma, Sanjay

    2011-07-01

    Today's 3G wireless systems require both high linearity and high power amplifier (PA) efficiency. The high peak-to-average ratios of the digital modulation schemes used in 3G wireless systems require that the RF PA maintain high linearity over a large range while maintaining this high efficiency; these two requirements are often at odds with each other with many of the traditional amplifier architectures. In this article, a fast and easy-to-implement adaptive digital predistorter has been presented for Wideband Code Division Multiplexed signals using complex memory polynomial work function. The proposed algorithm has been implemented to test a Motorola LDMOSFET PA. The proposed technique also takes care of the memory effects of the PA, which have been ignored in many proposed techniques in the literature. The results show that the new complex memory polynomial-based adaptive digital predistorter has better linearisation performance than conventional predistortion techniques.
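
    As a hedged illustration of the memory-polynomial approach, the sketch below fits the coefficients of a generic memory-polynomial model by least squares in an indirect-learning arrangement and applies them as a predistorter to a toy power-amplifier model; the amplifier model, polynomial orders, and memory depth are invented for the example and are not the article's measured LDMOS PA.

```python
import numpy as np

def mp_matrix(x, K=5, Q=3):
    """Memory-polynomial basis: columns x[n-q] * |x[n-q]|**(k-1) for odd k up to K, delays 0..Q-1."""
    n = len(x)
    cols = []
    for q in range(Q):
        xq = np.concatenate([np.zeros(q, dtype=complex), x[: n - q]])
        for k in range(1, K + 1, 2):
            cols.append(xq * np.abs(xq) ** (k - 1))
    return np.column_stack(cols)

def toy_pa(x):
    """Stand-in power amplifier: mild third-order nonlinearity plus a one-tap memory term."""
    return x + 0.05 * x * np.abs(x) ** 2 + 0.02 * np.concatenate([[0], x[:-1]])

rng = np.random.default_rng(0)
x = (rng.standard_normal(4000) + 1j * rng.standard_normal(4000)) / np.sqrt(2)
y = toy_pa(x)

# Indirect learning: fit a postdistorter mapping the PA output back to its input,
# then reuse the same coefficients as the predistorter.
coeffs, *_ = np.linalg.lstsq(mp_matrix(y), x, rcond=None)
x_pd = mp_matrix(x) @ coeffs
y_lin = toy_pa(x_pd)

nmse = lambda a, b: 10 * np.log10(np.mean(np.abs(a - b) ** 2) / np.mean(np.abs(b) ** 2))
print("NMSE without predistortion (dB):", round(float(nmse(y, x)), 1))
print("NMSE with predistortion    (dB):", round(float(nmse(y_lin, x)), 1))
```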

  1. Normalized iterative denoising ghost imaging based on the adaptive threshold

    NASA Astrophysics Data System (ADS)

    Li, Gaoliang; Yang, Zhaohua; Zhao, Yan; Yan, Ruitao; Liu, Xia; Liu, Baolei

    2017-02-01

    An approach for improving ghost imaging (GI) quality is proposed. In this paper, an iteration model based on normalized GI is built through theoretical analysis. An adaptive threshold value is selected in the iteration model. The initial value of the iteration model is estimated as a step to remove the correlated noise. The simulation and experimental results reveal that the proposed strategy reconstructs a better image than traditional and normalized GI, without adding complexity. The NIDGI-AT scheme does not require prior information regarding the object, and can also choose the threshold adaptively. More importantly, the signal-to-noise ratio (SNR) of the reconstructed image is greatly improved. Therefore, this methodology represents another step towards practical real-world applications.
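
    The paper's specific iteration model and threshold rule are not reproduced here. The sketch below shows a generic version of the idea: a conventional correlation-GI reconstruction refined by a gradient-type iteration with a simple adaptive (mean-based) threshold that suppresses correlated background noise. The object, the speckle patterns, and the threshold rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary object and random speckle illumination patterns.
n, m = 32, 4000                                      # n x n image, m patterns
obj = np.zeros((n, n))
obj[10:22, 12:20] = 1.0
P = rng.random((m, n * n))                           # each row is one speckle pattern
buckets = P @ obj.ravel()                            # bucket (single-pixel) measurements

# Conventional correlation GI: G(x) = <B I(x)> - <B><I(x)>
G = (P * buckets[:, None]).mean(0) - buckets.mean() * P.mean(0)
est = np.clip(G, 0, None)
est /= est.max()

# Gradient-type refinement with an adaptive (mean-based) threshold.
step = 1.0 / np.linalg.norm(P) ** 2                  # conservative step size (Frobenius bound)
for _ in range(100):
    residual = buckets - P @ est                     # bucket mismatch of the current estimate
    est = est + step * (P.T @ residual)
    est[est < est.mean()] = 0.0                      # adaptive threshold removes correlated noise

corr = np.corrcoef(est, obj.ravel())[0, 1]
print("correlation of refined estimate with the true object:", round(float(corr), 3))
```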

  2. Adaptive bad pixel correction algorithm for IRFPA based on PCNN

    NASA Astrophysics Data System (ADS)

    Leng, Hanbing; Zhou, Zuofeng; Cao, Jianzhong; Yi, Bo; Yan, Aqi; Zhang, Jian

    2013-10-01

    Bad pixels and response non-uniformity are the primary obstacles when IRFPA is used in different thermal imaging systems. The bad pixels of IRFPA include fixed bad pixels and random bad pixels. The former are caused by material or manufacturing defects and their positions are always fixed; the latter are caused by temperature drift and their positions are always changing. The traditional radiometric calibration-based bad pixel detection and compensation algorithm is only valid for the fixed bad pixels. Scene-based bad pixel correction algorithms are the effective way to eliminate these two kinds of bad pixels. Currently, the most used scene-based bad pixel correction algorithm is based on the adaptive median filter (AMF). In this algorithm, bad pixels are regarded as image noise and then replaced by the filtered value. However, missed corrections and false corrections often happen when AMF is used to handle complex infrared scenes. To solve this problem, a new adaptive bad pixel correction algorithm based on pulse coupled neural networks (PCNN) is proposed. Potential bad pixels are detected by PCNN in the first step, then image sequences are used periodically to confirm the real bad pixels and exclude the false ones, and finally bad pixels are replaced by the filtered result. With real infrared images obtained from a camera, the experimental results show the effectiveness of the proposed algorithm.
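
    The PCNN detector itself is not reconstructed here; the sketch below illustrates the baseline scene-based approach the record builds on, flagging pixels that deviate strongly from their local median (measured in MAD units) and replacing them with that median. The window size, threshold, and synthetic scene are illustrative.

```python
import numpy as np

def correct_bad_pixels(frame, win=3, k=6.0):
    """Flag pixels far from their local median (in MAD units) and replace them with that median."""
    pad = win // 2
    padded = np.pad(frame, pad, mode="reflect")
    out = frame.astype(float).copy()
    n_replaced = 0
    for i in range(frame.shape[0]):
        for j in range(frame.shape[1]):
            window = padded[i:i + win, j:j + win].ravel()
            med = np.median(window)
            mad = np.median(np.abs(window - med)) + 1e-6
            if abs(frame[i, j] - med) > k * mad:     # outlier w.r.t. local statistics
                out[i, j] = med
                n_replaced += 1
    return out, n_replaced

rng = np.random.default_rng(0)
scene = rng.normal(300.0, 5.0, (64, 64))             # synthetic IR scene (counts)
bad = (rng.integers(0, 64, 30), rng.integers(0, 64, 30))
scene[bad] = 1000.0                                  # dead/saturated pixels
corrected, n_replaced = correct_bad_pixels(scene)
print("pixels replaced:", n_replaced)
```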

  3. Novel Multistatic Adaptive Microwave Imaging Methods for Early Breast Cancer Detection

    NASA Astrophysics Data System (ADS)

    Xie, Yao; Guo, Bin; Li, Jian; Stoica, Petre

    2006-12-01

    Multistatic adaptive microwave imaging (MAMI) methods are presented and compared for early breast cancer detection. Due to the significant contrast between the dielectric properties of normal and malignant breast tissues, developing microwave imaging techniques for early breast cancer detection has attracted much interest lately. MAMI is one of the microwave imaging modalities and employs multiple antennas that take turns to transmit ultra-wideband (UWB) pulses while all antennas are used to receive the reflected signals. MAMI can be considered as a special case of the multi-input multi-output (MIMO) radar with the multiple transmitted waveforms being either UWB pulses or zeros. Since the UWB pulses transmitted by different antennas are displaced in time, the multiple transmitted waveforms are orthogonal to each other. The challenge to microwave imaging is to improve resolution and suppress strong interferences caused by the breast skin, nipple, and so forth. The MAMI methods we investigate herein utilize the data-adaptive robust Capon beamformer (RCB) to achieve high resolution and interference suppression. We will demonstrate the effectiveness of our proposed methods for breast cancer detection via numerical examples with data simulated using the finite-difference time-domain method based on a 3D realistic breast model.
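
    The full MAMI processing chain is wideband and multistatic, but the robust Capon beamformer at its core can be sketched for a narrowband array. The example below uses simple diagonal loading as the robustness mechanism and a simulated linear array; the steering vectors, loading level, and scenario are illustrative assumptions rather than the paper's breast-imaging setup.

```python
import numpy as np

def capon_weights(R, a, loading=1e-2):
    """Robust (diagonally loaded) Capon beamformer weights for steering vector a."""
    n = R.shape[0]
    R_dl = R + loading * (np.trace(R).real / n) * np.eye(n)   # diagonal loading for robustness
    Ri_a = np.linalg.solve(R_dl, a)
    return Ri_a / (a.conj() @ Ri_a)

rng = np.random.default_rng(0)
n_ant, n_snap = 8, 400
pos = np.arange(n_ant)                                        # half-wavelength linear array
steer = lambda theta: np.exp(1j * np.pi * pos * np.sin(theta))

# Simulated snapshots: desired source at 0 rad, a strong interferer at 0.6 rad, plus noise.
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
v = 5.0 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
x = np.outer(steer(0.0), s) + np.outer(steer(0.6), v)
x += 0.1 * (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))

R = x @ x.conj().T / n_snap
w = capon_weights(R, steer(0.0))
print("response toward source    :", round(float(abs(w.conj() @ steer(0.0))), 3))
print("response toward interferer:", round(float(abs(w.conj() @ steer(0.6))), 3))
```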

  4. An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1994-01-01

    This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version, finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. This work first and primarily focuses on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and then briefly explores some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.

  5. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  6. Investigating Item Exposure Control Methods in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Ozturk, Nagihan Boztunc; Dogan, Nuri

    2015-01-01

    This study aims to investigate the effects of item exposure control methods on measurement precision and on test security under various item selection methods and item pool characteristics. In this study, the Randomesque (with item group sizes of 5 and 10), Sympson-Hetter, and Fade-Away methods were used as item exposure control methods. Moreover,…

  7. Designing Training for Temporal and Adaptive Transfer: A Comparative Evaluation of Three Training Methods for Process Control Tasks

    ERIC Educational Resources Information Center

    Kluge, Annette; Sauer, Juergen; Burkolter, Dina; Ritzmann, Sandrina

    2010-01-01

    Training in process control environments requires operators to be prepared for temporal and adaptive transfer of skill. Three training methods were compared with regard to their effectiveness in supporting transfer: Drill & Practice (D&P), Error Training (ET), and procedure-based and error heuristics training (PHT). Communication…

  8. Rule-based mechanisms of learning for intelligent adaptive flight control

    NASA Technical Reports Server (NTRS)

    Handelman, David A.; Stengel, Robert F.

    1990-01-01

    How certain aspects of human learning can be used to characterize learning in intelligent adaptive control systems is investigated. Reflexive and declarative memory and learning are described. It is shown that model-based systems-theoretic adaptive control methods exhibit attributes of reflexive learning, whereas the problem-solving capabilities of knowledge-based systems of artificial intelligence are naturally suited for implementing declarative learning. Issues related to learning in knowledge-based control systems are addressed, with particular attention given to rule-based systems. A mechanism for real-time rule-based knowledge acquisition is suggested, and utilization of this mechanism within the context of failure diagnosis for fault-tolerant flight control is demonstrated.

  9. A goal-oriented adaptive procedure for the quasi-continuum method with cluster approximation

    NASA Astrophysics Data System (ADS)

    Memarnahavandi, Arash; Larsson, Fredrik; Runesson, Kenneth

    2015-04-01

    We present a strategy for adaptive error control for the quasi-continuum (QC) method applied to molecular statics problems. The QC-method is introduced in two steps: Firstly, introducing QC-interpolation while accounting for the exact summation of all the bond-energies, we compute goal-oriented error estimators in a straight-forward fashion based on the pertinent adjoint (dual) problem. Secondly, for large QC-elements the bond energy and its derivatives are typically computed using an appropriate discrete quadrature using cluster approximations, which introduces a model error. The combined error is estimated approximately based on the same dual problem in conjunction with a hierarchical strategy for approximating the residual. As a model problem, we carry out atomistic-to-continuum homogenization of a graphene monolayer, where the Carbon-Carbon energy bonds are modeled via the Tersoff-Brenner potential, which involves next-nearest neighbor couplings. In particular, we are interested in computing the representative response for an imperfect lattice. Within the goal-oriented framework it becomes natural to choose the macro-scale (continuum) stress as the "quantity of interest". Two different formulations are adopted: The Basic formulation and the Global formulation. The presented numerical investigation shows the accuracy and robustness of the proposed error estimator and the pertinent adaptive algorithm.

  10. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress in SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  11. Locally adaptive Nakagami-based ultrasound similarity measures.

    PubMed

    Wachinger, Christian; Klein, Tassilo; Navab, Nassir

    2012-04-01

    The derivation of statistically optimal similarity measures for intensity-based registration is possible by modeling the underlying image noise distribution. The parameters of these distributions are, however, commonly set heuristically across all images. In this article, we show that the estimation of the parameters on the present images largely improves the registration, which is a consequence of the more accurate characterization of the image noise. More precisely, instead of having constant parameters over the entire image domain, we estimate them on patches, leading to a local adaptation of the similarity measure. While this basic idea of creating locally adaptive metrics is interesting for various fields of application, we present the derivation for ultrasound imaging. The domain of ultrasound is particularly appealing for this approach, due to the inherent contamination with speckle noise. Furthermore, there exist detailed analyses of suitable noise distributions in the literature. We present experiments for applying a bivariate Nakagami distribution that facilitates modeling of several scattering scenarios prominent in medical ultrasound. Depending on the number of scatterers per resolution cell and the presence of coherent structures, different Nakagami parameters are required to obtain a valid approximation of the intensity statistics and to account for distributional locality. Our registration results on radio-frequency ultrasound data confirm the theoretical necessity for a spatial adaptation of similarity metrics.
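
    The bivariate Nakagami similarity measure is not reproduced here; the sketch below only shows the patch-wise, method-of-moments estimation of Nakagami parameters (the inverse normalized variance estimator) that underlies such locally adaptive models. The synthetic envelope image and patch size are illustrative.

```python
import numpy as np

def nakagami_params(patch):
    """Method-of-moments Nakagami fit: shape m and spread omega from amplitude samples."""
    r2 = patch.astype(float).ravel() ** 2
    omega = r2.mean()
    m = omega ** 2 / (r2.var() + 1e-12)          # inverse normalized variance estimator
    return m, omega

def local_nakagami_map(img, patch=16):
    """Estimate (m, omega) on non-overlapping patches, giving a locally adapted noise model."""
    h, w = (s // patch * patch for s in img.shape)
    m_map = np.zeros((h // patch, w // patch))
    o_map = np.zeros_like(m_map)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            m, o = nakagami_params(img[i:i + patch, j:j + patch])
            m_map[i // patch, j // patch] = m
            o_map[i // patch, j // patch] = o
    return m_map, o_map

# Synthetic envelope image: Rayleigh-like speckle (m ~ 1) on the left, more coherent (m ~ 4) on the right.
rng = np.random.default_rng(0)
left = np.sqrt(rng.gamma(1.0, 1.0, (64, 32)))    # gamma-distributed intensity -> Nakagami amplitude
right = np.sqrt(rng.gamma(4.0, 0.25, (64, 32)))
img = np.hstack([left, right])
m_map, _ = local_nakagami_map(img)
print("mean m, left half :", round(float(m_map[:, :2].mean()), 2))
print("mean m, right half:", round(float(m_map[:, 2:].mean()), 2))
```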

  12. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    SciTech Connect

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress in SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  13. Data-adapted moving least squares method for 3-D image interpolation

    NASA Astrophysics Data System (ADS)

    Jang, Sumi; Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Lee, Rena; Yoon, Jungho

    2013-12-01

    In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons.
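
    The data-adapted modification proposed in the paper is not reproduced here; the sketch below implements the classical moving least squares evaluation in 1-D with Gaussian weights and a quadratic local polynomial, which is the baseline the authors modify. The kernel width, polynomial degree, and test data are illustrative.

```python
import numpy as np

def mls_eval(x_eval, x_data, f_data, h=0.5, degree=2):
    """Moving least squares: at each evaluation point, fit a weighted local polynomial."""
    powers = np.arange(degree + 1)
    out = np.empty(len(x_eval))
    for idx, x0 in enumerate(x_eval):
        w = np.exp(-((x_data - x0) / h) ** 2)        # Gaussian weights centered at x0
        A = (x_data - x0)[:, None] ** powers         # shifted monomial basis
        AtW = A.T * w                                 # A^T diag(w)
        coef = np.linalg.solve(AtW @ A, AtW @ f_data)
        out[idx] = coef[0]                           # value of the local fit at x0
    return out

rng = np.random.default_rng(0)
x_data = np.sort(rng.uniform(0, 2 * np.pi, 60))
f_data = np.sin(x_data) + 0.05 * rng.standard_normal(60)
x_eval = np.linspace(0.2, 2 * np.pi - 0.2, 200)
f_mls = mls_eval(x_eval, x_data, f_data)
print("max error vs. sin(x):", round(float(np.max(np.abs(f_mls - np.sin(x_eval)))), 3))
```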

  14. An examination of an adapter method for measuring the vibration transmitted to the human arms.

    PubMed

    Xu, Xueyan S; Dong, Ren G; Welcome, Daniel E; Warren, Christopher; McDowell, Thomas W

    2015-09-01

    The objective of this study is to evaluate an adapter method for measuring the vibration on the human arms. Four instrumented adapters with different weights were used to measure the vibration transmitted to the wrist, forearm, and upper arm of each subject. Each adapter was attached at each location on the subjects using an elastic cloth wrap. Two laser vibrometers were also used to measure the transmitted vibration at each location to evaluate the validity of the adapter method. The apparent mass at the palm of the hand along the forearm direction was also measured to enhance the evaluation. This study found that the adapter and laser-measured transmissibility spectra were comparable with some systematic differences. While increasing the adapter mass reduced the resonant frequency at the measurement location, increasing the tightness of the adapter attachment increased the resonant frequency. However, the use of lightweight (≤15 g) adapters under medium attachment tightness did not change the basic trends of the transmissibility spectrum. The resonant features observed in the transmissibility spectra were also correlated with those observed in the apparent mass spectra. Because the local coordinate systems of the adapters may be significantly misaligned relative to the global coordinates of the vibration test systems, large errors were observed for the adapter-measured transmissibility in some individual orthogonal directions. This study, however, also demonstrated that the misalignment issue can be resolved by either using the total vibration transmissibility or by measuring the misalignment angles to correct the errors. Therefore, the adapter method is acceptable for understanding the basic characteristics of the vibration transmission in the human arms, and the adapter-measured data are acceptable for approximately modeling the system.

  15. An examination of an adapter method for measuring the vibration transmitted to the human arms

    PubMed Central

    Xu, Xueyan S.; Dong, Ren G.; Welcome, Daniel E.; Warren, Christopher; McDowell, Thomas W.

    2016-01-01

    The objective of this study is to evaluate an adapter method for measuring the vibration on the human arms. Four instrumented adapters with different weights were used to measure the vibration transmitted to the wrist, forearm, and upper arm of each subject. Each adapter was attached at each location on the subjects using an elastic cloth wrap. Two laser vibrometers were also used to measure the transmitted vibration at each location to evaluate the validity of the adapter method. The apparent mass at the palm of the hand along the forearm direction was also measured to enhance the evaluation. This study found that the adapter and laser-measured transmissibility spectra were comparable with some systematic differences. While increasing the adapter mass reduced the resonant frequency at the measurement location, increasing the tightness of the adapter attachment increased the resonant frequency. However, the use of lightweight (≤15 g) adapters under medium attachment tightness did not change the basic trends of the transmissibility spectrum. The resonant features observed in the transmissibility spectra were also correlated with those observed in the apparent mass spectra. Because the local coordinate systems of the adapters may be significantly misaligned relative to the global coordinates of the vibration test systems, large errors were observed for the adapter-measured transmissibility in some individual orthogonal directions. This study, however, also demonstrated that the misalignment issue can be resolved by either using the total vibration transmissibility or by measuring the misalignment angles to correct the errors. Therefore, the adapter method is acceptable for understanding the basic characteristics of the vibration transmission in the human arms, and the adapter-measured data are acceptable for approximately modeling the system. PMID:26834309

  16. Implementer-Initiated Adaptation of Evidence-Based Interventions: Kids Remember the Blue Wig

    ERIC Educational Resources Information Center

    Gibbs, D. A.; Krieger, K. E.; Cutbush, S. L.; Clinton-Sherrod, A. M.; Miller, S.

    2016-01-01

    Adaptation of evidence-based interventions by implementers is widespread. Although frequently viewed as departures from fidelity, adaptations may be positive in impact and consistent with fidelity. Research typically catalogs adaptations but rarely includes the implementers' perspectives on adaptation. We report data on individuals implementing an…

  17. FPGA-based RF spectrum merging and adaptive hopset selection

    NASA Astrophysics Data System (ADS)

    McLean, R. K.; Flatley, B. N.; Silvius, M. D.; Hopkinson, K. M.

    The radio frequency (RF) spectrum is a limited resource. Spectrum allotment disputes stem from this scarcity as many radio devices are confined to a fixed frequency or frequency sequence. One alternative is to incorporate cognition within a reconfigurable radio platform, therefore enabling the radio to adapt to dynamic RF spectrum environments. In this way, the radio is able to actively sense the RF spectrum, decide, and act accordingly, thereby sharing the spectrum and operating in a more flexible manner. In this paper, we present a novel solution for merging many distributed RF spectrum maps into one map and for subsequently creating an adaptive hopset. We also provide an example of our system in operation, the result of which is a pseudorandom adaptive hopset. The paper then presents a novel hardware design for the frequency merger and adaptive hopset selector, both of which are written in VHDL and implemented as a custom IP core on an FPGA-based embedded system using the Xilinx Embedded Development Kit (EDK) software tool. The design of the custom IP core is optimized for area, and it can process a high-volume digital input via a low-latency circuit architecture. The complete embedded system includes the Xilinx PowerPC microprocessor, UART serial connection, and compact flash memory card IP cores, and our custom map merging/hopset selection IP core, all of which are targeted to the Virtex IV FPGA. This system is then incorporated into a cognitive radio prototype on a Rice University Wireless Open Access Research Platform (WARP) reconfigurable radio.

  18. Adaptive PID control based on orthogonal endocrine neural networks.

    PubMed

    Milovanović, Miroslav B; Antić, Dragan S; Milojković, Marko T; Nikolić, Saša S; Perić, Staniša Lj; Spasić, Miodrag D

    2016-12-01

    A new intelligent hybrid structure used for online tuning of a PID controller is proposed in this paper. The structure is based on two adaptive neural networks, both with built-in Chebyshev orthogonal polynomials. The first substructure network is a regular orthogonal neural network with an artificial endocrine factor (OENN) implemented, in the form of environmental stimuli, in its weights. It is used for approximation of control signals and for processing system deviation/disturbance signals which are introduced in the form of environmental stimuli. The output values of OENN are used to calculate artificial environmental stimuli (AES), which represent the required adaptation measure of a second network, an orthogonal endocrine adaptive neuro-fuzzy inference system (OEANFIS). OEANFIS is used to process control, output and error signals of a system and to generate adjustable values of the proportional, derivative, and integral parameters used for online tuning of a PID controller. The developed structure is experimentally tested on a laboratory model of the 3D crane system in terms of analysing tracking performances and deviation signals (error signals) of a payload. OENN-OEANFIS performances are compared with traditional PID and six intelligent PID-type controllers. Tracking performance comparisons (in the transient and steady-state periods) showed that the proposed adaptive controller possesses performances within the range of the other tested controllers. The main contribution of the OENN-OEANFIS structure is a significant minimization of deviation signals (17%-79%) compared to the other controllers. It is recommended to exploit it when dealing with a highly nonlinear system which operates in the presence of undesirable disturbances.

  19. Global models of human decision-making for land-based mitigation and adaptation assessment

    NASA Astrophysics Data System (ADS)

    Arneth, A.; Brown, C.; Rounsevell, M. D. A.

    2014-07-01

    Understanding the links between land-use change (LUC) and climate change is vital in developing effective land-based climate mitigation policies and adaptation measures. Although mitigation and adaptation are human-mediated processes, current global-scale modelling tools do not account for societal learning and other human responses to environmental change. We propose the agent functional type (AFT) method to advance the representation of these processes, by combining socio-economics (agent-based modelling) with natural sciences (dynamic global vegetation models). Initial AFT-based simulations show the emergence of realistic LUC patterns that reflect known LUC processes, demonstrating the potential of the method to enhance our understanding of the role of people in the Earth system.

  20. A Newton method with adaptive finite elements for solving phase-change problems with natural convection

    NASA Astrophysics Data System (ADS)

    Danaila, Ionut; Moglan, Raluca; Hecht, Frédéric; Le Masson, Stéphane

    2014-10-01

    We present a new numerical system using finite elements with mesh adaptivity for the simulation of solid-liquid phase change systems. In the liquid phase, the natural convection flow is simulated by solving the incompressible Navier-Stokes equations with the Boussinesq approximation. A variable viscosity model allows the velocity to progressively vanish in the solid phase, through an intermediate mushy region. The phase change is modeled by introducing an implicit enthalpy source term in the heat equation. The final system of equations describing the liquid-solid system by a single domain approach is solved using a Newton iterative algorithm. The space discretization is based on P2-P1 Taylor-Hood finite elements, and mesh adaptivity by metric control is used to accurately track the solid-liquid interface or the density inversion interface for water flows. The numerical method is validated against classical benchmarks that progressively add strong non-linearities in the system of equations: natural convection of air, natural convection of water, melting of a phase-change material and water freezing. Very good agreement with experimental data is obtained for each test case, proving the capability of the method to deal with both melting and solidification problems with convection. The presented numerical method is easy to implement in FreeFem++, using a syntax close to the mathematical formulation.

  1. Adapting School-Based Substance Use Prevention Curriculum Through Cultural Grounding: A Review and Exemplar of Adaptation Processes for Rural Schools

    PubMed Central

    Colby, Margaret; Hecht, Michael L.; Miller-Day, Michelle; Krieger, Janice L.; Syvertsen, Amy K.; Graham, John W.; Pettigrew, Jonathan

    2014-01-01

    A central challenge facing twenty-first century community-based researchers and prevention scientists is the process of curriculum adaptation. While early prevention efforts sought to develop effective programs, taking programs to scale implies that they will be adapted, especially as programs are implemented with populations other than those with whom they were developed or tested. The principle of cultural grounding, which argues that health message adaptation should be informed by knowledge of the target population and by cultural insiders, provides a theoretical rationale for cultural regrounding; this article presents an illustrative case of methods used to reground the keepin' it REAL substance use prevention curriculum for a rural adolescent population. We argue that adaptation processes like those presented should be incorporated into the design and dissemination of prevention interventions. PMID:22961604

  2. Adaptive Filtering Methods for Identifying Cross-Frequency Couplings in Human EEG

    PubMed Central

    Van Zaen, Jérôme; Murray, Micah M.; Meuli, Reto A.; Vesin, Jean-Marc

    2013-01-01

    Oscillations have been increasingly recognized as a core property of neural responses that contribute to spontaneous, induced, and evoked activities within and between individual neurons and neural ensembles. They are considered as a prominent mechanism for information processing within and communication between brain areas. More recently, it has been proposed that interactions between periodic components at different frequencies, known as cross-frequency couplings, may support the integration of neuronal oscillations at different temporal and spatial scales. The present study details methods based on an adaptive frequency tracking approach that improve the quantification and statistical analysis of oscillatory components and cross-frequency couplings. This approach allows for time-varying instantaneous frequency, which is particularly important when measuring phase interactions between components. We compared this adaptive approach to traditional band-pass filters in their measurement of phase-amplitude and phase-phase cross-frequency couplings. Evaluations were performed with synthetic signals and EEG data recorded from healthy humans performing an illusory contour discrimination task. First, the synthetic signals in conjunction with Monte Carlo simulations highlighted two desirable features of the proposed algorithm vs. classical filter-bank approaches: resilience to broad-band noise and oscillatory interference. Second, the analyses with real EEG signals revealed statistically more robust effects (i.e. improved sensitivity) when using an adaptive frequency tracking framework, particularly when identifying phase-amplitude couplings. This was further confirmed after generating surrogate signals from the real EEG data. Adaptive frequency tracking appears to improve the measurements of cross-frequency couplings through precise extraction of neuronal oscillations. PMID:23560098
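
    The adaptive frequency tracking algorithm itself is not reproduced here; the sketch below shows the conventional band-pass-plus-Hilbert estimation of phase-amplitude coupling (mean vector length) that the study uses as its point of comparison, applied to a synthetic theta-gamma coupled signal. The band edges, filter order, and signal parameters are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic EEG-like signal: 40 Hz (gamma) amplitude modulated by 6 Hz (theta) phase.
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + 0.8 * theta) * np.sin(2 * np.pi * 40 * t)
sig = theta + 0.3 * gamma + 0.2 * rng.standard_normal(len(t))

def bandpass(x, lo, hi, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(sig, 4, 8)))       # theta phase
amp = np.abs(hilbert(bandpass(sig, 35, 45)))         # gamma amplitude envelope

# Mean vector length (a standard phase-amplitude coupling index).
mvl = np.abs(np.mean(amp * np.exp(1j * phase)))
print("phase-amplitude coupling (mean vector length):", round(float(mvl), 4))
```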

  3. Adaptive filtering methods for identifying cross-frequency couplings in human EEG.

    PubMed

    Van Zaen, Jérôme; Murray, Micah M; Meuli, Reto A; Vesin, Jean-Marc

    2013-01-01

    Oscillations have been increasingly recognized as a core property of neural responses that contribute to spontaneous, induced, and evoked activities within and between individual neurons and neural ensembles. They are considered as a prominent mechanism for information processing within and communication between brain areas. More recently, it has been proposed that interactions between periodic components at different frequencies, known as cross-frequency couplings, may support the integration of neuronal oscillations at different temporal and spatial scales. The present study details methods based on an adaptive frequency tracking approach that improve the quantification and statistical analysis of oscillatory components and cross-frequency couplings. This approach allows for time-varying instantaneous frequency, which is particularly important when measuring phase interactions between components. We compared this adaptive approach to traditional band-pass filters in their measurement of phase-amplitude and phase-phase cross-frequency couplings. Evaluations were performed with synthetic signals and EEG data recorded from healthy humans performing an illusory contour discrimination task. First, the synthetic signals in conjunction with Monte Carlo simulations highlighted two desirable features of the proposed algorithm vs. classical filter-bank approaches: resilience to broad-band noise and oscillatory interference. Second, the analyses with real EEG signals revealed statistically more robust effects (i.e. improved sensitivity) when using an adaptive frequency tracking framework, particularly when identifying phase-amplitude couplings. This was further confirmed after generating surrogate signals from the real EEG data. Adaptive frequency tracking appears to improve the measurements of cross-frequency couplings through precise extraction of neuronal oscillations.

  4. Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

    NASA Astrophysics Data System (ADS)

    Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

    2013-02-01

    The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.

  5. Adaptive multiresolution semi-Lagrangian discontinuous Galerkin methods for the Vlasov equations

    NASA Astrophysics Data System (ADS)

    Besse, N.; Deriaz, E.; Madaule, É.

    2017-03-01

    We develop adaptive numerical schemes for the Vlasov equation by combining discontinuous Galerkin discretisation, multiresolution analysis and semi-Lagrangian time integration. We implement a tree based structure in order to achieve adaptivity. Both multi-wavelets and discontinuous Galerkin rely on a local polynomial basis. The schemes are tested and validated using Vlasov-Poisson equations for plasma physics and astrophysics.

  6. Robust observer-based adaptive fuzzy sliding mode controller

    NASA Astrophysics Data System (ADS)

    Oveisi, Atta; Nestorović, Tamara

    2016-08-01

    In this paper, a new observer-based adaptive fuzzy integral sliding mode controller is proposed based on the Lyapunov stability theorem. The plant is subjected to a square-integrable disturbance and is assumed to have mismatch uncertainties both in state- and input-matrices. Based on the classical sliding mode controller, the equivalent control effort is obtained to satisfy the sufficient requirement of the sliding mode controller, and then the control law is modified to guarantee the reachability of the system trajectory to the sliding manifold. In order to relax the norm-bounded constraints on the control law and solve the chattering problem of the sliding mode controller, a fuzzy logic inference mechanism is combined with the controller. An adaptive law is then introduced to tune the parameters of the fuzzy system on-line. Finally, for evaluating the controller and the robust performance of the closed-loop system, the proposed regulator is implemented on a real-time mechanical vibrating system.

  7. Adaptation of an ethnographic method for investigation of the task domain in diagnostic radiology

    NASA Astrophysics Data System (ADS)

    Ramey, Judith A.; Rowberg, Alan H.; Robinson, Carol

    1992-07-01

    A number of user-centered methods for designing radiology workstations have been described by researchers at Carleton University (Ottawa), Georgetown University, George Washington University, and University of Arizona, among others. The approach described here differs in that it enriches standard human-factors practices with methods adapted from ethnography to study users (in this case, diagnostic radiologists) as members of a distinct culture. The overall approach combines several methods; the core method, based on ethnographic "stream of behavior chronicles" and their analysis, has four phases: (1) first, we gather the stream of behavior by videotaping a radiologist as he or she works; (2) we view the tape ourselves and formulate questions and hypotheses about the work; and then (3) in a second videotaped session, we show the radiologist the original tape and ask for a running commentary on the work, into which, at the appropriate points, we interject our questions for clarification. We then (4) categorize/index the behavior on the "raw data" tapes for various kinds of follow-on analysis. We describe and illustrate this method in detail, describe how we analyze the "raw data" videotapes and the commentary tapes, and explain how the method can be integrated into an overall user-centered design process based on standard human-factors techniques.

  8. UAV multiple image dense matching based on self-adaptive patch

    NASA Astrophysics Data System (ADS)

    Zhu, Jin; Ding, Yazhou; Xiao, Xiongwu; Guo, Bingxuan; Li, Deren; Yang, Nan; Zhang, Weilong; Huang, Xiangxiang; Li, Linhui; Peng, Zhe; Pan, Fei

    2015-12-01

    Drawing on state-of-the-art multi-view dense matching methods, this article proposes a UAV multiple-image dense matching algorithm based on self-adaptive patches (UAV-AP), designed for the particular characteristics of UAV images. The main idea of match propagation based on self-adaptive patches is to build patches centered on seed points that have already been matched. The extent and shape of the patches adapt to the terrain relief automatically: where the surface is smooth, the patch grows to cover the whole smooth area; where the terrain is rough, the patch shrinks to describe the details of the surface. With this approach, the UAV image sequences and the given or previously triangulated orientation elements are taken as inputs. The main processing steps are as follows: (1) multi-view initial feature matching, (2) match propagation based on self-adaptive patches, and (3) filtering of erroneous matching points. Finally, the algorithm outputs a dense colored point cloud. Experiments indicate that this method surpasses existing related algorithms in efficiency while achieving very good matching precision.

  9. Adaptive PCA based fault diagnosis scheme in imperial smelting process.

    PubMed

    Hu, Zhikun; Chen, Zhiwen; Gui, Weihua; Jiang, Bin

    2014-09-01

    In this paper, an adaptive fault detection scheme based on recursive principal component analysis (PCA) is proposed to deal with the problem of false alarms caused by normal process changes in a real process. A fault isolation approach is further developed based on the Generalized Likelihood Ratio (GLR) test and Singular Value Decomposition (SVD), one of the general techniques underlying PCA, with which off-set and scaling faults can be easily isolated, yielding an explicit off-set fault direction and a scaling fault classification. Identification of off-set and scaling faults is also addressed, and the complete PCA-based fault diagnosis procedure is laid out. The scheme is applied to the Imperial Smelting Process, and the results show that the proposed strategies can mitigate false alarms and isolate faults efficiently.
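
    The sketch below illustrates the monitoring side of such a scheme: PCA-based T2 and SPE statistics with an exponentially weighted model update on samples declared normal, so that slow normal drift does not trigger false alarms. It is a simplified stand-in for the paper's recursive PCA and GLR/SVD isolation machinery; the forgetting factor, component count, and toy data are assumptions.

      import numpy as np

      class AdaptivePCAMonitor:
          """Recursive-PCA style monitor with an exponentially weighted model update."""

          def __init__(self, n_components, forgetting=0.99):
              self.k = n_components
              self.lam = forgetting

          def fit(self, X):
              self.mean = X.mean(axis=0)
              self.cov = np.cov(X - self.mean, rowvar=False)
              self._decompose()

          def _decompose(self):
              w, V = np.linalg.eigh(self.cov)
              order = np.argsort(w)[::-1]
              self.eigvals = np.maximum(w[order][: self.k], 1e-12)
              self.P = V[:, order][:, : self.k]

          def statistics(self, x):
              xc = x - self.mean
              t = self.P.T @ xc
              t2 = float(np.sum(t ** 2 / self.eigvals))    # Hotelling T^2
              resid = xc - self.P @ t
              spe = float(resid @ resid)                   # squared prediction error (Q)
              return t2, spe

          def update(self, x):
              """Fold a sample declared normal into the model (forgetting-factor update)."""
              self.mean = self.lam * self.mean + (1 - self.lam) * x
              xc = x - self.mean
              self.cov = self.lam * self.cov + (1 - self.lam) * np.outer(xc, xc)
              self._decompose()

      # usage: fit on normal data, then score a sample carrying an off-set fault
      rng = np.random.default_rng(0)
      X_normal = rng.normal(size=(500, 5))
      mon = AdaptivePCAMonitor(n_components=2)
      mon.fit(X_normal)
      print(mon.statistics(X_normal[0] + np.array([4.0, 0, 0, 0, 0])))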

  10. A beam halo monitor based on adaptive optics

    NASA Astrophysics Data System (ADS)

    Welsch, C. P.; Bravin, E.; Lefèvre, T.

    2007-06-01

    In future high intensity, high energy accelerators, beam losses must be minimized to maximize performance and reduce activation of accelerator components. It is imperative to have a clear understanding of the mechanisms that can lead to halo formation and to be able to test available theoretical models with an adequate experimental setup. Measurements based on optical transition radiation (OTR) provide an interesting opportunity for high resolution measurements of the transverse beam profile. An imaging system based on a beam core-suppression technique has been developed, in which the core of the beam is deflected by means of a micro-mirror array to allow direct observation of the halo. In this contribution, a possible layout of a novel diagnostic system based on adaptive optics is presented and the results of first tests carried out in our optical lab are summarized.

  11. Adaptive fiber optics collimator based on flexible hinges.

    PubMed

    Zhi, Dong; Ma, Yanxing; Ma, Pengfei; Si, Lei; Wang, Xiaolin; Zhou, Pu

    2014-08-20

    In this manuscript, we present a new design for an adaptive fiber-optics collimator (AFOC) based on flexible hinges, using piezoelectric stack actuators for X-Y displacement. Different from the traditional AFOC, the new structure uses flexible hinges to drive a fiber end cap instead of the naked fiber. We fabricated a real AFOC based on flexible hinges, and the end cap's deviation and the resonance frequency of the device were measured. Experimental results show that this new AFOC can provide fast control of the tip-tilt deviation of the laser beam emitted from the end cap. As a result, the fiber end cap can support much higher power than naked fiber, which makes the new structure ideal for tip-tilt control in a high-power fiber laser system.

  12. Cultural Adaptation and Implementation of Family Evidence-Based Interventions with Diverse Populations.

    PubMed

    Kumpfer, Karol; Magalhães, Catia; Xie, Jing

    2016-10-18

    Family evidence-based interventions (FEBIs) are effective in creating lasting improvements and preventing children's behavioral health problems, even in genetically at-risk children. Most FEBIs, however, were designed for English-speaking families. Consequently, providers have difficulty engaging non-English-speaking populations in their own country or in other countries where the content, language, and recruitment methods of the FEBIs do not reflect their culture. The practical solution has been to culturally adapt existing FEBIs. Research suggests this can increase family engagement by about 40 %. This article covers background, theory, and research on FEBIs and the need to engage more diverse families. Steps for culturally adapting FEBIs with fidelity are presented based on our own and local implementers' experiences in 36 countries with the Strengthening Families Program. These steps, also previously recommended by a United Nations Office on Drugs and Crime panel of experts in family skills interventions, include: (1) creating a cultural advisory group, (2) assessing specific needs of cultural subgroups, (3) language translation, (4) hiring implementers from the culture, (5) developing culturally adapted training systems, (6) making cultural adaptations cautiously during repeated delivery, (7) continuous implementation quality and outcome evaluation to assure effectiveness in comparison with the original FEBI, (8) developing local and international dissemination partnerships, and (9) securing funding support for sustainability. Future efficacy trials should compare existing FEBIs to culturally adapted versions to determine comparative cost effectiveness.

  13. An adaptive MR-CT registration method for MRI-guided prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Wen, Ning; Gordon, James J.; Elshaikh, Mohamed A.; Movsas, Benjamin; Chetty, Indrin J.

    2015-04-01

    Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ cm-3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. In summary, the developed adaptive FEM method preserves the prostate volume

  14. Co-production of knowledge: recipe for success in land-based climate change adaptation?

    NASA Astrophysics Data System (ADS)

    Coninx, Ingrid; Swart, Rob

    2015-04-01

    After multiple failures of scientists to trigger policymakers and other relevant actors to take action when communicating research findings, calls for co-production (or co-creation) of knowledge and stakeholder involvement in climate change adaptation efforts have increased rapidly over the past few years. In land-based adaptation in particular, on-the-ground action is often hindered by societal resistance to solutions proposed by scientists and by a misfit between potential solutions and the local context, leading to misunderstanding and even rejection of scientific recommendations. A fully integrative co-creation process, in which scientists and practitioners jointly discuss climate vulnerability and possible responses, explore perspectives, and design adaptation measures based on their own knowledge, is expected to prevent this adaptation deadlock. The apparent conviction that co-creation processes result in successful adaptation has not yet been unambiguously demonstrated empirically, but it has made co-creation one of the basic principles of many new research and policy programmes. But is co-creation that brings the knowledge of scientists and practitioners together always the best recipe for success in climate change adaptation? Assessing a number of actual cases, the authors have serious doubts. The paper proposes additional considerations for adaptively managing the environment that should be taken into account in the design of participatory knowledge development in which climate scientists play a role. These include the nature of the problem at stake; the values, interests and perceptions of the actors involved; the methods used to build trust, strengthen alignment and develop reciprocal relationships among scientists and practitioners; and the concreteness of the co-creation output.

  15. Adaptive ultrasonic imaging with the total focusing method for inspection of complex components immersed in water

    NASA Astrophysics Data System (ADS)

    Le Jeune, L.; Robert, S.; Dumas, P.; Membre, A.; Prada, C.

    2015-03-01

    In this paper, we propose an ultrasonic adaptive imaging method based on phased-array technology and the synthetic focusing algorithm known as the Total Focusing Method (TFM). The general principle is to image the surface by applying the TFM algorithm in a semi-infinite water medium. The reconstructed surface is then taken into account to form a second TFM image inside the component. In the surface reconstruction step, the TFM algorithm has been optimized to decrease computation time and to limit noise in the water. In the second step, the ultrasonic paths through the reconstructed surface are calculated using Fermat's principle and an iterative algorithm, and the classical TFM is applied to obtain an image inside the component. This paper presents several results of TFM imaging in components of different geometries, and a result obtained with a new type of probe equipped with a flexible wedge filled with water (manufactured by Imasonic).
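
    For reference, a basic TFM delay-and-sum reconstruction over full-matrix-capture data is sketched below for a single homogeneous medium; the two-step scheme described above (surface imaging in water followed by Fermat-principle ray tracing through the reconstructed surface) is not reproduced, and the array geometry and sampling numbers are illustrative.

      import numpy as np

      def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
          """Total Focusing Method (delay-and-sum over full-matrix-capture data) for a
          single homogeneous medium with sound speed c; fmc[tx, rx, t] holds the A-scans."""
          n_el = len(elem_x)
          image = np.zeros((len(grid_z), len(grid_x)))
          for iz, z in enumerate(grid_z):
              for ix, x in enumerate(grid_x):
                  tof = np.sqrt((elem_x - x) ** 2 + z ** 2) / c     # one-way times of flight
                  acc = 0.0
                  for tx in range(n_el):
                      idx = np.round((tof[tx] + tof) * fs).astype(int)
                      idx = np.clip(idx, 0, fmc.shape[2] - 1)
                      acc += fmc[tx, np.arange(n_el), idx].sum()    # sum over receivers
                  image[iz, ix] = abs(acc)
          return image

      # usage with synthetic data (shapes only): 16-element array sampled at 50 MHz
      rng = np.random.default_rng(7)
      elem_x = np.linspace(-5e-3, 5e-3, 16)
      fmc = rng.normal(size=(16, 16, 2000))
      img = tfm_image(fmc, elem_x, np.linspace(-5e-3, 5e-3, 40),
                      np.linspace(1e-3, 20e-3, 40), c=6300.0, fs=50e6)
      print(img.shape)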

  16. Domain Adaptation Methods for Improving Lab-to-field Generalization of Cocaine Detection using Wearable ECG

    PubMed Central

    Natarajan, Annamalai; Angarita, Gustavo; Gaiser, Edward; Malison, Robert; Ganesan, Deepak; Marlin, Benjamin M.

    2016-01-01

    Mobile health research on illicit drug use detection typically involves a two-stage study design where data to learn detectors is first collected in lab-based trials, followed by a deployment to subjects in a free-living environment to assess detector performance. While recent work has demonstrated the feasibility of wearable sensors for illicit drug use detection in the lab setting, several key problems can limit lab-to-field generalization performance. For example, lab-based data collection often has low ecological validity, the ground-truth event labels collected in the lab may not be available at the same level of temporal granularity in the field, and there can be significant variability between subjects. In this paper, we present domain adaptation methods for assessing and mitigating potential sources of performance loss in lab-to-field generalization and apply them to the problem of cocaine use detection from wearable electrocardiogram sensor data. PMID:28090605

  17. Domain Adaptation Methods for Improving Lab-to-field Generalization of Cocaine Detection using Wearable ECG.

    PubMed

    Natarajan, Annamalai; Angarita, Gustavo; Gaiser, Edward; Malison, Robert; Ganesan, Deepak; Marlin, Benjamin M

    2016-09-01

    Mobile health research on illicit drug use detection typically involves a two-stage study design where data to learn detectors is first collected in lab-based trials, followed by a deployment to subjects in a free-living environment to assess detector performance. While recent work has demonstrated the feasibility of wearable sensors for illicit drug use detection in the lab setting, several key problems can limit lab-to-field generalization performance. For example, lab-based data collection often has low ecological validity, the ground-truth event labels collected in the lab may not be available at the same level of temporal granularity in the field, and there can be significant variability between subjects. In this paper, we present domain adaptation methods for assessing and mitigating potential sources of performance loss in lab-to-field generalization and apply them to the problem of cocaine use detection from wearable electrocardiogram sensor data.

  18. Analysis of modified SMI method for adaptive array weight control

    NASA Technical Reports Server (NTRS)

    Dilsavor, R. L.; Moses, R. L.

    1989-01-01

    An adaptive array is applied to the problem of receiving a desired signal in the presence of weak interference signals which need to be suppressed. A modification, suggested by Gupta, of the sample matrix inversion (SMI) algorithm controls the array weights. In the modified SMI algorithm, interference suppression is increased by subtracting a fraction F of the noise power from the diagonal elements of the estimated covariance matrix. Given the true covariance matrix and the desired signal direction, the modified algorithm is shown to maximize a well-defined, intuitive output power ratio criterion. Expressions are derived for the expected value and variance of the array weights and output powers as a function of the fraction F and the number of snapshots used in the covariance matrix estimate. These expressions are compared with computer simulation and good agreement is found. A trade-off is found to exist between the desired level of interference suppression and the number of snapshots required in order to achieve that level with some certainty. The removal of noise eigenvectors from the covariance matrix inverse is also discussed with respect to this application. Finally, the type and severity of errors which occur in the covariance matrix estimate are characterized through simulation.
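
    A compact sketch of the modified SMI weight computation described above follows: a fraction F of the noise power is subtracted from the diagonal of the sample covariance before inversion, which deepens the nulls placed on weak interferers. The array size, snapshot count, and value of F are illustrative, and the noise power is assumed known or separately estimated.

      import numpy as np

      def modified_smi_weights(snapshots, steering, noise_power, F=0.8):
          """Modified SMI: subtract F * noise_power from the diagonal of the sample
          covariance, then form MVDR-style weights with unit response toward steering.
          snapshots: (num_elements, num_snapshots) complex array."""
          n, k = snapshots.shape
          R_hat = snapshots @ snapshots.conj().T / k            # sample covariance
          R_mod = R_hat - F * noise_power * np.eye(n)           # diagonal "unloading"
          w = np.linalg.solve(R_mod, steering)
          return w / (steering.conj() @ w)                      # normalize desired response to 1

      # toy usage: 8-element half-wavelength array, desired signal at broadside,
      # one weak interferer at 30 degrees, unit-power noise
      rng = np.random.default_rng(1)
      n, k = 8, 2000
      elems = np.arange(n)
      a_des = np.exp(1j * np.pi * elems * np.sin(0.0))
      a_int = np.exp(1j * np.pi * elems * np.sin(np.deg2rad(30)))
      noise = (rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k))) / np.sqrt(2)
      x = 0.3 * a_int[:, None] * np.exp(1j * 2 * np.pi * rng.random(k)) + noise
      w = modified_smi_weights(x, a_des, noise_power=1.0)
      print(20 * np.log10(abs(w.conj() @ a_int)))   # interferer gain in dB (deep null)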

  19. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism, but this parallelism can be difficult to exploit, particularly on complex problems. One approach to extracting this parallelism is the use of special-purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains, and an adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions; all local parallelism can be extracted by this approach. Second, even though the grids constructed may have no regular global structure, there is still parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  20. Adaptive thresholding technique for retinal vessel segmentation based on GLCM-energy information.

    PubMed

    Mapayi, Temitope; Viriri, Serestina; Tapamo, Jules-Raymond

    2015-01-01

    Although retinal vessel segmentation has been extensively researched, a robust and time-efficient segmentation method is still needed. This paper presents a local adaptive thresholding technique based on gray-level co-occurrence matrix (GLCM) energy information for retinal vessel segmentation. Different thresholds were computed using the GLCM energy information. An experimental evaluation on the DRIVE database using the grayscale intensity and the green channel of the retinal image demonstrates the high performance of the proposed local adaptive thresholding technique. Maximum average accuracy rates of 0.9511 and 0.9510, with maximum average sensitivity rates of 0.7650 and 0.7641, were achieved on the DRIVE and STARE databases, respectively. Compared with previously and widely used techniques on these databases, the proposed adaptive thresholding technique is time efficient, with higher average sensitivity and accuracy rates while maintaining specificity in the same very good range.
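
    The sketch below shows one plausible way to drive a window-wise threshold from GLCM energy using scikit-image (graycomatrix/graycoprops, skimage >= 0.19 naming); the mapping from energy to threshold, the window size, and the quantization level are illustrative assumptions rather than the published rule.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def glcm_energy_threshold(green, window=15, levels=32, alpha=1.0):
          """Window-wise adaptive threshold driven by local GLCM energy: homogeneous
          (high-energy) windows keep a larger margin below the local mean, textured
          vessel regions a smaller one. Dark pixels are vessel candidates."""
          img = np.uint8(np.floor(green / float(green.max()) * (levels - 1)))
          seg = np.zeros_like(img, dtype=bool)
          h, w = img.shape
          for y0 in range(0, h - window + 1, window):
              for x0 in range(0, w - window + 1, window):
                  patch = img[y0:y0 + window, x0:x0 + window]
                  glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                      levels=levels, symmetric=True, normed=True)
                  energy = graycoprops(glcm, 'energy').mean()
                  thr = patch.mean() - alpha * energy * patch.std()
                  seg[y0:y0 + window, x0:x0 + window] = patch < thr
          # note: any border remainder narrower than one window is left unprocessed
          return seg

      # usage (hypothetical green-channel array): seg = glcm_energy_threshold(retina_green)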

  1. Study on adaptive PID algorithm of hydraulic turbine governing system based on fuzzy neural network

    NASA Astrophysics Data System (ADS)

    Tang, Liangbao; Bao, Jumin

    2006-11-01

    The conventional hydraulic turbine governing system cannot automatically adjust its PID parameters to the dynamic behaviour of the plant, so the generator speed becomes unstable and the mains frequency fluctuates. To solve this problem, a fuzzy neural network (FNN) and adaptive control are combined to design an adaptive PID algorithm based on the fuzzy neural network that can effectively control the hydraulic turbine governing system. Finally, the improved mathematical model is simulated and the simulation results are compared with those of the conventional governing system, which demonstrates the validity and superiority of the fuzzy neural network PID algorithm. The simulation results show that the algorithm not only retains the functions of fuzzy control but also provides the ability to approximate the nonlinear system, so the dynamic process of the system is captured more precisely and on-line adaptive control is implemented. The algorithm is superior to other methods in response speed and control performance.

  2. Lossless image compression based on optimal prediction, adaptive lifting, and conditional arithmetic coding.

    PubMed

    Boulgouris, N V; Tzovaras, D; Strintzis, M G

    2001-01-01

    The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.

  3. A study of interceptor attitude control based on adaptive wavelet neural networks

    NASA Astrophysics Data System (ADS)

    Li, Da; Wang, Qing-chao

    2005-12-01

    This paper studies the 3-DOF attitude control problem of a kinetic interceptor. When the kinetic interceptor enters terminal guidance it has to maneuver through large angles, and the interceptor attitude system is nonlinear, strongly coupled, and MIMO. An inverse control approach based on adaptive wavelet neural networks is proposed in this paper. Instead of using one complex neural network as the controller, the nonlinear dynamics of the interceptor are first approximated by three independent subsystems through exact feedback linearization, and controllers for each subsystem are then designed using adaptive wavelet neural networks. This method avoids computing a large number of weights and biases in one massive neural network, and the control parameters can be adapted online. Simulation results show that the proposed controller performs remarkably well.

  4. Multichannel Speech Enhancement Based on Generalized Gamma Prior Distribution with Its Online Adaptive Estimation

    NASA Astrophysics Data System (ADS)

    Dat, Tran Huy; Takeda, Kazuya; Itakura, Fumitada

    We present a multichannel speech enhancement method based on MAP speech spectral magnitude estimation using a generalized gamma model of the speech prior distribution, where the model parameters are adapted from the actual noisy speech in a frame-by-frame manner. The use of a more general prior distribution with online adaptive estimation is shown to be effective for speech spectral estimation in noisy environments. Furthermore, multichannel information in the form of cross-channel statistics is shown to be useful for better adapting the prior distribution parameters to the actual observation, resulting in better performance of the speech enhancement algorithm. We tested the proposed algorithm on an in-car speech database and obtained significant improvements in speech recognition performance, particularly under non-stationary noise conditions such as music, air-conditioning, and open windows.

  5. An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2016-01-01

    A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian (AG) fuzzy membership method, a compressed sensing (CS) technique, and a total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion of infrared and visible images. Compared with wavelet, contourlet, or any other multi-resolution analysis method, NSCT has many evident advantages, such as multi-scale, multi-direction, and translation invariance. A fuzzy set is characterized by its membership function (MF), and the well-known Gaussian fuzzy membership degree can be introduced to establish adaptive control of the fusion processing. The compressed sensing technique can sparsely sample the image information at a certain sampling rate, and the sparse signal can be recovered by solving a convex problem with a gradient descent based iterative algorithm. In the proposed fusion process, the pre-enhanced infrared image and the visible image are first decomposed into low-frequency and high-frequency subbands via the NSCT method. The low-frequency coefficients are fused using the adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the other high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard deviation rule, and then recovered by employing the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. The efficiency and robustness are also analyzed and discussed through different evaluation methods, such as the standard deviation, Shannon entropy, root-mean-square error, mutual information, and an edge-based similarity index.

  6. Adaptive Discontinuous Evolution Galerkin Method for Dry Atmospheric Flow

    DTIC Science & Technology

    2013-04-02

    Comparisons with the standard one-dimensional approximate Riemann solver used for the flux integration demonstrate better stability, accuracy, and reliability of the adaptive discontinuous evolution Galerkin method for dry atmospheric convection. Instead of a standard one-dimensional approximate Riemann solver, the flux integration within the discontinuous Galerkin method is now realized by ...

  7. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low order time integrators while accurate solutions require high order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the method of integration operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
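
    The h-adaptivity half of such a scheme is sketched below: an elementary error controller plus a scalar implicit-Euler (BDF1) integrator whose local error estimate comes from step doubling. The p-adaptive order selection described above is not shown, and all constants are illustrative.

      import numpy as np

      def accept_and_new_step(err, h, tol, order, safety=0.9, fac_min=0.2, fac_max=5.0):
          """Elementary error controller: accept when err <= tol and rescale the
          stepsize by (tol/err)^(1/(order+1)), limited by safety factors."""
          accept = err <= tol
          factor = safety * (tol / max(err, 1e-16)) ** (1.0 / (order + 1))
          return accept, h * min(max(factor, fac_min), fac_max)

      def bdf1_adaptive(f, y0, t0, t_end, h0=1e-2, tol=1e-5):
          """Scalar backward Euler with adaptive stepsize; the error estimate compares
          one full step with two half steps (step doubling)."""
          t, y, h = t0, float(y0), h0
          while t < t_end:
              h = min(h, t_end - t)
              def be_step(t_n, y_n, dt):
                  # backward Euler via fixed-point iteration (assumes mild stiffness)
                  y_new = y_n
                  for _ in range(50):
                      y_new = y_n + dt * f(t_n + dt, y_new)
                  return y_new
              y_full = be_step(t, y, h)
              y_half = be_step(t + 0.5 * h, be_step(t, y, 0.5 * h), 0.5 * h)
              err = abs(y_full - y_half)
              accept, h_next = accept_and_new_step(err, h, tol, order=1)
              if accept:
                  t, y = t + h, y_half
              h = h_next
          return y

      print(bdf1_adaptive(lambda t, y: -5.0 * y, 1.0, 0.0, 1.0))   # exact: exp(-5) ~ 0.006738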

  8. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical.
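
    A simplified leveling step in this spirit is sketched below: a global 2nd-order polynomial trend fitted by iteratively reweighted least squares with Tukey biweights, so sharp features and outliers do not bias the trend. The paper uses robust local regression rather than a single global polynomial, so this is only a stand-in; the weight function and constants are assumptions.

      import numpy as np

      def robust_level(image, n_iter=8, c=4.685):
          """Subtract a robustly fitted 2nd-order polynomial trend from an image.
          Tukey-biweight IRLS downweights sharp features and outliers."""
          h, w = image.shape
          yy, xx = np.mgrid[0:h, 0:w]
          x = (xx / w).ravel()
          y = (yy / h).ravel()
          z = image.astype(float).ravel()
          A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
          wts = np.ones_like(z)
          coef = np.zeros(A.shape[1])
          for _ in range(n_iter):
              sw = np.sqrt(wts)[:, None]
              coef, *_ = np.linalg.lstsq(A * sw, z * sw.ravel(), rcond=None)
              r = z - A @ coef
              s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12   # robust scale (MAD)
              u = np.clip(r / (c * s), -1.0, 1.0)
              wts = (1.0 - u ** 2) ** 2                                   # Tukey biweight
          return image - (A @ coef).reshape(h, w)

      # usage: tilted noisy image with a bright feature that should not bias the trend
      rng = np.random.default_rng(5)
      yy, xx = np.mgrid[0:64, 0:64]
      img = 0.05 * xx + 0.02 * yy + rng.normal(scale=0.1, size=(64, 64))
      img[30:34, 30:34] += 5.0
      print(robust_level(img).std())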

  9. Adaptive Controls Method Demonstrated for the Active Suppression of Instabilities in Engine Combustors

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2004-01-01

    An adaptive feedback control method was demonstrated that suppresses thermoacoustic instabilities in a liquid-fueled combustor of a type used in aircraft engines. Extensive research has been done to develop lean-burning (low fuel-to-air ratio) combustors that can reduce emissions throughout the mission cycle to reduce the environmental impact of aerospace propulsion systems. However, these lean-burning combustors are susceptible to thermoacoustic instabilities (high-frequency pressure waves), which can fatigue combustor components and even the downstream turbine blades. This can significantly decrease the safe operating lives of the combustor and turbine. Thus, suppressing the thermoacoustic combustor instabilities is an enabling technology for lean, low-emissions combustors under NASA's Propulsion and Power Program. This control methodology has been developed and tested in a partnership of the NASA Glenn Research Center, Pratt & Whitney, United Technologies Research Center, and the Georgia Institute of Technology. Initial combustor rig testing of the controls algorithm was completed during 2002. Subsequently, the test results were analyzed and improvements to the method were incorporated in 2003, which culminated in the final status of this controls algorithm. This control methodology is based on adaptive phase shifting. The combustor pressure oscillations are sensed and phase shifted, and a high-frequency fuel valve is actuated to put pressure oscillations into the combustor to cancel pressure oscillations produced by the instability.

  10. An Adaptive Fast Multipole Boundary Element Method for Poisson−Boltzmann Electrostatics

    PubMed Central

    2009-01-01

    The numerical solution of the Poisson−Boltzmann (PB) equation is a useful but a computationally demanding tool for studying electrostatic solvation effects in chemical and biomolecular systems. Recently, we have described a boundary integral equation-based PB solver accelerated by a new version of the fast multipole method (FMM). The overall algorithm shows an order N complexity in both the computational cost and memory usage. Here, we present an updated version of the solver by using an adaptive FMM for accelerating the convolution type matrix-vector multiplications. The adaptive algorithm, when compared to our previous nonadaptive one, not only significantly improves the performance of the overall memory usage but also remarkably speeds the calculation because of an improved load balancing between the local- and far-field calculations. We have also implemented a node-patch discretization scheme that leads to a reduction of unknowns by a factor of 2 relative to the constant element method without sacrificing accuracy. As a result of these improvements, the new solver makes the PB calculation truly feasible for large-scale biomolecular systems such as a 30S ribosome molecule even on a typical 2008 desktop computer. PMID:19517026

  11. An Adaptive Fast Multipole Boundary Element Method for Poisson-Boltzmann Electrostatics

    SciTech Connect

    Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, Jonathan

    2009-01-01

    The numerical solution of the Poisson Boltzmann (PB) equation is a useful but a computationally demanding tool for studying electrostatic solvation effects in chemical and biomolecular systems. Recently, we have described a boundary integral equation-based PB solver accelerated by a new version of the fast multipole method (FMM). The overall algorithm shows an order N complexity in both the computational cost and memory usage. Here, we present an updated version of the solver by using an adaptive FMM for accelerating the convolution type matrix-vector multiplications. The adaptive algorithm, when compared to our previous nonadaptive one, not only significantly improves the performance of the overall memory usage but also remarkably speeds the calculation because of an improved load balancing between the local- and far-field calculations. We have also implemented a node-patch discretization scheme that leads to a reduction of unknowns by a factor of 2 relative to the constant element method without sacrificing accuracy. As a result of these improvements, the new solver makes the PB calculation truly feasible for large-scale biomolecular systems such as a 30S ribosome molecule even on a typical 2008 desktop computer.

  12. An adaptation of Krylov subspace methods to path following

    SciTech Connect

    Walker, H.F.

    1996-12-31

    Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the under-determined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
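
    The toy sketch below shows the predictor-corrector structure being discussed, in the straightforward bordered form: the underdetermined Newton equation is augmented with orthogonality to an approximate tangent and solved with GMRES. The constrained-step formulation that the abstract advocates is not shown, and the unit-circle example is purely illustrative.

      import numpy as np
      from scipy.sparse.linalg import gmres, LinearOperator

      def corrector_step(F, J, u, tangent, tol=1e-10):
          """One Newton-like corrector step for path following of F(u)=0 with
          u = (x, lambda): the Newton equation is bordered with the condition that
          the step be orthogonal to an approximate tangent, and the bordered
          system is solved with GMRES (a Krylov method)."""
          n = u.size
          Ju = J(u)                                         # (n-1) x n Jacobian of F

          def matvec(v):
              return np.concatenate([Ju @ v, [tangent @ v]])

          A = LinearOperator((n, n), matvec=matvec)
          rhs = np.concatenate([-F(u), [0.0]])
          dv, info = gmres(A, rhs, atol=tol)
          return u + dv

      # toy continuation problem: trace the unit circle x^2 + lambda^2 = 1
      F = lambda u: np.array([u[0] ** 2 + u[1] ** 2 - 1.0])
      J = lambda u: np.array([[2 * u[0], 2 * u[1]]])
      t = np.array([0.0, 1.0])            # approximate tangent at (1, 0)
      u = np.array([1.0, 0.0]) + 0.1 * t  # predictor step along the tangent
      for _ in range(5):                  # corrector iterations back to the curve
          u = corrector_step(F, J, u, t)
      print(u, F(u))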

  13. The adaptive EVP method for solving the sea ice momentum equation

    NASA Astrophysics Data System (ADS)

    Kimmritz, Madlen; Danilov, Sergey; Losch, Martin

    2016-04-01

    Most dynamic sea ice models for climate-type simulations are based on the viscous-plastic (VP) rheology. The resulting stiff system of partial differential equations for the sea ice velocity is either solved implicitly at great computational cost, or explicitly with added pseudo-elasticity (elastic-viscous-plastic, EVP). Bouillon et al. (Ocean Modell., 2013) reinterpreted the EVP method for solving the sea ice momentum equation as an iterative pseudotime VP solver with improved convergence properties. In Kimmritz et al. (J. Comput. Physics, 2015) we showed that this modified EVP (mEVP) scheme should warrant converging solutions if its stability is maintained and the number of pseudotime iterations is sufficiently high. Here, we focus on the role of spatial discretizations. We analyze stability and convergence of mEVP on B- and C-grids. We show that the implementation on B-grids is less restrictive with respect to stability constraints than on C-grids. Additionally, convergence on C-grids is sensitive to the discretization of the viscosities and can be lost for some variants of discretization. Building on these findings we present an adaptive version of the mEVP scheme, which satisfies local stability constraints and aims to accelerate convergence where possible. This is achieved by local adaptation of the parameters governing the pseudotime subcycling of the scheme. We analyze the performance of this new "adaptive EVP" approach in a series of experiments with the sea ice component of the general circulation model MITgcm, which is formulated on a C-grid. We show that convergence in realistic settings is sensitive to the details of the implementation of the rheology. In particular, the use of the pressure replacement method deteriorates convergence.

  14. Adaptive Finite Element Method for Solving the Exact Kohn-Sham Equation of Density Functional Theory

    SciTech Connect

    Bylaska, Eric J.; Holst, Michael; Weare, John H.

    2009-04-14

    Results of the application of an adaptive finite element (FE) based solution using the FETK library of M. Holst to the Density Functional Theory (DFT) approximation of the electronic structure of atoms and molecules are reported. The severe problem associated with the rapid variation of the electronic wave functions in the near-singular regions of the atomic centers is treated by implementing completely unstructured simplex meshes that resolve these features around atomic nuclei. This concentrates the computational work in the regions in which the shortest length scales are necessary and provides low resolution in regions for which there is no electron density. The accuracy of the solutions improved significantly when adaptive mesh refinement was applied, and it was found that the essential difficulties of the Kohn-Sham eigenvalue equation were the result of the singular behavior of the atomic potentials. Even though the matrix representations of the discrete Hamiltonian operator in the adaptive finite element basis are always sparse with a linear complexity in the number of discretization points, the overall memory and computational requirements for the solver implemented were found to be quite high. The number of mesh vertices per atom as a function of the atomic number Z and the required accuracy e (in atomic units) was estimated to be v(e;Z) = 122.37 * Z^2.2346 / e^1.1173, and the number of floating point operations per minimization step for a system of N_A atoms was found to be O(N_A^3 * v(e,Z)) (e.g., for Z = 26, e = 0.0015 au, and N_A = 100, the memory requirement and computational cost would be ~0.2 terabytes and ~25 petaflops). It was found that the high cost of the method could be reduced somewhat by using a geometry-based refinement strategy to fix the error near the singularities.

  15. Optimization-based wavefront sensorless adaptive optics for multiphoton microscopy.

    PubMed

    Antonello, Jacopo; van Werkhoven, Tim; Verhaegen, Michel; Truong, Hoa H; Keller, Christoph U; Gerritsen, Hans C

    2014-06-01

    Optical aberrations have detrimental effects in multiphoton microscopy. These effects can be curtailed by implementing model-based wavefront sensorless adaptive optics, which only requires the addition of a wavefront shaping device, such as a deformable mirror (DM) to an existing microscope. The aberration correction is achieved by maximizing a suitable image quality metric. We implement a model-based aberration correction algorithm in a second-harmonic microscope. The tip, tilt, and defocus aberrations are removed from the basis functions used for the control of the DM, as these aberrations induce distortions in the acquired images. We compute the parameters of a quadratic polynomial that is used to model the image quality metric directly from experimental input-output measurements. Finally, we apply the aberration correction by maximizing the image quality metric using the least-squares estimate of the unknown aberration.
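
    A minimal sketch of the model-based estimation step is given below: probe DM mode amplitudes, record the metric, fit a quadratic model by least squares, and return the mode amplitudes at its stationary point as the correction. The modal basis, number of probe measurements, and the synthetic concave metric are illustrative assumptions.

      import numpy as np

      def estimate_correction(modes_applied, metric_values):
          """Fit m(x) ~ c0 + b.x + quadratic terms by least squares over
          (probe amplitudes, metric) pairs and return the stationary point,
          i.e. the DM mode amplitudes that maximize the fitted metric."""
          X = np.asarray(modes_applied, dtype=float)     # (n_meas, n_modes)
          y = np.asarray(metric_values, dtype=float)
          n = X.shape[1]
          pairs = [(i, j) for i in range(n) for j in range(i, n)]
          cols = [np.ones(len(y))] + [X[:, i] for i in range(n)]
          cols += [X[:, i] * X[:, j] for i, j in pairs]
          theta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
          b = theta[1:n + 1]
          H = np.zeros((n, n))                           # Hessian of the fitted metric
          for k, (i, j) in enumerate(pairs):
              a = theta[n + 1 + k]
              if i == j:
                  H[i, i] = 2.0 * a
              else:
                  H[i, j] = H[j, i] = a
          return np.linalg.solve(H, -b)                  # stationary point of the model

      # synthetic check: concave quadratic metric peaked at the sought correction
      rng = np.random.default_rng(2)
      x_true = np.array([0.3, -0.2, 0.1])
      probes = rng.uniform(-1.0, 1.0, size=(30, 3))
      metric = 1.0 - np.sum((probes - x_true) ** 2, axis=1)
      print(estimate_correction(probes, metric))          # ~ [0.3, -0.2, 0.1]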

  16. MONITORING METHODS ADAPTABLE TO VAPOR INTRUSION MONITORING - USEPA COMPENDIUM METHODS TO-15, TO-15 SUPPLEMENT (DRAFT), AND TO-17

    EPA Science Inventory

    USEPA ambient air monitoring methods for volatile organic compounds (VOCs) using specially-prepared canisters and solid adsorbents are directly adaptable to monitoring for vapors in the indoor environment. The draft Method TO-15 Supplement, an extension of the USEPA Method TO-15,...

  17. Automatic multirate methods for ordinary differential equations. [Adaptive time steps

    SciTech Connect

    Gear, C.W.

    1980-01-01

    A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if a system is sparse; however, the estimation and control of errors is very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising. 2 figures.

  18. Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control

    NASA Technical Reports Server (NTRS)

    Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.

  19. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725

  20. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method, designed to avoid the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to the recently proposed method of Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and Berry–Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates than the Berry–Sauer method on the L-96 example.

  1. AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov–Poisson equation

    SciTech Connect

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin

    2016-07-01

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
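
    One building block mentioned above, the generalized finite difference (GFD) discretization by weighted least squares, is easy to illustrate: fit a local quadratic Taylor expansion to scattered neighbor values and read off the second derivatives. The sketch below does this for a 2-D Laplacian; the inverse-distance weights and neighbor count are illustrative assumptions, not the AP-Cloud implementation.

      import numpy as np

      def gfd_laplacian(center, neighbors, values, value_center):
          """Approximate the Laplacian at `center` from scattered neighbors by a
          weighted least-squares fit of a 2-D quadratic Taylor expansion.
          Unknowns: [fx, fy, fxx, fyy, fxy]; the Laplacian is fxx + fyy."""
          d = neighbors - center                       # (m, 2) offsets
          du = values - value_center
          A = np.column_stack([d[:, 0], d[:, 1],
                               0.5 * d[:, 0] ** 2, 0.5 * d[:, 1] ** 2,
                               d[:, 0] * d[:, 1]])
          w = 1.0 / (np.linalg.norm(d, axis=1) + 1e-12)   # inverse-distance weights
          sw = np.sqrt(w)[:, None]
          coef, *_ = np.linalg.lstsq(A * sw, du * sw.ravel(), rcond=None)
          return coef[2] + coef[3]

      # sanity check on f(x, y) = x^2 + y^2, whose Laplacian is 4
      rng = np.random.default_rng(3)
      c = np.array([0.2, -0.1])
      nb = c + rng.uniform(-0.05, 0.05, size=(12, 2))
      f = lambda p: p[:, 0] ** 2 + p[:, 1] ** 2
      print(gfd_laplacian(c, nb, f(nb), c[0] ** 2 + c[1] ** 2))   # ~ 4.0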

  2. AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation

    SciTech Connect

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin

    2016-04-19

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  3. AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation

    DOE PAGES

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; ...

    2016-04-19

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  4. AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov-Poisson equation

    NASA Astrophysics Data System (ADS)

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin

    2016-07-01

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov-Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  5. Astronomical image denoising by means of improved adaptive backtracking-based matching pursuit algorithm.

    PubMed

    Liu, Qianshun; Bai, Jian; Yu, Feihong

    2014-11-10

    In an effort to improve compressive sensing and sparse signal reconstruction by way of the backtracking-based adaptive orthogonal matching pursuit (BAOMP), a new sparse coding algorithm called improved adaptive backtracking-based OMP (IABOMP) is proposed in this study. Many aspects have been improved compared to the original BAOMP method, including replacing the fixed threshold with an adaptive one and adding residual feedback and support-set verification, among others. Because of these improvements, the proposed algorithm can choose atoms more precisely. By adding an adaptive step-size mechanism, it requires far fewer iterations and thus executes more efficiently. Additionally, a simple but effective contrast enhancement method is adopted to further improve the denoising results and visual quality. By combining the IABOMP algorithm with the state-of-the-art dictionary learning algorithm K-SVD, the proposed algorithm achieves better denoising of astronomical images. Numerous experimental results show that the proposed algorithm performs effectively on Gaussian and Poisson noise removal.
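
    A simplified greedy sketch in this spirit follows: atoms are selected against an adaptive threshold (a fraction of the current maximum correlation), coefficients are re-estimated by least squares over the support, and near-zero atoms are pruned (the backtracking step). The threshold fraction, pruning rule, and stopping criterion are illustrative assumptions rather than the published IABOMP constants, and the dictionary-learning and contrast-enhancement stages are omitted.

      import numpy as np

      def adaptive_backtracking_omp(D, y, max_iter=30, mu=0.6, tol=1e-6):
          """Greedy sparse coding with adaptive atom selection and backtracking pruning."""
          n_atoms = D.shape[1]
          support = np.zeros(n_atoms, dtype=bool)
          x = np.zeros(n_atoms)
          r = y.copy()
          for _ in range(max_iter):
              corr = np.abs(D.T @ r)
              support |= corr >= mu * corr.max()                   # adaptive selection
              idx = np.flatnonzero(support)
              coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
              keep = np.abs(coef) > 1e-3 * np.abs(coef).max()      # backtracking prune
              idx = idx[keep]
              coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
              support[:] = False
              support[idx] = True
              x[:] = 0.0
              x[idx] = coef
              r = y - D @ x
              if np.linalg.norm(r) < tol * np.linalg.norm(y):
                  break
          return x

      # usage: recover a 3-sparse vector from a random normalized dictionary
      rng = np.random.default_rng(4)
      D = rng.normal(size=(64, 128))
      D /= np.linalg.norm(D, axis=0)
      x_true = np.zeros(128)
      x_true[[5, 40, 90]] = [1.0, -0.7, 0.5]
      x_hat = adaptive_backtracking_omp(D, D @ x_true)
      print(np.flatnonzero(np.abs(x_hat) > 1e-3))   # expected support: [5, 40, 90]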

  6. Adaptive nonlocal means filtering based on local noise level for CT denoising

    SciTech Connect

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-15

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to local noise level of CT images and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphical processing units (GPU) implementation of this noise map calculation and the adaptive NLM filtering were developed to meet demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the
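
    A toy 2-D version of noise-level-adaptive NLM is sketched below: the filtering strength at each pixel is tied to a local noise standard-deviation map. The window sizes and the h = k*sigma rule are illustrative assumptions, and the clinical implementation described above is 3-D and GPU-accelerated rather than this slow pure-Python loop.

      import numpy as np

      def adaptive_nlm(image, noise_map, patch=3, search=7, k=2.0):
          """Nonlocal means with a per-pixel smoothing strength set from a local
          noise standard-deviation map (stronger smoothing where noise is higher)."""
          pad = search // 2 + patch // 2
          img = np.pad(image, pad, mode='reflect')
          out = np.zeros_like(image, dtype=float)
          pr, sr = patch // 2, search // 2
          h_img, w_img = image.shape
          for y in range(h_img):
              for x in range(w_img):
                  yc, xc = y + pad, x + pad
                  ref = img[yc - pr:yc + pr + 1, xc - pr:xc + pr + 1]
                  h2 = (k * noise_map[y, x]) ** 2 + 1e-12
                  wsum, acc = 0.0, 0.0
                  for dy in range(-sr, sr + 1):
                      for dx in range(-sr, sr + 1):
                          yy, xx = yc + dy, xc + dx
                          cand = img[yy - pr:yy + pr + 1, xx - pr:xx + pr + 1]
                          d2 = np.mean((ref - cand) ** 2)
                          w = np.exp(-d2 / h2)
                          wsum += w
                          acc += w * img[yy, xx]
                  out[y, x] = acc / wsum
          return out

      # usage: a step edge with spatially varying noise level
      rng = np.random.default_rng(6)
      clean = np.zeros((40, 40)); clean[:, 20:] = 100.0
      sigma_map = np.full(clean.shape, 10.0); sigma_map[20:, :] = 25.0
      noisy = clean + rng.normal(size=clean.shape) * sigma_map
      print(float(np.abs(adaptive_nlm(noisy, sigma_map) - clean).mean()))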

  7. The Validation of a Classroom Observation Instrument Based on the Construct of Teacher Adaptive Practice

    ERIC Educational Resources Information Center

    Loughland, Tony; Vlies, Penny

    2016-01-01

    Teacher adaptability is a key disposition for teachers that has been linked to outcomes of interests to schools. The aim of this study was to examine how the broader disposition of teacher adaptability might be observable as classroom-based adaptive practices using an argument-based approach to validation. The findings from the initial phase of…

  8. Application study of piecewise context-based adaptive binary arithmetic coding combined with modified LZC

    NASA Astrophysics Data System (ADS)

    Su, Yan; Jun, Xie Cheng

    2006-08-01

    An algorithm combining LZC with arithmetic coding for image compression is presented, and both theoretical derivation and simulation results confirm its correctness and feasibility. According to the characteristics of context-based adaptive binary arithmetic coding and entropy, LZC was modified to cooperate with the optimized piecewise arithmetic coding; the resulting algorithm improves the compression ratio without any additional time consumption compared to the traditional method.

  9. Taking a broad approach to public health program adaptation: adapting a family-based diabetes education program.

    PubMed

    Reinschmidt, Kerstin M; Teufel-Shone, Nicolette I; Bradford, Gail; Drummond, Rebecca L; Torres, Emma; Redondo, Floribella; Elenes, Jo Jean; Sanders, Alicia; Gastelum, Sylvia; Moore-Monroy, Martha; Barajas, Salvador; Fernandez, Lourdes; Alvidrez, Rosy; de Zapien, Jill Guernsey; Staten, Lisa K

    2010-04-01

    Diabetes health disparities among Hispanic populations have been countered with federally funded health promotion and disease prevention programs. Dissemination has focused on program adaptation to local cultural contexts for greater acceptability and sustainability. Taking a broader approach and drawing on our experience in Mexican American communities at the U.S.-Mexico Border, we demonstrate how interventions are adapted at the intersection of multiple cultural contexts: the populations targeted, the community- and university-based entities designing and implementing interventions, and the field team delivering the materials. Program adaptation involves negotiations between representatives of all contexts and is imperative in promoting local ownership and program sustainability.

  10. Measuring Fidelity and Adaptation: Reliability of an Instrument for School-Based Prevention Programs.

    PubMed

    Bishop, Dana C; Pankratz, Melinda M; Hansen, William B; Albritton, Jordan; Albritton, Lauren; Strack, Joann

    2014-06-01

    There is a need to standardize methods for assessing fidelity and adaptation. Such standardization would allow program implementation to be examined in a manner that will be useful for understanding the moderating role of fidelity in dissemination research. This article describes a method for collecting data about fidelity of implementation for school-based prevention programs, including measures of adherence, quality of delivery, dosage, participant engagement, and adaptation. We report about the reliability of these methods when applied by four observers who coded video recordings of teachers delivering All Stars, a middle school drug prevention program. Interrater agreement for scaled items was assessed for an instrument designed to evaluate program fidelity. Results indicated sound interrater reliability for items assessing adherence, dosage, quality of teaching, teacher understanding of concepts, and program adaptations. The interrater reliability for items assessing potential program effectiveness, classroom management, achievement of activity objectives, and adaptation valences was improved by dichotomizing the response options for these items. The item that assessed student engagement demonstrated only modest interrater reliability and was not improved through dichotomization. Several coder pairs were discordant on items that overall demonstrated good interrater reliability. Proposed modifications to the coding manual and protocol are discussed.

  11. Adaptive Bayesian-based speckle reduction in SAR images using complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Ma, Ning; Yan, Wei; Zhang, Peng

    2005-10-01

    In this paper, an improved adaptive speckle reduction method is presented based on the dual-tree complex wavelet transform (CWT). It combines the additive noise reduction of soft thresholding with the CWT's directional selectivity; its main contribution is to adapt the effective threshold so that edge detail is preserved. A Bayesian estimator is also applied to the decomposed data to estimate the best values for the noise-free complex wavelet coefficients. This estimation is based on alpha-stable and Gaussian distribution hypotheses for the complex wavelet coefficients of the signal and noise, respectively. Experimental results show that the denoising performance is competitive with state-of-the-art techniques based on the real discrete wavelet transform (DWT).
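
    The following is a deliberately simplified sketch of the general wavelet-shrinkage idea the paper builds on, not the authors' method: a real discrete wavelet transform stands in for the dual-tree complex wavelet transform, and a universal threshold stands in for the Bayesian alpha-stable estimator. The synthetic speckled image and all parameter choices are assumptions.

```python
# Simplified wavelet-shrinkage despeckling sketch (not the paper's algorithm).
import numpy as np
import pywt

def despeckle(image, wavelet="db4", levels=3):
    # Log transform turns multiplicative speckle into (roughly) additive noise.
    log_img = np.log1p(image.astype(np.float64))
    coeffs = pywt.wavedec2(log_img, wavelet, level=levels)
    # Estimate noise sigma from the finest diagonal subband (robust MAD).
    sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(log_img.size))  # universal threshold
    new_coeffs = [coeffs[0]]
    for cH, cV, cD in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(c, thresh, mode="soft")
                                for c in (cH, cV, cD)))
    denoised_log = pywt.waverec2(new_coeffs, wavelet)
    return np.expm1(denoised_log)

# Synthetic example: constant patch corrupted by multiplicative speckle.
rng = np.random.default_rng(0)
clean = np.full((128, 128), 100.0)
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
print("noisy MSE:   ", np.mean((speckled - clean) ** 2))
print("denoised MSE:", np.mean((despeckle(speckled) - clean) ** 2))
```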

  12. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.
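
    As a minimal sketch of why the adjoint approach is so efficient (this is a toy linear "state equation", not a Navier-Stokes solver), the example below recovers the sensitivities of an objective J = c^T u with respect to several parameters from a single adjoint solve and checks them against finite differences; all matrices, sizes, and names are assumptions.

```python
# Discrete adjoint sensitivity sketch: R(u, p) = A(p) u - f = 0, J = c^T u.
# One adjoint solve A^T lam = -c gives dJ/dp_i = lam^T (dA/dp_i) u for all i.
import numpy as np

rng = np.random.default_rng(1)
n, n_params = 20, 5

K0 = np.eye(n) * 4.0
K = [rng.standard_normal((n, n)) * 0.1 for _ in range(n_params)]  # dA/dp_i
f = rng.standard_normal(n)
c = rng.standard_normal(n)
p = rng.standard_normal(n_params)

def solve_state(p):
    A = K0 + sum(pi * Ki for pi, Ki in zip(p, K))
    return A, np.linalg.solve(A, f)

A, u = solve_state(p)
J = c @ u

# One adjoint solve replaces n_params forward solves.
lam = np.linalg.solve(A.T, -c)
grad_adjoint = np.array([lam @ (Ki @ u) for Ki in K])

# Check against forward finite differences.
eps = 1e-6
grad_fd = np.empty(n_params)
for i in range(n_params):
    pp = p.copy()
    pp[i] += eps
    _, up = solve_state(pp)
    grad_fd[i] = (c @ up - J) / eps

print("max |adjoint - FD|:", np.max(np.abs(grad_adjoint - grad_fd)))
```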

  13. On Mixed Data and Event Driven Design for Adaptive-Critic-Based Nonlinear H∞ Control.

    PubMed

    Wang, Ding; Mu, Chaoxu; Liu, Derong; Ma, Hongwen

    2017-02-01

    In this paper, based on the adaptive critic learning technique, H∞ control for a class of unknown nonlinear dynamic systems is investigated by adopting a mixed data- and event-driven design approach. The nonlinear H∞ control problem is formulated as a two-player zero-sum differential game, and the adaptive critic method is employed to cope with the data-based optimization. The novelty lies in combining the data-driven learning identifier with the event-driven design formulation in order to develop the adaptive critic controller, thereby accomplishing nonlinear H∞ control. The event-driven optimal control law and the time-driven worst-case disturbance law are approximated by constructing and tuning a critic neural network. Applying the event-driven feedback control, the closed-loop system is built with stability analysis. Simulation studies are conducted to verify the theoretical results and illustrate the control performance. It is significant to observe that the present research provides a new avenue for integrating data-based control and event-triggering mechanisms into establishing advanced adaptive critic systems.
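
    The sketch below isolates only the event-driven ingredient described above, applied to a simple stabilizing state-feedback law; it does not reproduce the paper's adaptive critic learning or the H∞ game formulation. The plant, gain, and triggering threshold are assumptions.

```python
# Event-triggered feedback sketch: the control law is recomputed only when the
# gap between the current state and the last sampled state grows too large.
import numpy as np

dt = 0.01
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double-integrator plant (assumed)
B = np.array([[0.0], [1.0]])
K = np.array([[2.0, 3.0]])               # stabilizing state-feedback gain

x = np.array([1.0, 0.0])
x_event = x.copy()                        # state at the last triggering instant
u = (-K @ x_event).item()
updates = 0

for step in range(2000):
    # Triggering rule: update control only when the measurement gap exceeds a
    # fraction of the current state norm (small offset avoids chattering).
    if np.linalg.norm(x - x_event) > 0.1 * np.linalg.norm(x) + 1e-4:
        x_event = x.copy()
        u = (-K @ x_event).item()
        updates += 1
    x = x + dt * (A @ x + B.flatten() * u)  # forward-Euler plant step

print("control updates:", updates, "out of 2000 steps")
print("final state norm:", np.linalg.norm(x))
```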

  14. Community-Based Adaptation To A Changing Climate

    EPA Pesticide Factsheets

    This resource discusses how climate change is affecting community services, presents sample adaptation strategies, gives examples of successful community adaptation actions, and provides links to other key federal resources.

  15. Robust control for a biaxial servo with time delay system based on adaptive tuning technique.

    PubMed

    Chen, Tien-Chi; Yu, Chih-Hsien

    2009-07-01

    A robust control method for synchronizing biaxial servo system motion is proposed in this paper. A new network-based cross-coupled control and adaptive tuning techniques are used together to cancel out the skew error. The conventional fixed-gain PID cross-coupled controller (CCC) is replaced with an adaptive cross-coupled controller (ACCC) in the proposed control scheme to maintain synchronized biaxial servo motion. Adaptive-tuning PID (APID) position and velocity controllers provide the control actions needed to maintain synchronization while following a variable command trajectory. A delay-time compensator (DTC) with an adaptive controller was added to move the time-delay element effectively outside the closed loop, enhancing the stability of the robust controlled system. This scheme provides strong robustness with respect to uncertain dynamics and disturbances. The simulation and experimental results reveal that the proposed control structure adapts to a wide range of operating conditions and provides promising results under parameter variations and load changes.
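
    As a rough illustration of adaptive gain tuning in isolation (not the paper's APID, cross-coupled, or delay-compensated scheme), the sketch below applies a gradient-style heuristic that nudges PID gains in proportion to the tracking error on a simple first-order servo model; the plant, initial gains, and adaptation rates are all assumptions.

```python
# Heuristic adaptive-gain PID sketch on an assumed first-order servo axis.
import numpy as np

dt, tau = 0.001, 0.05           # time step and plant time constant (assumed)
kp, ki, kd = 0.5, 0.0, 0.0      # initial PID gains
g_p, g_i, g_d = 2.0, 0.5, 1e-4  # adaptation rates (assumed)

y, integ, prev_e = 0.0, 0.0, 0.0
log = []
for k in range(5000):
    r = 1.0 if (k * dt) % 1.0 < 0.5 else -1.0       # square-wave command
    e = r - y
    integ += e * dt
    deriv = (e - prev_e) / dt
    u = kp * e + ki * integ + kd * deriv

    # Gradient-style gain adaptation, clamped to keep the gains non-negative.
    kp = max(0.0, kp + g_p * e * e * dt)
    ki = max(0.0, ki + g_i * e * integ * dt)
    kd = max(0.0, kd + g_d * e * deriv * dt)

    y += dt * (-y + u) / tau                         # first-order plant step
    prev_e = e
    log.append(abs(e))

print("mean |error|, first vs last second of simulation:",
      np.mean(log[:1000]), np.mean(log[-1000:]))
```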

  16. SuBSENSE: a universal change detection method with local adaptive sensitivity.

    PubMed

    St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert

    2015-01-01

    Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Moreover, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instructions, reached real-time processing speed on a midlevel desktop CPU. A complete C++ implementation based on OpenCV is available online.
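
    The following is a greatly simplified sketch of the pixel-level feedback idea only (no LBSP binary features and no adaptation-rate feedback, so it is not SuBSENSE itself): each pixel keeps a few background samples and its own decision threshold, which is raised where the segmentation keeps flickering and lowered where it is stable. All constants and the synthetic frames are assumptions.

```python
# Toy sample-based background subtractor with per-pixel threshold feedback.
import numpy as np

H, W, N = 64, 64, 10          # frame size and background samples per pixel
MATCHES_NEEDED = 2

rng = np.random.default_rng(0)
background = rng.integers(90, 110, size=(H, W)).astype(np.float64)

samples = np.stack([background + rng.normal(0, 2, (H, W)) for _ in range(N)])
R = np.full((H, W), 20.0)     # per-pixel distance threshold
blink = np.zeros((H, W))      # running measure of segmentation instability
prev_mask = np.zeros((H, W), dtype=bool)

def segment(frame):
    global blink, prev_mask
    dist = np.abs(samples - frame)               # (N, H, W)
    matches = (dist < R).sum(axis=0)
    fg_mask = matches < MATCHES_NEEDED
    # Feedback: pixels whose label keeps flipping get a larger threshold
    # (less sensitive); stable pixels get a smaller one (more sensitive).
    blink = 0.9 * blink + 0.1 * (fg_mask ^ prev_mask)
    R[:] = np.clip(R + np.where(blink > 0.2, 1.0, -0.5), 5.0, 60.0)
    # Conservative model update: refresh one random sample at background pixels.
    idx = rng.integers(0, N)
    samples[idx][~fg_mask] = frame[~fg_mask]
    prev_mask = fg_mask
    return fg_mask

# Feed a few frames: static scene with sensor noise, then an object appears.
for t in range(30):
    frame = background + rng.normal(0, 2, (H, W))
    if t >= 20:
        frame[20:35, 20:35] += 80.0              # bright foreground patch
    mask = segment(frame)

print("foreground pixels detected:", int(mask.sum()), "expected ~", 15 * 15)
```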

  17. Land-based approach to evaluate sustainable land management and adaptive capacity of ecosystems/lands

    NASA Astrophysics Data System (ADS)

    Kust, German; Andreeva, Olga

    2015-04-01

    A number of new concepts and paradigms have appeared during recent decades, such as sustainable land management (SLM), climate change (CC) adaptation, environmental services, ecosystem health, and others. These initiatives still lack a common scientific platform, although some agreement on terminology has been reached, schemes of links and feedback loops have been created, and some models have been developed. Nevertheless, despite these scientific achievements, land-related issues are still not at the center of CC adaptation and mitigation. The latter has not grown much beyond the "greenhouse gases" (GHG) concept, which makes land degradation the "forgotten side of climate change". One way to integrate the concepts of climate and desertification/land degradation is to treat the "GHG" approach as providing the global solution and the "land" approach as providing the local solution, covering other "locally manifesting" issues of global importance (biodiversity conservation, food security, disasters and risks, etc.) and serving as a central concept among them. The SLM concept is a land-based approach that includes both the ecosystem-based approach (EbA) and the community-based approach (CbA). SLM can serve as an integral CC adaptation strategy, being based on the premise that the healthier and more resilient a system is, the less vulnerable and more adaptive it will be to any external changes and forces, including climate. The biggest scientific issue is how to evaluate SLM and the results of SLM investments. We suggest an approach based on understanding the balance or equilibrium of land and nature components as the major sign of a sustainable system. From this point of view it is easier to understand the state of ecosystem stress, the extent of its "health", the range of adaptive capacity, the drivers of degradation and the nature of SLM, as well as extended land use and the concept of environmental land management as an improved SLM approach.

  18. Knowledge-based control of an adaptive interface

    NASA Technical Reports Server (NTRS)

    Lachman, Roy

    1989-01-01

    The analysis, development strategy, and preliminary design for an intelligent, adaptive interface is reported. The design philosophy couples knowledge-based system technology with standard human factors approaches to interface development for computer workstations. An expert system has been designed to drive the interface for application software. The intelligent interface will be linked to application packages, one at a time, that are planned for multiple-application workstations aboard Space Station Freedom. Current requirements call for most Space Station activities to be conducted at the workstation consoles. One set of activities will consist of standard data management services (DMS). DMS software includes text processing, spreadsheets, data base management, etc. Text processing was selected for the first intelligent interface prototype because text-processing software can be developed initially as fully functional but limited to a small set of commands; the program's complexity can then be increased incrementally. Knowledge of the operator's behavior and three types of instructions to the underlying application software are included in the rule base. A conventional expert-system inference engine searches the data base for antecedents to rules and sends the consequents of fired rules as commands to the underlying software. Plans for putting the expert system on top of a second application, a database management system, will be carried out following behavioral research on the first application. The intelligent interface design is suitable for use with ground-based workstations now common in government, industrial, and educational organizations.
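
    As a toy sketch of the rule-based pattern described above (not the Space Station DMS software), the snippet below shows a minimal inference pass that matches rule antecedents against facts describing operator behavior and emits commands for the underlying application; the rules and facts are invented examples.

```python
# Minimal rule matching sketch: antecedents over observed facts -> commands.
RULES = [
    # (antecedents that must all be present, consequent command)
    ({"operator_idle", "document_unsaved"},      "autosave_document"),
    ({"repeated_undo", "novice_user"},           "offer_tutorial_hint"),
    ({"large_paste", "formatting_inconsistent"}, "suggest_style_cleanup"),
]

def infer(facts):
    """Fire every rule whose antecedents are all satisfied; return commands."""
    commands = []
    for antecedents, command in RULES:
        if antecedents <= facts:        # subset test: all antecedents hold
            commands.append(command)
    return commands

observed = {"operator_idle", "document_unsaved", "novice_user"}
print(infer(observed))                  # ['autosave_document']
```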

  19. An adaptive gyroscope-based algorithm for temporal gait analysis.

    PubMed

    Greene, Barry R; McGrath, Denise; O'Neill, Ross; O'Donovan, Karol J; Burns, Adrian; Caulfield, Brian

    2010-12-01

    Body-worn kinematic sensors have been widely proposed as the optimal solution for portable, low-cost, ambulatory monitoring of gait. This study aims to evaluate an adaptive gyroscope-based algorithm for automated temporal gait analysis using body-worn wireless gyroscopes. Gyroscope data from nine healthy adult subjects performing four walks at four different speeds were compared against data acquired simultaneously using two force plates and an optical motion capture system. Data from a poliomyelitis patient exhibiting pathological gait, walking with and without the aid of a crutch, were also compared against the force plate data. Results show that the mean true error between the adaptive gyroscope algorithm and the force plate was -4.5 ± 14.4 ms and 43.4 ± 6.0 ms for IC and TC points, respectively, in healthy subjects. Similarly, the mean true error when data from the polio patient were compared against the force plate was -75.61 ± 27.53 ms and 99.20 ± 46.00 ms for IC and TC points, respectively. A comparison of the present algorithm against temporal gait parameters derived from an optical motion analysis system showed good agreement for the nine healthy subjects at four speeds. These results show that the algorithm reported here could constitute the basis of a robust, portable, low-cost system for ambulatory monitoring of gait.
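
    The sketch below illustrates the shank-gyroscope convention such algorithms commonly build on, not the authors' adaptive algorithm: mid-swing appears as a large positive peak in the medio-lateral angular velocity, terminal contact (TC) is taken as the trough preceding that peak and initial contact (IC) as the trough following it, with the peak threshold adapted to each recording. The synthetic signal and all constants are assumptions.

```python
# Peak/trough-based IC and TC detection on a synthetic gait-like signal.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                     # sample rate (Hz)
t = np.arange(0, 10, 1 / fs)
cycle = 1.1                                  # assumed gait cycle duration (s)
phase = (t % cycle) / cycle
# Mid-swing peak with flanking troughs, plus a little sensor noise.
signal = (3.0 * np.exp(-((phase - 0.50) / 0.06) ** 2)     # mid-swing peak
          - 1.2 * np.exp(-((phase - 0.30) / 0.05) ** 2)   # pre-swing trough (TC)
          - 1.0 * np.exp(-((phase - 0.72) / 0.05) ** 2))  # loading trough (IC)
signal += 0.05 * np.random.default_rng(0).standard_normal(t.size)

# Adaptive peak threshold: a fixed fraction of this recording's own maximum.
threshold = 0.4 * signal.max()
mid_swing, _ = find_peaks(signal, height=threshold, distance=int(0.6 * fs))

tc_idx, ic_idx = [], []
win = int(0.4 * fs)                          # search window around mid-swing
for p in mid_swing:
    pre = signal[max(0, p - win):p]
    post = signal[p:min(len(signal), p + win)]
    if len(pre) and len(post):
        tc_idx.append(max(0, p - win) + int(np.argmin(pre)))
        ic_idx.append(p + int(np.argmin(post)))

print("detected cycles:", len(mid_swing))
print("first few TC times (s):", np.round(np.array(tc_idx[:3]) / fs, 2))
print("first few IC times (s):", np.round(np.array(ic_idx[:3]) / fs, 2))
```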

  20. Lens-based wavefront sensorless adaptive optics swept source OCT

    NASA Astrophysics Data System (ADS)

    Jian, Yifan; Lee, Sujin; Ju, Myeong Jin; Heisler, Morgan; Ding, Weiguang; Zawadzki, Robert J.; Bonora, Stefano; Sarunic, Marinko V.

    2016-06-01

    Optical coherence tomography (OCT) has revolutionized modern ophthalmology, providing depth-resolved images of the retinal layers in a system that is suited to a clinical environment. Although the axial resolution of an OCT system, which is a function of the light source bandwidth, is sufficient to resolve retinal features at a micrometer scale, the lateral resolution depends on the delivery optics and is limited by ocular aberrations. Through the combination of wavefront sensorless adaptive optics and the use of dual deformable transmissive optical elements, we present a compact lens-based OCT system at an imaging wavelength of 1060 nm for high-resolution retinal imaging. We utilized a commercially available variable focal length lens to correct for the wide range of defocus commonly found in patients' eyes, and a novel multi-actuator adaptive lens for aberration correction, to achieve near diffraction-limited imaging performance at the retina. With a parallel processing computational platform, high-resolution cross-sectional and en face retinal image acquisition and display were performed in real time. In order to demonstrate the system functionality and clinical utility, we present images of the photoreceptor cone mosaic and other retinal layers acquired in vivo from research subjects.
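
    The following is a conceptual sketch of a wavefront-sensorless optimization loop in general, not the authors' system: with no wavefront sensor, each corrective mode of the adaptive element is perturbed in turn and the setting that maximizes an image-quality metric is kept. The metric here is a stand-in function that peaks at perfect correction, and everything in the snippet (mode count, ranges, noise level) is an assumption.

```python
# Sensorless AO sketch: mode-by-mode search that maximizes an image metric.
import numpy as np

rng = np.random.default_rng(3)
n_modes = 5                                   # e.g. defocus plus low-order modes
true_aberration = rng.normal(0, 0.5, n_modes) # hidden from the controller

def image_sharpness(correction):
    """Proxy for an en face image metric; peaks when correction cancels the
    hidden aberration, with a little measurement noise added."""
    residual = true_aberration + correction
    return np.exp(-np.sum(residual ** 2)) + rng.normal(0, 0.002)

def sensorless_search(steps_per_mode=7, search_range=1.0, sweeps=3):
    correction = np.zeros(n_modes)
    for _ in range(sweeps):                   # repeated mode-by-mode sweeps
        for m in range(n_modes):
            candidates = correction[m] + np.linspace(-search_range,
                                                     search_range,
                                                     steps_per_mode)
            scores = []
            for c in candidates:
                trial = correction.copy()
                trial[m] = c
                scores.append(image_sharpness(trial))
            correction[m] = candidates[int(np.argmax(scores))]
        search_range *= 0.5                   # refine around the current optimum
    return correction

corr = sensorless_search()
print("residual RMS before:", np.sqrt(np.mean(true_aberration ** 2)))
print("residual RMS after: ", np.sqrt(np.mean((true_aberration + corr) ** 2)))
```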