Science.gov

Sample records for adaptive method based

  1. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many good techniques have been proposed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. These methods, however, do not give good image quality because the threshold prevents them from simultaneously modifying and removing many small wavelet coefficients. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
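
    As a rough illustration of the neighbouring-coefficient idea these methods share, the sketch below applies a NeighShrink-style shrinkage rule to a single subband of wavelet detail coefficients. The window size, the universal threshold, and the numpy-only setup are textbook defaults chosen for illustration, not the paper's exact algorithm.

```python
import numpy as np

def neighshrink(coeffs, sigma, win=3):
    """Shrink wavelet detail coefficients using the energy of a
    neighbourhood window (NeighShrink-style rule).

    coeffs : 2-D array of detail coefficients from one subband
    sigma  : noise standard deviation estimate
    win    : odd neighbourhood window size
    """
    n = coeffs.size
    lam2 = 2.0 * sigma**2 * np.log(n)          # universal threshold, squared
    half = win // 2
    padded = np.pad(coeffs, half, mode="reflect")
    out = np.empty_like(coeffs, dtype=float)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            window = padded[i:i + win, j:j + win]
            s2 = np.sum(window**2)             # neighbourhood energy
            shrink = max(0.0, 1.0 - lam2 / s2) if s2 > 0 else 0.0
            out[i, j] = coeffs[i, j] * shrink
    return out
```

A strong isolated coefficient survives nearly unchanged, while coefficients in low-energy neighbourhoods are suppressed to zero.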

  2. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    NASA Astrophysics Data System (ADS)

    Bo, Wurigen; Shashkov, Mikhail

    2015-10-01

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35,34,6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. In the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way. In the current paper we present a new adaptive ReALE method, A-ReALE, that is based on the following design principles. First, a monitor function (or error indicator) based on the Hessian of some flow parameter(s) is utilized. Second, an equidistribution principle for the monitor function is used as a criterion for adapting the mesh. Third, a centroidal Voronoi tessellation is used to adapt the mesh. Fourth, we scale the monitor function to avoid very small and large cells and then smooth it to permit the use of theoretical results related to weighted centroidal Voronoi tessellation. In the A-ReALE method, both the number of cells and their locations are allowed to change at the rezone stage on each time step. The number of generators at each time step is chosen to guarantee the required spatial resolution in regions where the monitor function reaches its maximum value. We present all details required for implementation of the new adaptive A-ReALE method and demonstrate its performance in comparison with the standard ReALE method on a series of numerical examples.
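
    The equidistribution principle in the second design point can be sketched in one dimension: place nodes so that every cell carries the same integral of the monitor function. The numpy sketch below is an illustrative 1D analogue, not the paper's 2D Voronoi-based rezoner.

```python
import numpy as np

def equidistribute(monitor, x, n_nodes):
    """Place n_nodes mesh nodes on [x[0], x[-1]] so that the integral of
    the monitor function is the same over every cell (equidistribution).

    monitor : sampled monitor values M(x) >= 0 on the fine grid x
    """
    # cumulative integral of M via the trapezoid rule
    cum = np.concatenate(
        ([0.0], np.cumsum(0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x))))
    total = cum[-1]
    targets = np.linspace(0.0, total, n_nodes)  # equal monitor mass per cell
    # invert the cumulative integral to get node positions
    return np.interp(targets, cum, x)
```

With a monitor function peaked at a feature, the nodes cluster around that feature while remaining monotone.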

  3. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    SciTech Connect

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  4. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE PAGES

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  5. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  6. Adaptive Ripple Down Rules Method based on Description Length

    NASA Astrophysics Data System (ADS)

    Yoshida, Tetsuya; Wada, Takuya; Motoda, Hiroshi; Washio, Takashi

    A knowledge acquisition method, Ripple Down Rules (RDR), can directly acquire and encode knowledge from human experts. It is an incremental acquisition method in which each new piece of knowledge is added as an exception to the existing knowledge base. Past research on the RDR method assumes that the problem domain is stable. This is not the case in reality, especially when the environment changes over time. This paper proposes an adaptive Ripple Down Rules method based on the Minimum Description Length Principle, aiming at knowledge acquisition in a dynamically changing environment. We consider a change in the correspondence between attribute values and class labels as a typical change in the environment. When such a change occurs, some pieces of previously acquired knowledge become worthless, and their existence may hinder the acquisition of new knowledge. In our approach, knowledge deletion is carried out as well as knowledge acquisition, so that useless knowledge is properly discarded to ensure efficient knowledge acquisition while maintaining the prediction accuracy for future data. Furthermore, pruning is incorporated into the incremental knowledge acquisition in RDR to improve the prediction accuracy of the constructed knowledge base. Experiments were conducted by simulating changes in the correspondence between attribute values and class labels using datasets from the UCI repository. The results are encouraging.
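
    A minimal sketch of an MDL-style retention decision: a rule base is scored by the bits needed to encode its rules plus the bits needed to encode the examples it misclassifies, and a rule is deleted when the shorter rule base pays for the extra errors it causes. The bit costs below are hypothetical constants, not the paper's actual coding scheme.

```python
def description_length(n_rules, n_errors,
                       bits_per_rule=16.0, bits_per_exception=8.0):
    """Crude two-part MDL score for a rule base: cost of encoding the
    rules plus cost of encoding the examples the rules get wrong."""
    return n_rules * bits_per_rule + n_errors * bits_per_exception

def should_delete_rule(n_rules, errors_with, errors_without):
    """Delete a rule when the shorter rule base pays for its extra errors."""
    keep = description_length(n_rules, errors_with)
    drop = description_length(n_rules - 1, errors_without)
    return drop < keep
```

A rule whose removal costs only one extra error is dropped; one whose removal causes many new errors is kept.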

  7. Adaptive enhancement method of infrared image based on scene feature

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Bai, Tingzhu; Shang, Fei

    2008-12-01

    All objects emit radiation in amounts related to their temperature and their ability to emit radiation. An infrared image directly shows this otherwise invisible infrared radiation. Because of these advantages, infrared imaging technology is applied in many fields. Compared with visible images, however, the disadvantages of infrared images are obvious: low luminance, low contrast, and an inconspicuous difference between target and background. The aim of infrared image enhancement is to improve the interpretability or perception of information in the image for human viewers, or to provide 'better' input for other automated image processing techniques. Most adaptive algorithms for image enhancement are based mainly on the gray-scale distribution of the infrared image and are not associated with the actual features of the image scene. As a result, the enhancement is poorly targeted, and the enhanced image is not well suited to infrared surveillance applications. In this paper we develop a scene-feature-based algorithm to enhance the contrast of infrared images adaptively. First, after analyzing the scene features of different infrared images, we choose feasible parameters to describe the infrared image. Second, we construct a new histogram distribution based on the chosen parameters using a Gaussian function. Finally, the infrared image is enhanced by constructing a new form of histogram. Experimental results show that the algorithm performs better than the other methods mentioned in this paper for infrared scene images.
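
    The Gaussian histogram construction can be approximated with ordinary histogram specification: remap grey levels so the image histogram follows a Gaussian target. The parameters mu and sigma below stand in for the paper's scene-derived parameters and are illustrative assumptions.

```python
import numpy as np

def gaussian_histogram_match(img, mu=0.5, sigma=0.15, levels=256):
    """Remap grey levels of img (floats in [0, 1]) so its histogram
    approximates a Gaussian centred on mu -- a stand-in for the paper's
    scene-parameter-driven Gaussian histogram construction."""
    # empirical CDF of the input image
    hist, edges = np.histogram(img, bins=levels, range=(0.0, 1.0))
    cdf_in = np.cumsum(hist) / img.size
    # target CDF: discretized Gaussian
    centers = (edges[:-1] + edges[1:]) / 2
    target = np.exp(-0.5 * ((centers - mu) / sigma) ** 2)
    cdf_t = np.cumsum(target) / target.sum()
    # map each input level to the target level with the matching CDF value
    mapping = np.interp(cdf_in, cdf_t, centers)
    idx = np.clip((img * levels).astype(int), 0, levels - 1)
    return mapping[idx]
```

A low-contrast input occupying a narrow band of grey levels is stretched toward the Gaussian target, increasing its spread.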

  8. An Adaptive Derivative-based Method for Function Approximation

    SciTech Connect

    Tong, C

    2008-10-22

    To alleviate the high computational cost of large-scale multi-physics simulations to study the relationships between the model parameters and the outputs of interest, response surfaces are often used in place of the exact functional relationships. This report explores a method for response surface construction using adaptive sampling guided by derivative information at each selected sample point. This method is especially suitable for applications that can readily provide added information such as gradients and Hessian with respect to the input parameters under study. When higher order terms (third and above) in the Taylor series are negligible, the approximation error for this method can be controlled. We present details of the adaptive algorithm and numerical results on a few test problems.
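
    A toy 1D version of derivative-guided adaptive sampling: where two one-sided first-order Taylor predictions of an interval's midpoint disagree most, the neglected higher-order terms are largest, so that interval is refined next. This greedy sketch only illustrates the idea of using gradient information to place samples; the report's response-surface machinery is more elaborate, and the example target function is an assumption.

```python
import math

def adaptive_sample(f, df, a, b, n_samples):
    """Greedy 1-D sketch of derivative-guided sampling: refine the interval
    where two one-sided Taylor predictions of the midpoint disagree most."""
    xs = [a, b]
    for _ in range(n_samples - 2):
        xs.sort()
        errs = []
        for x0, x1 in zip(xs[:-1], xs[1:]):
            m = 0.5 * (x0 + x1)
            left = f(x0) + df(x0) * (m - x0)    # Taylor from the left node
            right = f(x1) + df(x1) * (m - x1)   # Taylor from the right node
            errs.append((abs(left - right), m))
        xs.append(max(errs)[1])                 # sample where models disagree
    return sorted(xs)

# illustrative target with a sharp interior layer at x = 0.5
f = lambda x: math.tanh(20.0 * (x - 0.5))
df = lambda x: 20.0 / math.cosh(20.0 * (x - 0.5)) ** 2
```

Run on the layer function, the samples concentrate where the derivative information says the function is hardest to approximate.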

  9. An adaptive unsupervised hyperspectral classification method based on Gaussian distribution

    NASA Astrophysics Data System (ADS)

    Yue, Jiang; Wu, Jing-wei; Zhang, Yi; Bai, Lian-fa

    2014-11-01

    To achieve adaptive unsupervised clustering with high precision, this paper proposes a method that uses Gaussian distributions to fit the inter-class similarity and the noise distribution; the automatic segmentation threshold is then determined from the fitting result. First, based on the similarity measure of the spectral curves, the method assumes that both target and background follow Gaussian distributions; the distribution characteristics are obtained by fitting the similarity measures of minimum related windows and center pixels with a Gaussian function, and the adaptive threshold is thereby obtained. Second, the pixel minimum related windows are used to merge adjacent similar pixels into picture blocks, completing the dimensionality reduction and realizing the unsupervised classification. AVIRIS data and a set of hyperspectral data we collected are used to evaluate the performance of the proposed method. Experimental results show that the proposed algorithm is not only adaptive but also outperforms K-MEANS and ISODATA in classification accuracy, edge recognition, and robustness.
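
    The Gaussian fit of the similarity/noise distribution can be sketched with robust moment estimates: fit the bulk of the similarity values using the median and MAD (so outlying target pixels do not bias the fit) and place the threshold a few sigmas out. The three-sigma choice is an illustrative assumption, not the paper's fitted value.

```python
import numpy as np

def adaptive_threshold(sims, k=3.0):
    """Estimate an adaptive segmentation threshold by fitting a Gaussian to
    the bulk (noise/background) of the similarity measures, using robust
    median/MAD estimates so target pixels do not bias the fit."""
    mu = np.median(sims)
    sigma = 1.4826 * np.median(np.abs(sims - mu))  # MAD -> Gaussian sigma
    return mu + k * sigma
```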

  10. The Formative Method for Adapting Psychotherapy (FMAP): A community-based developmental approach to culturally adapting therapy

    PubMed Central

    Hwang, Wei-Chin

    2010-01-01

    How do we culturally adapt psychotherapy for ethnic minorities? Although there has been growing interest in doing so, few therapy adaptation frameworks have been developed. The majority of these frameworks take a top-down theoretical approach to adapting psychotherapy. The purpose of this paper is to introduce a community-based developmental approach to modifying psychotherapy for ethnic minorities. The Formative Method for Adapting Psychotherapy (FMAP) is a bottom-up approach that involves collaborating with consumers to generate and support ideas for therapy adaptation. It involves five phases that target developing, testing, and reformulating therapy modifications: (a) generating knowledge and collaborating with stakeholders, (b) integrating generated information with theory and empirical and clinical knowledge, (c) reviewing the initial culturally adapted clinical intervention with stakeholders and revising the culturally adapted intervention, (d) testing the culturally adapted intervention, and (e) finalizing the culturally adapted intervention. Application of the FMAP is illustrated using examples from a study adapting psychotherapy for Chinese Americans, but the method can also be readily applied to modify therapy for other ethnic groups. PMID:20625458

  11. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    SciTech Connect

    Paganelli, Chiara; Peroni, Marta

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter- and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method invariance and robustness properties to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak to peak breathing motion.
The proposed method resulted in reduced residual errors with respect to the standard SIFT approach.

  12. Wavelet-based acoustic emission detection method with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Menon, Sunil; Schoess, Jeffrey N.; Hamza, Rida; Busch, Darryl

    2000-06-01

    Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. One such technology, the use of acoustic emission for the early detection of helicopter rotor head dynamic component faults, has been investigated by Honeywell Technology Center for its rotor acoustic monitoring system (RAMS). This ambitious, 38-month, proof-of-concept effort, which was a part of the Naval Surface Warfare Center Air Vehicle Diagnostics System program, culminated in a successful three-week flight test of the RAMS system at Patuxent River Flight Test Center in September 1997. The flight test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. This paper presents the results of stress wave data analysis of the flight-test dataset using wavelet-based techniques to assess background operational noise vs. machinery failure detection results.
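
    A minimal sketch of the adaptive-threshold burst detection described here: estimate the noise floor robustly with the median absolute deviation (which short bursts barely affect) and flag windows whose energy exceeds an adaptive multiple of the expected noise energy. In the actual system this would run on wavelet detail coefficients rather than the raw signal, and the window size and multiplier below are illustrative assumptions.

```python
import numpy as np

def detect_bursts(signal, win=64, k=5.0):
    """Flag acoustic-emission-like bursts: robust noise-floor estimate,
    then mark windows whose energy exceeds an adaptive threshold."""
    sigma = np.median(np.abs(signal)) / 0.6745     # robust noise std estimate
    n = len(signal) // win
    energy = np.array([np.sum(signal[i*win:(i+1)*win]**2) for i in range(n)])
    thresh = k * win * sigma**2                    # expected noise energy * k
    return np.where(energy > thresh)[0]
```

Injecting a short high-amplitude burst into Gaussian background noise, only the burst window crosses the adaptive threshold.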

  13. Method for reducing the drag of blunt-based vehicles by adaptively increasing forebody roughness

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A. (Inventor); Saltzman, Edwin J. (Inventor); Moes, Timothy R. (Inventor); Iliff, Kenneth W. (Inventor)

    2005-01-01

    A method for reducing drag upon a blunt-based vehicle by adaptively increasing forebody roughness to increase drag at the roughened area of the forebody, which results in a decrease in drag at the base of the vehicle and in total vehicle drag.

  14. Adaptive Algebraic Multigrid Methods

    SciTech Connect

    Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J

    2004-04-09

    Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the restriction of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.
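
    The core adaptive step can be sketched directly: relax on the homogeneous system A x = 0 from a random initial guess. Whatever error relaxation cannot remove is the algebraically smooth (near-null-space) component, which the adaptive setup then uses to build interpolation instead of assuming it in advance. The weighted-Jacobi smoother and sweep count below are illustrative choices.

```python
import numpy as np

def smooth_error_prototype(A, n_sweeps=50, omega=0.7, seed=0):
    """Relax on A x = 0 from a random guess; what survives relaxation is
    the algebraically smooth error that adaptive AMG builds its
    interpolation from (rather than assuming, e.g., constants)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    d = np.diag(A)
    for _ in range(n_sweeps):
        x = x - omega * (A @ x) / d        # weighted Jacobi sweep
    return x / np.linalg.norm(x)
```

For the 1D Laplacian, the surviving vector has a small Rayleigh quotient, i.e. it is dominated by the smoothest modes.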

  15. A density-based adaptive quantum mechanical/molecular mechanical method.

    PubMed

    Waller, Mark P; Kumbhar, Sadhana; Yang, Jack

    2014-10-20

    We present a density-based adaptive quantum mechanical/molecular mechanical (DBA-QM/MM) method, whereby molecules can switch layers from the QM to the MM region and vice versa. The adaptive partitioning of the molecular system ensures that the layer assignment can change during the optimization procedure, that is, on the fly. The switch from a QM molecule to an MM molecule is determined by the absence of noncovalent interactions with any atom of the QM core region. The presence/absence of noncovalent interactions is determined by analysis of the reduced density gradient. Therefore, the location of the QM/MM boundary is based on physical arguments, and this neatly removes some empiricism inherent in previous adaptive QM/MM partitioning schemes. The DBA-QM/MM method is validated by using a water-in-water setup and an explicitly solvated L-alanyl-L-alanine dipeptide. PMID:24954803
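
    The reduced density gradient used to detect noncovalent interactions has the closed form s = |grad rho| / (2 (3 pi^2)^(1/3) rho^(4/3)), which is straightforward to evaluate on a sampled density; low-s, low-density regions signal noncovalent interactions (NCI analysis). The numpy sketch below computes it on a grid; the test density is an illustrative 1D exponential, not a molecular density.

```python
import numpy as np

def reduced_density_gradient(rho, spacing):
    """Reduced density gradient s = |grad rho| / (2 (3 pi^2)^(1/3) rho^(4/3)),
    the quantity whose low-value regions flag noncovalent interactions and
    hence drive the QM/MM layer assignment."""
    grads = np.gradient(rho, spacing)
    if rho.ndim == 1:                      # np.gradient returns a bare array in 1-D
        grads = [grads]
    gnorm = np.sqrt(np.sum([g ** 2 for g in grads], axis=0))
    c = 2.0 * (3.0 * np.pi ** 2) ** (1.0 / 3.0)
    return gnorm / (c * rho ** (4.0 / 3.0))
```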

  17. Sparse regularization-based reconstruction for bioluminescence tomography using a multilevel adaptive finite element method.

    PubMed

    He, Xiaowei; Hou, Yanbin; Chen, Duofang; Jiang, Yuchuan; Shen, Man; Liu, Junting; Zhang, Qitan; Tian, Jie

    2011-01-01

    Bioluminescence tomography (BLT) is a promising tool for studying physiological and pathological processes at cellular and molecular levels. In most clinical or preclinical practices, fine discretization is needed for recovering sources with acceptable resolution when solving BLT with the finite element method (FEM). Nevertheless, uniformly fine meshes would produce large datasets, and overfine meshes might aggravate the ill-posedness of BLT. Additionally, accurate quantitative information on density and power has not been simultaneously obtained so far. In this paper, we present a novel multilevel sparse reconstruction method based on an adaptive FEM framework. In this method, the permissible source region gradually shrinks with adaptive local mesh refinement. By using sparse reconstruction with l(1) regularization on multilevel adaptive meshes, simultaneous recovery of density and power as well as accurate source location can be achieved. Experimental results for a heterogeneous phantom and a mouse atlas model demonstrate its effectiveness and potential in the application of quantitative BLT.

  18. Adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients

    PubMed Central

    Xia, Kelin; Zhan, Meng; Wan, Decheng; Wei, Guo-Wei

    2011-01-01

    Mesh deformation methods are a versatile strategy for solving partial differential equations (PDEs) with a vast variety of practical applications. However, these methods break down for elliptic PDEs with discontinuous coefficients, namely, elliptic interface problems. For this class of problems, additional interface jump conditions are required to maintain the well-posedness of the governing equation. Consequently, in order to achieve high accuracy and high order convergence, additional numerical algorithms are required to enforce the interface jump conditions in solving elliptic interface problems. The present work introduces an adaptively deformed mesh strategy based on interface techniques for resolving elliptic interface problems. We take advantage of the high accuracy, flexibility and robustness of the matched interface and boundary (MIB) method to construct an adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients. The proposed method generates deformed meshes in the physical domain and solves the transformed governing equations in the computational domain, which maintains regular Cartesian meshes. The mesh deformation is realized by a mesh transformation PDE, which controls the mesh redistribution by a source term. The source term consists of a monitor function, which builds in mesh contraction rules. Both interface geometry based deformed meshes and solution gradient based deformed meshes are constructed to reduce the L∞ and L2 errors in solving elliptic interface problems. The proposed adaptively deformed mesh based interface method is extensively validated by many numerical experiments. Numerical results indicate that the adaptively deformed mesh based interface method outperforms the original MIB method for dealing with elliptic interface problems. PMID:22586356

  19. Simultaneous seismic data interpolation and denoising with a new adaptive method based on dreamlet transform

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Wu, Ru-Shan; Chen, Xiaohong; Li, Jingye

    2015-05-01

    Interpolation and random noise removal are prerequisites for multichannel techniques because irregularity and random noise in the observed data can degrade their performance. The Projection Onto Convex Sets (POCS) method handles seismic data interpolation well when the data's signal-to-noise ratio (SNR) is high, but it has difficulty in noisy situations because it re-inserts the noisy observed seismic data in each iteration. The weighted POCS method can weaken the noise effects, but its performance depends on the choice of weight factors and is still unsatisfactory. Thus, a new weighted POCS method is derived from the Iterative Hard Thresholding (IHT) point of view, and, in order to eliminate random noise, a new adaptive method is proposed to achieve simultaneous seismic data interpolation and denoising based on the dreamlet transform. The POCS method, the weighted POCS method, and the proposed method are compared on simultaneous interpolation and denoising tasks using both synthetic and real data; the recovered SNRs confirm that the proposed adaptive method is the most effective of the three.
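
    The POCS iteration the paper builds on alternates a sparsity projection (transform-domain thresholding) with a data-consistency projection (re-inserting the observed samples). The sketch below substitutes a plain FFT for the dreamlet transform and uses a fixed relative threshold; both are simplifying assumptions.

```python
import numpy as np

def pocs_interpolate(observed, mask, n_iter=100, lam=0.2):
    """POCS-style trace interpolation sketch: threshold in the Fourier
    domain (sparsity projection), then re-insert the observed samples
    (data-consistency projection), and iterate."""
    x = observed.copy()
    for _ in range(n_iter):
        X = np.fft.fft(x)
        thresh = lam * np.max(np.abs(X))
        X[np.abs(X) < thresh] = 0.0            # keep only strong coefficients
        x = np.real(np.fft.ifft(X))
        x[mask] = observed[mask]               # re-insert known samples
    return x
```

For a spectrally sparse signal with random missing traces, the iteration recovers the gaps to small error.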

  20. Adjoint-based error estimation and mesh adaptation for the correction procedure via reconstruction method

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Wang, Z. J.

    2015-08-01

    Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction formulation (CPR) to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. A super-convergent functional estimate and an error estimate for the output are obtained with the CPR method. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.
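
    The adjoint-weighted-residual mechanism behind such error estimates can be shown on a plain linear system: for an output J(u) = g'u with A u = b, solving the adjoint system A' psi = g gives J(u) - J(u_h) = psi' (b - A u_h), exactly in the linear case. The CPR setting adds discretization subtleties, but the mechanism is this one; the sketch below is a generic illustration, not the paper's formulation.

```python
import numpy as np

def adjoint_error_estimate(A, b, g, u_h):
    """Adjoint-weighted-residual estimate of the functional error
    J(u) - J(u_h) for J(u) = g.T u with A u = b: solve A.T psi = g,
    then J(u) - J(u_h) ~ psi.T (b - A u_h) (exact for linear problems)."""
    psi = np.linalg.solve(A.T, g)          # adjoint (dual) solution
    residual = b - A @ u_h                 # primal residual of the approximation
    return psi @ residual
```

Because the problem is linear, the estimate reproduces the true functional error of any perturbed solution to machine precision.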

  1. Fuzzy physical programming for Space Manoeuvre Vehicles trajectory optimization based on hp-adaptive pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chai, Runqi; Savvaris, Al; Tsourdos, Antonios

    2016-06-01

    In this paper, a fuzzy physical programming (FPP) method is introduced for solving the multi-objective Space Manoeuvre Vehicle (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem is transformed into a nonlinear programming (NLP) problem. According to the mission requirements, solutions are calculated for each single-objective scenario. To obtain a compromise solution across targets, the fuzzy physical programming model is proposed: the preference function is established considering the fuzzy factors of the system, so that a proper compromise trajectory can be acquired. In addition, NSGA-II is used to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible for the multi-objective skip trajectory optimization of the SMV.

  2. Phylogeny-based comparative methods question the adaptive nature of sporophytic specializations in mosses.

    PubMed

    Huttunen, Sanna; Olsson, Sanna; Buchbender, Volker; Enroth, Johannes; Hedenäs, Lars; Quandt, Dietmar

    2012-01-01

    Adaptive evolution has often been proposed to explain correlations between habitats and certain phenotypes. In mosses, a high frequency of species with specialized sporophytic traits in exposed or epiphytic habitats was, already 100 years ago, suggested as due to adaptation. We tested this hypothesis by contrasting phylogenetic and morphological data from two moss families, Neckeraceae and Lembophyllaceae, both of which show parallel shifts to a specialized morphology and to exposed epiphytic or epilithic habitats. Phylogeny-based tests for correlated evolution revealed that evolution of four sporophytic traits is correlated with a habitat shift. For three of them, evolutionary rates of dual character-state changes suggest that habitat shifts appear prior to changes in morphology. This suggests that they could have evolved as adaptations to new habitats. Regarding the fourth correlated trait the specialized morphology had already evolved before the habitat shift. In addition, several other specialized "epiphytic" traits show no correlation with a habitat shift. Besides adaptive diversification, other processes thus also affect the match between phenotype and environment. Several potential factors such as complex genetic and developmental pathways yielding the same phenotypes, differences in strength of selection, or constraints in phenotypic evolution may lead to an inability of phylogeny-based comparative methods to detect potential adaptations.

  3. A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov Maxwell system

    NASA Astrophysics Data System (ADS)

    Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain; Sonnendrücker, Eric; Bertrand, Pierre

    2008-08-01

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. The multiscale expansion of the distribution function therefore yields a sparse representation of the data, saving memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. Interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase in the total number of points of the phase-space grid as they become finer over time. The adaptive method could be more useful in cases where these thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to

  4. A novel timestamp based adaptive clock method for circuit emulation service over packet network

    NASA Astrophysics Data System (ADS)

    Dai, Jin-you; Yu, Shao-hua

    2007-11-01

TDM (time division multiplexing) traffic must be transported over packet networks such as IP and Ethernet, and synchronization is a problem when carrying TDM over a packet network. Clock recovery methods for TDM over packet networks are introduced, and a new adaptive clock method is presented. The method is a timestamp-based adaptive method, but no timestamp needs to be transported over the packet network. Using the local oscillator and a counter, timestamp information (a local timestamp) related to the service clocks of the remote PE (provider edge) and the near PE can be obtained. Using a D-EWMA filter algorithm, the noise caused by the packet network can be filtered out and the useful timestamp extracted. With this timestamp and a voltage-controlled oscillator, the clock frequency of the near PE can be adjusted to match that of the remote PE. A simulation device was designed and a test network topology set up to verify the method. The experimental results show that the overall performance of the new method is better than that of the ordinary buffer-based and timestamp-based methods.
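The abstract does not specify the internals of the D-EWMA filter. As a hedged illustration only, the sketch below smooths a jittery local-timestamp series with a plain exponentially weighted moving average and with two cascaded stages, one plausible reading of "D-EWMA"; the cascading and the parameter value are assumptions, not the paper's algorithm:

```python
def ewma(samples, alpha=0.1):
    """Exponentially weighted moving average; smaller alpha = heavier smoothing."""
    avg, out = samples[0], []
    for s in samples:
        avg = alpha * s + (1 - alpha) * avg
        out.append(avg)
    return out

def d_ewma(samples, alpha=0.1):
    """Two cascaded EWMA stages -- one plausible reading of 'D-EWMA' (assumed)."""
    return ewma(ewma(samples, alpha), alpha)
```

A timestamp series smoothed this way could then steer the voltage-controlled oscillator toward the remote clock frequency.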

  5. Investigation of self-adaptive LED surgical lighting based on entropy contrast enhancing method

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Wang, Huihui; Zhang, Yaqin; Shen, Junfei; Wu, Rengmao; Zheng, Zhenrong; Li, Haifeng; Liu, Xu

    2014-05-01

An investigation was performed to explore the possibility of enhancing contrast by varying the spectral power distribution (SPD) of the surgical lighting. Illumination scenes with different SPDs were generated by combining a self-adaptive white-light optimization method with the LED ceiling system; images of a biological sample were taken by a CCD camera and then processed by an entropy-based contrast evaluation model proposed specifically for surgical settings. Compared with the neutral-white-LED-based and traditional algorithm-based image enhancing methods, the illumination-based enhancing method yields better contrast enhancement, improving the average contrast value by about 9% and 6%, respectively. This low-cost method is simple and practicable, and thus may provide an alternative to expensive vision-enhancing medical instruments.
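The entropy-based evaluation model itself is not given in the abstract. A common building block for such a metric is the Shannon entropy of the grey-level histogram, sketched below; treating higher entropy as richer contrast is an assumption of this sketch, not a claim about the paper's exact model:

```python
import math

def image_entropy(gray, levels=256):
    """Shannon entropy (bits) of a grayscale image given as a 2-D list of ints."""
    hist = [0] * levels
    count = 0
    for row in gray:
        for v in row:
            hist[v] += 1
            count += 1
    return -sum((c / count) * math.log2(c / count) for c in hist if c)
```

A lighting controller could score each candidate SPD by the entropy of the resulting camera image and keep the best-scoring illumination.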

  6. Adaptation strategies for high order discontinuous Galerkin methods based on Tau-estimation

    NASA Astrophysics Data System (ADS)

    Kompenhans, Moritz; Rubio, Gonzalo; Ferrer, Esteban; Valero, Eusebio

    2016-02-01

In this paper three p-adaptation strategies based on the minimization of the truncation error are presented for high order discontinuous Galerkin methods. The truncation error is approximated by means of a τ-estimation procedure and enables the identification of mesh regions that require adaptation. Three adaptation strategies are developed and termed a posteriori, quasi-a priori and quasi-a priori corrected. All strategies require fine solutions, which are obtained by enriching the polynomial order, but while the former needs time-converged solutions, the last two rely on non-converged solutions, leading to faster computations. In addition, the high order method permits spatial decoupling of the estimated errors and enables anisotropic p-adaptation. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. It is shown that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain a speedup of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.

  7. Adaptive circle-ellipse fitting method for estimating tree diameter based on single terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Bu, Guochao; Wang, Pei

    2016-04-01

Terrestrial laser scanning (TLS) has been used to extract accurate forest biophysical parameters for inventory purposes. The diameter at breast height (DBH) is a key parameter for individual trees because it can be used to model the height, volume, biomass, and carbon sequestration potential of a tree through empirical allometric scaling equations. In order to extract the DBH automatically and accurately from single-scan TLS data within a certain range, we propose an adaptive circle-ellipse fitting method based on a point cloud transect. The proposed method corrects the error incurred by simple circle fitting when a tree is slanted: a slanted tree is detected by the circle-ellipse fitting analysis, the corresponding slant angle is found from the ellipse fitting result, and with this information the DBH is recalculated by reslicing the point cloud at breast height. Artificial stem data simulated by a cylindrical model of leaning trees and scanning data acquired with a RIEGL VZ-400 were used to test the proposed adaptive fitting method. The results show that the proposed method can detect leaning trees and accurately estimate their DBH.
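The fitting details are not given in the abstract. A standard starting point for the circle stage is the Kasa algebraic least-squares fit of a circle to a stem transect, sketched here in pure Python; the ellipse stage and the slant-angle correction of the paper are omitted:

```python
import math

def fit_circle(points):
    """Kasa algebraic least-squares circle fit; returns (cx, cy, r).

    Fits x^2 + y^2 = u*x + v*y + w in the least-squares sense, then
    recovers the centre (u/2, v/2) and radius sqrt(w + cx^2 + cy^2).
    """
    n = float(len(points))
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    z = [p[0] * p[0] + p[1] * p[1] for p in points]
    szx = sum(zi * p[0] for zi, p in zip(z, points))
    szy = sum(zi * p[1] for zi, p in zip(z, points))
    sz = sum(z)
    a = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]   # normal equations
    b = [szx, szy, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(a)
    sol = []
    for col in range(3):                                # Cramer's rule, 3x3 solve
        m = [row[:] for row in a]
        for r in range(3):
            m[r][col] = b[r]
        sol.append(det3(m) / d)
    cx, cy = sol[0] / 2.0, sol[1] / 2.0
    return cx, cy, math.sqrt(sol[2] + cx * cx + cy * cy)
```

DBH then follows as twice the fitted radius; a subsequent ellipse fit on the same transect would flag slanted stems, as the abstract describes.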

  8. An adaptive filter-based method for robust, automatic detection and frequency estimation of whistles.

    PubMed

    Johansson, A Torbjorn; White, Paul R

    2011-08-01

This paper proposes an adaptive filter-based method for detection and frequency estimation of whistle calls, such as the calls of birds and marine mammals, which are typically analyzed in the time-frequency domain using a spectrogram. The approach taken here is based on adaptive notch filtering, an established technique for frequency tracking. For application to automatic whistle processing, methods for detection and for improved frequency tracking through frequency crossings as well as interfering transients are developed and coupled to the frequency tracker. Background noise estimation and compensation are accomplished using order statistics and pre-whitening. Using simulated signals as well as recorded calls of marine mammals and a human whistled speech utterance, it is shown that the proposed method can detect more simultaneous whistles than two competing spectrogram-based methods while not reporting any false alarms on the example datasets. In one example, it successfully extracts complete 1.4 and 1.8 s bottlenose dolphin whistles through frequency crossings. The method performs detection and estimates frequency tracks even at high sweep rates, and the algorithm is also shown to be effective on human whistled utterances. PMID:21877804

  9. Adaptive scene-based nonuniformity correction method for infrared-focal plane arrays

    NASA Astrophysics Data System (ADS)

    Torres, Sergio N.; Vera, Esteban M.; Reeves, Rodrigo A.; Sobarzo, Sergio K.

    2003-08-01

The non-uniform response of infrared focal plane array (IRFPA) detectors produces images corrupted by fixed-pattern noise. In this paper we present an enhanced adaptive scene-based non-uniformity correction (NUC) technique. The method simultaneously estimates the detector parameters and performs the non-uniformity compensation using a neural network approach. In addition, the proposed method makes no assumption about the kind or amount of non-uniformity present in the raw data. The strength and robustness of the proposed method lie in avoiding ghosting artifacts through the use of optimization techniques in the parameter estimation learning process, such as momentum, regularization, and an adaptive learning rate. The proposed method has been tested with video sequences of simulated and real infrared data taken with an InSb IRFPA, reaching high correction levels, reducing the fixed-pattern noise, decreasing ghosting, and obtaining an effective frame-by-frame adaptive estimation of each detector's gain and offset.
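The neural-network formulation with momentum, regularization and adaptive learning rate is not reproduced here. The sketch below shows the underlying idea in its simplest Scribner-style form: a per-pixel NLMS update of gain and offset that drives each corrected pixel toward the mean of its corrected 4-neighbours. The step size, the neighbour target, and the use of plain NLMS are assumptions of this sketch:

```python
def nuc_nlms(frames, mu=0.2):
    """Scene-based non-uniformity correction: corrected x = g*y + o per pixel.

    Each pixel's (g, o) is nudged so its corrected value approaches the mean
    of its corrected 4-neighbours (an NLMS update, stable for 0 < mu < 2).
    Returns the last corrected frame.
    """
    rows, cols = len(frames[0]), len(frames[0][0])
    g = [[1.0] * cols for _ in range(rows)]
    o = [[0.0] * cols for _ in range(rows)]
    x = None
    for y in frames:
        x = [[g[i][j] * y[i][j] + o[i][j] for j in range(cols)] for i in range(rows)]
        for i in range(rows):
            for j in range(cols):
                nbrs = [x[a][b]
                        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < rows and 0 <= b < cols]
                e = x[i][j] - sum(nbrs) / len(nbrs)   # deviation from neighbours
                norm = 1.0 + y[i][j] * y[i][j]        # NLMS normalization
                g[i][j] -= mu * e * y[i][j] / norm
                o[i][j] -= mu * e / norm
    return x
```

On a static uniform scene this behaves like a diffusion of the fixed pattern: the per-pixel corrections converge toward a common value, which is why scene motion is needed in practice to avoid ghosting, the very issue the paper's optimization techniques address.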

  10. On the Use of Adaptive Wavelet-based Methods for Ocean Modeling and Data Assimilation Problems

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Yousuff Hussaini, M.; Souopgui, Innocent

    2014-05-01

Latest advancements in parallel wavelet-based numerical methodologies for the solution of partial differential equations, combined with the unique ability of wavelet analysis to unambiguously identify and isolate localized, dynamically dominant flow structures, make it feasible to start developing integrated approaches for ocean modeling and data assimilation problems that take advantage of temporally and spatially varying meshes. In this talk the Parallel Adaptive Wavelet Collocation Method with spatially and temporally varying thresholding is presented and the feasibility and potential advantages of its use for ocean modeling are discussed. The second half of the talk focuses on the recently developed Simultaneous Space-time Adaptive approach, which addresses one of the main challenges of variational data assimilation, namely the requirement to have a forward solution available when solving the adjoint problem. The issue is addressed by concurrently solving the forward and adjoint problems in the entire space-time domain on a near-optimal adaptive computational mesh that automatically adapts to the spatio-temporal structures of the solution. The compressed space-time form of the solution eliminates the need to save or recompute the forward solution for every time slice, as is typically done in traditional time-marching variational data assimilation approaches. The simultaneous spatio-temporal discretization of the forward and adjoint problems makes it possible to solve both concurrently on the same space-time adaptive computational mesh, reducing the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The simultaneous space-time adaptive approach to variational data assimilation is demonstrated for the advection-diffusion problem in 1D-t and 2D-t dimensions.

  11. Improved methods in neural network-based adaptive output feedback control, with applications to flight control

    NASA Astrophysics Data System (ADS)

    Kim, Nakwan

Utilizing the universal approximation property of neural networks, we develop several novel approaches to neural-network-based adaptive output feedback control of nonlinear systems, and illustrate these approaches for several flight control applications. In particular, we address the problem of non-affine systems and eliminate the fixed-point assumption present in earlier work. All of the stability proofs are carried out in a form that eliminates an algebraic loop in the neural network implementation. An approximate input/output feedback linearizing controller is augmented with a neural network using input/output sequences of the uncertain system. These approaches permit adaptation to both parametric uncertainty and unmodeled dynamics. All physical systems also have control position and rate limits, which may either deteriorate performance or cause instability at a sufficiently high control bandwidth. Here we apply a method for protecting an adaptive process from the effects of input saturation and time delays, known as "pseudo-control hedging". This method was originally developed for the state feedback case, and we provide a stability analysis that extends its domain of applicability to the case of output feedback. The approach is illustrated by the design of a pitch-attitude flight control system for a linearized model of an R-50 experimental helicopter, and by the design of a pitch-rate control system for a 58-state model of a flexible aircraft consisting of rigid body dynamics coupled with actuator and flexible modes. A new approach to augmentation of an existing linear controller is introduced; it is especially useful when there is limited information concerning the plant model and the existing controller. The approach is applied to the design of an adaptive autopilot for a guided munition. The design of a neural network adaptive controller that ensures asymptotically stable tracking performance is also addressed.

  12. Adaptive non-uniformity correction method based on temperature for infrared detector array

    NASA Astrophysics Data System (ADS)

    Zhang, Zhijie; Yue, Song; Hong, Pu; Jia, Guowei; Lei, Bo

    2013-09-01

Non-uniformity in the responsivity of the element array is a severe problem typical of common infrared detectors; it produces a 'curtain'-like fixed-pattern noise (FPN) in the image. Some random noise can be suppressed by equalization-type methods, but fixed-pattern noise can only be removed by non-uniformity correction. The non-uniformities of a detector array result from the combined effects of the infrared detector array, the readout circuit, semiconductor device performance, the amplifier circuit, and the optical system. Conventional linear correction techniques require costly recalibration due to detector drift or changes in temperature, so an adaptive non-uniformity correction method is needed. Many factors, including detector characteristics and varying environmental conditions, are considered in analyzing the causes of detector drift, and several experiments are designed to verify this analysis. Based on these experiments, an adaptive non-uniformity correction method is put forward in this paper. The strength of this method lies in its simplicity and low computational complexity. Extensive experimental results demonstrate that the proposed scheme overcomes the disadvantages of the traditional non-uniformity correction method.

  13. Adaptive f-k deghosting method based on non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Lu, Wenkai

    2016-04-01

For conventional horizontally towed streamer data, the f-k deghosting method is widely used to remove receiver ghosts. In the traditional f-k deghosting method, the depth of the streamer and the sea surface reflection coefficient are the two key ghost parameters. In general, for one seismic line, these two parameters are fixed for all shot gathers and given by the user. In practice, they often vary during acquisition because of rough sea conditions. This paper proposes an automatic method to adaptively obtain these two ghost parameters for every shot gather. Since the proposed method is based on the non-Gaussianity of the deghosting result, it is important to choose a proper non-Gaussian criterion to ensure high accuracy of the parameter estimation. We evaluate six non-Gaussian criteria in a synthetic experiment; the conclusion of this experiment is expected to provide a reference for choosing the most appropriate criterion. We apply the proposed method to a 2D real field example. Experimental results show that the optimal parameters vary among shot gathers and validate the effectiveness of the parameter estimation process. Moreover, even though the method ignores parameter variation within one shot, the adaptive deghosting results show improvements over deghosting with constant parameters for the whole line.
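The paper's six criteria and its f-k implementation are not reproduced here. The sketch below illustrates the core idea on a single trace: grid-search the reflection coefficient and ghost delay for the values that maximize the excess kurtosis (one common non-Gaussianity measure) of the deghosted result. The time-domain ghost model g[n] = p[n] - r·p[n - τ] and the search grids are simplifying assumptions:

```python
def deghost(trace, r, tau):
    """Invert the ghost model g[n] = p[n] - r*p[n-tau] by forward recursion."""
    d = list(trace)
    for n in range(tau, len(d)):
        d[n] += r * d[n - tau]
    return d

def excess_kurtosis(x):
    """Sample excess kurtosis: high for sparse, spiky (non-Gaussian) signals."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    return m4 / (m2 * m2) - 3.0

def estimate_ghost_params(trace, r_grid, tau_grid):
    """Pick (r, tau) whose deghosted trace is most non-Gaussian."""
    return max(((r, t) for r in r_grid for t in tau_grid),
               key=lambda rt: excess_kurtosis(deghost(trace, rt[0], rt[1])))
```

A residual ghost smears energy into extra samples and lowers the kurtosis, so the criterion peaks at the true parameters when the primary is spiky.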

  14. Adaptive method for real-time gait phase detection based on ground contact forces.

    PubMed

    Yu, Lie; Zheng, Jianbin; Wang, Yang; Song, Zhengge; Zhan, Enqi

    2015-01-01

A novel method is presented to detect real-time gait phases based on ground contact forces (GCFs) measured by force sensitive resistors (FSRs). The traditional threshold method (TM) sets a threshold to divide the GCFs into on-ground and off-ground statuses. However, TM is neither adaptive nor real-time: the threshold setting is based on body weight or on the maximum and minimum GCFs in the gait cycles, so different walking conditions require different thresholds, and the maximum and minimum GCFs are only obtainable after data processing. Therefore, this paper proposes a proportion method (PM) that calculates the sums and proportions of the GCFs obtained from the FSRs. A gait analysis is then implemented by the proposed gait phase detection algorithm (GPDA). Finally, the reliability of PM is determined by comparing the detection results of PM and TM. Experimental results demonstrate that the proposed PM is highly reliable under all walking conditions, can be utilized to analyze gait phases in real time, and exhibits strong adaptability to different walking conditions.
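The exact PM formulas are in the paper, not the abstract. The sketch below shows the general idea only: classify each FSR as on-ground when its share of the frame's summed GCF exceeds a fixed proportion, which makes the rule scale-invariant with respect to body weight. The sensor layout and threshold value are assumptions:

```python
def gait_phases(gcf_frames, p_on=0.05):
    """On/off-ground status per sensor from GCF proportions.

    gcf_frames: list of per-frame FSR readings, e.g. [heel, meta1, meta5, toe].
    A sensor counts as on-ground when its share of the frame's total GCF
    exceeds p_on -- a threshold on a proportion, not on an absolute force.
    """
    phases = []
    for frame in gcf_frames:
        total = sum(frame)
        phases.append([total > 0 and f / total > p_on for f in frame])
    return phases
```

Because only proportions are compared, the same threshold works across subjects and walking conditions without recalibration, which is the adaptability that the TM lacks.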

  15. Parallel level-set methods on adaptive tree-based grids

    NASA Astrophysics Data System (ADS)

    Mirzadeh, Mohammad; Guittet, Arthur; Burstedde, Carsten; Gibou, Frederic

    2016-10-01

    We present scalable algorithms for the level-set method on dynamic, adaptive Quadtree and Octree Cartesian grids. The algorithms are fully parallelized and implemented using the MPI standard and the open-source p4est library. We solve the level set equation with a semi-Lagrangian method which, similar to its serial implementation, is free of any time-step restrictions. This is achieved by introducing a scalable global interpolation scheme on adaptive tree-based grids. Moreover, we present a simple parallel reinitialization scheme using the pseudo-time transient formulation. Both parallel algorithms scale on the Stampede supercomputer, where we are currently using up to 4096 CPU cores, the limit of our current account. Finally, a relevant application of the algorithms is presented in modeling a crystallization phenomenon by solving a Stefan problem, illustrating a level of detail that would be impossible to achieve without a parallel adaptive strategy. We believe that the algorithms presented in this article will be of interest and useful to researchers working with the level-set framework and modeling multi-scale physics in general.
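As a hedged one-dimensional illustration of the semi-Lagrangian idea (the actual solver works on adaptive Quadtree/Octree grids with a parallel global interpolation scheme), each grid point traces its characteristic backwards and interpolates the old field at the departure point, with no CFL time-step restriction:

```python
import math

def semi_lagrangian_step(phi, v, dt, dx):
    """One step of phi_t + v*phi_x = 0 on a periodic 1-D grid.

    Trace back to the departure point x_i - v*dt and linearly interpolate
    phi there; the scheme stays stable however large v*dt/dx is.
    """
    n = len(phi)
    out = []
    for i in range(n):
        xd = i - v * dt / dx              # departure point, in grid units
        j = math.floor(xd)
        w = xd - j                        # interpolation weight in [0, 1)
        out.append((1 - w) * phi[j % n] + w * phi[(j + 1) % n])
    return out
```

With a CFL number of 2 the step below still transports the profile exactly, which is the time-step freedom the abstract highlights.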

  16. Comparative adaptation accuracy of acrylic denture bases evaluated by two different methods.

    PubMed

    Lee, Chung-Jae; Bok, Sung-Bem; Bae, Ji-Young; Lee, Hae-Hyoung

    2010-08-01

This study examined the adaptation accuracy of acrylic denture bases processed using fluid-resin (PERform), injection-molding (SR-Ivocap, Success, Mak Press), and two compression-molding techniques. The adaptation accuracy was measured primarily from the posterior border gaps at the mid-palatal area using a microscope, and subsequently by weighing the impression material placed between the denture base and the master cast, using hand-mixed and automixed silicone. The correlation between the data measured by these two test methods was examined. PERform and Mak Press produced significantly smaller maximum palatal gap dimensions than the other groups (p<0.05). Mak Press also showed a significantly smaller weight of automixed silicone material than the other groups (p<0.05), while SR-Ivocap and Success showed adaptation accuracy similar to that of the compression-molding dentures. The correlation between the magnitude of the posterior border gap and the weight of the silicone impression material was affected by both the material and the mixing variables.

  17. Wavefront detection method of a single-sensor based adaptive optics system.

    PubMed

    Wang, Chongchong; Hu, Lifa; Xu, Huanyu; Wang, Yukun; Li, Dayu; Wang, Shaoxin; Mu, Quanquan; Yang, Chengliang; Cao, Zhaoliang; Lu, Xinghai; Xuan, Li

    2015-08-10

In adaptive optics systems (AOS) for optical telescopes, the reported wavefront sensing strategy consists of two parts: a dedicated sensor for tip-tilt (TT) detection and another wavefront sensor for detecting the remaining distortions. A part of the incident light therefore has to be diverted to TT detection, which decreases the light energy available to the wavefront sensor and ultimately reduces the precision of wavefront correction. In this paper, a wavefront measurement method based on a single Shack-Hartmann wavefront sensor is presented that measures both large-amplitude TT and the other distortions. Experiments were performed to test the presented method and to validate the wavefront detection and correction ability of the single-sensor-based AOS. With adaptive correction, the root-mean-square of the residual TT was less than 0.2 λ, and a clear image was obtained in the lab. Mounted on a 1.23-meter optical telescope, the AOS clearly resolved binary stars with an angular separation of 0.6″. This wavefront measurement method removes the separate TT sensor, which not only simplifies the AOS but also saves light energy for subsequent wavefront sensing and imaging, ultimately improving the detection and imaging capability of the AOS. PMID:26367988
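The abstract does not detail how the single sensor separates large-amplitude TT from higher-order terms. The standard decomposition, sketched below as an illustration rather than the paper's algorithm, takes the global tip/tilt as the average spot displacement over all subapertures and passes the residual slopes to higher-order correction:

```python
def split_tip_tilt(slopes_x, slopes_y):
    """Separate global tip-tilt from higher-order aberrations.

    Tip/tilt is the mean Shack-Hartmann spot displacement over all
    subapertures; the residual per-subaperture slopes drive the
    higher-order (deformable-mirror) correction.
    """
    n = len(slopes_x)
    tip = sum(slopes_x) / n
    tilt = sum(slopes_y) / n
    resid_x = [s - tip for s in slopes_x]
    resid_y = [s - tilt for s in slopes_y]
    return (tip, tilt), (resid_x, resid_y)
```

With such a split, one Shack-Hartmann sensor can feed both the TT stage and the higher-order stage, so no light need be diverted to a separate TT sensor.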

  18. 3D Continuum Radiative Transfer. An adaptive grid construction algorithm based on the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Niccolini, G.; Alcolea, J.

Solving the radiative transfer problem is a task common to many fields in astrophysics. With the increasing angular resolution of space- or ground-based telescopes (VLTI, HST) and with the instruments of the next decade (NGST, ALMA, ...), astrophysical objects reveal, and will certainly continue to reveal, complex spatial structures. Consequently, it is necessary to develop numerical tools able to solve the radiative transfer equation in three dimensions in order to model and interpret these observations. I present a 3D radiative transfer program that uses a new method for the construction of an adaptive spatial grid, based on the Monte Carlo method. With the help of this tool, one can solve the continuum radiative transfer problem (e.g. in a dusty medium), compute the temperature structure of the considered medium, and obtain the flux of the object (SED and images).

  19. Patched based methods for adaptive mesh refinement solutions of partial differential equations

    SciTech Connect

    Saltzman, J.

    1997-09-02

This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which was "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs, with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes, which are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundaries, and overset meshes. Next, several strategies for parallel implementation are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods, and elliptically constrained flow solution methods with diffusion.

  20. Adaptive homochromous disturbance elimination and feature selection based mean-shift vehicle tracking method

    NASA Astrophysics Data System (ADS)

    Ding, Jie; Lei, Bo; Hong, Pu; Wang, Chensheng

    2011-11-01

This paper introduces a novel method to adaptively diminish the effects of disturbance in traffic video shot from an airborne camera. Based on the motion vector of the tracked vehicle, a search area in the next frame is predicted; this is the area of interest (AOI) for the mean-shift method. Background color estimation is performed from the previous tracking and is used to judge whether there is possible disturbance in the predicted search area of the next frame. Without disturbance, the difference image of vehicle and background is used as the input feature to the mean-shift algorithm; with disturbance, the histogram of colors in the predicted area is computed to find the most and second-most disturbing colors. Experiments show that this method can diminish or eliminate the effects of homochromous disturbance and lead to more precise and more robust tracking.

  1. A Wavelet-Based ECG Delineation Method: Adaptation to an Experimental Electrograms with Manifested Global Ischemia.

    PubMed

    Hejč, Jakub; Vítek, Martin; Ronzhina, Marina; Nováková, Marie; Kolářová, Jana

    2015-09-01

We present a novel wavelet-based ECG delineation method with robust classification of the P wave and T wave. The work is aimed at adapting the method to long-term experimental electrograms (EGs) measured on isolated rabbit hearts and at evaluating the effect of global ischemia in experimental EGs on delineation performance. The algorithm was tested on a set of 263 rabbit EGs with established reference points and on human signals from the Common Standards for Quantitative Electrocardiography database (CSEDB). On CSEDB, the standard deviation (SD) of the measured errors satisfies the given criteria at each point and the results are comparable to other published works. In rabbit signals, our QRS detector reached a sensitivity of 99.87% and a positive predictivity of 99.89% despite the overlap of the spectral components of the QRS complex, P wave, and power-line noise. The algorithm performs well in suppressing J-point elevation and reached a low overall error in both QRS onset (SD = 2.8 ms) and QRS offset (SD = 4.3 ms) delineation. The T wave offset is detected with acceptable error (SD = 12.9 ms) and a sensitivity of nearly 99%. The variance of the errors during global ischemia remains relatively stable; however, more failures in the detection of the T wave and P wave occur. Due to differences in spectral and timing characteristics, the parameters of the rabbit-based algorithm have to be highly adaptable and set more precisely than for human ECG signals to reach acceptable performance. PMID:26577367

  2. Tensor Product Model Transformation Based Adaptive Integral-Sliding Mode Controller: Equivalent Control Method

    PubMed Central

    Zhao, Guoliang; Li, Hongxing

    2013-01-01

This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while the upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of the proposed controllers consists in having a dynamical adaptive control gain that establishes a sliding mode right at the beginning of the process, and the gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, the efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model. PMID:24453897

  3. Tensor product model transformation based adaptive integral-sliding mode controller: equivalent control method.

    PubMed

    Zhao, Guoliang; Sun, Kaibiao; Li, Hongxing

    2013-01-01

This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while the upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of the proposed controllers consists in having a dynamical adaptive control gain that establishes a sliding mode right at the beginning of the process, and the gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, the efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model.

  4. An adaptive distance-based group contribution method for thermodynamic property prediction.

    PubMed

    He, Tanjin; Li, Shuang; Chi, Yawei; Zhang, Hong-Bo; Wang, Zhi; Yang, Bin; He, Xin; You, Xiaoqing

    2016-09-14

    In the search for an accurate yet inexpensive method to predict thermodynamic properties of large hydrocarbon molecules, we have developed an automatic and adaptive distance-based group contribution (DBGC) method. The method characterizes the group interaction within a molecule with an exponential decay function of the group-to-group distance, defined as the number of bonds between the groups. A database containing the molecular bonding information and the standard enthalpy of formation (Hf,298K) for alkanes, alkenes, and their radicals at the M06-2X/def2-TZVP//B3LYP/6-31G(d) level of theory was constructed. Multiple linear regression (MLR) and artificial neural network (ANN) fitting were used to obtain the contributions from individual groups and group interactions for further predictions. Compared with the conventional group additivity (GA) method, the DBGC method predicts Hf,298K for alkanes more accurately using the same training sets. Particularly for some highly branched large hydrocarbons, the discrepancy with the literature data is smaller for the DBGC method than the conventional GA method. When extended to other molecular classes, including alkenes and radicals, the overall accuracy level of this new method is still satisfactory. PMID:27522953
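As a hedged sketch of the distance-based idea only: bond distances can come from a breadth-first search over the molecular graph, and pairwise group interactions then decay exponentially with that distance. The actual DBGC contributions are fitted by MLR/ANN (not shown), and the decay constant, here taken as 1, is a parameter of the real method:

```python
import math
from collections import deque

def bond_distances(adjacency, start):
    """BFS over the molecular graph: number of bonds from `start` to each atom."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adjacency[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def interaction_descriptor(adjacency, groups):
    """Sum exp(-d) over all pairs of group sites, d = group-to-group bond distance."""
    total = 0.0
    for i, gi in enumerate(groups):
        dist = bond_distances(adjacency, gi)
        for gj in groups[i + 1:]:
            total += math.exp(-dist[gj])
    return total
```

Descriptors of this form, one per group-pair type, would be the regression inputs from which MLR or an ANN learns the fitted contributions.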

  5. Adaptive model-based control systems and methods for controlling a gas turbine

    NASA Technical Reports Server (NTRS)

    Brunell, Brent Jerome (Inventor); Mathews, Jr., Harry Kirk (Inventor); Kumar, Aditya (Inventor)

    2004-01-01

Adaptive model-based control systems and methods are described so that the performance and/or operability of a gas turbine in an aircraft engine, power plant, marine propulsion, or industrial application can be optimized under normal, deteriorated, faulted, failed, and/or damaged operation. First, a model of each relevant system or component is created, and the model is adapted to the engine. Then, if and when deterioration, a fault, a failure, or some kind of damage to an engine component or system is detected, that information is input to the model-based control as changes to the model, constraints, objective function, or other control parameters. With all the information about the engine condition and state, and with the control goals expressed as an objective function and constraints, the control then solves an optimization so that the optimal control action can be determined and taken. This model and control may be updated in real time to account for engine-to-engine variation, deterioration, damage, faults, and/or failures using optimal corrective control action commands.

  6. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    PubMed Central

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to extract weak fault features under background noise; it applies statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined, and a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis; in this way, the SPs most sensitive for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006
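The ASTF's frequency-domain hypothesis test is specified in the paper, not the abstract. The toy stand-in below merely illustrates the filtering idea: keep only spectral bins whose magnitude clearly exceeds a reference noise spectrum, discarding "high-similarity" components. The fixed ratio here crudely plays the role of the significance level α that the paper tunes with PSO:

```python
def astf_like_filter(signal_mag, noise_mag, ratio=2.0):
    """Zero out spectral bins too similar to the reference noise spectrum.

    signal_mag / noise_mag: per-bin magnitude spectra of the measured
    signal and of a noise-only reference recording.
    """
    return [s if s > ratio * n else 0.0 for s, n in zip(signal_mag, noise_mag)]
```

The surviving bins would then feed the SP computation and the downstream DBN classifier.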

  9. Dynamic Adaptive Runtime Systems for Advanced Multipole Method-based Science Achievement

    NASA Astrophysics Data System (ADS)

    Debuhr, Jackson; Anderson, Matthew; Sterling, Thomas; Zhang, Bo

    2015-04-01

    Multipole methods are a key computational kernel for a large class of scientific applications spanning multiple disciplines, yet many of these applications are strong-scaling constrained when using conventional programming practices. Hardware parallelism continues to grow, emphasizing medium- and fine-grained thread parallelism rather than the coarse-grained process parallelism favored by conventional practice. Emerging dynamic task management execution models can go beyond these conventional practices to significantly improve both efficiency and scalability for algorithms such as multipole methods, which exhibit irregular and time-varying execution properties. We present a new scientific library, DASHMM, built on the ParalleX HPX-5 runtime system, which explores the use of dynamic adaptive runtime techniques to improve scalability and efficiency for multipole-method-based scientific computing. DASHMM allows application scientists to rapidly create custom, scalable, and efficient multipole methods, especially targeting the Fast Multipole Method and the Barnes-Hut N-body algorithm. After a discussion of the system and its goals, some application examples are presented.

  10. Adaptive contour-based statistical background subtraction method for moving target detection in infrared video sequences

    NASA Astrophysics Data System (ADS)

    Akula, Aparna; Khanna, Nidhi; Ghosh, Ripul; Kumar, Satish; Das, Amitava; Sardana, H. K.

    2014-03-01

    A robust contour-based statistical background subtraction method for the detection of non-uniform thermal targets in infrared imagery is presented. The first step of the method is the generation of a background frame using statistical information from an initial set of frames containing no targets. The generated background frame is made adaptive by continuously updating it with the motion information of the scene. The background subtraction step, followed by a clutter rejection stage, ensures the detection of foreground objects. The next step detects contours and distinguishes target boundaries from the noisy background. This is achieved with a Canny edge detector that extracts the contours, followed by a k-means clustering approach to differentiate object contours from background contours. The post-processing step uses morphological edge linking to close any broken contours, and finally flood fill is performed to generate the silhouettes of moving targets. The method is validated on infrared video data containing a variety of moving targets. Experimental results demonstrate a high detection rate with minimal false alarms, establishing the robustness of the proposed method.
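
The first two stages (statistical background generation and per-pixel subtraction) can be sketched as follows; the contour extraction, clustering, and morphological steps are omitted, and the deviation threshold `k` is an assumed parameter rather than one from the paper.

```python
import numpy as np

def build_background(frames):
    """Per-pixel mean and std over an initial set of target-free frames."""
    stack = np.stack(frames).astype(float)
    return stack.mean(axis=0), stack.std(axis=0)

def detect_foreground(frame, mean, std, k=3.0):
    """Flag pixels deviating from the background by more than k std devs."""
    return np.abs(frame.astype(float) - mean) > k * std + 1e-6

rng = np.random.default_rng(1)
frames = [100 + rng.normal(0, 2, (32, 32)) for _ in range(20)]  # no targets
bg_mean, bg_std = build_background(frames)

test_frame = 100 + rng.normal(0, 2, (32, 32))
test_frame[10:14, 10:14] += 40            # synthetic hot (thermal) target
mask = detect_foreground(test_frame, bg_mean, bg_std)
```

The hot patch is flagged cleanly while the statistically modeled background suppresses most pixel noise.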

  11. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming when many image processing parameters are involved. An expert can tune parameters sequentially to get the desired results, but this may not be productive for difficult image analysis tasks, e.g., when high noise and shading levels are present or images vary in their characteristics due to different acquisition conditions; in such cases parameters must be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is evaluated on a benchmark data set containing image distortions of increasing severity, which enables us to compare standard image segmentation algorithms in feedback vs. feedforward implementations by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates the robustness of different image processing pipelines using a graded data set, which is useful for both end-users and experts. PMID:27764213

  12. Simple method for model reference adaptive control

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1989-01-01

    A simple method is presented for combined signal synthesis and parameter adaptation within the framework of model reference adaptive control theory. The results are obtained through a simple derivation based on an improved Liapunov function.
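
As a generic illustration of model reference adaptive control (not Seraji's specific scheme), the sketch below adapts feedback and feedforward gains of a first-order plant with a Lyapunov-motivated gradient law; all plant and gain values are assumed for illustration.

```python
# Generic MRAC sketch: plant dx/dt = a*x + b*u is driven to track the
# reference model dxm/dt = -am*xm + am*r via u = th_x*x + th_r*r, with the
# gains adapted from the tracking error e = x - xm (valid since b > 0).
def simulate(a=1.0, b=2.0, am=4.0, gamma=10.0, r=1.0, dt=1e-3, T=50.0):
    x = xm = 0.0
    th_x = th_r = 0.0                   # adaptive feedback / feedforward gains
    for _ in range(int(T / dt)):
        u = th_x * x + th_r * r         # control law with adapted gains
        e = x - xm                      # tracking error drives adaptation
        x += dt * (a * x + b * u)       # open-loop-unstable plant (a > 0)
        xm += dt * (-am * xm + am * r)  # stable reference model
        th_x += dt * (-gamma * e * x)   # gradient-type adaptation laws
        th_r += dt * (-gamma * e * r)
    return x, xm

x_final, xm_final = simulate()
# Despite the unstable open-loop plant and unknown (a, b), the adapted
# closed loop tracks the reference model.
```
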

  13. Vibration-based structural health monitoring using adaptive statistical method under varying environmental condition

    NASA Astrophysics Data System (ADS)

    Jin, Seung-Seop; Jung, Hyung-Jo

    2014-03-01

    It is well known that the dynamic properties of a structure, such as its natural frequencies, depend not only on damage but also on environmental conditions (e.g., temperature). The variation in the dynamic characteristics of a structure due to environmental conditions may mask damage; without taking the change of environmental conditions into account, false-positive or false-negative damage diagnoses may occur, making structural health monitoring unreliable. To address this problem, many researchers have constructed regression models of structural responses that consider environmental factors. The key to the success of this approach is formulating the relationship between the input and output variables of the regression model so as to account for the environmental variations. However, it is quite challenging to determine in advance the proper environmental variables and measurement locations that fully represent the relationship between the structural responses and the environmental variations. One alternative (i.e., novelty detection) is to remove the variations caused by environmental factors from the structural responses by using multivariate statistical analysis (e.g., principal component analysis (PCA), factor analysis, etc.). The success of this method depends heavily on the accuracy of the description of the normal condition. Generally, there is no prior information on the normal condition during data acquisition, so the normal condition is determined subjectively, with human intervention. The proposed method is a novel adaptive multivariate statistical analysis for structural damage detection under environmental change. One advantage of this method is the ability of generative learning to capture the intrinsic characteristics of the normal condition. The proposed method is tested on numerically simulated data for a range of measurement noise levels under environmental variation. 
A comparative
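
The novelty-detection alternative described above can be sketched with PCA: variation caused by a single environmental factor (temperature) is captured by the principal subspace of baseline data, and damage shows up as a large residual off that subspace. The frequencies, temperature sensitivities, and damage level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical baseline: three "natural frequencies" all driven by one
# environmental factor (temperature) plus small measurement noise, so the
# normal-condition variation lies near a 1-D subspace.
temp = rng.uniform(-10, 30, 300)
baseline = np.column_stack([10 - 0.01 * temp, 25 - 0.02 * temp, 41 - 0.03 * temp])
baseline += rng.normal(0, 0.005, baseline.shape)

mu = baseline.mean(axis=0)
_, _, Vt = np.linalg.svd(baseline - mu, full_matrices=False)
P = Vt[:1]                      # principal subspace = environmental variation

def novelty(sample):
    """Residual norm after removing environmentally driven variation."""
    d = sample - mu
    return np.linalg.norm(d - (d @ P.T) @ P)

# A hot day: outside the training range, but along the temperature direction.
healthy = np.array([10 - 0.01 * 35, 25 - 0.02 * 35, 41 - 0.03 * 35])
# Damage: the first frequency drops 5%, off the environmental subspace.
damaged = np.array([0.95 * (10 - 0.01 * 20), 25 - 0.02 * 20, 41 - 0.03 * 20])
```

Even for an unseen temperature, the healthy sample has a small residual, while the damaged one is clearly novel.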

  14. Automatic off-body overset adaptive Cartesian mesh method based on an octree approach

    SciTech Connect

    Peron, Stephanie; Benoit, Christophe

    2013-01-01

    This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, with each octree leaf node defining a structured Cartesian block. This makes it possible to account for the large discrepancies in resolution between the different bodies involved in the simulation, with minimal memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first generates Adaptive Mesh Refinement (AMR) type grid systems, and the second generates abutting or minimally overlapping Cartesian grid sets. We also introduce an algorithm to control the number of points at each adaptation, which automatically determines relevant values of the refinement indicator driving grid refinement and coarsening. An application to a wing-tip vortex computation assesses the capability of the method to accurately capture the flow features.

  15. Development and evaluation of a method of calibrating medical displays based on fixed adaptation

    SciTech Connect

    Sund, Patrik; Månsson, Lars Gunnar; Båth, Magnus

    2015-04-15

    Purpose: The purpose of this work was to develop and evaluate a new method for the calibration of medical displays that includes the effect of fixed adaptation, using equipment and luminance levels typical of a modern radiology department. Methods: Low-contrast sinusoidal test patterns were derived at nine luminance levels from 2 to 600 cd/m² and used in a two-alternative forced choice observer study, where the adaptation level was fixed at the logarithmic average of 35 cd/m². The contrast sensitivity at each luminance level was derived by establishing a linear relationship between the ten pattern contrast levels used at every luminance level and a detectability index (d′) calculated from the fraction of correct responses. A Gaussian function was fitted to the data and normalized to the adaptation level. The corresponding equation was used in a display calibration method that included the grayscale standard display function (GSDF) but compensated for fixed adaptation. In the evaluation study, the contrast of circular objects with a fixed pixel contrast was displayed using both calibration methods and rated on a five-grade scale. Results were calculated using a visual grading characteristics method. Error estimations in both observer studies were derived using a bootstrap method. Results: The contrast sensitivities for the darkest and brightest patterns, relative to the contrast sensitivity at the adaptation luminance, were 37% and 56%, respectively. The obtained Gaussian fit corresponded well with similar studies. The evaluation study showed a more equal distribution of contrast throughout the luminance range with the calibration method compensated for fixed adaptation than with the GSDF. The two lowest scores for the GSDF were obtained for the darkest and brightest patterns; these scores were significantly lower than the lowest score obtained for the compensated GSDF. For the GSDF, the scores for all luminance levels were statistically

  16. Parallel processing of Eulerian-Lagrangian, cell-based adaptive method for moving boundary problems

    NASA Astrophysics Data System (ADS)

    Kuan, Chih-Kuang

    In this study, issues and techniques related to the parallel processing of the Eulerian-Lagrangian method for multi-scale moving boundary computation are investigated. The scope of the study consists of the Eulerian approach for field equations, explicit interface tracking, Lagrangian interface modification and reconstruction algorithms, and cell-based unstructured adaptive mesh refinement (AMR) in a distributed-memory computation framework. We decomposed the Eulerian domain spatially along with AMR to balance the computational load of solving the field equations, which is the primary cost of the entire solver. The Lagrangian domain is partitioned based on marker vicinities with respect to the Eulerian partitions to minimize inter-processor communication. Overall, the performance of an Eulerian task peaks at 10,000-20,000 cells per processor, which is the upper bound on the performance of the Eulerian-Lagrangian method. Moreover, the load imbalance of the Lagrangian task is not as influential on overall performance as the communication overhead of the Eulerian-Lagrangian tasks. To assess the parallel processing capabilities, a high-Weber-number drop collision is simulated. The high convective-to-viscous length scale ratios result in disparate length scale distributions; together with the moving and topologically irregular interfaces, the computational tasks require adaptive, temporally and spatially resolved treatment. The techniques presented enable us to perform original studies to meet such computational requirements. Coalescence, stretching, and break-up of satellite droplets due to the interfacial instability are observed in the current study, and the history of interface evolution is in good agreement with experimental data. The competing mechanisms of primary and secondary droplet break-up, along with the gas-liquid interfacial dynamics, are systematically investigated. This study shows that Rayleigh-Taylor instability on the edge of an extruding sheet

  17. Comparing Computer-Adaptive and Curriculum-Based Measurement Methods of Assessment

    ERIC Educational Resources Information Center

    Shapiro, Edward S.; Gebhardt, Sarah N.

    2012-01-01

    This article reported the concurrent, predictive, and diagnostic accuracy of a computer-adaptive test (CAT) and curriculum-based measurements (CBM; both computation and concepts/application measures) for universal screening in mathematics among students in first through fourth grade. Correlational analyses indicated moderate to strong…

  18. Method for Reducing the Drag of Blunt-Based Vehicles by Adaptively Increasing Forebody Roughness

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A. (Inventor); Saltzman, Edwin J. (Inventor); Moes, Timothy R. (Inventor); Iliff, Kenneth W. (Inventor)

    2005-01-01

    A method for reducing drag upon a blunt-based vehicle by adaptively increasing forebody roughness to increase drag at the roughened area of the forebody, which results in a decrease in drag at the base of the vehicle and in total vehicle drag.

  19. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  20. Refinement trajectory and determination of eigenstates by a wavelet based adaptive method

    SciTech Connect

    Pipek, Janos; Nagy, Szilvia

    2006-11-07

    The detail structure of the wave function is analyzed at various refinement levels using the methods of wavelet analysis. The eigenvalue problem of a model system is solved in granular Hilbert spaces, and the trajectory of the eigenstates is traced in terms of the resolution. An adaptive method is developed for identifying the fine structure localization regions, where further refinement of the wave function is necessary.

  1. A Digitalized Gyroscope System Based on a Modified Adaptive Control Method.

    PubMed

    Xia, Dunzhu; Hu, Yiwei; Ni, Peizhen

    2016-03-04

    In this work we investigate the possibility of applying an adaptive control algorithm to Micro-Electro-Mechanical System (MEMS) gyroscopes. By comparing the gyroscope working conditions with a reference model, the adaptive control method provides online estimation of the key parameters and a proper control strategy for the system. The digital second-order oscillators in the reference model are replaced by two phase-locked loops (PLLs) to achieve steadier amplitude and frequency control. The adaptive law is modified to satisfy the condition of unequal coupling stiffness and coupling damping coefficients. The rotation mode of the gyroscope system is considered in our work, and a rotation elimination section is added to the digitalized system. Before implementing the algorithm on the hardware platform, different simulations are conducted to ensure the algorithm can meet the requirements of the angular rate sensor, and some of the key adaptive law coefficients are optimized. The coupling components are detected and suppressed respectively, and the Lyapunov criterion is applied to prove the stability of the system. The modified adaptive control algorithm is verified on a digitalized gyroscope system in which the control is realized in the digital domain with a Field Programmable Gate Array (FPGA). Key structure parameters are measured and compared with the estimation results, validating that the algorithm is feasible on this setup. Additional gyroscopes are used in repeated experiments to prove the generality of the algorithm.

  4. An adaptive Newton-method based on a dynamical systems approach

    NASA Astrophysics Data System (ADS)

    Amrein, Mario; Wihler, Thomas P.

    2014-09-01

    The traditional Newton method for solving nonlinear operator equations in Banach spaces is discussed within the context of the continuous Newton method. This setting makes it possible to interpret the Newton method as a discrete dynamical system and thereby to cast it in the framework of an adaptive step size control procedure. In so doing, our goal is to reduce the chaotic behavior of the original method without losing its quadratic convergence property close to the roots. The performance of the modified scheme is illustrated with various examples from algebraic and differential equations.
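
The idea of damping the Newton step to tame its chaotic behavior away from roots, while retaining the full (quadratically convergent) step near them, can be sketched with a simple residual-based step-size control; this is an illustrative backtracking scheme, not the authors' exact adaptivity strategy.

```python
import math

def adaptive_newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method with residual-based step-size control: take the full
    Newton step when it reduces |f|, otherwise halve the step until it does."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = -fx / df(x)
        t = 1.0
        while abs(f(x + t * step)) >= abs(fx) and t > 1e-8:
            t *= 0.5                # damp the step away from the root
        x += t * step
    return x

# Plain Newton diverges for atan starting from x0 = 3 (the step overshoots
# with growing magnitude); the damped variant converges to the root at 0.
root = adaptive_newton(math.atan, lambda x: 1.0 / (1.0 + x * x), x0=3.0)
```
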

  5. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.

  6. An adaptive segment method for smoothing lidar signal based on noise estimation

    NASA Astrophysics Data System (ADS)

    Wang, Yuzhao; Luo, Pingping

    2014-10-01

    An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress noise. In the ASSM, the noise level is defined as 3σ of the background signal. An integer N is chosen for locating the change positions on the signal curve: if the difference between two adjacent points is greater than 3Nσ, that position is recorded as an end point of a smoothing segment. All end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead derives the end points from each signal, so the smoothing windows are set adaptively. Each window is set to half the segment length, and average smoothing is then applied within each segment. An iterative process is required to reduce the end-point aberration of average smoothing, and two or three iterations are enough. In the ASSM, signals are smoothed in the spatial domain rather than the frequency domain, which means frequency-domain disturbances are avoided. In the experimental work, a lidar echo was simulated as if created by a space-borne lidar (e.g., CALIOP), and white Gaussian noise was added to the echo to represent the random noise arising from the environment and the detector. The ASSM was applied to the noisy echo to filter the noise, with N set to 3 and two iterations. The results show that the signal can be smoothed adaptively by the ASSM, although N and the number of iterations might need to be optimized when the ASSM is applied to a different lidar.
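
The procedure as described can be sketched directly. The paper's iterative end-point correction is approximated here by simply repeating a same-mode moving average, so some edge attenuation remains; the test signal (a square pulse standing in for a lidar return) is invented for illustration.

```python
import numpy as np

def assm_smooth(signal, background, N=3, iterations=2):
    """Sketch of the ASSM: estimate sigma from the background signal, split
    the curve where adjacent samples differ by more than 3*N*sigma, and
    smooth each segment separately with a moving average of half its length."""
    sigma = np.std(background)
    jumps = np.where(np.abs(np.diff(signal)) > 3 * N * sigma)[0] + 1
    edges = np.concatenate(([0], jumps, [len(signal)]))
    out = signal.astype(float).copy()
    for a, b in zip(edges[:-1], edges[1:]):
        seg = out[a:b]
        w = max(len(seg) // 2, 1)          # window = half the segment length
        kernel = np.ones(w) / w
        for _ in range(iterations):        # iterate to reduce end effects
            seg = np.convolve(seg, kernel, mode="same")
        out[a:b] = seg
    return out

rng = np.random.default_rng(3)
background = rng.normal(0, 0.1, 500)                  # noise-only reference
clean = np.concatenate([np.zeros(200), np.full(100, 5.0), np.zeros(200)])
noisy = clean + rng.normal(0, 0.1, 500)
smooth = assm_smooth(noisy, background)               # N=3, two iterations
```

Because the segment boundaries fall exactly at the detected jumps, the smoothing never averages across the pulse edges, so the sharp structure survives while in-segment noise is suppressed.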

  7. Thickness-based adaptive mesh refinement methods for multi-phase flow simulations with thin regions

    SciTech Connect

    Chen, Xiaodong; Yang, Vigor

    2014-07-15

    In numerical simulations of multi-scale, multi-phase flows, grid refinement is required to resolve regions with small scales. A notable example is liquid-jet atomization and subsequent droplet dynamics. It is essential to characterize the detailed flow physics with variable length scales with high fidelity, in order to elucidate the underlying mechanisms. In this paper, two thickness-based mesh refinement schemes are developed based on distance- and topology-oriented criteria for thin regions with confining wall/plane of symmetry and in any situation, respectively. Both techniques are implemented in a general framework with a volume-of-fluid formulation and an adaptive-mesh-refinement capability. The distance-oriented technique compares against a critical value, the ratio of an interfacial cell size to the distance between the mass center of the cell and a reference plane. The topology-oriented technique is developed from digital topology theories to handle more general conditions. The requirement for interfacial mesh refinement can be detected swiftly, without the need of thickness information, equation solving, variable averaging or mesh repairing. The mesh refinement level increases smoothly on demand in thin regions. The schemes have been verified and validated against several benchmark cases to demonstrate their effectiveness and robustness. These include the dynamics of colliding droplets, droplet motions in a microchannel, and atomization of liquid impinging jets. Overall, the thickness-based refinement technique provides highly adaptive meshes for problems with thin regions in an efficient and fully automatic manner.
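
The distance-oriented criterion can be sketched in one function: a cell is flagged when the ratio of its size to the distance between its center and the reference plane exceeds a critical value. The threshold value below is an assumption for illustration, not the paper's.

```python
def needs_refinement(cell_size, cell_center_y, plane_y=0.0, critical_ratio=0.5):
    """Distance-oriented criterion (sketch): flag an interfacial cell when the
    ratio of its size to its distance from a reference plane (a confining wall
    or plane of symmetry at y = plane_y) exceeds a critical value, so that a
    thin film near the plane always stays resolved by several cells."""
    distance = abs(cell_center_y - plane_y)
    return cell_size / max(distance, 1e-12) > critical_ratio

near = needs_refinement(cell_size=0.1, cell_center_y=0.05)  # thin-film cell
far = needs_refinement(cell_size=0.1, cell_center_y=2.0)    # bulk cell
# The coarse cell hugging the wall is refined; the same-size cell far
# from the wall is left alone.
```
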

  8. Adaptive neural network nonlinear control for BTT missile based on the differential geometry method

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Wang, Yongji; Xu, Jiangsheng

    2007-11-01

    A new nonlinear control strategy incorporating the differential geometry method with adaptive neural networks is presented for the nonlinear coupled system of a Bank-to-Turn (BTT) missile in the reentry phase. The basic control law is designed using differential-geometric feedback linearization, and online-learning neural networks are used to compensate for system errors due to aerodynamic parameter errors and external disturbance, exploiting the arbitrary nonlinear mapping and rapid online learning abilities of multi-layer neural networks. The online weight and threshold tuning rules are deduced from the tracking error performance functions by the Levenberg-Marquardt algorithm, which makes the learning process faster and more stable. Six-degree-of-freedom simulation results show that the attitude angles track the desired trajectory precisely, meaning that the proposed strategy effectively enhances the stability, tracking performance, and robustness of the control system.

  9. An improved human visual system based reversible data hiding method using adaptive histogram modification

    NASA Astrophysics Data System (ADS)

    Hong, Wien; Chen, Tung-Shou; Wu, Mei-Chen

    2013-03-01

    Jung et al. (IEEE Signal Processing Letters, 18(2), 95, 2011) proposed a reversible data hiding method considering the human visual system (HVS). They employed the mean of visited neighboring pixels to predict the current pixel value and estimated the just noticeable difference (JND) of the current pixel; message bits are then embedded by adjusting the embedding level according to the calculated JND. Jung et al.'s method achieved excellent image quality. However, their embedding algorithm may result in over-modification of pixel values and a large location map, which may deteriorate the image quality and decrease the pure payload. The proposed method exploits the nearest neighboring pixels to predict the visited pixel value and to estimate the corresponding JND. The cover pixels are preprocessed adaptively to reduce the size of the location map. We also employ an embedding level selection mechanism to prevent near-saturated pixels from being over-modified. Experimental results show that the image quality of the proposed method is higher than that of Jung et al.'s method, and the payload can also be increased due to the reduction of the location map.

  10. A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov-Maxwell system

    SciTech Connect

    Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain; Sonnendruecker, Eric; Bertrand, Pierre

    2008-08-10

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and the regularity of the distribution function. The multiscale expansion of the distribution function therefore yields a sparse representation of the data, saving memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. The interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase in the total number of points of the phase-space grid as they get finer over time. The adaptive method could be more useful in cases where the thin filaments that need to be resolved are a very small fraction of the hyper-volume, as arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to
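    The wavelet refinement criterion can be illustrated with a one-level Haar analysis on a 1D sample array: cells whose detail coefficients exceed a tolerance are flagged for refinement, since large details mark regions where the coarse scale represents the distribution function poorly. This is a minimal sketch, not the paper's full multiresolution machinery.

```python
import numpy as np

def haar_details(f):
    """One level of the Haar transform of a 1D array (even length):
    returns (averages, details)."""
    f = np.asarray(f, dtype=float)
    return (f[0::2] + f[1::2]) / 2.0, (f[0::2] - f[1::2]) / 2.0

def refine_flags(f, eps):
    """Flag fine cells whose local Haar detail coefficient exceeds eps,
    mimicking the MRA refinement criterion: both children of a coarse
    cell with a large detail are marked for refinement."""
    _, d = haar_details(f)
    return np.repeat(np.abs(d) > eps, 2)
```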

  11. A New Sparse Adaptive Channel Estimation Method Based on Compressive Sensing for FBMC/OQAM Transmission Network

    PubMed Central

    Wang, Han; Du, Wencai; Xu, Lingwei

    2016-01-01

    The conventional channel estimation methods based on a preamble for filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) systems in mobile-to-mobile sensor networks are inefficient. By utilizing the intrinsic sparsity of wireless channels, channel estimation is treated as a compressive sensing (CS) problem to improve the estimation performance. In this paper, an Adaptive Regularized Compressive Sampling Matching Pursuit (ARCoSaMP) algorithm is proposed. Unlike earlier greedy algorithms, the new algorithm achieves accurate reconstruction by choosing the support set adaptively and by exploiting a regularization process that performs a second selection of atoms in the support set, even though the sparsity of the channel is unknown. Simulation results show that CS-based methods obtain significant channel estimation performance improvements compared with conventional preamble-based methods. The proposed ARCoSaMP algorithm outperforms the conventional sparse adaptive matching pursuit (SAMP) algorithm, and yields even more interesting results than the most advanced greedy compressive sampling matching pursuit (CoSaMP) algorithm, without prior knowledge of the channel sparsity. PMID:27347967
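    ARCoSaMP's adaptive support sizing and regularization stage are not specified beyond the abstract, but the baseline CoSaMP loop it extends can be sketched with NumPy. The measurement matrix and sizes below are arbitrary illustrations.

```python
import numpy as np

def cosamp(Phi, y, K, iters=20):
    """Baseline CoSaMP sparse recovery (the algorithm ARCoSaMP builds on).

    Solves y = Phi @ x for a K-sparse x by iterating: correlate the residual
    with the dictionary, merge the 2K strongest atoms into the support,
    least-squares over the support, then prune back to the K largest entries.
    """
    m, n = Phi.shape
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(iters):
        proxy = Phi.T @ residual
        # Merge the current support with the 2K strongest correlations.
        support = np.union1d(np.nonzero(x)[0],
                             np.argsort(np.abs(proxy))[-2 * K:])
        b, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        # Prune to the K largest coefficients.
        idx = np.argsort(np.abs(b))[-K:]
        x = np.zeros(n)
        x[support[idx]] = b[idx]
        residual = y - Phi @ x
        if np.linalg.norm(residual) < 1e-10:
            break
    return x
```

    ARCoSaMP additionally grows the support size adaptively and re-screens (regularizes) the selected atoms, both of which this baseline omits.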

  14. Parallel adaptive mesh refinement method based on WENO finite difference scheme for the simulation of multi-dimensional detonation

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang

    2015-10-01

    For the numerical simulation of detonation, the computational cost on uniform meshes is large due to the vast separation of both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper proposes an AMR method with high-order accuracy for the numerical investigation of multi-dimensional detonation. A well-designed AMR method based on the finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes; it makes it possible for cells to communicate with each other quickly and easily. In order to achieve high-order accuracy, high-order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balanced parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that AMR&WENO is accurate and has high resolution. Moreover, we evaluate and compare the performance of the uniform-mesh WENO scheme and the parallel AMR&WENO method. The comparison provides further insight into the high performance of the parallel AMR&WENO method.
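    The Hilbert space-filling-curve load balancing mentioned above can be sketched as follows: blocks are ordered by their Hilbert index, then the ordered list is cut into near-equal contiguous chunks, one per rank, so each rank receives spatially compact work. The xy-to-index conversion is the standard iterative algorithm; the grid size and block coordinates are illustrative.

```python
def hilbert_index(n, x, y):
    """Distance of cell (x, y) along the Hilbert curve on an n x n grid
    (n a power of two); standard iterative xy -> d conversion."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                        # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def partition_blocks(blocks, nprocs, n=64):
    """Assign mesh blocks (given as (x, y) coordinates) to processes by
    sorting along the Hilbert curve and cutting into contiguous chunks."""
    ordered = sorted(blocks, key=lambda b: hilbert_index(n, b[0], b[1]))
    size, extra = divmod(len(ordered), nprocs)
    out, start = [], 0
    for r in range(nprocs):
        stop = start + size + (1 if r < extra else 0)
        out.append(ordered[start:stop])
        start = stop
    return out
```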

  15. An Investigation of Adaptive Pen Pressure Discretization Method Based on Personal Pen Pressure Use Profile

    NASA Astrophysics Data System (ADS)

    Xin, Yizhong; Ren, Xiangshi

    Continuous pen pressure can be used to operate multi-state widgets such as menus in pen-based user interfaces. The number of levels into which the pen pressure space is divided determines the number of states in the multi-state widgets. To increase the optimal number of divisions of the pen pressure space and achieve greater pen pressure usability, we propose a new discretization method that divides the pen pressure space according to a personal pen pressure use profile. We present four variations of the method: discretization according to a personal/aggregate pen pressure use profile with/without visual feedback of uniform level widths, plus the traditional even discretization method. Two experiments were conducted, respectively, to investigate the pen pressure use profile and to comparatively evaluate the performance of these methods. Results indicate that subjects performed fastest and with the fewest errors when the pen pressure space was divided according to the personal profile with visual feedback of uniform level widths (PU), and performed worst when the pen pressure space was divided evenly. With the PU method, the optimal number of divisions of the pen pressure space was 8. Visual feedback of uniform level widths enhanced the performance of uneven discretization. The findings of this study have implications for human-oriented pen pressure use in pen-pressure-based user interface designs.
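    One plausible reading of "dividing the pressure space according to a personal profile" is an equi-probable discretization: level boundaries are placed at quantiles of the user's recorded pressure samples, so that each level is reached equally often by that user. This interpretation is an assumption for illustration, not the authors' exact procedure.

```python
import numpy as np

def personal_levels(pressure_samples, n_levels):
    """Boundaries dividing the pressure range into n_levels, chosen as
    quantiles of a user's own recorded samples (equi-probable levels)."""
    qs = np.linspace(0, 1, n_levels + 1)[1:-1]
    return np.quantile(pressure_samples, qs)

def level_of(pressure, boundaries):
    """Map a raw pressure value to its discrete level (0-based)."""
    return int(np.searchsorted(boundaries, pressure))
```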

  16. Fast and adaptive method for SAR superresolution imaging based on point scattering model and optimal basis selection.

    PubMed

    Wang, Zheng-ming; Wang, Wei-wei

    2009-07-01

    A novel fast and adaptive method for synthetic aperture radar (SAR) superresolution imaging is developed. Based on the point scattering model in the phase history domain, a dictionary is constructed so that the superresolution imaging process can be converted into a sparse parameter estimation problem. The approximate orthogonality of this dictionary is established by theoretical derivation and experimental verification. Based on this orthogonality, we propose a fast algorithm for basis selection. Meanwhile, a threshold for obtaining the number and positions of the scattering centers is determined automatically from the inner-product curves of the bases and the observed data, and the sensitivity of the estimation performance to this threshold is analyzed. To reduce the computational and memory burden, a simplified superresolution imaging process is designed according to the characteristics of the imaging parameters. Experimental results on simulated images and an MSTAR image illustrate the validity of this method and its robustness at high noise levels. Compared with the traditional regularization method with a sparsity constraint, our proposed method incurs lower computational complexity and has better adaptability.
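    Because the dictionary is approximately orthogonal, a single matched-filter pass over the atoms already ranks the candidate scattering centers. The sketch below selects atoms whose correlation clears a threshold set relative to the strongest response; this relative rule is a simplified stand-in for the paper's automatic threshold.

```python
import numpy as np

def select_bases(D, y, rel_threshold=0.5):
    """Pick scattering-center atoms whose correlation with the observed
    data y clears an automatic threshold.

    D is a (signal_dim, n_atoms) dictionary; returns the selected atom
    indices and their amplitude estimates (the inner products)."""
    D = D / np.linalg.norm(D, axis=0)          # unit-norm atoms
    corr = D.T @ y
    mask = np.abs(corr) >= rel_threshold * np.abs(corr).max()
    return np.nonzero(mask)[0], corr[mask]
```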

  17. Total enthalpy-based lattice Boltzmann method with adaptive mesh refinement for solid-liquid phase change

    NASA Astrophysics Data System (ADS)

    Huang, Rongzong; Wu, Huiying

    2016-06-01

    A total enthalpy-based lattice Boltzmann (LB) method with adaptive mesh refinement (AMR) is developed in this paper to efficiently simulate solid-liquid phase change problems, in which variables vary significantly near the phase interface and a finer grid is therefore required there. In the total enthalpy-based LB method, the velocity field is solved by an incompressible LB model with a multiple-relaxation-time (MRT) collision scheme, and the temperature field is solved by a total enthalpy-based MRT LB model with the phase interface effects considered and the deviation term eliminated. With the kinetic assumption that the density distribution function for the solid phase is at its equilibrium state, a volumetric LB scheme is proposed to accurately realize the no-slip velocity condition on the diffusive phase interface and in the solid phase. Compared with previous schemes, this scheme avoids nonphysical flow in the solid phase. The AMR approach is developed on multiblock grids: an indicator function controls the adaptive generation of the multiblock grids and guarantees an overlap area between adjacent blocks for information exchange. Since MRT collision schemes are used, the information exchange is carried out directly in moment space. Numerical tests first validate the strict satisfaction of the no-slip velocity condition, and melting problems in a square cavity with different Prandtl and Rayleigh numbers are then simulated, demonstrating that the present method handles solid-liquid phase change problems with high efficiency and accuracy.
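    The total-enthalpy bookkeeping at the heart of the method can be illustrated with a plain 1D finite-difference analogue (the paper itself solves it with an MRT lattice Boltzmann model). With total enthalpy H = cp*T + fl*L, the temperature T and liquid fraction fl are recovered algebraically from H, so the phase interface needs no explicit tracking.

```python
import numpy as np

def melt_step(H, dx, dt, k, cp, L, Tm=0.0):
    """One explicit step of a 1D total-enthalpy update for melting.
    Boundary cells are held fixed; mushy cells sit at the melting
    temperature Tm while latent heat is absorbed."""
    Hs, Hl = cp * Tm, cp * Tm + L                 # solidus / liquidus enthalpies
    fl = np.clip((H - Hs) / L, 0.0, 1.0)          # liquid fraction
    T = np.where(H < Hs, H / cp,
        np.where(H > Hl, (H - L) / cp, Tm))       # recover T from H
    Hn = H.copy()
    Hn[1:-1] += dt * k / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return Hn, T, fl
```

    Starting from a cold solid with a hot left boundary, the liquid fraction front advances into the domain step by step, which is the Stefan-problem behavior the square-cavity tests generalize to two dimensions.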

  18. Feedback in Videogame-Based Adaptive Training

    ERIC Educational Resources Information Center

    Rivera, Iris Daliz

    2010-01-01

    The field of training has been changing rapidly due to advances in technology such as videogame-based adaptive training. Videogame-based adaptive training has provided flexibility and adaptability for training in cost-effective ways. Although this method of training may have many benefits for the trainee, current research has not kept up to pace…

  19. Method For Model-Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1990-01-01

    Relatively simple method of model-reference adaptive control (MRAC) developed from two prior classes of MRAC techniques: the signal-synthesis method and the parameter-adaptation method. Incorporated into a unified theory, which yields a more general adaptation scheme.

  20. New intensity-hue-saturation pan-sharpening method based on texture analysis and genetic algorithm-adaption

    NASA Astrophysics Data System (ADS)

    Masoudi, Rasoul; Kabiri, Peyman

    2014-01-01

    Pan-sharpening aims to fuse a low-resolution multispectral image with a high-resolution panchromatic image to create a multispectral image with high spatial and spectral resolution. The intensity-hue-saturation (IHS) fusion method transforms an image from RGB space to IHS space. This paper reports a method to improve the spectral resolution of the final multispectral image. The proposed method applies two modifications to the basic IHS method to improve the sharpness of the final image. First, a genetic algorithm is used to find the weight of each band of the multispectral image in the fusion process. Second, a texture-based technique is proposed to preserve the spectral information of the final image with respect to texture boundaries. Spectral quality metrics in terms of SAM, SID, Q-average, RASE, RMSE, CC, ERGAS, and UIQI are used in our experiments. Experimental results on IKONOS and QuickBird data show that the proposed method is more efficient than the original IHS-based fusion approach and some of its extensions, such as IKONOS IHS, edge-adaptive IHS, and explicit band coefficient IHS, in preserving the spectral information of multispectral images.
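    For fixed band weights, the first modification reduces to the following fusion step: the intensity component is a weighted sum of the multispectral bands, and the pan/intensity difference is injected into every band. The genetic search that produces the weights `w` and the texture-based step are omitted; `w` is simply passed in here.

```python
import numpy as np

def weighted_ihs_fusion(ms, pan, w):
    """Weighted IHS-style pan-sharpening.

    ms : (bands, H, W) multispectral image, already upsampled to the pan grid
    pan: (H, W) panchromatic image
    w  : per-band weights (normalized internally)
    """
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    intensity = np.tensordot(w, ms, axes=1)        # (H, W) weighted intensity
    return ms + (pan - intensity)[None, :, :]       # inject pan detail into each band
```

    A sanity property of this injection scheme: if the pan image already equals the weighted intensity, the multispectral image passes through unchanged, i.e. zero spectral distortion.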

  1. SU-D-207-04: GPU-Based 4D Cone-Beam CT Reconstruction Using Adaptive Meshing Method

    SciTech Connect

    Zhong, Z; Gu, X; Iyengar, P; Mao, W; Wang, J; Guo, X

    2015-06-15

    Purpose: Due to the limited number of projections at each phase, the image quality of four-dimensional cone-beam CT (4D-CBCT) is often degraded, which decreases the accuracy of subsequent motion modeling. One of the promising remedies is the simultaneous motion estimation and image reconstruction (SMEIR) approach. The objective of this work is to enhance the computational speed of the SMEIR algorithm using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate a tetrahedral mesh based on the features of a reference-phase 4D-CBCT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. After mesh generation, the updated motion model and the other phases of the 4D-CBCT are obtained by matching the 4D-CBCT projection images at each phase with the corresponding forward projections of the deformed reference phase. The entire reconstruction is implemented on GPU, significantly increasing computational efficiency through its tremendous parallel computing ability. Results: A 4D XCAT digital phantom was used to test the proposed mesh-based image reconstruction algorithm. The image results show that both bone structures and the inside of the lung are well preserved and the tumor position is well captured. Compared with the previous voxel-based CPU implementation of SMEIR, the proposed method is about 157 times faster for reconstructing a 10-phase 4D-CBCT with dimensions 256×256×150. Conclusion: The GPU-based parallel 4D-CBCT reconstruction method uses the feature-based mesh for estimating the motion model and demonstrates image quality equivalent to the previous voxel-based SMEIR approach, with significantly improved computational speed.

  2. Ultimately accurate SRAF replacement for practical phases using an adaptive search algorithm based on the optimal gradient method

    NASA Astrophysics Data System (ADS)

    Maeda, Shimon; Nosato, Hirokazu; Matsunawa, Tetsuaki; Miyairi, Masahiro; Nojima, Shigeki; Tanaka, Satoshi; Sakanashi, Hidenori; Murakawa, Masahiro; Saito, Tamaki; Higuchi, Tetsuya; Inoue, Soichi

    2010-04-01

    The SRAF (Sub-Resolution Assist Feature) technique has been widely used for DOF enhancement. Below the 40nm design node, even with SRAF, the resolution limit is approached due to hyper-NA imaging or low-k1 lithography conditions, especially for the contact layer. As a result, complex or random layout patterns such as logic data, as well as intermediate-pitch patterns, become increasingly sensitive to photoresist pattern fidelity. This means that more accurate resolution techniques are needed to cope with lithographic patterning fidelity issues under low-k1 conditions. To address these issues, new SRAF techniques such as model-based SRAF using an interference map or inverse lithography have been proposed. However, these approaches do not offer sufficient assurance of accuracy or performance, because the ideal mask they generate is lost when switching to a manufacturable mask with Manhattan structures; consequently, it can be very hard to put them into practice in a production flow. In this paper, we propose a novel method for extremely accurate SRAF placement using an adaptive search algorithm. In this method, the initial SRAF positions are generated by traditional placement such as rule-based SRAF, and are then adjusted by an adaptive algorithm driven by lithography simulation. This method has three advantages: precision, efficiency, and industrial applicability. First, the lithography simulation uses an actual computational model that considers the process window, so the proposed method can precisely adjust the SRAF positions and consequently acquire the best ones. Second, because the adaptive algorithm is based on the optimal gradient method, a very simple rectilinear search, the SRAF positions can be adjusted with high efficiency. Third, our proposed method, which utilizes the traditional SRAF placement, is

  3. Development and Evaluation of an E-Learning Course for Deaf and Hard of Hearing Based on the Advanced Adapted Pedagogical Index Method

    ERIC Educational Resources Information Center

    Debevc, Matjaž; Stjepanovic, Zoran; Holzinger, Andreas

    2014-01-01

    Web-based and adapted e-learning materials provide alternative methods of learning to those used in a traditional classroom. Within the study described in this article, deaf and hard of hearing people used an adaptive e-learning environment to improve their computer literacy. This environment included streaming video with sign language interpreter…

  4. Large Eddy simulation of compressible flows with a low-numerical dissipation patch-based adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Pantano, Carlos

    2005-11-01

    We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and as such it utilizes refinement to computational advantage. The numerical method for the resolved-scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997)

  5. A low numerical dissipation patch-based adaptive mesh refinement method for large-eddy simulation of compressible flows

    NASA Astrophysics Data System (ADS)

    Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.

    2007-01-01

    We present a methodology for the large-eddy simulation of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). A description of a conservative, flux-based hybrid numerical method that uses both centered finite-difference and a weighted essentially non-oscillatory (WENO) scheme is given, encompassing the cases of scheme alternation and internal mesh interfaces resulting from SAMR. In this method, the centered scheme is used in turbulent flow regions while WENO is employed to capture shocks. One-, two- and three-dimensional numerical experiments and example simulations are presented including homogeneous shock-free turbulence, a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability.

  6. Improving the performance of lesion-based computer-aided detection schemes of breast masses using a case-based adaptive cueing method

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Qian, Wei; Zheng, Bin

    2016-03-01

    Current commercialized CAD schemes have high false-positive (FP) detection rates and also correlate highly with radiologists in positive lesion detection. Thus, we recently investigated a new approach to improve the efficacy of applying CAD to assist radiologists in reading and interpreting screening mammograms: a new global-feature-based CAD scheme that can cue a warning sign on cases at high risk of being positive. In this study, we investigate the possibility of fusing global (case-based) scores with local (lesion-based) CAD scores using an adaptive cueing method. We hypothesize that the information from global feature extraction (features extracted from the whole breast regions) differs from, and can supplement, the locally extracted features (computed from the segmented lesion regions only). On a large and diverse full-field digital mammography (FFDM) testing dataset of 785 cases (347 negative and 438 cancer cases with masses only), we ran our lesion-based and case-based CAD schemes "as is" on the whole dataset. To assess the supplementary information provided by the global features, we used an adaptive cueing method to adjust the original CAD-generated detection score (Sorg) of a detected suspicious mass region based on the computed case-based score (Scase) of the case associated with that region. With adaptive cueing, better sensitivity was obtained at low FP rates (<= 1 FP per image); namely, sensitivity increases (in the FROC curves) of up to 6.7% and 8.2% were obtained for the ROI-based and case-based results, respectively.
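    The abstract does not spell out the exact fusion rule, so the sketch below is an illustrative adaptive-cueing step: each region's lesion score is nudged up when the whole case looks high-risk and down otherwise, with `alpha` (a hypothetical parameter) controlling the strength of the cue.

```python
import numpy as np

def cue_adjusted_score(s_org, s_case, alpha=0.2):
    """Fuse a lesion-based CAD score s_org with the case-based (global)
    score s_case; both are assumed to lie in [0, 1], with 0.5 as the
    neutral case-risk level. Returns the adjusted, clipped score."""
    adjusted = s_org + alpha * (s_case - 0.5)
    return float(np.clip(adjusted, 0.0, 1.0))
```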

  7. Particle System Based Adaptive Sampling on Spherical Parameter Space to Improve the MDL Method for Construction of Statistical Shape Models

    PubMed Central

    Zhou, Xiangrong; Hirano, Yasushi; Tachibana, Rie; Hara, Takeshi; Kido, Shoji; Fujita, Hiroshi

    2013-01-01

    Minimum description length (MDL) based group-wise registration was a state-of-the-art method for determining the corresponding points of 3D shapes for the construction of statistical shape models (SSMs). However, it suffered from the problem that the determined corresponding points did not spread uniformly over the original shapes, since they were obtained by uniformly sampling the aligned shape on the parameterized space of the unit sphere. We proposed a particle-system based method to obtain adaptive sampling positions on the unit sphere to resolve this problem. A set of particles was placed on the unit sphere to construct a particle system whose energy was related to the distortions of the parameterized meshes. By minimizing this energy, each particle moved on the unit sphere; when the system became steady, the particles were treated as vertices to build a spherical mesh, which was then relaxed to slightly adjust the vertices and obtain optimal sampling positions. We used 47 cases of (left and right) lungs and 50 cases of livers, (left and right) kidneys, and spleens for evaluation. Experiments showed that the proposed method resolved the problem of the original MDL method and performed better in the generalization and specificity tests. PMID:23861721
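    The particle-system relaxation can be sketched as gradient descent on a pairwise repulsion energy, with the particles re-projected onto the unit sphere after each step. The paper's energy is driven by mesh-parameterization distortion; the plain 1/r^2 repulsion used here is an illustrative substitute.

```python
import numpy as np

def relax_on_sphere(points, steps=200, lr=0.01):
    """Spread particles over the unit sphere by minimizing a pairwise
    repulsion energy; after each gradient step the particles are
    re-projected onto the sphere."""
    p = points / np.linalg.norm(points, axis=1, keepdims=True)
    for _ in range(steps):
        diff = p[:, None, :] - p[None, :, :]             # pairwise offsets
        d2 = (diff ** 2).sum(-1) + np.eye(len(p))        # avoid self-division
        force = (diff / d2[..., None] ** 2).sum(axis=1)  # net repulsive force
        p = p + lr * force
        p /= np.linalg.norm(p, axis=1, keepdims=True)    # back to the sphere
    return p
```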

  8. An adaptive level set method

    SciTech Connect

    Milne, R.B.

    1995-12-01

    This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.

  9. Ontology-Based Adaptive Dynamic e-Learning Map Planning Method for Conceptual Knowledge Learning

    ERIC Educational Resources Information Center

    Chen, Tsung-Yi; Chu, Hui-Chuan; Chen, Yuh-Min; Su, Kuan-Chun

    2016-01-01

    E-learning improves the shareability and reusability of knowledge, and surpasses the constraints of time and space to achieve remote asynchronous learning. Since the depth of learning content often varies, it is thus often difficult to adjust materials based on the individual levels of learners. Therefore, this study develops an ontology-based…

  10. Novel image fusion method based on adaptive pulse coupled neural network and discrete multi-parameter fractional random transform

    NASA Astrophysics Data System (ADS)

    Lang, Jun; Hao, Zhengchao

    2014-01-01

    In this paper, we first propose the discrete multi-parameter fractional random transform (DMPFRNT), which distributes the spectrum randomly and uniformly. We then introduce this new spectral transform into the image fusion field and present a new approach to remote sensing image fusion that utilizes both an adaptive pulse coupled neural network (PCNN) and the discrete multi-parameter fractional random transform, in order to meet the requirements of both high spatial resolution and low spectral distortion. In the proposed scheme, the multi-spectral (MS) and panchromatic (Pan) images are converted into the discrete multi-parameter fractional random transform domain. In the DMPFRNT spectrum domain, the high amplitude spectrum (HAS) and low amplitude spectrum (LAS) components carry different information from the original images. We take full advantage of the synchronized pulse issuance characteristics of the PCNN to extract the HAS and LAS components properly, yielding PCNN ignition mapping images that can be used to determine the fusion parameters. In the fusion process, the local standard deviation of the amplitude spectrum is chosen as the link strength of the pulse coupled neural network. Numerical simulations demonstrate that the proposed method is more reliable than, and superior to, several existing methods based on Hue-Saturation-Intensity representation, Principal Component Analysis, the discrete fractional random transform, etc.

  11. A Support Method with Changeable Training Strategies Based on Mutual Adaptation between a Ubiquitous Pet and a Learner

    NASA Astrophysics Data System (ADS)

    Ye, Xianzhi; Jing, Lei; Kansen, Mizuo; Wang, Junbo; Ota, Kaoru; Cheng, Zixue

    With the progress of ubiquitous technology, ubiquitous learning presents new opportunities to learners. A learner's situation can be grasped by analyzing the learner's actions collected by sensors, RFIDs, or cameras, in order to provide support at the proper time, place, and situation. Training for acquiring skills and enhancing physical abilities through exercise and experience in the real world is an important domain in u-learning. A training program may last for several days and has one or more training units (exercises) per day. A learner's performance in a unit is considered a short-term state. The performance in a series of units may change with patterns: progress, plateau, and decline. The long-term state over a series of units is computed cumulatively from the short-term states. In a learning/training program, it is necessary to apply different support strategies to adapt to different states of the learner. Adaptation in learning support is significant, because a learner easily loses interest without it. Systems with adaptive support usually provide stimulators to a learner, who can then be highly motivated at the beginning. However, when the stimulators reach a certain level, the learner may lose motivation, because the long-term state of the learner changes dynamically: a progress state may change to a plateau state or a decline state, and different long-term learning states need different types of stimulators. However, the stimulators and advice provided by existing systems are monotonic, without changeable support strategies. We propose mutual adaptive support. Mutual adaptation means that the system and the learner each have their own state: on the one hand, the system tries to change its state to adapt to the learner's state in order to provide adaptive support; on the other hand, the learner can change his or her performance following the advice given based on the state of the system.

  12. Domain adaptive boosting method and its applications

    NASA Astrophysics Data System (ADS)

    Geng, Jie; Miao, Zhenjiang

    2015-03-01

    Differences in data distribution widely exist among datasets, i.e., domains. For many pattern recognition, natural language processing, and content-based analysis systems, the decrease in performance caused by domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach, with extensions to cover the domain differences between the source and target domains. The approach contains two main stages: source-domain clustering and source-domain sample selection. By iteratively adding selected training samples from the source domain, the discrimination model achieves better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for domains with large-scale samples and is easy to extend to multisource adaptation. We implement this method in three computer vision systems: a skin detection model for single images, a video concept detection model, and an object classification model. In the experiments, we compare the performance of several commonly used methods and the proposed DAB; in most situations, DAB is superior.
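    The source-domain sample selection stage can be sketched as follows: source samples are ranked by their distance to the target-domain distribution and only the closest fraction is fed into each boosting round. This sketch collapses the paper's clustering stage to a single target centroid for brevity; a real implementation would cluster with, e.g., k-means.

```python
import numpy as np

def select_source_samples(Xs, Xt, frac=0.3):
    """Return indices of the source samples (rows of Xs) that lie closest
    to the target-domain data Xt, so boosting trains on source data that
    resembles the target distribution."""
    centroid = Xt.mean(axis=0)                   # one crude "cluster" center
    dist = np.linalg.norm(Xs - centroid, axis=1)
    k = max(1, int(frac * len(Xs)))
    return np.argsort(dist)[:k]
```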

  13. A Parallel Adaptive Finite Element Method for the Simulation of Photon Migration with the Radiative-Transfer-Based Model

    PubMed Central

    Lu, Yujie; Chatziioannou, Arion F.

    2009-01-01

    Whole-body optical molecular imaging of mouse models in preclinical research has developed rapidly in recent years. In this context, it is essential to develop novel simulation methods of light propagation for optical imaging, especially when a priori knowledge, a large-volume domain, and a wide range of optical properties need to be considered in the reconstruction algorithm. In this paper, we propose a three-dimensional parallel adaptive finite element method with simplified spherical harmonics (SPN) approximation to simulate optical photon propagation in large volumes of heterogeneous tissue. The simulation speed is significantly improved by a posteriori parallel adaptive mesh refinement and dynamic mesh repartitioning. Compared with the diffusion equation and Monte Carlo methods, the SPN method shows improved performance and demonstrates the necessity of high-order approximation in heterogeneous domains. Optimal solver selection and time-cost analysis in a real mouse geometry further improve the performance of the proposed algorithm and show the superiority of the proposed parallel adaptive framework for whole-body optical molecular imaging in murine models. PMID:20052300

  15. Structured adaptive grid generation using algebraic methods

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.

    1993-01-01

    The accuracy of a numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure that allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively dense, computationally expensive grid. Grid adaptive schemes fall into two basic categories: differential and algebraic. The differential method is based on a variational approach, in which a functional containing measures of grid smoothness, orthogonality, and volume variation is minimized by a variational principle. This approach provides a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. The algebraic method, on the other hand, requires much less computational effort, but the grid may not be smooth. Algebraic techniques are based on devising an algorithm in which the grid movement is governed by estimates of the local error in the numerical solution: points in large-error regions attract other points, and points in low-error regions repel them. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three-step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial, step is to redistribute grid points in the computational domain according to this weighting mesh. The third and last step is to re-evaluate the flow properties by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration of grid points.
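
    The equidistribution law mentioned above can be illustrated in one dimension: grid points are placed so that each interval carries an equal share of a weight (monitor) function. This is a simplified 1-D sketch, not the paper's structured multi-dimensional algorithm; the function name is an assumption.

```python
import numpy as np

def equidistribute(x, w, n_new=None):
    """Redistribute 1-D grid points so each interval carries the same
    amount of the weight function w (the equidistribution law)."""
    n_new = len(x) if n_new is None else n_new
    # Cumulative weight via the trapezoidal rule, normalized to [0, 1].
    cw = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    cw /= cw[-1]
    # Invert: place new points at equal increments of cumulative weight.
    targets = np.linspace(0.0, 1.0, n_new)
    return np.interp(targets, cw, x)
```

    With a weight function peaked where the solution error is large, the redistributed grid clusters points there while keeping the endpoints fixed.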

  16. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of partial differential equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For dynamic domain partitioning, trees are considered the minimum quanta of data to be migrated between processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  17. New evidence-based adaptive clinical trial methods for optimally integrating predictive biomarkers into oncology clinical development programs

    PubMed Central

    Beckman, Robert A.; Chen, Cong

    2013-01-01

    Predictive biomarkers are important to the future of oncology; they can be used to identify patient populations who will benefit from therapy, increase the value of cancer medicines, and decrease the size and cost of clinical trials while increasing their chance of success. But predictive biomarkers do not always work. When unsuccessful, they add cost, complexity, and time to drug development. This perspective describes phases 2 and 3 development methods that efficiently and adaptively check the ability of a biomarker to predict clinical outcomes. In the end, the biomarker is emphasized to the extent that it can actually predict. PMID:23489587

  18. An Atlas-Based Electron Density Mapping Method for Magnetic Resonance Imaging (MRI)-Alone Treatment Planning and Adaptive MRI-Based Prostate Radiation Therapy

    SciTech Connect

    Dowling, Jason A.; Lambert, Jonathan; Parker, Joel; Salvado, Olivier; Fripp, Jurgen; Capp, Anne; Wratten, Chris; Denham, James W.; Greer, Peter B.

    2012-05-01

    Purpose: Prostate radiation therapy dose planning directly on magnetic resonance imaging (MRI) scans would reduce costs and uncertainties due to multimodality image registration. Adaptive planning using a combined MRI-linear accelerator approach will also require dose calculations to be performed using MRI data. The aim of this work was to develop an atlas-based method to map realistic electron densities to MRI scans for dose calculations and digitally reconstructed radiograph (DRR) generation. Methods and Materials: Whole-pelvis MRI and CT scan data were collected from 39 prostate patients. Scans from 2 patients showed significantly different anatomy from that of the remaining patient population, and these patients were excluded. A whole-pelvis MRI atlas was generated based on the manually delineated MRI scans. In addition, a conjugate electron-density atlas was generated from the coregistered computed tomography (CT)-MRI scans. Pseudo-CT scans for each patient were automatically generated by global and nonrigid registration of the MRI atlas to the patient MRI scan, followed by application of the same transformations to the electron-density atlas. Comparisons were made between organ segmentations by using the Dice similarity coefficient (DSC) and point dose calculations for 26 patients on planning CT and pseudo-CT scans. Results: The agreement between pseudo-CT and planning CT was quantified by differences in the point dose at isocenter and distance to agreement in corresponding voxels. Dose differences were found to be less than 2%. Chi-squared values indicated that the planning CT and pseudo-CT dose distributions were equivalent. No significant differences (p > 0.9) were found between CT and pseudo-CT Hounsfield units for organs of interest. Mean ± standard deviation DSC scores for the atlas-based segmentation were 0.79 ± 0.12 for the pelvic bones, 0.70 ± 0.14 for the prostate, 0.64 ± 0.16 for the bladder, and 0.63 ± 0.16 for the rectum.
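
    The Dice similarity coefficient used above to compare segmentations is straightforward to compute; a minimal sketch for binary masks (the function name is our own):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|).  Returns 1.0 for two empty masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

    A DSC of 1 indicates perfect overlap, 0 indicates disjoint masks; values around 0.6-0.8, as reported above, indicate moderate to good agreement.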

  19. An adaptive selective frequency damping method

    NASA Astrophysics Data System (ADS)

    Jordi, Bastien; Cotter, Colin; Sherwin, Spencer

    2015-03-01

    The selective frequency damping (SFD) method is used to obtain unstable steady-state solutions of dynamical systems. The stability of this method is governed by two parameters: the control coefficient and the filter width. Convergence is not guaranteed for an arbitrary choice of these parameters, and even when the method does converge, the time necessary to reach a steady-state solution may be very long. We present an adaptive SFD method and show that by modifying the control coefficient and the filter width throughout the solver execution, we can reach an optimum convergence rate. The method is based on successive approximations of the dominant eigenvalue of the flow studied. We design a one-dimensional model to select SFD parameters that enable us to control the evolution of the least stable eigenvalue of the system. These parameters are then used for the application of the SFD method to the multi-dimensional flow problem. We apply this adaptive method to a set of classical test cases of computational fluid dynamics and show that the steady-state solutions obtained are similar to those found in the literature. We then apply it to a specific vortex-dominated flow (of interest to the automotive industry) whose stability had never been studied before. Seventh Framework Programme of the European Commission - ANADE project under Grant Contract PITN-GA-289428.
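
    The basic (non-adaptive) SFD formulation that the two parameters govern can be sketched on a toy problem. Here the control coefficient chi and filter width Delta are fixed, hand-tuned values for an unstable spiral; the paper's contribution is precisely to adapt them online, which this sketch does not do.

```python
import numpy as np

def sfd_step(q, qbar, f, chi, Delta, dt):
    """One explicit Euler step of selective frequency damping:
      dq/dt    = f(q) - chi * (q - qbar)     (damped problem)
      dqbar/dt = (q - qbar) / Delta          (low-pass filtered state)
    chi is the control coefficient, Delta the filter width."""
    q_new = q + dt * (f(q) - chi * (q - qbar))
    qbar_new = qbar + dt * (q - qbar) / Delta
    return q_new, qbar_new

# Toy problem: an unstable spiral dq/dt = A q (eigenvalues 0.1 ± 1i),
# whose unstable steady state is q = 0.
A = np.array([[0.1, -1.0], [1.0, 0.1]])
f = lambda q: A @ q

q, qbar = np.array([1.0, 0.0]), np.zeros(2)
for _ in range(5000):                     # integrate to t = 50
    q, qbar = sfd_step(q, qbar, f, chi=0.5, Delta=2.0, dt=0.01)
# with these parameters the damped system is stable and q -> 0
```

    Without the damping term the same trajectory spirals outward; with it, the solver settles onto the unstable steady state.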

  20. A low-numerical dissipation, patch-based adaptive-mesh-refinement method for large-eddy simulation of compressible flows

    NASA Astrophysics Data System (ADS)

    Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.

    2006-09-01

    This paper describes a hybrid finite-difference method for the large-eddy simulation of compressible flows with low-numerical dissipation and structured adaptive mesh refinement (SAMR). A conservative flux-based approach is described with an explicit centered scheme used in turbulent flow regions while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. Three-dimensional numerical simulations of a Richtmyer-Meshkov instability are presented.

  1. A new method based on Adaptive Discrete Wavelet Entropy Energy and Neural Network Classifier (ADWEENN) for recognition of urine cells from microscopic images independent of rotation and scaling.

    PubMed

    Avci, Derya; Leblebicioglu, Mehmet Kemal; Poyraz, Mustafa; Dogantekin, Esin

    2014-02-01

    Analysis and classification of urine cells has become an important topic for the medical diagnosis of some diseases. In this study, we suggest a new technique based on Adaptive Discrete Wavelet Entropy Energy and Neural Network Classifier (ADWEENN) for recognition of urine cells from microscopic images, independent of rotation and scaling. Digital image processing methods such as noise reduction, contrast enhancement, segmentation, and morphological processing are used in the feature extraction stage of ADWEENN. Image processing and pattern recognition concern the operation and design of systems that recognize patterns in data sets. In past years, a major difficulty in the classification of microscopic images was the lack of adequate methods for characterizing them. Recently, multi-resolution image analysis methods such as Gabor filters and discrete wavelet decompositions have proved superior to classic methods for the analysis of such microscopic images. The ADWEENN method consists of four stages: preprocessing, feature extraction, classification, and testing. The Discrete Wavelet Transform (DWT) together with adaptive wavelet entropy and energy is used for adaptive feature extraction, strengthening the salient features presented to the Artificial Neural Network (ANN) classifier. Tests of the developed ADWEENN method showed that an average recognition success of 97.58% was obtained.
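
    Wavelet energy and entropy features of the kind described can be illustrated with a simplified sketch: a multi-level orthonormal Haar DWT, per-band relative energies, and an entropy over those energies. The paper's actual wavelet, feature set, and adaptivity may differ; this assumes the signal length is divisible by 2^levels.

```python
import numpy as np

def haar_energy_entropy(x, levels=3):
    """Relative per-band wavelet energies plus a wavelet entropy,
    computed from a multi-level orthonormal Haar DWT."""
    x = np.asarray(x, float)
    bands = []
    for _ in range(levels):
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation band
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail band
        bands.append(d)
        x = a
    bands.append(x)                               # final approximation
    energies = np.array([np.sum(b ** 2) for b in bands])
    p = energies / energies.sum()                 # relative band energies
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return p, entropy
```

    The relative energies and the entropy form a compact, scale-aware feature vector that could be fed to a downstream classifier such as an ANN.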

  2. New communication schemes based on adaptive synchronization

    NASA Astrophysics Data System (ADS)

    Yu, Wenwu; Cao, Jinde; Wong, Kwok-Wo; Lü, Jinhu

    2007-09-01

    In this paper, adaptive synchronization with unknown parameters is discussed for a unified chaotic system by using the Lyapunov method and the adaptive control approach. Several communication schemes, including chaotic masking, chaotic modulation, and chaotic shift key strategies, are then proposed based on the modified adaptive method. The transmitted signal is masked by the chaotic signal or modulated into the system, which effectively blurs the constructed return map and can resist return-map attacks. The driving system, with unknown parameters and functions, is almost completely unknown to attackers, making this method more secure for communication. Finally, simulation examples based on the proposed communication schemes and some cryptanalysis work are given to verify the theoretical analysis.
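
    The synchronization that underpins such chaotic-masking schemes can be sketched with a classical, non-adaptive drive-response (Pecora-Carroll) setup: a Lorenz transmitter drives a receiver subsystem with its x component, and the receiver's (y, z) states lock onto the transmitter's. This is only the synchronization backbone, not the paper's adaptive scheme with unknown parameters; all parameter values are the standard Lorenz ones.

```python
import numpy as np

# Pecora-Carroll sketch: receiver (yr, zr), driven by the transmitted x,
# synchronizes with the transmitter's (y, z).  In a masking scheme a small
# message would be added to the drive signal; it is omitted here to show
# the pure synchronization.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, n = 1e-3, 40000

x, y, z = 1.0, 1.0, 1.0          # transmitter state
yr, zr = -5.0, 5.0               # receiver response subsystem
err = []
for _ in range(n):
    dx = sigma * (y - x); dy = x * (rho - z) - y; dz = x * y - beta * z
    dyr = x * (rho - zr) - yr; dzr = x * yr - beta * zr   # driven by x
    x += dt * dx; y += dt * dy; z += dt * dz
    yr += dt * dyr; zr += dt * dzr
    err.append(abs(y - yr))
# the synchronization error decays exponentially
```

    The error dynamics of the driven (y, z) subsystem are contracting for the Lorenz system, which is why the receiver locks on regardless of its initial condition.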

  3. A composite control method based on the adaptive RBFNN feedback control and the ESO for two-axis inertially stabilized platforms.

    PubMed

    Lei, Xusheng; Zou, Ying; Dong, Fei

    2015-11-01

    Due to the nonlinearity and time variation of a two-axis inertially stabilized platform (ISP) system, conventional feedback control cannot be utilized directly. To realize control performance with fast dynamic response and high stabilization precision, the dynamic model of the ISP system is expected to match an ideal model that satisfies the desired control performance. Therefore, a composite control method based on adaptive radial basis function neural network (RBFNN) feedback control and an extended state observer (ESO) is proposed for the ISP. The adaptive RBFNN generates the feedback control parameters online: based on the state error information gathered during operation, it can be constructed and optimized directly, so no a priori training data are needed. Furthermore, a linear second-order ESO is constructed to compensate for the composite disturbance. The asymptotic stability of the proposed control method is proven using Lyapunov stability theory. The applicability of the proposed method is validated by a series of simulations and flight tests.
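
    The disturbance-estimating role of a linear ESO can be sketched on a double-integrator plant with an unknown constant disturbance. The gains use the standard bandwidth parameterization beta = [3*wo, 3*wo^2, wo^3]; these values and the plant are illustrative assumptions, not the paper's tuned ISP design.

```python
import numpy as np

# Plant:  x1' = x2,  x2' = d + b0*u,  with unknown constant disturbance d.
# Linear ESO: the extended state z3 estimates the total disturbance.
wo, b0, dt = 10.0, 1.0, 1e-3
b1, b2, b3 = 3 * wo, 3 * wo**2, wo**3     # bandwidth parameterization

d, u = 2.0, 0.0                  # true disturbance, control input
x1, x2 = 0.0, 0.0                # plant states
z1, z2, z3 = 0.0, 0.0, 0.0       # observer states
for _ in range(3000):            # simulate 3 s with explicit Euler
    e = x1 - z1                  # measured output is y = x1
    z1 += dt * (z2 + b1 * e)
    z2 += dt * (z3 + b2 * e + b0 * u)
    z3 += dt * (b3 * e)
    x1 += dt * x2
    x2 += dt * (d + b0 * u)
# z3 has converged to the unknown disturbance d
```

    In a composite controller, the estimate z3 would be fed back (e.g. u = (u_fb - z3)/b0) to cancel the disturbance before the feedback loop acts.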

  5. Ensemble transform sensitivity method for adaptive observations

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Xie, Yuanfu; Wang, Hongli; Chen, Dehui; Toth, Zoltan

    2016-01-01

    The Ensemble Transform (ET) method has been shown to be useful in providing guidance for adaptive observation deployment. It predicts forecast error variance reduction for each possible deployment using its corresponding transformation matrix in an ensemble subspace. In this paper, a new ET-based sensitivity (ETS) method, which calculates the gradient of forecast error variance reduction in terms of analysis error variance reduction, is proposed to specify regions for possible adaptive observations. ETS is a first order approximation of the ET; it requires just one calculation of a transformation matrix, increasing computational efficiency (60%-80% reduction in computational cost). An explicit mathematical formulation of the ETS gradient is derived and described. Both the ET and ETS methods are applied to the Hurricane Irene (2011) case and a heavy rainfall case for comparison. The numerical results imply that the sensitive areas estimated by the ETS and ET are similar. However, ETS is much more efficient, particularly when the resolution is higher and the number of ensemble members is larger.

  6. Adaptive GOP structure based on motion coherence

    NASA Astrophysics Data System (ADS)

    Ma, Yanzhuo; Wan, Shuai; Chang, Yilin; Yang, Fuzheng; Wang, Xiaoyu

    2009-08-01

    An adaptive group of pictures (GOP) structure helps increase the efficiency of video encoding by taking account of the characteristics of the video content. This paper proposes a method for adaptive GOP structure selection based on motion coherence: key frames are extracted according to motion acceleration, and a coding type is assigned to each key and non-key frame correspondingly. Motion deviation, rather than motion magnitude, is then used to select the number of B frames. Experimental results show that the proposed adaptive GOP structure selection achieves a performance gain of 0.2-1 dB over a fixed GOP structure and offers better transmission resilience. Moreover, the method's low complexity makes it suitable for real-time video coding.
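
    The key-frame extraction step can be sketched by thresholding motion acceleration, taken here as the second difference of a per-frame motion magnitude. The function name and threshold are assumptions for illustration; the paper's actual criterion may differ.

```python
import numpy as np

def key_frames(motion, accel_thresh=2.0):
    """Flag key frames where the motion acceleration (second difference
    of the per-frame motion magnitude) exceeds a threshold."""
    motion = np.asarray(motion, float)
    accel = np.abs(np.diff(motion, n=2))   # acceleration at frames 1..N-2
    return np.flatnonzero(accel > accel_thresh) + 1
```

    Frames with smooth, coherent motion produce no key frames, so the GOP can be long; a sudden motion change triggers key frames and shortens the GOP there.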

  7. Variational method for adaptive grid generation

    SciTech Connect

    Brackbill, J.U.

    1983-01-01

    A variational method for generating adaptive meshes is described. Functionals measuring smoothness, skewness, orientation, and the Jacobian are minimized to generate a mapping from a rectilinear domain in natural coordinates to an arbitrary domain in physical coordinates. From the mapping, a mesh is easily constructed. When the method is used to adaptively zone computational problems, as few as one third the number of mesh points are required in each coordinate direction compared with a uniformly zoned mesh.

  8. Online Adaptive Replanning Method for Prostate Radiotherapy

    SciTech Connect

    Ahunbay, Ergun E.; Peng Cheng; Holmes, Shannon; Godley, Andrew; Lawton, Colleen; Li, X. Allen

    2010-08-01

    Purpose: To report the application of an adaptive replanning technique for prostate cancer radiotherapy (RT), consisting of two steps: (1) segment aperture morphing (SAM), and (2) segment weight optimization (SWO), to account for interfraction variations. Methods and Materials: The new "SAM+SWO" scheme was retrospectively applied to the daily CT images acquired for 10 prostate cancer patients on a linear accelerator and CT-on-Rails combination during the course of RT. Doses generated by the SAM+SWO scheme based on the daily CT images were compared with doses generated after patient repositioning using the current planning target volume (PTV) margin (5 mm, 3 mm toward rectum) and a reduced margin (2 mm), along with full reoptimization based on the daily CT images, to evaluate dosimetric benefits. Results: For all cases studied, the online replanning method provided significantly better target coverage compared with repositioning with the reduced PTV margin (13% increase in minimum prostate dose) and improved organ sparing compared with repositioning with the regular PTV margin (13% decrease in the generalized equivalent uniform dose of the rectum). The time required to complete the online replanning process was 6 ± 2 minutes. Conclusion: The proposed online replanning method can be used to account for interfraction variations in prostate RT within a practically acceptable time frame (5-10 min) and with significant dosimetric benefits. On the basis of this study, the developed online replanning scheme is being implemented in the clinic for prostate RT.

  9. Identification of nonlinear optical systems using adaptive kernel methods

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Zhang, Changjiang; Zhang, Haoran; Feng, Genliang; Xu, Xiuling

    2005-12-01

    An identification approach for nonlinear optical dynamic systems, based on adaptive kernel methods that are a modified version of the least squares support vector machine (LS-SVM), is presented in order to obtain the reference dynamic model for real-time applications such as adaptive signal processing of optical systems. The feasibility of this approach is demonstrated by computer simulation through identification of a Bragg acousto-optical bistable system. Unlike artificial neural networks, adaptive kernel methods possess prominent advantages: overfitting is unlikely to occur because the structural risk minimization criterion is employed, and the globally optimal solution is uniquely obtained because training reduces to solving a set of linear equations. Moreover, adaptive kernel methods remain effective for nonlinear optical systems when system parameters vary. The method is robust with respect to noise, and it constitutes a powerful tool for the identification of nonlinear optical systems.
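
    The "training reduces to a linear solve" property can be shown with plain LS-SVM regression (not the authors' modified version): the dual variables come from one linear system with an RBF kernel. The 1-D inputs and hyperparameter values are illustrative assumptions.

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, width=0.5):
    """Plain LS-SVM regression.  Training solves the single linear system
      [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    with an RBF kernel; prediction is y(x) = sum_i alpha_i k(x, x_i) + b."""
    K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * width**2))
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma    # ridge term from the LS loss
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    predict = lambda Xq: (np.exp(-((Xq[:, None] - X[None, :]) ** 2)
                                 / (2 * width**2)) @ alpha + b)
    return predict
```

    Because the solution is a single linear solve, there is no iterative training and the optimum is global, which is the advantage the abstract highlights over neural networks.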

  10. Development of an Output-based Adaptive Method for Multi-Dimensional Euler and Navier-Stokes Simulations

    NASA Technical Reports Server (NTRS)

    Darmofal, David L.

    2003-01-01

    The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptation strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.

  11. Adaptive discrete cosine transform-based image compression method on a heterogeneous system platform using Open Computing Language

    NASA Astrophysics Data System (ADS)

    Alqudami, Nasser; Kim, Shin-Dug

    2014-11-01

    Discrete cosine transform (DCT) is one of the major operations in image compression standards, and it requires intensive and complex computations. Recent computer systems and handheld devices are equipped with high-capability computing devices such as a general-purpose graphics processing unit (GPGPU) in addition to the traditional multicore CPU. We develop an optimized parallel implementation of the forward DCT algorithm for JPEG image compression using the Open Computing Language (OpenCL). This OpenCL parallel implementation combines a multicore CPU and a GPGPU in a single solution to perform DCT computations efficiently, applying optimization techniques to enhance kernel execution time and data movement. Separate optimized OpenCL kernels (CPU-based and GPU-based) were developed based on appropriate device-specific optimization factors, such as thread mapping, thread granularity, vector-based memory access, and the given workload. The performance of the DCT is evaluated in a heterogeneous environment, and our OpenCL parallel implementation speeds up DCT execution by factors of 3.68 and 5.58 for different image sizes and formats, in terms of workload allocation and data transfer mechanisms. The obtained speedup indicates the scalability of the DCT performance.
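
    The block DCT that such kernels parallelize is separable, which is what makes it map well onto per-block work items. A reference sketch of the orthonormal 2-D DCT-II on one 8x8 block (numpy here for clarity, not OpenCL):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix.  The 2-D block transform is then the
    separable product C @ block @ C.T, and its inverse is C.T @ coef @ C."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos((2 * j + 1) * k * np.pi / (2 * n))
    C[0, :] = 1.0 / np.sqrt(n)     # DC row scaling for orthonormality
    return C

C = dct_matrix(8)
block = np.arange(64, dtype=float).reshape(8, 8)
coef = C @ block @ C.T             # forward 2-D DCT of one 8x8 block
recon = C.T @ coef @ C             # exact inverse (C is orthogonal)
```

    Each 8x8 block is independent, so a GPU kernel can assign one block (or one row pass and one column pass) per work group, which is the structure the OpenCL implementation above exploits.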

  12. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, making the algorithm suitable for separating a pure ECG signal from noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated by an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive white Gaussian noise. Simulation results show that the proposed method performs better on denoising and QRS detection in comparison with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.

  13. Hysteresis compensation of the piezoelectric ceramic actuators-based tip/tilt mirror with a neural network method in adaptive optics

    NASA Astrophysics Data System (ADS)

    Wang, Chongchong; Wang, Yukun; Hu, Lifa; Wang, Shaoxin; Cao, Zhaoliang; Mu, Quanquan; Li, Dayu; Yang, Chengliang; Xuan, Li

    2016-05-01

    The intrinsic hysteresis nonlinearity of piezo-actuators can severely degrade the positioning accuracy of a tip-tilt mirror (TTM) in an adaptive optics system. This paper focuses on compensating this hysteresis nonlinearity by feed-forward linearization with an inverse hysteresis model. The inverse hysteresis model is based on the classical Preisach model, and a neural network (NN) is used to describe the hysteresis loop. To apply it in real-time adaptive correction, an analytical nonlinear function derived from the NN is introduced to compute the inverse hysteresis model output instead of the time-consuming NN simulation process. Experimental results show that the proposed method effectively linearizes the TTM behavior, with the static hysteresis nonlinearity of the TTM reduced from 15.6% to 1.4%. In addition, tip-tilt tracking experiments using the integrator with and without hysteresis compensation are conducted. The wavefront tip-tilt aberration rejection ability of the TTM control system is significantly improved, with the -3 dB error rejection bandwidth increasing from 46 to 62 Hz.

  14. Laser Raman detection for oral cancer based on an adaptive Gaussian process classification method with posterior probabilities

    NASA Astrophysics Data System (ADS)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming

    2013-03-01

    Existing methods for the early and differential diagnosis of oral cancer are limited by inconspicuous early symptoms and imperfect imaging examination methods. In this paper, classification models for oral adenocarcinoma, carcinoma tissues, and a control group using just four features are established with a hybrid Gaussian process (HGP) classification algorithm that introduces noise reduction and posterior probability mechanisms. HGP shows much better performance in the experimental results. In the experiments, oral tissues were divided into three groups, adenocarcinoma (n = 87), carcinoma (n = 100), and control (n = 134), and spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. The results indicate that HGP gives accurate results in laser Raman spectroscopy (LRS) detection analysis for the diagnosis of oral cancer, and the prospects for application are satisfactory.
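
    The Matthews correlation coefficient reported above is, in its standard binary form, computed from the confusion-matrix counts; a minimal sketch (the study itself is three-class, where a multiclass generalization of MCC would be used):

```python
import numpy as np

def mcc(y_true, y_pred):
    """Binary Matthews correlation coefficient:
    MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).
    Returns 0.0 when any marginal count is zero."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

    MCC ranges from -1 (total disagreement) through 0 (chance level) to +1 (perfect prediction), so the reported 0.36 indicates a modest positive correlation.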

  15. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

    This paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way, by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time-marching algorithm; step selection is controlled by the required temporal accuracy. To account for the strong temperature dependence of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinement in the accurate prediction of damage levels and failure time.

  16. Wavelet methods in multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Helin, T.; Yudytskiy, M.

    2013-08-01

    Next-generation ground-based telescopes rely heavily on adaptive optics to overcome the limitation of atmospheric turbulence. In future adaptive optics modalities such as multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on the locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory.
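
    The MAP estimation step above reduces to a large regularized least-squares solve by conjugate gradients. A minimal dense stand-in (a generic Tikhonov prior in place of the paper's wavelet-domain prior and preconditioner; names and settings are assumptions) might look like:

```python
import numpy as np

def map_estimate_cg(A, b, lam=1e-2, tol=1e-10, max_iter=500):
    """Solve the regularized normal equations (A^T A + lam*I) x = A^T b with
    conjugate gradients; the system matrix is only applied, never formed."""
    n = A.shape[1]
    x = np.zeros(n)
    matvec = lambda v: A.T @ (A @ v) + lam * v
    r = A.T @ b - matvec(x)       # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:          # squared residual small enough
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

    On a small problem the result agrees with a direct solve of the explicitly formed normal equations.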

  17. A fast tree-based method for estimating column densities in adaptive mesh refinement codes. Influence of UV radiation field on the structure of molecular clouds

    NASA Astrophysics Data System (ADS)

    Valdivia, Valeska; Hennebelle, Patrick

    2014-11-01

    Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled and their interplay influences the physical and chemical properties of gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on clump formation. Results: We find that the accuracy for the extinction of the tree-based method is better than 10%, while the relative error for the column density can be much larger. We describe the implementation of a method based on precalculating the geometrical terms that noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds we present the probability distribution function of gas and the associated temperature per density bin and the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations since no communication is needed between CPUs when using a fully threaded tree. It is therefore well suited to parallel computing. We show that the screening of far-UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect. We

  18. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

  19. A fourth order accurate adaptive mesh refinement method for Poisson's equation

    SciTech Connect

    Barad, Michael; Colella, Phillip

    2004-08-20

    We present a block-structured adaptive mesh refinement (AMR) method for computing solutions to Poisson's equation in two and three dimensions. It is based on a conservative, finite-volume formulation of the classical Mehrstellen methods. This is combined with finite volume AMR discretizations to obtain a method that is fourth-order accurate in solution error, and with easily verifiable solvability conditions for Neumann and periodic boundary conditions.
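
    The Mehrstellen idea can be illustrated at a single grid point: the compact 9-point stencil applied to u matches f = Δu only to second order, but matches the corrected right-hand side f + (h²/12)Δf to fourth order. A sketch (function names and the test point are illustrative, not the paper's AMR implementation):

```python
def mehrstellen_residual(u, lap_u, lap2_u, x, y, h):
    """Apply the compact 9-point Mehrstellen stencil to u at (x, y) and
    compare with the corrected right-hand side f + h^2/12 * (lap f), where
    f = lap u.  The returned truncation error is O(h^4)."""
    c = u(x, y)
    cross = u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
    diag = (u(x + h, y + h) + u(x - h, y + h)
            + u(x + h, y - h) + u(x - h, y - h))
    lhs = (4.0 * cross + diag - 20.0 * c) / (6.0 * h * h)   # stencil [1 4 1; 4 -20 4; 1 4 1]/(6h^2)
    rhs = lap_u(x, y) + (h * h / 12.0) * lap2_u(x, y)       # corrected RHS
    return lhs - rhs
```

    For u = sin(x)sin(y) one has Δu = -2u and Δ²u = 4u, so both sides are known in closed form and the residual shrinks like h⁴.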

  20. Adaptive multiscale model reduction with Generalized Multiscale Finite Element Methods

    NASA Astrophysics Data System (ADS)

    Chung, Eric; Efendiev, Yalchin; Hou, Thomas Y.

    2016-09-01

    In this paper, we discuss a general multiscale model reduction framework based on multiscale finite element methods. We give a brief overview of related multiscale methods. Due to page limitations, the overview focuses on a few related methods and is not intended to be comprehensive. We present a general adaptive multiscale model reduction framework, the Generalized Multiscale Finite Element Method. Besides the method's basic outline, we discuss some important ingredients needed for the method's success. We also discuss several applications. The proposed method allows performing local model reduction in the presence of high contrast and no scale separation.

  1. Predictor-Based Model Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Lavretsky, Eugene; Gadient, Ross; Gregory, Irene M.

    2009-01-01

    This paper is devoted to robust, Predictor-based Model Reference Adaptive Control (PMRAC) design. The proposed adaptive system is compared with the now-classical Model Reference Adaptive Control (MRAC) architecture. Simulation examples are presented. Numerical evidence indicates that the proposed PMRAC tracking architecture has better transient characteristics than MRAC. In this paper, we presented a state-predictor-based direct adaptive tracking design methodology for multi-input dynamical systems with partially known dynamics. Efficiency of the design was demonstrated using the short-period dynamics of an aircraft. Formal proof of the reported PMRAC benefits constitutes future research and will be reported elsewhere.
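
    For readers unfamiliar with the baseline, a textbook scalar MRAC loop (not the paper's predictor-based design) can be simulated in a few lines; the plant, adaptation gain and reference signal below are arbitrary illustrative choices:

```python
import math

def simulate_mrac(a_true=2.0, gamma=5.0, dt=1e-3, t_end=20.0):
    """Textbook scalar MRAC.  Plant: xdot = a*x + u with a unknown.
    Reference model: xmdot = -xm + r.  Control u = -k*x - x + r matches the
    model when k = a; the Lyapunov adaptive law kdot = gamma*e*x drives the
    tracking error e = x - xm to zero."""
    x = xm = k = 0.0
    errs = []
    for i in range(int(t_end / dt)):
        r = math.sin(i * dt)              # reference command
        u = -k * x - x + r
        e = x - xm
        x_new = x + dt * (a_true * x + u)
        xm_new = xm + dt * (-xm + r)
        k += dt * gamma * e * x           # adaptation before state swap
        x, xm = x_new, xm_new
        errs.append(abs(e))
    return k, errs
```

    With a sinusoidal reference (persistent excitation for the single unknown parameter), the tracking error decays and the estimate k approaches the true plant parameter.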

  2. The model adaptive space shrinkage (MASS) approach: a new method for simultaneous variable selection and outlier detection based on model population analysis.

    PubMed

    Wen, Ming; Deng, Bai-Chuan; Cao, Dong-Sheng; Yun, Yong-Huan; Yang, Rui-Han; Lu, Hong-Mei; Liang, Yi-Zeng

    2016-10-01

    Variable selection and outlier detection are important processes in chemical modeling. Usually, they affect each other. Their performing orders also strongly affect the modeling results. Currently, many studies perform these processes separately and in different orders. In this study, we examined the interaction between outliers and variables and compared the modeling procedures performed with different orders of variable selection and outlier detection. Because the order of outlier detection and variable selection can affect the interpretation of the model, it is difficult to decide which order is preferable when the predictabilities (prediction error) of the different orders are relatively close. To address this problem, a simultaneous variable selection and outlier detection approach called Model Adaptive Space Shrinkage (MASS) was developed. This proposed approach is based on model population analysis (MPA). Through weighted binary matrix sampling (WBMS) from model space, a large number of partial least square (PLS) regression models were built, and the elite parts of the models were selected to statistically reassign the weight of each variable and sample. Then, the whole process was repeated until the weights of the variables and samples converged. Finally, MASS adaptively found a high performance model which consisted of the optimized variable subset and sample subset. The combination of these two subsets could be considered as the cleaned dataset used for chemical modeling. In the proposed approach, the problem of the order of variable selection and outlier detection is avoided. One near infrared spectroscopy (NIR) dataset and one quantitative structure-activity relationship (QSAR) dataset were used to test this approach. The result demonstrated that MASS is a useful method for data cleaning before building a predictive model. PMID:27435388
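
    The MPA loop can be caricatured with ordinary least squares standing in for PLS: sample binary masks over variables and samples, score each model on a held-out half, and let the elite models re-weight the inclusion probabilities. This is a toy sketch under assumed settings, not the authors' MASS implementation:

```python
import numpy as np

def mass_sketch(X, y, n_models=300, n_iter=10, elite_frac=0.2, seed=0):
    """Toy MPA-style space shrinkage: weighted binary masks over variables
    and samples, least-squares fits (stand-in for PLS), elite re-weighting."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w_var = np.full(p, 0.5)          # inclusion probability per variable
    w_smp = np.full(n, 0.8)          # inclusion probability per sample
    for _ in range(n_iter):
        results = []
        for _ in range(n_models):
            vmask = rng.random(p) < w_var
            smp = np.flatnonzero(rng.random(n) < w_smp)
            if vmask.sum() < 1 or smp.size < 2 * (vmask.sum() + 2):
                continue                      # degenerate draw, skip
            rng.shuffle(smp)
            fit, val = smp[: smp.size // 2], smp[smp.size // 2:]
            cols = np.flatnonzero(vmask)
            coef, *_ = np.linalg.lstsq(X[np.ix_(fit, cols)], y[fit], rcond=None)
            rmse = np.sqrt(np.mean((y[val] - X[np.ix_(val, cols)] @ coef) ** 2))
            results.append((rmse, vmask, np.isin(np.arange(n), smp)))
        results.sort(key=lambda r: r[0])
        elite = results[: max(1, int(elite_frac * len(results)))]
        # statistically reassign weights from elite frequencies
        w_var = np.clip(np.mean([r[1] for r in elite], axis=0), 0.05, 0.95)
        w_smp = np.clip(np.mean([r[2] for r in elite], axis=0), 0.05, 0.95)
    return w_var, w_smp
```

    On synthetic data with two informative variables and a few gross outliers, the informative variables end with high weights and the outliers with low ones.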

  4. A Novel Method for Predicting Late Genitourinary Toxicity After Prostate Radiation Therapy and the Need for Age-Based Risk-Adapted Dose Constraints

    SciTech Connect

    Ahmed, Awad A.; Egleston, Brian; Alcantara, Pino; Li, Linna; Pollack, Alan; Horwitz, Eric M.; Buyyounouski, Mark K.

    2013-07-15

    Background: There are no well-established normal tissue sparing dose–volume histogram (DVH) criteria that limit the risk of urinary toxicity from prostate radiation therapy (RT). The aim of this study was to determine which criteria predict late toxicity among various DVH parameters when contouring the entire solid bladder and its contents versus the bladder wall. The area under the histogram curve (AUHC) was also analyzed. Methods and Materials: From 1993 to 2000, 503 men with prostate cancer received 3-dimensional conformal RT (median follow-up time, 71 months). The whole bladder and the bladder wall were contoured in all patients. The primary endpoint was grade ≥2 genitourinary (GU) toxicity occurring ≥3 months after completion of RT. Cox regressions of time to grade ≥2 toxicity were estimated separately for the entire bladder and bladder wall. Concordance probability estimates (CPE) assessed model discriminative ability. Before training the models, an external random test group of 100 men was set aside for testing. Separate analyses were performed based on the mean age (≤ 68 vs >68 years). Results: Age, pretreatment urinary symptoms, mean dose (entire bladder and bladder wall), and AUHC (entire bladder and bladder wall) were significant (P<.05) in multivariable analysis. Overall, bladder wall CPE values were higher than solid bladder values. The AUHC for bladder wall provided the greatest discrimination for late bladder toxicity when compared with alternative DVH points, with CPE values of 0.68 for age ≤68 years and 0.81 for age >68 years. Conclusion: The AUHC method based on bladder wall volumes was superior for predicting late GU toxicity. Age >68 years was associated with late grade ≥2 GU toxicity, which suggests that risk-adapted dose constraints based on age should be explored.

  5. Solving Chemical Master Equations by an Adaptive Wavelet Method

    SciTech Connect

    Jahnke, Tobias; Galan, Steffen

    2008-09-01

    Solving chemical master equations is notoriously difficult due to the tremendous number of degrees of freedom. We present a new numerical method which efficiently reduces the size of the problem in an adaptive way. The method is based on a sparse wavelet representation and an algorithm which, in each time step, detects the essential degrees of freedom required to approximate the solution up to the desired accuracy.
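
    The adaptive selection of essential degrees of freedom can be illustrated on the simplest master equation, an immigration-death process, by truncating the probability vector each step to states carrying non-negligible probability. This is a crude thresholding stand-in for the paper's sparse wavelet representation; all parameters are illustrative:

```python
import numpy as np

def cme_adaptive(k=10.0, gamma=1.0, t_end=5.0, dt=1e-3, tol=1e-12, n_max=200):
    """Explicit Euler on the immigration-death master equation
    dp_n/dt = k (p_{n-1} - p_n) + gamma ((n+1) p_{n+1} - n p_n),
    keeping only an active window of states with probability above tol."""
    p = np.zeros(n_max)
    p[0] = 1.0
    lo, hi = 0, 2                       # active window [lo, hi)
    for _ in range(int(t_end / dt)):
        idx = np.arange(lo, hi)
        dp = np.zeros_like(p)
        dp[idx] -= (k + gamma * idx) * p[idx]      # leave state n
        dp[idx + 1] += k * p[idx]                  # birth n -> n+1
        pos = idx[idx > 0]
        dp[pos - 1] += gamma * pos * p[pos]        # death n -> n-1
        p += dt * dp
        live = np.flatnonzero(p > tol)             # adapt the window
        lo = max(live.min() - 1, 0)
        hi = min(live.max() + 2, n_max - 1)
    return p, hi - lo
```

    The stationary distribution is Poisson with mean k/gamma = 10, so by t = 5 the mean is close to 10 while the active window stays far smaller than the full state space.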

  6. Workshop on adaptive grid methods for fusion plasmas

    SciTech Connect

    Wiley, J.C.

    1995-07-01

    The author describes a general `hp` finite element method with adaptive grids. The code was based on the work of Oden, et al. The term `hp` refers to the method of spatial refinement (h), in conjunction with the order of polynomials used as a part of the finite element discretization (p). This finite element code seems to handle well the different mesh grid sizes occuring between abuted grids with different resolutions.

  7. Adaptive Accommodation Control Method for Complex Assembly

    NASA Astrophysics Data System (ADS)

    Kang, Sungchul; Kim, Munsang; Park, Shinsuk

    Robotic systems have been used to automate assembly tasks in manufacturing and in teleoperation. Conventional robotic systems, however, have been ineffective in controlling contact force in the multiple contact states of complex assembly that involves interactions between complex-shaped parts. Unlike robots, humans excel at complex assembly tasks by utilizing their intrinsic impedance, force and torque sensation, and tactile contact clues. By examining human behavior in assembling complex parts, this study proposes a novel geometry-independent control method for robotic assembly using an adaptive accommodation (or damping) algorithm. Two important conditions for complex assembly, target approachability and bounded contact force, can be met by the proposed control scheme. It generates target-approachable motion that leads the object to move closer to a desired target position, while contact force is kept under a predetermined value. Experimental results from complex assembly tests have confirmed the feasibility and applicability of the proposed method.
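
    The core accommodation idea — command a velocity reduced in proportion to the sensed force, so the steady-state contact force is bounded by v0/A — is easy to demonstrate in one dimension. This is a minimal kinematic sketch, not the paper's adaptive multi-contact controller; all parameters are illustrative:

```python
def accommodation_approach(v0=0.05, A=0.01, k_wall=1000.0,
                           x_wall=0.10, dt=1e-3, t_end=5.0):
    """1-D accommodation (damping) control: commanded velocity is the
    approach velocity minus A times the sensed contact force, so the
    contact force converges to v0 / A and never exceeds it here."""
    x, forces = 0.0, []
    for _ in range(int(t_end / dt)):
        f = k_wall * max(0.0, x - x_wall)   # elastic wall contact force
        v = v0 - A * f                      # accommodation law
        x += v * dt
        forces.append(f)
    return x, forces
```

    The tool reaches the wall (target approachability) while the contact force stays below the bound v0/A = 5.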

  8. Adaptive method with intercessory feedback control for an intelligent agent

    DOEpatents

    Goldsmith, Steven Y.

    2004-06-22

    An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.

  9. Adapting implicit methods to parallel processors

    SciTech Connect

    Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D.

    1994-12-31

    When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice (e.g. larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems, where it is common to take the computational domain and distribute the grid points over the processors so as to maintain a relatively even workload per processor. This creates a problem at the locations in the domain where adjacent points are not on the same processor. In order for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this will result in idle processors during part of the computation, and the effective speed improvement from using a parallel processor decreases as the number of idle processors increases.
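
    The boundary-exchange pattern described above can be demonstrated without MPI: two subdomain arrays with one ghost point each, refreshed before every Jacobi sweep, reproduce the serial iteration exactly. This is a sketch of the communication pattern only (a real code would exchange asynchronously to hide the idle time the abstract mentions):

```python
def jacobi_sweep(u, f, h):
    """One Jacobi sweep for -u'' = f with fixed endpoint values."""
    return ([u[0]]
            + [0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
               for i in range(1, len(u) - 1)]
            + [u[-1]])

def serial_vs_distributed(n=17, sweeps=200):
    """Split the grid over two 'processors'; left owns global points 0..m-1,
    right owns m..n-1, each with one ghost point refreshed per sweep."""
    h = 1.0 / (n - 1)
    f = [1.0] * n
    serial = [0.0] * n
    m = n // 2
    left = [0.0] * (m + 1)           # global 0..m, local index m is a ghost
    right = [0.0] * (n - m + 1)      # global m-1..n-1, local index 0 is a ghost
    for _ in range(sweeps):
        serial = jacobi_sweep(serial, f, h)
        # "message exchange": refresh ghosts with the neighbour's owned values
        left[m] = right[1]           # value at global index m
        right[0] = left[m - 1]       # value at global index m-1
        left = jacobi_sweep(left, f[: m + 1], h)
        right = jacobi_sweep(right, f[m - 1:], h)
    return serial, left, right, m
```

    Because each local sweep uses exactly the same previous-iteration values as the serial sweep, the distributed result matches the serial one to rounding.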

  10. Linearly-Constrained Adaptive Signal Processing Methods

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.

    1988-01-01

    In adaptive least-squares estimation problems, a desired signal d(n) is estimated using a linear combination of L observation samples x1(n), x2(n), ..., xL(n), denoted by the vector X(n). The estimate is formed as the inner product of this vector with a corresponding L-dimensional weight vector W. One particular weight vector of interest is Wopt, which minimizes the mean-square difference between d(n) and the estimate. In this context, the term `mean-square difference' is a quadratic measure such as statistical expectation or time average. The specific value of W which achieves the minimum is given by the product of the inverse data covariance matrix and the cross-correlation between the data vector and the desired signal. The latter is often referred to as the P-vector. For those cases in which time samples of both the desired and data vector signals are available, a variety of adaptive methods have been proposed which will guarantee that an iterative weight vector Wa(n) converges (in some sense) to the optimal solution. Two which have been extensively studied are the recursive least-squares (RLS) method and the LMS gradient approximation approach. There are several problems of interest in the communication and radar environment in which the optimal least-squares weight set is of interest and in which time samples of the desired signal are not available. Examples can be found in array processing, in which only the direction of arrival of the desired signal is known, and in single-channel filtering, where the spectrum of the desired response is known a priori. One approach to these problems which has been suggested is the P-vector algorithm, which is an LMS-like approximate gradient method. Although it is easy to derive the mean and variance of the weights which result with this algorithm, there has never been an identification of the corresponding underlying error surface which the procedure searches. The purpose of this paper is to suggest an alternative
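
    The LMS update referred to above is compact enough to state directly. The system-identification setup below (a noiseless desired signal generated from a known FIR response) is an illustrative choice, not the paper's P-vector scenario:

```python
import numpy as np

def lms_identify(h_true, n_samples=5000, mu=0.01, seed=0):
    """LMS gradient adaptation: estimate an unknown FIR response h_true by
    minimizing the mean-square error between d(n) and the filter output."""
    rng = np.random.default_rng(seed)
    L = len(h_true)
    x = rng.normal(size=n_samples)     # white input
    w = np.zeros(L)
    for n in range(L, n_samples):
        xn = x[n - L:n][::-1]          # data vector X(n)
        d = h_true @ xn                # desired signal from the true system
        e = d - w @ xn                 # estimation error
        w = w + 2 * mu * e * xn        # LMS weight update
    return w
```

    With white input and a noiseless desired signal, the weights converge to the true response, i.e. to Wopt.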

  11. Domain Adaptation of Deformable Part-Based Models.

    PubMed

    Xu, Jiaolong; Ramos, Sebastian; Vázquez, David; López, Antonio M

    2014-12-01

    The accuracy of object classifiers can significantly drop when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, adapting the classifiers to the scenario in which they must operate is of paramount importance. We present novel domain adaptation (DA) methods for object detection. As proof of concept, we focus on adapting the state-of-the-art deformable part-based model (DPM) for pedestrian detection. We introduce an adaptive structural SVM (A-SSVM) that adapts a pre-learned classifier between different domains. By taking into account the inherent structure in feature space (e.g., the parts in a DPM), we propose a structure-aware A-SSVM (SA-SSVM). Neither A-SSVM nor SA-SSVM needs to revisit the source-domain training data to perform the adaptation. Rather, a low number of target-domain training examples (e.g., pedestrians) are used. To address the scenario where there are no target-domain annotated samples, we propose a self-adaptive DPM based on a self-paced learning (SPL) strategy and a Gaussian Process Regression (GPR). Two types of adaptation tasks are assessed: from both synthetic pedestrians and general persons (PASCAL VOC) to pedestrians imaged from an on-board camera. Results show that our proposals avoid accuracy drops as high as 15 points when comparing adapted and non-adapted detectors. PMID:26353145

  12. Gradient-based adaptation of continuous dynamic model structures

    NASA Astrophysics Data System (ADS)

    La Cava, William G.; Danai, Kourosh

    2016-01-01

    A gradient-based method of symbolic adaptation is introduced for a class of continuous dynamic models. The proposed model structure adaptation method starts with the first-principles model of the system and adapts its structure after adjusting its individual components in symbolic form. A key contribution of this work is its introduction of the model's parameter sensitivity as the measure of symbolic changes to the model. This measure, which is essential to defining the structural sensitivity of the model, not only accommodates algebraic evaluation of candidate models in lieu of more computationally expensive simulation-based evaluation, but also makes possible the implementation of gradient-based optimisation in symbolic adaptation. The proposed method is applied to models of several virtual and real-world systems that demonstrate its potential utility.

  13. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M

    2014-11-18

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  14. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo

    2014-04-15

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
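
    The filter-then-recalibrate pattern described in both patent records can be sketched as follows; the range criterion and exponential recalibration below are placeholder choices for illustration, not the patented method:

```python
def update_model(model, readings, lo=0.0, hi=100.0, alpha=0.1):
    """Filter incoming operating data against a simple quality criterion
    (a plausible sensor range) and recalibrate the learned scope of normal
    operation (mean and deviation band) from the accepted values only."""
    good = [r for r in readings if lo <= r <= hi]       # quality gate
    for r in good:
        model["mean"] += alpha * (r - model["mean"])    # exponential update
        dev = abs(r - model["mean"])
        model["band"] += alpha * (dev - model["band"])  # adjust normal scope
    return model, len(readings) - len(good)
```

    Gross sensor glitches are rejected before they can distort the learned scope of normal operation, while in-range values shift it gradually.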

  15. Adaptable state based control system

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Dvorak, Daniel L. (Inventor); Gostelow, Kim P. (Inventor); Starbird, Thomas W. (Inventor); Gat, Erann (Inventor); Chien, Steve Ankuo (Inventor); Keller, Robert M. (Inventor)

    2004-01-01

    An autonomous controller, comprised of a state knowledge manager, a control executor, hardware proxies and a statistical estimator collaborates with a goal elaborator, with which it shares common models of the behavior of the system and the controller. The elaborator uses the common models to generate from temporally indeterminate sets of goals, executable goals to be executed by the controller. The controller may be updated to operate in a different system or environment than that for which it was originally designed by the replacement of shared statistical models and by the instantiation of a new set of state variable objects derived from a state variable class. The adaptation of the controller does not require substantial modification of the goal elaborator for its application to the new system or environment.

  16. A feature extraction method of the particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization for Brillouin scattering spectra

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Zhao, Yu; Fu, Xinghu; Xu, Jinrui

    2016-10-01

    A novel particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization is proposed for extracting the features of Brillouin scattering spectra. Firstly, an adaptive inertia weight parameter for the velocity is introduced into the basic particle swarm algorithm. Based on the particles' current iteration number and adaptation value, the algorithm can change the weight coefficient and adjust the iteration speed of the search space for particles, so the local optimization ability can be enhanced. Secondly, a logistic self-mapping chaotic search is carried out by using chaos optimization in the particle swarm optimization algorithm, which helps the algorithm escape from local optima. The novel algorithm is compared with the finite element analysis-Levenberg Marquardt algorithm, the particle swarm optimization-Levenberg Marquardt algorithm and the particle swarm optimization algorithm by changing the linewidth, the signal-to-noise ratio and the linear weight ratio of the Brillouin scattering spectra. Then the algorithm is applied to the feature extraction of Brillouin scattering spectra at different temperatures. The simulation analysis and experimental results show that this algorithm achieves a high fitting degree and a small Brillouin frequency shift error for different linewidths, SNRs and linear weight ratios. Therefore, this algorithm can be applied to distributed optical fiber sensing systems based on Brillouin optical time domain reflection, where it can effectively improve the accuracy of Brillouin frequency shift extraction.
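
    The two ingredients — an inertia weight that changes over the run and a chaotic logistic-map search to escape local optima — can be sketched on a toy objective. A decaying schedule stands in for the paper's fitness-adaptive weight, and all parameters are illustrative:

```python
import numpy as np

def pso_adaptive_chaos(f, dim=2, n_particles=20, iters=200, seed=0):
    """PSO with a decaying inertia weight (stand-in for the fitness-adaptive
    weight) plus a chaotic logistic-map restart of the worst particle."""
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    z = 0.7                                   # chaotic variable
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters             # inertia weight schedule
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
        # chaotic search: re-seed the worst particle via the logistic map
        worst = int(vals.argmax())
        z = 4.0 * z * (1.0 - z)
        x[worst] = lo + (hi - lo) * z
    return g, f(g)
```

    On a 2-D sphere function the swarm reliably locates the global minimum; in the paper's setting the objective would instead be the fit error of a Brillouin spectral model.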

  17. An Adaptive Cross-Architecture Combination Method for Graph Traversal

    SciTech Connect

    You, Yang; Song, Shuaiwen; Kerbyson, Darren J.

    2014-06-18

    Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
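
    The combination (direction-optimizing) BFS can be sketched with a fixed frontier-size threshold standing in for the paper's regression-predicted switching point:

```python
from collections import deque

def plain_bfs(adj, src):
    """Reference queue-based BFS over an adjacency-list graph."""
    dist = [-1] * len(adj)
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if dist[w] == -1:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def hybrid_bfs(adj, src, alpha=0.05):
    """Direction-optimizing BFS: top-down while the frontier is small,
    bottom-up (unvisited vertices scan their neighbours) once it grows past
    alpha * n -- a fixed heuristic where the paper fits a regression model."""
    n = len(adj)
    dist = [-1] * n
    dist[src] = 0
    frontier, level = [src], 0
    while frontier:
        nxt = []
        if len(frontier) < alpha * n:          # top-down step
            for u in frontier:
                for w in adj[u]:
                    if dist[w] == -1:
                        dist[w] = level + 1
                        nxt.append(w)
        else:                                  # bottom-up step
            fs = set(frontier)
            for w in range(n):
                if dist[w] == -1 and any(u in fs for u in adj[w]):
                    dist[w] = level + 1
                    nxt.append(w)
        frontier, level = nxt, level + 1
    return dist
```

    Both directions assign the same level labels, so the hybrid traversal matches the plain BFS on any graph; only the work per level differs.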

  18. Note: high frequency vibration rejection using a linear shaft actuator-based image stabilizing device via vestibulo-ocular reflex adaptation control method.

    PubMed

    Koh, Doo-Yeol; Kim, Young-Kook; Kim, Kyung-Soo; Kim, Soohyun

    2013-08-01

    In mobile robotics, obtaining a stable image from a mounted camera is crucial for a mobile system to complete given tasks. This note presents the development of a high-speed image stabilizing device using a linear shaft actuator, and a new image stabilization method inspired by the human gaze stabilization process known as the vestibulo-ocular reflex (VOR). In the proposed control, the reference is adaptively adjusted by the VOR adaptation control to reject residual vibration of the camera as the VOR gain converges to the optimal state. Through experiments on a pneumatic vibrator, it is shown that the proposed system is capable of stabilizing 10 Hz platform vibration, which demonstrates the potential applicability of the device to a high-speed mobile robot. PMID:24007125

  1. Advanced numerical methods in mesh generation and mesh adaptation

    SciTech Connect

    Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

    2010-01-01

    Numerical solution of partial differential equations requires appropriate meshes, efficient solvers and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to efficiently build simplicial meshes. First, the advancing front technique (AFT) is combined with the incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an unattainable CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology. It improves the mesh quality significantly. The MBA method is also used to adapt the mesh to a problem solution to minimize the computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider the convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge-based

  2. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using median filtering or the method of bilateral spatial contrast.
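
    The automatic-detection idea in the last step can be sketched directly: estimate a per-bin background level by median filtering the spatial spectrum, then flag bins whose power exceeds that background by a margin. This is a minimal illustration of median-filter background normalization, not the paper's algorithm; the window size and threshold factor are illustrative.

```python
import numpy as np

def detect_peaks(power, win=5, factor=3.0):
    """Flag spectrum bins whose power exceeds a median-filtered
    background estimate by a given factor (illustrative parameters)."""
    n = len(power)
    background = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - win), min(n, i + win + 1)
        background[i] = np.median(power[lo:hi])
    return np.nonzero(power > factor * background)[0]

# weak and strong sources on a flat noise floor
spectrum = np.ones(64)
spectrum[20] = 10.0   # strong signal
spectrum[45] = 4.0    # weak signal
print(detect_peaks(spectrum))  # both bins stand out against the median background
```

    Because the median of an 11-sample window is insensitive to a single strong peak, a strong source does not inflate the background estimate around it, which is the point of the controlled normalization.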

  3. Adaptive numerical methods for partial differential equations

    SciTech Connect

    Colella, P.

    1995-07-01

    This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
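
    The flag-and-refine idea can be illustrated with a toy 1-D sketch, assuming a crude undivided-difference error indicator; Berger-style AMR actually overlays nested structured grid patches and uses a generalized criterion, neither of which is reproduced here.

```python
import numpy as np

def flag_cells(u, tol):
    """Flag cells where the undivided difference (a crude error
    indicator) exceeds tol -- a stand-in for a generalized criterion."""
    return np.nonzero(np.abs(np.diff(u)) > tol)[0]

def refine(x, flagged):
    """Insert midpoints in flagged intervals (2x local refinement)."""
    mids = 0.5 * (x[flagged] + x[flagged + 1])
    return np.sort(np.concatenate([x, mids]))

x = np.linspace(0.0, 1.0, 11)
u = np.tanh(20 * (x - 0.5))        # steep front near x = 0.5
fine = refine(x, flag_cells(u, 0.5))
print(len(x), "->", len(fine))     # points are added only near the front
```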

  4. Space-time adaptive numerical methods for geophysical applications.

    PubMed

    Castro, C E; Käser, M; Toro, E F

    2009-11-28

    In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems, with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher-order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen locally adaptively, such that the solution is evolved explicitly in time with an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves, comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost. PMID:19840984
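
    The locally adaptive time step amounts to applying the stability (CFL) criterion element by element rather than globally; a minimal sketch with illustrative cell sizes and wave speeds:

```python
import numpy as np

def local_time_steps(h, speed, cfl=0.9):
    """Per-element explicit time step from a local CFL criterion:
    dt_i = cfl * h_i / a_i (names and values are illustrative)."""
    return cfl * h / speed

h = np.array([0.1, 0.1, 0.02, 0.02])   # coarse cells and refined cells
a = np.array([1.0, 1.0, 5.0, 5.0])     # local wave speeds
dt = local_time_steps(h, a)
# global time stepping forces every cell to min(dt); local time
# stepping lets the large, slow cells take far fewer (larger) steps
print(dt, dt.min())
```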

  5. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2003-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  6. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2001-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  7. Adaptive density partitioning technique in the auxiliary plane wave method

    NASA Astrophysics Data System (ADS)

    Kurashige, Yuki; Nakajima, Takahito; Hirao, Kimihiko

    2006-01-01

    We have developed the adaptive density partitioning technique (ADPT) in the auxiliary plane wave method, in which part of the density is expanded in plane waves, for the fast evaluation of the Coulomb matrix. Our partitioning is based on error estimates and allows us to control the accuracy and efficiency. Moreover, we can drastically reduce the number of core Gaussian products that are left in the Gaussian representation (their analytical integrals are the bottleneck in this method). For the taxol molecule with the 6-31G** basis, the core Gaussian products accounted for only 5% at submicrohartree error.

  8. QPSO-based adaptive DNA computing algorithm.

    PubMed

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly applied to optimization and data analysis in recent years. However, the DNA computing algorithm has limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptive algorithm uses QPSO for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy relative to the standard DNA computing algorithm.
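
    As a sketch of the QPSO component only, the minimal algorithm below redraws each particle around a stochastic attractor between its personal best and the global best, with contraction-expansion factor beta. Tuning values are illustrative, and the paper's DNA-computing parameter encoding is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def qpso(f, dim=2, n=20, iters=200, beta=0.75):
    """Minimal quantum-behaved PSO: x = p +/- beta*|mbest - x|*ln(1/u),
    where p is a random convex combination of personal and global bests
    and mbest is the mean of the personal bests."""
    x = rng.uniform(-5.0, 5.0, (n, dim))
    pbest = x.copy()
    pval = np.array([f(xi) for xi in x])
    for _ in range(iters):
        g = pbest[np.argmin(pval)]              # global best
        mbest = pbest.mean(axis=0)              # mean best position
        phi = rng.random((n, dim))
        p = phi * pbest + (1.0 - phi) * g       # local attractors
        u = 1.0 - rng.random((n, dim))          # in (0, 1]
        sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
        x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        val = np.array([f(xi) for xi in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
    return pbest[np.argmin(pval)], float(pval.min())

# minimize the sphere function as a stand-in for parameter tuning
best, fbest = qpso(lambda z: float(np.sum(z * z)))
print(fbest)  # very close to zero
```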

  9. A Diffusion Synthetic Acceleration Method for Block Adaptive Mesh Refinement.

    SciTech Connect

    Ward, R. C.; Baker, R. S.; Morel, J. E.

    2005-01-01

    A prototype two-dimensional Diffusion Synthetic Acceleration (DSA) method on a Block-based Adaptive Mesh Refinement (BAMR) transport mesh has been developed. The Block-Adaptive Mesh Refinement Diffusion Synthetic Acceleration (BAMR-DSA) method was tested in the PARallel TIme-Dependent SN (PARTISN) deterministic transport code. The BAMR-DSA equations are derived by differencing the DSA equation using a vertex-centered diffusion discretization that is diamond-like and may be characterized as 'partially' consistent. The derivation of a diffusion discretization that is fully consistent with diamond transport differencing on a BAMR mesh does not appear to be possible. However, despite being only partially consistent, the BAMR-DSA method is effective for many applications. The BAMR-DSA solver was implemented and tested in two dimensions for rectangular (XY) and cylindrical (RZ) geometries. Testing confirms that a partially consistent BAMR-DSA method will introduce instabilities for extreme cases, e.g., scattering ratios approaching 1.0 with optically thick cells, but for most realistic problems the BAMR-DSA method provides effective acceleration. The initial implementation, which used full-matrix storage and LU decomposition to solve the BAMR-DSA equations, has been extended with Compressed Sparse Row (CSR) storage and a Conjugate Gradient (CG) solver; the CSR and CG methods provide significantly more efficient storage and faster solution.
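
    The CSR-plus-CG combination mentioned at the end can be sketched in a few lines: a hand-rolled CSR matrix-vector product driving an unpreconditioned conjugate gradient solve of a small SPD system (illustrative only, not the PARTISN implementation).

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x with A stored in Compressed Sparse Row format."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

def cg(matvec, b, tol=1e-10, maxiter=200):
    """Unpreconditioned conjugate gradient for SPD systems."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1-D Poisson matrix tridiag(-1, 2, -1), n = 4, in CSR form
data = np.array([2., -1., -1., 2., -1., -1., 2., -1., -1., 2.])
indices = np.array([0, 1, 0, 1, 2, 1, 2, 3, 2, 3])
indptr = np.array([0, 2, 5, 8, 10])
b = np.ones(4)
x = cg(lambda v: csr_matvec(data, indices, indptr, v), b)
print(x)  # [2. 3. 3. 2.], the exact solution
```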

  10. Principles and Methods of Adapted Physical Education.

    ERIC Educational Resources Information Center

    Arnheim, Daniel D.; And Others

    Programs in adapted physical education are presented preceded by a background of services for the handicapped, by the psychosocial implications of disability, and by the growth and development of the handicapped. Elements of conducting programs discussed are organization and administration, class organization, facilities, exercise programs…

  11. An Innovative Adaptive Pushover Procedure Based on Storey Shear

    SciTech Connect

    Shakeri, Kazem; Shayanfar, Mohsen A.

    2008-07-08

    Since conventional pushover analyses are unable to consider the effect of the higher modes and the progressive variation in dynamic properties, recent years have witnessed the development of advanced adaptive pushover methods. In these methods, however, using quadratic combination rules to combine the modal forces results in a positive load pattern at all storeys, and the sign reversals of the higher modes are lost; consequently, these methods do not have a major advantage over their non-adaptive counterparts. Herein an innovative adaptive pushover method based on storey shear is proposed which can take the sign reversals in the higher modes into account. In each storey the applied load pattern is derived from the storey shear profile; consequently, the sign of the applied loads can change in consecutive steps. The accuracy of the proposed procedure is examined by applying it to a 20-storey steel building, and it yields a good estimate of the peak response in the inelastic phase.
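
    The storey-shear idea can be sketched simply: if V_i is the combined shear in storey i (numbered from the base), the applied storey force is the difference of consecutive shears, which may be negative wherever higher-mode effects make the shear increase with height. The shear profile below is hypothetical.

```python
import numpy as np

def loads_from_storey_shear(V):
    """Storey forces from a storey shear profile: F_i = V_i - V_{i+1},
    with V given from the base upward and V_{n+1} = 0. Unlike an SRSS
    (quadratic) combination of modal forces, the difference can change
    sign between consecutive storeys."""
    return V - np.append(V[1:], 0.0)

# illustrative combined shear profile with higher-mode influence
V = np.array([10.0, 8.5, 7.5, 7.8, 5.0])   # hypothetical values
F = loads_from_storey_shear(V)
print(F)  # note the negative entry where shear increases with height
```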

  12. A wavelet-based Projector Augmented-Wave (PAW) method: Reaching frozen-core all-electron precision with a systematic, adaptive and localized wavelet basis set

    NASA Astrophysics Data System (ADS)

    Rangel, T.; Caliste, D.; Genovese, L.; Torrent, M.

    2016-11-01

    We present a Projector Augmented-Wave (PAW) method based on a wavelet basis set. We implemented our wavelet-PAW method as a PAW library in the ABINIT package [http://www.abinit.org] and into BigDFT [http://www.bigdft.org]. We test our implementation on prototypical systems to illustrate the potential usage of our code. By using the wavelet-PAW method, we can simulate charged systems and systems with special boundary conditions with frozen-core all-electron precision. Furthermore, our work paves the way to large-scale and potentially order-N simulations within a PAW method.

  13. Adaptive method for electron bunch profile prediction

    SciTech Connect

    Scheinker, Alexander; Gessner, Spencer

    2015-10-01

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using matlab and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which are important for the plasma wakefield acceleration experiments being explored at FACET. © 2015 authors. Published by the American Physical Society.
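
    A minimal sketch of a bounded-update-rate adaptive scheme of this flavor: each parameter dithers at its own frequency with a phase modulated by the measured cost, and on average drifts down the unknown cost gradient, while the per-step update magnitude never exceeds dt*sqrt(a*w) no matter what the cost does. All parameter values are illustrative, not those used at FACET.

```python
import numpy as np

def extremum_seek(cost, x0, steps=4000, dt=0.005, w=30.0, a=0.5, k=1.0):
    """Model-independent extremum seeking with a known, bounded update
    rate: x_dot = sqrt(a*w) * cos(w_i*t + k*C(x)). Averaging theory
    gives drift along -(k*a/2) * grad C, although C is never modeled."""
    x = np.array(x0, float)
    freqs = w * (1.0 + 0.1 * np.arange(len(x)))   # distinct dither frequencies
    for i in range(steps):
        t = i * dt
        C = cost(x)                               # scalar measurement only
        x = x + dt * np.sqrt(a * w) * np.cos(freqs * t + k * C)
    return x

# tune two "machine parameters" to minimize a quadratic mismatch
best = extremum_seek(lambda z: float(np.sum((z - 1.0) ** 2)), [3.0, -2.0])
print(best)  # drifts toward (1, 1), with a small residual oscillation
```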

  14. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  15. Matched filter based iterative adaptive approach

    NASA Astrophysics Data System (ADS)

    Nepal, Ramesh; Zhang, Yan Rockee; Li, Zhengzheng; Blake, William

    2016-05-01

    Matched-filter sidelobes arising from diversified LPI waveform design, and sensor resolution, are two important considerations in radars and active sensors in general. Matched-filter sidelobes can potentially mask weaker targets, and low sensor resolution not only causes a high margin of error but also limits sensing in target-rich environments. Improving these factors depends, in part, on the transmitted waveform and consequently on the pulse compression technique. An adaptive pulse compression algorithm is hence desired that can mitigate the aforementioned limitations. A new Matched Filter based Iterative Adaptive Approach, MF-IAA, has been developed as an extension of the traditional Iterative Adaptive Approach (IAA). MF-IAA takes the matched-filter output as its input, the motivation being to facilitate implementation of the Iterative Adaptive Approach without disrupting the processing chain of the traditional matched filter. Like IAA, MF-IAA is a user-parameter-free, iterative, weighted-least-squares spectral identification algorithm. This work focuses on the implementation of MF-IAA. The feasibility of MF-IAA is studied using a realistic airborne radar simulator as well as measured airborne radar data. The performance of MF-IAA is evaluated with different test waveforms and different signal-to-noise ratio (SNR) levels. In addition, range-Doppler super-resolution using MF-IAA is investigated. Both sidelobe reduction and super-resolution enhancement are validated, and the robustness of MF-IAA with respect to different LPI waveforms and SNR levels is demonstrated.
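
    The matched-filter front end of MF-IAA is plain pulse compression: correlate the received samples with the transmitted waveform. The sketch below, assuming a 13-bit Barker code as the waveform, shows the compressed peak and the sidelobes that an iterative adaptive stage is meant to suppress.

```python
import numpy as np

def matched_filter(rx, waveform):
    """Pulse compression: correlate the received signal with the
    transmitted waveform (the matched filter)."""
    return np.correlate(rx, waveform, mode="full")

# 13-bit Barker code, chosen for its low autocorrelation sidelobes
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)
out = matched_filter(barker13, barker13)
peak = np.abs(out).max()
sidelobe = np.abs(np.delete(out, np.argmax(np.abs(out)))).max()
print(peak, sidelobe)  # 13.0 vs 1.0: a peak-to-sidelobe ratio of 13
```

    Even for this well-behaved code the sidelobes are nonzero; for diversified LPI waveforms they are generally worse, which motivates the adaptive stage.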

  16. A multilevel adaptive projection method for unsteady incompressible flow

    NASA Technical Reports Server (NTRS)

    Howell, Louis H.

    1993-01-01

    There are two main requirements for practical simulation of unsteady flow at high Reynolds number: the algorithm must accurately propagate discontinuous flow fields without excessive artificial viscosity, and it must have some adaptive capability to concentrate computational effort where it is most needed. We satisfy the first of these requirements with a second-order Godunov method similar to those used for high-speed flows with shocks, and the second with a grid-based refinement scheme which avoids some of the drawbacks associated with unstructured meshes. These two features of our algorithm place certain constraints on the projection method used to enforce incompressibility. Velocities are cell-based, leading to a Laplacian stencil for the projection which decouples adjacent grid points. We discuss features of the multigrid and multilevel iteration schemes required for solution of the resulting decoupled problem. Variable-density flows require use of a modified projection operator--we have found a multigrid method for this modified projection that successfully handles density jumps of thousands to one. Numerical results are shown for the 2D adaptive and 3D variable-density algorithms.

  17. Outlier Measures and Norming Methods for Computerized Adaptive Tests.

    ERIC Educational Resources Information Center

    Bradlow, Eric T.; Weiss, Robert E.

    2001-01-01

    Compares four methods that map outlier statistics to a familiarity probability scale (a "P" value). Explored these methods in the context of computerized adaptive test data from a 1995 nationally administered computerized examination for professionals in the medical industry. (SLD)

  18. Adaptive skin detection based on online training

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Tang, Liang; Zhou, Jie; Rong, Gang

    2007-11-01

    Skin is a widely used cue for porn image classification. Most conventional methods are off-line training schemes: they use a fixed boundary to segment skin regions in images and are effective only under restricted conditions, e.g., good lighting and a single skin tone. This paper presents an adaptive online training scheme for skin detection which can handle these tough cases. In our approach, skin detection is treated as a classification problem on a Gaussian mixture model. For each image, a human face is detected and the face color is used to establish a primary estimate of the skin color distribution. An adaptive online training algorithm is then used to find the real boundary between skin color and background color in the current image. Experimental results on 450 images showed that the proposed method is more robust in general situations than conventional ones.
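
    A simplified sketch of the bootstrapping step, assuming a single Gaussian color model in place of the paper's mixture: fit the model to pixels from the detected face region, then classify the rest of the image by Mahalanobis distance. All numbers below are illustrative.

```python
import numpy as np

def fit_gaussian(pixels):
    """Single-Gaussian color model estimated from face-region pixels
    (a simplified stand-in for the paper's mixture model)."""
    mu = pixels.mean(axis=0)
    icov = np.linalg.inv(np.cov(pixels, rowvar=False))
    return mu, icov

def skin_mask(colors, mu, icov, thresh=9.0):
    """Classify pixels by squared Mahalanobis distance to the
    face-color model; thresh is an illustrative cutoff."""
    d = colors - mu
    m2 = np.einsum('...i,ij,...j->...', d, icov, d)
    return m2 < thresh

rng = np.random.default_rng(1)
face = rng.normal([180.0, 120.0], 5.0, (500, 2))   # (r, g) face samples
mu, icov = fit_gaussian(face)
test = np.array([[181.0, 121.0], [40.0, 200.0]])    # skin-like vs background
print(skin_mask(test, mu, icov))
```

    Because the model is re-estimated per image from the detected face, it adapts to the lighting and skin tone of that particular image, which is the essence of the online scheme.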

  19. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
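
    The weighted least-squares fitting step can be sketched generically: weights encode how much each data point is trusted, so corrupted or uncertain points barely influence the parameter estimates. A minimal example with one corrupted sample (all numbers illustrative):

```python
import numpy as np

def wls_fit(A, y, w):
    """Weighted least squares: minimize ||diag(w) (A c - y)||_2."""
    return np.linalg.lstsq(A * w[:, None], w * y, rcond=None)[0]

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 5.0, 7.0, 100.0])   # last sample corrupted
w = np.array([1.0, 1.0, 1.0, 1.0, 1e-6])    # weight reflects trust
A = np.column_stack([np.ones_like(x), x])   # model y = c0 + c1*x
c = wls_fit(A, y, w)
print(c)  # close to the true line y = 1 + 2x despite the outlier
```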

  20. Hybrid and adaptive meta-model-based global optimization

    NASA Astrophysics Data System (ADS)

    Gu, J.; Li, G. Y.; Dong, Z.

    2012-01-01

    As an efficient and robust technique for global optimization, meta-model-based search methods have been increasingly used in solving complex and computation-intensive design optimization problems. In this work, a hybrid and adaptive meta-model-based global optimization method that can automatically select appropriate meta-modelling techniques during the search process to improve search efficiency is introduced. The search initially applies three representative meta-models concurrently. It then progresses toward the better-performing model by selecting sample data points adaptively, according to the values calculated by the three meta-models, to improve modelling accuracy and search efficiency. To demonstrate the superior performance of the new algorithm over existing search methods, the new method is tested on various benchmark global optimization problems and applied to a real industrial design optimization example involving vehicle crash simulation. The method is particularly suitable for design problems involving computation-intensive, black-box analyses and simulations.
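
    The model-selection idea can be sketched with simple polynomial surrogates standing in for the three meta-models: fit several candidates to the same expensive samples and keep the one with the smallest error at validation points. The function and sample locations are illustrative.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def rmse(coeffs, x, y):
    """Root-mean-square error of a polynomial surrogate at points x."""
    return float(np.sqrt(np.mean((P.polyval(x, coeffs) - y) ** 2)))

f = lambda x: np.sin(x) + 0.1 * x          # stand-in "expensive" analysis
x_train = np.linspace(0.0, 4.0, 6)         # initial expensive samples
x_val = np.linspace(0.2, 3.8, 5)           # adaptively added check points
models = {deg: P.polyfit(x_train, f(x_train), deg) for deg in (1, 2, 3)}
errors = {deg: rmse(c, x_val, f(x_val)) for deg, c in models.items()}
best = min(errors, key=errors.get)
print(best, errors[best])
```

    In the actual method the candidates are heterogeneous meta-model types rather than polynomial degrees, but the select-by-validation-error mechanism is the same.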

  1. Assessing Adaptive Instructional Design Tools and Methods in ADAPT[IT].

    ERIC Educational Resources Information Center

    Eseryel, Deniz; Spector, J. Michael

    ADAPT[IT] (Advanced Design Approach for Personalized Training - Interactive Tools) is a European project within the Information Society Technologies program that is providing design methods and tools to guide a training designer according to the latest cognitive science and standardization principles. ADAPT[IT] addresses users in two significantly…

  2. Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization

    NASA Astrophysics Data System (ADS)

    Kong, Weiwei; Lei, Yang; Zhao, Huaixun

    2014-11-01

    The fusion of visible light and infrared images has been an active topic in both military and civilian areas, and a great many algorithms and techniques have been developed for it. This paper addresses a novel adaptive approach to this fusion problem, employing the multi-scale geometry analysis (MGA) of the non-subsampled shearlet transform (NSST) together with fast non-negative matrix factorization (FNMF). Compared with other conventional MGA tools, NSST offers not only better feature-capturing capability but also much lower computational complexity. As a modified version of the classic NMF model, FNMF largely overcomes the local-optimum behavior inherent in NMF. Furthermore, the use of FNMF, with its less complex structure and far fewer required iterations, improves the overall computational efficiency, which is meaningful and promising in real-time applications, especially military and medical ones. Experimental results indicate that the proposed method is superior to other currently popular methods in both subjective visual quality and objective performance.

  3. Adaptive Cognitive-Based Selection of Learning Objects

    ERIC Educational Resources Information Center

    Karampiperis, Pythagoras; Lin, Taiyu; Sampson, Demetrios G.; Kinshuk

    2006-01-01

    Adaptive cognitive-based selection is recognized as among the most significant open issues in adaptive web-based learning systems. In order to adaptively select learning resources, the definition of adaptation rules according to the cognitive style or learning preferences of the learners is required. Although some efforts have been reported in…

  4. Adaptive computational methods for aerothermal heating analysis

    NASA Technical Reports Server (NTRS)

    Price, John M.; Oden, J. Tinsley

    1988-01-01

    The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations with concentration on shock and inviscid flow field capturing. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme are presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.

  5. An adaptive pseudospectral method for discontinuous problems

    NASA Technical Reports Server (NTRS)

    Augenbaum, Jeffrey M.

    1988-01-01

    The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep gradient solutions of hyperbolic PDEs by approximation in suitably chosen coordinate systems. Continuous, high gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.

  6. Adaptable radiation monitoring system and method

    DOEpatents

    Archer, Daniel E.; Beauchamp, Brock R.; Mauger, G. Joseph; Nelson, Karl E.; Mercer, Michael B.; Pletcher, David C.; Riot, Vincent J.; Schek, James L.; Knapp, David A.

    2006-06-20

    A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.

  7. Moving and adaptive grid methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Trepanier, Jean-Yves; Camarero, Ricardo

    1995-01-01

    This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux-difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.

  8. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

    An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
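
    A standard error estimator in the spirit of the refinement indicators described here monitors the decay of the expansion coefficients: if the trailing Chebyshev coefficients of an element's solution have not decayed below tolerance, the element is under-resolved. A minimal sketch (the tolerance and tail length are illustrative):

```python
import numpy as np

def cheb_coeffs(f, n):
    """Chebyshev coefficients of f on [-1, 1] from the fit at
    n + 1 Chebyshev points."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    return np.polynomial.chebyshev.chebfit(x, f(x), n)

def needs_refinement(coeffs, tol=1e-6):
    """Spectral error estimate: if the trailing coefficients have not
    decayed below tol, the element needs more resolution."""
    return np.abs(coeffs[-2:]).max() > tol

smooth = cheb_coeffs(np.cos, 16)                     # coefficients decay fast
steep = cheb_coeffs(lambda x: np.tanh(50 * x), 16)   # they decay slowly
print(needs_refinement(smooth), needs_refinement(steep))
```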

  9. Roy’s Adaptation Model-Based Patient Education for Promoting the Adaptation of Hemodialysis Patients

    PubMed Central

    Afrasiabifar, Ardashir; Karimi, Zohreh; Hassani, Parkhideh

    2013-01-01

    Background In addition to physical adaptation and psychosocial adjustment to chronic renal disease, hemodialysis (HD) patients must also adapt to the dialysis therapy plan. Objectives The aim of the present study was to examine the effect of Roy’s adaptation model-based patient education on the adaptation of HD patients. Patients and Methods This study is a semi-experimental research conducted with the participation of all patients with end-stage renal disease referred to the dialysis unit of Shahid Beheshti Hospital of Yasuj city, 2010. A total of 59 HD patients were randomly allocated to test and control groups. Data were collected by a questionnaire based on the Roy’s Adaptation Model (RAM), whose validity and reliability were approved. Patient education was delivered in eight one-hour sessions over eight weeks. At the end of the education plan, the patients were given an educational booklet containing the main points of self-care for HD patients. The effectiveness of the education plan was assessed two months after its completion and the data were compared with the pre-education scores. All analyses were conducted using the SPSS software (version 16) through descriptive and inferential statistics, including correlation, t-test, ANOVA and ANCOVA tests. Results The results showed significant differences in the mean scores of the physiological and self-concept modes between the test and control groups (P = 0.01 and P = 0.03, respectively). A statistically significant difference (P = 0.04) was also observed in the mean scores of the role function mode between the two groups. There was no significant difference in the mean scores of the interdependence mode between the two groups. Conclusions RAM-based patient education improved the patients’ adaptation in the physiologic and self-concept modes. In addition to suggesting further research in this area, nurses are encouraged to pay more attention to applying RAM in dialysis centers. PMID:24396575

  10. A field-based method to derive macroinvertebrate benchmark for specific conductivity adapted for small data sets and demonstrated in the Hun-Tai River Basin, Northeast China.

    PubMed

    Zhao, Qian; Jia, Xiaobo; Xia, Rui; Lin, Jianing; Zhang, Yuan

    2016-09-01

    Ionic mixtures, measured as specific conductivity, have drawn increasing concern because of their toxicity to aquatic organisms. However, identifying protective values of specific conductivity for aquatic organisms is challenging, given that laboratory test systems can examine neither the more salt-intolerant species nor effects occurring in streams, and the large data sets used for deriving field-based benchmarks are rarely available. In this study, a field-based method for small data sets was used to derive a specific conductivity benchmark, which is expected to prevent the extirpation of 95% of local taxa from circum-neutral to alkaline waters dominated by a mixture of SO4(2-) and HCO3(-) anions and other dissolved ions. To compensate for the smaller sample size, species-level analyses were combined with genus-level analyses. The benchmark is based on extirpation concentration (XC95) values of specific conductivity for 60 macroinvertebrate genera estimated from 296 sampling sites in the Hun-Tai River Basin. We derived the specific conductivity benchmark by using a 2-point interpolation method, which yielded a benchmark of 249 μS/cm. Our study tailored the method developed by USEPA for deriving aquatic-life benchmarks for specific conductivity to basin-scale application, and may provide useful information for water pollution control and management. PMID:27389551
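
    The 2-point interpolation step can be sketched as an empirical 5th percentile of the genus XC95 values, interpolated linearly between the two bracketing order statistics. This is a simplified reading of the USEPA-style procedure, not the exact published method, and the XC95 values below are hypothetical.

```python
import numpy as np

def benchmark_hc05(xc95):
    """Benchmark as the 5th percentile of genus XC95 values, computed
    by 2-point linear interpolation between bracketing order
    statistics (a sketch, not the exact published procedure)."""
    v = np.sort(np.asarray(xc95, dtype=float))
    ranks = np.arange(1, len(v) + 1) / len(v)   # empirical proportions
    return float(np.interp(0.05, ranks, v))

# hypothetical XC95 values (uS/cm) for 30 genera
xc95 = np.linspace(100.0, 3000.0, 30)
print(benchmark_hc05(xc95))  # 150.0, halfway between the two lowest values
```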

  12. Design of an adaptive neural network based power system stabilizer.

    PubMed

    Liu, Wenxin; Venayagamoorthy, Ganesh K; Wunsch, Donald C

    2003-01-01

    Power system stabilizers (PSS) are used to generate supplementary control signals for the excitation system in order to damp low-frequency power system oscillations. To overcome the drawbacks of the conventional PSS (CPSS), numerous techniques have been proposed in the literature. Based on an analysis of existing techniques, this paper presents an indirect adaptive neural network based power system stabilizer (IDNC) design. The proposed IDNC consists of a neuro-controller, which generates a supplementary control signal for the excitation system, and a neuro-identifier, which models the dynamics of the power system and adapts the neuro-controller parameters. The proposed method features a simple structure, adaptivity, and fast response. The proposed IDNC is evaluated on a single-machine infinite-bus power system under different operating conditions and disturbances to demonstrate its effectiveness and robustness. PMID:12850048

  13. Adaptive sequential methods for detecting network intrusions

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia; Walker, Ernest

    2013-06-01

    In this paper, we propose new sequential methods for detecting port-scan attackers, which routinely perform random "portscans" of IP addresses to find vulnerable servers to compromise. In addition to rigorously controlling the probability of falsely implicating benign remote hosts as malicious, our method performs significantly faster than other current solutions. Moreover, our method guarantees that the maximum amount of observational time is bounded. In contrast to the previously most effective method, the Threshold Random Walk algorithm, which is explicit and analytical in nature, our proposed algorithm involves parameters that are determined by numerical methods. We have introduced computational techniques such as iterative minimax optimization for quick determination of the parameters of the new detection algorithm. A framework of multi-valued decisions for detecting portscanners and DoS attacks is also proposed.
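
For intuition, the sequential-testing idea behind such detectors (and behind the Threshold Random Walk they are compared with) can be illustrated with a Wald-style sequential probability ratio test on connection outcomes. The success probabilities and error rates below are made-up illustrative values, not parameters from the paper:

```python
import math

def sprt_portscan(outcomes, p_benign=0.8, p_scanner=0.2, alpha=0.01, beta=0.01):
    """Wald-style sequential test on connection outcomes (1 = connection
    succeeded, 0 = failed). Scanners probing random addresses fail far more
    often than benign hosts, so failures push the log-likelihood ratio up.
    Returns 'scanner', 'benign', or 'undecided'. Illustrative sketch only."""
    upper = math.log((1 - beta) / alpha)   # cross this -> declare scanner
    lower = math.log(beta / (1 - alpha))   # cross this -> declare benign
    llr = 0.0
    for y in outcomes:
        if y:  # success: evidence for a benign host
            llr += math.log(p_scanner / p_benign)
        else:  # failure: evidence for a scanner
            llr += math.log((1 - p_scanner) / (1 - p_benign))
        if llr >= upper:
            return 'scanner'
        if llr <= lower:
            return 'benign'
    return 'undecided'
```

Each observation nudges a random walk between two thresholds chosen from the target false-positive and false-negative rates, which is what bounds the probability of implicating benign hosts.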

  14. An adaptive stepsize method for the chemical Langevin equation.

    PubMed

    Ilie, Silvana; Teslya, Alexandra

    2012-05-14

    Mathematical and computational modeling are key tools in analyzing important biological processes in cells and living organisms. In particular, stochastic models are essential to accurately describe the cellular dynamics, when the assumption of the thermodynamic limit can no longer be applied. However, stochastic models are computationally much more challenging than the traditional deterministic models. Moreover, many biochemical systems arising in applications have multiple time-scales, which lead to mathematical stiffness. In this paper we investigate the numerical solution of a stochastic continuous model of well-stirred biochemical systems, the chemical Langevin equation. The chemical Langevin equation is a stochastic differential equation with multiplicative, non-commutative noise. We propose an adaptive stepsize algorithm for approximating the solution of models of biochemical systems in the Langevin regime, with small noise, based on estimates of the local error. The underlying numerical method is the Milstein scheme. The proposed adaptive method is tested on several examples arising in applications and it is shown to have improved efficiency and accuracy compared to the existing fixed stepsize schemes.
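
As a rough illustration of the adaptive-stepsize idea (not the authors' algorithm, which uses dedicated local-error estimates for the chemical Langevin equation), the sketch below applies a Milstein step to scalar geometric Brownian motion and controls the step size by comparing one full step against two half steps over the same Brownian increments:

```python
import numpy as np

def milstein_step(x, a, b, h, dw):
    """One Milstein step for dX = a*X dt + b*X dW (multiplicative noise)."""
    return x + a * x * h + b * x * dw + 0.5 * b**2 * x * (dw**2 - h)

def milstein_adaptive(a, b, x0, t_end, h0=1e-2, tol=1e-5, rng=None):
    """Adaptive-stepsize Milstein sketch: estimate the local error by
    comparing one full step with two half steps built from the same
    Brownian increments, then grow or shrink the step accordingly."""
    rng = np.random.default_rng(rng)
    t, x, h = 0.0, x0, h0
    while t < t_end:
        h = min(h, t_end - t)
        dw1 = rng.normal(0.0, np.sqrt(h / 2))
        dw2 = rng.normal(0.0, np.sqrt(h / 2))
        x_big = milstein_step(x, a, b, h, dw1 + dw2)
        x_small = milstein_step(milstein_step(x, a, b, h / 2, dw1),
                                a, b, h / 2, dw2)
        err = abs(x_big - x_small)
        if err <= tol * max(1.0, abs(x)):   # accept and cautiously enlarge
            t, x = t + h, x_small
            h *= 1.5
        else:                               # reject and halve the step
            h /= 2.0
    return x
```

In the deterministic limit b = 0 this reduces to an adaptive Euler scheme, which makes a convenient sanity check against the exact exponential solution.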
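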

  15. Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The application was investigated of control theoretic ideas to the design of flight control systems for the F-8 aircraft. The design of an adaptive control system based upon the so-called multiple model adaptive control (MMAC) method is considered. Progress is reported.

  16. A Comparative Study of Item Exposure Control Methods in Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Chang, Shun-Wen; Twu, Bor-Yaun

    This study investigated and compared the properties of five methods of item exposure control within the purview of estimating examinees' abilities in a computerized adaptive testing (CAT) context. Each of the exposure control algorithms was incorporated into the item selection procedure and the adaptive testing progressed based on the CAT design…

  17. SU-E-J-153: MRI Based, Daily Adaptive Radiotherapy for Rectal Cancer: Contour Adaptation

    SciTech Connect

    Kleijnen, J; Burbach, M; Verbraeken, T; Weggers, R; Zoetelief, A; Reerink, O; Lagendijk, J; Raaymakers, B; Asselen, B

    2014-06-01

    Purpose: A major hurdle in adaptive radiotherapy is the adaptation of the planning MRI's delineations to the daily anatomy. We therefore investigate the accuracy of, and time needed for, online clinical target volume (CTV) adaptation by radiation therapists (RTTs), to be used in MRI-guided adaptive treatments on an MRI-Linac (MRL). Methods: Sixteen patients, diagnosed with early-stage rectal cancer, underwent a T2-weighted MRI prior to each fraction of short-course radiotherapy, resulting in 4–5 scans per patient. On these scans, the CTV was delineated according to guidelines by an experienced radiation oncologist (RO) and considered to be the gold standard. For each patient, the first MRI was considered the planning MRI and matched on bony anatomy to the 3–4 daily MRIs. The planning MRI's CTV delineation was rigidly propagated to the daily MRI scans as a proposal for adaptation. Three RTTs in training adapted the CTV in conformance with the guidelines, after a two-hour training lecture and a two-patient (n=7) training set. To assess the inter-therapist variation, all three RTTs altered the delineations of 3 patients (n=12). One RTT altered the CTV delineations (n=53) of the remaining 11 patients. The time needed for adaptation of the CTV to the guidelines was recorded. As a measure of agreement, the conformity index (CI) was determined between the RTTs' delineations as a group. Dice similarity coefficients were determined between delineations of the RTT and the RO. Results: We found good agreement between RTTs' and RO's delineations (average Dice=0.91, SD=0.03). Furthermore, the inter-observer agreement between the RTTs was high (average CI=0.94, SD=0.02). Adaptation time reduced from 10:33 min (SD=3:46) to 2:56 min (SD=1:06) between the first and last ten delineations, respectively. Conclusion: Daily CTV adaptation by RTTs seems a feasible and safe way to introduce daily, online MRI-based plan adaptation for an MRL.
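
The Dice similarity coefficient used above to compare RTT and RO delineations has a standard definition on binary masks. A minimal sketch follows; representing delineations as boolean voxel arrays is an assumption made for illustration (clinical software typically works on contours or labelled grids):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```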

  18. Adaptive directional wavelet transform based on directional prefiltering.

    PubMed

    Tanaka, Yuichi; Hasegawa, Madoka; Kato, Shigeo; Ikehara, Masaaki; Nguyen, Truong Q

    2010-04-01

    This paper proposes an efficient approach for adaptive directional wavelet transform (WT) based on directional prefiltering. Although the adaptive directional WT is able to transform an image along diagonal orientations as well as the traditional horizontal and vertical directions, it sacrifices computation speed for good image coding performance. We present two efficient methods to find the best transform directions by prefiltering with a 2-D filter bank or a 1-D directional WT along two fixed directions. The proposed direction calculation methods achieve image coding performance comparable to the conventional method with less complexity. Furthermore, the transform direction data of the proposed method can be used in content-based image retrieval to increase the retrieval ratio. PMID:20028625

  19. Adaptive upscaling with the dual mesh method

    SciTech Connect

    Guerillot, D.; Verdiere, S.

    1997-08-01

    The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.

  20. A Comparison of Item Selection Procedures Using Different Ability Estimation Methods in Computerized Adaptive Testing Based on the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Ho, Tsung-Han

    2010-01-01

    Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…

  1. Using the Modified Checklist for Autism in Toddlers in a Well-Child Clinic in Turkey: Adapting the Screening Method Based on Culture and Setting

    ERIC Educational Resources Information Center

    Kara, Bülent; Mukaddes, Nahit Motavalli; Altinkaya, Isilay; Güntepe, Dilek; Gökçay, Gülbin; Özmen, Meral

    2014-01-01

    We aimed to adapt the Modified Checklist for Autism in Toddlers to Turkish culture. The Modified Checklist for Autism in Toddlers was filled out independently by 191 parents while they were waiting for the well-child examination of their child. A high screen-positive rate was found. Because of this high false-positive rate, a second study was done…

  2. Modular adaptive implant based on smart materials.

    PubMed

    Bîzdoacă, N; Tarniţă, Daniela; Tarniţă, D N

    2008-01-01

    Applications of biological methods and systems found in nature to the study and design of engineering systems and modern technology are defined as bionics. The present paper describes a bionic application of shape memory alloy in the construction of an orthopedic implant. The main idea of this paper is the design of modular adaptive implants for fractured bones. To improve the efficiency of medical treatment, the implant has to protect the fractured bone during the healing period, taking over as much as possible of the usual daily load of the healthy bone. After a particular stage of the healing period has passed, the modularity of the implant allows the load to be gradually transferred to the bone, ensuring in this manner a gradual recovery of bone function. The adaptability of this design lies in the physician's ability to make the implant correspond to the patient's specific anatomy. Using realistic CT-based numerical bone models, mechanical simulations of different types of loading of fractured bones treated with the conventional method are presented. The results are commented on and conclusions are formulated. PMID:19050799

  3. KNOWBOT; An adaptive data base interface

    SciTech Connect

    Heger, A.S.; Koen, B.U. . Dept. of Mechanical Engineering)

    1991-02-01

    This paper reports on an adaptive interface, KNOWBOT, designed to solve some of the problems that face the users of large centralized data bases. The interface applies the neural network approach to information retrieval from a data base, in this case a subset of the Nuclear Plant Reliability Data System. KNOWBOT preempts an existing data base interface and works in conjunction with it. By design, KNOWBOT starts as a tabula rasa but acquires knowledge through its interactions with the user and the data base. The interface uses its gained knowledge to personalize the data base retrieval process and to induce new queries. The interface also forgets information that is no longer needed by the user. These self-organizing features reduce the scope of the data base to the subsets that are highly relevant to the user's needs. A proof-of-principle version of this interface has been implemented in Common LISP on a Texas Instruments Explorer I workstation. Experiments with KNOWBOT have been successful in demonstrating the robustness of the model, especially with induction and self-organization. This paper describes the design of KNOWBOT and presents some of the experimental results.

  4. Adaptive RED algorithm based on minority game

    NASA Astrophysics Data System (ADS)

    Wei, Jiaolong; Lei, Ling; Qian, Jingjing

    2007-11-01

    As applications multiply and technology develops in the Internet, relying on end systems alone cannot satisfy the complicated QoS demands of the network. Router mechanisms must participate in protecting responsive flows from non-responsive ones. Routers mainly use active queue management (AQM) mechanisms to avoid congestion. Focusing on the interactions at the routers, the paper applies the minority game to describe the interaction of the users and observes its effect on the average queue length. Since the ARED parameters α and β are hard to determine, adaptive RED based on the minority game can model the interactions of the agents and tune the ARED parameters α and β toward their best values. Adaptive RED based on the minority game optimizes ARED and smooths the average queue length. At the same time, this paper extends the network simulator platform NS by adding new elements. Simulations have been implemented and the results show that the new algorithm can achieve the anticipated objectives.
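
For reference, the baseline drop rule that ARED (and the minority-game variant above) adapts is the classic RED piecewise-linear function of the EWMA average queue length; the adaptive layer then adjusts parameters such as max_p on top of it. The sketch below is the textbook RED rule, not the paper's algorithm:

```python
def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """Classic RED: no drops below min_th, forced drops above max_th,
    and a linearly increasing drop probability in between."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```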

  5. Adaptive Transmission Control Method for Communication-Broadcasting Integrated Services

    NASA Astrophysics Data System (ADS)

    Koto, Hideyuki; Furuya, Hiroki; Nakamura, Hajime

    This paper proposes an adaptive transmission control method for massive and intensive telecommunication traffic generated by communication-broadcasting integrated services. The proposed method adaptively controls data transmissions from viewers depending on the congestion states, so that severe congestion can be effectively avoided. Furthermore, it utilizes the broadcasting channel which is not only scalable, but also reliable for controlling the responses from vast numbers of viewers. The performance of the proposed method is evaluated through experiments on a test bed where approximately one million viewers are emulated. The obtained results quantitatively demonstrate the performance of the proposed method and its effectiveness under massive and intensive traffic conditions.

  6. Adaptive DFT-based Interferometer Fringe Tracking

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) observatory at Mt. Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse.
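
The sliding-window DFT at the heart of such a tracker can update a single frequency bin in O(1) per new sample instead of recomputing a full transform. Below is a generic sketch of that recursion, not the IOTA implementation (which adds the optimizations for efficiency and robustness to atmospheric disturbances described above):

```python
import numpy as np

def sliding_dft_bin(signal, window, k):
    """Track DFT bin k over every length-`window` sliding position of
    `signal`. After one full DFT, each shift is a constant-time update:
    subtract the sample leaving the window, add the one entering, rotate."""
    n = window
    rot = np.exp(2j * np.pi * k / n)
    x = np.asarray(signal, dtype=float)
    bin_k = np.sum(x[:n] * np.exp(-2j * np.pi * k * np.arange(n) / n))
    out = [bin_k]
    for i in range(n, len(x)):
        bin_k = (bin_k - x[i - n] + x[i]) * rot
        out.append(bin_k)
    return np.array(out)
```

Each entry of the result equals the k-th bin of a full DFT of the corresponding window, which is easy to verify against np.fft.fft.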

  7. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.

  8. Locally adaptive method to define coordination shell

    NASA Astrophysics Data System (ADS)

    Higham, Jonathan; Henchman, Richard H.

    2016-08-01

    An algorithm is presented to define a particle's coordination shell for any collection of particles. It requires only the particles' positions and no pre-existing knowledge or parameters beyond those already in the force field. A particle's shell is taken to be all particles that are not blocked by any other particle and not further away than a blocked particle. Because blocking is based on two distances and an angle for triplets of particles, it is called the relative angular distance (RAD) algorithm. RAD is applied to Lennard-Jones particles in molecular dynamics simulations of crystalline, liquid, and gaseous phases at various temperatures and densities. RAD coordination shells agree well with those from a cut-off in the radial distribution function for the crystals and liquids and are slightly higher for the gas.
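
A minimal implementation of the blocking rule described above might look like the following. The inequality 1/r_ij^2 < cos(theta_jik)/r_ik^2 is the commonly quoted form of the RAD criterion, and the open-boundary handling (no periodic images) is a simplifying assumption:

```python
import numpy as np

def rad_shell(positions, i):
    """Coordination shell of particle i via a relative angular distance
    (RAD)-style criterion: walking outward from i, a neighbour j is blocked
    by a closer particle k when 1/r_ij**2 < cos(theta_jik)/r_ik**2, and the
    shell ends at the first blocked particle. Minimal open-boundary sketch."""
    pos = np.asarray(positions, dtype=float)
    ri = pos[i]
    others = [j for j in range(len(pos)) if j != i]
    # visit candidates in order of distance from i, nearest first
    others.sort(key=lambda j: np.linalg.norm(pos[j] - ri))
    shell = []
    for j in others:
        vij = pos[j] - ri
        rij = np.linalg.norm(vij)
        blocked = False
        for k in shell:  # all closer particles are already in the shell
            vik = pos[k] - ri
            rik = np.linalg.norm(vik)
            cos_t = np.dot(vij, vik) / (rij * rik)
            if 1.0 / rij**2 < cos_t / rik**2:
                blocked = True
                break
        if blocked:
            break  # shell excludes anything beyond the first blocked particle
        shell.append(j)
    return shell
```

On a line of particles, a second neighbour directly behind a first neighbour is blocked (cos theta = 1), while a neighbour at 90 degrees is not, matching the intuition of a line-of-sight coordination shell.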

  9. Locally adaptive method to define coordination shell.

    PubMed

    Higham, Jonathan; Henchman, Richard H

    2016-08-28

    An algorithm is presented to define a particle's coordination shell for any collection of particles. It requires only the particles' positions and no pre-existing knowledge or parameters beyond those already in the force field. A particle's shell is taken to be all particles that are not blocked by any other particle and not further away than a blocked particle. Because blocking is based on two distances and an angle for triplets of particles, it is called the relative angular distance (RAD) algorithm. RAD is applied to Lennard-Jones particles in molecular dynamics simulations of crystalline, liquid, and gaseous phases at various temperatures and densities. RAD coordination shells agree well with those from a cut-off in the radial distribution function for the crystals and liquids and are slightly higher for the gas. PMID:27586905

  10. An auto-adaptive background subtraction method for Raman spectra

    NASA Astrophysics Data System (ADS)

    Xie, Yi; Yang, Lidong; Sun, Xilong; Wu, Dewen; Chen, Qizhen; Zeng, Yongming; Liu, Guokun

    2016-05-01

    Background subtraction is a crucial step in the preprocessing of Raman spectra. Usually, manual tuning of the background subtraction method's parameters is necessary for efficient removal of the background, which makes the quality of the spectrum empirically dependent. In order to avoid artificial bias, we propose an auto-adaptive background subtraction method without parameter adjustment. The main procedure is: (1) select the local minima of the spectrum while preserving major peaks, (2) apply an interpolation scheme to estimate the background, (3) design an iteration scheme to improve the adaptability of the background subtraction. Both simulated data and measured Raman spectra have been used to evaluate the proposed method. Compared with the backgrounds obtained from three widely applied methods, the polynomial, Baek's, and airPLS, the auto-adaptive method meets the demand of practical applications in terms of efficiency and accuracy.
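
A toy version of the three-step procedure (local minima, interpolation, iteration) is sketched below; the linear interpolation and min-clipping iteration are assumptions made for illustration, not the authors' exact interpolation and iteration schemes:

```python
import numpy as np

def auto_background(y, n_iter=5):
    """Toy auto-adaptive baseline: anchor on local minima of the working
    spectrum, linearly interpolate a background through them, then clip
    the spectrum to that estimate and repeat. Peaks shrink on each pass
    while true background regions are preserved."""
    work = np.asarray(y, dtype=float).copy()
    x = np.arange(len(work))
    for _ in range(n_iter):
        # anchor points: endpoints plus interior local minima
        idx = [0]
        for i in range(1, len(work) - 1):
            if work[i] <= work[i - 1] and work[i] <= work[i + 1]:
                idx.append(i)
        idx.append(len(work) - 1)
        bg = np.interp(x, x[idx], work[idx])
        work = np.minimum(work, bg)  # clip peaks down to the estimate
    return work
```

Subtracting the returned estimate from the raw spectrum leaves the peaks sitting on an approximately flat baseline.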

  11. Fabrication Methods for Adaptive Deformable Mirrors

    NASA Technical Reports Server (NTRS)

    Toda, Risaku; White, Victor E.; Manohara, Harish; Patterson, Keith D.; Yamamoto, Namiko; Gdoutos, Eleftherios; Steeves, John B.; Daraio, Chiara; Pellegrino, Sergio

    2013-01-01

    Previously, it was difficult to fabricate deformable mirrors actuated by piezoelectric actuators, because numerous actuators need to be precisely assembled to control the surface shape of the mirror. Two approaches have been developed. Both approaches begin by depositing a stack of piezoelectric films and electrodes over a silicon wafer substrate. In the first approach, the silicon wafer is removed initially by plasma-based reactive ion etching (RIE) and non-plasma dry etching with xenon difluoride (XeF2). In the second approach, the actuator film stack is immersed in a liquid such as deionized water. The adhesion between the actuator film stack and the substrate is relatively weak; simply by seeping liquid between the film and the substrate, the actuator film stack is gently released from the substrate. The deformable mirror contains multiple piezoelectric membrane layers as well as multiple electrode layers (some patterned and some unpatterned). For the piezoelectric layer, polyvinylidene fluoride (PVDF) or its co-polymer poly(vinylidene fluoride-trifluoroethylene), P(VDF-TrFE), is used. The surface of the mirror is coated with a reflective coating. The actuator film stack is fabricated on a silicon, or silicon-on-insulator (SOI), substrate by repeatedly spin-coating the PVDF or P(VDF-TrFE) solution and depositing patterned metal (electrode) layers. In the first approach, the actuator film stack is prepared on an SOI substrate. Then, the thick silicon (typically 500 microns thick and called the handle silicon) of the SOI wafer is etched by a deep reactive ion etching process tool (SF6-based plasma etching). This deep RIE stops at the middle SiO2 layer. The middle SiO2 layer is etched by either HF-based wet etching or a dry plasma etch. The thin silicon layer (generally called the device layer) of the SOI is removed by a XeF2 dry etch. This XeF2 etch is very gentle and extremely selective, so the released mirror membrane is not damaged. It is possible to replace SOI with silicon

  12. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDEs) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAGs using the stability results. Using these estimates, the error can be controlled on CAGs. Thus, the solution can be computed efficiently on CAGs within a given error tolerance. Computational results for time-dependent linear problems in one and two space dimensions are presented.

  13. Adaptive two-regime method: Application to front propagation

    SciTech Connect

    Robinson, Martin; Erban, Radek; Flegg, Mark

    2014-03-28

    The Adaptive Two-Regime Method (ATRM) is developed for hybrid (multiscale) stochastic simulation of reaction-diffusion problems. It efficiently couples detailed Brownian dynamics simulations with coarser lattice-based models. The ATRM is a generalization of the previously developed Two-Regime Method [Flegg et al., J. R. Soc., Interface 9, 859 (2012)] to multiscale problems which require a dynamic selection of regions where detailed Brownian dynamics simulation is used. Typical applications include a front propagation or spatio-temporal oscillations. In this paper, the ATRM is used for an in-depth study of front propagation in a stochastic reaction-diffusion system which has its mean-field model given in terms of the Fisher equation [R. Fisher, Ann. Eugen. 7, 355 (1937)]. It exhibits a travelling reaction front which is sensitive to stochastic fluctuations at the leading edge of the wavefront. Previous studies into stochastic effects on the Fisher wave propagation speed have focused on lattice-based models, but there has been limited progress using off-lattice (Brownian dynamics) models, which suffer due to their high computational cost, particularly at the high molecular numbers that are necessary to approach the Fisher mean-field model. By modelling only the wavefront itself with the off-lattice model, it is shown that the ATRM leads to the same Fisher wave results as purely off-lattice models, but at a fraction of the computational cost. The error analysis of the ATRM is also presented for a morphogen gradient model.

  14. Adaptive two-regime method: application to front propagation.

    PubMed

    Robinson, Martin; Flegg, Mark; Erban, Radek

    2014-03-28

    The Adaptive Two-Regime Method (ATRM) is developed for hybrid (multiscale) stochastic simulation of reaction-diffusion problems. It efficiently couples detailed Brownian dynamics simulations with coarser lattice-based models. The ATRM is a generalization of the previously developed Two-Regime Method [Flegg et al., J. R. Soc., Interface 9, 859 (2012)] to multiscale problems which require a dynamic selection of regions where detailed Brownian dynamics simulation is used. Typical applications include a front propagation or spatio-temporal oscillations. In this paper, the ATRM is used for an in-depth study of front propagation in a stochastic reaction-diffusion system which has its mean-field model given in terms of the Fisher equation [R. Fisher, Ann. Eugen. 7, 355 (1937)]. It exhibits a travelling reaction front which is sensitive to stochastic fluctuations at the leading edge of the wavefront. Previous studies into stochastic effects on the Fisher wave propagation speed have focused on lattice-based models, but there has been limited progress using off-lattice (Brownian dynamics) models, which suffer due to their high computational cost, particularly at the high molecular numbers that are necessary to approach the Fisher mean-field model. By modelling only the wavefront itself with the off-lattice model, it is shown that the ATRM leads to the same Fisher wave results as purely off-lattice models, but at a fraction of the computational cost. The error analysis of the ATRM is also presented for a morphogen gradient model.

  15. Adaptive control with an expert system based supervisory level. Thesis

    NASA Technical Reports Server (NTRS)

    Sullivan, Gerald A.

    1991-01-01

    Adaptive control is presently one of the methods available which may be used to control plants with poorly modelled dynamics or time varying dynamics. Although many variations of adaptive controllers exist, a common characteristic of all adaptive control schemes, is that input/output measurements from the plant are used to adjust a control law in an on-line fashion. Ideally the adjustment mechanism of the adaptive controller is able to learn enough about the dynamics of the plant from input/output measurements to effectively control the plant. In practice, problems such as measurement noise, controller saturation, and incorrect model order, to name a few, may prevent proper adjustment of the controller and poor performance or instability result. In this work we set out to avoid the inadequacies of procedurally implemented safety nets, by introducing a two level control scheme in which an expert system based 'supervisor' at the upper level provides all the safety net functions for an adaptive controller at the lower level. The expert system is based on a shell called IPEX, (Interactive Process EXpert), that we developed specifically for the diagnosis and treatment of dynamic systems. Some of the more important functions that the IPEX system provides are: (1) temporal reasoning; (2) planning of diagnostic activities; and (3) interactive diagnosis. Also, because knowledge and control logic are separate, the incorporation of new diagnostic and treatment knowledge is relatively simple. We note that the flexibility available in the system to express diagnostic and treatment knowledge, allows much greater functionality than could ever be reasonably expected from procedural implementations of safety nets. The remainder of this chapter is divided into three sections. In section 1.1 we give a detailed review of the literature in the area of supervisory systems for adaptive controllers. 
In particular, we describe the evolution of safety nets from simple ad hoc techniques, up

  16. An adaptive multiscale finite element method for unsaturated flow problems in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    He, Xinguang; Ren, Li

    2009-07-01

    In this paper we present an adaptive multiscale finite element method for solving unsaturated water flow problems in heterogeneous porous media spanning many scales. The main purpose is to design a numerical method which is capable of adaptively capturing the large-scale behavior of the solution on a coarse-scale mesh without resolving all the small-scale details at each time step. This is accomplished by constructing multiscale base functions that are adapted to the time change of the unsaturated hydraulic conductivity field. The key idea of our method is to use a criterion based on the temporal variation of the hydraulic conductivity field to determine when and where to update the multiscale base functions. As a consequence, these base functions are able to dynamically account for the spatio-temporal variability in the equation coefficients. We describe the principle for constructing such a method in detail and give an algorithm for implementing it. Numerical experiments were carried out for the unsaturated water flow equation with randomly generated lognormal hydraulic parameters to demonstrate the efficiency and accuracy of the proposed method. The results show that throughout the adaptive simulation, only a very small fraction of the multiscale base functions needs to be recomputed, and the level of accuracy of the adaptive method is higher than that of the multiscale finite element technique in which the base functions are not updated with the time change of the hydraulic conductivity.
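
The when-and-where update criterion could take, for example, the following form; the element-wise relative-change measure and the tolerance are hypothetical stand-ins for the paper's actual criterion on the temporal variation of the conductivity field:

```python
import numpy as np

def elements_to_update(k_old, k_new, tol=0.1):
    """Flag coarse elements whose (element-averaged) hydraulic conductivity
    has changed by more than `tol` in relative terms since their multiscale
    base functions were last computed; only these get their bases rebuilt."""
    k_old = np.asarray(k_old, dtype=float)
    k_new = np.asarray(k_new, dtype=float)
    rel_change = np.abs(k_new - k_old) / np.abs(k_old)
    return np.flatnonzero(rel_change > tol)
```

At each time step only the flagged (typically small) subset of elements pays the cost of recomputing base functions, which is the source of the reported speedup.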

  17. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  18. Adaptive mesh generation for edge-element finite element method

    NASA Astrophysics Data System (ADS)

    Tsuboi, Hajime; Gyimothy, Szabolcs

    2001-06-01

    An adaptive mesh generation method for two- and three-dimensional finite element methods using edge elements is proposed. Since the tangential component continuity is preserved when using edge elements, the strategy of creating new nodes is based on evaluation of the normal component of the magnetic vector potential across element interfaces. The evaluation is performed at the mid-point of an edge of a triangular element for two-dimensional problems, or at the gravity center of a triangular surface of a tetrahedral element for three-dimensional problems. At the boundary of two elements, the error estimator is the ratio of the normal component discontinuity to the maximum value of the potential in the same material. One or more nodes are added at the mid-points of the edges according to the value of the estimator, and the elements where new nodes have been created are subdivided. A final mesh is obtained after several iterations. Some computational results of two- and three-dimensional problems using the proposed method are shown.
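
    The edge-marking rule can be sketched as below; the threshold value and function name are assumptions for illustration, not taken from the paper.

```python
def mark_edges(normal_jumps, max_potential, threshold=0.05):
    # Estimator per inter-element edge: |normal-component discontinuity|
    # divided by the maximum potential magnitude in the same material.
    # Edges whose estimator exceeds the (hypothetical) threshold receive
    # a new mid-point node, triggering subdivision of adjacent elements.
    return [abs(j) / max_potential > threshold for j in normal_jumps]

marks = mark_edges([0.001, 0.08, 0.2], max_potential=1.0)
```

    Iterating this marking and subdivision until no edge exceeds the threshold yields the final adapted mesh.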

  19. Evaluation of Adaptive Subdivision Method on Mobile Device

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Isa, Siti Aida Mohd; Rehman, Amjad; Saba, Tanzila

    2013-06-01

    Recently, there have been significant improvements in the capabilities of mobile devices, but rendering large 3D objects is still tedious because of the constrained resources of mobile devices. To reduce storage requirements, the 3D object is simplified, but certain areas of curvature are compromised and the surface will not be smooth. Therefore a method to smooth selected areas of curvature is implemented. One of the popular methods is the adaptive subdivision method. Experiments are performed on two data sets, with results based on processing time, rendering speed, and the appearance of the object on the devices. The results show a drop in frame rate performance due to the increase in the number of triangles with each level of iteration, while the processing time for generating the new mesh also increases significantly. Because the devices differ in screen size, the surface on the iPhone appears to have more triangles and to be more compact than the surface displayed on the iPad.

  20. Adaptive Elastic Net for Generalized Methods of Moments.

    PubMed

    Caner, Mehmet; Zhang, Hao Helen

    2014-01-30

    Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least squares based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique because the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity as well as collinearity among a large number of variables; redundant parameters are also set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limits while the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
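
    For reference, the least-squares adaptive elastic net of Zou and Zhang (2009) that the paper generalizes has the following schematic form (notation simplified; the paper replaces the squared loss with the GMM criterion $g_n(\beta)^{\top} W g_n(\beta)$):

```latex
\hat{\beta}_{\mathrm{AEnet}}
  = \Bigl(1 + \tfrac{\lambda_2}{n}\Bigr)\,
    \arg\min_{\beta}\Bigl\{ \lVert y - X\beta \rVert_2^2
      + \lambda_2 \lVert \beta \rVert_2^2
      + \lambda_1^{*} \sum_{j} \hat{w}_j \,\lvert \beta_j \rvert \Bigr\},
\qquad
\hat{w}_j = \lvert \hat{\beta}_{\mathrm{enet},j} \rvert^{-\gamma},
```

    where the adaptive weights $\hat{w}_j$ come from an initial elastic-net fit, so large coefficients are penalized less; this data-dependent reweighting is what delivers the oracle property mentioned in the abstract.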

  1. An adaptive grid-based all hexahedral meshing algorithm based on 2-refinement.

    SciTech Connect

    Edgel, Jared; Benzley, Steven E.; Owen, Steven James

    2010-08-01

    Most adaptive mesh generation algorithms employ a 3-refinement method. This method, although easy to employ, provides a mesh that is often too coarse in some areas and over-refined in other areas. Because this method generates 27 new hexes in place of a single hex, there is little control on mesh density. This paper presents an adaptive all-hexahedral grid-based meshing algorithm that employs a 2-refinement method, in which the hex to be refined is divided into eight new hexes. This allows greater control on mesh density when compared to a 3-refinement procedure. The resulting all-hexahedral mesh is efficient for analysis because it provides a high element density in specific locations and a reduced mesh density in other areas. In addition, this tool can be effectively used for inside-out hexahedral grid-based schemes, using Cartesian structured grids for the base mesh, which have shown great promise in accommodating automatic all-hexahedral algorithms. The algorithm uses a two-layer transition zone to increase element quality and to keep transitions from lower to higher mesh densities smooth, and templates were introduced to allow both convex and concave refinement.
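
    The density-control argument comes down to simple arithmetic: each level of r-refinement replaces one hex with r^3 children, so 2-refinement grows the element count far more gently than 3-refinement. A minimal sketch (the function name is ours, for illustration):

```python
def refined_hexes(n_levels, ratio):
    # Each refinement level replaces a hex with ratio**3 children:
    # 8 for 2-refinement, 27 for 3-refinement.
    return (ratio ** 3) ** n_levels

two_levels_2ref = refined_hexes(2, 2)   # 8 per level -> 8 * 8
two_levels_3ref = refined_hexes(2, 3)   # 27 per level -> 27 * 27
```

    After two levels a single seed hex becomes 64 hexes under 2-refinement versus 729 under 3-refinement, which is why 2-refinement gives much finer-grained control over where mesh density is spent.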

  2. Adaptive Estimation of Intravascular Shear Rate Based on Parameter Optimization

    NASA Astrophysics Data System (ADS)

    Nitta, Naotaka; Takeda, Naoto

    2008-05-01

    The relationships between the intravascular wall shear stress, controlled by flow dynamics, and the progress of arteriosclerosis plaque have been clarified by various studies. Since the shear stress is determined by the viscosity coefficient and shear rate, both factors must be estimated accurately. In this paper, an adaptive method for improving the accuracy of quantitative shear rate estimation was investigated. First, the parameter dependence of the estimated shear rate was investigated in terms of the differential window width and the number of averaged velocity profiles based on simulation and experimental data, and then the shear rate calculation was optimized. The optimized result revealed that the proposed adaptive method of shear rate estimation was effective for improving the accuracy of shear rate calculation.
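
    The two tuning parameters named in the abstract, the differential window width and the number of averaged velocity profiles, can be illustrated with a minimal sketch. All names and the padding scheme are our assumptions, not the paper's method.

```python
import numpy as np

def shear_rate(profiles, dr, half_width):
    # Estimate the shear rate dv/dr: average several velocity profiles,
    # then apply a central difference over a window of 2*half_width
    # samples (the "differential window width" tuned in the paper).
    v = np.mean(profiles, axis=0)            # averaged velocity profile
    h = half_width
    g = np.empty_like(v)
    g[h:-h] = (v[2 * h:] - v[:-2 * h]) / (2 * h * dr)
    g[:h] = g[h]                             # pad the boundary samples
    g[-h:] = g[-h - 1]
    return g

r = np.linspace(0.0, 1.0, 11)
profiles = np.vstack([2.0 * r, 2.0 * r])     # linear profile, dv/dr = 2
g = shear_rate(profiles, dr=0.1, half_width=2)
```

    Widening the window or averaging more profiles suppresses noise at the cost of smearing sharp velocity gradients, which is the trade-off the paper optimizes.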

  3. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    SciTech Connect

    Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.

    1998-12-10

    OAK-B135 Final Report: Symposium on Adaptive Methods for Partial Differential Equations. Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  4. Contrast-based sensorless adaptive optics for retinal imaging.

    PubMed

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew

    2015-09-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes.
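
    The paper proposes its own robust image quality metric; as a baseline stand-in, the kind of contrast metric optimised in sensorless AO can be sketched as a normalised intensity variance (this specific formula is our assumption, not the paper's metric):

```python
import numpy as np

def contrast_metric(img):
    # Normalised intensity variance: higher for sharp, high-contrast
    # frames, lower for blurred ones. A sensorless AO loop adjusts the
    # deformable mirror to maximise a metric of this kind.
    m = img.mean()
    return ((img - m) ** 2).mean() / (m ** 2 + 1e-30)

sharp = np.array([[0.0, 1.0], [0.0, 1.0]])   # high-contrast toy frame
blurred = np.full((2, 2), 0.5)               # same mean, no contrast
```

    Because the metric depends only on recorded image intensities, it works for any imaging modality, which is why no wavefront sensor is needed.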

  5. Contrast-based sensorless adaptive optics for retinal imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T.O.; He, Zheng; Metha, Andrew

    2015-01-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes. PMID:26417525

  6. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2004-01-28

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

  7. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2002-10-19

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

  8. Analyzing Hedges in Verbal Communication: An Adaptation-Based Approach

    ERIC Educational Resources Information Center

    Wang, Yuling

    2010-01-01

    Based on Adaptation Theory, the article analyzes the production process of hedges. The procedure consists of the continuous making of choices in linguistic forms and communicative strategies. These choices are made just for adaptation to the contextual correlates. Besides, the adaptation process is dynamic, intentional and bidirectional.

  9. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    PubMed

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation.
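
    The classification step of SRC can be sketched as follows. Real SRC solves an l1 sparse-coding problem over the concatenated class dictionaries; plain least squares per class is used here to keep the sketch dependency-free, and the toy dictionaries are our own illustration.

```python
import numpy as np

def src_classify(dicts, x):
    # Assign x to the class whose sub-dictionary reconstructs it with
    # the smallest residual (simplified stand-in for sparse coding).
    residuals = []
    for D in dicts:
        coeff, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals.append(np.linalg.norm(x - D @ coeff))
    return int(np.argmin(residuals))

D0 = np.array([[1.0], [0.0], [0.0]])   # class-0 training atoms (columns)
D1 = np.array([[0.0], [1.0], [0.0]])   # class-1 training atoms
label = src_classify([D0, D1], np.array([0.1, 0.9, 0.0]))
```

    The adaptive schemes in the paper then update the dictionaries with new test trials (supervised or unsupervised), so no classifier re-training is needed: classification remains this same residual comparison against the enlarged dictionaries.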

  10. Microscopic cell nuclei segmentation based on adaptive attention window.

    PubMed

    Ko, ByoungChul; Seo, MiSuk; Nam, Jae-Yeal

    2009-06-01

    This paper presents an adaptive attention window (AAW)-based microscopic cell nuclei segmentation method. For semantic AAW detection, a luminance map is used to create an initial attention window, which is then reduced close to the size of the real region of interest (ROI) using a quad-tree. The purpose of the AAW is to facilitate background removal and reduce the ROI segmentation processing time. Region segmentation is performed within the AAW, followed by region clustering and removal to produce segmentation of only ROIs. Experimental results demonstrate that the proposed method can efficiently segment one or more ROIs and produce similar segmentation results to human perception. In future work, the proposed method will be used for supporting a region-based medical image retrieval system that can generate a combined feature vector of segmented ROIs based on extraction and patient data.

  11. Speckle reduction in optical coherence tomography by adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun

    2015-12-01

    An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
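
    A minimal total-variation restoration loop is sketched below. In the adaptive scheme the regularisation weight would be set from the measured speckle statistics (mean and variance); here `lam`, the step size, and the smoothing constant are fixed assumptions for illustration.

```python
import numpy as np

def tv_denoise(img, lam, n_iter=200, step=0.1):
    # Gradient descent on ||u - img||^2 + lam * TV(u), with the TV term
    # smoothed by eps to avoid division by zero at flat regions.
    u = img.copy()
    eps = 1e-6
    for _ in range(n_iter):
        gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag
        # Discrete divergence of the normalised gradient field.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)
    return u

rng = np.random.default_rng(0)
noisy = 0.5 + 0.1 * rng.standard_normal((32, 32))   # speckle-like noise
smoothed = tv_denoise(noisy, lam=0.2)
```

    The fidelity term keeps the restored image close to the data while the TV term flattens the noise, which is why TV restoration preserves edges better than median filtering.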

  12. An adaptive high and low impedance fault detection method

    SciTech Connect

    Yu, D.C.; Khan, S.H.

    1994-10-01

    An integrated high impedance fault (HIF) and low impedance fault (LIF) detection method is proposed in this paper. For HIF detection, the proposed technique is based on a number of characteristics of the HIF current. These characteristics are: fault current magnitude, magnitude of the 3rd harmonic current, magnitude of the 5th harmonic current, the angle of the 3rd harmonic current, the angle difference between the 3rd harmonic current and the fundamental voltage, and the negative sequence current of the HIF. These characteristics are identified by modeling the distribution feeders in EMTP. Apart from these characteristics, the above-ambient (average) negative sequence current is also considered. An adjustable block-out region around the average load current is provided. The average load current is recalculated every 18,000 cycles (5 minutes). This adaptive feature not only makes the proposed scheme more sensitive to low fault currents, but also prevents the relay from tripping during normal load current. In this paper, the logic circuit required for implementing the proposed HIF detection method is also included. With minimal modifications, the logic developed for HIF detection can be applied to low impedance fault (LIF) detection. A complete logic circuit which detects both the HIF and LIF is proposed. Using this combined logic, the need to install separate devices for HIF and LIF detection can be eliminated.
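
    The interplay between the harmonic indicators and the adaptive block-out region can be sketched with a toy decision rule. The settings `band` and `i3_min`, the function name, and the two-indicator combination are hypothetical simplifications; the paper's relay logic combines several more characteristics.

```python
def hif_suspected(i_rms, i3_mag, i_avg_load, band=0.1, i3_min=0.02):
    # A fault is suspected only if the current leaves the adjustable
    # block-out region around the running-average load current AND the
    # 3rd-harmonic content is significant (per-unit values assumed).
    outside_blockout = abs(i_rms - i_avg_load) > band * i_avg_load
    return outside_blockout and i3_mag > i3_min

normal = hif_suspected(i_rms=100.0, i3_mag=0.01, i_avg_load=100.0)
fault = hif_suspected(i_rms=115.0, i3_mag=0.05, i_avg_load=100.0)
```

    Because `i_avg_load` is refreshed every few minutes, a slow load increase widens the block-out region with it, which is what keeps the relay sensitive to small fault currents without nuisance trips.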

  13. Adaptive windowed range-constrained Otsu method using local information

    NASA Astrophysics Data System (ADS)

    Zheng, Jia; Zhang, Dinghua; Huang, Kuidong; Sun, Yuanxi; Tang, Shaojie

    2016-01-01

    An adaptive windowed range-constrained Otsu method using local information is proposed for improving the performance of image segmentation. First, the reason why traditional thresholding methods do not perform well in the segmentation of complicated images is analyzed, and the influences of global and local thresholding on image segmentation are compared. Second, two methods that adaptively change the size of the local window according to local information are proposed, and their characteristics are analyzed: the number of edge pixels in the local window of the binarized variance image is used to adaptively change the local window size. Finally, the superiority of the proposed method over other methods such as the range-constrained Otsu, the active contour model, the double Otsu, Bradley's method, and distance-regularized level set evolution is demonstrated. The experiments validate that the proposed method keeps more detail and achieves a much more satisfactory area overlap measure than the other conventional methods.
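
    The building block of all the variants compared above is the classic Otsu threshold, which maximises the between-class variance of a grey-level histogram. A self-contained sketch (the bin count is an arbitrary choice; the paper's method applies this per adaptive local window with a range constraint):

```python
import numpy as np

def otsu_threshold(values, n_bins=64):
    # Classic Otsu: scan candidate thresholds and keep the one that
    # maximises the between-class variance w0*w1*(m0 - m1)^2.
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for k in range(1, n_bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

# Bimodal sample: 100 values near 0.2 and 100 near 0.8.
vals = np.concatenate([np.full(100, 0.2), np.full(100, 0.8)])
t = otsu_threshold(vals)
```

    Running this within a window whose size tracks local edge density, rather than globally, is what lets the adaptive variant preserve detail in complicated images.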

  14. A Conditional Exposure Control Method for Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Nering, Michael L.; Roussos, Louis A.

    2009-01-01

    In computerized adaptive testing (CAT), ensuring the security of test items is a crucial practical consideration. A common approach to reducing item theft is to define maximum item exposure rates, i.e., to limit the proportion of examinees to whom a given item can be administered. Numerous methods for controlling exposure rates have been proposed…

  15. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…

  16. Visual-adaptation-mechanism based underwater object extraction

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Wang, Huibin; Xu, Lizhong; Shen, Jie

    2014-03-01

    Due to the major obstacles originating from the strong light absorption and scattering in a dynamic underwater environment, underwater optical information acquisition and processing suffer from effects such as limited range, non-uniform lighting, low contrast, and diminished colors, causing them to become a bottleneck for marine scientific research and projects. After studying and generalizing the underwater biological visual mechanism, we explore its advantages in light adaptation, which helps animals to precisely sense the underwater scene and recognize their prey or enemies. Then, aiming to carry the significant advantage of the visual adaptation mechanism over to underwater computer vision tasks, a novel knowledge-based information weighting fusion model is established for underwater object extraction. With this bionic model, dynamic adaptability is given to the underwater object extraction task, making it more robust to the variability of the optical properties in different environments. The capability of the proposed method to adapt to underwater optical environments is shown, and its superior performance in object extraction is demonstrated by comparative experiments.

  17. Adaptive Neural Network Based Control of Noncanonical Nonlinear Systems.

    PubMed

    Zhang, Yanjun; Tao, Gang; Chen, Mou

    2016-09-01

    This paper presents a new study on the adaptive neural network-based control of a class of noncanonical nonlinear systems with large parametric uncertainties. Unlike commonly studied canonical form nonlinear systems whose neural network approximation system models have explicit relative degree structures, which can directly be used to derive parameterized controllers for adaptation, noncanonical form nonlinear systems usually do not have explicit relative degrees, and thus their approximation system models are also in noncanonical forms. It is well-known that the adaptive control of noncanonical form nonlinear systems involves the parameterization of system dynamics. As demonstrated in this paper, it is also the case for noncanonical neural network approximation system models. Effective control of such systems is an open research problem, especially in the presence of uncertain parameters. This paper shows that it is necessary to reparameterize such neural network system models for adaptive control design, and that such reparameterization can be realized using a relative degree formulation, a concept yet to be studied for general neural network system models. This paper then derives the parameterized controllers that guarantee closed-loop stability and asymptotic output tracking for noncanonical form neural network system models. An illustrative example is presented with the simulation results to demonstrate the control design procedure, and to verify the effectiveness of such a new design method.

  18. Goal-based angular adaptivity applied to a wavelet-based discretisation of the neutral particle transport equation

    SciTech Connect

    Goffin, Mark A.; Buchan, Andrew G.; Dargaville, Steven; Pain, Christopher C.; Smith, Paul N.; Smedley-Stevenson, Richard P.

    2015-01-15

    A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation.

    Highlights:
    - Wavelet angular discretisation used to solve the transport equation.
    - Adaptive method developed for the wavelet discretisation.
    - Anisotropic angular resolution demonstrated through the adaptive method.
    - The adaptive method provides improvements in computational efficiency.
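
    The core of any goal-based estimator is to weight each local residual by the adjoint ("importance") solution, so refinement is spent only where it reduces the error in the target functional. A minimal sketch; the marking fraction and the function name are our assumptions.

```python
import numpy as np

def cells_to_refine(residual, adjoint, frac=0.5):
    # Goal-based indicator: |local residual| times |adjoint importance|.
    # Cells whose indicator exceeds `frac` of the maximum (a hypothetical
    # marking rule) get increased angular resolution.
    eta = np.abs(np.asarray(residual) * np.asarray(adjoint))
    return eta >= frac * eta.max()

# A large residual matters little where the adjoint is small, and a
# small residual can still matter where the adjoint is large:
marks = cells_to_refine([0.1, 0.5, 0.01], [1.0, 0.2, 4.0])
```

    This weighting is why the adjoint solve is worth its cost: uniform refinement would have spent effort on the second cell, whose large residual barely affects the functional.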

  19. Adaptive muffler based on controlled flow valves.

    PubMed

    Šteblaj, Peter; Čudina, Mirko; Lipar, Primož; Prezelj, Jurij

    2015-06-01

    An adaptive muffler with a flexible internal structure is considered. Flexibility is achieved using controlled flow valves. The proposed adaptive muffler is able to adapt to changes in engine operating conditions. It consists of a Helmholtz resonator, expansion chamber, and quarter wavelength resonator. Different combinations of the control valves' states at different operating conditions define the main working principle. To control the valve's position, an active noise control approach was used. With the proposed muffler, the transmission loss can be increased by more than 10 dB in the selected frequency range. PMID:26093462

  20. ICASE/LaRC Workshop on Adaptive Grid Methods

    NASA Technical Reports Server (NTRS)

    South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)

    1995-01-01

    Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.

  1. Adaptable Metadata Rich IO Methods for Portable High Performance IO

    SciTech Connect

    Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten

    2009-01-01

    Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as that provided by MPI-IO or POSIX IO. 
Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small

  2. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.

  3. Adaptive IMEX schemes for high-order unstructured methods

    NASA Astrophysics Data System (ADS)

    Vermeire, Brian C.; Nadarajah, Siva

    2015-01-01

    We present an adaptive implicit-explicit (IMEX) method for use with high-order unstructured schemes. The proposed method makes use of the Gerschgorin theorem to conservatively estimate the influence of each individual degree of freedom on the spectral radius of the discretization. This information is used to split the system into implicit and explicit regions, adapting to unsteady features in the flow. We dynamically repartition the domain to balance the number of implicit and explicit elements per core. As a consequence, we are able to achieve an even load balance for each implicit/explicit stage of the IMEX scheme. We investigate linear advection-diffusion, isentropic vortex advection, unsteady laminar flow over an SD7003 airfoil, and turbulent flow over a circular cylinder. Results show that the proposed method consistently yields a stable discretization, and maintains the theoretical order of accuracy of the high-order spatial schemes.
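
    The Gerschgorin-based splitting can be sketched on a matrix operator: every eigenvalue lies within a disc centred at a diagonal entry with radius equal to the off-diagonal row sum, so the row sum of absolute values bounds each row's contribution to the spectral radius. The CFL-style limit and names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def split_imex(A, dt, cfl=1.0):
    # Per-row Gerschgorin bound: |a_ii| + sum_{j != i} |a_ij| is a
    # conservative bound on the modulus of any eigenvalue associated
    # with that degree of freedom. Rows whose bound violates the
    # (hypothetical) explicit stability limit are treated implicitly.
    bound = np.sum(np.abs(A), axis=1)
    return bound * dt > cfl

# A stiff middle row (e.g. a small element) forces implicit treatment:
A = np.array([[-2.0,    1.0,   0.0],
              [ 1.0, -200.0,   1.0],
              [ 0.0,    1.0,  -2.0]])
mask = split_imex(A, dt=0.01)
```

    Only the stiff degrees of freedom pay the cost of an implicit solve; the rest advance explicitly, and repartitioning the mesh by this mask is what keeps the per-core implicit/explicit load balanced.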

  4. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    SciTech Connect

    Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron

    1998-12-08

    Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  5. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction of the number of elements used and CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel

  6. Adaptive color correction based on object color classification

    NASA Astrophysics Data System (ADS)

    Kotera, Hiroaki; Morimoto, Tetsuro; Yasue, Nobuyuki; Saito, Ryoichi

    1998-09-01

    An adaptive color management strategy depending on the image contents is proposed. A pictorial color image is classified into different object areas, each with a clustered color distribution. Euclidean or Mahalanobis color distance measures, and a maximum likelihood method based on the Bayesian decision rule, are introduced for the classification. After the classification process, the pixels of each cluster are projected onto a principal component space by the Hotelling transform, and color corrections are performed so that the principal components of corresponding clustered color areas in the original and printed images match each other.
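
    The two building blocks of this pipeline, Mahalanobis-distance classification and the Hotelling (principal component) transform, can be sketched as follows. The cluster means, the test pixel, and all names are invented for illustration; the paper's actual clustering and correction steps are not reproduced.

```python
import numpy as np

def mahalanobis_sq(x, mean, cov_inv):
    """Squared Mahalanobis distance of a pixel to a color cluster."""
    d = x - mean
    return float(d @ cov_inv @ d)

def hotelling_transform(pixels):
    """Project pixels onto principal-component axes (the Hotelling /
    Karhunen-Loeve transform), ordered by decreasing variance."""
    centered = pixels - pixels.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(pixels, rowvar=False))
    order = np.argsort(eigval)[::-1]            # descending variance
    return centered @ eigvec[:, order]

# Two synthetic RGB clusters standing in for classified object areas.
rng = np.random.default_rng(0)
sky = rng.normal([90.0, 140.0, 200.0], 5.0, size=(200, 3))
skin = rng.normal([200.0, 150.0, 120.0], 5.0, size=(200, 3))

stats = [(c.mean(axis=0), np.linalg.inv(np.cov(c, rowvar=False)))
         for c in (sky, skin)]
pixel = np.array([95.0, 138.0, 197.0])          # a bluish test pixel
label = int(np.argmin([mahalanobis_sq(pixel, m, ci) for m, ci in stats]))
pcs = hotelling_transform(np.vstack([sky, skin]))
```

    The bluish pixel is assigned to the first (sky-like) cluster, and the projected data's variance decreases along successive principal axes, which is what makes per-cluster correction in PC space well conditioned.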

  7. Digital speech enhancement based on DTOMP and adaptive quantile

    NASA Astrophysics Data System (ADS)

    Wang, Anna; Zhou, Xiaoxing; Xue, Changliang; Sun, Xiyan; Sun, Hongying

    2013-03-01

    Compressed sensing (CS) is a sampling theory, based on signal sparsity, that can effectively extract the information contained in a signal. This paper applies CS theory to digital speech enhancement: it proposes an adaptive quantile method for noise power estimation and combines it with an improved double-threshold orthogonal matching pursuit (DTOMP) algorithm for speech reconstruction. Compared with simulation results for spectral subtraction and the subspace algorithm, the experimental results verify the feasibility and effectiveness of the proposed algorithm for speech enhancement.
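
    The reconstruction engine underlying this approach is orthogonal matching pursuit. The sketch below is plain OMP, not the paper's double-threshold variant (which is not specified in the abstract); the dictionary, sparsity level, and coefficient values are illustrative.

```python
import numpy as np

def omp(A, y, sparsity, tol=1e-8):
    """Plain orthogonal matching pursuit: greedily pick the dictionary
    column most correlated with the residual, then re-fit the chosen
    support by least squares."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0                     # never re-pick an atom
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 100))
A /= np.linalg.norm(A, axis=0)                  # unit-norm columns
x_true = np.zeros(100)
x_true[[7, 30, 75]] = [2.0, -1.5, 1.0]          # a sparse coefficient vector
y = A @ x_true                                  # compressed measurements
x_hat = omp(A, y, sparsity=3)
```

    With 60 random measurements of a 3-sparse signal, OMP recovers the support and coefficients exactly, which is the property the speech reconstruction relies on.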

  8. Adaptive Rule Based Fetal QRS Complex Detection Using Hilbert Transform

    PubMed Central

    Ulusar, Umit D.; Govindan, R.B.; Wilson, James D.; Lowery, Curtis L.; Preissl, Hubert; Eswaran, Hari

    2010-01-01

    In this paper we introduce an adaptive rule based QRS detection algorithm using the Hilbert transform (adHQRS) for fetal magnetocardiography processing. Hilbert transform is used to combine multiple channel measurements and the adaptive rule based decision process is used to eliminate spurious beats. The algorithm has been tested with a large number of datasets and promising results were obtained. PMID:19964648
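
    A minimal envelope-based beat detector in the spirit of the above can be sketched with a numpy-only Hilbert transform. This is not the adHQRS rule set: the toy trace, threshold, and refractory period are our assumptions, and real fetal magnetocardiography involves multi-channel combination and much lower signal-to-noise ratios.

```python
import numpy as np

def analytic_signal(x):
    """Hilbert transform via FFT: zero the negative frequencies and
    double the positive ones to obtain the analytic signal."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def detect_beats(x, fs, refractory=0.25):
    """Threshold the Hilbert envelope; a refractory period (a crude
    'rule') ensures one detection per QRS-like pulse."""
    env = np.abs(analytic_signal(x))
    thresh = 0.5 * env.max()
    beats, last = [], -np.inf
    for i, v in enumerate(env):
        if v > thresh and (i - last) / fs > refractory:
            beats.append(i)
            last = i
    return beats

fs = 250
t = np.arange(0, 4, 1 / fs)
beat_times = np.arange(0.3, 4, 0.6)             # toy ~100 bpm rhythm
x = sum(np.exp(-((t - bt) ** 2) / (2 * 0.01 ** 2)) for bt in beat_times)
x = x + 0.05 * np.random.default_rng(2).normal(size=t.size)
beats = detect_beats(x, fs)
```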

  9. Creating Evidence-Based Research in Adapted Physical Activity

    ERIC Educational Resources Information Center

    Reid, Greg; Bouffard, Marcel; MacDonald, Catherine

    2012-01-01

    Professional practice guided by the best research evidence is usually referred to as evidence-based practice. The aim of the present paper is to describe five fundamental beliefs of adapted physical activity practices that should be considered in an 8-step research model to create evidence-based research in adapted physical activity. The five…

  10. Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.

    2008-01-01

    This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.

  11. Robust flicker evaluation method for low power adaptive dimming LCDs

    NASA Astrophysics Data System (ADS)

    Kim, Seul-Ki; Song, Seok-Jeong; Nam, Hyoungsik

    2015-05-01

    This paper describes a robust dimming flicker evaluation method of adaptive dimming algorithms for low power liquid crystal displays (LCDs). While the previous methods use sum of square difference (SSD) values without excluding the image sequence information, the proposed modified SSD (mSSD) values are obtained only with the dimming flicker effects by making use of differential images. The proposed scheme is verified for eight dimming configurations of two dimming level selection methods and four temporal filters over three test videos. Furthermore, a new figure of merit is introduced to cover the dimming flicker as well as image qualities and power consumption.
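
    The core idea, scoring flicker on differential images so scene motion cancels, can be sketched as below. The frame sizes, gains, and the exact form of the metric are our assumptions for illustration; the paper's mSSD definition may differ in detail.

```python
import numpy as np

def mssd_flicker(orig_frames, dimmed_frames):
    """Compare *differential* images of the original and dimmed
    sequences. Scene motion appears in both differentials and largely
    cancels; what remains is the frame-to-frame change introduced by
    the dimming algorithm itself."""
    scores = []
    for k in range(1, len(orig_frames)):
        d_orig = orig_frames[k] - orig_frames[k - 1]
        d_dim = dimmed_frames[k] - dimmed_frames[k - 1]
        scores.append(float(np.sum((d_dim - d_orig) ** 2)))
    return scores

rng = np.random.default_rng(3)
frames = [rng.uniform(0.0, 1.0, size=(8, 8)) for _ in range(4)]
steady = [0.8 * f for f in frames]              # constant backlight level
gains = [0.8, 0.5, 0.8, 0.5]                    # oscillating backlight
flickery = [g * f for g, f in zip(gains, frames)]
s_steady = mssd_flicker(frames, steady)
s_flicker = mssd_flicker(frames, flickery)
```

    A steadily dimmed sequence scores low even though its frames differ from the originals, while an oscillating backlight scores high, which is the separation a plain SSD on raw frames cannot provide.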

  12. [An adaptive thresholding segmentation method for urinary sediment image].

    PubMed

    Li, Yongming; Zeng, Xiaoping; Qin, Jian; Han, Liang

    2009-02-01

    This paper proposes a new method for segmenting complicated defocused urinary sediment images. The main points of the method are: (1) using wavelet transforms and morphology to remove the effect of defocusing and perform a first segmentation, (2) applying adaptive threshold processing to the subimages obtained from the wavelet processing, and (3) using a 'peel off' algorithm to separate overlapping cells. The experimental results showed that the method was not affected by defocusing and made good use of many kinds of image characteristics, so it achieves very precise segmentation; it is effective for defocused urinary sediment image segmentation.

  13. Turbulent Output-Based Anisotropic Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Carlson, Jan-Renee

    2010-01-01

    Controlling discretization error is a remaining challenge for computational fluid dynamics simulation. Grid adaptation is applied to reduce estimated discretization error in drag or pressure integral output functions. To enable application to high, O(10^7), Reynolds number turbulent flows, a hybrid approach is utilized that freezes the near-wall boundary layer grids and adapts the grid away from the no-slip boundaries. The hybrid approach is not applicable to problems with under-resolved initial boundary layer grids, but is a powerful technique for problems with important off-body anisotropic features. Supersonic nozzle plume, turbulent flat plate, and shock-boundary layer interaction examples are presented with comparisons to experimental measurements of pressure and velocity. Adapted grids are produced that resolve off-body features in locations that are not known a priori.

  14. An adaptive locally optimal method detecting weak deterministic signals

    NASA Astrophysics Data System (ADS)

    Wang, C. H.

    1983-10-01

    A new method for detecting weak signals in interference and clutter in radar systems is presented. A detector using this method adapts to an environment that varies with time, is locally optimal for detecting targets, and maintains a constant false-alarm rate (CFAR) as the statistics of the interference and clutter vary with time. The CFAR loss is small, and the detector is simple in structure. The statistical equivalent transfer characteristic of a rank quantizer, which can be used as part of an adaptive locally most powerful (ALMP) detector, is obtained. It is shown that the distribution-free Doppler processor of Dillard (1974) is not only a nonparametric detector, but also an ALMP detector under certain conditions.

  15. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  16. Research on adaptive segmentation and activity classification method of filamentous fungi image in microbe fermentation

    NASA Astrophysics Data System (ADS)

    Cai, Xiaochun; Hu, Yihua; Wang, Peng; Sun, Dujuan; Hu, Guilan

    2009-10-01

    The paper presents an adaptive segmentation and activity classification method for filamentous fungi image. Firstly, an adaptive structuring element (SE) construction algorithm is proposed for image background suppression. Based on watershed transform method, the color labeled segmentation of fungi image is taken. Secondly, the fungi elements feature space is described and the feature set for fungi hyphae activity classification is extracted. The growth rate evaluation of fungi hyphae is achieved by using SVM classifier. Some experimental results demonstrate that the proposed method is effective for filamentous fungi image processing.

  17. Adaptive SVD-Based Digital Image Watermarking

    NASA Astrophysics Data System (ADS)

    Shirvanian, Maliheh; Torkamani Azar, Farah

    Digital data utilization, along with the increased popularity of the Internet, has facilitated information sharing and distribution. However, such applications have also raised concerns about copyright issues and unauthorized modification and distribution of digital data. Digital watermarking techniques, which are proposed to solve these problems, hide some information in digital media and extract it whenever needed to indicate the data owner. In this paper a new method of image watermarking based on singular value decomposition (SVD) of images is proposed which considers the human visual system prior to embedding the watermark by segmenting the original image into several blocks of different sizes, with more density at the edges of the image. In this way the original image quality is preserved in the watermarked image. Additional advantages of the proposed technique are the large capacity of watermark embedding and the robustness of the method against different types of image manipulation techniques.
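
    A minimal SVD embed/extract cycle for one block can be sketched as follows. This is a generic singular-value watermark, not the paper's adaptive block-segmentation scheme; the synthetic block, bit pattern, and strength alpha are ours. The block is built with well-separated singular values so the +/-5% perturbation cannot reorder them.

```python
import numpy as np

def embed(block, bits, alpha=0.05):
    """Embed bits by nudging the block's leading singular values up
    (bit 1) or down (bit 0) by a small relative amount."""
    U, s, Vt = np.linalg.svd(block)
    s_marked = s.copy()
    for i, bit in enumerate(bits):
        s_marked[i] *= 1.0 + alpha * (1 if bit else -1)
    return U @ np.diag(s_marked) @ Vt, s

def extract(marked_block, original_s, n_bits):
    """Recover bits by comparing singular values against the originals."""
    s = np.linalg.svd(marked_block, compute_uv=False)
    return [1 if s[i] > original_s[i] else 0 for i in range(n_bits)]

# Synthetic 16x16 "image block" with geometrically decaying spectrum.
rng = np.random.default_rng(4)
U0, _ = np.linalg.qr(rng.normal(size=(16, 16)))
V0, _ = np.linalg.qr(rng.normal(size=(16, 16)))
block = U0 @ np.diag(100.0 * 0.6 ** np.arange(16)) @ V0.T

bits = [1, 0, 1, 1]
marked, s_ref = embed(block, bits)
recovered = extract(marked, s_ref, n_bits=len(bits))
distortion = np.linalg.norm(marked - block) / np.linalg.norm(block)
```

    The relative Frobenius distortion equals alpha here, which is the imperceptibility/robustness dial: the paper's contribution is choosing the blocks (and effectively alpha) according to human visual sensitivity.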

  18. Method and apparatus for telemetry adaptive bandwidth compression

    NASA Astrophysics Data System (ADS)

    Graham, Olin L.

    1987-07-01

    Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field and range rate information from the sensor are then sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sampler, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically selectively adjustable in functional relationship to each other and the detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes the sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and the bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.

  19. An adaptive P300-based control system

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Allison, Brendan Z.; Sellers, Eric W.; Brunner, Clemens; Horki, Petar; Wang, Xingyu; Neuper, Christa

    2011-06-01

    An adaptive P300 brain-computer interface (BCI) using a 12 × 7 matrix explored new paradigms to improve bit rate and accuracy. During online use, the system adaptively selects the number of flashes to average. Five different flash patterns were tested. The 19-flash paradigm represents the typical row/column presentation (i.e. 12 columns and 7 rows). The 9- and 14-flash A and B paradigms present all items of the 12 × 7 matrix three times using either 9 or 14 flashes (instead of 19), decreasing the amount of time to present stimuli. Compared to 9-flash A, 9-flash B decreased the likelihood that neighboring items would flash when the target was not flashing, thereby reducing the interference from items adjacent to targets. 14-flash A also reduced the adjacent item interference and 14-flash B additionally eliminated successive (double) flashes of the same item. Results showed that the accuracy and bit rate of the adaptive system were higher than those of the non-adaptive system. In addition, 9- and 14-flash B produced significantly higher performance than their respective A conditions. The results also show the trend that the 14-flash B paradigm was better than the 19-flash pattern for naive users.

  20. A New Online Calibration Method for Multidimensional Computerized Adaptive Testing.

    PubMed

    Chen, Ping; Wang, Chun

    2016-09-01

    Multidimensional-Method A (M-Method A) has been proposed as an efficient and effective online calibration method for multidimensional computerized adaptive testing (MCAT) (Chen & Xin, Paper presented at the 78th Meeting of the Psychometric Society, Arnhem, The Netherlands, 2013). However, a key assumption of M-Method A is that it treats person parameter estimates as their true values, thus this method might yield erroneous item calibration when person parameter estimates contain non-ignorable measurement errors. To improve the performance of M-Method A, this paper proposes a new MCAT online calibration method, namely, the full functional MLE-M-Method A (FFMLE-M-Method A). This new method combines the full functional MLE (Jones & Jin in Psychometrika 59:59-75, 1994; Stefanski & Carroll in Annals of Statistics 13:1335-1351, 1985) with the original M-Method A in an effort to correct for the estimation error of ability vector that might otherwise adversely affect the precision of item calibration. Two correction schemes are also proposed when implementing the new method. A simulation study was conducted to show that the new method generated more accurate item parameter estimation than the original M-Method A in almost all conditions. PMID:26608960

  1. Web-Based Adaptive Testing System (WATS) for Classifying Students Academic Ability

    ERIC Educational Resources Information Center

    Lee, Jaemu; Park, Sanghoon; Kim, Kwangho

    2012-01-01

    Computer Adaptive Testing (CAT) has been highlighted as a promising assessment method to fulfill two testing purposes: estimating student academic ability and classifying student academic level. In this paper, we introduce the Web-based Adaptive Testing System (WATS), developed to support a cost-effective assessment for classifying…

  2. Passivity-Based Adaptive Hybrid Synchronization of a New Hyperchaotic System with Uncertain Parameters

    PubMed Central

    2012-01-01

    We investigate the adaptive hybrid synchronization problem for a new hyperchaotic system with uncertain parameters. Based on the passivity theory and the adaptive control theory, corresponding controllers and parameter estimation update laws are proposed to achieve hybrid synchronization between two identical uncertain hyperchaotic systems with different initial values, respectively. Numerical simulation indicates that the presented methods work effectively. PMID:23365538

  3. An extended framework for adaptive playback-based video summarization

    NASA Astrophysics Data System (ADS)

    Peker, Kadir A.; Divakaran, Ajay

    2003-11-01

    In our previous work, we described an adaptive fast playback framework for video summarization where we changed the playback rate using the motion activity feature so as to maintain a constant "pace." This method provides an effective way of skimming through video, especially when the motion is not too complex and the background is mostly still, such as in surveillance video. In this paper, we present an extended summarization framework that, in addition to motion activity, uses semantic cues such as face or skin color appearance, speech and music detection, or other domain dependent semantically significant events to control the playback rate. The semantic features we use are computationally inexpensive and can be computed in compressed domain, yet are robust, reliable, and have a wide range of applicability across different content types. The presented framework also allows for adaptive summaries based on preference, for example, to include more dramatic vs. action elements, or vice versa. The user can switch at any time between the skimming and the normal playback modes. The continuity of the video is preserved, and complete omission of segments that may be important to the user is avoided by using adaptive fast playback instead of skipping over long segments. The rule-set and the input parameters can be further modified to fit a certain domain or application. Our framework can be used by itself, or as a subsequent presentation stage for a summary produced by any other summarization technique that relies on generating a sub-set of the content.

  4. A self-adaptive-grid method with application to airfoil flow

    NASA Technical Reports Server (NTRS)

    Nakahashi, K.; Deiwert, G. S.

    1985-01-01

    A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
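
    The spring analogy is easiest to see in one dimension: give each interval a spring whose stiffness is the local error indicator, and at equilibrium the spacing is inversely proportional to that stiffness. The sketch below is a toy illustration with an invented weight function, not the paper's multidimensional variational formulation with min/max spacing constraints.

```python
import numpy as np

def spring_adapt(x, weight, n_iter=500):
    """1-D spring analogy: the spring between neighbouring points has
    stiffness equal to the local weight (error indicator). Iterative
    relaxation moves each interior point to the force balance of its
    two springs; endpoints stay fixed."""
    x = x.copy()
    for _ in range(n_iter):
        k = weight(0.5 * (x[:-1] + x[1:]))      # stiffness per interval
        x[1:-1] = (k[:-1] * x[:-2] + k[1:] * x[2:]) / (k[:-1] + k[1:])
    return x

def weight(x):
    # Error indicator peaking at x = 0.5, mimicking a shock-like feature.
    return 1.0 + 20.0 * np.exp(-((x - 0.5) ** 2) / 0.01)

x0 = np.linspace(0.0, 1.0, 21)
x_adapted = spring_adapt(x0, weight)
```

    Points cluster where the weight is large while the grid stays ordered, which is the redistribution behaviour the self-adaptive method generalizes to multiple dimensions with orthogonality and smoothness controls.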

  5. Parallel, adaptive finite element methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.

  6. Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems

    NASA Technical Reports Server (NTRS)

    Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.

    1979-01-01

    The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.

  7. Adaptive control based on retrospective cost optimization

    NASA Technical Reports Server (NTRS)

    Santillo, Mario A. (Inventor); Bernstein, Dennis S. (Inventor)

    2012-01-01

    A discrete-time adaptive control law for stabilization, command following, and disturbance rejection that is effective for systems that are unstable, MIMO, and/or nonminimum phase. The adaptive control algorithm includes guidelines concerning the modeling information needed for implementation. This information includes the relative degree, the first nonzero Markov parameter, and the nonminimum-phase zeros. Except when the plant has nonminimum-phase zeros whose absolute value is less than the plant's spectral radius, the required zero information can be approximated by a sufficient number of Markov parameters. No additional information about the poles or zeros need be known. Numerical examples are presented to illustrate the algorithm's effectiveness in handling systems with errors in the required modeling data, unknown latency, sensor noise, and saturation.

  8. An adaptive Tikhonov regularization method for fluorescence molecular tomography.

    PubMed

    Cao, Xu; Zhang, Bin; Wang, Xin; Liu, Fei; Liu, Ke; Luo, Jianwen; Bai, Jing

    2013-08-01

    The high degree of absorption and scattering of photons propagating through biological tissues makes fluorescence molecular tomography (FMT) reconstruction a severe ill-posed problem and the reconstructed result is susceptible to noise in the measurements. To obtain a reasonable solution, Tikhonov regularization (TR) is generally employed to solve the inverse problem of FMT. However, with a fixed regularization parameter, the Tikhonov solutions suffer from low resolution. In this work, an adaptive Tikhonov regularization (ATR) method is presented. Considering that large regularization parameters can smoothen the solution with low spatial resolution, while small regularization parameters can sharpen the solution with high level of noise, the ATR method adaptively updates the spatially varying regularization parameters during the iteration process and uses them to penalize the solutions. The ATR method can adequately sharpen the feasible region with fluorescent probes and smoothen the region without fluorescent probes resorting to no complementary priori information. Phantom experiments are performed to verify the feasibility of the proposed method. The results demonstrate that the proposed method can improve the spatial resolution and reduce the noise of FMT reconstruction at the same time.
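
    The update loop of such a spatially varying scheme can be sketched as follows. This is our own simplified reweighted-ridge illustration, not the authors' ATR algorithm: the forward model, the saliency-based lambda update, and all parameter values are assumptions.

```python
import numpy as np

def adaptive_tikhonov(A, y, n_iter=20, lam_max=1.0, lam_min=1e-4):
    """Spatially varying Tikhonov regularization: after each solve,
    voxels with large reconstructed values receive a small penalty
    (sharpening the probe region) while near-zero voxels receive a
    large one (smoothing the background)."""
    n = A.shape[1]
    lam = np.full(n, lam_max)
    for _ in range(n_iter):
        x = np.linalg.solve(A.T @ A + np.diag(lam), A.T @ y)
        w = np.abs(x) / (np.abs(x).max() + 1e-12)   # 0..1 saliency
        lam = lam_max * (1.0 - w) + lam_min * w
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(30, 60))                  # underdetermined forward model
x_true = np.zeros(60)
x_true[[10, 11, 12]] = 1.0                     # compact fluorescent target
y = A @ x_true + 0.01 * rng.normal(size=30)

x_fixed = np.linalg.solve(A.T @ A + np.eye(60), A.T @ y)   # fixed lambda = 1
x_adapt = adaptive_tikhonov(A, y)
```

    On this synthetic problem the adaptive solution concentrates energy on the true support and lands closer to the ground truth than the fixed-parameter ridge solution, mirroring the resolution gain reported in the abstract.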

  9. Adaptive Training for Voice Conversion Based on Eigenvoices

    NASA Astrophysics Data System (ADS)

    Ohtani, Yamato; Toda, Tomoki; Saruwatari, Hiroshi; Shikano, Kiyohiro

    In this paper, we describe a novel model training method for one-to-many eigenvoice conversion (EVC). One-to-many EVC is a technique for converting a specific source speaker's voice into an arbitrary target speaker's voice. An eigenvoice Gaussian mixture model (EV-GMM) is trained in advance using multiple parallel data sets consisting of utterance-pairs of the source speaker and many pre-stored target speakers. The EV-GMM can be adapted to new target speakers using only a few of their arbitrary utterances by estimating a small number of adaptive parameters. In the adaptation process, several parameters of the EV-GMM to be fixed for different target speakers strongly affect the conversion performance of the adapted model. In order to improve the conversion performance in one-to-many EVC, we propose an adaptive training method of the EV-GMM. In the proposed training method, both the fixed parameters and the adaptive parameters are optimized by maximizing a total likelihood function of the EV-GMMs adapted to individual pre-stored target speakers. We conducted objective and subjective evaluations to demonstrate the effectiveness of the proposed training method. The experimental results show that the proposed adaptive training yields significant quality improvements in the converted speech.

  10. A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES

    SciTech Connect

    Druckmueller, M.

    2013-08-15

    A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.

  11. Image Watermarking Based on Adaptive Models of Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Khawne, Amnach; Hamamoto, Kazuhiko; Chitsobhuk, Orachat

    This paper proposes a digital image watermarking based on adaptive models of human visual perception. The algorithm exploits the local activities estimated from wavelet coefficients of each subband to adaptively control the luminance masking. The adaptive luminance is thus delicately combined with the contrast masking and edge detection and adopted as a visibility threshold. With the proposed combination of adaptive visual sensitivity parameters, the proposed perceptual model can be more appropriate to the different characteristics of various images. The weighting function is chosen such that the fidelity, imperceptibility and robustness could be preserved without making any perceptual difference to the image quality.

  12. Adaptive robust controller based on integral sliding mode concept

    NASA Astrophysics Data System (ADS)

    Taleb, M.; Plestan, F.

    2016-09-01

    This paper proposes, for a class of uncertain nonlinear systems, an adaptive controller based on adaptive second-order sliding mode control and integral sliding mode control concepts. The adaptation strategy solves the problem of gain tuning and has the advantage of chattering reduction. Moreover, only limited information about the perturbation and uncertainties has to be known. The control is composed of two parts: an adaptive one whose objective is to reject the perturbation and system uncertainties, whereas the second one is chosen such that the nominal part of the system is stabilised at zero. To illustrate the effectiveness of the proposed approach, an application to an academic example is shown with simulation results.

  13. A robust adaptive sampling method for faster acquisition of MR images.

    PubMed

    Vellagoundar, Jaganathan; Machireddy, Ramasubba Reddy

    2015-06-01

    A robust adaptive k-space sampling method is proposed for faster acquisition and reconstruction of MR images. In this method, undersampling patterns are generated based on the magnitude profile of fully acquired 2-D k-space data. Images are reconstructed using a compressive sensing reconstruction algorithm. Simulation experiments are performed to assess the performance of the proposed method under various signal-to-noise ratio (SNR) levels. The method performs better than non-adaptive variable density sampling when the k-space SNR is greater than 10 dB. The method is implemented on fully acquired multi-slice raw k-space data and quality assurance phantom data. Data reduction of up to 60% is achieved on the multi-slice imaging data and 75% on the phantom imaging data. The results show that reconstruction accuracy is improved over non-adaptive (conventional) variable density sampling. The proposed sampling method is signal dependent, and the estimation of sampling locations is robust to noise. As a result, it eliminates the need for a mathematical model and parameter tuning to compute k-space sampling patterns, as required in non-adaptive sampling methods.
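
    The core of the signal-dependent sampling idea above can be sketched in a few lines: draw k-space sampling locations with probability proportional to the magnitude profile of a fully acquired reference, so that high-energy (central) regions are sampled densely. This is a minimal illustration rather than the authors' implementation; the function name `adaptive_mask`, the Lorentzian test profile and the 40% sampling budget are our own choices.

```python
import numpy as np

def adaptive_mask(kspace_mag, keep_fraction=0.4, rng=None):
    """Binary undersampling mask whose sampling density follows the
    magnitude profile of a fully acquired k-space (illustrative sketch;
    function and parameter names are ours, not the paper's)."""
    rng = np.random.default_rng(rng)
    p = kspace_mag / kspace_mag.sum()            # sampling probability per location
    n_keep = int(keep_fraction * kspace_mag.size)
    flat_idx = rng.choice(kspace_mag.size, size=n_keep, replace=False, p=p.ravel())
    mask = np.zeros(kspace_mag.size, dtype=bool)
    mask[flat_idx] = True
    return mask.reshape(kspace_mag.shape)

# Synthetic k-space magnitude: energy concentrated at the centre, as in real MR data.
ky, kx = np.mgrid[-64:64, -64:64]
mag = 1.0 / (1.0 + kx**2 + ky**2)
mask = adaptive_mask(mag, keep_fraction=0.4, rng=0)
```

The resulting mask keeps roughly 40% of all locations but samples the low-frequency centre almost fully, mimicking a variable-density pattern derived from the data itself.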

  14. An adaptive altitude information fusion method for autonomous landing processes of small unmanned aerial rotorcraft.

    PubMed

    Lei, Xusheng; Li, Jingjing

    2012-01-01

    This paper presents an adaptive information fusion method to improve the accuracy and reliability of the altitude measurement information for small unmanned aerial rotorcraft during the landing process. To address the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high-frequency noise in the sensor output. Furthermore, to improve the altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate the measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is demonstrated by static tests, hovering flight and autonomous landing flight tests. PMID:23201993
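
    As a rough one-dimensional illustration of online measurement-noise adaptation (not the authors' full extended Kalman filter), the sketch below re-estimates the measurement noise variance R from a sliding window of innovations inside a scalar random-walk Kalman filter; the window-based variance rule stands in for the maximum a posteriori estimator, and all names and values are ours.

```python
import numpy as np

def adaptive_kf(z, q=1e-4, r0=1.0, window=20):
    """Scalar random-walk Kalman filter that re-estimates the measurement
    noise variance R online from recent innovations (a simplified stand-in
    for MAP-based noise covariance estimation)."""
    x, p, r = z[0], 1.0, r0
    innovations, estimates = [], []
    for zk in z:
        p = p + q                          # predict (random-walk state model)
        nu = zk - x                        # innovation
        innovations.append(nu)
        if len(innovations) >= window:
            # innovation variance ~= P + R, so R ~= var(innovations) - P
            r = max(np.var(innovations[-window:]) - p, 1e-6)
        k = p / (p + r)                    # Kalman gain
        x = x + k * nu                     # measurement update
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates), r

# Hovering test: constant 5 m altitude, sensor noise sigma = 0.5 m.
rng = np.random.default_rng(1)
meas = 5.0 + rng.normal(0.0, 0.5, 400)
est, r_hat = adaptive_kf(meas)
```

After a burn-in period the filter tracks the true altitude with much less scatter than the raw sensor, and `r_hat` settles near the true noise variance (0.25) without it being supplied in advance.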

  15. The block adaptive multigrid method applied to the solution of the Euler equations

    NASA Technical Reports Server (NTRS)

    Pantelelis, Nikos

    1993-01-01

    In the present study, a fast and robust scheme for solving complex nonlinear systems of equations is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on the prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. For two test cases, an 18-fold acceleration of the solution was obtained using one quarter of the volumes of a global grid, with the same solution accuracy.

  16. A comparison of locally adaptive multigrid methods: LDC, FAC and FIC

    NASA Technical Reports Server (NTRS)

    Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul

    1993-01-01

    This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.

  17. An Adaptive Altitude Information Fusion Method for Autonomous Landing Processes of Small Unmanned Aerial Rotorcraft

    PubMed Central

    Lei, Xusheng; Li, Jingjing

    2012-01-01

    This paper presents an adaptive information fusion method to improve the accuracy and reliability of the altitude measurement information for small unmanned aerial rotorcraft during the landing process. To address the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high-frequency noise in the sensor output. Furthermore, to improve the altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate the measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is demonstrated by static tests, hovering flight and autonomous landing flight tests. PMID:23201993

  18. Planetary gearbox fault diagnosis using an adaptive stochastic resonance method

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Han, Dong; Lin, Jing; He, Zhengjia

    2013-07-01

    Planetary gearboxes are widely used in aerospace, automotive and heavy industry applications due to their large transmission ratio, strong load-bearing capacity and high transmission efficiency. Tough operating conditions, such as heavy duty and intensive impact loads, may cause gear tooth damage such as fatigue cracks and missing teeth. The challenging issues in fault diagnosis of planetary gearboxes include selection of sensitive measurement locations, investigation of vibration transmission paths and weak feature extraction. One of these issues is how to effectively discover the weak characteristics of faulty components from noisy planetary gearbox signals. To address this issue, an adaptive stochastic resonance (ASR) method is proposed in this paper. The ASR method utilizes the optimization ability of ant colony algorithms to adaptively realize the optimal stochastic resonance system matching the input signal. Using the ASR method, the noise may be weakened and weak characteristics highlighted, so that faults can be diagnosed accurately. A planetary gearbox test rig was established, and experiments with sun gear faults, including a chipped tooth and a missing tooth, were conducted. Vibration signals were collected under loaded conditions at various motor speeds. The proposed method was used to process the collected signals, and the results of feature extraction and fault diagnosis demonstrate its effectiveness.
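
    The essence of stochastic resonance processing can be sketched with the classical bistable system dx/dt = a*x - b*x^3 + u(t): the system parameters are tuned so that noise helps push the state across the potential barrier in step with the weak periodic fault signature. In the sketch below a simple grid search over the parameter a deliberately replaces the paper's ant colony optimisation; all names and values are illustrative.

```python
import numpy as np

def bistable_sr(u, dt, a, b):
    """Euler integration of the bistable SR system dx/dt = a*x - b*x**3 + u(t)."""
    x = np.zeros_like(u)
    for i in range(1, len(u)):
        x[i] = x[i - 1] + dt * (a * x[i - 1] - b * x[i - 1] ** 3 + u[i - 1])
    return x

def band_power(x, dt, f0):
    """Spectral amplitude of x at (the bin nearest to) frequency f0."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), dt)
    return spec[np.argmin(np.abs(freqs - f0))]

# Weak periodic "fault" signature buried in noise.
dt, f0 = 1e-3, 10.0
t = np.arange(0.0, 4.0, dt)
rng = np.random.default_rng(0)
u = 0.3 * np.sin(2 * np.pi * f0 * t) + rng.normal(0.0, 1.0, t.size)

# Grid search over the system parameter stands in for the paper's
# ant colony optimisation (a deliberate simplification).
best_a = max([0.5, 1.0, 2.0, 4.0],
             key=lambda a: band_power(bistable_sr(u, dt, a, 1.0), dt, f0))
x_out = bistable_sr(u, dt, best_a, 1.0)
```

The parameter maximising the spectral amplitude at the known fault frequency is retained; a real implementation would optimise both system parameters and use a proper SNR-type objective.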

  19. Adaptive wavefront sensor based on the Talbot phenomenon.

    PubMed

    Podanchuk, Dmytro V; Goloborodko, Andrey A; Kotov, Myhailo M; Kovalenko, Andrey V; Kurashov, Vitalij N; Dan'ko, Volodymyr P

    2016-04-20

    A new adaptive method of wavefront sensing is proposed and demonstrated. The method is based on the Talbot self-imaging effect, which is observed in an illuminating light beam with strong second-order aberration. Compensation of defocus and astigmatism is achieved through an appropriate choice of the size of the rectangular unit cell of the diffraction grating, which is performed iteratively; a liquid-crystal spatial light modulator is used for this purpose. Self-imaging of a rectangular grating in an astigmatic light beam is demonstrated experimentally. High-order aberrations are detected with respect to the compensated second-order aberration. Comparative results of wavefront sensing with a Shack-Hartmann sensor and the proposed sensor are presented. PMID:27140122

  20. An adaptable peptide-based porous material.

    PubMed

    Rabone, J; Yue, Y-F; Chong, S Y; Stylianou, K C; Bacsa, J; Bradshaw, D; Darling, G R; Berry, N G; Khimyak, Y Z; Ganin, A Y; Wiper, P; Claridge, J B; Rosseinsky, M J

    2010-08-27

    Porous materials find widespread application in storage, separation, and catalytic technologies. We report a crystalline porous solid with adaptable porosity, in which a simple dipeptide linker is arranged in a regular array by coordination to metal centers. Experiments reinforced by molecular dynamics simulations showed that low-energy torsions and displacements of the peptides enabled the available pore volume to evolve smoothly from zero as the guest loading increased. The observed cooperative feedback in sorption isotherms resembled the response of proteins undergoing conformational selection, suggesting an energy landscape similar to that required for protein folding. The flexible peptide linker was shown to play the pivotal role in changing the pore conformation.

  1. Lens based adaptive optics scanning laser ophthalmoscope.

    PubMed

    Felberer, Franz; Kroisamer, Julia-Sophie; Hitzenberger, Christoph K; Pircher, Michael

    2012-07-30

    We present an alternative approach for an adaptive optics scanning laser ophthalmoscope (AO-SLO). In contrast to other commonly used AO-SLO instruments, the imaging optics consist of lenses. Images of the fovea region of 5 healthy volunteers were recorded, and the system was able to resolve human foveal cones in 3 of the 5 volunteers. Additionally, we investigated the capability of the system to support larger scanning angles (up to 5°) on the retina. Finally, to demonstrate the performance of the instrument, images of rod photoreceptors are presented.

  2. The SMART CLUSTER METHOD - adaptive earthquake cluster analysis and declustering

    NASA Astrophysics Data System (ADS)

    Schaefer, Andreas; Daniell, James; Wenzel, Friedemann

    2016-04-01

    Earthquake declustering is an essential part of almost any statistical analysis of the spatial and temporal properties of seismic activity, with typical applications including probabilistic seismic hazard assessments (PSHAs) and earthquake prediction methods. The nature of earthquake clusters and the subsequent declustering of earthquake catalogues play a crucial role in determining the magnitude-dependent earthquake return period and its spatial variation. Various methods of differing complexity have been developed by other researchers to address this issue, ranging from rather simple statistical window methods to complex epidemic models. This study introduces the smart cluster method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal identification. An adaptive search algorithm for data point clusters is adopted: it uses the earthquake density in the spatio-temporal neighbourhood of each event to adjust the search properties. The identified clusters are subsequently analysed to determine directional anisotropy, focusing on strong correlation along the rupture plane, and the search space is adjusted with respect to these directional properties. In the case of rapid subsequent ruptures, like the 1992 Landers sequence or the 2010/2011 Darfield-Christchurch events, an adaptive classification procedure based on near-field searches, support vector machines and temporal splitting is applied to disassemble subsequent ruptures that may have been grouped into a single cluster. The steering parameters of the search behaviour are linked to local earthquake properties such as magnitude of completeness, earthquake density and Gutenberg-Richter parameters. The method is capable of identifying and classifying earthquake clusters in space and time. It is tested and validated using earthquake data from California and New Zealand.
As a result of the cluster identification process, each event in

  3. Smoothed aggregation adaptive spectral element-based algebraic multigrid

    SciTech Connect

    2015-01-20

    SAAMGE provides parallel methods for building multilevel hierarchies and solvers that can be used for elliptic equations with highly heterogeneous coefficients. Additionally, hierarchy adaptation is implemented allowing solving multiple problems with close coefficients without rebuilding the hierarchy.

  4. Kalman filter based control for Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Petit, Cyril; Quiros-Pacheco, Fernando; Conan, Jean-Marc; Kulcsár, Caroline; Raynaud, Henri-François; Fusco, Thierry

    2004-12-01

    Classical adaptive optics suffers from a limited corrected field of view. This drawback has led to the development of multiconjugate adaptive optics (MCAO). While the first MCAO experimental set-ups are presently under construction, little attention has been paid to the control loop. This is, however, a key element in the optimization process, especially for MCAO systems. Different approaches have been proposed in recent articles for astronomical applications: a simple integrator, an optimized modal gain integrator, and Kalman filtering. We study here Kalman filtering, which seems a very promising solution. Following the work of Brice Leroux, we focus on a frequency-domain characterization of Kalman filters, computing a transfer matrix. The result brings much information about their behaviour and allows comparisons with classical controllers. It also appears that straightforward improvements of the system models can enable the filtering of static aberrations and vibrations. Simulation results are proposed and analysed using our frequency-domain characterization. Related problems, such as model errors, aliasing-effect reduction, and experimental implementation and testing of a Kalman filter control loop on a simplified MCAO experimental set-up, are then discussed.

  5. An adaptive pseudo-spectral method for reaction diffusion problems

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Matkowsky, B. J.; Gottlieb, D.; Minkoff, M.

    1989-01-01

    The spectral interpolation error was considered for both the Chebyshev pseudo-spectral and Galerkin approximations. A family of functionals I_r(u) was developed, with the property that the maximum norm of the error is bounded by I_r(u)/J^r, where r is an integer and J is the degree of the polynomial approximation. These functionals are used in the adaptive procedure, whereby the problem is dynamically transformed to minimize I_r(u). The number of collocation points is then chosen to maintain a prescribed error bound. The method is illustrated by various examples from combustion problems in one and two dimensions.

  6. An adaptive pseudo-spectral method for reaction diffusion problems

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Gottlieb, D.; Matkowsky, B. J.; Minkoff, M.

    1987-01-01

    The spectral interpolation error was considered for both the Chebyshev pseudo-spectral and Galerkin approximations. A family of functionals I_r(u) was developed, with the property that the maximum norm of the error is bounded by I_r(u)/J^r, where r is an integer and J is the degree of the polynomial approximation. These functionals are used in the adaptive procedure, whereby the problem is dynamically transformed to minimize I_r(u). The number of collocation points is then chosen to maintain a prescribed error bound. The method is illustrated by various examples from combustion problems in one and two dimensions.

  7. An adaptive response surface method for crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Yang, Ren-Jye; Zhu, Ping

    2013-11-01

    Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.
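
    The selection step can be illustrated with the Bayesian information criterion as a stand-in for the paper's Bayesian metric: several candidate polynomial response surfaces are fitted to noisy "simulation" samples and the one with the lowest score is kept. The candidate library, noise level and the BIC choice are our assumptions, not the paper's.

```python
import numpy as np

def bic(y, yhat, n_params):
    """Bayesian information criterion (our stand-in for the paper's
    Bayesian selection metric): lower is better."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 40)
# Noisy samples of an (unknown to the selector) quadratic response.
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.05, x.size)

scores = {}
for degree in (1, 2, 3, 4):                    # candidate response surfaces
    coeffs = np.polyfit(x, y, degree)
    scores[degree] = bic(y, np.polyval(coeffs, x), degree + 1)
best_degree = min(scores, key=scores.get)
```

The linear surrogate is penalised heavily for its residual error, while the BIC complexity term discourages needlessly high degrees, balancing fit quality against data uncertainty.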

  8. [Molecular genetic bases of adaptation processes and approaches to their analysis].

    PubMed

    Salmenkova, E A

    2013-01-01

    Great interest in studying the molecular genetic bases of adaptation processes is explained by their importance in understanding evolutionary changes, in the development of intraspecific and interspecific genetic diversity, and in the creation of approaches and programs for maintaining and restoring populations. The article examines the sources and conditions for generating adaptive genetic variability and the contributions of neutral and adaptive genetic variability to the population structure of species; methods for identifying adaptive genetic variability at the genome level are also described. Considerable attention is paid to the potential of new genome-analysis technologies, including next-generation sequencing and some accompanying methods. In conclusion, the important role of the joint use of genomics and proteomics approaches in understanding the molecular genetic bases of adaptation is emphasized.

  9. Impulse-based methods for fluid flow

    SciTech Connect

    Cortez, R.

    1995-05-01

    A Lagrangian numerical method based on impulse variables is analyzed. A relation between impulse vectors and vortex dipoles with a prescribed dipole moment is presented. This relation is used to adapt the high-accuracy cutoff functions of vortex methods for use in impulse-based methods. A source of error in the long-time implementation of the impulse method is explained and two techniques for avoiding this error are presented. An application of impulse methods to the motion of a fluid surrounded by an elastic membrane is presented.

  10. An h-adaptive local discontinuous Galerkin method for the Navier-Stokes-Korteweg equations

    NASA Astrophysics Data System (ADS)

    Tian, Lulu; Xu, Yan; Kuerten, J. G. M.; van der Vegt, J. J. W.

    2016-08-01

    In this article, we develop a mesh adaptation algorithm for a local discontinuous Galerkin (LDG) discretization of the (non)-isothermal Navier-Stokes-Korteweg (NSK) equations modeling liquid-vapor flows with phase change. This work is a continuation of our previous research, where we proposed LDG discretizations for the (non)-isothermal NSK equations with a time-implicit Runge-Kutta method. To save computing time and to capture the thin interfaces more accurately, we extend the LDG discretization with a mesh adaptation method. Given the current adapted mesh, a criterion for selecting candidate elements for refinement and coarsening is adopted based on the locally largest value of the density gradient. A strategy to refine and coarsen the candidate elements is then provided. We emphasize that the adaptive LDG discretization is relatively simple and does not require additional stabilization. The use of a locally refined mesh in combination with an implicit Runge-Kutta time method is, however, non-trivial, but results in an efficient time integration method for the NSK equations. Computations, including cases with solid wall boundaries, are provided to demonstrate the accuracy, efficiency and capabilities of the adaptive LDG discretizations.

  11. DEVS-based intelligent control of space adapted fluid mixing

    NASA Technical Reports Server (NTRS)

    Chi, Sung-Do; Zeigler, Bernard P.

    1990-01-01

    The development of an event-based intelligent control system for a space-adapted mixing process, employing the DEVS (Discrete Event System Specification) formalism, is described. In this control paradigm, the controller expects to receive confirming sensor responses to its control commands within definite time windows determined by its DEVS model of the system under control. The DEVS-based intelligent control paradigm was applied to a space-adapted mixing system capable of supporting laboratory automation aboard a Space Station.

  12. Scale-adaptive tensor algebra for local many-body methods of electronic structure theory

    SciTech Connect

    Liakh, Dmitry I

    2014-01-01

    While the formalism of multiresolution analysis (MRA), based on wavelets and adaptive integral representations of operators, is actively progressing in electronic structure theory (mostly on the independent-particle level and, recently, second-order perturbation theory), the concepts of multiresolution and adaptivity can also be utilized within the traditional formulation of correlated (many-particle) theory which is based on second quantization and the corresponding (generally nonorthogonal) tensor algebra. In this paper, we present a formalism called scale-adaptive tensor algebra (SATA) which exploits an adaptive representation of tensors of many-body operators via the local adjustment of the basis set quality. Given a series of locally supported fragment bases of a progressively lower quality, we formulate the explicit rules for tensor algebra operations dealing with adaptively resolved tensor operands. The formalism suggested is expected to enhance the applicability and reliability of local correlated many-body methods of electronic structure theory, especially those directly based on atomic orbitals (or any other localized basis functions).

  13. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang

    2016-02-01

    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and by interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique for detecting bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequencies associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local 'greedy' scheme along with a Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is applied after wavelet filtering. Instead of resorting to a reference signal, as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting fault-related impulses.
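
    A compressed version of the bandpass-then-subtract pipeline can be sketched as follows: a Gaussian (Morlet-like) frequency window isolates the resonance band, a flat noise floor estimated from the in-band median magnitude is subtracted (a crude stand-in for the paper's PF-based noise estimate), and the envelope spectrum then reveals the fault repetition frequency. All parameter values and the synthetic bearing signal are our own.

```python
import numpy as np

def morlet_bandpass(x, fs, fc, bw):
    """Band-pass x with a Gaussian (Morlet-like) frequency window
    centred at fc with bandwidth bw (a simplified wavelet filter)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X *= np.exp(-0.5 * ((f - fc) / bw) ** 2)
    return np.fft.irfft(X, len(x))

def spectral_subtract(x, fs, band, frac=0.5):
    """Subtract a flat noise floor (a fraction of the median in-band
    magnitude) from the spectrum -- a crude version of the paper's
    PF-based in-band noise estimate."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    mag, phase = np.abs(X), np.angle(X)
    in_band = (f >= band[0]) & (f <= band[1])
    mag[in_band] = np.maximum(mag[in_band] - frac * np.median(mag[in_band]), 0.0)
    return np.fft.irfft(mag * np.exp(1j * phase), len(x))

fs, f_res, f_fault = 20_000.0, 3_000.0, 120.0
t = np.arange(0.0, 1.0, 1.0 / fs)
rng = np.random.default_rng(3)
# Fault impulses excite a structural resonance; broadband noise is added.
impulses = (np.sin(2 * np.pi * f_fault * t) > 0.99).astype(float)
x = impulses * np.sin(2 * np.pi * f_res * t) + rng.normal(0.0, 0.5, t.size)

y = spectral_subtract(morlet_bandpass(x, fs, f_res, 400.0), fs,
                      (f_res - 800.0, f_res + 800.0))
envelope = np.abs(y)                        # crude envelope via rectification
env_spec = np.abs(np.fft.rfft(envelope - envelope.mean())) / len(envelope)
env_freqs = np.fft.rfftfreq(len(envelope), 1.0 / fs)
```

The envelope spectrum of the cleaned signal shows a clear line at the 120 Hz fault repetition rate; a real implementation would select the center frequency and bandwidth adaptively, as the abstract describes.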

  14. Brain source localization based on fast fully adaptive approach.

    PubMed

    Ravan, Maryam; Reilly, James P

    2012-01-01

    In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization (beamforming) methods often fail when the number of observations is small. This is particularly true when measuring evoked potentials, especially when the number of electrodes is large. Due to the nonstationarity of the EEG/MEG, an adaptive capability is desirable. Previous work has addressed these issues by reducing the adaptive degrees of freedom (DoFs). This paper develops and tests a new multistage adaptive processing approach for brain source localization that has previously been used in radar statistical signal processing applications with uniform linear antenna arrays. This processing, referred to as the fast fully adaptive (FFA) approach, can significantly reduce the required sample support and computational complexity while still processing all available DoFs. The performance improvement offered by the FFA approach in comparison to fully adaptive minimum variance beamforming (MVB) with limited data is demonstrated by bootstrapping simulated data to evaluate the variability of the source location.
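
    For reference, the baseline that FFA is compared against, minimum variance beamforming, can be sketched directly: the weights w = R^{-1}a / (a^H R^{-1} a) minimise output power subject to unit gain in the look direction. The half-wavelength array, snapshot count and noise level below are our illustrative choices, not the paper's EEG/MEG setup.

```python
import numpy as np

def mvb_weights(R, a):
    """Minimum variance (distortionless response) beamformer weights:
    w = R^{-1} a / (a^H R^{-1} a)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

def steering(n_sensors, theta):
    """Steering vector of a half-wavelength uniform linear array."""
    return np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(theta))

rng = np.random.default_rng(7)
n, snaps, theta_src = 8, 200, 0.3
a = steering(n, theta_src)
s = rng.normal(0.0, 1.0, snaps)                        # source waveform
X = np.outer(a, s) + (rng.normal(0.0, 0.5, (n, snaps))
                      + 1j * rng.normal(0.0, 0.5, (n, snaps)))
R = X @ X.conj().T / snaps                             # sample covariance
w = mvb_weights(R, a)
power = lambda th: abs(w.conj() @ steering(n, th)) ** 2  # scanned beampattern
```

The unit-gain constraint holds exactly at the look direction while other directions are suppressed; the FFA approach in the abstract addresses the case where the sample covariance R is poorly estimated from few snapshots.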

  15. Image subband coding using context-based classification and adaptive quantization.

    PubMed

    Yoo, Y; Ortega, A; Yu, B

    1999-01-01

    Adaptive compression methods have been a key component of many proposed subband (or wavelet) image coding techniques. This paper deals with a particular type of adaptive subband image coding in which we focus on the image coder's ability to adjust itself "on the fly" to the spatially varying statistical nature of the image contents. This backward adaptation is distinguished from the more frequently used forward adaptation in that forward adaptation selects the best operating parameters from a predesigned set and thus uses a considerable amount of side information in order for the encoder and the decoder to operate with the same parameters. Specifically, we present backward adaptive quantization using a new context-based classification technique which classifies each subband coefficient based on the surrounding quantized coefficients. We couple this classification with online parametric adaptation of the quantizer applied to each class. A simple uniform threshold quantizer is employed as the baseline quantizer for which adaptation is achieved. Our subband image coder based on the proposed adaptive classification quantization idea exhibits excellent rate-distortion performance, in particular at very low rates. For popular test images, it is comparable or superior to most state-of-the-art coders in the literature.
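
    The backward-adaptive idea can be sketched as follows: each coefficient is classified from the magnitudes of its already-quantised causal neighbours, so a decoder can re-derive the class without any side information, and each class uses its own quantiser step. The two-class rule, threshold and step sizes below are our simplifications of the scheme.

```python
import numpy as np

def context_quantize(coeffs, steps=(0.5, 2.0), thresh=1.0):
    """Backward-adaptive quantisation sketch: classify each coefficient by
    the mean magnitude of its already-quantised left/top neighbours (the
    causal context), then quantise with that class's step size. The decoder
    can rebuild the classes from the same quantised neighbours, so no side
    information is needed. The two-class rule and values are illustrative."""
    h, w = coeffs.shape
    q = np.zeros_like(coeffs)
    classes = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            ctx = []
            if i > 0: ctx.append(abs(q[i - 1, j]))
            if j > 0: ctx.append(abs(q[i, j - 1]))
            c = int(np.mean(ctx) > thresh) if ctx else 0
            classes[i, j] = c
            step = steps[c]                       # fine step in quiet areas, coarse in busy ones
            q[i, j] = step * np.round(coeffs[i, j] / step)
    return q, classes

rng = np.random.default_rng(4)
# Synthetic subband: a high-activity region embedded in a quiet background.
coeffs = rng.normal(0.0, 0.3, (32, 32))
coeffs[8:24, 8:24] += rng.normal(0.0, 4.0, (16, 16))
q, classes = context_quantize(coeffs)
```

The busy region is automatically assigned the coarse quantiser and the quiet background the fine one, purely from the quantised causal context.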

  16. A GPU-accelerated adaptive discontinuous Galerkin method for level set equation

    NASA Astrophysics Data System (ADS)

    Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.

    2016-01-01

    This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of two- and three-dimensional level set (LS) equation on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. Small global time step size resulting from the local adaptivity is avoided by local time-stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass conservative numerical scheme that preserves the simplicity of LS formulation is obtained. Efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.

  17. An adaptive PCA fusion method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Guo, Qing; Li, An; Zhang, Hongqun; Feng, Zhongkui

    2014-10-01

    The principal component analysis (PCA) method is a popular fusion method used for its efficiency and high spatial resolution improvement. However, spectral distortion is often found in PCA fusion. In this paper, we propose an adaptive PCA method to enhance the spectral quality of the fused image. The amount of spatial detail of the panchromatic (PAN) image injected into each band of the multi-spectral (MS) image is determined by a weighting matrix, which is defined by the edges of the PAN image, the edges of the MS image and the proportions between MS bands. To prove the effectiveness of the proposed method, qualitative visual and quantitative analyses are introduced. The correlation coefficient (CC), the spectral discrepancy (SPD), and the spectral angle mapper (SAM) are used to measure the spectral quality of each fused band image. The Q index is calculated to evaluate the global spectral quality of all the fused bands as a whole. The spatial quality is evaluated by the average gradient (AG) and the standard deviation (STD). Experimental results show that the proposed method substantially improves the spectral quality compared with the original PCA method, while maintaining the original method's high spatial quality.
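
    Classical PCA fusion, on which the adaptive variant builds, can be sketched compactly: the MS bands are projected onto principal components, the first component is blended with a statistically matched PAN image, and the result is back-projected. A scalar blending weight stands in for the paper's edge-based per-pixel weighting matrix; all names and data are illustrative.

```python
import numpy as np

def pca_fusion(ms, pan, weight=1.0):
    """PCA pan-sharpening sketch. `weight` is a scalar stand-in for the
    paper's per-pixel weighting matrix (an intentional simplification)."""
    bands, h, w = ms.shape
    X = ms.reshape(bands, -1).astype(float)
    mu = X.mean(axis=1, keepdims=True)
    Xc = X - mu
    cov = Xc @ Xc.T / Xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    vecs = vecs[:, ::-1]                    # reorder so PC1 comes first
    pcs = vecs.T @ Xc
    p = pan.ravel().astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12)
    if np.corrcoef(p, pcs[0])[0, 1] < 0:    # align sign with PC1 (eigh sign is arbitrary)
        p = -p
    p = p * pcs[0].std() + pcs[0].mean()    # match PC1 statistics
    pcs[0] = weight * p + (1.0 - weight) * pcs[0]
    return (vecs @ pcs + mu).reshape(bands, h, w)

rng = np.random.default_rng(5)
base = rng.random((16, 16))
ms = np.stack([0.9 * base, 0.7 * base + 0.1, 0.5 * base + 0.2])  # correlated MS bands
pan = base + 0.05 * rng.random((16, 16))                          # high-resolution proxy
fused = pca_fusion(ms, pan, weight=0.8)
```

With weight 0 the transform is a pure round trip and the MS image is recovered exactly; intermediate weights trade spatial detail injection against spectral fidelity, which is exactly the trade-off the adaptive weighting matrix controls per pixel.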

  18. A fast, robust, and simple implicit method for adaptive time-stepping on adaptive mesh-refinement grids

    NASA Astrophysics Data System (ADS)

    Commerçon, B.; Debout, V.; Teyssier, R.

    2014-03-01

    Context. Implicit solvers present strong limitations when used on supercomputing facilities, in particular for adaptive mesh-refinement codes. Aims: We present a new method for implicit adaptive time-stepping on adaptive mesh-refinement grids. We implement it in the radiation-hydrodynamics solver we designed for the RAMSES code for astrophysical purposes and, more particularly, for protostellar collapse. Methods: We briefly recall the radiation-hydrodynamics equations and the adaptive time-stepping methodology used for hydrodynamical solvers. We then introduce the different types of boundary conditions (Dirichlet, Neumann, and Robin) that are used at the interface between levels and present our implementation of the new method in the RAMSES code. The method is tested against classical diffusion and radiation-hydrodynamics tests, after which we present an application to protostellar collapse. Results: We show that using Dirichlet boundary conditions at level interfaces is a good compromise between robustness and accuracy and that it can be used in structure formation calculations. The gain in computational time over our former single-time-step method ranges from factors of 5 to 50, depending on the level of adaptive time-stepping and on the problem. We successfully compare the old and new methods for protostellar collapse calculations that involve highly nonlinear physics. Conclusions: We have developed a simple but robust method for adaptive time-stepping of implicit schemes on adaptive mesh-refinement grids. It can be applied to a wide variety of physical problems that involve diffusion processes.

  19. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Solution of the Euler Equations

    SciTech Connect

    Anderson, R W; Elliott, N S; Pember, R B

    2003-02-14

    A new method that combines staggered grid arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the methods are driven by the need to reconcile traditional AMR techniques with the staggered variables and moving, deforming meshes associated with Lagrange based ALE schemes. We develop interlevel solution transfer operators and interlevel boundary conditions first in the case of purely Lagrangian hydrodynamics, and then extend these ideas into an ALE method by developing adaptive extensions of elliptic mesh relaxation techniques. Conservation properties of the method are analyzed, and a series of test problem calculations are presented which demonstrate the utility and efficiency of the method.

  20. Image quality-based adaptive illumination normalisation for face recognition

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

    Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions during the enrolment and identification stages contribute significantly to these intra-class variations. A common approach to addressing the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation to well-lit face images can lead to a decrease in recognition accuracy. This paper presents a dynamic approach to illumination normalisation, based on face image quality. The quality of a given face image is measured in terms of its luminance distortion by comparing the image against a known reference face image. Histogram equalisation is applied to a probe image only if its luminance distortion is higher than a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy than the conventional approach, in which every image is normalised irrespective of the lighting conditions under which it was acquired.
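
    The decision logic can be sketched in a few lines. Here we use the luminance component of the universal image quality index (equal to 1 when the mean luminances match) and equalise only when it falls below a threshold; note this inverts the sense of the paper's distortion measure, and the threshold value and function names are our own.

```python
import numpy as np

def luminance_distortion(img, ref):
    """Luminance component of the universal image quality index:
    2*mu_x*mu_y / (mu_x**2 + mu_y**2); equals 1 when mean luminance matches."""
    mx, my = img.mean(), ref.mean()
    return 2.0 * mx * my / (mx**2 + my**2 + 1e-12)

def hist_equalize(img):
    """Plain histogram equalisation for an 8-bit greyscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * 255).astype(np.uint8)

def adaptive_normalise(img, ref, thresh=0.95):
    """Equalise only when the probe's luminance quality versus the
    reference falls below `thresh` (threshold value is our choice)."""
    return hist_equalize(img) if luminance_distortion(img, ref) < thresh else img

rng = np.random.default_rng(6)
ref = rng.integers(80, 180, (64, 64)).astype(np.uint8)   # well-lit reference
dark = (0.3 * ref).astype(np.uint8)                      # under-exposed probe
out_dark = adaptive_normalise(dark, ref)                 # gets equalised
out_good = adaptive_normalise(ref.copy(), ref)           # passed through untouched
```

The under-exposed probe is brightened by equalisation, while the well-lit image is left alone, avoiding the accuracy loss the abstract attributes to blanket equalisation.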

  1. A Spectral Adaptive Mesh Refinement Method for the Burgers equation

    NASA Astrophysics Data System (ADS)

    Nasr Azadani, Leila; Staples, Anne

    2013-03-01

    Adaptive mesh refinement (AMR) is a powerful technique in computational fluid dynamics (CFD). Many CFD problems have a wide range of scales which vary with time and space. In order to resolve all the scales numerically, high grid resolutions are required: the smaller the scales, the higher the resolution must be. However, small scales usually form in a small portion of the domain or over a short period of time. AMR is an efficient method for solving these types of problems, allowing high grid resolutions where and when they are needed and minimizing memory and CPU time. Here we formulate a spectral version of AMR in order to accelerate simulations of a 1D model of homogeneous isotropic turbulence, the Burgers equation, as a first test of the method. Using pseudo-spectral methods, we apply AMR in Fourier space. The spectral AMR (SAMR) method presented here is applied to the Burgers equation, and the results are compared with those obtained using standard solution methods on a fine mesh.
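
    As a baseline for what a spectral AMR scheme would accelerate, a uniform-grid pseudo-spectral solver for the viscous Burgers equation is straightforward. The sketch below is illustrative only (explicit Euler time stepping, no dealiasing, no adaptivity); all parameter names are our own.

```python
import numpy as np

def burgers_spectral(u0, nu=0.05, dt=1e-3, steps=1000):
    """Uniform-grid pseudo-spectral solver for u_t + u*u_x = nu*u_xx on [0, 2*pi)."""
    n = len(u0)
    k = 1j * np.fft.fftfreq(n, d=1.0 / n)           # spectral derivative operator ik
    u = u0.astype(float)
    for _ in range(steps):
        u_hat = np.fft.fft(u)
        ux = np.real(np.fft.ifft(k * u_hat))        # u_x evaluated in physical space
        uxx = np.real(np.fft.ifft(k**2 * u_hat))    # u_xx
        u = u + dt * (-u * ux + nu * uxx)           # explicit Euler step
    return u
```

    A SAMR scheme would instead refine the retained Fourier modes where and when the solution steepens, rather than carrying the full uniform spectral resolution everywhere.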

  2. Robust image registration using adaptive coherent point drift method

    NASA Astrophysics Data System (ADS)

    Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong

    2016-04-01

    The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, it considers only the global spatial structure of the point sets, without any additional attribute information. The equivalent simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed that automatically determines the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown and determined automatically in the expectation-maximization algorithm. In image registration applications, a block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. Experimental results on optical and remote sensing images show that the proposed method can significantly improve matching performance.

  3. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    PubMed Central

    Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
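
    The core loop of a fitness-adaptive PSO can be sketched compactly. The code below is a hypothetical realisation, not the paper's exact AAPSO update: the acceleration coefficients are scaled by each particle's fitness rank (worse particles are pulled harder toward the best positions), and a toy sphere objective stands in for the SVM cross-validation error over (C, gamma) that AAPSO-SVM actually optimises.

```python
import numpy as np

def aapso_minimise(fitness, dim=2, n_particles=20, iters=100, seed=0):
    """PSO sketch with fitness-adaptive acceleration coefficients (illustrative)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    for _ in range(iters):
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[pbest_f.argmin()]
        # Fitness-adaptive acceleration: rank in [0, 1], worse particles get a
        # stronger pull. This replaces the fixed random coefficients of plain PSO.
        rank = (f - f.min()) / (f.max() - f.min() + 1e-12)
        c = (1.0 + 0.5 * rank)[:, None]
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = np.clip(0.7 * v + c * r1 * (pbest - x) + c * r2 * (gbest - x), -2.0, 2.0)
        x = x + v
    return pbest[pbest_f.argmin()], pbest_f.min()

# Stand-in objective; in AAPSO-SVM the fitness would be SVM cross-validation
# error as a function of the SVM parameters (e.g. C and gamma).
sphere = lambda p: float(np.sum(p**2))
```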

  4. Support vector machine based on adaptive acceleration particle swarm optimization.

    PubMed

    Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584

  5. Generalization Bounds Derived IPM-Based Regularization for Domain Adaptation.

    PubMed

    Meng, Juan; Hu, Guyu; Li, Dong; Zhang, Yanyan; Pan, Zhisong

    2016-01-01

    Domain adaptation has received much attention as a major form of transfer learning. One issue that must be considered in domain adaptation is the gap between the source domain and the target domain. In order to improve the generalization ability of domain adaptation methods, we propose a framework for domain adaptation that combines source and target data with a new regularizer that takes generalization bounds into account. This regularization term uses an integral probability metric (IPM) as the distance between the source and target domains, and it can thus upper-bound the test error of an existing predictor. Since the computation of the IPM involves only the two distributions, this generalization term is independent of the specific classifier. With popular learning models, the empirical risk minimization is expressed as a general convex optimization problem and can thus be solved effectively by existing tools. Empirical studies on synthetic data for regression and real-world data for classification show the effectiveness of this method.
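
    A widely used IPM instance is the kernel maximum mean discrepancy (MMD), the IPM over the unit ball of an RKHS, which a regularizer of this kind could be built on. The sketch below computes a biased squared-MMD estimate between source and target samples with a Gaussian kernel; adding a term like lambda * mmd2(xs, xt) to the empirical risk gives one concrete form of such a regularizer (the paper's exact IPM choice is not assumed here).

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian kernel matrix between sample sets a (n,d) and b (m,d).
    d2 = ((a[:, None, :] - b[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(xs, xt, sigma=1.0):
    """Biased squared-MMD estimate between source samples xs and target samples xt."""
    kss = gaussian_kernel(xs, xs, sigma).mean()
    ktt = gaussian_kernel(xt, xt, sigma).mean()
    kst = gaussian_kernel(xs, xt, sigma).mean()
    return kss + ktt - 2 * kst
```

    Because mmd2 depends only on the two sample sets, not on any classifier, it matches the abstract's point that the generalization term is independent of the specific classifier.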

  6. Development of quantum-based adaptive neuro-fuzzy networks.

    PubMed

    Kim, Sung-Suk; Kwak, Keun-Chang

    2010-02-01

    In this study, we are concerned with a method for constructing quantum-based adaptive neuro-fuzzy networks (QANFNs) with a Takagi-Sugeno-Kang (TSK) fuzzy type based on the fuzzy granulation from a given input-output data set. For this purpose, we developed a systematic approach in producing automatic fuzzy rules based on fuzzy subtractive quantum clustering. This clustering technique is not only an extension of ideas inherent to scale-space and support-vector clustering but also represents an effective prototype that exhibits certain characteristics of the target system to be modeled from the fuzzy subtractive method. Furthermore, we developed linear-regression QANFN (LR-QANFN) as an incremental model to deal with localized nonlinearities of the system, so that all modeling discrepancies can be compensated. After adopting the construction of the linear regression as the first global model, we refined it through a series of local fuzzy if-then rules in order to capture the remaining localized characteristics. The experimental results revealed that the proposed QANFN and LR-QANFN yielded a better performance in comparison with radial basis function networks and the linguistic model obtained in previous literature for an automobile mile-per-gallon prediction, Boston Housing data, and a coagulant dosing process in a water purification plant.

  7. Link-based formalism for time evolution of adaptive networks

    NASA Astrophysics Data System (ADS)

    Zhou, Jie; Xiao, Gaoxi; Chen, Guanrong

    2013-09-01

    Network topology and nodal dynamics are two cornerstones of adaptive networks. Detailed and accurate knowledge of these two ingredients is crucial for understanding the evolution and mechanisms of adaptive networks. In this paper, by adopting the framework of the adaptive SIS model proposed by Gross et al. [Phys. Rev. Lett. 96, 208701 (2006)] and carefully utilizing the degree-correlation information of the network, we propose a link-based formalism for describing the system dynamics with high accuracy and subtle detail. Several specific degree-correlation measures are introduced to reveal the coevolution of network topology and system dynamics.
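
    The adaptive SIS dynamics referenced above (infection along S-I links, recovery of infected nodes, and rewiring of S-I links by susceptible nodes) can be simulated directly. The discrete-time Monte Carlo sketch below is a simplified illustration of the Gross et al. model, not the link-based formalism itself; the rates and the rewiring rule are schematic.

```python
import random

def adaptive_sis(n=200, k=4, p=0.05, r=0.02, w=0.1, steps=200, seed=1):
    """Discrete-time sketch of an adaptive SIS model: susceptible nodes rewire
    away from infected neighbours with probability w per S-I contact."""
    rng = random.Random(seed)
    # Erdos-Renyi-style random graph with mean degree ~ k.
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < k / (n - 1):
                adj[i].add(j); adj[j].add(i)
    infected = set(rng.sample(range(n), n // 10))
    for _ in range(steps):
        new_inf, recover = set(), set()
        for i in infected:                      # recovery I -> S
            if rng.random() < r:
                recover.add(i)
        for i in range(n):                      # infection and rewiring on S-I links
            if i in infected:
                continue
            for j in list(adj[i]):
                if j in infected:
                    if rng.random() < p:
                        new_inf.add(i)
                    elif rng.random() < w:      # rewire the S-I link to a random S-node
                        cand = [m for m in range(n) if m not in infected
                                and m != i and m not in adj[i]]
                        if cand:
                            adj[i].discard(j); adj[j].discard(i)
                            m = rng.choice(cand)
                            adj[i].add(m); adj[m].add(i)
        infected |= new_inf
        infected -= recover
    return infected, adj
```

    The rewiring step is what couples topology to dynamics: it depletes S-I links and builds up S-S links, which is precisely the correlation structure the link-based formalism is designed to track.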

  8. Nonlinear mode decomposition: a noise-robust, adaptive decomposition method.

    PubMed

    Iatsenko, Dmytro; McClintock, Peter V E; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool-nonlinear mode decomposition (NMD)-which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques-which, together with the adaptive choice of their parameters, make it extremely noise robust-and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  9. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  10. Nonlinear mode decomposition: a noise-robust, adaptive decomposition method.

    PubMed

    Iatsenko, Dmytro; McClintock, Peter V E; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool-nonlinear mode decomposition (NMD)-which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques-which, together with the adaptive choice of their parameters, make it extremely noise robust-and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download. PMID:26465549

  11. Key techniques and applications of adaptive growth method for stiffener layout design of plates and shells

    NASA Astrophysics Data System (ADS)

    Ding, Xiaohong; Ji, Xuerong; Ma, Man; Hou, Jianyun

    2013-11-01

    The application of the adaptive growth method is limited because several key techniques during the design process need manual intervention by designers. Key techniques of the method, including ground structure construction and seed selection, are studied so as to improve the effectiveness and applicability of the adaptive growth method in stiffener layout design optimization of plates and shells. Three schemes of ground structures, composed of different shell and beam elements, are proposed. It is found that the main stiffener layouts resulting from different ground structures are almost the same, but the ground structure composed of 8-node shell elements together with 3-node and 2-node beam elements yields the clearest stiffener layout, with good adaptability and low computational cost. An automatic seed selection approach is proposed, based on the rules that seeds should be positioned where the structural strain energy is large for the minimum compliance problem and should satisfy a dispersion requirement. The adaptive growth method with the suggested key techniques is integrated into an ANSYS-based program, which provides a design tool for stiffener layout design optimization of plates and shells. Typical design examples are illustrated, including plate and shell structures designed for minimum compliance and maximum buckling stability. In addition, as a practical mechanical design example, the stiffener layout of an inlet structure for a large-scale electrostatic precipitator is also demonstrated. The design results show that the adaptive growth method integrated with the suggested key techniques can effectively and flexibly handle stiffener layout design problems for plates and shells with complex geometry and loading conditions to achieve various design objectives, thus providing a new solution method for engineering structural topology design optimization.

  12. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    NASA Technical Reports Server (NTRS)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single-layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest-descent method such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster, and to a smaller value of cost, than both steepest-descent methods such as backpropagation-through-time and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
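
    For the simplest instance of such an architecture, a single FIR filter followed by a static nonlinearity, a damped Gauss-Newton update can be written in a few lines. The sketch below is illustrative only: it fixes the nonlinearity to tanh and uses constant Levenberg-style damping in place of the patent's adaptive learning rate; all names are ours.

```python
import numpy as np

def fit_wiener_fir(x, d, taps=4, iters=50, lam=1e-3):
    """Damped Gauss-Newton fit of an FIR filter followed by a fixed tanh
    nonlinearity: model output y = tanh(X @ h)."""
    n = len(x)
    # Delay-line regressor matrix: row t holds x[t], x[t-1], ..., x[t-taps+1].
    X = np.column_stack([np.concatenate([np.zeros(k), x[:n - k]]) for k in range(taps)])
    h = np.zeros(taps)
    for _ in range(iters):
        y = np.tanh(X @ h)
        e = d - y
        J = X * (1 - y**2)[:, None]        # Jacobian of y w.r.t. h (chain rule)
        # Gauss-Newton step with damping lam (Levenberg-Marquardt flavour).
        h = h + np.linalg.solve(J.T @ J + lam * np.eye(taps), J.T @ e)
    return h
```

    Because the step uses second-order curvature information through J.T @ J, it moves much closer to the Newton direction than a steepest-descent update would, which is the abstract's central point.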

  13. Distributed adaptive simulation through standards-based integration of simulators and adaptive learning systems.

    PubMed

    Bergeron, Bryan; Cline, Andrew; Shipley, Jaime

    2012-01-01

    We have developed a distributed, standards-based architecture that enables simulation and simulator designers to leverage adaptive learning systems. Our approach, which incorporates an electronic competency record, open source LMS, and open source microcontroller hardware, is a low-cost, pragmatic option to integrating simulators with traditional courseware. PMID:22356955

  14. Adaptive Mesh Refinement in Computational Astrophysics -- Methods and Applications

    NASA Astrophysics Data System (ADS)

    Balsara, D.

    2001-12-01

    The advent of robust, reliable and accurate higher-order Godunov schemes for many of the systems of equations of interest in computational astrophysics has made it important to understand how to solve them in a multi-scale fashion. This is so because the physics associated with astrophysical phenomena evolves in a multi-scale fashion, and we wish to arrive at a multi-scale simulational capability to represent that physics. Because astrophysical systems have magnetic fields, multi-scale magnetohydrodynamics (MHD) is of special interest. In this paper we first discuss general issues in adaptive mesh refinement (AMR). We then focus on the important issues in carrying out divergence-free AMR-MHD and catalogue the progress we have made in that area. We show that AMR methods lend themselves to easy parallelization. We then discuss applications of the RIEMANN framework for AMR-MHD to problems in computational astrophysics.

  15. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what the most suitable criteria for adaptation are. We will present the goal-oriented error estimation procedure, in which such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task, fundamentally limited by the turnaround times.

  16. Applications of automatic mesh generation and adaptive methods in computational medicine

    SciTech Connect

    Schmidt, J.A.; Macleod, R.S.; Johnson, C.R.; Eason, J.C.

    1995-12-31

    Important problems exist in Computational Medicine that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state-of-the-art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications, we present a general-purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.

  17. Adaptive Device Context Based Mobile Learning Systems

    ERIC Educational Resources Information Center

    Pu, Haitao; Lin, Jinjiao; Song, Yanwei; Liu, Fasheng

    2011-01-01

    Mobile learning is e-learning delivered through mobile computing devices, which represents the next stage of computer-aided, multi-media based learning. Therefore, mobile learning is transforming the way of traditional education. However, as most current e-learning systems and their contents are not suitable for mobile devices, an approach for…

  18. Adaptive non-local means method for speckle reduction in ultrasound images

    NASA Astrophysics Data System (ADS)

    Ai, Ling; Ding, Mingyue; Zhang, Xuming

    2016-03-01

    Noise removal is a crucial step to enhance the quality of ultrasound images. However, some existing despeckling methods cannot ensure satisfactory restoration performance. In this paper, an adaptive non-local means (ANLM) filter is proposed for speckle noise reduction in ultrasound images. The distinctive property of the proposed method lies in that the decay parameter will not take the fixed value for the whole image but adapt itself to the variation of the local features in the ultrasound images. In the proposed method, the pre-filtered image will be obtained using the traditional NLM method. Based on the pre-filtered result, the local gradient will be computed and it will be utilized to determine the decay parameter adaptively for each image pixel. The final restored image will be produced by the ANLM method using the obtained decay parameters. Simulations on the synthetic image show that the proposed method can deliver sufficient speckle reduction while preserving image details very well and it outperforms the state-of-the-art despeckling filters in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Experiments on the clinical ultrasound image further demonstrate the practicality and advantage of the proposed method over the compared filtering methods.
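
    The adaptive-decay idea can be prototyped directly: pre-compute a local smoothness map, then shrink the NLM decay parameter where the local gradient is large so that edges are averaged less aggressively. The sketch below is a deliberately small, slow reference implementation; the adaptation rule h = h0 / (1 + |grad|) is our own illustrative choice, not the paper's formula, and the gradient is taken on the noisy image rather than a pre-filtered one for brevity.

```python
import numpy as np

def adaptive_nlm(img, h0=0.4, patch=1, search=3):
    """Non-local means with a per-pixel decay parameter (illustrative sketch)."""
    pad = patch + search
    p = np.pad(img.astype(float), pad, mode='reflect')
    gy, gx = np.gradient(p)
    grad = np.hypot(gy, gx)                    # local gradient magnitude map
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            h = h0 / (1.0 + grad[ci, cj])      # adaptive decay: smaller near edges
            ref = p[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    qi, qj = ci + di, cj + dj
                    q = p[qi - patch:qi + patch + 1, qj - patch:qj + patch + 1]
                    wgt = np.exp(-np.mean((ref - q)**2) / (h**2))
                    wsum += wgt
                    acc += wgt * p[qi, qj]
            out[i, j] = acc / wsum
    return out
```

    In smooth regions h stays large, so many patches receive substantial weight and noise is averaged out; near edges h shrinks, the weights become selective, and detail is preserved.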

  19. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate model-based multi-objective optimization strategy, which combines the benefits of invariant manifolds and low-thrust control to develop low-computational-cost transfer trajectories between libration orbits around the L1 and L2 libration points in the Sun-Earth system, is proposed in this paper. A new structure for the multi-objective transfer trajectory optimization model is established that divides the transfer trajectory into several segments and assigns the dominant roles of invariant manifolds and low-thrust control in different segments. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization agree with those obtained using direct multi-objective optimization methods, while the computational workload of the former is only approximately 10% of that of the latter. Furthermore, the Pareto-point generating efficiency of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of direct multi-objective optimization. The proposed adaptive surrogate-based multi-objective optimization therefore offers clear advantages over direct multi-objective optimization methods.

  20. Adaptive control system having hedge unit and related apparatus and methods

    NASA Technical Reports Server (NTRS)

    Johnson, Eric Norman (Inventor); Calise, Anthony J. (Inventor)

    2007-01-01

    The invention includes an adaptive control system used to control a plant. The adaptive control system includes a hedge unit that receives at least one control signal and a plant state signal. The hedge unit generates a hedge signal based on the control signal, the plant state signal, and a hedge model including a first model having one or more characteristics to which the adaptive control system is not to adapt, and a second model not having the characteristic(s) to which the adaptive control system is not to adapt. The hedge signal is used in the adaptive control system to remove the effect of the characteristic from a signal supplied to an adaptation law unit of the adaptive control system so that the adaptive control system does not adapt to the characteristic in controlling the plant.

  1. Adaptive control system having hedge unit and related apparatus and methods

    NASA Technical Reports Server (NTRS)

    Johnson, Eric Norman (Inventor); Calise, Anthony J. (Inventor)

    2003-01-01

    The invention includes an adaptive control system used to control a plant. The adaptive control system includes a hedge unit that receives at least one control signal and a plant state signal. The hedge unit generates a hedge signal based on the control signal, the plant state signal, and a hedge model including a first model having one or more characteristics to which the adaptive control system is not to adapt, and a second model not having the characteristic(s) to which the adaptive control system is not to adapt. The hedge signal is used in the adaptive control system to remove the effect of the characteristic from a signal supplied to an adaptation law unit of the adaptive control system so that the adaptive control system does not adapt to the characteristic in controlling the plant.

  2. Combining Adaptive Hypermedia with Project and Case-Based Learning

    ERIC Educational Resources Information Center

    Papanikolaou, Kyparisia; Grigoriadou, Maria

    2009-01-01

    In this article we investigate the design of educational hypermedia based on constructivist learning theories. According to the principles of project and case-based learning we present the design rational of an Adaptive Educational Hypermedia system prototype named MyProject; learners working with MyProject undertake a project and the system…

  3. A wavelet-optimized, very high order adaptive grid and order numerical method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data, differentiating this polynomial, and finally evaluating the polynomial at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or perhaps exponential polynomials. This paper begins with a comparison of such differencing operator constructions. Next, the issue of proper grids for high order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts both the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid: at each level of refinement the grid is a Chebyshev grid, and this grid is refined locally based on wavelet analysis.
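
    Differentiation on a Chebyshev grid, the building block of the adaptive scheme described above, is commonly implemented with the standard differentiation matrix on Chebyshev-Gauss-Lobatto points (Trefethen's construction). A minimal sketch, with function and variable names our own:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and grid x on n+1 Gauss-Lobatto points.
    For smooth f, D @ f(x) approximates f'(x) with spectral accuracy."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)        # Gauss-Lobatto nodes on [-1, 1]
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0                              # endpoint weights
    c = c * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1)) # off-diagonal entries
    D = D - np.diag(D.sum(axis=1))                  # diagonal via negative row sums
    return D, x
```

    Because the construction differentiates the interpolating polynomial exactly, D applied to samples of any polynomial of degree at most n reproduces its derivative to rounding error.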

  4. Developing the evidence base for mainstreaming adaptation of stormwater systems to climate change.

    PubMed

    Gersonius, B; Nasruddin, F; Ashley, R; Jeuken, A; Pathirana, A; Zevenbergen, C

    2012-12-15

    In a context of high uncertainty about hydro-climatic variables, the development of updated methods for climate impact and adaptation assessment is as important as, if not more important than, the provision of improved climate change data. In this paper, we introduce a hybrid method to facilitate mainstreaming adaptation of stormwater systems to climate change: the Mainstreaming method. The Mainstreaming method starts with an analysis of adaptation tipping points (ATPs), which is effect-based. These are points of reference where the magnitude of climate change is such that acceptable technical, environmental, societal or economic standards may be compromised. It extends the ATP analysis to include aspects of a bottom-up approach, namely the analysis of adaptation opportunities in the stormwater system. The results from both analyses are then used in combination to identify and exploit Adaptation Mainstreaming Moments (AMMs). Use of this method will enhance understanding of the adaptive potential of stormwater systems. We have applied the proposed hybrid method to the management of flood risk for an urban stormwater system in Dordrecht (the Netherlands). The main finding of this case study is that the application of the Mainstreaming method helps to increase the no-/low-regret character of adaptation for several reasons: it focuses attention on the most urgent effects of climate change; it is expected to lead to cost reductions, since adaptation options can be integrated into infrastructure and building design at an early stage instead of being applied separately; it leads to the development of area-specific responses, which could not have been developed at a higher scale level; and it makes it possible to take account of local values and sensibilities, which contributes to increased public and political support for the adaptive strategies.

  5. Probabilistic dual heuristic programming-based adaptive critic

    NASA Astrophysics Data System (ADS)

    Herzallah, Randa

    2010-02-01

Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  6. Adaptive projection method applied to three-dimensional ultrasonic focusing and steering through the ribs.

    PubMed

    Cochard, E; Aubry, J F; Tanter, M; Prada, C

    2011-08-01

An adaptive projection method for ultrasonic focusing through the rib cage, with minimal energy deposition on the ribs, was evaluated experimentally in 3D geometry. Adaptive projection is based on decomposition of the time-reversal operator (DORT method) and projection on the "noise" subspace. It is shown that 3D implementation of this method is straightforward, and not more time-consuming than 2D. Comparisons are made between adaptive projection, spherical focusing, and a previously proposed time-reversal focusing method, by measuring pressure fields in the focal plane and rib region using the three methods. The ratio of the specific absorption rate at the focus over the one at the ribs was found to be increased by a factor of up to eight, versus spherical emission. Beam steering out of the geometric focus was also investigated. For all configurations, projecting steered emissions was found to deposit less energy on the ribs than steering time-reversed emissions; thus the non-invasive method presented here is more efficient than state-of-the-art invasive techniques. In fact, this method could be used for real-time treatment, because a single acquisition of back-scattered echoes from the ribs is enough to treat a large volume around the focus, thanks to real-time projection of the steered beams.

  7. An Evidence-Based Public Health Approach to Climate Change Adaptation

    PubMed Central

    Eidson, Millicent; Tlumak, Jennifer E.; Raab, Kristin K.; Luber, George

    2014-01-01

    Background: Public health is committed to evidence-based practice, yet there has been minimal discussion of how to apply an evidence-based practice framework to climate change adaptation. Objectives: Our goal was to review the literature on evidence-based public health (EBPH), to determine whether it can be applied to climate change adaptation, and to consider how emphasizing evidence-based practice may influence research and practice decisions related to public health adaptation to climate change. Methods: We conducted a substantive review of EBPH, identified a consensus EBPH framework, and modified it to support an EBPH approach to climate change adaptation. We applied the framework to an example and considered implications for stakeholders. Discussion: A modified EBPH framework can accommodate the wide range of exposures, outcomes, and modes of inquiry associated with climate change adaptation and the variety of settings in which adaptation activities will be pursued. Several factors currently limit application of the framework, including a lack of higher-level evidence of intervention efficacy and a lack of guidelines for reporting climate change health impact projections. To enhance the evidence base, there must be increased attention to designing, evaluating, and reporting adaptation interventions; standardized health impact projection reporting; and increased attention to knowledge translation. This approach has implications for funders, researchers, journal editors, practitioners, and policy makers. Conclusions: The current approach to EBPH can, with modifications, support climate change adaptation activities, but there is little evidence regarding interventions and knowledge translation, and guidelines for projecting health impacts are lacking. Realizing the goal of an evidence-based approach will require systematic, coordinated efforts among various stakeholders. Citation: Hess JJ, Eidson M, Tlumak JE, Raab KK, Luber G. 2014. An evidence-based public

  8. CRISPR-Based Adaptive Immune Systems

    PubMed Central

    Terns, Michael P.; Terns, Rebecca M.

    2011-01-01

    CRISPR-Cas systems are recently discovered, RNA-based immune systems that control invasions of viruses and plasmids in archaea and bacteria. Prokaryotes with CRISPR-Cas immune systems capture short invader sequences within the CRISPR loci in their genomes, and small RNAs produced from the CRISPR loci (CRISPR (cr)RNAs) guide Cas proteins to recognize and degrade (or otherwise silence) the invading nucleic acids. There are multiple variations of the pathway found among prokaryotes, each mediated by largely distinct components and mechanisms that we are only beginning to delineate. Here we will review our current understanding of the remarkable CRISPR-Cas pathways with particular attention to studies relevant to systems found in the archaea. PMID:21531607

  9. Modeling of Rate-Dependent Hysteresis Using a GPO-Based Adaptive Filter.

    PubMed

    Zhang, Zhen; Ma, Yaopeng

    2016-02-06

A novel generalized play operator-based (GPO-based) nonlinear adaptive filter is proposed to model rate-dependent hysteresis nonlinearity for smart actuators. In the proposed filter, the input signal vector consists of the output of a tapped delay line. GPOs with various thresholds are used to construct a nonlinear network and connected with the input signals. The output signal of the filter is composed of a linear combination of signals from the output of GPOs. The least-mean-square (LMS) algorithm is used to adjust the weights of the nonlinear filter. The modeling results of four adaptive filter methods are compared: GPO-based adaptive filter, Volterra filter, backlash filter and linear adaptive filter. Moreover, a phenomenological operator-based model, the rate-dependent generalized Prandtl-Ishlinskii (RDGPI) model, is compared to the proposed adaptive filter. The various rate-dependent modeling methods are applied to model the rate-dependent hysteresis of a giant magnetostrictive actuator (GMA). It is shown from the modeling results that the GPO-based adaptive filter can describe the rate-dependent hysteresis nonlinearity of the GMA more accurately and effectively.
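The filter structure described above (play-operator features combined linearly, with weights adjusted by the LMS rule) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the classical play operator, the thresholds, and the step size are all assumptions.

```python
# Sketch: adaptive filter whose features are play operators (a simple GPO
# instance) with different thresholds, trained by least-mean-squares (LMS).

def play_operator(x_seq, r):
    """Classical play (backlash) operator with threshold r, initial state 0."""
    y, out = 0.0, []
    for x in x_seq:
        y = max(x - r, min(x + r, y))
        out.append(y)
    return out

def lms_train(x_seq, d_seq, thresholds, mu=0.05, epochs=50):
    """Fit the output weights over GPO features with the LMS rule w += mu*e*phi."""
    feats = [play_operator(x_seq, r) for r in thresholds]
    w = [0.0] * len(thresholds)
    for _ in range(epochs):
        for t in range(len(x_seq)):
            phi = [f[t] for f in feats]
            e = d_seq[t] - sum(wi * p for wi, p in zip(w, phi))
            w = [wi + mu * e * p for wi, p in zip(w, phi)]
    return w

def predict(x_seq, thresholds, w):
    feats = [play_operator(x_seq, r) for r in thresholds]
    return [sum(wi * f[t] for wi, f in zip(w, feats))
            for t in range(len(x_seq))]
```

On a synthetic hysteretic target that lies in the model class (a weighted sum of two play operators), the LMS weights converge to the generating weights; the paper's filter additionally feeds the GPOs from a tapped delay line to capture rate dependence.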

  11. Method for removing tilt control in adaptive optics systems

    DOEpatents

    Salmon, Joseph Thaddeus

    1998-01-01

A new adaptive optics system and method of operation, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G′ = (I − X(XᵀX)⁻¹Xᵀ)G(I − A)

  12. Method for removing tilt control in adaptive optics systems

    DOEpatents

    Salmon, J.T.

    1998-04-28

A new adaptive optics system and method of operation are disclosed, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G′ = (I − X(XᵀX)⁻¹Xᵀ)G(I − A). 3 figs.
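The modified gain matrix in the patent's equation, G' = (I - X (X^T X)^{-1} X^T) G (I - A), can be checked numerically: the left-hand factor is the orthogonal projector onto the complement of the column space of X, so the modes spanned by X (tilt, in the patent) are removed from the corrected gain. The sketch below uses plain Python and illustrative matrices; the function name `tilt_free_gain` and the matrix contents are not from the patent.

```python
# Numerical sketch of G' = (I - X (X^T X)^{-1} X^T) G (I - A).
# Columns of X span the modes to be removed from the gain matrix G.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def inverse(A):
    """Gauss-Jordan inverse with partial pivoting (assumes A is invertible)."""
    n = len(A)
    M = [row[:] + identity(n)[i] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [v / piv for v in M[c]]
        for r in range(n):
            if r != c:
                f = M[r][c]
                M[r] = [v - f * w for v, w in zip(M[r], M[c])]
    return [row[n:] for row in M]

def tilt_free_gain(G, X, A):
    """Apply the patent's formula; the projector annihilates the span of X."""
    n = len(G)
    XXtinv = matmul(matmul(X, inverse(matmul(transpose(X), X))), transpose(X))
    P = [[identity(n)[i][j] - XXtinv[i][j] for j in range(n)] for i in range(n)]
    IA = [[identity(n)[i][j] - A[i][j] for j in range(n)] for i in range(n)]
    return matmul(matmul(P, G), IA)
```

A quick property check: since Xᵀ(I − X(XᵀX)⁻¹Xᵀ) = 0, the product XᵀG′ vanishes, confirming that the removed modes no longer contribute to the corrected gain.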

  13. An adaptive training method for optimal interpolative neural nets.

    PubMed

    Liu, T Z; Yen, C W

    1997-04-01

In contrast to conventional multilayered feedforward networks which are typically trained by iterative gradient search methods, an optimal interpolative (OI) net can be trained by a noniterative least squares algorithm called RLS-OI. The basic idea of RLS-OI is to use a subset of the training set, whose inputs are called subprototypes, to constrain the OI net solution. A subset of these subprototypes, called prototypes, is then chosen as the parameter vectors of the activation functions of the OI net to satisfy the subprototype constraints in the least squares (LS) sense. By dynamically increasing the numbers of subprototypes and prototypes, RLS-OI evolves the OI net from scratch to the extent sufficient to solve a given classification problem. To improve the performance of RLS-OI, this paper addresses two important problems in OI net training: the selection of the subprototypes and the selection of the prototypes. By choosing subprototypes from poorly classified regions, this paper proposes a new subprototype selection method which is adaptive to the changing classification performance of the growing OI net. This paper also proposes a new prototype selection criterion to reduce the complexity of the OI net. For the same training accuracy, simulation results demonstrate that the proposed approach produces a smaller OI net than the RLS-OI algorithm. Experimental results also show that the proposed approach is less sensitive to variation of the training set than RLS-OI.

  14. Adaptive conventional power system stabilizer based on artificial neural network

    SciTech Connect

    Kothari, M.L.; Segal, R.; Ghodki, B.K.

    1995-12-31

This paper deals with an artificial neural network (ANN)-based adaptive conventional power system stabilizer (PSS). The ANN comprises an input layer, a hidden layer and an output layer. The input vector to the ANN comprises real power (P) and reactive power (Q), while the output vector comprises optimum PSS parameters. A systematic approach for generating a training set covering a wide range of operating conditions is presented. The ANN has been trained using the back-propagation training algorithm. Investigations reveal that the dynamic performance of the ANN-based adaptive conventional PSS is quite insensitive to wide variations in loading conditions.

  15. An Adaptive Feedback and Review Paradigm for Computer-Based Drills.

    ERIC Educational Resources Information Center

    Siegel, Martin A.; Misselt, A. Lynn

    The Corrective Feedback Paradigm (CFP), which has been refined and expanded through use on the PLATO IV Computer-Based Education System, is based on instructional design strategies implied by stimulus-locus analyses, direct instruction, and instructional feedback methods. Features of the paradigm include adaptive feedback techniques with…

  16. An h-adaptive finite element method for turbulent heat transfer

    SciTech Connect

Carrington, David B

    2009-01-01

A two-equation turbulence closure model (k-ω) using an h-adaptive grid technique and the finite element method (FEM) has been developed to simulate low Mach flow and heat transfer. Such flows arise in many engineering and environmental applications. Of particular interest in the engineering modeling areas are combustion, solidification, and heat exchanger design. Flows for indoor air quality modeling and atmospheric pollution transport are typical types of environmental flows modeled with this method. The numerical method is based on a hybrid finite element model using an equal-order projection process. The model includes thermal and species transport, localized mesh refinement (h-adaptive) and Petrov-Galerkin weighting for stabilizing the advection. This work develops the continuum model of a two-equation turbulence closure method. The fractional step solution method is stated along with the h-adaptive grid method (Carrington and Pepper, 2002). Solutions are presented for 2D flow over a backward-facing step.

  17. Investigation of the effects of color on judgments of sweetness using a taste adaptation method.

    PubMed

    Hidaka, Souta; Shimoda, Kazumasa

    2014-01-01

    It has been reported that color can affect the judgment of taste. For example, a dark red color enhances the subjective intensity of sweetness. However, the underlying mechanisms of the effect of color on taste have not been fully investigated; in particular, it remains unclear whether the effect is based on cognitive/decisional or perceptual processes. Here, we investigated the effect of color on sweetness judgments using a taste adaptation method. A sweet solution whose color was subjectively congruent with sweetness was judged as sweeter than an uncolored sweet solution both before and after adaptation to an uncolored sweet solution. In contrast, subjective judgment of sweetness for uncolored sweet solutions did not differ between the conditions following adaptation to a colored sweet solution and following adaptation to an uncolored one. Color affected sweetness judgment when the target solution was colored, but the colored sweet solution did not modulate the magnitude of taste adaptation. Therefore, it is concluded that the effect of color on the judgment of taste would occur mainly in cognitive/decisional domains.

  18. Sparse time-frequency decomposition based on dictionary adaptation.

    PubMed

    Hou, Thomas Y; Shi, Zuoqiang

    2016-04-13

In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adaptive to one signal rather than a training set in dictionary learning. This dictionary adaptation problem is solved by using the augmented Lagrangian multiplier (ALM) method iteratively. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers or noise pollution, and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions.

  19. The Adaptively Biased Molecular Dynamics method revisited: New capabilities and an application

    NASA Astrophysics Data System (ADS)

    Moradi, Mahmoud; Babin, Volodymyr; Roland, Christopher; Sagui, Celeste

    2015-09-01

The free energy is perhaps one of the most important quantities required for describing biomolecular systems at equilibrium. Unfortunately, accurate and reliable free energies are notoriously difficult to calculate. To address this issue, we previously developed the Adaptively Biased Molecular Dynamics (ABMD) method for accurate calculation of rugged free energy surfaces (FES). Here, we briefly review the workings of the ABMD method with an emphasis on recent software additions, along with a short summary of a selected ABMD application based on the B-to-Z DNA transition. The ABMD method, along with its current extensions, is implemented in the AMBER (ver. 10-14) software package.

  20. A two-dimensional adaptive spectral element method for the direct simulation of incompressible flow

    NASA Astrophysics Data System (ADS)

    Hsu, Li-Chieh

    The spectral element method is a high order discretization scheme for the solution of nonlinear partial differential equations. The method draws its strengths from the finite element method for geometrical flexibility and spectral methods for high accuracy. Although the method is, in theory, very powerful for complex phenomena such as transitional flows, its practical implementation is limited by the arbitrary choice of domain discretization. For instance, it is hard to estimate the appropriate number of elements for a specific case. Selection of regions to be refined or coarsened is difficult especially as the flow becomes more complex and memory limits of the computer are stressed. We present an adaptive spectral element method in which the grid is automatically refined or coarsened in order to capture underresolved regions of the domain and to follow regions requiring high resolution as they develop in time. The objective is to provide the best and most efficient solution to a time-dependent nonlinear problem by continually optimizing resource allocation. The adaptivity is based on an error estimator which determines which regions need more resolution. The solution strategy is as follows: compute an initial solution with a suitable initial mesh, estimate errors in the solution locally in each element, modify the mesh according to the error estimators, interpolate old mesh solutions onto the new elements, and resume the numerical solution process. A two-dimensional adaptive spectral element method for the direct simulation of incompressible flows has been developed. The adaptive algorithm effectively diagnoses and refines regions of the flow where complexity of the solution requires increased resolution. The method has been demonstrated on two-dimensional examples in heat conduction, Stokes and Navier-Stokes flows.
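The solve-estimate-refine strategy outlined above (compute a solution, estimate errors locally in each element, modify the mesh, and resume) can be illustrated on a one-dimensional toy problem. The function, error indicator, and tolerance below are hypothetical stand-ins, not the paper's spectral element machinery.

```python
# Toy sketch of an adapt-solve loop on a 1-D mesh of elements (a, b):
# estimate a local error indicator per element and split any element
# whose indicator exceeds a tolerance, repeating until all pass.
import math

def f(x):
    # Stand-in "solution" with a sharp feature near x = 0.3.
    return math.tanh(50.0 * (x - 0.3))

def error_indicator(a, b):
    """Deviation of f from its linear interpolant, sampled at the midpoint."""
    mid = 0.5 * (a + b)
    return abs(f(mid) - 0.5 * (f(a) + f(b)))

def adapt(elements, tol=1e-2, max_passes=20):
    for _ in range(max_passes):
        refined, changed = [], False
        for a, b in elements:
            if error_indicator(a, b) > tol:
                m = 0.5 * (a + b)
                refined += [(a, m), (m, b)]   # split the flagged element
                changed = True
            else:
                refined.append((a, b))
        elements = refined
        if not changed:                        # all elements within tolerance
            break
    return elements
```

Starting from a single element on [0, 1], refinement concentrates where the feature lives, mimicking how the adaptive spectral element method allocates resolution only where the error estimator demands it.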

  1. Video Adaptation Model Based on Cognitive Lattice in Ubiquitous Computing

    NASA Astrophysics Data System (ADS)

    Kim, Svetlana; Yoon, Yong-Ik

The multimedia service delivery chain poses many challenges today: terminal diversity and network heterogeneity are increasing, along with the pressure to satisfy user preferences. This situation encourages the need for personalized content that provides the user with the best possible experience in ubiquitous computing. This paper introduces a personalized content preparation and delivery framework for multimedia service. The personalized video adaptation is expected to satisfy individual users' needs in video content. The cognitive lattice plays a significant role in video annotation to meet users' preferences on video content. In this paper, a comprehensive solution for PVA (Personalized Video Adaptation) is proposed based on the cognitive lattice concept. The PVA is implemented based on the MPEG-21 Digital Item Adaptation framework. One of the challenges is how to quantify users' preferences on video content.

  2. Principles and Methods of Adapted Physical Education and Recreation.

    ERIC Educational Resources Information Center

    Arnheim, Daniel D.; And Others

    This text is designed for the elementary and secondary school physical educator and the recreation specialist in adapted physical education and, more specifically, as a text for college courses in adapted and corrective physical education and therapeutic recreation. The text is divided into four major divisions: scope, key teaching and therapy…

  3. Preliminary Exploration of Adaptive State Predictor Based Human Operator Modeling

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Gregory, Irene M.

    2012-01-01

Control-theoretic modeling of the human operator's dynamic behavior in manual control tasks has a long and rich history. In the last two decades, there has been a renewed interest in modeling the human operator. There has also been significant work on techniques used to identify the pilot model of a given structure. The purpose of this research is to attempt to go beyond pilot identification based on collected experimental data and to develop a predictor of pilot behavior. An experiment was conducted to quantify the effects of changing aircraft dynamics on an operator's ability to track a signal in order to eventually model a pilot adapting to changing aircraft dynamics. A gradient descent estimator and a least squares estimator with exponential forgetting used these data to predict pilot stick input. The results indicate that individual pilot characteristics and vehicle dynamics did not affect the accuracy with which either estimator predicted pilot stick input. These methods also were able to predict pilot stick input during changing aircraft dynamics, and they may have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot.
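A least squares estimator with exponential forgetting, one of the two estimator types mentioned above, can be sketched as a standard recursive least-squares (RLS) update. The linear model structure and synthetic data below are illustrative assumptions, not the experiment's pilot model.

```python
# Recursive least-squares with exponential forgetting: track the weights
# w in d_t ~ w . x_t, discounting old samples by the factor lam < 1 so
# the estimate can follow slowly changing dynamics.

def rls_forgetting(xs, ds, lam=0.98, delta=100.0):
    """xs: list of feature tuples; ds: targets; returns the weight estimate."""
    n = len(xs[0])
    w = [0.0] * n
    # P starts as delta * I, a weak prior on the inverse correlation matrix.
    P = [[delta if i == j else 0.0 for j in range(n)] for i in range(n)]
    for x, d in zip(xs, ds):
        Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
        denom = lam + sum(x[i] * Px[i] for i in range(n))
        k = [v / denom for v in Px]                     # gain vector
        e = d - sum(w[i] * x[i] for i in range(n))      # prediction error
        w = [w[i] + k[i] * e for i in range(n)]
        P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)]
             for i in range(n)]
    return w
```

On noiseless synthetic data the weights converge to the generating parameters within a few dozen samples; the forgetting factor trades steady-state accuracy for the ability to track a pilot whose behavior changes with the vehicle dynamics.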

  4. Fast and robust reconstruction for fluorescence molecular tomography via a sparsity adaptive subspace pursuit method.

    PubMed

    Ye, Jinzuo; Chi, Chongwei; Xue, Zhenwen; Wu, Ping; An, Yu; Xu, Han; Zhang, Shuang; Tian, Jie

    2014-02-01

Fluorescence molecular tomography (FMT), as a promising imaging modality, can three-dimensionally locate the specific tumor position in small animals. However, it remains challenging to achieve effective and robust reconstruction of fluorescent probe distribution in animals. In this paper, we present a novel method based on sparsity adaptive subspace pursuit (SASP) for FMT reconstruction. Some innovative strategies including subspace projection, the bottom-up sparsity adaptive approach, and a backtracking technique are associated with the SASP method, which guarantees the accuracy, efficiency, and robustness of FMT reconstruction. Three numerical experiments based on a mouse-mimicking heterogeneous phantom have been performed to validate the feasibility of the SASP method. The results show that the proposed SASP method can achieve satisfactory source localization with a bias of less than 1 mm; the method is much faster than mainstream reconstruction methods; and this approach is robust even under quite ill-posed conditions. Furthermore, we have applied this method to an in vivo mouse model, and the results demonstrate the feasibility of the practical FMT application with the SASP method.

  5. An adaptive kernel smoothing method for classifying Austrosimulium tillyardianum (Diptera: Simuliidae) larval instars.

    PubMed

    Cen, Guanjun; Yu, Yonghao; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks' rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby's growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods.
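A variable-bandwidth kernel estimate in the spirit of the method above can be sketched as follows. The Abramson-style square-root bandwidth factor used here is a standard choice and not necessarily the paper's selector; the point is that local bandwidths shrink where data are dense, sharpening the valleys that separate instar groups.

```python
# Adaptive (variable-bandwidth) Gaussian kernel density estimate:
# a fixed-bandwidth pilot estimate sets a local bandwidth per sample.
import math

def gauss(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(x, data, h):
    """Fixed-bandwidth KDE, used here as the pilot estimate."""
    return sum(gauss((x - d) / h) for d in data) / (len(data) * h)

def adaptive_kde(x, data, h0):
    """Variable-bandwidth KDE with Abramson-style local factors."""
    pilot = [kde(d, data, h0) for d in data]
    g = math.exp(sum(math.log(p) for p in pilot) / len(pilot))  # geometric mean
    lam = [math.sqrt(g / p) for p in pilot]  # narrow kernels in dense regions
    return sum(gauss((x - d) / (h0 * l)) / (h0 * l)
               for d, l in zip(data, lam)) / len(data)
```

For multimodal measurement data such as head capsule widths, the adaptive estimate keeps the modes sharp while smoothing sparse tails, which is what makes the divisions between instars easier to locate than with a fixed-bandwidth histogram or KDE.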

  6. An Adaptive Kernel Smoothing Method for Classifying Austrosimulium tillyardianum (Diptera: Simuliidae) Larval Instars

    PubMed Central

    Cen, Guanjun; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks’ rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby’s growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods. PMID:26546689

  7. An intelligent computational algorithm based on neural network for spatial data mining in adaptability evaluation

    NASA Astrophysics Data System (ADS)

    Miao, Zuohua; Xu, Hong; Chen, Yong; Zeng, Xiangyang

    2009-10-01

The back-propagation neural network (BPNN) is an intelligent computational model based on learning from samples. This model differs from the traditional adaptability evaluation approach of symbolic logic reasoning based on knowledge and rules. At the same time, the BPNN model has shortcomings such as slow convergence and local minima. During the process of adaptability evaluation, the factors are diverse, complicated and uncertain, so an effective model should combine data mining techniques with fuzzy logic. In this paper, the authors improve the back-propagation procedure of the BPNN and apply fuzzy logic theory for dynamic inference of fuzzy rules. A detailed description of the training and experimental process of the novel model is also given.

  8. Serial identification of EEG patterns using adaptive wavelet-based analysis

    NASA Astrophysics Data System (ADS)

    Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.

    2013-10-01

The problem of recognizing specific oscillatory patterns in electroencephalograms (EEGs) with the continuous wavelet transform is discussed. Aiming to improve the abilities of wavelet-based tools, we propose a serial adaptive method for sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables extraction of the most informative features of the recognized structures. Different ways of increasing the quality of pattern recognition within the proposed serial adaptive technique are considered.

  9. A hybrid method for optimization of the adaptive Goldstein filter

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

    2014-12-01

The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is applied as a power of the filtering function; depending on its value, areas are filtered strongly or weakly. Several variants have been developed to determine alpha adaptively using different indicators such as the coherence and the phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in the real world, and the optimal model to accurately determine the functional relationship between the indicators and alpha is also not clear. As a result, the filter always under- or over-filters and is rarely correct. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance when compared to existing approaches.
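The core Goldstein weighting, with alpha tied to an indicator such as coherence, can be sketched in simplified one-dimensional form. Two hedges: the mapping alpha = 1 − coherence is one published variant, not the optimal nonlinear model developed in this paper, and the usual smoothing of the spectral magnitude is omitted for brevity.

```python
# Simplified 1-D sketch of Goldstein-style spectral filtering: weight the
# interferogram spectrum by its magnitude raised to the power alpha.
import cmath

def dft(x, sign=-1):
    """Naive DFT (sign=-1 forward, sign=+1 un-normalized inverse)."""
    n = len(x)
    return [sum(v * cmath.exp(sign * 2j * cmath.pi * k * t / n)
                for t, v in enumerate(x)) for k in range(n)]

def goldstein_1d(phase_patch, alpha):
    z = [cmath.exp(1j * p) for p in phase_patch]   # complex interferogram
    Z = dft(z)
    W = [abs(c) ** alpha for c in Z]               # |Z|^alpha weighting
    Zf = [w * c for w, c in zip(W, Z)]
    zf = [v / len(Z) for v in dft(Zf, sign=+1)]    # normalized inverse DFT
    return [cmath.phase(v) for v in zf]

def adaptive_alpha(coherence):
    """One simple coherence-driven choice: strong filtering where noisy."""
    return min(max(1.0 - coherence, 0.0), 1.0)
```

With alpha = 0 the weighting is the identity and the phase passes through unchanged, which is exactly the behavior the adaptive variants want over low-noise (high-coherence) areas; as coherence drops, alpha grows and the dominant spectral components are amplified relative to the noise floor.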

  10. Profiling Students' Adaptation Styles in Web-based Learning.

    ERIC Educational Resources Information Center

    Lee, Myung-Geun

    2001-01-01

    Discussion of Web-based instruction (WBI) focuses on a study of Korean universities that analyzed learners' adaptation styles and characteristics by retrospectively assessing the perceptions of various aspects of WBI. Considers computer literacy, interaction with instructor and students, difficulty of contents, and learners' perception of academic…

  11. Adaptive NUC algorithm for uncooled IRFPA based on neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Ziji; Jiang, Yadong; Lv, Jian; Zhu, Hongbin

    2010-10-01

    With developments in uncooled infrared focal plane array (IRFPA) technology, many new advanced uncooled infrared sensors are used in defensive weapons, scientific research, industry, and commercial applications. A major difference between an IRFPA imaging system and a visible CCD camera is that an IRFPA needs nonuniformity correction (NUC) and dead pixel compensation, usually called infrared image pre-processing. Commonly used two-point or multi-point correction algorithms based on calibration can correct the non-uniformity of IRFPAs, but they are limited by pixel nonlinearity and instability. Therefore, adaptive non-uniformity correction techniques have been developed. Two such adaptive algorithms are most widely discussed: one is based on a temporal high-pass filter, and the other on a neural network. In this paper, a new NUC algorithm based on an improved neural network is introduced, and the improved neural network is compared with other adaptive correction techniques from several angles, including correction effect, computational efficiency, and hardware implementation. According to the results and discussion, it can be concluded that the adaptive algorithm offers improved performance compared to traditional calibration-based techniques. The new algorithm not only provides better sensitivity but also increases the system dynamic range. As sensor applications expand, it will be very useful in future infrared imaging systems.
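
The neural-network branch of adaptive NUC mentioned here is commonly built on a retina-like LMS rule in the spirit of Scribner's algorithm. The sketch below is an assumed minimal form, not the paper's improved network: each pixel's gain and offset are nudged so that the corrected image approaches its own 4-neighbour spatial average.

```python
import numpy as np

def nn_nuc_step(frame, gain, offset, lr=0.01):
    """One LMS update of a neural-network-style NUC. The 'desired'
    output for each pixel is the 4-neighbour mean of the corrected
    frame; gain and offset are adjusted to reduce the error."""
    corrected = gain * frame + offset
    target = (np.roll(corrected, 1, 0) + np.roll(corrected, -1, 0) +
              np.roll(corrected, 1, 1) + np.roll(corrected, -1, 1)) / 4.0
    err = corrected - target
    return corrected, gain - lr * err * frame, offset - lr * err
```

Run over a sequence of frames with varying scene level, the per-pixel fixed-pattern noise is progressively learned away while the scene content (here uniform) is preserved.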

  12. Adaptive Knowledge Management of Project-Based Learning

    ERIC Educational Resources Information Center

    Tilchin, Oleg; Kittany, Mohamed

    2016-01-01

    The goal of an approach to Adaptive Knowledge Management (AKM) of project-based learning (PBL) is to intensify subject study through guiding, inducing, and facilitating development knowledge, accountability skills, and collaborative skills of students. Knowledge development is attained by knowledge acquisition, knowledge sharing, and knowledge…

  13. Teaching a Biotechnology Curriculum Based on Adapted Primary Literature

    ERIC Educational Resources Information Center

    Falk, Hedda; Brill, Gilat; Yarden, Anat

    2008-01-01

    Adapted primary literature (APL) refers to an educational genre specifically designed to enable the use of research articles for learning biology in high school. The present investigation focuses on the pedagogical content knowledge (PCK) of four high-school biology teachers who enacted an APL-based curriculum in biotechnology. Using a…

  14. An Adaptive Evaluation Structure for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Welsh, William A.

    Adaptive Evaluation Structure (AES) is a set of linked computer programs designed to increase the effectiveness of interactive computer-assisted instruction at the college level. The package has four major features, the first of which is based on a prior cognitive inventory and on the accuracy and pace of student responses. AES adjusts materials…

  15. Evidence-Based Practice in Adapted Physical Education

    ERIC Educational Resources Information Center

    Jin, Jooyeon; Yun, Joonkoo

    2010-01-01

    Although implementation of evidence-based practice (EBP) has been strongly advocated by federal legislation as well as school districts in recent years, the concept has not been well accepted in adapted physical education (APE), perhaps due to a lack of understanding of the central notion of EBP. The purpose of this article is to discuss how APE…

  16. LDRD Final Report: Adaptive Methods for Laser Plasma Simulation

    SciTech Connect

    Dorr, M R; Garaizar, F X; Hittinger, J A

    2003-01-29

    The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are ''hydrodynamically large'', i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. 
AMR provides an

  17. Adapting Western research methods to indigenous ways of knowing.

    PubMed

    Simonds, Vanessa W; Christopher, Suzanne

    2013-12-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid.

  18. Adapting Western Research Methods to Indigenous Ways of Knowing

    PubMed Central

    Christopher, Suzanne

    2013-01-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid. PMID:23678897

  19. Stability of a modified Peaceman-Rachford method for the paraxial Helmholtz equation on adaptive grids

    NASA Astrophysics Data System (ADS)

    Sheng, Qin; Sun, Hai-wei

    2016-11-01

    This study concerns the asymptotic stability of an eikonal, or ray, transformation based Peaceman-Rachford splitting method for solving the paraxial Helmholtz equation with high wave numbers. Arbitrary nonuniform grids are considered in transverse and beam propagation directions. The differential equation targeted has been used for modeling propagations of high intensity laser pulses over a long distance without diffractions. Self-focusing of high intensity beams may be balanced with the de-focusing effect of created ionized plasma channel in the situation, and applications of grid adaptations are frequently essential. It is shown rigorously that the fully discretized oscillation-free decomposition method on arbitrary adaptive grids is asymptotically stable with a stability index one. Simulation experiments are carried out to illustrate our concern and conclusions.

  20. Research on a pulmonary nodule segmentation method combining fast self-adaptive FCM and classification.

    PubMed

    Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai

    2015-01-01

    The key problem in computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissue quickly and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast, self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and their individual neighbors and the spatial similarity between central pixels and their neighborhood, effectively improving the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method achieves more accurate segmentation of vascular-adhesion, pleural-adhesion, and ground-glass opacity (GGO) pulmonary nodules than other typical algorithms.
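
As a rough illustration of FCM clustering with spatial information (not the authors' enhanced spatial function, whose exact form is given in the paper), the sketch below runs the standard fuzzy c-means updates on pixel intensities and then averages each pixel's membership with its 4-neighbourhood:

```python
import numpy as np

def spatial_fcm(img, k=2, m=2.0, iters=30, seed=0):
    """Fuzzy c-means on intensities with a crude spatial penalty:
    memberships are smoothed over the 4-neighbourhood each iteration."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    x = img.ravel().astype(float)
    u = rng.random((k, x.size))
    u /= u.sum(axis=0)
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)          # weighted cluster means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = d ** (-2.0 / (m - 1))                    # standard FCM membership
        u /= u.sum(axis=0)
        u2 = u.reshape(k, h, w)                      # spatial smoothing step
        u2 = (u2 + np.roll(u2, 1, 1) + np.roll(u2, -1, 1) +
              np.roll(u2, 1, 2) + np.roll(u2, -1, 2)) / 5.0
        u = u2.reshape(k, -1)
        u /= u.sum(axis=0)
    return u.reshape(k, h, w), centers
```

On a synthetic bright nodule over a dark background, the hard labels obtained from `argmax` over memberships recover the two regions.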

  1. An Adaptive Digital Image Watermarking Algorithm Based on Morphological Haar Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Huang, Xiaosheng; Zhao, Sujuan

    At present, most wavelet-based digital watermarking algorithms rely on linear wavelet transforms, and fewer on non-linear wavelet transforms. In this paper, we propose an adaptive digital image watermarking algorithm based on a non-linear wavelet transform, the Morphological Haar Wavelet Transform. In the algorithm, the original image and the watermark image are each decomposed with a multi-scale morphological wavelet transform. The watermark information is then adaptively embedded into the original image at different resolutions, exploiting features of the Human Visual System (HVS). Experimental results show that our method is more robust and effective than ordinary wavelet transform algorithms.
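
The morphological Haar wavelet replaces the linear Haar average with a pairwise maximum (a one-sample dilation) while keeping a pairwise difference as the detail signal, and this nonlinear analysis step is still exactly invertible. A minimal one-level 1-D version might look like this (the 2-D multiscale decomposition and the HVS-driven embedding are not shown):

```python
import numpy as np

def morph_haar_1d(x):
    """One level of the morphological Haar transform:
    approximation = pairwise max, detail = pairwise difference."""
    x0, x1 = x[0::2], x[1::2]
    return np.maximum(x0, x1), x0 - x1

def morph_haar_1d_inv(a, d):
    """Exact inverse: the sign of the detail tells which sample was the max."""
    x0 = np.where(d >= 0, a, a + d)
    x1 = np.where(d >= 0, a - d, a)
    out = np.empty(a.size * 2, dtype=a.dtype)
    out[0::2], out[1::2] = x0, x1
    return out
```

A watermarking scheme of the kind described would perturb the detail coefficients `d` before inverting, so that the embedding survives the nonlinear analysis/synthesis round trip.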

  2. Adaptive sparse polynomial chaos expansion based on least angle regression

    NASA Astrophysics Data System (ADS)

    Blatman, Géraud; Sudret, Bruno

    2011-03-01

    Polynomial chaos (PC) expansions are used in stochastic finite element analysis to represent the random model response by a set of coefficients in a suitable (so-called polynomial chaos) basis. The number of terms to be computed grows dramatically with the size of the input random vector, which makes the computational cost of classical solution schemes (be they intrusive (i.e., of Galerkin type) or non-intrusive) unaffordable when the deterministic finite element model is expensive to evaluate. To address such problems, this paper describes a non-intrusive method that builds a sparse PC expansion. First, an original strategy for truncating the PC expansions, based on hyperbolic index sets, is proposed. Then an adaptive algorithm based on least angle regression (LAR) is devised for automatically detecting the significant coefficients of the PC expansion. Besides the sparsity of the basis, the experimental design used at each step of the algorithm is systematically complemented in order to avoid the overfitting phenomenon. The accuracy of the PC metamodel is checked using an estimate inspired by statistical learning theory, namely the corrected leave-one-out error. As a consequence, a rather small number of PC terms is eventually retained (sparse representation), which may be obtained at a reduced computational cost compared to the classical "full" PC approximation. The convergence of the algorithm is shown on an analytical function. The method is then illustrated on three stochastic finite element problems. The first model features 10 input random variables, whereas the other two involve an input random field, which is discretized into 38 and 30 - 500 random variables, respectively.
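
Two of the ingredients described, hyperbolic truncation of the index set and stepwise selection of significant coefficients, can be illustrated as follows. Note that the selection loop is a greedy correlation-based stand-in for true LAR, and the Hermite basis assumes standard normal inputs; both are simplifications of this sketch, not the paper's algorithm.

```python
import math
from itertools import product

import numpy as np

def hyperbolic_indices(dim, p, q=0.75):
    """Multi-indices a with (sum_i a_i^q)^(1/q) <= p (hyperbolic truncation)."""
    return [a for a in product(range(p + 1), repeat=dim)
            if sum(ai ** q for ai in a) ** (1.0 / q) <= p + 1e-12]

def hermite(n, x):
    """Probabilists' Hermite polynomial He_n, normalized to unit variance
    under the standard normal measure."""
    h0, h1 = np.ones_like(x), x
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    h = h0 if n == 0 else h1
    return h / math.sqrt(math.factorial(n))

def pc_design(X, indices):
    """Design matrix: one tensorized Hermite polynomial per multi-index."""
    return np.column_stack([
        np.prod([hermite(a, X[:, i]) for i, a in enumerate(alpha)], axis=0)
        for alpha in indices])

def greedy_sparse_fit(Phi, y, nterms):
    """Select basis columns by residual correlation, refitting by least
    squares after each pick -- a greedy stand-in for full LAR."""
    active, resid, coef = [], y.copy(), None
    for _ in range(nterms):
        corr = np.abs(Phi.T @ resid)
        corr[active] = -np.inf
        active.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(Phi[:, active], y, rcond=None)
        resid = y - Phi[:, active] @ coef
    return active, coef
```

For a response that truly is sparse in the PC basis, a few greedy steps recover it exactly from a modest experimental design.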

  3. Searching for adaptive traits in genetic resources - phenology based approach

    NASA Astrophysics Data System (ADS)

    Bari, Abdallah

    2015-04-01

    Abdallah Bari, Kenneth Street, Eddy De Pauw, Jalal Eddin Omari, and Chandra M. Biradar (International Center for Agricultural Research in the Dry Areas, Rabat Institutes, Rabat, Morocco). Phenology is an important plant trait, not only for assessing and forecasting food production but also for searching genebanks for adaptive traits. Among the phenological parameters we have been considering in the search for such adaptive and rare traits are the onset (sowing period) and the seasonality (growing period). An application is currently being developed as part of the focused identification of germplasm strategy (FIGS) approach to use climatic data to identify crop growing seasons and characterize them in terms of onset and duration. These approximations of growing-period characteristics can then be used to estimate flowering and maturity dates for dryland crops, such as wheat, barley, faba bean, lentil, and chickpea, and to assess, among others, phenology-related traits such as days to heading (dhe) and grain-filling period (gfp). The approach followed here is based on first calculating long-term average daily temperatures by fitting a curve to the monthly data over days from the beginning of the year. Prior to the identification of these phenological stages, the onset is first extracted from onset integer raster GIS layers developed from a model of the growing period that considers both moisture and temperature limitations. The paper presents some examples of real applications of the approach to search for rare and adaptive traits.

  4. Solution of Reactive Compressible Flows Using an Adaptive Wavelet Method

    NASA Astrophysics Data System (ADS)

    Zikoski, Zachary; Paolucci, Samuel; Powers, Joseph

    2008-11-01

    This work presents numerical simulations of reactive compressible flow, including detailed multicomponent transport, using an adaptive wavelet algorithm. The algorithm allows for dynamic grid adaptation which enhances our ability to fully resolve all physically relevant scales. The thermodynamic properties, equation of state, and multicomponent transport properties are provided by CHEMKIN and TRANSPORT libraries. Results for viscous detonation in a H2:O2:Ar mixture, and other problems in multiple dimensions, are included.

  5. Adaptive optics in digital micromirror based confocal microscopy

    NASA Astrophysics Data System (ADS)

    Pozzi, P.; Wilding, D.; Soloviev, O.; Vdovin, G.; Verhaegen, M.

    2016-03-01

    This proceeding reports early results in the development of a new technique for adaptive optics in confocal microscopy. The term adaptive optics refers to the branch of optics in which an active element in the optical system is used to correct inhomogeneities in the media through which light propagates. In its most classical form, mostly used in astronomical imaging, adaptive optics is achieved through a closed loop in which the actuators of a deformable mirror are driven by a wavefront sensor. This approach is severely limited in fluorescence microscopy, as the use of a wavefront sensor requires the presence of a bright, point-like source in the field of view, a condition rarely satisfied in microscopy samples. Previously reported approaches to adaptive optics in fluorescence microscopy are therefore limited to the inclusion of fluorescent microspheres in the sample, used as guide stars for wavefront sensing, or to time-consuming sensorless optimization procedures requiring several seconds of optimization before the acquisition of a single image. We propose an alternative approach to the problem, implementing sensorless adaptive optics in a programmable array microscope: a microscope based on a digital micromirror device, in which the single elements of the micromirror act both as point sources and as pinholes.

  6. Feasibility of an online adaptive replanning method for cranial frameless intensity-modulated radiosurgery

    SciTech Connect

    Calvo, Juan Francisco; San José, Sol; Garrido, LLuís; Puertas, Enrique; Moragues, Sandra; Pozo, Miquel; Casals, Joan

    2013-10-01

    To introduce an approach for online adaptive replanning (i.e., dose-guided radiosurgery) in frameless stereotactic radiosurgery, when a 6-dimensional (6D) robotic couch is not available in the linear accelerator (linac). Cranial radiosurgical treatments are planned in our department using intensity-modulated technique. Patients are immobilized using thermoplastic mask. A cone-beam computed tomography (CBCT) scan is acquired after the initial laser-based patient setup (CBCT{sub setup}). The online adaptive replanning procedure we propose consists of a 6D registration-based mapping of the reference plan onto actual CBCT{sub setup}, followed by a reoptimization of the beam fluences (“6D plan”) to achieve similar dosage as originally was intended, while the patient is lying in the linac couch and the original beam arrangement is kept. The goodness of the online adaptive method proposed was retrospectively analyzed for 16 patients with 35 targets treated with CBCT-based frameless intensity modulated technique. Simulation of reference plan onto actual CBCT{sub setup}, according to the 4 degrees of freedom, supported by linac couch was also generated for each case (4D plan). Target coverage (D99%) and conformity index values of 6D and 4D plans were compared with the corresponding values of the reference plans. Although the 4D-based approach does not always assure the target coverage (D99% between 72% and 103%), the proposed online adaptive method gave a perfect coverage in all cases analyzed as well as a similar conformity index value as was planned. Dose-guided radiosurgery approach is effective to assure the dose coverage and conformity of an intracranial target volume, avoiding resetting the patient inside the mask in a “trial and error” way so as to remove the pitch and roll errors when a robotic table is not available.

  7. A Comparison of a Brain-Based Adaptive System and a Manual Adaptable System for Invoking Automation

    NASA Technical Reports Server (NTRS)

    Bailey, Nathan R.; Scerbo, Mark W.; Freeman, Frederick G.; Mikulka, Peter J.; Scott, Lorissa A.

    2004-01-01

    Two experiments are presented that examine alternative methods for invoking automation. In each experiment, participants were asked to perform simultaneously a monitoring task and a resource management task as well as a tracking task that changed between automatic and manual modes. The monitoring task required participants to detect failures of an automated system to correct aberrant conditions under either high or low system reliability. Performance on each task was assessed as well as situation awareness and subjective workload. In the first experiment, half of the participants worked with a brain-based system that used their EEG signals to switch the tracking task between automatic and manual modes. The remaining participants were yoked to participants from the adaptive condition and received the same schedule of mode switches, but their EEG had no effect on the automation. Within each group, half of the participants were assigned to either the low or high reliability monitoring task. In addition, within each combination of automation invocation and system reliability, participants were separated into high and low complacency potential groups. The results revealed no significant effects of automation invocation on the performance measures; however, the high complacency individuals demonstrated better situation awareness when working with the adaptive automation system. The second experiment was the same as the first with one important exception. Automation was invoked manually. Thus, half of the participants pressed a button to invoke automation for 10 s. The remaining participants were yoked to participants from the adaptable condition and received the same schedule of mode switches, but they had no control over the automation. The results showed that participants who could invoke automation performed more poorly on the resource management task and reported higher levels of subjective workload. Further, those who invoked automation more frequently performed

  8. Vortical Flow Prediction using an Adaptive Unstructured Grid Method. Chapter 11

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2009-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow-separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  9. Failure of Anisotropic Unstructured Mesh Adaption Based on Multidimensional Residual Minimization

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Kleb, William L.

    2003-01-01

    An automated anisotropic unstructured mesh adaptation strategy is proposed, implemented, and assessed for the discretization of viscous flows. The adaption criterion is based upon the minimization of the residual fluctuations of a multidimensional upwind viscous flow solver. For scalar advection, this adaption strategy has been shown to use fewer grid points than gradient-based adaption, naturally aligning mesh edges with discontinuities and characteristic lines. The adaption utilizes a compact stencil and is local in scope, with four fundamental operations: point insertion, point deletion, edge swapping, and nodal displacement. Evaluation of the solution-adaptive strategy is performed for a two-dimensional blunt-body laminar wind tunnel case at Mach 10. The results demonstrate that the strategy suffers from a lack of robustness, particularly with regard to alignment of the bow shock in the vicinity of the stagnation streamline. In general, constraining the adaption to such a degree as to maintain robustness results in negligible improvement to the solution. Because the present method fails to consistently or significantly improve the flow solution, it is rejected in favor of simple uniform mesh refinement.

  10. Adaptive integral method with fast Gaussian gridding for solving combined field integral equations

    NASA Astrophysics Data System (ADS)

    Bakır, O.; Bağcı, H.; Michielssen, E.

    Fast Gaussian gridding (FGG), a recently proposed nonuniform fast Fourier transform algorithm, is used to reduce the memory requirements of the adaptive integral method (AIM) for accelerating the method of moments-based solution of combined field integral equations pertinent to the analysis of scattering from three-dimensional perfect electrically conducting surfaces. Numerical results that demonstrate the efficiency and accuracy of the AIM-FGG hybrid in comparison to an AIM-accelerated solver, which uses moment matching to project surface sources onto an auxiliary grid, are presented.

  11. Method and system for spatial data input, manipulation and distribution via an adaptive wireless transceiver

    NASA Technical Reports Server (NTRS)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for spatial data manipulation input and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver for automatically and adaptively controlling wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both the short and long distances. The wireless transceiver is automatically adaptive and wireless devices can send and receive wireless digital and analog data from various sources rapidly in real-time via available networks and network services.

  12. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs, and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g., a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. The CDAWEB and SPDF data repositories were then queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  13. Systems and Methods for Derivative-Free Adaptive Control

    NASA Technical Reports Server (NTRS)

    Yucelen, Tansel (Inventor); Kim, Kilsoo (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    An adaptive control system is disclosed. The control system can control uncertain dynamic systems. The control system can employ one or more derivative-free adaptive control architectures. The control system can further employ one or more derivative-free weight update laws. The derivative-free weight update laws can comprise a time-varying estimate of an ideal vector of weights. The control system of the present invention can therefore quickly stabilize systems that undergo sudden changes in dynamics, caused by, for example, sudden changes in weight. Embodiments of the present invention can also provide a less complex control system than existing adaptive control systems. The control system can control aircraft and other dynamic systems, such as, for example, those with non-minimum phase dynamics.

  14. Study of adaptive methods for data compression of scanner data

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.
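
As an illustration of the DPCM family evaluated in this study, here is a minimal closed-loop 2-D DPCM codec. The two-neighbour mean predictor and the fixed uniform quantizer are assumptions of this sketch (the adaptive variants in the report retune the quantizer from local image statistics), so treat it as a picture of the coding loop, not of the LANDSAT design or its 1.84 bits-per-pixel operating point.

```python
import numpy as np

def dpcm2d_encode(img, step=8):
    """2-D DPCM: predict each pixel as the mean of its reconstructed
    left and upper neighbours, then uniformly quantize the residual.
    Prediction runs on reconstructed values so the decoder can track it."""
    img = img.astype(int)
    h, w = img.shape
    recon = np.zeros((h, w), dtype=int)
    codes = np.zeros((h, w), dtype=int)
    for r in range(h):
        for c in range(w):
            left = recon[r, c - 1] if c else 128
            up = recon[r - 1, c] if r else 128
            pred = (left + up) // 2
            q = int(round((img[r, c] - pred) / step))
            codes[r, c] = q
            recon[r, c] = pred + q * step
    return codes, recon

def dpcm2d_decode(codes, step=8):
    """Decoder mirrors the encoder's closed prediction loop exactly."""
    h, w = codes.shape
    recon = np.zeros((h, w), dtype=int)
    for r in range(h):
        for c in range(w):
            left = recon[r, c - 1] if c else 128
            up = recon[r - 1, c] if r else 128
            recon[r, c] = (left + up) // 2 + codes[r, c] * step
    return recon
```

Because the quantizer sits inside the prediction loop, the per-pixel reconstruction error stays bounded by half the quantizer step and does not accumulate across the image.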

  15. Phase demodulation using adaptive windowed Fourier transform based on Hilbert-Huang transform.

    PubMed

    Wang, Chenxing; Da, Feipeng

    2012-07-30

    A phase demodulation method using an adaptive windowed Fourier transform (AWFT) based on the Hilbert-Huang transform (HHT) is proposed. First, HHT is performed on the fringe pattern to obtain instantaneous frequencies. These instantaneous frequencies are then analyzed under the AWFT condition to locate local stationary areas where the fundamental spectrum is not interfered with by higher-order spectra. Within each local stationary area, the fundamental spectrum can be extracted accurately and adaptively using AWFT, with the background, determined previously with the presented criterion during HHT, eliminated to remove the zero-order spectrum. The method is adaptive and unconstrained by any precondition on the measured phase. Experiments demonstrate its robustness and effectiveness for measuring objects with discontinuities or complex surfaces.

  16. Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics

    SciTech Connect

    Lomov, I; Pember, R; Greenough, J; Liu, B

    2005-10-18

    We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured-grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single-grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single-grid algorithm uses a second-order Godunov scheme with an approximate single-fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified by the fact that highly nonlinear behavior of shear stresses is rare. The algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion is solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.

  17. Assessing Implementation Fidelity and Adaptation in a Community-Based Childhood Obesity Prevention Intervention

    ERIC Educational Resources Information Center

    Richards, Zoe; Kostadinov, Iordan; Jones, Michelle; Richard, Lucie; Cargo, Margaret

    2014-01-01

    Little research has assessed the fidelity, adaptation or integrity of activities implemented within community-based obesity prevention initiatives. To address this gap, a mixed-method process evaluation was undertaken in the context of the South Australian Obesity Prevention and Lifestyle (OPAL) initiative. An ecological coding procedure assessed…

  18. A high-throughput multiplex method adapted for GMO detection.

    PubMed

    Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique

    2008-12-24

    A high-throughput multiplex assay for the detection of genetically modified organisms (GMO) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") from taxa endogenous reference genes, from GMO constructions, screening targets, construct-specific, and event-specific targets, and finally from donor organisms. This assay avoids certain shortcomings of multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.

  19. Adaptively biased molecular dynamics: An umbrella sampling method with a time-dependent potential

    NASA Astrophysics Data System (ADS)

    Babin, Volodymyr; Karpusenka, Vadzim; Moradi, Mahmoud; Roland, Christopher; Sagui, Celeste

    We discuss an adaptively biased molecular dynamics (ABMD) method for the computation of a free energy surface for a set of reaction coordinates. The ABMD method belongs to the general category of umbrella sampling methods with an evolving biasing potential. It is characterized by a small number of control parameters and an O(t) numerical cost with simulation time t. The method naturally allows for extensions based on multiple walkers and replica exchange mechanism. The workings of the method are illustrated with a number of examples, including sugar puckering, and free energy landscapes for polymethionine and polyproline peptides, and for a short β-turn peptide. ABMD has been implemented into the latest version (Case et al., AMBER 10; University of California: San Francisco, 2008) of the AMBER software package and is freely available to the simulation community.
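The core idea, an umbrella-like bias that evolves with simulation time, can be illustrated with a one-dimensional toy: overdamped Langevin dynamics on a double-well potential, with Gaussians deposited along the trajectory so the accumulating bias pushes the walker out of visited regions. This is a generic evolving-bias sketch with made-up parameters, not the ABMD algorithm as implemented in AMBER:

```python
import numpy as np

def biased_walk(steps=30000, dt=1e-3, kT=0.7, h=0.1, w=0.2, stride=100, seed=1):
    """Overdamped Langevin dynamics on the double well V(x) = (x^2 - 1)^2,
    with a history-dependent bias of Gaussians (height h, width w) deposited
    every `stride` steps along the trajectory."""
    rng = np.random.default_rng(seed)
    centers = np.empty(steps // stride + 1)   # deposited Gaussian centers
    ncen = 0
    x = -1.0                                  # start in the left well
    traj = np.empty(steps)
    for n in range(steps):
        grad = 4.0 * x * (x * x - 1.0)        # dV/dx of the double well
        if ncen:                              # gradient of the deposited bias
            d = x - centers[:ncen]
            grad += np.sum(-h * d / w**2 * np.exp(-d * d / (2 * w**2)))
        x += -grad * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
        if n % stride == 0:
            centers[ncen] = x                 # the bias keeps growing in time
            ncen += 1
        traj[n] = x
    return traj
```

The deposited bias gradually flattens the well being sampled, so the walker crosses the barrier far sooner than it would under plain Langevin dynamics.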

  20. Adaptive particle refinement and derefinement applied to the smoothed particle hydrodynamics method

    NASA Astrophysics Data System (ADS)

    Barcarolo, D. A.; Le Touzé, D.; Oger, G.; de Vuyst, F.

    2014-09-01

SPH simulations are usually performed with a uniform particle distribution. New techniques have recently been proposed to enable the use of spatially varying particle distributions, which encouraged the development of automatic adaptivity and particle refinement/derefinement algorithms. All these efforts resulted in very interesting and promising procedures leading to more efficient and faster SPH simulations. In this article, a family of particle refinement techniques is reviewed and a new derefinement technique is proposed and validated through several test cases involving both free-surface and viscous flows. In addition, this new procedure allows higher resolutions in the regions requiring increased accuracy. Moreover, several levels of refinement can be used with this new technique, as is often the case for adaptive mesh refinement in mesh-based methods.

  1. Parallel simulation of multiphase flows using octree adaptivity and the volume-of-fluid method

    NASA Astrophysics Data System (ADS)

    Agbaglah, Gilou; Delaux, Sébastien; Fuster, Daniel; Hoepffner, Jérôme; Josserand, Christophe; Popinet, Stéphane; Ray, Pascal; Scardovelli, Ruben; Zaleski, Stéphane

    2011-02-01

We describe computations performed using the Gerris code, an open-source software implementing finite volume solvers on an octree adaptive grid together with a piecewise linear volume-of-fluid interface tracking method. The parallelisation of Gerris is achieved by domain decomposition. We show examples of the capabilities of Gerris on several types of problems. The impact of a droplet on a layer of the same liquid results in the formation of a thin air layer trapped between the droplet and the liquid layer, which the adaptive refinement makes it possible to capture. It is followed by the jetting of a thin corolla emerging from below the impacting droplet. The jet atomisation problem is another extremely challenging computational problem, in which a large number of small scales are generated. Finally we show an example of a turbulent jet computation in an equivalent resolution of 6×1024 cells. The jet simulation is based on the configuration of the Deepwater Horizon oil leak.

  2. An adaptive watershed management assessment based on watershed investigation data.

    PubMed

    Kang, Min Goo; Park, Seung Woo

    2015-05-01

    The aim of this study was to assess the states of watersheds in South Korea and to formulate new measures to improve identified inadequacies. The study focused on the watersheds of the Han River basin and adopted an adaptive watershed management framework. Using data collected during watershed investigation projects, we analyzed the management context of the study basin and identified weaknesses in water use management, flood management, and environmental and ecosystems management in the watersheds. In addition, we conducted an interview survey to obtain experts' opinions on the possible management of watersheds in the future. The results of the assessment show that effective management of the Han River basin requires adaptive watershed management, which includes stakeholders' participation and social learning. Urbanization was the key variable in watershed management of the study basin. The results provide strong guidance for future watershed management and suggest that nonstructural measures are preferred to improve the states of the watersheds and that consistent implementation of the measures can lead to successful watershed management. The results also reveal that governance is essential for adaptive watershed management in the study basin. A special ordinance is necessary to establish governance and aid social learning. Based on the findings, a management process is proposed to support new watershed management practices. The results will be of use to policy makers and practitioners who can implement the measures recommended here in the early stages of adaptive watershed management in the Han River basin. The measures can also be applied to other river basins.

  3. A New Method to Cancel RFI---The Adaptive Filter

    NASA Astrophysics Data System (ADS)

    Bradley, R.; Barnbaum, C.

    1996-12-01

An increasing amount of precious radio frequency spectrum in the VHF, UHF, and microwave bands is being utilized each year to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Some radio spectral lines of astronomical interest occur outside the protected radio astronomy bands and are unobservable due to heavy interference. Conventional approaches to dealing with RFI include legislation, notch filters, RF shielding, and post-processing techniques. Although these techniques are somewhat successful, each suffers from insufficient interference cancellation. One concept of interference excision that has not been used before in radio astronomy is adaptive interference cancellation. The concept of adaptive interference canceling was first introduced in the mid-1970s as a way to reduce unwanted noise in low frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartment of automobiles. Only recently have high-speed digital filter chips made adaptive filtering possible in a bandwidth as large as a few megahertz, finally opening the door to astronomical uses. The system consists of two receivers: the main beam of the radio telescope receives the desired signal corrupted by RFI coming in the sidelobes, and the reference antenna receives only the RFI. The reference signal is processed by a digital adaptive filter and then subtracted from the signal in the main beam, producing the system output. The weights of the digital filter are adjusted by an algorithm that minimizes, in a least-squares sense, the power output of the system. Through an adaptive-iterative process, the interference canceler locks onto the RFI and the filter adjusts itself to minimize the effect of the RFI at the system output. We are building a prototype 100 MHz receiver and will measure the cancellation
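A minimal sketch of the two-receiver canceller with an LMS weight update of the kind described. The tap count, step size, and test signals below are illustrative assumptions, not the prototype's design:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """Adaptive noise canceller: filter the reference channel (RFI only)
    to match the interference in the primary channel, and output the
    difference, which is the cleaned signal."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples
        y = w @ x                           # filter's estimate of the RFI
        e = primary[n] - y                  # system output = signal + residual
        w += 2 * mu * e * x                 # LMS update: minimize output power
        out[n] = e
    return out

# Demo: a weak noise-like "astronomical" signal buried under a sinusoidal RFI.
rng = np.random.default_rng(0)
n = 6000
rfi = np.sin(0.2 * np.pi * np.arange(n))     # interference entering the sidelobes
signal = 0.1 * rng.standard_normal(n)        # weak desired signal
cleaned = lms_cancel(signal + rfi, rfi)
```

After the filter converges, the output power drops to roughly the power of the desired signal alone, since the sinusoidal interference has been subtracted.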

  4. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve.

    PubMed

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments.
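The classical FEM ZZ idea that this work carries over to BEM, comparing a recovered (averaged) gradient against the raw discrete gradient, can be sketched in one dimension as follows. This is an illustrative gradient-recovery indicator, not the weakly singular or hypersingular BEM estimators of the paper:

```python
import numpy as np

def zz_indicators(xs, u):
    """ZZ-type element error indicators for piecewise-linear u on a 1-D
    mesh xs: recover a nodal gradient by length-weighted averaging of
    element gradients, then measure the element-wise L2 mismatch."""
    h = np.diff(xs)
    g = np.diff(u) / h                          # element-wise gradient
    gr = np.empty(len(xs))                      # recovered nodal gradient
    gr[0], gr[-1] = g[0], g[-1]
    gr[1:-1] = (g[:-1] * h[1:] + g[1:] * h[:-1]) / (h[:-1] + h[1:])
    a, b = gr[:-1] - g, gr[1:] - g              # mismatch at element endpoints
    # exact L2 norm of the linear mismatch over each element
    return np.sqrt(h / 3.0 * (a * a + a * b + b * b))
```

On a mesh that is fine on [0, 1] but coarse beyond, sampling a smooth function such as u = x² makes the coarse elements produce the largest indicators, which is exactly what drives mesh-refining algorithms of this type.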

  5. The Pilates method and cardiorespiratory adaptation to training.

    PubMed

    Tinoco-Fernández, Maria; Jiménez-Martín, Miguel; Sánchez-Caravaca, M Angeles; Fernández-Pérez, Antonio M; Ramírez-Rodrigo, Jesús; Villaverde-Gutiérrez, Carmen

    2016-01-01

    Although all authors report beneficial health changes following training based on the Pilates method, no explicit analysis has been performed of its cardiorespiratory effects. The objective of this study was to evaluate possible changes in cardiorespiratory parameters with the Pilates method. A total of 45 university students aged 18-35 years (77.8% female and 22.2% male), who did not routinely practice physical exercise or sports, volunteered for the study and signed informed consent. The Pilates training was conducted over 10 weeks, with three 1-hour sessions per week. Physiological cardiorespiratory responses were assessed using a MasterScreen CPX apparatus. After the 10-week training, statistically significant improvements were observed in mean heart rate (135.4-124.2 beats/min), respiratory exchange ratio (1.1-0.9) and oxygen equivalent (30.7-27.6) values, among other spirometric parameters, in submaximal aerobic testing. These findings indicate that practice of the Pilates method has a positive influence on cardiorespiratory parameters in healthy adults who do not routinely practice physical exercise activities. PMID:27357919

  6. The use of the spectral method within the fast adaptive composite grid method

    SciTech Connect

    McKay, S.M.

    1994-12-31

The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on grids with varying discretizations, using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the ensuing accuracy of this hybrid method outside of the subdomain will be investigated.

  7. Vivid Motor Imagery as an Adaptation Method for Head Turns on a Short-Arm Centrifuge

    NASA Technical Reports Server (NTRS)

    Newby, N. J.; Mast, F. W.; Natapoff, A.; Paloski, W. H.

    2006-01-01

    from one another. For the perceived duration of sensations, the CG group again exhibited the least amount of adaptation. However, the rates of adaptation of the PA and the MA groups were indistinguishable, suggesting that the imagined pseudostimulus appeared to be just as effective a means of adaptation as the actual stimulus. The MA group's rate of adaptation to motion sickness symptoms was also comparable to the PA group. The use of vivid motor imagery may be an effective method for adapting to the illusory sensations and motion sickness symptoms produced by cross-coupled stimuli. For space-based AG applications, this technique may prove quite useful in retaining astronauts considered highly susceptible to motion sickness as it reduces the number of actual CCS required to attain adaptation.

  8. Adaptive Impedance Analysis of Grooved Surface using the Finite Element Method

    SciTech Connect

    Wang, L; /SLAC

    2007-07-06

A grooved surface has been proposed to reduce the secondary emission yield in the dipole and wiggler magnets of the International Linear Collider. An analysis of the impedance of the grooved surface based on an adaptive finite element method is presented in this paper. The performance of the adaptive algorithms, based on an element-by-element h-refinement technique, is assessed. The features of the refinement indicators, adaptation criteria, and error estimation parameters are discussed.

  9. Multigrid iterative method with adaptive spatial support for computed tomography reconstruction from few-view data

    NASA Astrophysics Data System (ADS)

    Lee, Ping-Chang

    2014-03-01

Computed tomography (CT) plays a key role in modern medical systems, whether for diagnosis or therapy. Because exposure to radiation is associated with an increased risk of cancer development, reducing radiation exposure in CT has become an essential issue. Based on compressive sensing (CS) theory, iterative methods with total variation (TV) minimization have proven to be a powerful framework for few-view tomographic image reconstruction. The multigrid method is an iterative method for solving both linear and nonlinear systems, especially when the system contains a huge number of components. In medical imaging, the image background is often defined by zero intensity, which yields a spatial support of the image that is helpful for iterative reconstruction. In the proposed method, the image support is not considered as a priori knowledge; rather, it evolves during the reconstruction process. Based on the CS framework, we propose a multigrid method with an adaptive spatial support constraint. Simultaneous algebraic reconstruction (SART) with TV minimization is implemented for comparison. The numerical results show that (1) the multigrid method performs better when fewer than 60 projection views are used, (2) spatial support greatly improves the CS reconstruction, and (3) with few measured projection views, our method outperforms the SART+TV method with the spatial support constraint.

  10. Remote sensing image subpixel mapping based on adaptive differential evolution.

    PubMed

    Zhong, Yanfei; Zhang, Liangpei

    2012-10-01

In this paper, a novel subpixel mapping algorithm based on an adaptive differential evolution (DE) algorithm, namely, adaptive-DE subpixel mapping (ADESM), is developed to perform the subpixel mapping task for remote sensing images. Subpixel mapping may provide a fine-resolution map of class labels from coarser spectral unmixing fraction images, under the assumption of spatial dependence. In ADESM, to utilize DE, the subpixel mapping problem is transformed into an optimization problem by maximizing the spatial dependence index. The traditional DE algorithm is an efficient and powerful population-based stochastic global optimizer for continuous optimization problems, but it cannot be applied directly to the subpixel mapping problem in a discrete search space. In addition, it is not an easy task to properly set the control parameters in DE. To avoid these problems, this paper utilizes an adaptive strategy without user-defined parameters, and a reversible conversion strategy between continuous and discrete space, to improve the classical DE algorithm. During the process of evolution, candidate solutions are further improved by enhanced evolution operators, e.g., mutation, crossover, repair, exchange, and insertion, and by an effective local search. Experimental results using different types of remote sensing images show that the ADESM algorithm consistently outperforms previous subpixel mapping algorithms in all the experiments. Based on sensitivity analysis, ADESM, with its self-adaptive control parameter setting, is better than, or at least comparable to, the standard DE algorithm when considering the accuracy of subpixel mapping, and hence provides an effective new approach to subpixel mapping for remote sensing imagery.
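For reference, the classical DE/rand/1/bin scheme that ADESM builds on can be sketched as follows. This is the standard continuous-space algorithm with conventional parameter values, without the paper's self-adaptive parameter setting or continuous/discrete conversion:

```python
import numpy as np

def de_optimize(f, bounds, pop_size=20, F=0.7, CR=0.9, iters=200, seed=0):
    """Classic DE/rand/1/bin minimization of f over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
            cross = rng.random(dim) < CR                   # binomial crossover
            cross[rng.integers(dim)] = True                # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial < fit[i]:                           # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

On a smooth test objective such as the 3-D sphere function, this scheme converges to the global minimum within a few thousand evaluations; ADESM replaces the fixed F and CR with self-adapted values and adds the discrete-space operators listed above.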

  11. Adaptive correction method for an OCXO and investigation of analytical cumulative time error upper bound.

    PubMed

    Zhou, Hui; Kunz, Thomas; Schwartz, Howard

    2011-01-01

    Traditional oscillators used in timing modules of CDMA and WiMAX base stations are large and expensive. Applying cheaper and smaller, albeit more inaccurate, oscillators in timing modules is an interesting research challenge. An adaptive control algorithm is presented to enhance the oscillators to meet the requirements of base stations during holdover mode. An oscillator frequency stability model is developed for the adaptive control algorithm. This model takes into account the control loop which creates the correction signal when the timing module is in locked mode. A recursive prediction error method is used to identify the system model parameters. Simulation results show that an oscillator enhanced by our adaptive control algorithm improves the oscillator performance significantly, compared with uncorrected oscillators. Our results also show the benefit of explicitly modeling the control loop. Finally, the cumulative time error upper bound of such enhanced oscillators is investigated analytically and comparison results between the analytical and simulated upper bound are provided. The results show that the analytical upper bound can serve as a practical guide for system designers. PMID:21244973
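A recursive prediction error method of the general kind used here for parameter identification can be sketched as recursive least squares with a forgetting factor. The regression model and parameter values below are generic illustrations, not the authors' oscillator frequency stability model:

```python
import numpy as np

def rls_identify(phi, y, lam=0.99):
    """Recursive least-squares identification of theta in
    y[n] = phi[n] . theta + noise, with forgetting factor lam."""
    d = phi.shape[1]
    theta = np.zeros(d)
    P = 1e3 * np.eye(d)                       # large initial covariance
    for x, yn in zip(phi, y):
        k = P @ x / (lam + x @ P @ x)         # gain vector
        theta = theta + k * (yn - x @ theta)  # correct by prediction error
        P = (P - np.outer(k, x @ P)) / lam    # covariance update
    return theta

# Demo: recover a known parameter vector from noisy regressions.
rng = np.random.default_rng(1)
theta_true = np.array([1.5, -0.7])
phi = rng.standard_normal((500, 2))
y = phi @ theta_true + 0.01 * rng.standard_normal(500)
theta_hat = rls_identify(phi, y)
```

The forgetting factor lets the estimate track slowly drifting parameters, which is what makes such schemes attractive for oscillator models whose behavior changes with temperature and aging.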

  13. Prediction-based manufacturing center self-adaptive demand side energy optimization in cyber physical systems

    NASA Astrophysics Data System (ADS)

    Sun, Xinyao; Wang, Xue; Wu, Jiangwei; Liu, Youda

    2014-05-01

Cyber-physical systems (CPS) have recently emerged as a new technology that can provide promising approaches to demand side management (DSM), an important capability in industrial power systems. Meanwhile, the manufacturing center is a typical industrial power subsystem with dozens of high-energy-consumption devices that have complex physical dynamics. DSM, integrated with CPS, is an effective methodology for solving energy optimization problems in the manufacturing center. This paper presents a prediction-based manufacturing center self-adaptive energy optimization method for demand side management in cyber-physical systems. To gain prior knowledge of DSM operating results, a sparse Bayesian learning based componential forecasting method is introduced to predict 24-hour electric load levels for specific industrial areas in China. From these data, a pricing strategy is designed based on short-term load forecasting results. To minimize total energy costs while guaranteeing manufacturing center service quality, an adaptive demand side energy optimization algorithm is presented. The proposed scheme is tested in a machining center energy optimization experiment. An AMI sensing system is used to measure the demand side energy consumption of the manufacturing center. Based on the data collected from the sensing system, the load prediction-based energy optimization scheme is implemented. By employing both the PSO and the CPSO methods, the problem of DSM in the manufacturing center is solved. The results of the experiment show that the self-adaptive CPSO energy optimization method improves optimization results by 5% compared with the traditional PSO method.
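A minimal global-best PSO of the kind used as the baseline optimizer in the experiment can be sketched as follows. Parameter values are conventional defaults on a toy objective; the CPSO variant and the paper's energy cost model are not reproduced:

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm minimization of f."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))      # particle positions
    v = np.zeros((n, dim))                    # particle velocities
    pbest = x.copy()                          # personal best positions
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()  # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())
```

In the paper's setting, the objective f would be the demand side energy cost under the forecast-driven pricing strategy rather than the toy sphere function used here.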

  14. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    NASA Technical Reports Server (NTRS)

    Ham, F.; Young, Y.-N.

    2003-01-01

In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to other free surface methods are reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases, including 3D drop breakup in an impulsively accelerated free stream and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.

  15. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

  16. Individual-based models for adaptive diversification in high-dimensional phenotype spaces.

    PubMed

    Ispolatov, Iaroslav; Madhok, Vaibhav; Doebeli, Michael

    2016-02-01

    Most theories of evolutionary diversification are based on equilibrium assumptions: they are either based on optimality arguments involving static fitness landscapes, or they assume that populations first evolve to an equilibrium state before diversification occurs, as exemplified by the concept of evolutionary branching points in adaptive dynamics theory. Recent results indicate that adaptive dynamics may often not converge to equilibrium points and instead generate complicated trajectories if evolution takes place in high-dimensional phenotype spaces. Even though some analytical results on diversification in complex phenotype spaces are available, to study this problem in general we need to reconstruct individual-based models from the adaptive dynamics generating the non-equilibrium dynamics. Here we first provide a method to construct individual-based models such that they faithfully reproduce the given adaptive dynamics attractor without diversification. We then show that a propensity to diversify can be introduced by adding Gaussian competition terms that generate frequency dependence while still preserving the same adaptive dynamics. For sufficiently strong competition, the disruptive selection generated by frequency-dependence overcomes the directional evolution along the selection gradient and leads to diversification in phenotypic directions that are orthogonal to the selection gradient. PMID:26598329

  18. Physically constrained voxel-based penalty adaptation for ultra-fast IMRT planning.

    PubMed

    Wahl, Niklas; Bangert, Mark; Kamerling, Cornelis P; Ziegenhein, Peter; Bol, Gijsbert H; Raaymakers, Bas W; Oelfke, Uwe

    2016-07-08

Conventional treatment planning in intensity-modulated radiation therapy (IMRT) is a trial-and-error process that usually involves tedious tweaking of optimization parameters. Here, we present an algorithm that automates part of this process, in particular the adaptation of voxel-based penalties within normal tissue. Thereby, the proposed algorithm explicitly considers a priori known physical limitations of photon irradiation. The efficacy of the developed algorithm is assessed in treatment planning studies comprising 16 prostate and 5 head and neck cases. We study the eradication of hot spots in the normal tissue, effects on target coverage and target conformity, as well as selected dose-volume points for organs at risk. The potential of the proposed method to generate class solutions for the two indications is investigated, and run-times of the algorithms are reported. Physically constrained voxel-based penalty adaptation is an adequate means to automatically detect and eradicate hot spots during IMRT planning while maintaining target coverage and conformity. Negative effects on organs at risk are comparatively small and restricted to lower doses. Using physically constrained voxel-based penalty adaptation, it was possible to improve the generation of class solutions for both indications. Considering the reported run-times of less than 20 s, physically constrained voxel-based penalty adaptation has the potential to reduce the clinical workload during planning and to enable automated treatment plan generation in the long run, facilitating adaptive radiation treatments.

  19. Designing an Adaptive Web-Based Learning System Based on Students' Cognitive Styles Identified Online

    ERIC Educational Resources Information Center

    Lo, Jia-Jiunn; Chan, Ya-Chen; Yeh, Shiou-Wen

    2012-01-01

    This study developed an adaptive web-based learning system focusing on students' cognitive styles. The system is composed of a student model and an adaptation model. It collected students' browsing behaviors to update the student model for unobtrusively identifying student cognitive styles through a multi-layer feed-forward neural network (MLFF).…

  20. Adaptive Projection Subspace Dimension for the Thick-Restart Lanczos Method

    SciTech Connect

    Yamazaki, Ichitaro; Bai, Zhaojun; Simon, Horst; Wang, Lin-Wang; Wu, K.

    2008-10-01

    The Thick-Restart Lanczos (TRLan) method is an effective method for solving large-scale Hermitian eigenvalue problems. However, its performance strongly depends on the dimension of the projection subspace. In this paper, we propose an objective function to quantify the effectiveness of a chosen subspace dimension, and then introduce an adaptive scheme to dynamically adjust the dimension at each restart. An open-source software package, nu-TRLan, which implements the TRLan method with this adaptive projection subspace dimension is available in the public domain. The numerical results of synthetic eigenvalue problems are presented to demonstrate that nu-TRLan achieves speedups of between 0.9 and 5.1 over the static method using a default subspace dimension. To demonstrate the effectiveness of nu-TRLan in a real application, we apply it to the electronic structure calculations of quantum dots. We show that nu-TRLan can achieve speedups of greater than 1.69 over the state-of-the-art eigensolver for this application, which is based on the Conjugate Gradient method with a powerful preconditioner.

  1. Episodic memories predict adaptive value-based decision-making.

    PubMed

    Murty, Vishnu P; FeldmanHall, Oriel; Hunter, Lindsay E; Phelps, Elizabeth A; Davachi, Lila

    2016-05-01

Prior research illustrates that memory can guide value-based decision-making. For example, previous work has implicated both working memory and procedural memory (i.e., reinforcement learning) in guiding choice. However, other types of memories, such as episodic memory, may also influence decision-making. Here we test the role of episodic memory, specifically item versus associative memory, in supporting value-based choice. Participants completed a task where they first learned the value associated with trial-unique lotteries. After a short delay, they completed a decision-making task where they could choose to reengage with previously encountered lotteries or with new, never before seen lotteries. Finally, participants completed a surprise memory test for the lotteries and their associated values. Results indicate that participants chose to reengage more often with lotteries that had resulted in high versus low rewards. Critically, participants not only formed detailed, associative memories for the reward values coupled with individual lotteries, but also exhibited adaptive decision-making only when they had intact associative memory. We further found that the relationship between adaptive choice and associative memory generalized to more complex, ecologically valid choice behavior, such as social decision-making. However, individuals more strongly encode experiences of social violations, such as being treated unfairly, suggesting a bias in how individuals form associative memories within social contexts. Together, these findings provide an important integration of the episodic memory and decision-making literatures to better understand key mechanisms supporting adaptive behavior. PMID:26999046

  2. Speaker-Adaptive Speech Recognition Based on Surface Electromyography

    NASA Astrophysics Data System (ADS)

    Wand, Michael; Schultz, Tanja

    We present our recent advances in silent speech interfaces using electromyographic signals, which capture the movements of the human articulatory muscles at the skin surface, for recognizing continuously spoken speech. Previous systems were limited to speaker- and session-dependent recognition tasks on small amounts of training and test data. In this article we present speaker-independent and speaker-adaptive training methods that allow us to use a large corpus of data from many speakers to train acoustic models more reliably. We use the speaker-dependent system as a baseline, carefully tuning the data preprocessing and acoustic modeling. We then compare the performance of speaker-dependent and speaker-independent acoustic models on our corpus and carry out model adaptation experiments.

  3. A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures

    SciTech Connect

    Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George

    2012-01-01

    We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.
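
    The separation behind the fast multipole method (a well-separated cluster's far-field effect is captured by a few moments) can be illustrated with the lowest-order, monopole term. This toy sketch is not the authors' parallel, kernel-independent implementation; the source distribution and target point are invented:

```python
import numpy as np

rng = np.random.default_rng(9)
# A cluster of sources far from the evaluation point: the FMM exploits
# the fact that its far-field effect is captured by a few moments.
src = rng.uniform(0.0, 1.0, (500, 3))        # sources in a unit box
q = rng.uniform(0.5, 1.5, 500)               # positive "charges"
target = np.array([20.0, 0.0, 0.0])          # well-separated target point

# Direct O(N) evaluation of the 1/r potential at the target.
exact = np.sum(q / np.linalg.norm(src - target, axis=1))

# Zeroth-order multipole: place the total charge at the charge centroid.
# Expanding about the centroid also cancels the dipole term.
centroid = (q[:, None] * src).sum(axis=0) / q.sum()
approx = q.sum() / np.linalg.norm(centroid - target)
rel_err = abs(approx - exact) / exact
```

    For a cluster of radius r at distance d, the leading error of this expansion scales like (r/d)^2, which is why the FMM can use cheap low-order expansions for far interactions and reserve direct sums for near ones.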

  4. Adapting Practice-Based Intervention Research to Electronic Environments: Opportunities and Complexities at Two Institutions

    PubMed Central

    Stille, Christopher J.; Lockhart, Steven A.; Maertens, Julie A.; Madden, Christi A.; Darden, Paul M.

    2015-01-01

    Background and Purpose: Primary care practice-based research has become more complex with increased use of electronic health records (EHRs). Little has been reported about changes in study planning and execution that are required as practices change from paper-based to electronic-based environments. We describe the evolution of a pediatric practice-based intervention study as it was adapted for use in the electronic environment, to enable other practice-based researchers to plan efficient, effective studies. Methods: We adapted a paper-based pediatric office-level intervention to enhance parent-provider communication about subspecialty referrals for use in two practice-based research networks (PBRNs) with partially and fully electronic environments. We documented the process of adaptation and its effect on study feasibility and efficiency, resource use, and administrative and regulatory complexities, as the study was implemented in the two networks. Results: Considerable time and money were required to adapt the paper-based study to the electronic environment, requiring extra meetings with institutional EHR, regulatory, and administrative teams, and increased practice training. Institutional unfamiliarity with using EHRs in practice-based research, and the consequent need to develop new policies, were major contributors to delays. Adapting intervention tools to the EHR and minimizing practice disruptions was challenging, but resulted in several efficiencies as compared with a paper-based project. In particular, recruitment and tracking of subjects and data collection were easier and more efficient. Conclusions: Practice-based intervention research in an electronic environment adds considerable cost and time at the outset of a study, especially for centers unfamiliar with such research. Efficiencies generated have the potential of easing the work of study enrollment, subject tracking, and data collection. PMID:25848633

  5. MEMS-based extreme adaptive optics for planet detection

    SciTech Connect

    Macintosh, B A; Graham, J R; Oppenheimer, B; Poyneer, L; Sivaramakrishnan, A; Veran, J

    2005-11-18

    The next major step in the study of extrasolar planets will be the direct detection, resolved from their parent star, of a significant sample of Jupiter-like extrasolar giant planets. Such detection will open up new parts of the extrasolar planet distribution and allow spectroscopic characterization of the planets themselves. Detecting Jovian planets in 5-50 AU scale orbits around nearby stars requires adaptive optics systems and coronagraphs an order of magnitude more powerful than those available today: the realm of "Extreme" adaptive optics. We present the basic requirements and design for such a system, the Gemini Planet Imager (GPI). GPI will require a MEMS-based deformable mirror with good surface quality, 2-4 micron stroke (operated in tandem with a conventional low-order "woofer" mirror), and a fully functional 48-actuator-diameter aperture.

  6. A method of adaptive wavelet filtering of the peripheral blood flow oscillations under stationary and non-stationary conditions.

    PubMed

    Tankanag, Arina V; Chemeris, Nikolay K

    2009-10-01

    The paper describes an original method for the analysis of peripheral blood flow oscillations measured with the laser Doppler flowmetry (LDF) technique. The method is based on the continuous wavelet transform and adaptive wavelet theory, and applies adaptive wavelet filtering to the LDF data. The method allows one to examine the dynamics of oscillation amplitudes in a wide frequency range (from 0.007 to 2 Hz) and to process both stationary and non-stationary short (6 min) signals. The capabilities of the method are demonstrated by analyzing LDF signals recorded at rest and upon humeral occlusion. The paper shows that the main advantage of the proposed method is a significant reduction of 'border effects' compared to traditional wavelet analysis. It was found that the low-frequency amplitudes obtained by adaptive wavelets are significantly higher than those obtained by non-adaptive ones. The suggested method would be useful for the analysis of low-frequency components of short-lived transitional processes under the conditions of functional tests. The method of adaptive wavelet filtering can be used to process stationary and non-stationary biomedical signals (cardiograms, encephalograms, myograms, etc.), as well as signals studied in other fields of science and engineering.
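
    A bare-bones Morlet continuous wavelet transform sketch for an LDF-like signal (synthetic sinusoids at 0.1 Hz and 1 Hz; the paper's adaptive border-effect handling is omitted, and the sampling rate is an assumption for the example):

```python
import numpy as np

def morlet_cwt_amplitude(sig, fs, freqs, w0=6.0):
    """Amplitude of a Morlet continuous wavelet transform, computed by
    direct convolution -- a plain, non-adaptive stand-in for the
    filtering described above."""
    out = np.empty((len(freqs), len(sig)))
    for i, f in enumerate(freqs):
        scale = w0 * fs / (2 * np.pi * f)
        n = int(6 * scale)
        x = np.arange(-n, n + 1) / scale
        wavelet = np.exp(1j * w0 * x) * np.exp(-x**2 / 2) / np.sqrt(scale)
        out[i] = np.abs(np.convolve(sig, wavelet, 'same'))
    return out

fs = 20.0                                   # assumed sampling rate in Hz
t = np.arange(0, 360.0, 1 / fs)             # a 6-minute record
sig = np.sin(2 * np.pi * 0.1 * t) + 0.3 * np.sin(2 * np.pi * 1.0 * t)
freqs = np.array([0.05, 0.1, 1.0])
amp = morlet_cwt_amplitude(sig, fs, freqs)
mid = slice(len(t) // 4, 3 * len(t) // 4)   # crudely avoid border effects
power = amp[:, mid].mean(axis=1)            # mean amplitude per band
```

    The central region must be used here precisely because of the border effects a plain CWT suffers at low frequencies; reducing those effects on short records is the contribution the abstract describes.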

  7. Adaptive bad pixel correction algorithm for IRFPA based on PCNN

    NASA Astrophysics Data System (ADS)

    Leng, Hanbing; Zhou, Zuofeng; Cao, Jianzhong; Yi, Bo; Yan, Aqi; Zhang, Jian

    2013-10-01

    Bad pixels and response non-uniformity are the primary obstacles when an IRFPA is used in thermal imaging systems. The bad pixels of an IRFPA include fixed bad pixels and random bad pixels. The former are caused by material or manufacturing defects and their positions are fixed; the latter are caused by temperature drift and their positions change over time. The traditional radiometric-calibration-based bad pixel detection and compensation algorithm is only valid for fixed bad pixels. Scene-based bad pixel correction is the effective way to eliminate both kinds of bad pixels. Currently, the most widely used scene-based bad pixel correction algorithm is based on the adaptive median filter (AMF). In this algorithm, bad pixels are regarded as image noise and replaced by the filtered value. However, missed corrections and false corrections often occur when the AMF is used to handle complex infrared scenes. To solve this problem, a new adaptive bad pixel correction algorithm based on pulse coupled neural networks (PCNN) is proposed. Potential bad pixels are detected by the PCNN in the first step; image sequences are then used periodically to confirm the real bad pixels and exclude the false ones; finally, bad pixels are replaced by the filtered result. Experimental results on real infrared images obtained from a camera show the effectiveness of the proposed algorithm.
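
    A minimal sketch of scene-based correction in the adaptive-median spirit described above, assuming a simple local-median test; the PCNN detection stage and the temporal confirmation step are not modeled, and the synthetic frame and threshold are invented for the example:

```python
import numpy as np

def correct_bad_pixels(img, thresh=25.0, win=1):
    """Flag pixels deviating from their local median by more than
    `thresh` and replace them with that median -- a simplified
    adaptive-median stand-in for the schemes discussed above."""
    padded = np.pad(img.astype(float), win, mode='reflect')
    out = img.astype(float).copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + 2 * win + 1, j:j + 2 * win + 1]
            med = np.median(patch)
            if abs(img[i, j] - med) > thresh:
                out[i, j] = med
    return out

clean = np.tile(np.arange(16.0) * 10.0, (16, 1))   # smooth synthetic scene
frame = clean.copy()
frame[5, 5] = 255.0    # stuck-high ("hot") pixel
frame[10, 3] = 0.0     # stuck-low ("dead") pixel
out = correct_bad_pixels(frame)
```

    On this smooth scene the two injected outliers are replaced by their neighborhood medians while every other pixel is left untouched; the abstract's point is that on complex real scenes such a purely spatial test starts to miss and over-correct, motivating the PCNN detector plus temporal confirmation.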

  8. Experiments on Adaptive Techniques for Host-Based Intrusion Detection

    SciTech Connect

    DRAELOS, TIMOTHY J.; COLLINS, MICHAEL J.; DUGGAN, DAVID P.; THOMAS, EDWARD V.; WUNSCH, DONALD

    2001-09-01

    This research explores four experiments of adaptive host-based intrusion detection (ID) techniques in an attempt to develop systems that can detect novel exploits. The technique considered to have the most potential is adaptive critic designs (ACDs) because of their utilization of reinforcement learning, which allows learning exploits that are difficult to pinpoint in sensor data. Preliminary results of ID using an ACD, an Elman recurrent neural network, and a statistical anomaly detection technique demonstrate an ability to learn to distinguish between clean and exploit data. We used the Solaris Basic Security Module (BSM) as a data source and performed considerable preprocessing on the raw data. A detection approach called generalized signature-based ID is recommended as a middle ground between signature-based ID, which has an inability to detect novel exploits, and anomaly detection, which detects too many events including events that are not exploits. The primary results of the ID experiments demonstrate the use of custom data for generalized signature-based intrusion detection and the ability of neural network-based systems to learn in this application environment.

  9. Efficient reconstruction method for ground layer adaptive optics with mixed natural and laser guide stars.

    PubMed

    Wagner, Roland; Helin, Tapio; Obereder, Andreas; Ramlau, Ronny

    2016-02-20

    The imaging quality of modern ground-based telescopes such as the planned European Extremely Large Telescope is affected by atmospheric turbulence. In consequence, they heavily depend on stable and high-performance adaptive optics (AO) systems. Using measurements of incoming light from guide stars, an AO system compensates for the effects of turbulence by adjusting so-called deformable mirror(s) (DMs) in real time. In this paper, we introduce a novel reconstruction method for ground layer adaptive optics. In the literature, a common approach to this problem is to use Bayesian inference in order to model the specific noise structure appearing due to spot elongation. This approach leads to large coupled systems with high computational effort. Recently, fast solvers of linear order, i.e., with computational complexity O(n), where n is the number of DM actuators, have emerged. However, the quality of such methods typically degrades in low flux conditions. Our key contribution is to achieve the high quality of the standard Bayesian approach while at the same time maintaining the linear order speed of the recent solvers. Our method is based on performing a separate preprocessing step before applying the cumulative reconstructor (CuReD). The efficiency and performance of the new reconstructor are demonstrated using the OCTOPUS, the official end-to-end simulation environment of the ESO for extremely large telescopes. For more specific simulations we also use the MOST toolbox. PMID:26906596
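
    The statistical idea behind ground-layer AO (the ground layer is common to all guide-star directions, so averaging per-star phase estimates isolates it) can be sketched with synthetic phase vectors. This is only the averaging principle, not the paper's Bayesian preprocessing or the CuReD reconstructor; all sizes and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(10)
n_pts = 200                                  # toy pupil-grid phase samples
ground = rng.standard_normal(n_pts)          # ground layer, seen by all stars
# Each guide-star direction sees the ground layer plus its own
# (uncorrelated) high-altitude turbulence contribution.
stars = [ground + 0.8 * rng.standard_normal(n_pts) for _ in range(9)]

glao_est = np.mean(stars, axis=0)            # average over directions
err_single = np.std(stars[0] - ground)       # one star alone
err_avg = np.std(glao_est - ground)          # averaged estimate
```

    Averaging K directions suppresses the unshared high-altitude layers roughly by 1/sqrt(K) while leaving the common ground layer intact, which is why ground-layer AO can use many (noisy, elongated-spot) laser guide stars effectively.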

  10. Method and apparatus for adaptive force and position control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1989-01-01

    The present invention discloses systematic methods and apparatus for the design of real-time controllers. Real-time control employs adaptive force and position control by means of feedforward and feedback controllers, where the feedforward controller is the inverse of the linearized model of the robot dynamics and contains only proportional-double-derivative terms. The feedback controller, of the proportional-integral-derivative type, ensures that the manipulator joints follow reference trajectories and achieves robust tracking of step-plus-exponential trajectories, all in real time. The adaptive controller includes adaptive force and position control within a hybrid control architecture. The adaptive force controller achieves tracking of desired force setpoints, and the adaptive position controller accomplishes tracking of desired position trajectories. Circuits in the adaptive feedback and feedforward controllers are varied by adaptation laws.
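
    The feedforward-plus-feedback structure can be sketched for a one-degree-of-freedom toy plant. This is a hedged illustration, not the patented controller: the "joint" is a single mass with unmodeled friction, the feedforward term is the inverse of the nominal dynamics, and all gains are invented for the example:

```python
import numpy as np

# Toy 1-DOF joint: mass m driven by force u, with viscous friction b
# that the nominal model omits.  Feedforward supplies m * (desired
# acceleration); a PID feedback term corrects the residual error.
m, b = 2.0, 0.5                   # true mass and unmodeled friction
kp, ki, kd = 400.0, 100.0, 40.0   # invented PID gains
dt, T = 1e-3, 10.0
x = v = integ = 0.0
errs = []
for step in range(int(T / dt)):
    t = step * dt
    xd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)   # reference trajectory
    e, de = xd - x, vd - v
    integ += e * dt
    u = m * ad + kp * e + ki * integ + kd * de      # feedforward + PID
    a = (u - b * v) / m                              # true plant dynamics
    v += a * dt                                      # semi-implicit Euler
    x += v * dt
    errs.append(abs(e))
steady_err = max(errs[-1000:])                       # error over final second
```

    The feedforward term does the bulk of the work along the known trajectory; the feedback loop only has to absorb the unmodeled friction, which is the division of labor the abstract describes (with adaptation laws then tuning the terms online).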

  11. Building Adaptive Capacity with the Delphi Method and Mediated Modeling for Water Quality and Climate Change Adaptation in Lake Champlain Basin

    NASA Astrophysics Data System (ADS)

    Coleman, S.; Hurley, S.; Koliba, C.; Zia, A.; Exler, S.

    2014-12-01

    Eutrophication and nutrient pollution of surface waters occur within complex governance, social, hydrologic and biophysical basin contexts. The pervasive and perennial nutrient pollution in Lake Champlain Basin, despite decades of efforts, exemplifies problems found across the world's surface waters. Stakeholders with diverse values, interests, and forms of explicit and tacit knowledge determine water quality impacts through land use, agricultural and water resource decisions. Uncertainty, ambiguity and dynamic feedback further complicate the ability to promote the continual provision of water quality and ecosystem services. Adaptive management of water resources and land use requires mechanisms to allow for learning and integration of new information over time. The transdisciplinary Research on Adaptation to Climate Change (RACC) team is working to build regional adaptive capacity in Lake Champlain Basin while studying and integrating governance, land use, hydrological, and biophysical systems to evaluate implications for adaptive management. The RACC team has engaged stakeholders through mediated modeling workshops, online forums, surveys, focus groups and interviews. In March 2014, CSS2CC.org, an interactive online forum to source and identify adaptive interventions from a group of stakeholders across sectors, was launched. The forum, based on the Delphi Method, brings forward the collective wisdom of stakeholders and experts to identify potential interventions and governance designs in response to scientific uncertainty and ambiguity surrounding the effectiveness of any strategy, climate change impacts, and the social and natural systems governing water quality and eutrophication. A Mediated Modeling Workshop followed the forum in May 2014, where participants refined and identified plausible interventions under different governance, policy and resource scenarios. Results from the online forum and workshop can identify emerging consensus across scales and sectors.

  12. Quality evaluation of adaptive optical image based on DCT and Rényi entropy

    NASA Astrophysics Data System (ADS)

    Xu, Yuannan; Li, Junwei; Wang, Jing; Deng, Rong; Dong, Yanbing

    2015-04-01

    Adaptive optical telescopes play an increasingly important role in ground-based detection systems, and they produce so many images that a suitable quality evaluation method is needed to select good-quality images automatically and save human effort. It is well known that adaptive optical images are no-reference images. In this paper, a new logarithmic evaluation method for adaptive optical images, based on the discrete cosine transform (DCT) and Rényi entropy, is proposed. Through the DCT using a one- or two-dimensional window, the statistical properties of the Rényi entropy of images are studied. Directional Rényi entropy maps of an input image, containing different information content, are obtained, and the mean values of the different directional Rényi entropy maps are calculated. For image quality evaluation, the directional Rényi entropy and its standard deviation over the region of interest are selected as indicators of the anisotropy of the image, and the standard deviation of the different directional Rényi entropies is taken as the quality evaluation value for an adaptive optical image. Experimental results show that the quality ranking produced by the proposed method matches well with visual inspection.
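
    A bare-bones sketch of a DCT-plus-Rényi-entropy indicator, assuming a global (non-directional, non-windowed) variant; the images and the entropy order are synthetic stand-ins, not the paper's procedure:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)
    return C

def dct_renyi_entropy(img, alpha=3.0):
    """Renyi entropy of the normalized DCT energy distribution of an
    image: H_alpha = log(sum p^alpha) / (1 - alpha).  A simple
    no-reference indicator in the spirit of the method above."""
    C = dct_matrix(img.shape[0])
    D = dct_matrix(img.shape[1])
    coeffs = C @ img @ D.T
    p = coeffs**2
    p = p / p.sum()
    p = p[p > 0]
    return np.log((p**alpha).sum()) / (1.0 - alpha)

rng = np.random.default_rng(1)
sharp = rng.standard_normal((32, 32))        # broadband "detailed" image
k = np.ones(5) / 5.0                          # 5-tap box blur
blurred = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, sharp)
blurred = np.apply_along_axis(lambda c: np.convolve(c, k, 'same'), 0, blurred)
H_sharp = dct_renyi_entropy(sharp)
H_blurred = dct_renyi_entropy(blurred)
```

    Blur concentrates DCT energy into the low frequencies, lowering the entropy of the coefficient-energy distribution, which is the basic signal such an indicator picks up; the paper refines this with directional, windowed entropy maps.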

  13. [Automated recognition of quasars based on adaptive radial basis function neural networks].

    PubMed

    Zhao, Mei-Fang; Luo, A-Li; Wu, Fu-Chao; Hu, Zhan-Yi

    2006-02-01

    Recognizing and certifying quasars from their spectra is an important task in astronomy. This paper presents a novel adaptive method for the automated recognition of quasars based on radial basis function neural networks (RBFN). The proposed method is composed of three parts: (1) the feature space is reduced by PCA (principal component analysis) of the normalized input spectra; (2) an adaptive RBFN is constructed and trained in this reduced space, where K-means clustering is used for initialization and then, based on the sum of squared errors and a gradient descent optimization technique, the number of neurons in the hidden layer is adaptively increased to improve recognition performance; (3) quasar spectra are then recognized by the trained RBFN. The proposed adaptive RBFN is shown not only to overcome the difficulty of selecting the number of hidden-layer neurons in the traditional RBFN algorithm, but also to increase the stability and accuracy of quasar recognition. Besides, the proposed method is particularly useful for the automatic processing of the voluminous spectra produced by a large-scale sky survey project, such as our LAMOST, due to its efficiency.
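
    A toy version of a growing RBF network can be sketched as follows. It is a hedged simplification of the adaptive construction described above: greedy center placement stands in for k-means initialization plus gradient tuning, and a smooth synthetic regression target stands in for spectra:

```python
import numpy as np

def rbf_design(X, centers, gamma):
    """Design matrix of Gaussian RBF activations."""
    d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

def fit_growing_rbfn(X, y, gamma=2.0, tol=0.05, max_neurons=30, seed=7):
    """Grow an RBF network until the training MSE drops below tol:
    solve the output weights by least squares, then place a new hidden
    neuron at the worst-fit training sample."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), 2, replace=False)]
    while True:
        Phi = rbf_design(X, centers, gamma)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        err = np.mean((Phi @ w - y)**2)
        if err < tol or len(centers) >= max_neurons:
            return centers, w, err
        worst = np.argmax(np.abs(Phi @ w - y))
        centers = np.vstack([centers, X[worst]])

rng = np.random.default_rng(8)
X = rng.uniform(-1.0, 1.0, (200, 2))
y = np.sin(3.0 * X[:, 0]) * np.cos(3.0 * X[:, 1])   # smooth toy target
centers, w, err = fit_growing_rbfn(X, y)
```

    Growing the hidden layer until an error criterion is met sidesteps fixing the neuron count in advance, which is the difficulty of the traditional RBFN the abstract highlights.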

  14. Adaptive aggregation method for the Chemical Master Equation.

    PubMed

    Zhang, Jingwei; Watson, Layne T; Cao, Yang

    2009-01-01

    One important aspect of biological systems such as gene regulatory networks and protein-protein interaction networks is the stochastic nature of interactions between chemical species. Such stochastic behaviour can be accurately modelled by the Chemical Master Equation (CME). However, the CME usually imposes intensive computational requirements when used to characterise molecular biological systems. The major challenge comes from the curse of dimensionality, which has been tackled by a few research papers. The essential goal is to aggregate the system efficiently with limited approximation errors. This paper presents an adaptive way to implement the aggregation process using information collected from Monte Carlo simulations. Numerical results show the effectiveness of the proposed algorithm.
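
    The stochastic dynamics that the CME describes are commonly sampled with Gillespie's stochastic simulation algorithm. As a hedged aside (this is not the paper's aggregation scheme, just the kind of Monte Carlo information it collects), a minimal SSA for a birth-death process whose stationary mean k/gamma is known analytically:

```python
import numpy as np

def ssa_birth_death(k=10.0, gamma=1.0, t_end=2000.0, seed=2):
    """Gillespie SSA for the birth-death process  0 -> X (rate k),
    X -> 0 (rate gamma * x).  Returns the time-averaged copy number,
    whose stationary mean is k / gamma."""
    rng = np.random.default_rng(seed)
    t, x, area = 0.0, 0, 0.0
    while t < t_end:
        a_birth, a_death = k, gamma * x          # reaction propensities
        a_total = a_birth + a_death
        tau = rng.exponential(1.0 / a_total)     # time to next reaction
        area += x * min(tau, t_end - t)          # accumulate time average
        t += tau
        if rng.random() * a_total < a_birth:     # pick which reaction fires
            x += 1
        else:
            x -= 1
    return area / t_end

mean_x = ssa_birth_death()
```

    For networks with many species such trajectories become expensive, which is the dimensionality problem the paper's adaptive aggregation addresses by lumping states based on information gathered from simulations like this one.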

  15. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application of computer graphics and virtual reality, and most existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware with traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform traditional triangle-based rendering in speed when applied to highly complex soft-tissue cutting models. Nevertheless, PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms. Experiments conducted on the latest hardware further demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm, and that the proposed contrast enhancement algorithm can be used with various variants of the conventional PBR algorithm.
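
    As a hedged stand-in for a contrast-enhancement postprocessing module (far simpler than the perceptual technique with automatic parameter selection that the paper describes; here the percentile clip points play the role of automatically chosen parameters), a percentile-based contrast stretch:

```python
import numpy as np

def auto_contrast(img, low_pct=2.0, high_pct=98.0):
    """Percentile-based contrast stretch: clip points are derived from
    the image itself, then intensities are remapped to [0, 1]."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img - lo) / max(hi - lo, 1e-12)
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(3)
flat = 0.45 + 0.1 * rng.random((64, 64))     # low-contrast rendered frame
enh = auto_contrast(flat)                     # full [0, 1] range restored
```

    Because the clip points come from the image statistics rather than fixed constants, the same postprocessing step adapts to frames of differing brightness, which is the role such a module plays after point-based rendering.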

  16. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
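
    The two-step feature extraction idea (run the convolutional stage once on the whole frame, then let every candidate window reuse a slice of the shared feature map) can be sketched with a single toy convolution in NumPy. The kernel and window coordinates are invented, and a real CNN's nonlinearities, pooling, and fully-connected stage are omitted:

```python
import numpy as np

def conv2d_valid(img, kern):
    """Plain 'valid' 2-D correlation, standing in for a conv layer."""
    kh, kw = kern.shape
    H, W = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

rng = np.random.default_rng(4)
frame = rng.standard_normal((40, 40))
kern = rng.standard_normal((3, 3))

# Step 1: run the convolutional stage once on the whole frame.
shared = conv2d_valid(frame, kern)

# Step 2: a candidate window at (i, j) of size h x w reuses a slice of
# the shared map instead of re-convolving its own crop.
i, j, h, w = 10, 7, 16, 16
reused = shared[i:i + h - 2, j:j + w - 2]          # 3x3 kernel -> minus 2
recomputed = conv2d_valid(frame[i:i + h, j:j + w], kern)
```

    Translation equivariance of 'valid' convolution makes the reused slice identical to a per-window recomputation, so the expensive convolutional work is paid once per frame rather than once per candidate window.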

  17. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  19. High dynamic range image rendering with a Retinex-based adaptive filter.

    PubMed

    Meylan, Laurence; Süsstrunk, Sabine

    2006-09-01

    We propose a new method to render high dynamic range images that models the global and local adaptation of the human visual system. Our method is based on the center-surround Retinex model. The first novelty of our method is the use of an adaptive filter whose shape follows the image's high-contrast edges, thus reducing the halo artifacts common to other methods. Second, only the luminance channel is processed; it is defined by the first component of a principal component analysis. Principal component analysis provides orthogonality between channels and thus reduces the chromatic changes caused by the modification of luminance. We show that our method efficiently renders high dynamic range images, and we compare our results with the current state of the art. PMID:16948325
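
    A minimal single-scale center/surround Retinex sketch, assuming a fixed Gaussian surround and a plain intensity channel in place of the paper's edge-adaptive filter and PCA-derived luminance; the scene and filter width are invented:

```python
import numpy as np

def single_scale_retinex(lum, sigma=4.0):
    """Center/surround Retinex on a luminance channel: log(center)
    minus log(Gaussian-blurred surround).  The edge-following adaptive
    surround of the paper is simplified to a fixed separable Gaussian."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    blur = np.apply_along_axis(lambda v: np.convolve(v, g, 'same'), 0, lum)
    blur = np.apply_along_axis(lambda v: np.convolve(v, g, 'same'), 1, blur)
    return np.log1p(lum) - np.log1p(blur)

# HDR-like scene: a bright region next to a region 100x darker.
lum = np.full((64, 64), 1000.0)
lum[:, 32:] = 10.0
out = single_scale_retinex(lum)

# Away from the boundary, both regions are mapped near zero: the global
# dynamic range collapses while local contrast is preserved.
gap_in = abs(np.log1p(lum[32, 5]) - np.log1p(lum[32, 60]))
gap_out = abs(out[32, 5] - out[32, 60])
```

    The fixed Gaussian surround used here is exactly what produces halos along the bright/dark boundary; shaping the surround filter to follow high-contrast edges, as the paper proposes, is what suppresses them.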

  20. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    SciTech Connect

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  1. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  2. Improvement in adaptive nonuniformity correction method with nonlinear model for infrared focal plane arrays

    NASA Astrophysics Data System (ADS)

    Rui, Lai; Yin-Tang, Yang; Qing, Li; Hui-Xin, Zhou

    2009-09-01

    The scene-adaptive nonuniformity correction (NUC) technique is commonly used to decrease the fixed pattern noise (FPN) in infrared focal plane arrays (IRFPA). However, the correction precision of existing scene-adaptive NUC methods is seriously degraded by the nonlinear response of IRFPA detectors. In this paper, an improved scene-adaptive NUC method that employs an "S"-curve model to approximate the detector response is presented. The performance of the proposed method is tested on a real infrared video sequence, and the experimental results validate that our method improves the correction precision considerably.
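
    A sketch of why an "S"-curve model helps, under the strong simplifying assumption that each pixel's logistic response parameters are already known (the paper estimates the model adaptively from scene data); all parameter ranges are invented:

```python
import numpy as np

def s_curve(x, L, k, x0):
    """Logistic 'S'-curve model of a detector's response to irradiance x."""
    return L / (1.0 + np.exp(-k * (x - x0)))

def invert_s_curve(y, L, k, x0):
    """Recover irradiance from a response, given the S-curve parameters."""
    return x0 - np.log(L / y - 1.0) / k

rng = np.random.default_rng(5)
n_pix = 100
x_true = rng.uniform(0.2, 0.8, n_pix)        # scene irradiance per pixel
L = rng.uniform(0.9, 1.1, n_pix)             # per-pixel response parameters:
k = rng.uniform(8.0, 12.0, n_pix)            # their spread across the array
x0 = rng.uniform(0.45, 0.55, n_pix)          # IS the fixed pattern noise
raw = s_curve(x_true, L, k, x0)              # nonuniform raw frame
corrected = invert_s_curve(raw, L, k, x0)    # model-based correction
fpn_raw = np.std(raw - x_true)
fpn_corrected = np.std(corrected - x_true)
```

    A purely linear (gain/offset) model cannot invert this response away from its linear mid-range, which is the residual error the abstract attributes to detector nonlinearity; a fitted S-curve model inverts it across the whole range.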

  3. Rule-based mechanisms of learning for intelligent adaptive flight control

    NASA Technical Reports Server (NTRS)

    Handelman, David A.; Stengel, Robert F.

    1990-01-01

    How certain aspects of human learning can be used to characterize learning in intelligent adaptive control systems is investigated. Reflexive and declarative memory and learning are described. It is shown that model-based systems-theoretic adaptive control methods exhibit attributes of reflexive learning, whereas the problem-solving capabilities of knowledge-based systems of artificial intelligence are naturally suited for implementing declarative learning. Issues related to learning in knowledge-based control systems are addressed, with particular attention given to rule-based systems. A mechanism for real-time rule-based knowledge acquisition is suggested, and utilization of this mechanism within the context of failure diagnosis for fault-tolerant flight control is demonstrated.

  4. Investigating Item Exposure Control Methods in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Ozturk, Nagihan Boztunc; Dogan, Nuri

    2015-01-01

    This study aims to investigate the effects of item exposure control methods on measurement precision and on test security under various item selection methods and item pool characteristics. In this study, the Randomesque (with item group sizes of 5 and 10), Sympson-Hetter, and Fade-Away methods were used as item exposure control methods. Moreover,…
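
    The Sympson-Hetter idea named above (administer a selected item only with probability K_i, calibrating the K_i by simulation so that no item's exposure rate exceeds a target) can be sketched in a toy simulation. The single-item "test", the information values, and the calibration loop details are invented for the example and are not the study's design:

```python
import numpy as np

rng = np.random.default_rng(6)
n_items, n_examinees, r_max = 30, 2000, 0.2
info = rng.uniform(0.5, 1.5, n_items)       # stand-in for item information
order = np.argsort(-info)                   # most informative tried first
K = np.ones(n_items)                        # Sympson-Hetter parameters

# Calibrate K iteratively: simulate a cohort (one administered item per
# examinee in this toy), shrink K for over-exposed items, and let
# under-exposed items drift back up toward K = 1.
for _ in range(20):
    counts = np.zeros(n_items)
    for _ in range(n_examinees):
        for item in order:
            if rng.random() < K[item]:      # probabilistic administration
                counts[item] += 1
                break
    rates = counts / n_examinees
    K = np.where(rates > r_max,
                 K * r_max / np.maximum(rates, 1e-12),
                 np.minimum(K * 1.1, 1.0))

max_rate = rates.max()
```

    Without the exposure filter, maximum-information selection would give the single best item an exposure rate of 1.0; after calibration the load is spread so no item much exceeds the target rate, which is the test-security/precision trade-off the study investigates.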

  5. An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1994-01-01

    This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version, finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. The plan of this work is first and primarily to focus on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and then to briefly explore some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.

  6. Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

    NASA Astrophysics Data System (ADS)

    Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

    2013-02-01

    The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.

  7. Implementer-Initiated Adaptation of Evidence-Based Interventions: Kids Remember the Blue Wig

    ERIC Educational Resources Information Center

    Gibbs, D. A.; Krieger, K. E.; Cutbush, S. L.; Clinton-Sherrod, A. M.; Miller, S.

    2016-01-01

    Adaptation of evidence-based interventions by implementers is widespread. Although frequently viewed as departures from fidelity, adaptations may be positive in impact and consistent with fidelity. Research typically catalogs adaptations but rarely includes the implementers' perspectives on adaptation. We report data on individuals implementing an…

  8. Learners' Perceptions and Illusions of Adaptivity in Computer-Based Learning Environments

    ERIC Educational Resources Information Center

    Vandewaetere, Mieke; Vandercruysse, Sylke; Clarebout, Geraldine

    2012-01-01

    Research on computer-based adaptive learning environments has shown exemplary growth. Although the mechanisms of effective adaptive instruction are unraveled systematically, little is known about the relative effect of learners' perceptions of adaptivity in adaptive learning environments. As previous research has demonstrated that the learners'…

  9. Designing Training for Temporal and Adaptive Transfer: A Comparative Evaluation of Three Training Methods for Process Control Tasks

    ERIC Educational Resources Information Center

    Kluge, Annette; Sauer, Juergen; Burkolter, Dina; Ritzmann, Sandrina

    2010-01-01

    Training in process control environments requires operators to be prepared for temporal and adaptive transfer of skill. Three training methods were compared with regard to their effectiveness in supporting transfer: Drill & Practice (D&P), Error Training (ET), and procedure-based and error heuristics training (PHT). Communication electronics…

  10. Context-Aware Adaptation in Web-Based Groupware Systems

    NASA Astrophysics Data System (ADS)

    Pinheiro, Manuele Kirsch; Carrillo-Ramos, Angela; Villanova-Oliver, Marlène; Gensel, Jérôme; Berbers, Yolande

    In this chapter, we propose a context-aware filtering process for adapting content delivered to mobile users by Web-based Groupware Systems. This process is based on context-aware profiles, expressing mobile users' preferences for particular situations they encounter when using these systems. These profiles, which are shared between members of a given community, are exploited by the adaptation process in order to select and organize the delivered information into several levels of detail, based on a progressive access model. By defining these profiles, we propose a filtering process that considers both the user's current context and the user's preferences for this context. The notion of context is represented by an object-oriented model we propose, which takes into consideration both the user's physical and collaborative context, including elements related to collaborative activities performed inside the groupware system. The filtering process selects, as a first step, the context-aware profiles that match the user's current context, and then it filters the available content according to the selected profiles and uses the progressive access model to organize the selected information.
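The chapter's object-oriented context model and progressive access levels are richer than a sketch can show, but the two-step idea (select the profiles matching the current context, then filter content by the selected preferences) can be illustrated as follows; all field names (`context`, `topics`, `device`) are invented for this example.

```python
def matching_profiles(profiles, context):
    """Select the profiles whose stated context conditions all hold."""
    return [p for p in profiles
            if all(context.get(k) == v for k, v in p["context"].items())]

def filter_content(items, profiles, context):
    """Keep only the items whose topic some matching profile asks for."""
    wanted = {topic
              for p in matching_profiles(profiles, context)
              for topic in p["topics"]}
    return [item for item in items if item["topic"] in wanted]
```

A profile that says nothing about, e.g., location matches any location, which is how shared community profiles can stay coarse while still being context-selective.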

  11. Fuzzy-based adaptive bandwidth control for loss guarantees.

    PubMed

    Siripongwutikorn, Peerapon; Banerjee, Sujata; Tipper, David

    2005-09-01

    This paper presents the use of adaptive bandwidth control (ABC) for a quantitative packet loss rate guarantee to aggregate traffic in packet switched networks. ABC starts with some initial amount of bandwidth allocated to a queue and adjusts it over time based on online measurements of system states to ensure that the allocated bandwidth is just enough to attain the specified loss requirement. Consequently, no a priori detailed traffic information is required, making ABC more suitable for efficient aggregate quality of service (QoS) provisioning. We propose an ABC algorithm called augmented Fuzzy (A-Fuzzy) control, whereby fuzzy logic control is used to keep an average queue length at an appropriate target value, and the measured packet loss rate is used to augment the standard control to achieve better performance. An extensive simulation study based on both theoretical traffic models and real traffic traces under a wide range of system configurations demonstrates that the A-Fuzzy control itself is highly robust, yields high bandwidth utilization, and is indeed a viable alternative and improvement to static bandwidth allocation (SBA) and existing adaptive bandwidth allocation schemes. Additionally, we develop a simple and efficient measurement-based admission control procedure which limits the amount of input traffic in order to maintain the performance of the A-Fuzzy control at an acceptable level.
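A minimal sketch of the A-Fuzzy idea: a small fuzzy rule base keeps the queue length near a target, and the measured loss rate augments the output when the loss requirement is violated. The membership shapes, rule centres, and gains below are illustrative choices, not the paper's tuned controller.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_bw_adjust(queue_len, target, loss_rate, loss_target=0.01):
    """Return a relative bandwidth adjustment in [-1, 1]."""
    e = (queue_len - target) / target          # normalised queue-length error
    # Rule base: queue too short -> release bandwidth; too long -> add more.
    mf = {"neg": tri(e, -2.0, -1.0, 0.0),
          "zero": tri(e, -1.0, 0.0, 1.0),
          "pos": tri(e, 0.0, 1.0, 2.0)}
    centers = {"neg": -0.5, "zero": 0.0, "pos": 0.5}
    num = sum(mf[k] * centers[k] for k in mf)
    den = sum(mf.values()) or 1.0              # weighted-average defuzzification
    adjust = num / den
    # Augmentation: push harder when the measured loss exceeds the requirement.
    if loss_rate > loss_target:
        adjust += min(0.5, loss_rate / loss_target * 0.1)
    return max(-1.0, min(1.0, adjust))
```

The augmentation term is what distinguishes the A-Fuzzy scheme from plain queue-length fuzzy control: the loss measurement directly corrects the allocation when the queue-based rules alone undershoot.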

  12. Robust observer-based adaptive fuzzy sliding mode controller

    NASA Astrophysics Data System (ADS)

    Oveisi, Atta; Nestorović, Tamara

    2016-08-01

    In this paper, a new observer-based adaptive fuzzy integral sliding mode controller is proposed based on the Lyapunov stability theorem. The plant is subjected to a square-integrable disturbance and is assumed to have mismatch uncertainties in both the state and input matrices. Based on the classical sliding mode controller, the equivalent control effort is obtained to satisfy the sufficient condition for sliding mode control, and the control law is then modified to guarantee the reachability of the system trajectory to the sliding manifold. In order to relax the norm-bounded constraints on the control law and solve the chattering problem of the sliding mode controller, a fuzzy logic inference mechanism is combined with the controller. An adaptive law is then introduced to tune the parameters of the fuzzy system on-line. Finally, to evaluate the controller and the robust performance of the closed-loop system, the proposed regulator is implemented on a real-time mechanical vibrating system.

  13. Data-adapted moving least squares method for 3-D image interpolation

    NASA Astrophysics Data System (ADS)

    Jang, Sumi; Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Lee, Rena; Yoon, Jungho

    2013-12-01

    In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons.
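The method itself is three-dimensional with data-adapted weights; as a hedged one-dimensional illustration of the underlying moving least squares machinery, the following fits a Gaussian-weighted degree-1 polynomial around each evaluation point (the weight function and bandwidth are assumptions of the sketch, not the paper's data-adapted choice).

```python
import math

def mls_eval(xs, ys, x, h=1.0):
    """Evaluate a 1-D moving least squares fit (local linear basis) at x."""
    w = [math.exp(-((xi - x) / h) ** 2) for xi in xs]
    # Weighted normal equations for y ~ a + b*(xi - x); the value at x is a.
    s0 = sum(w)
    s1 = sum(wi * (xi - x) for wi, xi in zip(w, xs))
    s2 = sum(wi * (xi - x) ** 2 for wi, xi in zip(w, xs))
    t0 = sum(wi * yi for wi, yi in zip(w, ys))
    t1 = sum(wi * (xi - x) * yi for wi, xi, yi in zip(w, xs, ys))
    det = s0 * s2 - s1 * s1
    if abs(det) < 1e-12:              # degenerate: fall back to weighted mean
        return t0 / s0
    return (t0 * s2 - t1 * s1) / det  # Cramer's rule, constant coefficient
```

Because the local basis contains all degree-1 polynomials, the scheme reproduces linear data exactly; the data-adapted variant in the paper additionally tunes the weights to the local image structure.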

  14. Adapting School-Based Substance Use Prevention Curriculum Through Cultural Grounding: A Review and Exemplar of Adaptation Processes for Rural Schools

    PubMed Central

    Colby, Margaret; Hecht, Michael L.; Miller-Day, Michelle; Krieger, Janice L.; Syvertsen, Amy K.; Graham, John W.; Pettigrew, Jonathan

    2014-01-01

    A central challenge facing twenty-first century community-based researchers and prevention scientists is the curriculum adaptation process. While early prevention efforts sought to develop effective programs, taking programs to scale implies that they will be adapted, especially as programs are implemented with populations other than those with whom they were developed or tested. The principle of cultural grounding, which argues that health message adaptation should be informed by knowledge of the target population and by cultural insiders, provides a theoretical rationale for cultural regrounding. We present an illustrative case of the methods used to reground the keepin’ it REAL substance use prevention curriculum for a rural adolescent population. We argue that adaptation processes like those presented should be incorporated into the design and dissemination of prevention interventions. PMID:22961604

  15. Adaptive fiber optics collimator based on flexible hinges.

    PubMed

    Zhi, Dong; Ma, Yanxing; Ma, Pengfei; Si, Lei; Wang, Xiaolin; Zhou, Pu

    2014-08-20

    In this manuscript, we present a new design for an adaptive fiber optics collimator (AFOC) based on flexible hinges, using piezoelectric stack actuators for X-Y displacement. Unlike the traditional AFOC, the new structure uses flexible hinges to drive the fiber end cap instead of a naked fiber. We fabricated an AFOC based on flexible hinges, and the end cap's deviation and the resonance frequency of the device were measured. Experimental results show that this new AFOC can provide fast control of the tip-tilt deviation of the laser beam emitted from the end cap. As a result, the fiber end cap can support much higher power than a naked fiber, which makes the new structure ideal for tip-tilt control in a high-power fiber laser system.

  16. Adaptive PCA based fault diagnosis scheme in imperial smelting process.

    PubMed

    Hu, Zhikun; Chen, Zhiwen; Gui, Weihua; Jiang, Bin

    2014-09-01

    In this paper, an adaptive fault detection scheme based on recursive principal component analysis (PCA) is proposed to deal with the problem of false alarms caused by normal process changes in real processes. A fault isolation approach is further developed based on the Generalized Likelihood Ratio (GLR) test and Singular Value Decomposition (SVD), with which offset and scaling faults can be easily isolated, with explicit offset fault directions and scaling fault classification. The identification of offset and scaling faults is also addressed. The complete scheme of the PCA-based fault diagnosis procedure is proposed. The scheme is applied to the Imperial Smelting Process, and the results show that the proposed strategies are able to mitigate false alarms and isolate faults efficiently.
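The recursive model update and the GLR/SVD isolation machinery are beyond a short sketch, but the monitoring core that such schemes adapt, a PCA model with Hotelling T² and squared prediction error (SPE) statistics, can be illustrated on two correlated variables. The data, the single-component choice, and the closed-form 2x2 eigendecomposition are all assumptions of this example.

```python
import math

def pca_model(data):
    """data: list of (x, y) pairs. Returns mean, principal axis, eigenvalues."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    sxx = sum((p[0] - mx) ** 2 for p in data) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in data) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)
    # Closed-form eigenvalues of the 2x2 covariance matrix.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    l1 = tr / 2 + math.sqrt(tr * tr / 4 - det)   # major eigenvalue
    l2 = tr / 2 - math.sqrt(tr * tr / 4 - det)   # minor eigenvalue
    # Eigenvector for l1 (valid when the variables are actually correlated).
    vx, vy = (sxy, l1 - sxx) if abs(sxy) > 1e-12 else (1.0, 0.0)
    norm = math.hypot(vx, vy)
    return (mx, my), (vx / norm, vy / norm), (l1, l2)

def t2_spe(model, p):
    """Hotelling T^2 on the retained PC and SPE in the residual subspace."""
    (mx, my), (vx, vy), (l1, _) = model
    dx, dy = p[0] - mx, p[1] - my
    score = dx * vx + dy * vy                   # projection on principal axis
    rx, ry = dx - score * vx, dy - score * vy   # residual off the model plane
    return score * score / l1, rx * rx + ry * ry
```

A fault along the model subspace inflates T², while a fault that breaks the variable correlation inflates the SPE; the adaptive (recursive) scheme's contribution is updating the model so that slow normal changes do not trigger either statistic.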

  17. General adaptive guidance using nonlinear programming constraint solving methods (FAST)

    NASA Astrophysics Data System (ADS)

    Skalecki, Lisa; Martin, Marc

    An adaptive, general purpose, constraint solving guidance algorithm called FAST (Flight Algorithm to Solve Trajectories) has been developed by the authors in response to the requirements for the Advanced Launch System (ALS). The FAST algorithm can be used for all mission phases and a wide range of space transportation vehicles without code modification, because of the general formulation of the nonlinear programming (NLP) problem and the general trajectory simulation used to predict constraint values. The approach allows onboard re-targeting for severe weather and changes in payload or mission parameters, increasing flight reliability and dependability while reducing the amount of pre-flight analysis that must be performed. The algorithm is described in general terms in this paper. Three-degree-of-freedom simulation results are presented for application of the algorithm to the ascent and reentry phases of an ALS mission, and to Mars aerobraking. Flight processor CPU requirement data are also shown.

  18. Shape-model-based adaptation of 3D deformable meshes for segmentation of medical images

    NASA Astrophysics Data System (ADS)

    Pekar, Vladimir; Kaus, Michael R.; Lorenz, Cristian; Lobregt, Steven; Truyen, Roel; Weese, Juergen

    2001-07-01

    Segmentation methods based on adaptation of deformable models have found numerous applications in medical image analysis. Many efforts have been made in recent years to improve their robustness and reliability. In particular, increasingly more methods use a priori information about the shape of the anatomical structure to be segmented. This reduces the risk of the model being attracted to false features in the image and, as a consequence, makes the need for close initialization, which remains the principal limitation of elastically deformable models, less crucial for the segmentation quality. In this paper, we present a novel segmentation approach which uses a 3D anatomical statistical shape model to initialize the adaptation process of a deformable model represented by a triangular mesh. As the first step, the anatomical shape model is parametrically fitted to the structure of interest in the image. The result of this global adaptation is used to initialize the local mesh refinement based on an energy minimization. We applied our approach to segment spine vertebrae in CT datasets. The segmentation quality was quantitatively assessed for 6 vertebrae from 2 datasets by computing the mean and maximum distance between the adapted mesh and a manually segmented reference shape. The results of the study show that the presented method is a promising approach for segmentation of complex anatomical structures in medical images.

  19. A three-dimensional adaptive grid method. [for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Nakahashi, K.; Deiwert, G. S.

    1985-01-01

    A three-dimensional solution-adaptive-grid scheme is described which is suitable for complex fluid flows. This method, using tension and torsion spring analogies, was previously developed and successfully applied for two-dimensional flows. In the present work, a collection of three-dimensional flow fields are used to demonstrate the feasibility and versatility of this concept to include an added dimension. Flow fields considered include: (1) supersonic flow past an aerodynamic afterbody with a propulsive jet at incidence to the free stream, (2) supersonic flow past a blunt fin mounted on a solid wall, and (3) supersonic flow over a bump. In addition to generating three-dimensional solution-adapted grids, the method can also be used effectively as an initial grid generator. The utility of the method lies in: (1) optimum distribution of discrete grid points, (2) improvement of accuracy, (3) improved computational efficiency, (4) minimization of data base sizes, and (5) simplified three-dimensional grid generation.

  20. A Newton method with adaptive finite elements for solving phase-change problems with natural convection

    NASA Astrophysics Data System (ADS)

    Danaila, Ionut; Moglan, Raluca; Hecht, Frédéric; Le Masson, Stéphane

    2014-10-01

    We present a new numerical system using finite elements with mesh adaptivity for the simulation of solid-liquid phase change systems. In the liquid phase, the natural convection flow is simulated by solving the incompressible Navier-Stokes equations with the Boussinesq approximation. A variable viscosity model allows the velocity to progressively vanish in the solid phase, through an intermediate mushy region. The phase change is modeled by introducing an implicit enthalpy source term in the heat equation. The final system of equations describing the liquid-solid system by a single domain approach is solved using a Newton iterative algorithm. The space discretization is based on P2-P1 Taylor-Hood finite elements, and mesh adaptivity by metric control is used to accurately track the solid-liquid interface or, for water flows, the density inversion interface. The numerical method is validated against classical benchmarks that progressively add strong non-linearities to the system of equations: natural convection of air, natural convection of water, melting of a phase-change material, and water freezing. Very good agreement with experimental data is obtained for each test case, proving the capability of the method to deal with both melting and solidification problems with convection. The presented numerical method is easy to implement in the FreeFem++ software, using a syntax close to the mathematical formulation.

  1. The older person has a stroke: Learning to adapt using the Feldenkrais® Method.

    PubMed

    Jackson-Wyatt, O

    1995-01-01

    The older person with a stroke requires adapted therapeutic interventions to take into account normal age-related changes. The Feldenkrais® Method presents a model for learning to promote adaptability that addresses key functional changes seen with normal aging. Clinical examples related to specific functional tasks are discussed to highlight major treatment modifications and neuromuscular, psychological, emotional, and sensory considerations. PMID:27619899

  2. Simple method for adaptive filtering of motion artifacts in E-textile wearable ECG sensors.

    PubMed

    Alkhidir, Tamador; Sluzek, Andrzej; Yapici, Murat Kaya

    2015-08-01

    In this paper, we have developed a simple method for adaptively filtering out the motion artifact from the electrocardiogram (ECG) obtained using conductive textile electrodes. The textile electrodes were placed on the left and right wrists to measure the ECG in a lead-1 configuration. The motion artifact was induced by simple hand movements. The reference signal for adaptive filtering was obtained by placing additional electrodes on one hand to capture its motion. The adaptive filtering was compared to the independent component analysis (ICA) algorithm. The signal-to-noise ratio (SNR) of the adaptive filtering approach was higher than that of independent component analysis in most cases.

  3. Co-production of knowledge: recipe for success in land-based climate change adaptation?

    NASA Astrophysics Data System (ADS)

    Coninx, Ingrid; Swart, Rob

    2015-04-01

    After multiple failures of scientists to trigger policymakers and other relevant actors to take action when communicating research findings, the call for co-production (or co-creation) of knowledge and stakeholder involvement in climate change adaptation efforts has rapidly increased over the past few years. In particular for land-based adaptation, on-the-ground action is often met by societal resistance towards solutions proposed by scientists and by a misfit of potential solutions with the local context, leading to misunderstanding and even rejection of scientific recommendations. A fully integrative co-creation process, in which both scientists and practitioners discuss climate vulnerability and possible responses, exploring perspectives and designing adaptation measures based on their own knowledge, is expected to prevent this adaptation deadlock. The apparent conviction that co-creation processes result in successful adaptation has not yet been unambiguously demonstrated empirically, but it has made co-creation one of the basic principles of many new research and policy programmes. But is co-creation that brings the knowledge of scientists and practitioners together always the best recipe for success in climate change adaptation? Assessing a number of actual cases, the authors have serious doubts. The paper proposes additional considerations for adaptively managing the environment that should be taken into account in the design of participatory knowledge development in which climate scientists play a role. These include the nature of the problem at stake; the values, interests and perceptions of the actors involved; the methods used to build trust, strengthen alignment and develop reciprocal relationships among scientists and practitioners; and the concreteness of the co-creation output.

  4. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System.

    PubMed

    Liu, Chunmei; Wang, Yirui; Gao, Shangce

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for a robot vision system. The question we address is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data with the same view as the tracking video. The proposed kernel searches for the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track the object position and contour and to avoid background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking the human position and describing the human contour. PMID:27379165
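The shape-adaptive kernel requires the learned shape space, which a sketch cannot reproduce; as a hedged baseline, here is the plain mean shift iteration the tracker builds on, with an isotropic Gaussian kernel on 2-D samples (the bandwidth and data are illustrative).

```python
import math

def mean_shift(points, start, bandwidth=1.0, iters=100, eps=1e-6):
    """Iterate from `start` toward the nearest kernel density mode."""
    x, y = start
    for _ in range(iters):
        wsum = wx = wy = 0.0
        for px, py in points:
            d2 = (px - x) ** 2 + (py - y) ** 2
            w = math.exp(-d2 / (2 * bandwidth ** 2))   # Gaussian kernel weight
            wsum += w
            wx += w * px
            wy += w * py
        nx, ny = wx / wsum, wy / wsum        # kernel-weighted mean of samples
        if math.hypot(nx - x, ny - y) < eps: # shift vanished: at a mode
            break
        x, y = nx, ny
    return x, y
```

The paper's contribution replaces the isotropic kernel with a shape learned in the low-dimensional manifold, so the same iteration tracks the contour as well as the position.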

  5. Multichannel Speech Enhancement Based on Generalized Gamma Prior Distribution with Its Online Adaptive Estimation

    NASA Astrophysics Data System (ADS)

    Dat, Tran Huy; Takeda, Kazuya; Itakura, Fumitada

    We present a multichannel speech enhancement method based on MAP speech spectral magnitude estimation using a generalized gamma model of the speech prior distribution, where the model parameters are adapted from the actual noisy speech in a frame-by-frame manner. The utilization of a more general prior distribution with its online adaptive estimation is shown to be effective for speech spectral estimation in noisy environments. Furthermore, the multichannel information, in terms of cross-channel statistics, is shown to be useful for better adapting the prior distribution parameters to the actual observation, resulting in better performance of the speech enhancement algorithm. We tested the proposed algorithm on an in-car speech database and obtained significant improvements in speech recognition performance, particularly under non-stationary noise conditions such as music, air-conditioner noise, and open windows.

  6. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    PubMed Central

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for a robot vision system. The question we address is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data with the same view as the tracking video. The proposed kernel searches for the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track the object position and contour and to avoid background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking the human position and describing the human contour. PMID:27379165

  7. A study of interceptor attitude control based on adaptive wavelet neural networks

    NASA Astrophysics Data System (ADS)

    Li, Da; Wang, Qing-chao

    2005-12-01

    This paper studies the 3-DOF attitude control problem of a kinetic interceptor. When the kinetic interceptor enters terminal guidance, it must maneuver at large angles. The interceptor attitude system is nonlinear, strongly coupled, and MIMO. An inverse control approach based on adaptive wavelet neural networks is proposed in this paper. Instead of using one complex neural network as the controller, the nonlinear dynamics of the interceptor are first approximated by three independent subsystems through exact feedback linearization, and controllers for each subsystem are then designed using adaptive wavelet neural networks. This method avoids computing a large number of weights and biases in one massive neural network, and the control parameters can be adaptively tuned online. Simulation results show that the proposed controller performs remarkably well.

  8. An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2016-01-01

    A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian (AG) fuzzy membership method, a compressed sensing (CS) technique, and a total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion of infrared and visible images. Compared with wavelets, contourlets, or any other multi-resolution analysis method, NSCT has many evident advantages, such as multi-scale and multi-direction representation and translation invariance. A fuzzy set is characterized by its membership function (MF), and the well-known Gaussian fuzzy membership degree can be introduced to establish adaptive control of the fusion processing. The compressed sensing technique can sparsely sample the image information at a certain sampling rate, and the sparse signal can be recovered by solving a convex problem with a gradient-descent-based iterative algorithm. In the proposed fusion process, the pre-enhanced infrared image and the visible image are first decomposed into low-frequency and high-frequency subbands via the NSCT. The low-frequency coefficients are fused using the adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the other high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard deviation rule, and then recovered by the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. The efficiency and robustness are also analyzed and discussed through different evaluation measures, such as the standard deviation, Shannon entropy, root-mean-square error, mutual information, and edge-based similarity index.
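A toy single-level version of the two fusion rules named above, with a 3x3 mean standing in for the NSCT low-pass decomposition; the NSCT itself, the CS sampling, and the TV recovery are omitted, and the energy rule here uses a single pixel's low-band energy rather than a true regional window.

```python
def mean3(img, i, j):
    """3x3 local mean with boundary clipping: a stand-in low-pass filter."""
    rows, cols = len(img), len(img[0])
    vals = [img[a][b] for a in range(i - 1, i + 2) for b in range(j - 1, j + 2)
            if 0 <= a < rows and 0 <= b < cols]
    return sum(vals) / len(vals)

def fuse(img_a, img_b):
    """Fuse two equally sized gray images: energy-weighted low band,
    maximum-absolute high band."""
    rows, cols = len(img_a), len(img_a[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            la, lb = mean3(img_a, i, j), mean3(img_b, i, j)
            ha, hb = img_a[i][j] - la, img_b[i][j] - lb
            # Energy rule on the low band...
            ea, eb = la * la, lb * lb
            low = (ea * la + eb * lb) / (ea + eb) if ea + eb else 0.0
            # ...maximum-absolute selection on the high band.
            high = ha if abs(ha) >= abs(hb) else hb
            out[i][j] = low + high
    return out
```

Even in this crude form, the max-absolute rule lets a bright infrared detail survive fusion with a flat visible background, which is the behaviour the subband rules are designed to give.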

  9. An adaptive MR-CT registration method for MRI-guided prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Wen, Ning; Gordon, James J.; Elshaikh, Mohamed A.; Movsas, Benjamin; Chetty, Indrin J.

    2015-04-01

    Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ cm-3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. In summary, the developed adaptive FEM method preserves the prostate volume

  10. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selection to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators, while accurate solutions require high-order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the method of integration operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
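A hedged scalar sketch of the h-adaptive part only: variable-step BDF2 with a backward Euler companion solution, whose difference serves as the local error estimate driving a standard step controller. The order selection up to BDF5, the quarantine mechanism, and dense output are omitted, and the controller constants are illustrative.

```python
import math

def _newton(res, dres, y):
    """Scalar Newton iteration for the implicit BDF equations."""
    for _ in range(50):
        r = res(y)
        y -= r / dres(y)
        if abs(r) < 1e-13:
            break
    return y

def bdf2_step(f, dfdy, t, y_n, y_nm1, h, h_prev):
    """One variable-step BDF2 step plus a BDF1-based local error estimate."""
    # BDF1 (backward Euler) solution, also used as the Newton predictor.
    y1 = _newton(lambda y: y - y_n - h * f(t + h, y),
                 lambda y: 1.0 - h * dfdy(t + h, y), y_n)
    r = h / h_prev                       # step ratio for variable-step BDF2
    a, b, c = (1 + 2 * r) / (1 + r), -(1 + r), r * r / (1 + r)
    y2 = _newton(lambda y: a * y + b * y_n + c * y_nm1 - h * f(t + h, y),
                 lambda y: a - h * dfdy(t + h, y), y1)
    return y2, abs(y2 - y1)

def integrate(f, dfdy, y0, t_end, tol=1e-6, h0=1e-4):
    """Adaptive-step BDF2 integration of y' = f(t, y), y(0) = y0."""
    h = h_prev = h0
    y_prev = y0
    y = _newton(lambda z: z - y0 - h * f(h, z),      # bootstrap: one BE step
                lambda z: 1.0 - h * dfdy(h, z), y0)
    t = h
    while t < t_end - 1e-14:
        h = min(h, t_end - t)
        y_new, est = bdf2_step(f, dfdy, t, y, y_prev, h, h_prev)
        if est <= tol:                               # accept the step
            y_prev, y, t, h_prev = y, y_new, t + h, h
        # Deadbeat controller sized for a 2nd-order error estimate.
        h *= min(2.0, max(0.2, 0.9 * math.sqrt(tol / max(est, 1e-16))))
    return y
```

On a linear test problem such as y' = -y the Newton solves converge in one iteration; in the Navier-Stokes setting the same step/estimate/control structure wraps a much larger nonlinear solve.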

  11. Minimising the error in eigenvalue calculations involving the Boltzmann transport equation using goal-based adaptivity on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Goffin, Mark A.; Baker, Christopher M. J.; Buchan, Andrew G.; Pain, Christopher C.; Eaton, Matthew D.; Smith, Paul N.

    2013-06-01

    This article presents a method for goal-based anisotropic adaptive methods for the finite element method applied to the Boltzmann transport equation. The neutron multiplication factor, k, is used as the goal of the adaptive procedure. The anisotropic adaptive algorithm requires error measures for k with directional dependence. General error estimators are derived for any given functional of the flux and applied to k to acquire the driving force for the adaptive procedure. The error estimators require the solution of an appropriately formed dual equation. Forward and dual error indicators are calculated by weighting the Hessian of each solution with the dual and forward residual respectively. The Hessian is used as an approximation of the interpolation error in the solution which gives rise to the directional dependence. The two indicators are combined to form a single error metric that is used to adapt the finite element mesh. The residual is approximated using a novel technique arising from the sub-grid scale finite element discretisation. Two adaptive routes are demonstrated: (i) a single mesh is used to solve all energy groups, and (ii) a different mesh is used to solve each energy group. The second method aims to capture the benefit from representing the flux from each energy group on a specifically optimised mesh. The k goal-based adaptive method was applied to three examples which illustrate the superior accuracy in criticality problems that can be obtained.
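
    The forward/dual weighting that drives the adaptation can be illustrated with a minimal sketch, assuming per-node arrays of Hessian magnitudes and residuals; combining the two indicators by a pointwise maximum is one common choice, not necessarily the authors'.

```python
import numpy as np

def goal_error_indicator(hessian_fwd, residual_dual, hessian_dual, residual_fwd):
    """Goal-based indicators: weight the Hessian of each solution with the
    residual of the other problem, then merge into one metric per node."""
    eta_fwd = np.abs(hessian_fwd) * np.abs(residual_dual)   # forward indicator
    eta_dual = np.abs(hessian_dual) * np.abs(residual_fwd)  # dual indicator
    return np.maximum(eta_fwd, eta_dual)                    # single error metric
```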

  12. Minimising the error in eigenvalue calculations involving the Boltzmann transport equation using goal-based adaptivity on unstructured meshes

    SciTech Connect

    Goffin, Mark A.; Baker, Christopher M.J.; Buchan, Andrew G.; Pain, Christopher C.; Eaton, Matthew D.; Smith, Paul N.

    2013-06-01

    This article presents a method for goal-based anisotropic adaptive methods for the finite element method applied to the Boltzmann transport equation. The neutron multiplication factor, k_eff, is used as the goal of the adaptive procedure. The anisotropic adaptive algorithm requires error measures for k_eff with directional dependence. General error estimators are derived for any given functional of the flux and applied to k_eff to acquire the driving force for the adaptive procedure. The error estimators require the solution of an appropriately formed dual equation. Forward and dual error indicators are calculated by weighting the Hessian of each solution with the dual and forward residual respectively. The Hessian is used as an approximation of the interpolation error in the solution which gives rise to the directional dependence. The two indicators are combined to form a single error metric that is used to adapt the finite element mesh. The residual is approximated using a novel technique arising from the sub-grid scale finite element discretisation. Two adaptive routes are demonstrated: (i) a single mesh is used to solve all energy groups, and (ii) a different mesh is used to solve each energy group. The second method aims to capture the benefit from representing the flux from each energy group on a specifically optimised mesh. The k_eff goal-based adaptive method was applied to three examples which illustrate the superior accuracy in criticality problems that can be obtained.

  13. Widefield multiphoton microscopy with image-based adaptive optics

    NASA Astrophysics Data System (ADS)

    Chang, C.-Y.; Cheng, L.-C.; Su, H.-W.; Yen, W.-C.; Chen, S.-J.

    2012-10-01

    Unlike conventional multiphoton microscopy, which relies on pixel-by-pixel point scanning, a widefield multiphoton microscope based on spatiotemporal focusing has been developed to provide fast optical sectioning images at a frame rate over 100 Hz. In order to overcome the aberrations of the widefield multiphoton microscope and the wavefront distortion from turbid biospecimens, an image-based adaptive optics system (AOS) was integrated. The feedback control signal of the AOS was acquired by locally maximizing the intensity of the images provided by the widefield multiphoton excitation microscope, using a hill-climbing algorithm. The control signal was then utilized to drive a deformable mirror in such a way as to eliminate the aberration and distortion. The two-photon excited fluorescence (TPEF) intensity of an R6G-doped PMMA thin film is also increased by 3.7-fold. Furthermore, the TPEF image quality of 1 μm fluorescent beads sealed in agarose gel at different depths is improved.

  14. Optimization-based wavefront sensorless adaptive optics for multiphoton microscopy.

    PubMed

    Antonello, Jacopo; van Werkhoven, Tim; Verhaegen, Michel; Truong, Hoa H; Keller, Christoph U; Gerritsen, Hans C

    2014-06-01

    Optical aberrations have detrimental effects in multiphoton microscopy. These effects can be curtailed by implementing model-based wavefront sensorless adaptive optics, which only requires the addition of a wavefront shaping device, such as a deformable mirror (DM) to an existing microscope. The aberration correction is achieved by maximizing a suitable image quality metric. We implement a model-based aberration correction algorithm in a second-harmonic microscope. The tip, tilt, and defocus aberrations are removed from the basis functions used for the control of the DM, as these aberrations induce distortions in the acquired images. We compute the parameters of a quadratic polynomial that is used to model the image quality metric directly from experimental input-output measurements. Finally, we apply the aberration correction by maximizing the image quality metric using the least-squares estimate of the unknown aberration.
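
    The least-squares step at the heart of such model-based correction can be illustrated for a single DM mode: measure the metric at a few bias values, fit the assumed quadratic model, and move to its maximizer. This is a simplified one-mode sketch, not the paper's full multi-mode estimator.

```python
import numpy as np

def quadratic_peak(biases, metrics):
    """Fit metric(b) ~ a*b**2 + c*b + d by least squares and return the
    maximizing bias -c/(2a). Assumes the metric is locally concave (a < 0)
    around the optimum, as the quadratic model requires."""
    a, c, d = np.polyfit(biases, metrics, 2)  # coefficients, highest power first
    return -c / (2.0 * a)
```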

  15. An adaptive mesh refinement algorithm for the discrete ordinates method

    SciTech Connect

    Jessee, J.P.; Fiveland, W.A.; Howell, L.H.; Colella, P.; Pember, R.B.

    1996-03-01

    The discrete ordinates form of the radiative transport equation (RTE) is spatially discretized and solved using an adaptive mesh refinement (AMR) algorithm. This technique permits the local grid refinement to minimize spatial discretization error of the RTE. An error estimator is applied to define regions for local grid refinement; overlapping refined grids are recursively placed in these regions; and the RTE is then solved over the entire domain. The procedure continues until the spatial discretization error has been reduced to a sufficient level. The following aspects of the algorithm are discussed: error estimation, grid generation, communication between refined levels, and solution sequencing. This initial formulation employs the step scheme, and is valid for absorbing and isotropically scattering media in two-dimensional enclosures. The utility of the algorithm is tested by comparing the convergence characteristics and accuracy to those of the standard single-grid algorithm for several benchmark cases. The AMR algorithm provides a reduction in memory requirements and maintains the convergence characteristics of the standard single-grid algorithm; however, the cases illustrate that efficiency gains of the AMR algorithm will not be fully realized until three-dimensional geometries are considered.
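
    The flag-and-refine core of such an AMR pass can be sketched in one dimension (level bookkeeping, overlapping grids, and inter-level communication omitted; the interval representation is illustrative):

```python
def refine_flagged(cells, err_est, tol):
    """One AMR pass: split every cell whose error estimate exceeds tol.
    Cells are (left, right) intervals; flagged cells get two children."""
    out = []
    for (a, b), e in zip(cells, err_est):
        if e > tol:
            mid = 0.5 * (a + b)
            out.extend([(a, mid), (mid, b)])  # refine flagged cell
        else:
            out.append((a, b))                # keep coarse cell
    return out
```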

  16. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  17. Analysis of modified SMI method for adaptive array weight control

    NASA Technical Reports Server (NTRS)

    Dilsavor, R. L.; Moses, R. L.

    1989-01-01

    An adaptive array is applied to the problem of receiving a desired signal in the presence of weak interference signals which need to be suppressed. A modification, suggested by Gupta, of the sample matrix inversion (SMI) algorithm controls the array weights. In the modified SMI algorithm, interference suppression is increased by subtracting a fraction F of the noise power from the diagonal elements of the estimated covariance matrix. Given the true covariance matrix and the desired signal direction, the modified algorithm is shown to maximize a well-defined, intuitive output power ratio criterion. Expressions are derived for the expected value and variance of the array weights and output powers as a function of the fraction F and the number of snapshots used in the covariance matrix estimate. These expressions are compared with computer simulation and good agreement is found. A trade-off is found to exist between the desired level of interference suppression and the number of snapshots required in order to achieve that level with some certainty. The removal of noise eigenvectors from the covariance matrix inverse is also discussed with respect to this application. Finally, the type and severity of errors which occur in the covariance matrix estimate are characterized through simulation.
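
    A minimal sketch of the modified SMI weight computation described above (subtracting the fraction F of the noise power from the diagonal of the sample covariance); normalizing the weights to unit response in the desired-signal direction is an assumed convention.

```python
import numpy as np

def modified_smi_weights(snapshots, steering, noise_power, F):
    """Modified SMI: estimate the covariance from snapshots (elements x K),
    subtract F * noise_power from its diagonal, then form w ~ R^{-1} s."""
    K = snapshots.shape[1]                       # number of snapshots
    R_hat = snapshots @ snapshots.conj().T / K   # sample covariance estimate
    R_mod = R_hat - F * noise_power * np.eye(R_hat.shape[0])
    w = np.linalg.solve(R_mod, steering)         # proportional to R_mod^{-1} s
    return w / (steering.conj() @ w)             # unit gain toward desired signal
```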

  18. Grid coupling mechanism in the semi-implicit adaptive Multi-Level Multi-Domain method

    NASA Astrophysics Data System (ADS)

    Innocenti, M. E.; Tronci, C.; Markidis, S.; Lapenta, G.

    2016-05-01

    The Multi-Level Multi-Domain (MLMD) method is a semi-implicit adaptive method for Particle-In-Cell plasma simulations. It has been demonstrated in the past in simulations of Maxwellian plasmas, electrostatic and electromagnetic instabilities, plasma expansion in vacuum, and magnetic reconnection [1, 2, 3]. On multiple occasions, the coupling between the coarse and refined grid solutions has been remarked upon. The coupling mechanism itself, however, has never been explored in depth. Here, we investigate the theoretical bases of grid coupling in the MLMD system. We obtain an evolution law for the electric field solution in the overlap area of the MLMD system which highlights a dependence on the densities and currents from both the coarse and the refined grid, rather than from the coarse grid alone: grid coupling is obtained via densities and currents.

  19. Numerical Relativistic Magnetohydrodynamics with ADER Discontinuous Galerkin methods on adaptively refined meshes.

    NASA Astrophysics Data System (ADS)

    Zanotti, O.; Dumbser, M.; Fambri, F.

    2016-05-01

    We describe a new method for the solution of the ideal MHD equations in special relativity which adopts the following strategy: (i) the main scheme is based on Discontinuous Galerkin (DG) methods, allowing for an arbitrary accuracy of order N+1, where N is the degree of the basis polynomials; (ii) in order to cope with oscillations at discontinuities, an "a posteriori" sub-cell limiter is activated, which scatters the DG polynomials of the previous time-step onto a set of 2N+1 sub-cells, over which the solution is recomputed by means of a robust finite volume scheme; (iii) a local spacetime Discontinuous Galerkin predictor is applied both on the main grid of the DG scheme and on the sub-grid of the finite volume scheme; (iv) adaptive mesh refinement (AMR) with local time-stepping is used. We validate the new scheme and comment on its potential applications in high energy astrophysics.

  20. Adaptive ultrasonic imaging with the total focusing method for inspection of complex components immersed in water

    NASA Astrophysics Data System (ADS)

    Le Jeune, L.; Robert, S.; Dumas, P.; Membre, A.; Prada, C.

    2015-03-01

    In this paper, we propose an ultrasonic adaptive imaging method based on phased-array technology and the synthetic focusing algorithm known as the Total Focusing Method (TFM). The general principle is to image the surface by applying the TFM algorithm in a semi-infinite water medium. Then, the reconstructed surface is taken into account to make a second TFM image inside the component. In the surface reconstruction step, the TFM algorithm has been optimized to decrease computation time and to limit noise in water. In the second step, the ultrasonic paths through the reconstructed surface are calculated using Fermat's principle and an iterative algorithm, and the classical TFM is applied to obtain an image inside the component. This paper presents several results of TFM imaging in components of different geometries, and a result obtained with a new technology of probes equipped with a flexible wedge filled with water (manufactured by Imasonic).
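
    The synthetic focusing step of the TFM can be sketched for a single homogeneous medium, as used here for the water-path surface image: for each pixel, delay-and-sum the full-matrix-capture data over all transmit-receive pairs. This is a toy sketch with nearest-sample interpolation and a plain magnitude sum; envelope detection and the refracted second step through the surface are omitted.

```python
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """TFM on full-matrix-capture data fmc[tx, rx, t] in a homogeneous
    medium: I(x, z) = | sum over (tx, rx) of s_(tx,rx)(tof_tx + tof_rx) |."""
    n_el = len(elem_x)
    rx = np.arange(n_el)
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            tof = np.hypot(elem_x - x, z) / c          # element-to-pixel times
            acc = 0.0
            for tx in range(n_el):
                idx = np.round((tof[tx] + tof) * fs).astype(int)
                valid = idx < fmc.shape[2]             # drop out-of-record samples
                acc += fmc[tx, rx[valid], idx[valid]].sum()
            img[iz, ix] = abs(acc)
    return img
```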

  1. Automated Tract Extraction via Atlas Based Adaptive Clustering

    PubMed Central

    Tunç, Birkan; Parker, William A.; Ingalhalikar, Madhura; Verma, Ragini

    2014-01-01

    Advancements in imaging protocols such as the high angular resolution diffusion-weighted imaging (HARDI) and in tractography techniques are expected to cause an increase in the tract-based analyses. Statistical analyses over white matter tracts can contribute greatly towards understanding structural mechanisms of the brain since tracts are representative of the connectivity pathways. The main challenge with tract-based studies is the extraction of the tracts of interest in a consistent and comparable manner over a large group of individuals without drawing the inclusion and exclusion regions of interest. In this work, we design a framework for automated extraction of white matter tracts. The framework introduces three main components, namely a connectivity based fiber representation, a fiber clustering atlas, and a clustering approach called Adaptive Clustering. The fiber representation relies on the connectivity signatures of fibers to establish an easy correspondence between different subjects. A group-wise clustering of these fibers that are represented by the connectivity signatures is then used to generate a fiber bundle atlas. Finally, Adaptive Clustering incorporates the previously generated clustering atlas as a prior, to cluster the fibers of a new subject automatically. Experiments on the HARDI scans of healthy individuals acquired repeatedly, demonstrate the applicability, the reliability and the repeatability of our approach in extracting white matter tracts. By alleviating the seed region selection or the inclusion/exclusion ROI drawing requirements that are usually handled by trained radiologists, the proposed framework expands the range of possible clinical applications and establishes the ability to perform tract-based analyses with large samples. PMID:25134977

  2. A hybrid skull-stripping algorithm based on adaptive balloon snake models

    NASA Astrophysics Data System (ADS)

    Liu, Hung-Ting; Sheu, Tony W. H.; Chang, Herng-Hua

    2013-02-01

    Skull-stripping is one of the most important preprocessing steps in neuroimage analysis. We propose a hybrid algorithm based on an adaptive balloon snake model to handle this challenging task. The proposed framework consists of two stages: first, the fuzzy possibilistic c-means (FPCM) is used for voxel clustering, which provides a labeled image for the snake contour initialization. In the second stage, the contour is initialized outside the brain surface based on the FPCM result and evolves under the guidance of the balloon snake model, which drives the contour with an adaptive inward normal force to capture the boundary of the brain. The similarity indices indicate that our method outperformed the BSE and BET methods in skull-stripping the MR image volumes in the IBSR data set. Experimental results show the effectiveness of this new scheme and its potential in a wide variety of skull-stripping applications.

  3. An adaptation of Krylov subspace methods to path following

    SciTech Connect

    Walker, H.F.

    1996-12-31

    Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
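
    The straightforward augmentation can be sketched directly: the underdetermined Newton equation is stacked with the tangent-orthogonality condition and solved as a square system. This dense-linear-algebra sketch is exactly the naive approach the abstract says Krylov methods struggle with; the paper's contribution is imposing the constraint differently.

```python
import numpy as np

def corrector_step(F, J, z, tangent):
    """One Newton-like corrector step for path following: solve the
    augmented square system [J; tangent^T] dz = [-F(z); 0], so the step
    is orthogonal to the approximate tangent direction."""
    A = np.vstack([J(z), tangent])            # Jacobian row(s) plus tangent row
    rhs = np.concatenate([-F(z), [0.0]])      # Newton residual, orthogonality
    return z + np.linalg.solve(A, rhs)
```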

  4. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line and planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical.
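
    The leveling idea, a robust trend fit that automatically downweights features and outliers, can be sketched with an iteratively reweighted plane fit. This simplifies the local-regression trend described above to a single global plane, and the weight function is an assumed Tukey-like choice.

```python
import numpy as np

def robust_level(img, n_iter=5):
    """Remove a planar trend (sample tilt) via IRLS: fit z ~ a*x + b*y + c,
    downweighting pixels with large residuals so features and outliers do
    not bias the trend, then subtract the fitted plane."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    z = img.ravel()
    wts = np.ones_like(z)
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(A * wts[:, None], z * wts, rcond=None)
        r = z - A @ coef
        s = np.median(np.abs(r)) + 1e-12            # robust residual scale
        wts = 1.0 / (1.0 + (r / (3.0 * s)) ** 2)    # downweight outliers
    return (z - A @ coef).reshape(h, w)
```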

  5. Eulerian adaptive finite-difference method for high-velocity impact and penetration problems

    SciTech Connect

    Barton, Philip T.; Deiterding, Ralf; Meiron, Daniel I.; Pullin, Dale I

    2013-01-01

    Owing to the complex processes involved, faithful prediction of high-velocity impact events demands a simulation method delivering efficient calculations based on comprehensively formulated constitutive models. Such an approach is presented herein, employing a weighted essentially non-oscillatory (WENO) method within an adaptive mesh refinement (AMR) framework for the numerical solution of hyperbolic partial differential equations. Applied widely in computational fluid dynamics, these methods are well suited to the involved locally non-smooth finite deformations, circumventing any requirement for artificial viscosity functions for shock capturing. Application of the methods is facilitated through using a model of solid dynamics based upon hyper-elastic theory comprising kinematic evolution equations for the elastic distortion tensor. The model for finite inelastic deformations is phenomenologically equivalent to Maxwell's model of tangential stress relaxation. Closure relations tailored to the expected high-pressure states are proposed and calibrated for the materials of interest. Sharp interface resolution is achieved by employing level-set functions to track boundary motion, along with a ghost material method to capture the necessary internal boundary conditions for material interactions and stress-free surfaces. The approach is demonstrated for the simulation of high velocity impacts of steel projectiles on aluminium target plates in two and three dimensions.

  6. Adaptive nonlocal means filtering based on local noise level for CT denoising

    SciTech Connect

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-15

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to local noise level of CT images and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphical processing units (GPU) implementation of this noise map calculation and the adaptive NLM filtering were developed to meet demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the
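
    The adaptive element, denoising strength tied to a per-sample noise estimate, can be illustrated with a patchless 1-D toy version of NLM; the clinical implementation described above is 3-D, patch-based, and GPU-accelerated, so this is only a structural sketch.

```python
import numpy as np

def adaptive_nlm_1d(signal, sigma_map, half_search=5, h_scale=1.0):
    """Pointwise (patchless) NLM-style averaging on a 1-D signal whose
    smoothing bandwidth follows the local noise estimate sigma_map[i]."""
    n = len(signal)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_search), min(n, i + half_search + 1)
        d2 = (signal[lo:hi] - signal[i]) ** 2
        h2 = (h_scale * sigma_map[i]) ** 2 + 1e-12   # local filtering strength
        w = np.exp(-d2 / h2)
        out[i] = np.sum(w * signal[lo:hi]) / np.sum(w)
    return out
```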

  7. Automatic multirate methods for ordinary differential equations. [Adaptive time steps

    SciTech Connect

    Gear, C.W.

    1980-01-01

    A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if a system is sparse; however, the estimation and control of errors is very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising. 2 figures.
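
    The basic idea, different step sizes for different members of the system, can be sketched with a forward-Euler multirate step in which the expensive slow derivative is evaluated once per macro step while the fast component takes m substeps. This is an illustrative scheme only; the methods and error control studied in the report are more elaborate.

```python
def multirate_euler(f_slow, f_fast, y0, h, m, n_steps):
    """Forward-Euler multirate sketch for a partitioned system (ys, yf):
    the fast component takes m substeps of size h/m per macro step while
    the slow state (and its derivative) is held frozen."""
    ys, yf = y0
    for _ in range(n_steps):
        ds = f_slow(ys, yf)               # one slow evaluation per macro step
        for _ in range(m):
            yf += (h / m) * f_fast(ys, yf)
        ys += h * ds
    return ys, yf
```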

  8. An Adaptive Fast Multipole Boundary Element Method for Poisson-Boltzmann Electrostatics

    SciTech Connect

    Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, Jonathan

    2009-01-01

    The numerical solution of the Poisson-Boltzmann (PB) equation is a useful but computationally demanding tool for studying electrostatic solvation effects in chemical and biomolecular systems. Recently, we have described a boundary integral equation-based PB solver accelerated by a new version of the fast multipole method (FMM). The overall algorithm shows an order N complexity in both the computational cost and memory usage. Here, we present an updated version of the solver by using an adaptive FMM for accelerating the convolution type matrix-vector multiplications. The adaptive algorithm, when compared to our previous nonadaptive one, not only significantly improves the performance of the overall memory usage but also remarkably speeds the calculation because of an improved load balancing between the local- and far-field calculations. We have also implemented a node-patch discretization scheme that leads to a reduction of unknowns by a factor of 2 relative to the constant element method without sacrificing accuracy. As a result of these improvements, the new solver makes the PB calculation truly feasible for large-scale biomolecular systems such as a 30S ribosome molecule even on a typical 2008 desktop computer.

  9. Adaptive Controls Method Demonstrated for the Active Suppression of Instabilities in Engine Combustors

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2004-01-01

    An adaptive feedback control method was demonstrated that suppresses thermoacoustic instabilities in a liquid-fueled combustor of a type used in aircraft engines. Extensive research has been done to develop lean-burning (low fuel-to-air ratio) combustors that can reduce emissions throughout the mission cycle to reduce the environmental impact of aerospace propulsion systems. However, these lean-burning combustors are susceptible to thermoacoustic instabilities (high-frequency pressure waves), which can fatigue combustor components and even the downstream turbine blades. This can significantly decrease the safe operating lives of the combustor and turbine. Thus, suppressing the thermoacoustic combustor instabilities is an enabling technology for lean, low-emissions combustors under NASA's Propulsion and Power Program. This control methodology has been developed and tested in a partnership of the NASA Glenn Research Center, Pratt & Whitney, United Technologies Research Center, and the Georgia Institute of Technology. Initial combustor rig testing of the controls algorithm was completed during 2002. Subsequently, the test results were analyzed and improvements to the method were incorporated in 2003, which culminated in the final status of this controls algorithm. This control methodology is based on adaptive phase shifting. The combustor pressure oscillations are sensed and phase shifted, and a high-frequency fuel valve is actuated to put pressure oscillations into the combustor to cancel pressure oscillations produced by the instability.
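
    The phase-shifting core can be sketched as a pure delay applied to the sensed pressure signal, realizing the commanded phase at the instability frequency; the adaptation of the phase and gain, and the fuel-valve dynamics, are not modeled here, and all names are illustrative.

```python
import numpy as np

def phase_shift_command(pressure, fs, f_inst, phase_deg, gain=1.0):
    """Build a valve command by delaying the sensed pressure so that it is
    shifted by phase_deg at the instability frequency f_inst (sampled at fs).
    A delay of phase_deg/360 cycles corresponds to that phase lag."""
    delay = int(round((phase_deg / 360.0) / f_inst * fs))  # phase as samples
    return gain * np.roll(pressure, delay)
```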

  10. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids an expensive computational cost in inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to that of a recently proposed method by Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry–Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates compared to the Berry–Sauer method on the L-96 example.

  11. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes a given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
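
    The threshold-driven coder selection can be sketched per block: try the DCT coders from lowest to highest rate and keep the first whose reconstruction meets the distortion threshold. The retained-coefficient family of coders below is an assumption for illustration, not the paper's exact vector-quantized coders.

```python
import numpy as np

def dct_mat(n):
    """Orthonormal DCT-II matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def mbc_encode_block(block, coeff_budgets, max_mse):
    """Pick the cheapest coder (fewest retained DCT coefficients) whose
    reconstruction MSE is within max_mse; fall back to lossless."""
    n = block.shape[0]
    C = dct_mat(n)
    X = C @ block @ C.T                               # 2-D DCT of the block
    order = np.argsort(np.abs(X).ravel())[::-1]       # largest coefficients first
    for k in coeff_budgets:                           # low rate tried first
        Xk = np.zeros_like(X)
        Xk.ravel()[order[:k]] = X.ravel()[order[:k]]
        rec = C.T @ Xk @ C                            # inverse transform
        if np.mean((rec - block) ** 2) <= max_mse:
            return k, rec
    return n * n, block                               # lossless fallback
```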

  12. Reducing interferences in wireless communication systems by mobile agents with recurrent neural networks-based adaptive channel equalization

    NASA Astrophysics Data System (ADS)

    Beritelli, Francesco; Capizzi, Giacomo; Lo Sciuto, Grazia; Napoli, Christian; Tramontana, Emiliano; Woźniak, Marcin

    2015-09-01

    Solving the channel equalization problem in communication systems is based on adaptive filtering algorithms. Today, Mobile Agents (MAs) with Recurrent Neural Networks (RNNs) can also be adopted for effective interference reduction in modern wireless communication systems (WCSs). In this paper, MAs with RNNs are proposed as a novel computing approach to reducing interference in WCSs by performing adaptive channel equalization; the resulting method is called MAs-RNNs. We implement this new paradigm for interference reduction. Simulation results and evaluations demonstrate the effectiveness of this approach and show that better transmission performance in wireless communication networks can be achieved by using the MAs-RNNs-based adaptive filtering algorithm.

  13. Knowledge-based control of an adaptive interface

    NASA Technical Reports Server (NTRS)

    Lachman, Roy

    1989-01-01

    The analysis, development strategy, and preliminary design for an intelligent, adaptive interface is reported. The design philosophy couples knowledge-based system technology with standard human factors approaches to interface development for computer workstations. An expert system has been designed to drive the interface for application software. The intelligent interface will be linked to application packages, one at a time, that are planned for multiple-application workstations aboard Space Station Freedom. Current requirements call for most Space Station activities to be conducted at the workstation consoles. One set of activities will consist of standard data management services (DMS). DMS software includes text processing, spreadsheets, data base management, etc. Text processing was selected for the first intelligent interface prototype because text-processing software can be developed initially as fully functional but limited with a small set of commands. The program's complexity then can be increased incrementally. The intelligent interface includes the operator's behavior and three types of instructions to the underlying application software are included in the rule base. A conventional expert-system inference engine searches the data base for antecedents to rules and sends the consequents of fired rules as commands to the underlying software. Plans for putting the expert system on top of a second application, a database management system, will be carried out following behavioral research on the first application. The intelligent interface design is suitable for use with ground-based workstations now common in government, industrial, and educational organizations.

  14. An adaptive gyroscope-based algorithm for temporal gait analysis.

    PubMed

    Greene, Barry R; McGrath, Denise; O'Neill, Ross; O'Donovan, Karol J; Burns, Adrian; Caulfield, Brian

    2010-12-01

    Body-worn kinematic sensors have been widely proposed as the optimal solution for portable, low cost, ambulatory monitoring of gait. This study aims to evaluate an adaptive gyroscope-based algorithm for automated temporal gait analysis using body-worn wireless gyroscopes. Gyroscope data from nine healthy adult subjects performing four walks at four different speeds were compared against data acquired simultaneously using two force plates and an optical motion capture system. Data from a poliomyelitis patient, exhibiting pathological gait, walking with and without the aid of a crutch, were also compared to the force plate. Results show that the mean true error between the adaptive gyroscope algorithm and force plate was -4.5 ± 14.4 ms and 43.4 ± 6.0 ms for IC and TC points, respectively, in healthy subjects. Similarly, the mean true error when data from the polio patient were compared against the force plate was -75.61 ± 27.53 ms and 99.20 ± 46.00 ms for IC and TC points, respectively. A comparison of the present algorithm against temporal gait parameters derived from an optical motion analysis system showed good agreement for nine healthy subjects at four speeds. These results show that the algorithm reported here could constitute the basis of a robust, portable, low-cost system for ambulatory monitoring of gait.
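    A minimal sketch of adaptive-threshold event detection on an angular-velocity signal, in the spirit of the gyroscope algorithm described (the published method is more elaborate; the threshold rule and synthetic signal here are purely illustrative):

```python
# Detect mid-swing peaks with a threshold that adapts to signal amplitude.
import math

def detect_swing_peaks(gyro, frac=0.6):
    """Find local maxima above a threshold set as a fraction of the
    signal's maximum, so detection adapts to walking speed/amplitude."""
    thresh = frac * max(gyro)
    peaks = []
    for i in range(1, len(gyro) - 1):
        if gyro[i] > thresh and gyro[i] >= gyro[i - 1] and gyro[i] > gyro[i + 1]:
            peaks.append(i)
    return peaks

# Synthetic shank signal: three gait cycles, one sinusoid per 100 samples.
sig = [math.sin(2 * math.pi * t / 100) for t in range(300)]
peaks = detect_swing_peaks(sig)   # one mid-swing peak per cycle
```

    In the real algorithm, initial and terminal contact (IC/TC) are then located relative to these mid-swing peaks.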

  15. Adaptive multiwavelet-based watermarking through JPW masking.

    PubMed

    Cui, Lihong; Li, Wenguo

    2011-04-01

    In this paper, a multibit, multiplicative, spread-spectrum watermarking scheme using the discrete multiwavelet (including unbalanced and balanced multiwavelet) transform is presented. Performance improvement with respect to existing algorithms is obtained by means of a new just perceptual weighting (JPW) model. The new model incorporates various masking effects of human visual perception by taking into account the eye's sensitivity to noise changes depending on spatial frequency, luminance and texture of all the image subbands. In contrast to the conventional JND threshold model, JPW, which describes a minimum perceptual-sensitivity weighting to noise changes, is better suited to nonadditive watermarking. Specifically, the watermarking strength is adaptively adjusted to obtain minimum perceptual distortion by employing the JPW model. Correspondingly, an adaptive optimum decoding is derived using a statistical model based on the generalized-Gaussian distribution (GGD) for the multiwavelet coefficients of the cover-image. Furthermore, the impact of multiwavelet characteristics on the proposed watermarking scheme is also analyzed. Finally, the experimental results show that the proposed JPW model improves the quality of the watermarked image and the robustness of the watermark as compared with a variety of state-of-the-art algorithms.

  16. Lens-based wavefront sensorless adaptive optics swept source OCT

    PubMed Central

    Jian, Yifan; Lee, Sujin; Ju, Myeong Jin; Heisler, Morgan; Ding, Weiguang; Zawadzki, Robert J.; Bonora, Stefano; Sarunic, Marinko V.

    2016-01-01

    Optical coherence tomography (OCT) has revolutionized modern ophthalmology, providing depth resolved images of the retinal layers in a system that is suited to a clinical environment. Although the axial resolution of an OCT system, which is a function of the light source bandwidth, is sufficient to resolve retinal features at a micrometer scale, the lateral resolution is dependent on the delivery optics and is limited by ocular aberrations. Through the combination of wavefront sensorless adaptive optics and the use of dual deformable transmissive optical elements, we present a compact lens-based OCT system at an imaging wavelength of 1060 nm for high resolution retinal imaging. We utilized a commercially available variable focal length lens to correct for a wide range of defocus commonly found in patient’s eyes, and a novel multi-actuator adaptive lens for aberration correction to achieve near diffraction limited imaging performance at the retina. With a parallel processing computational platform, high resolution cross-sectional and en face retinal image acquisition and display was performed in real time. In order to demonstrate the system functionality and clinical utility, we present images of the photoreceptor cone mosaic and other retinal layers acquired in vivo from research subjects. PMID:27278853
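    Wavefront-sensorless adaptive optics replaces a wavefront sensor with an iterative search that drives the adaptive elements to maximize an image-quality metric. A minimal coordinate-search sketch of that idea; the merit function and actuator model are toy stand-ins, not the optical system described in the paper:

```python
# Coordinate search over actuator values to maximize a sharpness metric.
def sharpness(actuators, aberration):
    """Toy merit function: maximal when the actuators cancel the aberration."""
    return -sum((a + b) ** 2 for a, b in zip(actuators, aberration))

def sensorless_correct(aberration, step=0.1, sweeps=50):
    act = [0.0] * len(aberration)
    for _ in range(sweeps):
        for i in range(len(act)):          # probe each actuator in turn
            base = sharpness(act, aberration)
            for delta in (step, -step):
                trial = act[:]
                trial[i] += delta
                if sharpness(trial, aberration) > base:
                    act = trial            # keep the probe that improved the metric
                    break
    return act

aberr = [0.3, -0.5, 0.2]                   # hypothetical aberration coefficients
corr = sensorless_correct(aberr)           # converges toward [-0.3, 0.5, -0.2]
```

    Real systems use smarter search strategies (e.g. modal optimization) because each metric evaluation costs an image acquisition.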

  17. Lens-based wavefront sensorless adaptive optics swept source OCT.

    PubMed

    Jian, Yifan; Lee, Sujin; Ju, Myeong Jin; Heisler, Morgan; Ding, Weiguang; Zawadzki, Robert J; Bonora, Stefano; Sarunic, Marinko V

    2016-01-01

    Optical coherence tomography (OCT) has revolutionized modern ophthalmology, providing depth resolved images of the retinal layers in a system that is suited to a clinical environment. Although the axial resolution of an OCT system, which is a function of the light source bandwidth, is sufficient to resolve retinal features at a micrometer scale, the lateral resolution is dependent on the delivery optics and is limited by ocular aberrations. Through the combination of wavefront sensorless adaptive optics and the use of dual deformable transmissive optical elements, we present a compact lens-based OCT system at an imaging wavelength of 1060 nm for high resolution retinal imaging. We utilized a commercially available variable focal length lens to correct for a wide range of defocus commonly found in patient's eyes, and a novel multi-actuator adaptive lens for aberration correction to achieve near diffraction limited imaging performance at the retina. With a parallel processing computational platform, high resolution cross-sectional and en face retinal image acquisition and display was performed in real time. In order to demonstrate the system functionality and clinical utility, we present images of the photoreceptor cone mosaic and other retinal layers acquired in vivo from research subjects.

  18. Lens-based wavefront sensorless adaptive optics swept source OCT

    NASA Astrophysics Data System (ADS)

    Jian, Yifan; Lee, Sujin; Ju, Myeong Jin; Heisler, Morgan; Ding, Weiguang; Zawadzki, Robert J.; Bonora, Stefano; Sarunic, Marinko V.

    2016-06-01

    Optical coherence tomography (OCT) has revolutionized modern ophthalmology, providing depth resolved images of the retinal layers in a system that is suited to a clinical environment. Although the axial resolution of an OCT system, which is a function of the light source bandwidth, is sufficient to resolve retinal features at a micrometer scale, the lateral resolution is dependent on the delivery optics and is limited by ocular aberrations. Through the combination of wavefront sensorless adaptive optics and the use of dual deformable transmissive optical elements, we present a compact lens-based OCT system at an imaging wavelength of 1060 nm for high resolution retinal imaging. We utilized a commercially available variable focal length lens to correct for a wide range of defocus commonly found in patient’s eyes, and a novel multi-actuator adaptive lens for aberration correction to achieve near diffraction limited imaging performance at the retina. With a parallel processing computational platform, high resolution cross-sectional and en face retinal image acquisition and display was performed in real time. In order to demonstrate the system functionality and clinical utility, we present images of the photoreceptor cone mosaic and other retinal layers acquired in vivo from research subjects.

  19. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve☆

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725

  20. Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control

    NASA Technical Reports Server (NTRS)

    Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.

  1. Measuring Fidelity and Adaptation: Reliability of an Instrument for School-Based Prevention Programs.

    PubMed

    Bishop, Dana C; Pankratz, Melinda M; Hansen, William B; Albritton, Jordan; Albritton, Lauren; Strack, Joann

    2014-06-01

    There is a need to standardize methods for assessing fidelity and adaptation. Such standardization would allow program implementation to be examined in a manner that will be useful for understanding the moderating role of fidelity in dissemination research. This article describes a method for collecting data about fidelity of implementation for school-based prevention programs, including measures of adherence, quality of delivery, dosage, participant engagement, and adaptation. We report on the reliability of these methods when applied by four observers who coded video recordings of teachers delivering All Stars, a middle school drug prevention program. Interrater agreement for scaled items was assessed for an instrument designed to evaluate program fidelity. Results indicated sound interrater reliability for items assessing adherence, dosage, quality of teaching, teacher understanding of concepts, and program adaptations. The interrater reliability for items assessing potential program effectiveness, classroom management, achievement of activity objectives, and adaptation valences was improved by dichotomizing the response options for these items. The item that assessed student engagement demonstrated only modest interrater reliability and was not improved through dichotomization. Several coder pairs were discordant on items that overall demonstrated good interrater reliability. Proposed modifications to the coding manual and protocol are discussed.

  2. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.

  3. AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov-Poisson equation

    NASA Astrophysics Data System (ADS)

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin

    2016-07-01

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov-Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  4. AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation

    DOE PAGES

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin

    2016-04-19

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  5. Ecological Scarcity Method: Adaptation and Implementation for Different Countries

    NASA Astrophysics Data System (ADS)

    Grinberg, Marina; Ackermann, Robert; Finkbeiner, Matthias

    2012-12-01

    The Ecological Scarcity Method is one of the methods for impact assessment in LCA. It makes it possible to express different environmental impacts in a single score unit, eco-points. Such results are handy for decision-makers in policy or enterprises seeking to improve environmental management. So far, the method has mostly been used in the country of its origin, Switzerland. Eco-factors are derived from national conditions. For other countries, it is sometimes impossible to calculate all eco-factors. The solution to this problem is to create a set of transformation rules. The rules should take into account regional differences, the level of societal development, the grade of scarcity and other factors. The research is focused on the creation of transformation rules between Switzerland, Germany and the Russian Federation for the case of GHG emissions.
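    The single-score idea behind eco-points: each inventoried flow is multiplied by its eco-factor and the products are summed. The flows and factor values below are invented for illustration; real eco-factors are derived from national data on current versus critical flows:

```python
# Aggregate an environmental inventory into one eco-point score.
def eco_points(flows, eco_factors):
    """Weighted sum of flows by their eco-factors."""
    return sum(flows[k] * eco_factors[k] for k in flows)

flows = {"CO2_kg": 1200.0, "NOx_kg": 3.5}      # hypothetical inventory
factors = {"CO2_kg": 0.46, "NOx_kg": 42.0}     # hypothetical eco-factors
score = eco_points(flows, factors)             # 1200*0.46 + 3.5*42
```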

  6. Land-based approach to evaluate sustainable land management and adaptive capacity of ecosystems/lands

    NASA Astrophysics Data System (ADS)

    Kust, German; Andreeva, Olga

    2015-04-01

    A number of new concepts and paradigms have appeared during the last decades, such as sustainable land management (SLM), climate change (CC) adaptation, environmental services, ecosystem health, and others. These initiatives still lack a common scientific platform, although some agreement in terminology has been reached, schemes of links and feedback loops have been created, and some models developed. Nevertheless, in spite of all these scientific achievements, land-related issues are still not in the focus of CC adaptation and mitigation. The latter has not grown much beyond the "greenhouse gases" (GHG) concept, which makes land degradation the "forgotten side of climate change". A possible way to integrate the concepts of climate and desertification/land degradation is to treat the "GHG" approach as providing a global solution and the "land" approach as providing a local solution that covers other "locally manifesting" issues of global importance (biodiversity conservation, food security, disasters and risks, etc.) and serves as a central concept among those. The SLM concept is a land-based approach, which includes the concepts of both the ecosystem-based approach (EbA) and the community-based approach (CbA). SLM can serve as an integral CC adaptation strategy, being based on the statement "the more healthy and resilient the system is, the less vulnerable and more adaptive it will be to any external changes and forces, including climate". The biggest scientific issue is the methods to evaluate SLM and the results of SLM investments. We suggest using an approach based on an understanding of the balance or equilibrium of the land and nature components as the major sign of a sustainable system. From this point of view it is easier to understand the state of ecosystem stress, the size of the "health", the range of adaptive capacity, the drivers of degradation and the nature of SLM, as well as extended land use and the concept of environmental land management as the improved SLM approach.

  7. Adaptation of motor imagery EEG classification model based on tensor decomposition

    NASA Astrophysics Data System (ADS)

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Keng Ang, Kai; Ong, Sim Heng

    2014-10-01

    Objective. Session-to-session nonstationarity is inherent in brain-computer interfaces based on electroencephalography. The objective of this paper is to quantify the mismatch between the training model and test data caused by nonstationarity and to adapt the model towards minimizing the mismatch. Approach. We employ a tensor model to estimate the mismatch in a semi-supervised manner, and the estimate is regularized in the discriminative objective function. Main results. The performance of the proposed adaptation method was evaluated on a dataset recorded from 16 subjects performing motor imagery tasks on different days. The classification results validated the advantage of the proposed method in comparison with other regularization-based or spatial filter adaptation approaches. Experimental results also showed that there is a significant correlation between the quantified mismatch and the classification accuracy. Significance. The proposed method approached the nonstationarity issue from the perspective of data-model mismatch, which is more direct than data variation measurement. The results also demonstrated that the proposed method is effective in enhancing the performance of the feature extraction model.

  8. An adaptive filter bank for motor imagery based Brain Computer Interface.

    PubMed

    Thomas, Kavitha P; Guan, Cuntai; Tong, Lau Chiew; Prasad, Vinod A

    2008-01-01

    Brain Computer Interface (BCI) provides an alternative communication and control method for people with severe motor disabilities. Motor imagery patterns are widely used in Electroencephalogram (EEG) based BCIs. These motor imagery activities are associated with variations in the alpha and beta band power of EEG signals, called Event Related Desynchronization/Synchronization (ERD/ERS). The dominant frequency bands are subject-specific and therefore the performance of motor imagery based BCIs is sensitive to both temporal filtering and spatial filtering. As the optimum filter is strongly subject-dependent, we propose a method that selects the subject-specific discriminative frequency components using time-frequency plots of the Fisher ratio of two-class motor imagery patterns. We also propose a low complexity adaptive Finite Impulse Response (FIR) filter bank system based on a coefficient decimation technique which can realize the subject-specific bandpass filters adaptively depending on the information in the Fisher ratio map. Features are extracted only from the selected frequency components. The proposed adaptive filter bank based system offers an average classification accuracy of about 90%, which is slightly better than the existing fixed filter bank system. PMID:19162856
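    The subject-specific band selection can be sketched as picking the frequency band with the largest Fisher ratio between the two motor-imagery classes. The band powers below are synthetic; the paper derives them from EEG time-frequency maps before configuring the decimation-based filter bank:

```python
# Pick the most discriminative frequency band via the Fisher ratio.
def fisher_ratio(a, b):
    """(difference of class means)^2 / (sum of class variances)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / len(a)
    vb = sum((x - mb) ** 2 for x in b) / len(b)
    return (ma - mb) ** 2 / (va + vb)

# Band power per trial for two classes in three candidate bands (synthetic).
class1 = {"8-12Hz": [5.0, 5.2, 4.8], "12-16Hz": [2.0, 2.1, 1.9], "16-20Hz": [1.0, 1.1, 0.9]}
class2 = {"8-12Hz": [3.0, 3.1, 2.9], "12-16Hz": [2.0, 1.9, 2.1], "16-20Hz": [1.0, 0.9, 1.1]}
best = max(class1, key=lambda band: fisher_ratio(class1[band], class2[band]))
```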

  9. SuBSENSE: a universal change detection method with local adaptive sensitivity.

    PubMed

    St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert

    2015-01-01

    Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Besides, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instruction, reached real-time processing speed on a midlevel desktop CPU. A complete C++ implementation based on OpenCV is available online.
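    The pixel-level feedback idea can be sketched as each pixel keeping its own sensitivity threshold, raised where segmentation is noisy (dynamic background) and slowly relaxed where it is stable. The constants below are illustrative, not the parameter values actually used by SuBSENSE:

```python
# Per-pixel feedback adjustment of a segmentation sensitivity threshold R.
def update_threshold(R, segmentation_noise, R_min=1.0, R_max=5.0, step=0.05):
    if segmentation_noise > 0.1:   # blinking/noisy pixel: desensitize
        R += step
    else:                          # stable pixel: gradually regain sensitivity
        R -= step / 4
    return min(max(R, R_min), R_max)

# A pixel over persistently dynamic background drifts toward R_max:
R = 1.0
for _ in range(200):
    R = update_threshold(R, segmentation_noise=0.5)
```

    This removes the need for frame-wide, manually tuned constants: every pixel settles at the sensitivity its local scene dynamics warrant.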

  10. SuBSENSE: a universal change detection method with local adaptive sensitivity.

    PubMed

    St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert

    2015-01-01

    Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Besides, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instruction, reached real-time processing speed on a midlevel desktop CPU. A complete C++ implementation based on OpenCV is available online. PMID:25494507

  11. Adapting Cognitive Walkthrough to Support Game Based Learning Design

    ERIC Educational Resources Information Center

    Farrell, David; Moffat, David C.

    2014-01-01

    For any given Game Based Learning (GBL) project to be successful, the player must learn something. Designers may base their work on pedagogical research, but actual game design is still largely driven by intuition. People are famously poor at unsupported methodical thinking and relying so much on instinct is an obvious weak point in GBL design…

  12. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods

    PubMed Central

    Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
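    The reconstruction interleaves an artifact-removal (thresholding) step with a gradient step enforcing consistency with acquired data. A minimal 1-D sketch of that interleaving, with soft-thresholding standing in for the paper's kernel-PCA block transform and identity sampling standing in for the k-space operator:

```python
# Iterative thresholding interleaved with a data-consistency step.
def soft(v, t):
    """Elementwise soft-thresholding."""
    return [max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

def iterative_recon(measured, mask, lam=0.05, iters=100):
    x = [0.0] * len(measured)
    for _ in range(iters):
        # Data-consistency step: restore the acquired (sampled) entries.
        x = [mi if m else xi for xi, mi, m in zip(x, measured, mask)]
        # Artifact-removal step (stand-in for the kernel-PCA projection).
        x = soft(x, lam)
    return x

measured = [1.0, 0.0, 0.5, 0.0]
mask = [True, False, True, False]   # which entries were actually acquired
recon = iterative_recon(measured, mask)
```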

  13. An Adaptive Kalman Filter Using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
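    A scalar Kalman filter with a simple residual-based tuning step: the measurement-noise variance is re-estimated from a sliding window of innovations, in the spirit of the residual tuning technique described. (Jazwinski's specialized variant adapts process noise; this sketch adapts measurement noise only, and all constants are illustrative.)

```python
# Scalar Kalman filter estimating a constant, with residual-based R tuning.
import random

def adaptive_kalman(zs, q=1e-6, r0=1.0, window=50):
    x, p, r = 0.0, 1.0, r0
    innovations = []
    for z in zs:
        p += q                           # predict (state modeled as a constant)
        nu = z - x                       # innovation (measurement residual)
        innovations.append(nu)
        if len(innovations) >= window:
            recent = innovations[-window:]
            c = sum(v * v for v in recent) / window
            r = max(c - p, 1e-9)         # residual variance minus predicted P
        k = p / (p + r)                  # Kalman gain
        x += k * nu
        p *= (1.0 - k)
    return x, r

random.seed(1)
zs = [3.0 + random.gauss(0.0, 0.5) for _ in range(2000)]   # true value 3, var 0.25
x, r = adaptive_kalman(zs)   # x settles near 3.0, r near the true variance
```

    A mis-set r would make the innovation sequence non-white; the windowed residual statistic steers r back toward the value consistent with the observed residuals.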

  14. An Adaptive Kalman Filter using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.

  15. Restrictive Stochastic Item Selection Methods in Cognitive Diagnostic Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Chun; Chang, Hua-Hua; Huebner, Alan

    2011-01-01

    This paper proposes two new item selection methods for cognitive diagnostic computerized adaptive testing: the restrictive progressive method and the restrictive threshold method. They are built upon the posterior weighted Kullback-Leibler (KL) information index but include additional stochastic components either in the item selection index or in…

  16. Object tracking using adaptive covariance descriptor and clustering-based model updating for visual surveillance.

    PubMed

    Qin, Lei; Snoussi, Hichem; Abdallah, Fahed

    2014-05-26

    We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in the case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.
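    The model-update idea is to cluster the accumulated tracking-result samples and keep only those near the densest mode, so samples from tracking mistakes do not contaminate the appearance model. A 1-D mean-shift sketch; the real samples would be covariance descriptors, and the bandwidth here is illustrative:

```python
# Mean-shift to the densest mode, then keep samples near that mode.
def mean_shift_mode(samples, start, bandwidth=1.0, iters=30):
    m = start
    for _ in range(iters):
        near = [s for s in samples if abs(s - m) <= bandwidth]
        m = sum(near) / len(near)        # shift to the local mean
    return m

def reliable_samples(samples, bandwidth=1.0):
    mode = mean_shift_mode(samples, start=samples[0], bandwidth=bandwidth)
    return [s for s in samples if abs(s - mode) <= bandwidth]

# Good detections cluster near 5.0; two drifted samples sit far away.
samples = [4.8, 5.0, 5.1, 5.2, 4.9, 9.0, 9.2]
kept = reliable_samples(samples)         # the two outliers are dropped
```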

  17. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance

    PubMed Central

    Qin, Lei; Snoussi, Hichem; Abdallah, Fahed

    2014-01-01

    We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in the case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences. PMID:24865883

  18. Laser-induced breakdown spectroscopy quantitative analysis method via adaptive analytical line selection and relevance vector machine regression model

    NASA Astrophysics Data System (ADS)

    Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong

    2015-05-01

    A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines that will be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, a LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentration analysis results are given in the form of a confidence interval of the probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness than methods based on partial least squares regression, artificial neural networks and the standard support vector machine.
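    A full RVM places an individual prior precision on each weight and prunes most basis functions; as a minimal stand-in, the sketch below uses a Bayesian linear model with a single shared prior (Bayesian ridge) to show how line intensities can map to a concentration prediction with a confidence interval. The fixed `alpha`/`beta` precisions and variable names are assumptions.

```python
import numpy as np

def bayes_ridge_fit(X, y, alpha=1e-2, beta=25.0):
    """Posterior over weights for y = Xw + noise, with noise precision beta
    and an isotropic Gaussian weight prior of precision alpha.
    Returns the posterior mean and covariance."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    S_inv = alpha * np.eye(X.shape[1]) + beta * X.T @ X
    S = np.linalg.inv(S_inv)
    m = beta * S @ X.T @ y
    return m, S

def bayes_ridge_predict(x, m, S, beta=25.0):
    """Predictive mean and approximate 95% confidence half-width at x."""
    x = np.asarray(x, float)
    mu = x @ m
    var = 1.0 / beta + x @ S @ x   # noise variance + weight uncertainty
    return mu, 1.96 * np.sqrt(var)
```

    With `X` holding the selected line intensities of the training samples and `y` their certified concentrations, the predictive half-width plays the role of the uncertainty estimate described in the abstract.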

  19. P-method post hoc test for adaptive trimmed mean, HQ

    NASA Astrophysics Data System (ADS)

    Low, Joon Khim; Yahaya, Sharipah Soaad Syed; Abdullah, Suhaida; Yusof, Zahayu Md; Othman, Abdul Rahman

    2014-12-01

    Adaptive trimmed mean, HQ, one of the latest additions to the robust estimators, has been proven to be good at controlling Type I error in omnibus tests. However, a post hoc (pairwise multiple comparison) procedure for HQ had yet to be developed. Thus, we took the initiative to develop a post hoc procedure for HQ. The percentile bootstrap method, or P-Method, was proposed, as it has proven effective in controlling the Type I error rate even when the sample size is small. This paper deliberates on the effectiveness of the P-Method on HQ, denoted as P-HQ. The strengths and weaknesses of the proposed method were put to the test under various conditions created by manipulating several variables, such as the shape of the distributions, number of groups, sample sizes, degree of variance heterogeneity, and pairing of sample sizes and group variances. To this end, a simulation study on 2,000 datasets was conducted using SAS/IML Version 9.2. The performance of the method under these conditions was assessed by its ability to control Type I error, benchmarked using Bradley's criterion of robustness. The findings revealed that P-HQ could effectively control Type I error for almost all the conditions investigated.
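    The P-Method itself is straightforward to sketch: resample both groups, recompute the trimmed means, and read the percentile interval of the difference. HQ's adaptive trimming rule is not given in the abstract, so a fixed 20% symmetric trim stands in for it here; all parameters are illustrative.

```python
import numpy as np

def trimmed_mean(x, prop=0.2):
    """Symmetric trimmed mean removing prop of each tail (a fixed-trim
    stand-in for HQ's adaptive trimming rule)."""
    x = np.sort(np.asarray(x, float))
    g = int(prop * len(x))
    return x[g:len(x) - g].mean()

def p_method(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the difference of trimmed means of two
    groups; the pair differs significantly if the CI excludes zero."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)
        yb = rng.choice(y, size=len(y), replace=True)
        diffs[b] = trimmed_mean(xb) - trimmed_mean(yb)
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return lo, hi, not (lo <= 0.0 <= hi)
```

    For a full post hoc analysis this comparison would be repeated over every pair of groups.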

  20. A hybrid and adaptive segmentation method using color and texture information

    NASA Astrophysics Data System (ADS)

    Meurie, C.; Ruichek, Y.; Cohen, A.; Marais, J.

    2010-01-01

    This paper presents a new image segmentation method based on the combination of texture and color information. The method first computes the morphological color and texture gradients. The color gradient is analyzed taking into account the different color spaces. The texture gradient is computed using the luminance component of the HSL color space. The texture gradient procedure is achieved using a morphological filter and a granulometric and local energy analysis. To overcome the limitations of a linear/barycentric combination, the two morphological gradients are then mixed using a gradient component fusion strategy (to fuse the three components of the color gradient and the unique component of the texture gradient) and an adaptive technique for choosing the weighting coefficients. The segmentation process is finally performed by applying the watershed technique using different types of germ (seed) images. The segmentation method is evaluated in different object classification applications using the k-means algorithm. The obtained results are compared with other known segmentation methods. The evaluation analysis shows that the proposed method gives better results, especially under difficult image acquisition conditions.

  1. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal

    PubMed Central

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-01-01

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise fiber optic gyroscope (FOG) drift signals in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at any time to ensure the lowest noise level of the output, but the inertia of the KF response increases in dynamic conditions. To decrease the inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by adaptive moving average (AMA). The AMA-RWE-DFAKF is applied for denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), RWE-based adaptive KF with gain correction (RWE-AKFG), and AMA- and RWE- based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal. PMID:26512665
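    The innovation-based adaptation can be illustrated on a scalar random-walk model: the measurement-noise covariance is re-estimated from recent innovations before each gain computation. A plain moving average of squared innovations stands in for the paper's random weighting estimation, and all tuning constants are assumptions.

```python
import numpy as np

def adaptive_kf(z, q=1e-5, r0=1e-2, window=10):
    """1-D random-walk Kalman filter whose measurement-noise covariance R
    is re-estimated from a moving window of innovations (a simple moving
    average standing in for random weighting estimation)."""
    x, p, r = 0.0, 1.0, r0
    innovations, out = [], []
    for zk in z:
        p = p + q                        # predict (state is a random walk)
        nu = zk - x                      # innovation
        innovations.append(nu)
        if len(innovations) >= window:   # adapt R from recent innovations
            c = np.mean(np.square(innovations[-window:]))
            r = max(c - p, 1e-8)         # innovation cov = P + R
        k = p / (p + r)                  # gain
        x = x + k * nu                   # update state
        p = (1.0 - k) * p                # update covariance
        out.append(x)
    return np.array(out)
```

    On a constant drift level buried in white noise, the adapted R settles near the true measurement noise variance and the output converges to the level.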

  2. High-accuracy stereo matching based on adaptive ground control points.

    PubMed

    Chenbo Shi; Guijin Wang; Xuanwu Yin; Xiaokang Pei; Bei He; Xinggang Lin

    2015-04-01

    This paper proposes a novel high-accuracy stereo matching scheme based on adaptive ground control points (AdaptGCP). Different from traditional fixed GCP-based methods, we consider color dissimilarity, spatial relation, and pixel-matching reliability to select GCPs adaptively in each local support window. To minimize the global energy, we propose a practical solution, named the alternating updating scheme of disparity and confidence map, which can effectively eliminate the redundant and interfering information of unreliable pixels. The disparity values of those unreliable pixels are reassigned with the information provided by a local plane model, which is fitted with GCPs. Then, the confidence map is updated according to the disparity reassignment and the left-right consistency. Finally, the disparity map is refined by multistep filters. Quantitative evaluations demonstrate the effectiveness of our AdaptGCP scheme for regularizing the ill-posed matching problem. The top ranks on the Middlebury benchmark with different error thresholds show that our algorithm achieves state-of-the-art performance among the latest stereo matching algorithms. This paper provides a new insight toward high-accuracy stereo matching. PMID:25608303

  3. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal.

    PubMed

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-10-23

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise fiber optic gyroscope (FOG) drift signals in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at any time to ensure the lowest noise level of the output, but the inertia of the KF response increases in dynamic conditions. To decrease the inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by adaptive moving average (AMA). The AMA-RWE-DFAKF is applied for denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), RWE-based adaptive KF with gain correction (RWE-AKFG), and AMA- and RWE- based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal.

  4. A Space-Time Adaptive Method for Simulating Complex Cardiac Dynamics

    NASA Astrophysics Data System (ADS)

    Cherry, E. M.; Greenside, H. S.; Henriquez, C. S.

    2000-03-01

    A new space-time adaptive mesh refinement algorithm (AMRA) is presented and analyzed which, by automatically adding and deleting local patches of higher-resolution Cartesian meshes, can simulate quantitatively accurate models of cardiac electrical dynamics efficiently in large domains. We find in two space dimensions that the AMRA is able to achieve a factor of 5 speedup and a factor of 5 reduction in memory while maintaining the same accuracy as a code based on a uniform space-time mesh at the highest resolution of the AMRA method. We summarize applications of the code to the Luo-Rudy 1 cardiac model in large two- and three-dimensional domains and discuss the implications of our results for understanding the initiation of arrhythmias.
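    The flagging step that decides where higher-resolution patches are added can be illustrated in one dimension: cells whose undivided difference exceeds a tolerance (here, near a propagating wavefront) are marked for refinement. The AMRA's actual criterion and its clustering of flags into Cartesian patches are not specified in the abstract, so this is a generic sketch.

```python
import numpy as np

def refine_flags(u, tol):
    """Mark cells for refinement where the undivided difference between
    neighbouring cell values exceeds tol (1-D sketch of AMR flagging)."""
    g = np.abs(np.diff(u))
    flags = np.zeros(len(u), dtype=bool)
    flags[:-1] |= g > tol      # flag both cells sharing a steep face
    flags[1:] |= g > tol
    return flags
```

    For a sharp front such as a depolarization wave, only the cells near the front are flagged, which is what keeps the refined region small relative to the domain.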

  5. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

    Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
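    The space-filling-curve partitioning idea can be sketched by ordering cells along a curve and cutting the resulting one-dimensional ordering into contiguous, equally sized chunks. The abstract does not name the particular curve, so a Morton (Z-order) key is assumed below.

```python
def morton3d(i, j, k, bits=10):
    """Interleave the bits of integer cell indices (i, j, k) into a 3-D
    Morton (Z-order) key."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (3 * b)
        key |= ((j >> b) & 1) << (3 * b + 1)
        key |= ((k >> b) & 1) << (3 * b + 2)
    return key

def partition(cells, n_parts):
    """Sort cells along the space-filling curve and cut the ordering into
    n_parts contiguous chunks of (nearly) equal size."""
    order = sorted(cells, key=lambda c: morton3d(*c))
    size = -(-len(order) // n_parts)   # ceiling division
    return [order[i:i + size] for i in range(0, len(order), size)]
```

    Because the curve preserves spatial locality, each contiguous chunk tends to be a compact region of the mesh, which is what makes this cheap ordering usable as an on-the-fly domain decomposition.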

  6. Implementer-initiated adaptation of evidence-based interventions: kids remember the blue wig.

    PubMed

    Gibbs, D A; Krieger, K E; Cutbush, S L; Clinton-Sherrod, A M; Miller, S

    2016-06-01

    Adaptation of evidence-based interventions by implementers is widespread. Although frequently viewed as departures from fidelity, adaptations may be positive in impact and consistent with fidelity. Research typically catalogs adaptations but rarely includes the implementers' perspectives on adaptation. We report data on individuals implementing an evidence-based teen dating violence prevention curriculum. Key informant interviews (n = 20) and an online focus group (n = 10) addressed reasons for adaptations, adaptation processes and kinds of adaptations. All implementers described making adaptations, which they considered necessary to achieving intended outcomes. Adaptations were tailored to needs of individual students or learning opportunities presented by current events, fine-tuned over repeated applications and shared with colleagues. Adaptations modified both content and delivery and included both planned and in-the-moment changes. Implementers made adaptations to increase student engagement, and to fit students' learning needs, learning style, social maturity and culture. Student engagement served as an indicator that adaptation might be needed and provided feedback about the immediate effects of the adaptation. These findings underscore the value of fidelity assessments that measure participant response, intervention-specific guidance to implementers and evaluation of the impact of adaptations on participant response and intervention outcomes. PMID:27107432

  7. A Modified Active Appearance Model Based on an Adaptive Artificial Bee Colony

    PubMed Central

    Othman, Zulaiha Ali

    2014-01-01

    The active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods are applicable to resolving this problem; however, applying optimization raises its own common problems. Hence, in this paper we propose an AAM-based face recognition technique that is capable of resolving the fitting problem of AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 image dataset. The results reveal that the proposed face recognition technique performs effectively in terms of face recognition accuracy. PMID:25165748
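    A minimal artificial bee colony optimizer conveys the fitting idea: bees perturb food sources (candidate model parameters) toward random partners, and scouts replace exhausted sources. The onlooker phase's fitness-proportional selection is folded into a single local-search loop for brevity, the paper's adaptive tuning is omitted, and all parameters are illustrative.

```python
import numpy as np

def abc_minimize(f, bounds, n_food=10, n_iter=200, limit=10, seed=0):
    """Minimal artificial bee colony optimiser for a cost f over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = len(lo)
    food = rng.uniform(lo, hi, (n_food, dim))      # candidate solutions
    cost = np.array([f(x) for x in food])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(n_iter):
        # employed/onlooker phase: local move toward a random partner
        for i in range(n_food):
            j = rng.integers(n_food - 1)
            j += j >= i                            # partner index != i
            phi = rng.uniform(-1, 1, dim)
            cand = np.clip(food[i] + phi * (food[i] - food[j]), lo, hi)
            c = f(cand)
            if c < cost[i]:
                food[i], cost[i], trials[i] = cand, c, 0
            else:
                trials[i] += 1
        # scout phase: re-randomise sources that stopped improving
        for i in np.flatnonzero(trials > limit):
            food[i] = rng.uniform(lo, hi)
            cost[i] = f(food[i])
            trials[i] = 0
    best = cost.argmin()
    return food[best], cost[best]
```

    In the AAM setting, `f` would be the model-to-image fitting error as a function of the shape and appearance parameters.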

  8. AMGET, an R-Based Postprocessing Tool for ADAPT 5

    PubMed Central

    Guiastrennec, B; Wollenberg, L; Forrest, A; Ait-Oudhia, S

    2013-01-01

    ADAPT 5 is a powerful software package for population pharmacokinetic and pharmacodynamic systems analysis, but it provides limited built-in functionality for creating pre- and post-analysis diagnostic plots. ADAPT 5 Model Evaluation Graphical Toolkit (AMGET), an external package written in the open-source R programming language, was developed specifically to support efficient postprocessing of ADAPT 5 runs, as well as NONMEM and S-ADAPT runs. Using interactive navigational menus, users of AMGET are able to rapidly create informative diagnostic plots enriched by the display of numerical and graphical elements, with a high degree of customization through a simple settings spreadsheet. This article describes each feature of the AMGET package and illustrates how it allows users to utilize the powerful numerical routines of the ADAPT 5 package in a more efficient manner, through the use of a simulated dataset and a simple pharmacokinetic model optimized using the maximum likelihood expectation maximization (MLEM) algorithm of ADAPT 5. PMID:23903464

  9. Error estimation and adaptive order nodal method for solving multidimensional transport problems

    SciTech Connect

    Zamonsky, O.M.; Gho, C.J.; Azmy, Y.Y.

    1998-01-01

    The authors propose a modification of the Arbitrarily High Order Transport Nodal method whereby they solve each node and each direction using a different expansion order. With this feature and a previously proposed a posteriori error estimator, they develop an adaptive order scheme to automatically improve the accuracy of the solution of the transport equation. They implemented the modified nodal method, the error estimator, and the adaptive order scheme in a discrete-ordinates code for solving monoenergetic, fixed source, isotropic scattering problems in two-dimensional Cartesian geometry. They solve two test problems with large homogeneous regions to test the adaptive order scheme. The results show that, using the adaptive process, the storage requirements are reduced while the accuracy of the results is preserved.

  10. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images.

    PubMed

    Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test. PMID:22291550
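    The AUC comparison used to rank the three classifiers can be computed directly from classifier scores via the rank (Mann-Whitney) formulation: the AUC equals the probability that a randomly chosen positive sample outscores a randomly chosen negative one. This generic helper is not the authors' code.

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation, counting
    ties between positive and negative scores as half a win."""
    s = np.asarray(scores, float)
    y = np.asarray(labels, bool)
    pos, neg = s[y], s[~y]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

    For multi-class land cover maps, this one-vs-rest AUC would be computed per class and then compared across classifiers, as done in the study.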

  11. Impedance adaptation methods of the piezoelectric energy harvesting

    NASA Astrophysics Data System (ADS)

    Kim, Hyeoungwoo

    In this study, the important issues of energy recovery were addressed and a comprehensive investigation was performed on harvesting electrical power from an ambient mechanical vibration source. Also discussed are the impedance matching methods used to increase the efficiency of energy transfer from the environment to the application. Initially, the mechanical impedance matching method was investigated to increase the mechanical energy transferred to the transducer from the environment. This was done by reducing the mechanical impedance, such as the damping factor and energy reflection ratio. The vibration source and the transducer were modeled by a two-degree-of-freedom dynamic system with mass, spring constant, and damper. The transmissibility, employed to show how much mechanical energy was transferred in this system, was affected by the damping ratio and the stiffness of the elastic materials. The mechanical impedance of the system was described by an electrical system, using the analogy between the two systems, in order to simplify the total mechanical impedance. Secondly, the transduction rate of mechanical energy to electrical energy was improved by using a PZT material, which has a high figure of merit and a high electromechanical coupling factor for electrical power generation, and a piezoelectric transducer with a high transduction rate was designed and fabricated. A high-g material (g33 = 40 × 10⁻³ Vm/N) was developed to improve the figure of merit of the PZT ceramics. The cymbal composite transducer has been found to be a promising structure for piezoelectric energy harvesting under high force at cyclic conditions (10–200 Hz), because it has an effective strain coefficient almost 40 times higher than that of PZT ceramics. The endcap of the cymbal also enhances the endurance of the ceramic in sustaining AC loads, along with stress amplification. In addition, a macro fiber composite (MFC) was employed as a strain component because of its flexibility and the high electromechanical coupling

  12. TU-C-17A-07: FusionARC Treatment with Adaptive Beam Selection Method

    SciTech Connect

    Kim, H; Li, R; Xing, L; Lee, R

    2014-06-15

    Purpose: Recently, a new treatment scheme, FusionARC, has been introduced to compensate for the pitfalls in single-arc VMAT planning. It allows for static field treatment in selected locations, while the remainder is treated by single-rotational arc delivery. The important issue is how to choose the directions for static field treatment. This study presents an adaptive beam selection method to formulate the FusionARC treatment scheme. Methods: The optimal plan for single-rotational arc treatment is obtained from a two-step approach based on reweighted total-variation (TV) minimization. To choose the directions for static field treatment with extra segments, the value of our proposed cost function at each field is computed on the new fluence map, which adds an extra segment to the designated field location only. The cost function is defined as a summation of the equivalent uniform dose (EUD) of all structures with the fluence map, assuming that a lower cost function value implies an enhancement of plan quality. Finally, the extra segments for static field treatment are added to the selected directions with low cost function values. Prostate patient data were used and evaluated with three different plans: conventional VMAT, FusionARC, and static IMRT. Results: The 7 field locations corresponding to the lowest cost function values are chosen for inserting extra segments for step-and-shoot dose delivery. Our proposed FusionARC plan with the selected angles improves dose sparing of the critical organs relative to the static IMRT and conventional VMAT plans. The dose conformity to the target is significantly enhanced at a small expense of treatment time compared with the VMAT plan. Its estimated treatment time, however, is still much faster than that of IMRT. Conclusion: The FusionARC treatment with the adaptive beam selection method can improve plan quality with a negligible increase in treatment time, relative to conventional VMAT.
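    The field-ranking cost can be sketched with a generalized-EUD score over a target and an organ at risk: the target is rewarded (via a negative exponent) and the organ at risk penalized, and the lowest-cost fields receive the extra segments. The abstract does not give the exact structure weights or exponents, so the values below are illustrative.

```python
import numpy as np

def eud(dose, a):
    """Generalised equivalent uniform dose of a (positive) voxel dose array:
    EUD = (mean of d^a)^(1/a)."""
    d = np.asarray(dose, float)
    return (np.mean(d ** a)) ** (1.0 / a)

def select_fields(field_doses, a_target=-10.0, a_oar=4.0, n_select=7):
    """Score each candidate field by a summed-EUD cost (target EUD rewarded,
    organ-at-risk EUD penalised) and return the indices of the
    n_select lowest-cost fields."""
    costs = []
    for target_dose, oar_dose in field_doses:
        costs.append(-eud(target_dose, a_target) + eud(oar_dose, a_oar))
    return np.argsort(costs)[:n_select]
```

    Each entry of `field_doses` holds the per-voxel target and organ-at-risk doses that result from adding an extra segment at that field location only, matching the per-field evaluation described in the abstract.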

  13. Adaptive noise cancellation based on beehive pattern evolutionary digital filter

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaojun; Shao, Yimin

    2014-01-01

    Evolutionary digital filtering (EDF) exhibits the advantage of avoiding the local optimum problem by using cloning and mating searching rules in an adaptive noise cancellation system. However, convergence performance is restricted by the large population of individuals and the low level of information communication among them. The special beehive structure enables the individuals on neighbour beehive nodes to communicate with each other and thus enhance the information spread and random search ability of the algorithm. By introducing the beehive pattern evolutionary rules into the original EDF, this paper proposes an improved beehive pattern evolutionary digital filter (BP-EDF) to overcome the defects of the original EDF. In the proposed algorithm, a new evolutionary rule which combines competing cloning, complete cloning and assistance mating methods is constructed to enable the individuals distributed on the beehive to communicate with their neighbours. Simulation results are used to demonstrate the improved performance of the proposed algorithm in terms of convergence speed to the global optimum compared with the original methods. Experimental results also verify the effectiveness of the proposed algorithm in extracting feature signals that are contaminated by significant amounts of noise during the fault diagnosis task.

  14. On the use of adaptive moving grid methods in combustion problems

    SciTech Connect

    Hyman, J.M.; Larrouturou, B.

    1986-01-01

    The investigators have presented the reasons and advantages of adaptively moving the mesh points for the solution of time-dependent PDEs (partial differential equations) systems developing sharp gradients, and more specifically for combustion problems. Several available adaptive dynamic rezone methods have been briefly reviewed, and the effectiveness of these algorithms for combustion problems has been illustrated by the numerical solution of a simple flame propagation problem. 29 refs., 7 figs.
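    A core ingredient of the adaptive moving-grid methods reviewed here is equidistribution of a monitor function, which concentrates mesh points where the solution develops sharp gradients (such as a flame front). Below is a one-dimensional de Boor-style sketch with an arc-length monitor; the reviewed rezone methods differ in their details.

```python
import numpy as np

def equidistribute(x, u, alpha=1.0):
    """Move the interior nodes of a 1-D mesh so that the arc-length monitor
    M = sqrt(1 + alpha * u_x^2) is equidistributed over the cells."""
    ux = np.gradient(u, x)
    m = np.sqrt(1.0 + alpha * ux ** 2)
    # cumulative integral of the monitor along the current mesh (trapezoid)
    cell = 0.5 * (m[:-1] + m[1:]) * np.diff(x)
    c = np.concatenate([[0.0], np.cumsum(cell)])
    # invert: place nodes at equal increments of the cumulative monitor
    targets = np.linspace(0.0, c[-1], len(x))
    return np.interp(targets, c, x)
```

    Applied repeatedly as the solution evolves (with the solution interpolated onto the moved mesh), this is the basic dynamic-rezone loop.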

  15. The adaptive buffered force QM/MM method in the CP2K and AMBER software packages.

    PubMed

    Mones, Letif; Jones, Andrew; Götz, Andreas W; Laino, Teodoro; Walker, Ross C; Leimkuhler, Ben; Csányi, Gábor; Bernstein, Noam

    2015-04-01

    The implementation and validation of the adaptive buffered force (AdBF) quantum-mechanics/molecular-mechanics (QM/MM) method in two popular packages, CP2K and AMBER, are presented. The implementations build on the existing QM/MM functionality in each code, extending it to allow for redefinition of the QM and MM regions during the simulation and reducing QM-MM interface errors by discarding forces near the boundary according to the buffered force-mixing approach. New adaptive thermostats, needed by force-mixing methods, are also implemented. Different variants of the method are benchmarked by simulating the structure of bulk water, water autoprotolysis in the presence of zinc and dimethyl-phosphate hydrolysis using various semiempirical Hamiltonians and density functional theory as the QM model. It is shown that with suitable parameters, based on force convergence tests, the AdBF QM/MM scheme can provide an accurate approximation of the structure in the dynamical QM region matching the corresponding fully QM simulations, as well as reproducing the correct energetics in all cases. Adaptive unbuffered force-mixing and adaptive conventional QM/MM methods also provide reasonable results for some systems, but are more likely to suffer from instabilities and inaccuracies.
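    The force-mixing idea (trust QM forces in the core region, MM forces far away, and blend smoothly across the boundary region) can be sketched per atom with a distance-based weight. AdBF's actual region redefinition and buffered QM evaluation are more involved, so treat this as schematic; the smoothstep weight is an assumption.

```python
import numpy as np

def mix_forces(f_qm, f_mm, dist, r_qm, r_buf):
    """Blend per-atom QM and MM forces: pure QM inside r_qm, pure MM beyond
    r_qm + r_buf, smooth interpolation across the buffer in between."""
    d = np.asarray(dist, float)
    t = np.clip((d - r_qm) / r_buf, 0.0, 1.0)
    w = 1.0 - t * t * (3.0 - 2.0 * t)   # smoothstep weight for the QM force
    return (w[:, None] * np.asarray(f_qm, float)
            + (1.0 - w)[:, None] * np.asarray(f_mm, float))
```

    Here `dist` is each atom's distance from the QM-region center; gradually down-weighting the QM forces across the buffer is what suppresses the boundary-force errors mentioned in the abstract.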

  16. The adaptive buffered force QM/MM method in the CP2K and AMBER software packages

    PubMed Central

    Mones, Letif; Jones, Andrew; Götz, Andreas W; Laino, Teodoro; Walker, Ross C; Leimkuhler, Ben; Csányi, Gábor; Bernstein, Noam

    2015-01-01

    The implementation and validation of the adaptive buffered force (AdBF) quantum-mechanics/molecular-mechanics (QM/MM) method in two popular packages, CP2K and AMBER, are presented. The implementations build on the existing QM/MM functionality in each code, extending it to allow for redefinition of the QM and MM regions during the simulation and reducing QM-MM interface errors by discarding forces near the boundary according to the buffered force-mixing approach. New adaptive thermostats, needed by force-mixing methods, are also implemented. Different variants of the method are benchmarked by simulating the structure of bulk water, water autoprotolysis in the presence of zinc and dimethyl-phosphate hydrolysis using various semiempirical Hamiltonians and density functional theory as the QM model. It is shown that with suitable parameters, based on force convergence tests, the AdBF QM/MM scheme can provide an accurate approximation of the structure in the dynamical QM region matching the corresponding fully QM simulations, as well as reproducing the correct energetics in all cases. Adaptive unbuffered force-mixing and adaptive conventional QM/MM methods also provide reasonable results for some systems, but are more likely to suffer from instabilities and inaccuracies. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:25649827

  17. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles

    PubMed Central

    Alam, Kazi Masudul; Saini, Mukesh; El Saddik, Abdulmotaleb

    2015-01-01

    Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor hubs to capture surrounding information using in-vehicle and smartphone sensors and later publish it for consumers. A cloud-centric cyber-physical system better describes the SIoV model, where the physical sensing-actuation process affects cloud-based service sharing or computation in a feedback loop, or vice versa. The cyber-based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data, and it is difficult to form a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of the various subsystems involved in the SIoV process. We present the basic model, which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems, which would foster the deployment of intelligent transport systems. PMID:26389905

  18. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles.

    PubMed

    Alam, Kazi Masudul; Saini, Mukesh; El Saddik, Abdulmotaleb

    2015-09-15

    Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor hubs to capture surrounding information using in-vehicle and smartphone sensors and later publish it for consumers. A cloud-centric cyber-physical system better describes the SIoV model, where the physical sensing-actuation process affects cloud-based service sharing or computation in a feedback loop, or vice versa. The cyber-based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data, and it is difficult to form a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of the various subsystems involved in the SIoV process. We present the basic model, which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems, which would foster the deployment of intelligent transport systems.

  19. Adapting Drug Approval Pathways for Bacteriophage-Based Therapeutics.

    PubMed

    Cooper, Callum J; Khan Mirzaei, Mohammadali; Nilsson, Anders S

    2016-01-01

    The global rise of multi-drug resistant bacteria has resulted in the notion that an "antibiotic apocalypse" is fast approaching. This has led to a number of well publicized calls for global funding initiatives to develop new antibacterial agents. The long clinical history of phage therapy in Eastern Europe, combined with more recent in vitro and in vivo success, demonstrates the potential for whole-phage or phage-based antibacterial agents. To date, no whole-phage or phage-derived products are approved for human therapeutic use in the EU or USA. There are at least three reasons for this: (i) phages possess different biological, physical, and pharmacological properties compared to conventional antibiotics. Phages need to replicate in order to achieve a viable antibacterial effect, resulting in complex pharmacodynamics/pharmacokinetics. (ii) The specificity of individual phages requires multiple phages to treat single-species infections, often as part of complex cocktails. (iii) The current approval process for antibacterial agents has evolved with the development of chemically based drugs at its core, and is not suitable for phages. Due to similarities with conventional antibiotics, phage-derived products such as endolysins are suitable for approval under current processes as biological therapeutic proteins. These criteria render the approval of phages for clinical use theoretically possible but not economically viable. In this review, the pitfalls of the current approval process will be discussed for whole-phage and phage-derived products, in addition to the utilization of alternative approval pathways, including adaptive licensing and "Right to try" legislation. PMID:27536293

  20. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles.

    PubMed

    Alam, Kazi Masudul; Saini, Mukesh; El Saddik, Abdulmotaleb

    2015-01-01

    Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor hubs to capture surrounding information using in-vehicle and smartphone sensors and later publish it for consumers. A cloud-centric cyber-physical system better describes the SIoV model, where the physical sensing-actuation process affects cloud-based service sharing or computation in a feedback loop, or vice versa. The cyber-based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data, and it is difficult to form a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of the various subsystems involved in the SIoV process. We present the basic model, which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems, which would foster the deployment of intelligent transport systems. PMID:26389905