A domain decomposition method for pseudo-spectral electromagnetic simulations of plasmas
Vay, Jean-Luc; Haber, Irving; Godfrey, Brendan B.
2013-06-15
Pseudo-spectral electromagnetic solvers (i.e. representing the fields in Fourier space) have extraordinary precision. In particular, Haber et al. presented in 1973 a pseudo-spectral solver that integrates the solution analytically over a finite time step, under the usual assumption that the source is constant over that time step. Yet, pseudo-spectral solvers have not been widely used, due in part to the difficulty of efficient parallelization, owing to the global communications associated with global FFTs over the entire computational domain. A method for the parallelization of electromagnetic pseudo-spectral solvers is proposed and tested on single electromagnetic pulses and on Particle-In-Cell simulations of wakefield formation in a laser plasma accelerator. The method takes advantage of the properties of the Discrete Fourier Transform, the linearity of Maxwell’s equations, and the finite speed of light to limit the communication of data to guard regions between neighboring computational domains. Although this requires a small approximation, test results show that no significant error is made in the test cases presented. The proposed method opens the way to solvers combining the favorable parallel scaling of standard finite-difference methods with the accuracy advantages of pseudo-spectral methods.
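The analytic per-mode integration that Haber-style solvers perform can be sketched in one dimension. The scalar wave equation stands in for Maxwell's equations here; this is an illustrative sketch, not the paper's PSATD implementation, and all names are ours:

```python
import numpy as np

def spectral_wave_step(u, v, dt, c, dx):
    """Advance (u, u_t) for u_tt = c^2 u_xx by one time step, exactly for
    each Fourier mode: every mode rotates at omega = c|k|, the 1D analogue
    of integrating the field equations analytically over the step."""
    k = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)
    w = c * np.abs(k)
    uh, vh = np.fft.fft(u), np.fft.fft(v)
    safe_w = np.where(w > 0, w, 1.0)
    cos_wdt = np.cos(w * dt)
    sinc_wdt = np.where(w > 0, np.sin(w * dt) / safe_w, dt)  # sin(w dt)/w -> dt as w -> 0
    uh_new = uh * cos_wdt + vh * sinc_wdt
    vh_new = vh * cos_wdt - uh * w * np.sin(w * dt)
    return np.fft.ifft(uh_new).real, np.fft.ifft(vh_new).real
```

Because each mode is advanced exactly, the step size is not limited by a Courant condition for the vacuum part of the problem.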
A conservative Fourier pseudo-spectral method for the nonlinear Schrödinger equation
NASA Astrophysics Data System (ADS)
Gong, Yuezheng; Wang, Qi; Wang, Yushun; Cai, Jiaxiang
2017-01-01
A Fourier pseudo-spectral method that conserves mass and energy is developed for a two-dimensional nonlinear Schrödinger equation. By establishing the equivalence between the semi-norm in the Fourier pseudo-spectral method and that in the finite difference method, we are able to extend the result in Ref. [56] to prove that the optimal rate of convergence of the new method is of order O(N^{-r} + τ^2) in the discrete L2 norm without any restrictions on the grid ratio, where N is the number of modes used in the spectral method and τ is the time step size. A fast solver is then applied to the discrete nonlinear equation system to speed up the numerical computation for the high-order method. Numerical examples are presented to show the efficiency and accuracy of the new method.
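The mass invariant in question is easy to check numerically. The sketch below uses a plain split-step Fourier integrator (a standard method, not the paper's conservative scheme) for the 1D focusing cubic NLS; both substeps are unitary, so the discrete mass ∫|ψ|² dx is preserved to roundoff:

```python
import numpy as np

def splitstep_nls(psi, dt, dx, steps):
    """Split-step Fourier integrator for i psi_t = -psi_xx - |psi|^2 psi.
    The linear part is advanced exactly in Fourier space; the nonlinear
    part is an exact phase rotation, so |psi| (and hence mass) is conserved."""
    k = 2 * np.pi * np.fft.fftfreq(psi.size, d=dx)
    lin = np.exp(-1j * k**2 * dt)                        # exact linear propagator
    for _ in range(steps):
        psi = psi * np.exp(1j * np.abs(psi)**2 * dt / 2)  # half nonlinear step
        psi = np.fft.ifft(lin * np.fft.fft(psi))          # full linear step
        psi = psi * np.exp(1j * np.abs(psi)**2 * dt / 2)  # half nonlinear step
    return psi

def mass(psi, dx):
    """Discrete mass invariant dx * sum |psi|^2."""
    return dx * np.sum(np.abs(psi)**2)
```

Energy, by contrast, is only conserved approximately by split-step schemes, which is what motivates conservative discretizations like the one in the abstract.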
A comparison of vortex and pseudo-spectral methods at high Reynolds numbers
NASA Astrophysics Data System (ADS)
Leonard, Anthony; van Rees, Wim; Koumoutsakos, Petros
2010-11-01
We validate the hybrid particle-mesh vortex method against a pseudo-spectral method in simulations of the Taylor-Green vortex and colliding vortex tubes at Re = 1600 - 10,000. The spectral method uses the smooth filter introduced in [1]. In the case of the Taylor-Green vortex, we observe very good agreement in the evolution of the vortical structures, albeit with small discrepancies in the energy spectrum at the smallest length scales. In the collision of two anti-parallel vortex tubes at Re = 10,000, there is very good agreement between the two methods in terms of the simulated vortical structures throughout the first reconnection of the tubes. The maximum error in the effective viscosity is below 2.5% and 1% for the vortex method and the pseudo-spectral method, respectively. At later times the agreement between the two methods in the vortical structures deteriorates even though there is good agreement in the energy spectrum. Both methods resolve an unexpected vortex breakdown during the second reconnection of the vortex tubes. [1] Hou, T. and Li, R., 2007. Computing nearly singular solutions using pseudo-spectral methods. J. Comput. Phys., 226:379-397.
NASA Astrophysics Data System (ADS)
van Rees, Wim M.; Leonard, Anthony; Pullin, D. I.; Koumoutsakos, Petros
2011-04-01
We present a validation study for the hybrid particle-mesh vortex method against a pseudo-spectral method for the Taylor-Green vortex at ReΓ = 1600 as well as in the collision of two antiparallel vortex tubes at ReΓ = 10,000. In this study we present diagnostics such as energy spectra and enstrophy as computed by both methods as well as point-wise comparisons of the vorticity field. Using a fourth order accurate kernel for interpolation between the particles and the mesh, the results of the hybrid vortex method and of the pseudo-spectral method agree well in both flow cases. For the Taylor-Green vortex, the vorticity contours computed by both methods around the time of the energy dissipation peak overlap. The energy spectrum shows that only the smallest length scales in the flow are not captured by the vortex method. In the second flow case, where we compute the collision of two anti-parallel vortex tubes at Reynolds number 10,000, the vortex method results and the pseudo-spectral method results are in very good agreement up to and including the first reconnection of the tubes. The maximum error in the effective viscosity is about 2.5% for the vortex method and about 1% for the pseudo-spectral method. At later times the flows computed with the different methods show the same qualitative features, but the quantitative agreement on vortical structures is lost.
High precision computing with charge domain devices and a pseudo-spectral method therefor
NASA Technical Reports Server (NTRS)
Barhen, Jacob (Inventor); Toomarian, Nikzad (Inventor); Fijany, Amir (Inventor); Zak, Michail (Inventor)
1997-01-01
The present invention enhances the bit resolution of a CCD/CID MVM processor by storing each bit of each matrix element as a separate CCD charge packet. The bits of each input vector are separately multiplied by each bit of each matrix element in massive parallelism and the resulting products are combined appropriately to synthesize the correct product. In another aspect of the invention, such arrays are employed in a pseudo-spectral method of the invention, in which partial differential equations are solved by expressing each derivative analytically as matrices, and the state function is updated at each computation cycle by multiplying it by the matrices. The matrices are treated as synaptic arrays of a neural network and the state function vector elements are treated as neurons. In a further aspect of the invention, moving target detection is performed by driving the soliton equation with a vector of detector outputs. The neural architecture consists of two synaptic arrays corresponding to the two differential terms of the soliton-equation and an adder connected to the output thereof and to the output of the detector array to drive the soliton equation.
Seismic waves modeling with the Fourier pseudo-spectral method on massively parallel machines.
NASA Astrophysics Data System (ADS)
Klin, Peter
2015-04-01
The Fourier pseudo-spectral method (FPSM) is an approach for the 3D numerical modeling of wave propagation, which is based on the discretization of the spatial domain in a structured grid and relies on global spatial differential operators for the solution of the wave equation. This last peculiarity is advantageous from the accuracy point of view but poses difficulties for an efficient implementation of the method on parallel computers with distributed-memory architecture. The 1D spatial domain decomposition approach has so far been commonly adopted in parallel implementations of the FPSM, but it implies an intensive data exchange among all the processors involved in the computation, which can degrade performance because of communication latencies. Moreover, the scalability of the 1D domain decomposition is limited, since the number of processors cannot exceed the number of grid points along the direction in which the domain is partitioned. This limitation inhibits an efficient exploitation of computational environments with a very large number of processors. In order to overcome the limitations of the 1D domain decomposition, we implemented a parallel version of the FPSM based on a 2D domain decomposition, which allows a higher degree of parallelism and scalability on massively parallel machines with several thousand processing elements. The parallel programming is essentially achieved using the MPI protocol, but OpenMP parts are also included in order to exploit single-processor multi-threading capabilities, when available. The developed tool is aimed at the numerical simulation of seismic wave propagation and in particular is intended for earthquake ground motion research.
We show the scalability tests performed up to 16k processing elements on the IBM Blue Gene/Q computer at CINECA (Italy), as well as the application to the simulation of the earthquake ground motion in the alluvial plain of the Po river (Italy).
Morales, Jorge A.; Leroy, Matthieu; Bos, Wouter J.T.; Schneider, Kai
2014-10-01
A volume penalization approach to simulate magnetohydrodynamic (MHD) flows in confined domains is presented. Here the incompressible visco-resistive MHD equations are solved using parallel pseudo-spectral solvers in Cartesian geometries. The volume penalization technique is an immersed boundary method characterized by a high flexibility with respect to the geometry of the considered flow. In the present case, it allows the use of boundary conditions other than periodic in a Fourier pseudo-spectral approach. The numerical method is validated and its convergence is assessed for two- and three-dimensional hydrodynamic (HD) and MHD flows, by comparing the numerical results with results from the literature and analytical solutions. The test cases considered are two-dimensional Taylor–Couette flow, the z-pinch configuration, three-dimensional Orszag–Tang flow, Ohmic decay in a periodic cylinder, three-dimensional Taylor–Couette flow with and without axial magnetic field, and three-dimensional Hartmann instabilities in a cylinder with an imposed helical magnetic field. Finally, we present a magnetohydrodynamic flow simulation in toroidal geometry with a non-symmetric cross section and an imposed helical magnetic field to illustrate the potential of the method.
NASA Astrophysics Data System (ADS)
Margairaz, Fabien; Giometto, Marco; Parlange, Marc; Calaf, Marc
2015-11-01
The performance of dealiasing schemes and their computational cost in a pseudo-spectral code are analyzed. Dealiasing is required to limit the error that occurs when two discretized variables are multiplied, which would otherwise pollute the accuracy of the result. In this work three different dealiasing methods are explored: the 2/3 rule, the 3/2 rule, and a high-order Fourier-smoothing-based method. We compare the cost of the traditionally accepted 3/2 rule (Canuto et al., 1988), where an expansion of the computational domain to a larger grid is required, to the cost of the other two techniques that do not require this expansion. This analysis is performed in the framework of Large-Eddy Simulations (LES) of incompressible flows using the constant Smagorinsky sub-grid model with a wall damping function and a wall model based on the log-law. A highly efficient LES code parallelized using a 2D pencil decomposition has been developed. The code employs the traditional pseudo-spectral approach to integrate the incompressible Navier-Stokes equations. Several simulations of a neutral atmospheric boundary layer using different degrees of numerical resolution are considered. Results show a clear difference in computational cost between the different techniques without significant changes in the statistics.
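As a concrete illustration (not the study's LES code), the 2/3 rule amounts to truncating both factors, and the resulting product, to the lowest N/3 Fourier modes around the pointwise multiplication, so that the quadratic nonlinearity produces no aliased modes:

```python
import numpy as np

def dealiased_product(u, v):
    """Pointwise product of two periodic fields with 2/3-rule dealiasing:
    zero the top third of Fourier modes before and after multiplying, so the
    quadratic term is alias-free on the retained modes."""
    n = u.size
    k = np.fft.fftfreq(n) * n              # integer wavenumbers -n/2..n/2-1
    mask = np.abs(k) < n / 3               # keep |k| < N/3
    uh = np.fft.fft(u) * mask
    vh = np.fft.fft(v) * mask
    w = np.fft.ifft(uh).real * np.fft.ifft(vh).real
    return np.fft.ifft(np.fft.fft(w) * mask).real
```

The 3/2 rule achieves the same effect by padding to a 3N/2 grid instead of discarding modes, which is why its memory and FFT cost differ.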
Challenges at Petascale for Pseudo-Spectral Methods on Spheres (A Last Hurrah?)
NASA Technical Reports Server (NTRS)
Clune, Thomas
2011-01-01
Conclusions: a) Proper software abstractions should enable rapid exploration of platform-specific optimizations/tradeoffs. b) Pseudo-spectral methods are marginally viable for at least some classes of petascale problems; i.e., a GPU-based machine with good bisection bandwidth would be best. c) Scalability at exascale is possible, but the necessary resolution will make the algorithm prohibitively expensive. Efficient implementations of realistic global transposes are intricate and tedious in MPI. Pseudo-spectral methods at petascale require exploration of a variety of strategies for spreading local and remote communications. PGAS allows a far simpler implementation and thus rapid exploration of variants.
Pseudo-spectral reverse time migration based on wavefield decomposition
NASA Astrophysics Data System (ADS)
Du, Zengli; Liu, Jianjun; Xu, Feng; Li, Yongzhang
2017-02-01
The accuracy of seismic numerical simulations and the effectiveness of imaging conditions are important in reverse time migration studies. Using the pseudo-spectral method, the precision of the calculated spatial derivative of the seismic wavefield can be improved, increasing the vertical resolution of images. Low-frequency background noise, generated by the zero-lag cross-correlation of mismatched forward-propagated and backward-propagated wavefields at the impedance interfaces, can be eliminated effectively by using an imaging condition based on the wavefield decomposition technique. The computational complexity can be reduced when imaging is performed in the frequency domain. Since the Fourier transform along the z-axis may be obtained directly as one of the intermediate results of the spatial derivative calculation, the computational load of the wavefield decomposition can be reduced, improving the efficiency of imaging. Comparison of the results for a pulse response in a constant-velocity medium indicates that, compared with the finite difference method, the peak frequency of the Ricker wavelet can be increased by 10-15 Hz while avoiding spatial numerical dispersion when the second-order spatial derivative of the seismic wavefield is obtained using the pseudo-spectral method. The results for the SEG/EAGE and Sigsbee2b models show that the signal-to-noise ratio of the profile and the imaging quality of the boundaries of the salt dome migrated using the pseudo-spectral method are better than those obtained using the finite difference method.
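The wavefield-decomposition imaging condition relies on separating up- and down-going energy in the f-k domain. A minimal sketch of that separation for a (z, t) gather follows; it is our own illustration, not the paper's implementation, and which sign corresponds to "up" versus "down" depends on the FFT and depth conventions chosen:

```python
import numpy as np

def updown_decompose(d):
    """Split a (z, t) wavefield into its two propagation directions by the
    sign of kz*omega in the f-k domain. The direction labels depend on the
    FFT sign conventions and are illustrative only."""
    D = np.fft.fft2(d)
    kz = np.fft.fftfreq(d.shape[0])[:, None]
    w = np.fft.fftfreq(d.shape[1])[None, :]
    part_a = np.fft.ifft2(np.where(kz * w < 0, D, 0)).real
    part_b = np.fft.ifft2(np.where(kz * w > 0, D, 0)).real
    return part_a, part_b
```

A single plane wave should land almost entirely in one of the two parts, which is the property the decomposed imaging condition exploits to suppress the low-frequency cross-correlation noise.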
Sidler, Rolf; Carcione, José M.; Holliger, Klaus
2013-02-15
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction and a Runge–Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid–solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method has been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
NASA Astrophysics Data System (ADS)
Munro, Eugene
2013-12-01
In this paper, we solve the Hamiltonian constraint describing a curved general relativistic spacetime to find initial data describing how a black hole exists in vacuum. This has been done before by other researchers [Ansorg, 2004], and we adapt our own methods to an existing pseudo-spectral Poisson solver [Gourgoulhon, 2001]. The need for this adaptation arises from the improper numerical handling by pseudo-spectral methods of a large part of the Hamiltonian constraint equation, due to the presence of the black hole singularity. To resolve a portion of this issue up to a given order, we determine irregularities by executing a polynomial expansion on the Hamiltonian constraint, analytically solving the troublesome components of the equation and subtracting them out of the numerical process. This technique increases the equation's differentiability and allows the numerical solver to run more efficiently. We cover all the calculations needed to describe one black hole with arbitrary spin and linear momentum. Our process is easily extended to cases with n black holes [Brandt, 1997], which we show in chapter 2. We implement a spherical harmonic decomposition of the black hole conformal factor, using the harmonics as basis functions by which to further expand and dissect the Hamiltonian constraint equation. In the end, the expansion and subtraction method is carried out to order r^4, where r is the spherical radius assuming the black hole is at the coordinate origin, making the Hamiltonian equation, which unaltered is a C^2 equation, become a C^7 equation. Smoothing the Hamiltonian improves numerical precision, especially near the BH where the most interesting physics occurs. The method used in this paper can be further implemented to higher orders of r to yield even smoother conditions. We test the numerical results of this method against the existing solver that uses the publicly available Lorene numerical libraries.
Pseudo spectral Chebyshev representation of few-group cross sections on sparse grids
Bokov, P. M.; Botes, D.; Zimin, V. G.
2012-07-01
This paper presents a pseudo-spectral method for representing few-group homogenised cross sections, based on hierarchical polynomial interpolation. The interpolation is performed on a multi-dimensional sparse grid built from Chebyshev nodes. The representation is assembled directly from the samples using basis functions that are constructed as tensor products of the classical one-dimensional Lagrangian interpolation functions. The advantage of this representation is that it combines the accuracy of Chebyshev interpolation with the efficiency of sparse grid methods. As an initial test, this interpolation method was used to construct a representation for the two-group macroscopic cross sections of a VVER pin cell. (authors)
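The one-dimensional building block, Lagrangian interpolation on Chebyshev nodes, can be evaluated stably with barycentric weights. A sketch follows (function names are ours, and this covers only the 1D factor, not the sparse-grid tensor construction):

```python
import numpy as np

def cheb_nodes(n, a=-1.0, b=1.0):
    """n+1 Chebyshev-Lobatto nodes cos(pi*j/n), mapped to [a, b]."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    return 0.5 * (a + b) + 0.5 * (b - a) * x

def barycentric_interp(xn, fn, x):
    """Lagrange interpolant through (xn, fn) at points x, evaluated with the
    barycentric weights (-1)^j (halved at the endpoints) that are exact for
    Chebyshev-Lobatto nodes."""
    n = xn.size - 1
    w = (-1.0) ** np.arange(n + 1)
    w[0] *= 0.5
    w[-1] *= 0.5
    d = x[:, None] - xn[None, :]
    exact = np.isclose(d, 0.0)
    d = np.where(exact, 1.0, d)            # avoid 0/0; fixed up below
    out = (w / d) @ fn / (w / d).sum(axis=1)
    hit = exact.any(axis=1)                # query points that sit on a node
    out[hit] = fn[exact.argmax(axis=1)[hit]]
    return out
```

For smooth functions the error of this interpolant decays spectrally in the number of nodes, which is the accuracy the sparse-grid representation inherits dimension by dimension.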
On the application of pseudo-spectral FFT technique to non-periodic problems
NASA Technical Reports Server (NTRS)
Biringen, S.; Kao, K. H.
1988-01-01
The reduction-to-periodicity method using the pseudo-spectral Fast Fourier Transform (FFT) technique is applied to the solution of nonperiodic problems, including the two-dimensional Navier-Stokes equations. The accuracy of the method is demonstrated by calculating derivatives of given functions, solving one- and two-dimensional convective-diffusive problems, and comparing the relative errors due to the FFT method with second-order Finite Difference Methods (FDM). Finally, the two-dimensional Navier-Stokes equations are solved by a fractional step procedure using both the FFT and the FDM methods for the driven cavity flow and the backward-facing step problems. Comparisons of these solutions provide a realistic assessment of the FFT method, indicating its range of applicability.
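The derivative comparison described above is easy to reproduce for a periodic function; a short sketch (illustrative, not the paper's code) contrasting the FFT derivative with the second-order central difference:

```python
import numpy as np

def spectral_derivative(u, L):
    """First derivative of a periodic sample via FFT: multiply each Fourier
    coefficient by i*k and transform back."""
    n = u.size
    ik = 2j * np.pi * np.fft.fftfreq(n, d=L / n)
    return np.fft.ifft(ik * np.fft.fft(u)).real

def fd2_derivative(u, L):
    """Second-order central finite difference with periodic wrap-around."""
    dx = L / u.size
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
```

For a band-limited function such as sin(x), the spectral derivative is exact to roundoff while the central difference carries an O(dx^2) error, which is the accuracy gap the abstract quantifies.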
NASA Astrophysics Data System (ADS)
Pétri, J.
2015-03-01
The close vicinity of neutron stars remains poorly constrained by observations. Although plenty of data are available for the peculiar class of pulsars, we are still unable to deduce the underlying plasma distribution in their magnetosphere. In the present paper, we try to unravel the magnetospheric structure starting from basic physics principles and reasonable assumptions about the magnetosphere. Beginning with the monopole force-free case, we compute accurate general relativistic solutions for the electromagnetic field around a slowly rotating magnetized neutron star. Moreover, here we address this problem by including the important effect of plasma screening. This is achieved by solving the time-dependent Maxwell equations in a curved space-time following the 3+1 formalism. We improved our previous numerical code based on pseudo-spectral methods in order to allow for possible discontinuities in the solution. Our algorithm, based on a multidomain decomposition of the simulation box, belongs to the discontinuous Galerkin finite element methods. We performed several sets of simulations to look for the general relativistic force-free monopole and split monopole solutions. Results show that our code is extremely powerful in handling extended domains of hundreds of light-cylinder radii rL. The code has been validated against known exact analytical monopole solutions in flat space-time. We also present semi-analytical calculations for the general relativistic vacuum monopole.
PSpectRe: a pseudo-spectral code for (P)reheating
Easther, Richard; Finkel, Hal; Roth, Nathaniel
2010-10-01
PSpectRe is a C++ program that uses Fourier-space pseudo-spectral methods to evolve interacting scalar fields in an expanding universe. PSpectRe is optimized for the analysis of parametric resonance in the post-inflationary universe and provides an alternative to finite differencing codes, such as Defrost and LatticeEasy. PSpectRe has both second- (Velocity-Verlet) and fourth-order (Runge-Kutta) time integrators. Given the same number of spatial points and/or momentum modes, PSpectRe is not significantly slower than finite differencing codes, despite the need for multiple Fourier transforms at each timestep, and exhibits excellent energy conservation. Further, by computing the post-resonance equation of state, we show that in some circumstances PSpectRe obtains reliable results while using substantially fewer points than a finite differencing code. PSpectRe is designed to be easily extended to other problems in early-universe cosmology, including the generation of gravitational waves during phase transitions and pre-inflationary bubble collisions. Specific applications of this code will be described in future work.
Numerical stability analysis of the pseudo-spectral analytical time-domain PIC algorithm
Godfrey, Brendan B.; Vay, Jean-Luc; Haber, Irving
2014-02-01
The pseudo-spectral analytical time-domain (PSATD) particle-in-cell (PIC) algorithm solves the vacuum Maxwell's equations exactly, has no Courant time-step limit (as conventionally defined), and offers substantial flexibility in plasma and particle beam simulations. It is, however, not free of the usual numerical instabilities, including the numerical Cherenkov instability, when applied to relativistic beam simulations. This paper derives and solves the numerical dispersion relation for the PSATD algorithm and compares the results with corresponding behavior of the more conventional pseudo-spectral time-domain (PSTD) and finite difference time-domain (FDTD) algorithms. In general, PSATD offers superior stability properties over a reasonable range of time steps. More importantly, one version of the PSATD algorithm, when combined with digital filtering, is almost completely free of the numerical Cherenkov instability for time steps (scaled to the speed of light) comparable to or smaller than the axial cell size.
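The stability and dispersion contrast with FDTD can be illustrated by the 1D FDTD numerical dispersion relation, sin(ωΔt/2)/(cΔt) = sin(kΔx/2)/Δx, whereas a spectral solver uses the exact ω = ck. A sketch of the resulting numerical phase velocity (our illustration; the paper's analysis is multi-dimensional and includes the beam):

```python
import numpy as np

def fdtd_phase_velocity(k, dx, dt, c=1.0):
    """Numerical phase velocity omega/k of the 1D FDTD leapfrog scheme,
    obtained by inverting sin(w dt/2)/(c dt) = sin(k dx/2)/dx."""
    w = (2.0 / dt) * np.arcsin((c * dt / dx) * np.sin(k * dx / 2))
    return w / k
```

In 1D, at the Courant limit dt = dx/c the scheme is dispersionless (the "magic time step"); for smaller steps waves lag the true speed of light, and it is this subluminal numerical dispersion that feeds the numerical Cherenkov instability in relativistic beam simulations.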
P, Anbazhagan; Uday, Anjali; Moustafa, Sayed S. R.; Al-Arifi, Nassir S. N.
2016-01-01
Ground-motion prediction equations that are used to predict acceleration values are generally developed for a 5% viscous damping ratio. Special structures and structures that use damping devices may have damping ratios other than the conventionally used ratio of 5%. Hence, for such structures, the intensity measures predicted by conventional ground-motion prediction equations need to be converted to a particular level of damping using a damping reduction factor (DRF). DRF is the ratio of the spectral ordinate at 5% damping to the ordinate at a defined level of damping. In this study, the DRF has been defined using the spectral ordinate of pseudo-spectral acceleration and the effect of factors such as the duration of ground motion, magnitude, hypocenter distance, site classification, damping, and period are studied. In this study, an attempt has also been made to develop an empirical model for the DRF that is specifically applicable to the Himalayan region in terms of these predictor variables. A recorded earthquake with 410 horizontal motions was used, with data characterized by magnitudes ranging from 4 to 7.8 and hypocentral distances up to 520 km. The damping was varied from 0.5–30% and the period range considered was 0.02 to 10 s. The proposed model was compared and found to coincide well with models in the existing literature. The proposed model can be used to compute the DRF at any specific period, for any given value of predictor variables. PMID:27611854
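The quantity being modeled, DRF(T, ξ) = Sa(T, 5%)/Sa(T, ξ), can be computed directly from a record. The sketch below uses the Newmark average-acceleration method on a unit-mass SDOF oscillator; it is our own illustration of the definition, not the authors' regression model:

```python
import numpy as np

def psa(ag, dt, period, zeta):
    """Pseudo-spectral acceleration w^2 * max|u| of a linear SDOF oscillator
    driven by ground acceleration ag, via the Newmark average-acceleration
    method (gamma = 1/2, beta = 1/4), incremental formulation."""
    w = 2 * np.pi / period
    m, c, k = 1.0, 2 * zeta * w, w**2
    beta, gamma = 0.25, 0.5
    kh = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    p = -m * ag                      # effective force for relative motion
    u, v, a = 0.0, 0.0, p[0] / m
    umax = 0.0
    for i in range(len(ag) - 1):
        dp = p[i + 1] - p[i]
        dph = (dp + (m / (beta * dt) + gamma * c / beta) * v
               + (m / (2 * beta) + dt * (gamma / (2 * beta) - 1) * c) * a)
        du = dph / kh
        dv = (gamma / (beta * dt)) * du - (gamma / beta) * v \
             + dt * (1 - gamma / (2 * beta)) * a
        da = du / (beta * dt**2) - v / (beta * dt) - a / (2 * beta)
        u, v, a = u + du, v + dv, a + da
        umax = max(umax, abs(u))
    return w**2 * umax

def drf(ag, dt, period, zeta):
    """Damping reduction factor: Sa at 5% damping over Sa at damping zeta."""
    return psa(ag, dt, period, 0.05) / psa(ag, dt, period, zeta)
```

For resonant excitation, increasing the damping from 5% to 20% reduces the peak response by roughly the ratio of the damping values, so the DRF exceeds one, consistent with the trends the regression model captures.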
Adaptive Algebraic Multigrid Methods
Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J
2004-04-09
Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.
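For contrast with the algebraic setting, the geometric two-grid cycle that AMG generalizes can be sketched for the 1D Poisson problem. This is an illustrative geometric multigrid sketch, not the adaptive AMG of the paper:

```python
import numpy as np

def two_grid_cycle(u, f, h):
    """One two-grid cycle for -u'' = f with zero Dirichlet BCs on n = 2^k + 1
    points: smooth, restrict the residual, solve the coarse problem directly,
    prolong the correction, smooth again."""
    def smooth(u, sweeps=3):
        for _ in range(sweeps):    # weighted Jacobi, omega = 2/3
            u[1:-1] += (2 / 3) * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2 * u[1:-1])
        return u
    u = smooth(u)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2   # residual
    rc = r[::2]                                   # restriction by injection
    nc, hc = rc.size, 2 * h
    A = (np.diag(2 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / hc**2  # coarse Poisson operator
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])       # exact coarse solve
    u += np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolongation
    return smooth(u)
```

AMG replaces the geometric restriction, prolongation, and coarse operator here with ones built from the matrix entries alone, and the adaptive variant in the abstract additionally learns the near-null space instead of assuming it.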
NASA Astrophysics Data System (ADS)
Hershkovitz, Yaron; Anker, Yaakov; Ben-Dor, Eyal; Schwartz, Guy; Gasith, Avital
2010-05-01
In-stream vegetation is a key ecosystem component in many fluvial ecosystems, having cascading effects on stream conditions and biotic structure. Traditionally, ground-level surveys (e.g. grid and transect analyses) are commonly used for estimating the cover of aquatic macrophytes. Nonetheless, this methodological approach is highly time-consuming and usually yields information that is practically limited to habitat and sub-reach scales. In contrast, remote-sensing techniques (e.g. satellite imagery and airborne photography) enable the collection of large datasets over section, stream, and basin scales in a relatively short time and at reasonable cost. However, the commonly used high spatial resolution (1 m) is often inadequate for examining aquatic vegetation on habitat or sub-reach scales. We examined the utility of a pseudo-spectral methodology, using RGB digital photography, for estimating the cover of in-stream vegetation in a small Mediterranean-climate stream. We compared this methodology with that obtained by traditional ground-level grid methodology and with an airborne hyper-spectral remote sensing survey (AISA-ES). The study was conducted along a 2 km section of an intermittent stream (Taninim stream, Israel). When studied, the stream was dominated by patches of watercress (Nasturtium officinale) and mats of filamentous algae (Cladophora glomerata). The extent of vegetation cover at the habitat and section scales (10^0 and 10^4 m, respectively) was estimated by the pseudo-spectral methodology, using an airborne Roli camera with a Phase-One P 45 (39 MP) CCD image acquisition unit. The swaths were taken at an elevation of about 460 m, giving a spatial resolution of about 4 cm (NADIR). For measuring vegetation cover at the section scale (10^4 m) we also used a 'push-broom' AISA-ES hyper-spectral swath having a sensor configuration of 182 bands (350-2500 nm) at an elevation of ca. 1,200 m (i.e. a spatial resolution of ca. 1 m). Simultaneously, with every swath we used an Analytical
Accelerated Adaptive Integration Method
2015-01-01
Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (Bromocyclohexane) and the more complex biomolecule Thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083
Advances in Adaptive Control Methods
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2009-01-01
This poster presentation describes recent advances in adaptive control technology developed by NASA. Optimal Control Modification is a novel adaptive law that can improve performance and robustness of adaptive control systems. A new technique has been developed to provide an analytical method for computing time delay stability margin for adaptive control systems.
Parallel multilevel adaptive methods
NASA Technical Reports Server (NTRS)
Dowell, B.; Govett, M.; Mccormick, S.; Quinlan, D.
1989-01-01
The progress of a project for the design and analysis of a multilevel adaptive algorithm (AFAC/HM/) targeted for the Navier-Stokes Computer is discussed. Initial timing results for AFAC, coupled with multigrid and an efficient load balancer, on a 16-node Intel iPSC/2 hypercube are presented.
Milne, Roger Brent
1995-12-01
This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.
Method of adaptive artificial viscosity
NASA Astrophysics Data System (ADS)
Popov, I. V.; Fryazinov, I. V.
2011-09-01
A new finite-difference method for the numerical solution of the gas dynamics equations is proposed. The method is a uniform, monotone finite-difference scheme with second-order approximation in time and space outside the domains of shock and compression waves. It is based on introducing an adaptive artificial viscosity (AAV) into the gas dynamics equations. In this paper, the method is analyzed for 2D geometry. Test computations of the movement of contact discontinuities and shock waves and of the breakup of discontinuities are demonstrated.
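The idea of switching artificial viscosity on only where it is needed can be sketched in one dimension: viscosity is activated in compression zones (where a shock may form) and kept at zero elsewhere. The switch and the coefficient `c_visc` below are illustrative, not the paper's actual AAV formula.

```python
# Sketch of an adaptive artificial viscosity (AAV) switch in 1D.
# Viscosity is nonzero only where the flow is compressing (du/dx < 0),
# so smooth regions keep second-order accuracy. The velocity field and
# the coefficient "c_visc" are invented for illustration.

def adaptive_viscosity(u, dx, c_visc=1.0):
    """Return a per-interface viscosity, nonzero only in compression zones."""
    nu = []
    for i in range(len(u) - 1):
        du = u[i + 1] - u[i]
        # Compression: velocity decreasing -> possible shock -> add viscosity.
        nu.append(c_visc * dx * abs(du) if du < 0 else 0.0)
    return nu

if __name__ == "__main__":
    u = [1.0, 1.0, 0.5, 0.0, 0.0]  # compression between cells 1 and 3
    print(adaptive_viscosity(u, dx=0.1))
```

In a full scheme the resulting viscosity multiplies a diffusion term added to the momentum and energy equations; here only the switch itself is shown.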
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain so as to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations seen with standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.
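The role of the adaptive gain can be seen in a minimal scalar model-reference adaptive control (MRAC) sketch. This is the standard Lyapunov-based MRAC law, not the optimal control modification itself; the plant and reference-model numbers are invented.

```python
# Minimal scalar MRAC sketch: a larger adaptive gain "gamma" speeds up
# adaptation and shrinks the accumulated tracking error. Standard
# Lyapunov-based update law, NOT Nguyen's optimal control modification;
# all plant/model constants are illustrative.

def simulate_mrac(gamma, steps=4000, dt=0.001):
    """Return the integrated squared tracking error over the run."""
    a, b = 1.0, 1.0        # "unknown" unstable plant: xdot = a*x + b*u
    am, bm = -2.0, 2.0     # stable reference model: xmdot = am*xm + bm*r
    x = xm = 0.0
    kx = kr = 0.0          # adaptive feedback/feedforward gains
    r = 1.0                # constant reference command
    ise = 0.0
    for _ in range(steps):
        u = kx * x + kr * r
        e = x - xm                  # tracking error
        kx -= gamma * e * x * dt    # Lyapunov-based adaptation,
        kr -= gamma * e * r * dt    # assuming sign(b) = +1 is known
        x += (a * x + b * u) * dt
        xm += (am * xm + bm * r) * dt
        ise += e * e * dt
    return ise

if __name__ == "__main__":
    # A larger adaptive gain adapts faster, reducing the accumulated error.
    print(simulate_mrac(gamma=1.0), simulate_mrac(gamma=50.0))
```

The abstract's point is precisely that pushing `gamma` higher eventually excites high-frequency oscillations; this toy only exhibits the fast-adaptation benefit.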
NASA Astrophysics Data System (ADS)
Homann, Holger; Dreher, Jürgen; Grauer, Rainer
2007-10-01
In this paper we investigate the impact of floating-point precision and of the interpolation scheme on the results of direct numerical simulations (DNS) of turbulence by pseudo-spectral codes. Three different floating-point precision configurations show no differences in the statistical results. This implies that single-precision computations allow for increased Reynolds numbers due to the reduced amount of memory needed. The interpolation scheme for obtaining velocity values at particle positions has a noticeable impact on the Lagrangian acceleration statistics. A tri-cubic scheme results in a slightly broader acceleration probability density function than a tri-linear scheme. Furthermore, the scaling behavior obtained with the cubic interpolation scheme exhibits a tendency towards a slightly increased degree of intermittency compared to the linear one.
A new orientation-adaptive interpolation method.
Wang, Qing; Ward, Rabab Kreidieh
2007-04-01
We propose an isophote-oriented, orientation-adaptive interpolation method. The proposed method employs an interpolation kernel that adapts to the local orientation of isophotes, and the pixel values are obtained through an oriented, bilinear interpolation. We show that, by doing so, the curvature of the interpolated isophotes is reduced, and, thus, zigzagging artifacts are largely suppressed. Analysis and experiments show that images interpolated using the proposed method are visually pleasing and almost artifact free.
The Method of Adaptive Comparative Judgement
ERIC Educational Resources Information Center
Pollitt, Alastair
2012-01-01
Adaptive Comparative Judgement (ACJ) is a modification of Thurstone's method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better.…
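The scoring side of comparative judgement can be illustrated with a Bradley-Terry-style fit of pairwise "which is better" decisions. This is a generic sketch, not Pollitt's ACJ system: the adaptive pairing step is omitted, and the item qualities, win counts, and iteration count are invented.

```python
# Bradley-Terry-style recovery of a quality scale from pairwise
# judgements, via the standard MM (minorize-maximize) update.
import itertools

def bradley_terry(n_items, wins, iters=200):
    """wins[(i, j)] = number of times item i was judged better than j."""
    p = [1.0] * n_items  # latent "quality" of each item
    for _ in range(iters):
        new_p = []
        for i in range(n_items):
            w_i = sum(wins.get((i, j), 0) for j in range(n_items))
            den = sum((wins.get((i, j), 0) + wins.get((j, i), 0)) / (p[i] + p[j])
                      for j in range(n_items) if j != i)
            new_p.append(w_i / den if den > 0 else p[i])
        s = sum(new_p)
        p = [q * n_items / s for q in new_p]  # fix the arbitrary scale
    return p

if __name__ == "__main__":
    quality = [0.5, 1.0, 2.0, 4.0]  # hidden "true" script qualities
    wins = {}
    for i, j in itertools.permutations(range(4), 2):
        wins[(i, j)] = 4 if quality[i] > quality[j] else 1  # synthetic judgements
    print(bradley_terry(4, wins))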
Adaptive Discontinuous Galerkin Methods in Multiwavelets Bases
Archibald, Richard K; Fann, George I; Shelton Jr, William Allison
2011-01-01
We use a multiwavelet basis with the Discontinuous Galerkin (DG) method to produce a multi-scale DG method. We apply this Multiwavelet DG method to convection and convection-diffusion problems in multiple dimensions. Merging the DG method with multiwavelets allows the adaptivity in the DG method to be resolved through manipulation of multiwavelet coefficients rather than grid manipulation. Additionally, the Multiwavelet DG method is tested on non-linear equations in one dimension and on the cubed sphere.
Domain adaptive boosting method and its applications
NASA Astrophysics Data System (ADS)
Geng, Jie; Miao, Zhenjiang
2015-03-01
Differences in data distribution widely exist among datasets, i.e., domains. For many pattern recognition, natural language processing, and content-based analysis systems, the decrease in performance caused by domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach, with extensions to cover the domain differences between the source and target domains. The approach comprises two main stages: source-domain clustering and source-domain sample selection. By iteratively adding selected training samples from the source domain, the discrimination model is able to achieve better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for domains with large-scale samples and is easy to extend for multisource adaptation. We implement this method on three computer vision systems: a skin detection model for single images, a video concept detection model, and an object classification model. In the experiments, we compare the performance of several commonly used methods and the proposed DAB. Under most situations, DAB is superior.
Structured adaptive grid generation using algebraic methods
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.
1993-01-01
The accuracy of a numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively dense, computationally expensive grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach in which a functional containing measures of grid smoothness, orthogonality, and volume variation is minimized by using a variational principle. This approach provides a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. The algebraic method, on the other hand, requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm in which the grid movement is governed by estimates of the local error in the numerical solution; this is achieved by requiring points in large-error regions to attract other points and points in low-error regions to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. The development proceeds in three steps. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably most crucial, step is to redistribute grid points in the computational domain according to this weighting mesh. The third and last step is to re-evaluate the flow properties at the new grid locations by an appropriate search/interpolate scheme. The adaptive weighting mesh provides the information on the desired concentration
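The equidistribution law invoked in the first step can be sketched in one dimension: new nodes are placed so that each cell carries an equal share of a weight (error) measure. The weight function and grid below are illustrative, not from the paper.

```python
# 1-D equidistribution sketch: redistribute grid points so that the
# cumulative weight (an error/gradient surrogate) is equal per cell.
import bisect

def equidistribute(x_old, w, n_new):
    """x_old: sorted nodes; w: weight per old cell (len = len(x_old) - 1).
    Returns n_new nodes with equal cumulative weight per new cell."""
    cum = [0.0]  # cumulative weight at the old nodes
    for i in range(len(w)):
        cum.append(cum[-1] + w[i] * (x_old[i + 1] - x_old[i]))
    total = cum[-1]
    x_new = [x_old[0]]
    for k in range(1, n_new - 1):
        target = total * k / (n_new - 1)
        j = bisect.bisect_left(cum, target) - 1
        j = max(0, min(j, len(w) - 1))
        # invert the piecewise-linear cumulative weight within cell j
        frac = (target - cum[j]) / (cum[j + 1] - cum[j])
        x_new.append(x_old[j] + frac * (x_old[j + 1] - x_old[j]))
    x_new.append(x_old[-1])
    return x_new

if __name__ == "__main__":
    x = [i / 10 for i in range(11)]  # uniform grid on [0, 1]
    # heavy weight where a sharp feature sits, around x = 0.5
    w = [10.0 if 0.4 <= 0.5 * (x[i] + x[i + 1]) <= 0.6 else 1.0 for i in range(10)]
    print(equidistribute(x, w, 11))
```

The redistributed nodes cluster where the weight is large, which is the attraction/repulsion behavior the abstract describes.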
Adaptive Method for Nonsmooth Nonnegative Matrix Factorization.
Yang, Zuyuan; Xiang, Yong; Xie, Kan; Lai, Yue
2017-04-01
Nonnegative matrix factorization (NMF) is an emerging tool for meaningful low-rank matrix representation. In NMF, explicit constraints are usually required so that NMF generates the desired products (or factorizations), especially when the products have significant sparseness features. It is known that the ability of NMF to learn sparse representations can be improved by embedding a smoothness factor between the products. Motivated by this result, we propose an adaptive nonsmooth NMF (Ans-NMF) method in this paper. In our method, the embedded factor is obtained by a data-related approach, so it matches well with the underlying products, implying superior faithfulness of the representations. Moreover, owing to the use of an adaptive selection scheme for this factor, the sparseness of the products can be separately constrained, leading to wider applicability and interpretability. Furthermore, since the adaptive selection scheme is processed by solving a series of typical linear programming problems, it can be easily implemented. Simulations using computer-generated data and real-world data show the advantages of the proposed Ans-NMF method over the state-of-the-art methods.
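For context, the baseline factorization that Ans-NMF builds on can be sketched with the classic multiplicative updates. The adaptive nonsmooth factor itself is not reproduced here; the tiny synthetic matrix is invented.

```python
# Plain NMF via Lee-Seung multiplicative updates (the Ans-NMF baseline).
# V (m x n, nonnegative) is approximated by W (m x r) times H (r x n).
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, r, iters=500, seed=0):
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]
    eps = 1e-9  # guards against division by zero
    for _ in range(iters):
        # update H, holding W fixed
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, matmul(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)] for i in range(r)]
        # update W, holding H fixed
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(matmul(W, H), Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)] for i in range(m)]
    return W, H

def frob_err(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0]))) ** 0.5

if __name__ == "__main__":
    V = matmul([[1, 0], [0, 1], [1, 1], [2, 1]], [[1, 2, 0, 1], [0, 1, 1, 2]])
    W, H = nmf(V, 2)
    print(frob_err(V, W, H))
```

Because the updates are multiplicative, nonnegativity of W and H is preserved automatically; Ans-NMF adds a smoothing matrix between W and H, which is omitted here.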
Parallel adaptive wavelet collocation method for PDEs
Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.
2015-10-01
A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
Adaptive envelope protection methods for aircraft
NASA Astrophysics Data System (ADS)
Unnikrishnan, Suraj
Carefree handling refers to the ability of a pilot to operate an aircraft without the need to continuously monitor aircraft operating limits. At the heart of all carefree handling or maneuvering systems, also referred to as envelope protection systems, are algorithms and methods for predicting future limit violations. Recently, envelope protection methods that have gained wider acceptance translate limit-proximity information into its equivalent in the control channel. Existing envelope protection algorithms either use a very small prediction horizon or are static methods with no capability to adapt to changes in system configuration. Adaptive approaches that maximize the prediction horizon, such as dynamic trim, are only applicable to steady-state-response-critical limit parameters. In this thesis, a new adaptive envelope protection method is developed that is applicable to both steady-state- and transient-response-critical limit parameters. The approach is based upon devising the most aggressive optimal control profile to the limit boundary and using it to compute control limits. Pilot-in-the-loop evaluations of the proposed approach are conducted at the Georgia Tech Carefree Maneuver lab for transient longitudinal hub-moment limit protection. Carefree maneuvering is the dual of carefree handling in the realm of autonomous Uninhabited Aerial Vehicles (UAVs). Designing a flight control system to fully and effectively utilize the operational flight envelope is very difficult. With the increasing role of and demands for extreme maneuverability, there is a need to develop envelope protection methods for autonomous UAVs. In this thesis, a full-authority automatic envelope protection method is proposed for limit protection in UAVs. The approach uses an adaptive estimate of the limit parameter dynamics and finite-time-horizon predictions to detect impending limit-boundary violations. Limit violations are prevented by treating the limit boundary as an obstacle and by correcting nominal control
Ensemble transform sensitivity method for adaptive observations
NASA Astrophysics Data System (ADS)
Zhang, Yu; Xie, Yuanfu; Wang, Hongli; Chen, Dehui; Toth, Zoltan
2016-01-01
The Ensemble Transform (ET) method has been shown to be useful in providing guidance for adaptive observation deployment. It predicts the forecast error variance reduction for each possible deployment using its corresponding transformation matrix in an ensemble subspace. In this paper, a new ET-based sensitivity (ETS) method, which calculates the gradient of forecast error variance reduction in terms of analysis error variance reduction, is proposed to specify regions for possible adaptive observations. ETS is a first-order approximation of the ET; it requires just one calculation of a transformation matrix, increasing computational efficiency (a 60%-80% reduction in computational cost). An explicit mathematical formulation of the ETS gradient is derived and described. Both the ET and ETS methods are applied to the Hurricane Irene (2011) case and a heavy rainfall case for comparison. The numerical results imply that the sensitive areas estimated by the ETS and ET are similar. However, ETS is much more efficient, particularly when the resolution is higher and the number of ensemble members is larger.
Adaptive method with intercessory feedback control for an intelligent agent
Goldsmith, Steven Y.
2004-06-22
An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.
Adaptive Accommodation Control Method for Complex Assembly
NASA Astrophysics Data System (ADS)
Kang, Sungchul; Kim, Munsang; Park, Shinsuk
Robotic systems have been used to automate assembly tasks in manufacturing and in teleoperation. Conventional robotic systems, however, have been ineffective in controlling contact force in the multiple contact states of complex assembly that involves interactions between complex-shaped parts. Unlike robots, humans excel at complex assembly tasks by utilizing their intrinsic impedance, force and torque sensation, and tactile contact cues. By examining human behavior in assembling complex parts, this study proposes a novel geometry-independent control method for robotic assembly using an adaptive accommodation (or damping) algorithm. Two important conditions for complex assembly, target approachability and bounded contact force, can be met by the proposed control scheme. It generates target-approachable motion that leads the object closer to a desired target position while the contact force is kept under a predetermined value. Experimental results from complex assembly tests have confirmed the feasibility and applicability of the proposed method.
Adapting implicit methods to parallel processors
Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D.
1994-12-31
When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice (e.g., larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed-memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems, where it is common to distribute the grid points of the computational domain over the processors so as to maintain a relatively even workload per processor. This creates a problem at locations in the domain where adjacent points are not on the same processor: in order for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this results in idle processors during part of the computation, and as the number of idle processors increases, the effective speed improvement from using a parallel processor decreases.
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo
2014-04-15
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-11-18
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Adaptive filtering for the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Marié, Simon; Gloerfelt, Xavier
2017-03-01
In this study, a new selective filtering technique is proposed for the lattice Boltzmann method. The technique is based on an adaptive implementation of the selective filter coefficient σ. The proposed model makes this coefficient dependent on the shear stress in order to restrict the spatial filtering to shear-stress regions where numerical instabilities may occur. Different parameters are tested on 2D test cases sensitive to numerical stability and on a 3D decaying Taylor-Green vortex. The results are compared to the classical static filtering technique and to a standard subgrid-scale model, and show significant improvements, in particular for low-order filters consistent with the LBM stencil.
Adaptive numerical methods for partial differential equations
Colella, P.
1995-07-01
This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting grids of disparate resolution, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
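The flag-and-refine cycle described above can be sketched in one dimension: cells whose error indicator exceeds a tolerance are subdivided, concentrating resolution near a steep feature. The indicator function and tolerance are illustrative, not Berger's actual criteria, and real AMR overlays structured fine patches rather than splitting cells individually.

```python
# Minimal 1-D flag-and-refine sketch: split every cell whose error
# indicator (evaluated at the cell midpoint) exceeds a tolerance.

def refine(cells, indicator, tol):
    """cells: list of (x_left, x_right). Split flagged cells in two."""
    out = []
    for (xl, xr) in cells:
        xm = 0.5 * (xl + xr)
        if indicator(xm) > tol:
            out.extend([(xl, xm), (xm, xr)])  # flagged: overlay finer cells
        else:
            out.append((xl, xr))              # unflagged: keep coarse cell
    return out

if __name__ == "__main__":
    # steep feature near x = 0.5 -> large gradient-style indicator there
    ind = lambda x: 1.0 / (abs(x - 0.5) + 0.01)
    grid = [(i / 4, (i + 1) / 4) for i in range(4)]
    for _ in range(3):  # three refinement passes
        grid = refine(grid, ind, tol=5.0)
    print(len(grid), min(r - l for l, r in grid))
```

After three passes the smallest cells sit around x = 0.5 while the smooth ends of the domain keep the original coarse spacing.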
A Method for Severely Constrained Item Selection in Adaptive Testing.
ERIC Educational Resources Information Center
Stocking, Martha L.; Swanson, Len
1993-01-01
A method is presented for incorporating a large number of constraints on adaptive item selection in the construction of computerized adaptive tests. The method, which emulates practices of expert test specialists, is illustrated for verbal and quantitative measures. Its foundation is application of a weighted deviations model and algorithm. (SLD)
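The weighted-deviations idea can be sketched as a greedy selection that, at each step, picks the item minimizing the weighted deviation from content-coverage targets. This toy omits the information-maximization side of operational adaptive testing; the item pool, targets, and weights are invented.

```python
# Toy weighted-deviations item selection: choose, at each step, the
# remaining item whose selection least violates content targets.

def pick_item(pool, counts, targets, weights):
    """pool: item -> content area. Return the least-deviating unused item."""
    def deviation(item):
        area_of = pool[item]
        dev = 0.0
        for area, target in targets.items():
            have = counts.get(area, 0) + (1 if area_of == area else 0)
            dev += weights[area] * abs(have - target)  # weighted miss
        return dev
    return min(pool, key=deviation)

def assemble(pool, targets, weights, length):
    """Greedily assemble a fixed-length test from the pool."""
    counts, chosen, pool = {}, [], dict(pool)
    for _ in range(length):
        item = pick_item(pool, counts, targets, weights)
        chosen.append(item)
        counts[pool[item]] = counts.get(pool[item], 0) + 1
        del pool[item]
    return chosen, counts

if __name__ == "__main__":
    pool = {'a1': 'algebra', 'a2': 'algebra', 'a3': 'algebra',
            'g1': 'geometry', 'g2': 'geometry'}
    targets = {'algebra': 2, 'geometry': 1}
    weights = {'algebra': 1.0, 'geometry': 1.0}
    print(assemble(pool, targets, weights, length=3))
```

With equal weights the assembled 3-item test meets both content targets exactly; raising one area's weight makes its constraint harder to violate, which is the model's lever for "severely constrained" selection.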
Solution-adaptive finite element method in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1993-01-01
Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.
Adaptive method for electron bunch profile prediction
Scheinker, Alexander; Gessner, Spencer
2015-10-01
We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using matlab and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which are important for the plasma wakefield acceleration experiments being explored at FACET. © 2015 authors. Published by the American Physical Society.
Adaptive finite element methods in electrochemistry.
Gavaghan, David J; Gillow, Kathryn; Süli, Endre
2006-12-05
In this article, we review some of our previous work that considers the general problem of numerical simulation of the currents at microelectrodes using an adaptive finite element approach. Microelectrodes typically consist of an electrode embedded (or recessed) in an insulating material. For all such electrodes, numerical simulation is made difficult by the presence of a boundary singularity at the electrode edge (where the electrode meets the insulator), manifested by the large increase in the current density at this point, often referred to as the edge effect. Our approach to overcoming this problem has involved the derivation of an a posteriori bound on the error in the numerical approximation for the current that can be used to drive an adaptive mesh-generation algorithm, allowing calculation of the quantity of interest (the current) to within a prescribed tolerance. We illustrate the generic applicability of the approach by considering a broad range of steady-state applications of the technique.
Adaptive methods, rolling contact, and nonclassical friction laws
NASA Technical Reports Server (NTRS)
Oden, J. T.
1989-01-01
Results and methods on three different areas of contemporary research are outlined. These include adaptive methods, the rolling contact problem for finite deformation of a hyperelastic or viscoelastic cylinder, and non-classical friction laws for modeling dynamic friction phenomena.
An Adaptive Discontinuous Galerkin Method for Modeling Atmospheric Convection (Preprint)
2011-04-13
Sensitivity studies address an important question for any adaptive numerical model: how accurate is the adaptive method? A comparison criterion is defined and then used for sensitivity studies, including a comparison between a simulation on an adaptive mesh and a simulation on a uniform mesh, and a study of the sensitivity to the size of the refinement region.
Adaptable radiation monitoring system and method
Archer, Daniel E.; Beauchamp, Brock R.; Mauger, G. Joseph; Nelson, Karl E.; Mercer, Michael B.; Pletcher, David C.; Riot, Vincent J.; Schek, James L.; Knapp, David A.
2006-06-20
A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.
Adaptive computational methods for aerothermal heating analysis
NASA Technical Reports Server (NTRS)
Price, John M.; Oden, J. Tinsley
1988-01-01
The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations with concentration on shock and inviscid flow field capturing. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme are presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.
An adaptive pseudospectral method for discontinuous problems
NASA Technical Reports Server (NTRS)
Augenbaum, Jeffrey M.
1988-01-01
The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep gradient solutions of hyperbolic pde's by approximation in suitably chosen coordinate systems. Continuous, high gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.
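The coordinate-mapping idea can be illustrated with Chebyshev interpolation of a steep-gradient function, comparing direct interpolation in x with interpolation in a computational coordinate s under a sinh map that clusters points near the gradient. The sinh map, a = 5, and n = 40 are illustrative choices, not the paper's mappings.

```python
# Mapped vs. unmapped Chebyshev interpolation of f(x) = tanh(20x).
# The map x = sinh(a*s)/sinh(a) clusters collocation points near x = 0,
# where the gradient is steep, greatly reducing the interpolation error.
import math

def cheb_points(n):
    """Chebyshev-Lobatto points cos(k*pi/(n-1)), k = 0..n-1."""
    return [math.cos(math.pi * k / (n - 1)) for k in range(n)]

def barycentric(xs, ys, x):
    """Barycentric Lagrange interpolation at Chebyshev-Lobatto nodes."""
    w = [(-1) ** k * (0.5 if k in (0, len(xs) - 1) else 1.0)
         for k in range(len(xs))]
    num = den = 0.0
    for xk, yk, wk in zip(xs, ys, w):
        if x == xk:
            return yk
        t = wk / (x - xk)
        num += t * yk
        den += t
    return num / den

def max_error(mapped, n=40, a=5.0):
    f = lambda x: math.tanh(20 * x)
    s = cheb_points(n)
    if mapped:
        xs = [math.sinh(a * si) / math.sinh(a) for si in s]  # physical nodes
        vals = [f(x) for x in xs]
        # interpolate in the computational coordinate s = m^{-1}(x)
        interp = lambda x: barycentric(s, vals, math.asinh(x * math.sinh(a)) / a)
    else:
        interp = lambda x, vals=[f(si) for si in s]: barycentric(s, vals, x)
    pts = [i / 500 - 1 for i in range(1001)]
    return max(abs(interp(x) - f(x)) for x in pts)

if __name__ == "__main__":
    print(max_error(False), max_error(True))
```

Measured in the physical coordinate, the mapped expansion is far more accurate with the same number of collocation points, which is the effect the abstract describes for steep-gradient functions.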
Moving and adaptive grid methods for compressible flows
NASA Technical Reports Server (NTRS)
Trepanier, Jean-Yves; Camarero, Ricardo
1995-01-01
This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.
NASA Astrophysics Data System (ADS)
Rosen, A. L.; Krumholz, M. R.; Oishi, J. S.; Lee, A. T.; Klein, R. I.
2017-02-01
We present a highly parallel multi-frequency hybrid radiation hydrodynamics algorithm that combines a spatially adaptive long-characteristics method for the radiation field from point sources with a moment method that handles the diffuse radiation field produced by a volume-filling fluid. Our Hybrid Adaptive Ray-Moment Method (HARM2) operates on patch-based adaptive grids, is compatible with asynchronous time stepping, and works with any moment method. In comparison to previous long-characteristics methods, we have greatly improved the parallel performance of the adaptive long-characteristics method by developing a completely asynchronous, non-blocking communication algorithm. As a result of this improvement, our implementation achieves near-perfect scaling up to O(10^3) processors on distributed-memory machines. We present a series of tests to demonstrate the accuracy and performance of the method.
Adaptive mesh strategies for the spectral element method
NASA Technical Reports Server (NTRS)
Mavriplis, Catherine
1992-01-01
An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
Adaptive Kernel Based Machine Learning Methods
2012-10-15
multiscale collocation method with a matrix compression strategy to discretize the system of integral equations and then use the multilevel augmentation method to solve the resulting discrete system. A priori and a posteriori parameter choice strategies are developed for these methods. The … performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed
Adaptive upscaling with the dual mesh method
Guerillot, D.; Verdiere, S.
1997-08-01
The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance the a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous medium and to an actual field case in South America.
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.
A numerical study of adaptive space and time discretisations for Gross-Pitaevskii equations.
Thalhammer, Mechthild; Abhau, Jochen
2012-08-15
As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross-Pitaevskii equation arising in the description of Bose-Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross-Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0 < ε ≪ 1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space restricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values, locally adaptive time discretisations facilitate choosing the time stepsizes sufficiently small in order that
Adjoint Methods for Guiding Adaptive Mesh Refinement in Tsunami Modeling
NASA Astrophysics Data System (ADS)
Davis, B. N.; LeVeque, R. J.
2016-12-01
One difficulty in developing numerical methods for tsunami modeling is the fact that solutions contain time-varying regions where much higher resolution is required than elsewhere in the domain, particularly when tracking a tsunami propagating across the ocean. The open source GeoClaw software deals with this issue by using block-structured adaptive mesh refinement to selectively refine around propagating waves. For problems where only a target area of the total solution is of interest (e.g., one coastal community), a method that allows identifying and refining the grid only in regions that influence this target area would significantly reduce the computational cost of finding a solution. In this work, we show that solving the time-dependent adjoint equation and using a suitable inner product with the forward solution allows more precise refinement of the relevant waves. We present the adjoint methodology first in one space dimension for illustration and in a broad context since it could also be used in other adaptive software, and potentially for other tsunami applications beyond adaptive refinement. We then show how this adjoint method has been integrated into the adaptive mesh refinement strategy of the open source GeoClaw software and present tsunami modeling results showing that the accuracy of the solution is maintained and the computational time required is significantly reduced through the integration of the adjoint method into adaptive mesh refinement.
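The adjoint-guided flagging can be illustrated in a toy 1-D advection setting. This is a hand-built sketch, not GeoClaw code: the Gaussian pulse shapes, gauge location, and tolerance are all assumptions. The forward and adjoint solutions move along characteristics, and cells are refined only where their product (the inner product integrand) is non-negligible:

```python
import numpy as np

x = np.linspace(0, 12, 241)
T = 6.0
gauge = 8.0                              # location of interest at the final time

pulse = lambda c: np.exp(-20 * (x - c) ** 2)

def forward(t):
    # two right-going waves (unit speed); only the first is at the gauge at t = T
    return pulse(2.0 + t) + pulse(5.0 + t)

def adjoint(t):
    # adjoint pulse traced backward in time from the gauge
    return pulse(gauge - (T - t))

def flag_cells(t, tol=1e-4):
    # refine only where the forward solution overlaps the adjoint influence region
    return np.abs(forward(t) * adjoint(t)) > tol

flagged = x[flag_cells(2.0)]             # only the wave that matters gets flagged
```

At t = 2 the relevant wave sits near x = 4, and only cells there are flagged; the second wave, which never influences the gauge reading at the final time, is ignored, which is the cost saving the abstract describes.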
Studies of an Adaptive Kaczmarz Method for Electrical Impedance Imaging
NASA Astrophysics Data System (ADS)
Li, Taoran; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.
2013-04-01
We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term JTJ which could be expensive in terms of memory storage in large scale problems, we propose to solve the inverse problem by adaptively updating both the optimal current pattern with improved distinguishability and the conductivity estimate at each iteration. With a novel subset scheme, the memory-efficient reconstruction algorithm which appropriately combines the optimal current pattern generation and the Kaczmarz method can produce accurate and stable solutions adaptively compared to traditional Kaczmarz and Gauss-Newton type methods. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results.
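The projection step underlying any Kaczmarz-type scheme is simple to state. Below is a minimal sketch of classic cyclic Kaczmarz for a consistent linear system, not the paper's adaptive EIT variant with optimal current patterns; the random test problem is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true                            # consistent right-hand side

def kaczmarz(A, b, sweeps=200):
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):       # project x onto the hyperplane a_i . x = b_i
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

x_est = kaczmarz(A, b)
```

Each row update is cheap and touches only one measurement, which is why Kaczmarz-type iterations avoid forming and inverting the J^T J term mentioned in the abstract.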
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.
1998-12-10
Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
An improved adaptive IHS method for image fusion
NASA Astrophysics Data System (ADS)
Wang, Ting
2015-12-01
An improved adaptive intensity-hue-saturation (IHS) method for image fusion is proposed in this paper, building on the adaptive IHS (AIHS) method and its improved variant (IAIHS). In the improved method, the weighting matrix, which decides how much spatial detail from the panchromatic (Pan) image should be injected into the multispectral (MS) image, is defined on the basis of the linear relationship between the edges of the Pan and MS images. At the same time, a modulation parameter t is used to balance the spatial and spectral resolution of the fused image. Experiments show that the improved method improves spectral quality while maintaining spatial resolution compared with the AIHS and IAIHS methods.
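The role of the modulation parameter can be shown with a generic IHS-style detail injection. This is a schematic sketch with random arrays and a scalar weight standing in for the paper's edge-based weighting matrix; t = 0.8 is an assumed value:

```python
import numpy as np

rng = np.random.default_rng(3)
ms = rng.uniform(0.2, 0.8, size=(4, 4, 3))   # multispectral bands (already upsampled)
pan = rng.uniform(0.0, 1.0, size=(4, 4))     # panchromatic image

intensity = ms.mean(axis=2)                  # IHS intensity component
detail = pan - intensity                     # spatial detail absent from the MS image
t = 0.8                                      # modulation parameter (assumed value)
fused = ms + t * detail[..., None]           # inject a fraction t of the detail
```

At t = 1 the fused intensity matches the Pan image exactly (maximum spatial detail, maximum spectral distortion); at t = 0 the MS image is untouched. The parameter trades one against the other.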
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics
Anderson, R W; Pember, R B; Elliott, N S
2002-10-19
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.
Wavelet methods in multi-conjugate adaptive optics
NASA Astrophysics Data System (ADS)
Helin, T.; Yudytskiy, M.
2013-08-01
Next-generation ground-based telescopes rely heavily on adaptive optics to overcome the limitation of atmospheric turbulence. In future adaptive optics modalities such as multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on using locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory.
Adaptive computational methods for SSME internal flow analysis
NASA Technical Reports Server (NTRS)
Oden, J. T.
1986-01-01
Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving mesh methods), in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given, and some pertinent conclusions of the study are listed.
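An h-method of the kind described can be sketched in one dimension. This is an illustrative toy, not the SSME code: intervals whose local error indicator, here the deviation of the function from its piecewise-linear interpolant, exceeds an assumed tolerance are bisected.

```python
import numpy as np

f = lambda x: np.arctan(40 * x)          # model solution with a sharp gradient at x = 0

def refine(nodes, tol=1e-3, max_pass=30):
    for _ in range(max_pass):
        mid = 0.5 * (nodes[:-1] + nodes[1:])
        # error indicator: deviation of f from its piecewise-linear interpolant
        err = np.abs(f(mid) - 0.5 * (f(nodes[:-1]) + f(nodes[1:])))
        bad = err > tol
        if not bad.any():
            break                        # all local indicators below tolerance
        nodes = np.sort(np.concatenate([nodes, mid[bad]]))
    return nodes

nodes = refine(np.linspace(-1.0, 1.0, 5))
```

The final mesh is strongly graded: fine near the gradient, coarse where the solution is smooth, which is the behaviour an automated h-method is meant to deliver.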
Adaptive clustering and adaptive weighting methods to detect disease associated rare variants.
Sha, Qiuying; Wang, Shuaicheng; Zhang, Shuanglin
2013-03-01
Current statistical methods to test association between rare variants and phenotypes are essentially group-wise methods that collapse or aggregate all variants in a predefined group into a single variant. Compared with variant-by-variant methods, the group-wise methods have their advantages. However, two factors may affect their power. One is that some of the causal variants may be protective. When both risk and protective variants are present, collapsing or aggregating all variants loses power because the effects of risk and protective variants counteract each other. The other is that not all variants in the group are causal; rather, a large proportion is believed to be neutral. When a large proportion of variants are neutral, collapsing or aggregating all variants may not be an optimal solution. We propose two alternative methods, the adaptive clustering (AC) method and the adaptive weighting (AW) method, aiming to test rare variant association in the presence of neutral and/or protective variants. Both AC and AW are applicable to quantitative as well as qualitative traits. Results of extensive simulation studies show that AC and AW have similar power, and both have clear advantages in power and computational efficiency compared with existing group-wise methods and existing data-driven methods that allow for neutral and protective variants. We recommend the AW method because it is computationally more efficient than the AC method.
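The cancellation effect and the adaptive-weighting remedy can be illustrated on synthetic data. This is a schematic sketch, not the authors' AC/AW procedures: the marginal-correlation weights below are a stand-in for their data-driven weights, and all effect sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 400, 10
G = rng.binomial(1, 0.05, size=(n, m)).astype(float)   # rare-variant genotypes
beta = np.zeros(m)
beta[0], beta[1] = 1.5, -1.5                           # one risk, one protective variant
y = G @ beta + rng.standard_normal(n)                  # quantitative trait

def burden_stat(G, y):
    # simple collapsing: risk and protective effects cancel in the allele count
    return abs(np.corrcoef(G.sum(axis=1), y)[0, 1])

def adaptive_weight_stat(G, y):
    # weight each variant by its signed marginal correlation with the trait
    w = np.array([np.corrcoef(G[:, j], y)[0, 1] for j in range(G.shape[1])])
    return abs(np.corrcoef(G @ w, y)[0, 1])
```

With one risk and one protective variant of equal magnitude, the collapsed burden score is nearly uncorrelated with the trait, while the signed weights recover the association. (In practice the weighted statistic must be calibrated by permutation, since the weights are estimated from the same data.)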
Adaptive windowed range-constrained Otsu method using local information
NASA Astrophysics Data System (ADS)
Zheng, Jia; Zhang, Dinghua; Huang, Kuidong; Sun, Yuanxi; Tang, Shaojie
2016-01-01
An adaptive windowed range-constrained Otsu method using local information is proposed for improving the performance of image segmentation. First, the reason why traditional thresholding methods do not perform well in the segmentation of complicated images is analyzed, and the influences of global and local thresholding on image segmentation are compared. Second, we propose two methods that adaptively change the size of the local window according to local information, and analyze their characteristics. Specifically, the number of edge pixels in the local window of the binarized variance image is used to adaptively change the local window size. Finally, the superiority of the proposed method over the range-constrained Otsu, active contour model, double Otsu, Bradley's, and distance-regularized level set evolution methods is demonstrated. Experiments validate that the proposed method keeps more detail and achieves a substantially better area overlap measure than the other conventional methods.
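For reference, the global Otsu criterion that the windowed variants build on picks the threshold maximizing between-class variance of the histogram. A minimal sketch of that baseline (the paper's adaptive local-window logic is not reproduced here; the bimodal test data are an assumption):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                     # probability of the "background" class
    w1 = 1.0 - w0                         # probability of the "foreground" class
    m = np.cumsum(p * centers)            # cumulative first moment
    mt = m[-1]                            # global mean
    valid = (w0 > 0) & (w1 > 0)
    sb = np.zeros(nbins)
    # between-class variance: sigma_b^2 = (mt*w0 - m)^2 / (w0*w1)
    sb[valid] = (mt * w0[valid] - m[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sb)]

rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)])
t = otsu_threshold(img)
```

The adaptive windowed method applies a criterion of this kind inside local windows whose size is driven by edge-pixel counts, rather than once over the whole image.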
New developments in adaptive methods for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Oden, J. T.; Bass, Jon M.
1990-01-01
New developments in a posteriori error estimates, smart algorithms, and h- and h-p adaptive finite element methods are discussed in the context of two- and three-dimensional compressible and incompressible flow simulations. Applications to rotor-stator interaction, rotorcraft aerodynamics, shock and viscous boundary layer interaction and fluid-structure interaction problems are discussed.
Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.
ERIC Educational Resources Information Center
Butler, Ronald W.
The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
A Conditional Exposure Control Method for Multidimensional Adaptive Testing
ERIC Educational Resources Information Center
Finkelman, Matthew; Nering, Michael L.; Roussos, Louis A.
2009-01-01
In computerized adaptive testing (CAT), ensuring the security of test items is a crucial practical consideration. A common approach to reducing item theft is to define maximum item exposure rates, i.e., to limit the proportion of examinees to whom a given item can be administered. Numerous methods for controlling exposure rates have been proposed…
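A maximum-exposure-rate rule can be sketched in a few lines. This is a bare-bones illustration, not the conditional method of the paper: the item information values and the 0.2 cap are assumed.

```python
info = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]     # hypothetical item information values
administered = [0] * len(info)

def select_item(info, administered, n_seen, r_max=0.2):
    # choose the most informative item whose exposure rate stays under the cap
    for i in sorted(range(len(info)), key=lambda i: -info[i]):
        if administered[i] / max(n_seen, 1) < r_max:
            return i
    # fallback (unreachable here): least-exposed item
    return min(range(len(info)), key=lambda i: administered[i])

for n in range(100):                        # 100 examinees, one item each
    item = select_item(info, administered, n)
    administered[item] += 1
```

The cap forces the selections to spread over several highly informative items instead of over-exposing the single best one; the conditional method of the paper refines this by controlling exposure conditional on ability.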
Adaptive reconnection-based arbitrary Lagrangian Eulerian method
Bo, Wurigen; Shashkov, Mikhail
2015-07-21
We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.
Method and system for environmentally adaptive fault tolerant computing
NASA Technical Reports Server (NTRS)
Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)
2010-01-01
A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. The sensitivity of an on-board processing system to the measured environmental condition is then determined. It is determined whether to reconfigure the fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may then be reconfigured based in part on the measured environmental condition.
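The decision logic can be caricatured in a few lines. The thresholds, mode names, and risk model below are purely hypothetical assumptions, not taken from the patent:

```python
def select_fault_tolerance(flux, sensitivity):
    """Pick a redundancy mode from a measured environmental condition.

    All thresholds and mode names are illustrative assumptions.
    """
    risk = flux * sensitivity          # measured environment x system sensitivity
    if risk < 1.0:
        return "simplex"               # benign environment: no redundancy
    if risk < 10.0:
        return "duplex"                # moderate risk: dual modular redundancy
    return "TMR"                       # harsh environment: triple modular redundancy
```

The point of the patent is that the redundancy level is not fixed at design time but re-evaluated as the measured environment changes, trading throughput for protection only when needed.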
Workshop on adaptive grid methods for fusion plasmas
Wiley, J.C.
1995-07-01
The author describes a general hp finite element method with adaptive grids. The code was based on the work of Oden et al. The term hp refers to the method of spatial refinement (h) in conjunction with the order of the polynomials used in the finite element discretization (p). This finite element code seems to handle well the different mesh sizes occurring between abutted grids with different resolutions.
ICASE/LaRC Workshop on Adaptive Grid Methods
NASA Technical Reports Server (NTRS)
South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)
1995-01-01
Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.
Free energy calculations: an efficient adaptive biasing potential method.
Dickson, Bradley M; Legoll, Frédéric; Lelièvre, Tony; Stoltz, Gabriel; Fleurat-Lessard, Paul
2010-05-06
We develop an efficient sampling and free energy calculation technique within the adaptive biasing potential (ABP) framework. By mollifying the density of states we obtain an approximate free energy and an adaptive bias potential that is computed directly from the population along the coordinates of the free energy. Because of the mollifier, the bias potential is "nonlocal", and its gradient admits a simple analytic expression. A single observation of the reaction coordinate can thus be used to update the approximate free energy at every point within a neighborhood of the observation. This greatly reduces the equilibration time of the adaptive bias potential. This approximation introduces two parameters: the strength of mollification and the zero of energy of the bias potential. While we observe that the approximate free energy is a very good estimate of the actual free energy for a large range of mollification strengths, we demonstrate that the errors associated with the mollification may be removed via deconvolution. The zero of energy of the bias potential, which is easy to choose, influences the speed of convergence but not the limiting accuracy. This method is simple to apply to free energy or mean force computation in multiple dimensions and does not involve second derivatives of the reaction coordinates, matrix manipulations, or on-the-fly adaptation of parameters. For the alanine dipeptide test case, the new method is found to gain as much as a factor of 10 in efficiency as compared to two basic implementations of the adaptive biasing force methods, and it is shown to be as efficient as well-tempered metadynamics, with the postprocess deconvolution giving a clear advantage to the mollified density of states method.
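The key property exploited, a bias whose gradient has a simple analytic expression, can be sketched with a Gaussian-kernel bias. This is a metadynamics-flavoured simplification, not the mollified density-of-states construction of the paper; the kernel height and width are assumed values.

```python
import numpy as np

class AdaptiveBias:
    """Sum of Gaussian kernels deposited along a trajectory (simplified sketch)."""

    def __init__(self, height=0.5, width=0.3):
        self.h, self.w = height, width
        self.centers = []

    def add(self, x):
        # one observation updates the bias in a whole neighborhood of x
        self.centers.append(x)

    def value(self, x):
        # bias potential V_b(x)
        c = np.asarray(self.centers)
        return float(np.sum(self.h * np.exp(-0.5 * ((x - c) / self.w) ** 2)))

    def grad(self, x):
        # analytic gradient of V_b -- no finite differencing needed in the dynamics
        c = np.asarray(self.centers)
        d = x - c
        return float(np.sum(-self.h * d / self.w ** 2
                            * np.exp(-0.5 * (d / self.w) ** 2)))
```

Because the kernel is smooth and its derivative is closed-form, the biasing force can be evaluated exactly at every step, which is the property the abstract emphasizes.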
An Adaptive Cross-Architecture Combination Method for Graph Traversal
You, Yang; Song, Shuaiwen; Kerbyson, Darren J.
2014-06-18
Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
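The top-down/bottom-up combination can be sketched on a toy graph. The fixed frontier-size heuristic below stands in for the paper's regression-based predictor, which is not reproduced; the switching fraction alpha is an assumed value.

```python
def hybrid_bfs(adj, source, alpha=0.25):
    """BFS that switches between top-down and bottom-up per level.

    adj: dict mapping each vertex to a list of neighbors (undirected graph).
    """
    n = len(adj)
    dist = {source: 0}
    frontier = {source}
    level = 0
    while frontier:
        level += 1
        if len(frontier) < alpha * n:
            # top-down: expand outward from the frontier
            nxt = {v for u in frontier for v in adj[u] if v not in dist}
        else:
            # bottom-up: each unvisited vertex searches for a frontier parent
            nxt = {v for v in adj if v not in dist
                   and any(u in frontier for u in adj[v])}
        for v in nxt:
            dist[v] = level
        frontier = nxt
    return dist

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
dist = hybrid_bfs(adj, 0)
```

The adaptive method in the paper replaces the fixed threshold with a runtime regression model that predicts the optimal switching point, avoiding the trial-and-error search.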
Adaptive Kaczmarz Method for Image Reconstruction in Electrical Impedance Tomography
Li, Taoran; Kao, Tzu-Jen; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.
2013-01-01
We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term JTJ which could be expensive in terms of computation cost and memory in large scale problems, we propose solving the inverse problem by applying the optimal current patterns for distinguishing the actual conductivity from the conductivity estimate between each iteration of the block Kaczmarz algorithm. With a novel subset scheme, the memory-efficient reconstruction algorithm which appropriately combines the optimal current pattern generation with the Kaczmarz method can produce more accurate and stable solutions adaptively as compared to traditional Kaczmarz and Gauss-Newton type methods. Choices of initial current pattern estimates are discussed in the paper. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results. PMID:23718952
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron
1998-12-08
Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
Adaptive Set-Based Methods for Association Testing.
Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo
2016-02-01
With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test.
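The permutation scheme used to put all set-based tests on a level playing field can be sketched generically; `stat` stands for any self-contained set statistic, and the data layout here is an illustrative assumption, not the paper's pipeline:

```python
import random

def permutation_pvalue(stat, genotypes, phenotype, n_perm=999, seed=1):
    """Permutation assessment of a self-contained set statistic: shuffle
    the phenotype labels to generate the null distribution, then compare
    the observed statistic against it."""
    observed = stat(genotypes, phenotype)
    rng = random.Random(seed)
    perm = list(phenotype)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(perm)
        if stat(genotypes, perm) >= observed:
            hits += 1
    # add-one correction yields a valid (never-zero) permutation p-value
    return (hits + 1) / (n_perm + 1)
```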
Advanced numerical methods in mesh generation and mesh adaptation
Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A
2010-01-01
Numerical solution of partial differential equations requires appropriate meshes, efficient solvers and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to efficiently build simplicial meshes. First, the advancing front technique (AFT) is combined with the incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an unattainable CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology. It significantly improves the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize the computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge
Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes
NASA Technical Reports Server (NTRS)
Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak
2004-01-01
High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction of the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel Benchmarks.
Methods for prismatic/tetrahedral grid generation and adaptation
NASA Technical Reports Server (NTRS)
Kallinderis, Y.
1995-01-01
The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.
Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.
2008-01-01
This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.
Space-time adaptive numerical methods for geophysical applications.
Castro, C E; Käser, M; Toro, E F
2009-11-28
In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems, with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher-order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen in a locally adaptive manner, such that the solution is evolved explicitly in time with an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves, comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed, and a new mesh partition approach is proposed and tested to further reduce computational cost.
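A minimal sketch of the local time-stepping idea, assuming a one-dimensional CFL criterion; the element-local step and substep bookkeeping below are illustrative, not the paper's scheme:

```python
import math

def local_time_steps(cell_sizes, wave_speeds, cfl=0.9):
    """Per-element stable time steps from a local CFL criterion
    dt_i = cfl * h_i / a_i, instead of one global minimum step."""
    return [cfl * h / a for h, a in zip(cell_sizes, wave_speeds)]

def substep_counts(cell_sizes, wave_speeds, cfl=0.9):
    """How many local substeps each element takes per global frame,
    so every element advances close to its own optimal stable step."""
    dts = local_time_steps(cell_sizes, wave_speeds, cfl)
    dt_global = max(dts)  # the slowest-evolving element sets the frame
    return [math.ceil(dt_global / dt) for dt in dts]
```

Small cells with fast waves take many substeps while large cells take one, which is where the cost savings over global time stepping come from.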
Developing new online calibration methods for multidimensional computerized adaptive testing.
Chen, Ping; Wang, Chun; Xin, Tao; Chang, Hua-Hua
2017-02-01
Multidimensional computerized adaptive testing (MCAT) has received increasing attention over the past few years in educational measurement. Like all other formats of CAT, item replenishment is an essential part of MCAT for its item bank maintenance and management, which governs retiring overexposed or obsolete items over time and replacing them with new ones. Moreover, calibration precision of the new items will directly affect the estimation accuracy of examinees' ability vectors. In unidimensional CAT (UCAT) and cognitive diagnostic CAT, online calibration techniques have been developed to effectively calibrate new items. However, there has been very little discussion of online calibration in MCAT in the literature. Thus, this paper proposes new online calibration methods for MCAT based upon some popular methods used in UCAT. Three representative methods, Method A, the 'one EM cycle' method and the 'multiple EM cycles' method, are generalized to MCAT. Three simulation studies were conducted to compare the three new methods by manipulating three factors (test length, item bank design, and level of correlation between coordinate dimensions). The results showed that all the new methods were able to recover the item parameters accurately, and the adaptive online calibration designs showed some improvements compared to the random design under most conditions.
A simplified self-adaptive grid method, SAGE
NASA Technical Reports Server (NTRS)
Davies, C.; Venkatapathy, E.
1989-01-01
The formulation of the Self-Adaptive Grid Evolution (SAGE) code, based on the work of Nakahashi and Deiwert, is described in the first section of this document. The second section is presented in the form of a user guide which explains the input and execution of the code, and provides many examples. Application of the SAGE code, by Ames Research Center and by others, in the solution of various flow problems has been an indication of the code's general utility and success. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for single, zonal, and multiple grids. Modifications to the methodology and the simplified input options make this current version a flexible and user-friendly code.
Optimal and adaptive methods of processing hydroacoustic signals (review)
NASA Astrophysics Data System (ADS)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using median filtering or the method of bilateral spatial contrast.
NASA Astrophysics Data System (ADS)
Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen
2016-11-01
To improve adaptive optics (AO) image quality, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method. Firstly, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). Then, we develop iterative solutions for AO image restoration, addressing the joint deconvolution issue. Image restoration experiments are performed to verify the restoration effect of the proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measure (for a real AO image) from our algorithm increases by 36.92% and 27.44%, respectively, the computation time decreases by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.
Grid adaptation and remapping for arbitrary lagrangian eulerian (ALE) methods
Lapenta, G. M.
2002-01-01
Methods to include automatic grid adaptation tools within the Arbitrary Lagrangian Eulerian (ALE) method are described. Two main developments will be described. First, a new grid adaptation approach is described, based on an automatic and accurate estimate of the local truncation error. Second, a new method to remap the information between two grids is presented, based on the MPDATA approach. The Arbitrary Lagrangian Eulerian (ALE) method solves hyperbolic equations by splitting the operators in two phases. First, in the Lagrangian phase, the equations under consideration are written in a Lagrangian frame and are discretized. In this phase, the grid moves with the solution, the velocity of each node being the local fluid velocity. Second, in the Eulerian phase, a new grid is generated and the information is transferred to the new grid. The advantage of this second step is the possibility of avoiding the mesh distortion and tangling typical of pure Lagrangian methods. The second phase of the ALE method is the primary topic of the present communication. In the Eulerian phase two tasks need to be completed. First, a new grid needs to be created (we will refer to this task as rezoning). Second, the information is transferred from the grid available at the end of the Lagrangian phase to the new grid (we will refer to this task as remapping). New techniques are presented for the two tasks of the Eulerian phase: rezoning and remapping.
A novel adaptive force control method for IPMC manipulation
NASA Astrophysics Data System (ADS)
Hao, Lina; Sun, Zhiyong; Li, Zhi; Su, Yunquan; Gao, Jianchao
2012-07-01
IPMC is a type of electro-active polymer material, also called artificial muscle, which can generate a relatively large deformation under a relatively low input voltage (generally speaking, less than 5 V), and can be operated in a water environment. Due to these advantages, IPMC can be used in many fields such as biomimetics, service robots, bio-manipulation, etc. Until now, most existing methods for IPMC manipulation have used displacement control rather than direct force control; however, under most conditions the success rate of manipulating tiny fragile objects is limited by the contact force, for example when using an IPMC gripper to hold cells. Like most EAPs, IPMC exhibits a creep phenomenon, whereby the generated force changes with time, and the creep model is influenced by changes in water content or other environmental factors, so a proper force control method is urgently needed. This paper presents a novel adaptive force control method (AIPOF control, adaptive integral periodic output feedback control), based on a creep model whose parameters are obtained with the FRLS on-line identification method. The AIPOF control method can achieve an arbitrary pole configuration as long as the plant is controllable and observable. This paper also designs POF and IPOF controllers to compare their test results. Simulations and experiments of micro-force-tracking tests are carried out, with results confirming that the proposed control method is viable.
Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems
NASA Technical Reports Server (NTRS)
Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.
1979-01-01
The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.
A two-dimensional adaptive mesh generation method
NASA Astrophysics Data System (ADS)
Altas, Irfan; Stephenson, John W.
1991-05-01
The present two-dimensional adaptive mesh-generation method allows selective modification of a small portion of the mesh without affecting large areas of adjacent mesh points, and is applicable with or without boundary-fitted coordinate-generation procedures. Discretization of the governing differential equations, both by classical difference formulas designed for uniform meshes and by the present difference formulas, is illustrated by applying the method to the Hiemenz flow, for which the exact solution of the Navier-Stokes equations is known, as well as to a two-dimensional viscous internal flow problem.
An adaptive penalty method for DIRECT algorithm in engineering optimization
NASA Astrophysics Data System (ADS)
Vilaça, Rita; Rocha, Ana Maria A. C.
2012-09-01
The most common approach for solving constrained optimization problems is based on penalty functions, where the constrained problem is transformed into a sequence of unconstrained problems by penalizing the objective function when constraints are violated. In this paper, we analyze the implementation of an adaptive penalty method, within the DIRECT algorithm, in which the constraints that are more difficult to satisfy receive relatively higher penalty values. In order to assess the applicability and performance of the proposed method, some benchmark problems from engineering design optimization are considered.
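The basic penalty transform underlying such methods can be sketched as follows; the fixed per-constraint weights stand in for the paper's adaptive rule that raises the penalties of hard-to-satisfy constraints:

```python
def penalized(f, constraints, penalties):
    """Penalty transform for constrained optimization: returns an
    unconstrained objective phi(x) = f(x) + sum_j w_j * max(0, g_j(x)),
    where each constraint is g_j(x) <= 0 and w_j is its penalty weight."""
    def phi(x):
        value = f(x)
        for g, w in zip(constraints, penalties):
            value += w * max(0.0, g(x))  # only violated constraints contribute
        return value
    return phi
```

A derivative-free solver such as DIRECT can then be run on `phi` directly, with the weights updated between runs in the adaptive variant.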
Adaptive Current Control Method for Hybrid Active Power Filter
NASA Astrophysics Data System (ADS)
Chau, Minh Thuyen
2016-09-01
This paper proposes an adaptive current control method for a Hybrid Active Power Filter (HAPF). It consists of a fuzzy-neural controller, an identification and prediction model, and a cost function. The fuzzy-neural controller parameters are adjusted according to a cost function minimization criterion. As a result, the proposed control method can adapt online to variations of the load harmonic currents. Compared to a single fuzzy logic control method, the proposed method shows better dynamic response, smaller steady-state compensation error, better online control capability and more effective harmonic cancellation. Simulation and experimental results have demonstrated the effectiveness of the proposed control method.
Parallel, adaptive finite element methods for conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.
1994-01-01
We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.
A novel adaptive noise filtering method for SAR images
NASA Astrophysics Data System (ADS)
Li, Weibin; He, Mingyi
2009-08-01
In most applications, signals or images are corrupted by additive noise. As a result, there are many methods to remove additive noise, while few approaches work well for multiplicative noise. This paper presents an improved MAP-based filter for multiplicative noise using an adaptive window denoising technique. A Gamma noise model is discussed, and a preprocessing technique that distinguishes mature from immature pixels is applied to obtain an accurate estimate of the Equivalent Number of Looks. Adaptive local window growth and three different denoising strategies are applied to smooth noise while preserving subtle image information according to local statistical features. Simulation results show that the performance is better than that of existing filters, and several image experiments demonstrate its theoretical performance.
Planetary gearbox fault diagnosis using an adaptive stochastic resonance method
NASA Astrophysics Data System (ADS)
Lei, Yaguo; Han, Dong; Lin, Jing; He, Zhengjia
2013-07-01
Planetary gearboxes are widely used in aerospace, automotive and heavy industry applications due to their large transmission ratio, strong load-bearing capacity and high transmission efficiency. Tough operating conditions of heavy duty and intensive impact loads may cause gear tooth damage such as fatigue cracks and missing teeth. The challenging issues in fault diagnosis of planetary gearboxes include selection of sensitive measurement locations, investigation of vibration transmission paths and weak feature extraction. One of them is how to effectively discover the weak characteristics from noisy signals of faulty components in planetary gearboxes. To address this issue, an adaptive stochastic resonance (ASR) method is proposed in this paper. The ASR method utilizes the optimization ability of ant colony algorithms and adaptively realizes the optimal stochastic resonance system matching the input signals. Using the ASR method, the noise may be weakened and weak characteristics highlighted, so that faults can be diagnosed accurately. A planetary gearbox test rig is established, and experiments with sun gear faults, including a chipped tooth and a missing tooth, are conducted; the vibration signals are collected under load at various motor speeds. The proposed method is used to process the collected signals, and the results of feature extraction and fault diagnosis demonstrate its effectiveness.
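The core of a stochastic resonance detector is the overdamped bistable system dx/dt = a*x - b*x^3 + s(t); a minimal Euler integration sketch follows, with (a, b) fixed illustrative values rather than tuned by the paper's ant colony optimization:

```python
import numpy as np

def bistable_sr(signal, dt, a=1.0, b=1.0):
    """Euler integration of the overdamped bistable system
    dx/dt = a*x - b*x**3 + s(t), the classical stochastic resonance
    model: for well-matched (a, b), noise energy is transferred into
    the periodic component, amplifying weak fault signatures."""
    x = np.zeros(len(signal))
    for k in range(1, len(signal)):
        drift = a * x[k - 1] - b * x[k - 1] ** 3 + signal[k - 1]
        x[k] = x[k - 1] + dt * drift
    return x
```

An adaptive variant would wrap this in an optimizer that searches (a, b) to maximize an output signal-to-noise measure.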
Adaptation of fast marching methods to intracellular signaling
NASA Astrophysics Data System (ADS)
Chikando, Aristide C.; Kinser, Jason M.
2006-02-01
Imaging of signaling phenomena within the intracellular domain is a well studied field. Signaling is the process by which all living cells communicate with their environment and with each other. In the case of signaling calcium waves, numerous computational models based on solving homogeneous reaction diffusion equations have been developed. Typically, the reaction diffusion approach consists of solving systems of partial differential equations at each update step. The traditional methods used to solve these reaction diffusion equations are very computationally expensive since they must employ small time steps in order to reduce the computational error. The presented research suggests the application of fast marching methods to imaging signaling calcium waves, more specifically fertilization calcium waves, in Xenopus laevis eggs. The fast marching approach provides fast and efficient means of tracking the evolution of monotonically advancing fronts. A model that employs biophysical properties of intracellular calcium signaling, and adapts fast marching methods to tracking the propagation of signaling calcium waves is presented. The developed model is used to reproduce simulation results obtained with reaction diffusion based model. Results obtained with our model agree with both the results obtained with reaction diffusion based models, and confocal microscopy observations during in vivo experiments. The adaptation of fast marching methods to intracellular protein or macromolecule trafficking is also briefly explored.
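A first-order fast marching sketch on a uniform 2D grid, solving |grad T| = 1/speed with a min-heap of trial points; this is illustrative only, as the paper couples such a solver to a biophysical calcium wave-speed model:

```python
import heapq, math

def fast_marching(speed, source):
    """First-order fast marching on a unit-spaced 2D grid: arrival
    times T solve the eikonal equation |grad T| = 1/speed (speed > 0),
    and the front is frozen monotonically from a min-heap."""
    rows, cols = len(speed), len(speed[0])
    INF = math.inf
    T = [[INF] * cols for _ in range(rows)]
    T[source[0]][source[1]] = 0.0
    heap = [(0.0, source)]
    frozen = set()
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if (i, j) in frozen:
            continue
        frozen.add((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols and (ni, nj) not in frozen:
                # upwind neighbor values along each axis
                tx = min(T[ni][nj - 1] if nj > 0 else INF,
                         T[ni][nj + 1] if nj < cols - 1 else INF)
                ty = min(T[ni - 1][nj] if ni > 0 else INF,
                         T[ni + 1][nj] if ni < rows - 1 else INF)
                h = 1.0 / speed[ni][nj]
                lo, hi = sorted((tx, ty))
                if hi == INF or hi - lo >= h:
                    new_t = lo + h  # one-sided update
                else:
                    # two-sided update: solve (T-tx)^2 + (T-ty)^2 = h^2
                    new_t = 0.5 * (lo + hi + math.sqrt(2 * h * h - (hi - lo) ** 2))
                if new_t < T[ni][nj]:
                    T[ni][nj] = new_t
                    heapq.heappush(heap, (new_t, (ni, nj)))
    return T
```

Each grid point is finalized exactly once, giving the O(N log N) cost that makes the method far cheaper than small-time-step reaction-diffusion integration for tracking a monotonically advancing front.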
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
The SMART CLUSTER METHOD - adaptive earthquake cluster analysis and declustering
NASA Astrophysics Data System (ADS)
Schaefer, Andreas; Daniell, James; Wenzel, Friedemann
2016-04-01
Earthquake declustering is an essential part of almost any statistical analysis of the spatial and temporal properties of seismic activity, with typical applications comprising probabilistic seismic hazard assessments (PSHAs) and earthquake prediction methods. The nature of earthquake clusters and the subsequent declustering of earthquake catalogues play a crucial role in determining the magnitude-dependent earthquake return period and its spatial variation. Various methods have been developed by other researchers to address this issue, ranging in complexity from rather simple statistical window methods to complex epidemic models. This study introduces the smart cluster method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal identification. An adaptive search algorithm for data-point clusters is adopted: it uses the earthquake density in the spatio-temporal neighbourhood of each event to adjust the search properties. The identified clusters are subsequently analysed to determine directional anisotropy, focussing on the strong correlation along the rupture plane, and the search space is adjusted with respect to these directional properties. In the case of rapid subsequent ruptures, such as the 1992 Landers sequence or the 2010/2011 Darfield-Christchurch events, an adaptive classification procedure is applied to disassemble subsequent ruptures which may have been grouped into a single cluster, using near-field searches, support vector machines and temporal splitting. The steering parameters of the search behaviour are linked to local earthquake properties such as magnitude of completeness, earthquake density and Gutenberg-Richter parameters. The method is capable of identifying and classifying earthquake clusters in space and time. It is tested and validated using earthquake data from California and New Zealand. As a result of the cluster identification process, each event in
A decentralized adaptive robust method for chaos control.
Kobravi, Hamid-Reza; Erfanian, Abbas
2009-09-01
This paper presents a control strategy, which is based on sliding mode control, adaptive control, and fuzzy logic system for controlling the chaotic dynamics. We consider this control paradigm in chaotic systems where the equations of motion are not known. The proposed control strategy is robust against the external noise disturbance and system parameter variations and can be used to convert the chaotic orbits not only to the desired periodic ones but also to any desired chaotic motions. Simulation results of controlling some typical higher order chaotic systems demonstrate the effectiveness of the proposed control method.
Adaptive grid methods for RLV environment assessment and nozzle analysis
NASA Technical Reports Server (NTRS)
Thornburg, Hugh J.
1996-01-01
Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. In order to efficiently meet these requirements a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surface, temporally varying geometries, and fluid structure interaction, the need is unavoidable. In other cases the need is to rapidly generate and modify high quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect. Thus excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaption. The most widely used involve grid point redistribution, local grid point enrichment/derefinement or local modification of the actual flow solver. However, the success of any one of these methods ultimately depends on the feature detection algorithm used to determine solution domain regions which require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation
Turbulence profiling methods applied to ESO's adaptive optics facility
NASA Astrophysics Data System (ADS)
Valenzuela, Javier; Béchet, Clémentine; Garcia-Rissmann, Aurea; Gonté, Frédéric; Kolb, Johann; Le Louarn, Miska; Neichel, Benoît; Madec, Pierre-Yves; Guesalaga, Andrés.
2014-07-01
Two algorithms were recently studied for C_n^2 profiling from wide-field Adaptive Optics (AO) measurements on GeMS (Gemini Multi-Conjugate AO system). They both rely on the Slope Detection and Ranging (SLODAR) approach, using spatial covariances of the measurements from various wavefront sensors. The first algorithm estimates the C_n^2 profile by applying the truncated least-squares inverse of a matrix modeling the response of slope covariances to turbulent layers at various heights. In the second method, the profile is estimated by deconvolution of these spatial cross-covariances of slopes. We compare these methods in the new configuration of ESO's Adaptive Optics Facility (AOF), a high-order multiple-laser system under integration. For this, we use measurements simulated by the AO cluster of ESO. The impact of the measurement noise and of the outer scale of the atmospheric turbulence is analyzed. The strong influence of the outer scale on the results led to the development of a new outer-scale fitting step included in each algorithm. This increases the reliability and robustness of the turbulence strength and profile estimations.
An adaptive stepsize method for the chemical Langevin equation.
Ilie, Silvana; Teslya, Alexandra
2012-05-14
Mathematical and computational modeling are key tools in analyzing important biological processes in cells and living organisms. In particular, stochastic models are essential to accurately describe the cellular dynamics, when the assumption of the thermodynamic limit can no longer be applied. However, stochastic models are computationally much more challenging than the traditional deterministic models. Moreover, many biochemical systems arising in applications have multiple time-scales, which lead to mathematical stiffness. In this paper we investigate the numerical solution of a stochastic continuous model of well-stirred biochemical systems, the chemical Langevin equation. The chemical Langevin equation is a stochastic differential equation with multiplicative, non-commutative noise. We propose an adaptive stepsize algorithm for approximating the solution of models of biochemical systems in the Langevin regime, with small noise, based on estimates of the local error. The underlying numerical method is the Milstein scheme. The proposed adaptive method is tested on several examples arising in applications and it is shown to have improved efficiency and accuracy compared to the existing fixed stepsize schemes.
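As a hedged sketch of the step-doubling idea behind adaptive stepsize control for the Milstein scheme (the paper's actual local-error estimator and biochemical test problems are not reproduced; the drift/diffusion below and all names are illustrative):

```python
import math
import random

def milstein_step(x, a, b, db, dt, dW):
    # Milstein scheme for dX = a(X) dt + b(X) dW:
    # X' = X + a dt + b dW + 0.5 b b' (dW^2 - dt)
    return x + a(x) * dt + b(x) * dW + 0.5 * b(x) * db(x) * (dW * dW - dt)

def adaptive_milstein(x0, a, b, db, t_end, dt0=0.1, tol=1e-3, seed=0):
    """Integrate with step-doubling error control: compare one full step
    against two half steps driven by the same Brownian increments, and
    shrink or grow dt according to the discrepancy."""
    rng = random.Random(seed)
    t, x, dt = 0.0, x0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        dW1 = rng.gauss(0.0, math.sqrt(dt / 2))
        dW2 = rng.gauss(0.0, math.sqrt(dt / 2))
        coarse = milstein_step(x, a, b, db, dt, dW1 + dW2)
        half = milstein_step(x, a, b, db, dt / 2, dW1)
        fine = milstein_step(half, a, b, db, dt / 2, dW2)
        err = abs(fine - coarse)
        if err <= tol or dt < 1e-8:
            t, x = t + dt, fine          # accept the refined step
            if err < tol / 4:
                dt *= 2.0                # local error small: grow the step
        else:
            dt /= 2.0                    # reject and retry smaller
    return x
```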
NASA Technical Reports Server (NTRS)
Kantor, A. V.; Timonin, V. G.; Azarova, Y. S.
1974-01-01
The method of adaptive discretization is the most promising for elimination of redundancy from telemetry messages characterized by signal shape. Adaptive discretization with associative sorting was considered as a way to avoid the shortcomings of adaptive discretization with buffer smoothing and adaptive discretization with logical switching in on-board information compression devices (OICD) in spacecraft. Mathematical investigations of OICD are presented.
Robust image registration using adaptive coherent point drift method
NASA Astrophysics Data System (ADS)
Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong
2016-04-01
The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, only the global spatial structure of the point sets is considered, without other forms of additional attribute information. The equivalent simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed that automatically determines the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown parameter and automatically determined within the expectation-maximization algorithm. In image registration applications, a block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. Experimental results on optical and remote sensing images show that the proposed method can significantly improve matching performance.
Research on PGNAA adaptive analysis method with BP neural network
NASA Astrophysics Data System (ADS)
Peng, Ke-Xin; Yang, Jian-Bo; Tuo, Xian-Guo; Du, Hua; Zhang, Rui-Xue
2016-11-01
A new approach to the puzzle of spectral analysis in prompt gamma neutron activation analysis (PGNAA) is developed and demonstrated. It applies a BP neural network to PGNAA energy spectrum analysis based on Monte Carlo (MC) simulation, with three main tasks: (1) completing the MC simulation of a PGNAA spectrum library, where the mass fractions of the elements Si, Ca and Fe are each varied from 0.00 to 0.45 in steps of 0.05 and each sample is simulated using MCNP; (2) establishing the BP model for adaptive quantitative analysis of the PGNAA energy spectrum, calculating the peak areas of eight characteristic gamma rays corresponding to eight elements in each of 1000 samples and in the standard sample; and (3) verifying the viability of the adaptive quantitative-analysis algorithm on a further 68 samples. Results show that the precision of the elemental contents calculated with the neural network is significantly higher than that of the MCLLS method.
Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method
NASA Astrophysics Data System (ADS)
Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony
Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.
A locally adaptive kernel regression method for facies delineation
NASA Astrophysics Data System (ADS)
Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.
2015-12-01
Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data, to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest-neighbor classification method in a number of synthetic aquifers whenever the number of available hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method improves significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough-curve performance.
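A heavily simplified stand-in for the locally adaptive kernel idea can be sketched with scalar per-sample bandwidths (the paper's steering kernels additionally adapt the kernel's orientation to the local correlation direction; all names below are illustrative):

```python
import math

def knn_bandwidth(points, k=2):
    """Per-sample bandwidth: distance to the k-th nearest neighbour.

    A scalar stand-in for locally adaptive kernels: samples in sparsely
    sampled regions automatically get wider kernels."""
    h = []
    for i, p in enumerate(points):
        d = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        h.append(d[k - 1])
    return h

def kernel_classify(x, points, labels, h):
    """Soft facies assignment: each hard-data point votes for its facies
    with a Gaussian kernel scaled by its own local bandwidth."""
    votes = {}
    for p, lab, hi in zip(points, labels, h):
        w = math.exp(-math.dist(x, p) ** 2 / (2 * hi * hi))
        votes[lab] = votes.get(lab, 0.0) + w
    return max(votes, key=votes.get)
```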
Sparse diffraction imaging method using an adaptive reweighting homotopy algorithm
NASA Astrophysics Data System (ADS)
Yu, Caixia; Zhao, Jingtao; Wang, Yanfei; Qiu, Zhen
2017-02-01
Seismic diffractions carry valuable information from subsurface small-scale geologic discontinuities, such as faults, cavities and other features associated with hydrocarbon reservoirs. However, seismic imaging methods mainly use reflection theory for constructing imaging models, which imposes a smoothness constraint on the imaging conditions. In fact, diffractors occupy only a small portion of an imaging model and possess discontinuous characteristics. Mathematically, this kind of phenomenon can be described by sparse optimization theory. Therefore, we propose a diffraction imaging method based on a sparsity-constrained model for studying diffractors. A reweighted L2-norm and L1-norm minimization model is investigated, where the L2 term requests a least-squares error between modeled diffractions and observed diffractions and the L1 term imposes sparsity on the solution. In order to solve this model efficiently, we use an adaptive reweighting homotopy algorithm that updates the solutions by tracking a path along inexpensive homotopy steps. Numerical examples and a field data application demonstrate the feasibility of the proposed method and show its significance for detecting small-scale discontinuities in a seismic section. The proposed method improves the focusing ability of diffractions and reduces migration artifacts.
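The homotopy solver itself is involved; as an assumed, simplified stand-in, the same reweighted L2/L1 model can be attacked with iteratively reweighted soft thresholding (function names and parameters are illustrative, not the paper's algorithm):

```python
import numpy as np

def soft(u, t):
    # soft-thresholding: proximal operator of the (weighted) L1 term
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def reweighted_ista(A, b, lam=0.1, outer=5, inner=200, eps=1e-3):
    """Sparse recovery by iteratively reweighted soft thresholding.

    Minimises ||A x - b||^2 + lam * sum_i w_i |x_i|, refreshing the
    weights w_i = 1 / (|x_i| + eps) after each outer pass so that
    already-large coefficients are penalised less (the reweighted-L1
    idea underlying sparsity-promoting diffraction imaging).
    """
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    w = np.ones(n)
    for _ in range(outer):
        for _ in range(inner):
            grad = A.T @ (A @ x - b)     # gradient of the L2 data-misfit term
            x = soft(x - grad / L, lam * w / L)
        w = 1.0 / (np.abs(x) + eps)      # reweight: sharpen the sparsity
    return x
```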
An adaptive Cartesian grid generation method for Dirty geometry
NASA Astrophysics Data System (ADS)
Wang, Z. J.; Srinivasan, Kumar
2002-07-01
Traditional structured and unstructured grid generation methods need a water-tight boundary surface grid to start and are therefore named boundary-to-interior (B2I) approaches. Although these methods have achieved great success in fluid flow simulations, the grid generation process can still be very time consuming when non-water-tight geometries are given. Significant user time can be spent repairing or cleaning a dirty geometry with cracks, overlaps or invalid manifolds before grid generation can take place. In this paper, we advocate a different approach to grid generation, namely the interior-to-boundary (I2B) approach. With an I2B approach, the computational grid is first generated inside the computational domain. This grid is then intelligently connected to the boundary, and the boundary grid is a result of this connection. A significant advantage of the I2B approach is that dirty geometries can be handled without cleaning or repairing, dramatically reducing grid generation time. An I2B adaptive Cartesian grid generation method is developed in this paper to handle dirty geometries without geometry repair. Compared with a B2I approach, the grid generation time with the I2B approach for a complex automotive engine can be reduced by three orders of magnitude.
A method of camera calibration with adaptive thresholding
NASA Astrophysics Data System (ADS)
Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei
2009-07-01
In order to calculate the parameters of the camera correctly, we must determine the accurate coordinates of certain points in the image plane. Corners are important features in 2D images. Generally speaking, they are points of high curvature lying at the junction of image regions of different brightness, and corner detection is already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When using the SUSAN corner detection algorithm, we propose an approach to set the gray-difference threshold adaptively. That makes it possible to pick up the correct chessboard inner corners under all kinds of gray contrast. Experimental results show the method to be feasible.
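A minimal sketch of SUSAN-style corner detection with a contrast-derived gray-difference threshold (the 5x5 square mask and the area ratio are illustrative simplifications, not the paper's exact settings):

```python
import numpy as np

def susan_corners(img, ratio=0.45):
    """Simplified SUSAN corner detector with an adaptive threshold.

    The gray-difference threshold is derived from the image contrast
    (half the dynamic range) instead of being fixed by hand, in the
    spirit of the paper; the circular SUSAN mask is approximated by a
    5x5 window. A pixel is a corner candidate when its USAN (the set of
    mask pixels similar to the centre nucleus) is small.
    """
    t = (img.max() - img.min()) / 2.0       # adaptive gray-difference threshold
    h, w = img.shape
    corners = []
    r = 2
    for i in range(r, h - r):
        for j in range(r, w - r):
            win = img[i - r:i + r + 1, j - r:j + r + 1]
            # USAN area: mask pixels similar to the centre pixel
            usan = np.sum(np.abs(win - img[i, j]) < t)
            if usan < ratio * win.size:     # small USAN => corner
                corners.append((i, j))
    return corners
```

On a synthetic bright square over a dark background, only the square's interior corner is flagged; edge and flat pixels have larger USAN areas and are rejected.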
A forward method for optimal stochastic nonlinear and adaptive control
NASA Technical Reports Server (NTRS)
Bayard, David S.
1988-01-01
A computational approach is taken to solve the optimal nonlinear stochastic control problem. The approach is to systematically solve the stochastic dynamic programming equations forward in time, using a nested stochastic approximation technique. Although computationally intensive, this provides a straightforward numerical solution for this class of problems and provides an alternative to the usual dimensionality problem associated with solving the dynamic programming equations backward in time. It is shown that the cost degrades monotonically as the complexity of the algorithm is reduced. This provides a strategy for suboptimal control with clear performance/computation tradeoffs. A numerical study focusing on a generic optimal stochastic adaptive control example is included to demonstrate the feasibility of the method.
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares-based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique due to the estimators' lack of closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity, as well as collinearity among a large number of variables; the redundant parameters are set to zero via a data-dependent technique. The method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit distribution while the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
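A hedged least-squares sketch of the two-step adaptive elastic net idea (the paper's setting is nonlinear GMM; this linear version only illustrates the data-dependent weighting, and all names are illustrative):

```python
import numpy as np

def soft(z, t):
    # scalar soft-thresholding
    return np.sign(z) * max(abs(z) - t, 0.0)

def adaptive_elastic_net(X, y, lam1=1.0, lam2=1.0, gamma=1.0, iters=200):
    """Two-step adaptive elastic net for a linear model.

    Step 1: a ridge pilot fit gives data-dependent weights
    w_j = 1 / |beta_ridge_j|^gamma, so coefficients that look like zero
    receive a heavy L1 penalty. Step 2: coordinate descent on
        ||y - X b||^2 / 2 + lam1 * sum_j w_j |b_j| + lam2/2 * ||b||^2,
    which sets redundant parameters exactly to zero (oracle-style
    selection) while keeping the signal coefficients nearly unbiased.
    """
    n, p = X.shape
    ridge = np.linalg.solve(X.T @ X + lam2 * np.eye(p), X.T @ y)
    w = 1.0 / (np.abs(ridge) ** gamma + 1e-8)   # adaptive weights
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r
            beta[j] = soft(rho, lam1 * w[j]) / (col_sq[j] + lam2)
    return beta
```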
Evaluation of Adaptive Subdivision Method on Mobile Device
NASA Astrophysics Data System (ADS)
Rahim, Mohd Shafry Mohd; Isa, Siti Aida Mohd; Rehman, Amjad; Saba, Tanzila
2013-06-01
Recently, there have been significant improvements in the capabilities of mobile devices, but rendering large 3D objects is still tedious because of the resource constraints of mobile devices. To reduce storage requirements, a 3D object is simplified, but certain areas of curvature are compromised and the surface will not be smooth. Therefore, a method to smooth selected areas of curvature is implemented. One popular method is the adaptive subdivision method. Experiments are performed on two data sets, with results based on processing time, rendering speed and the appearance of the object on the devices. The results show a drop in frame-rate performance due to the increase in the number of triangles with each level of iteration, while the processing time for generating the new mesh also increases significantly. Since the two devices differ in screen size, the surface on the iPhone appears to have more triangles and to be more compact than the surface displayed on the iPad.
Method for removing tilt control in adaptive optics systems
Salmon, Joseph Thaddeus
1998-01-01
A new adaptive optics system and method of operation, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^{-1} X^T) G (I - A)
Method for removing tilt control in adaptive optics systems
Salmon, J.T.
1998-04-28
A new adaptive optics system and method of operation are disclosed, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^{-1} X^T) G (I - A). 3 figs.
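The modified gain matrix lends itself to a direct check: the left factor is the projector that annihilates the tilt modes spanned by the columns of X, so actuator commands produced through G' contain no tilt component (tilt is handled by the steering mirror instead). A small NumPy sketch, with shapes and names chosen for illustration:

```python
import numpy as np

def tilt_removed_gain(G, X, A):
    """Modified gain matrix G' = (I - X (X^T X)^{-1} X^T) G (I - A).

    X holds the tilt modes as columns; the left projector removes their
    span from the range of G, so X^T G' = 0 identically.
    """
    n, m = G.shape
    P = X @ np.linalg.inv(X.T @ X) @ X.T    # orthogonal projector onto tilt modes
    return (np.eye(n) - P) @ G @ (np.eye(m) - A)
```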
Adapted G-mode Clustering Method applied to Asteroid Taxonomy
NASA Astrophysics Data System (ADS)
Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.
2013-11-01
The original G-mode is a clustering method developed by A. I. Gavrishin in the late 1960s for the geochemical classification of rocks; it has also been applied to asteroid photometry, cosmic rays, lunar samples and planetary science spectroscopy data. In this work, we used an adapted version to classify asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, previously written in FORTRAN 77, reimplemented in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy played an important role in speeding up G-mode: scipy.spatial.distance.mahalanobis was chosen as the distance estimator and numpy.histogramdd was applied to find the initial seeds from which clusters evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.
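Only the seed-finding step described above is easy to isolate; here is a pure-NumPy sketch of it (avoiding the SciPy dependency mentioned in the abstract; the function name is illustrative):

```python
import numpy as np

def gmode_seed(data):
    """Find the initial seed of a G-mode cluster: the pair of samples
    with the smallest mutual Mahalanobis distance, computed with the
    sample covariance of the whole data set. The full G-mode then grows
    a cluster from this seed and repeats on the remainder; only the
    seed step is sketched here."""
    vi = np.linalg.inv(np.cov(data.T))     # inverse sample covariance
    n = len(data)
    best, pair = np.inf, None
    for i in range(n):
        for j in range(i + 1, n):
            d = data[i] - data[j]
            dist = float(np.sqrt(d @ vi @ d))   # Mahalanobis distance
            if dist < best:
                best, pair = dist, (i, j)
    return pair, best
```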
A Self-Adaptive Projection and Contraction Method for Linear Complementarity Problems
Liao, Lizhi; Wang, Shengli
2003-10-15
In this paper we develop a self-adaptive projection and contraction method for the linear complementarity problem (LCP). This method improves the practical performance of the modified projection and contraction method by adopting a self-adaptive technique. The global convergence of our new method is proved under mild assumptions. Our numerical tests clearly demonstrate the necessity and effectiveness of our proposed method.
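A bare-bones sketch of the underlying projection iteration for the LCP, with a crude self-adaptive step rule (the paper's method adds a contraction step and sharper adaptivity; names and defaults below are illustrative):

```python
import numpy as np

def lcp_projection(M, q, beta=0.5, tol=1e-10, max_iter=5000):
    """Solve the LCP: find z >= 0 with w = M z + q >= 0 and z^T w = 0.

    Uses the basic projection iteration z <- P_+(z - beta (M z + q)),
    where P_+ is the projection onto the nonnegative orthant, with a
    crude self-adaptive rule that shrinks beta when the iteration
    overshoots. Fixed points of this map are exactly LCP solutions.
    """
    z = np.zeros(len(q))
    res_prev = np.inf
    for _ in range(max_iter):
        z_new = np.maximum(z - beta * (M @ z + q), 0.0)
        res = np.linalg.norm(z_new - z)
        if res > res_prev:      # overshoot: damp the step size
            beta *= 0.5
        res_prev = res
        z = z_new
        if res < tol:
            break
    return z
```

For M = [[2,1],[1,2]] and q = [-3,-3] the solution is z = [1,1] with w = Mz + q = 0, which the iteration reaches linearly.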
Adaptable Metadata Rich IO Methods for Portable High Performance IO
Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten
2009-01-01
Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as that provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small
Principles and Methods of Adapted Physical Education and Recreation.
ERIC Educational Resources Information Center
Arnheim, Daniel D.; And Others
This text is designed for the elementary and secondary school physical educator and the recreation specialist in adapted physical education and, more specifically, as a text for college courses in adapted and corrective physical education and therapeutic recreation. The text is divided into four major divisions: scope, key teaching and therapy…
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed-forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Tsunami modelling with adaptively refined finite volume methods
LeVeque, R.J.; George, D.L.; Berger, M.J.
2011-01-01
Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
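The well-balanced requirement can be illustrated with a minimal 1-D shallow-water sketch in Python (all parameters hypothetical; a simple Lax-Friedrichs discretization, not GeoClaw's Riemann solvers). A naive pointwise source term disturbs the ocean-at-rest steady state over a bathymetry bump, while discretizing the source with the same average of h used by the pressure flux preserves it to machine precision:

```python
import numpy as np

g, N = 9.81, 100
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx
b = 0.5 * np.exp(-100.0 * (x - 0.5) ** 2)   # bathymetry bump
h0 = 1.0 - b                                # ocean at rest: h + b = const
hu0 = np.zeros(N)
dt = 0.2 * dx / np.sqrt(g)                  # conservative CFL choice

def central(q):                             # centered difference, periodic
    return (np.roll(q, -1) - np.roll(q, 1)) / (2.0 * dx)

def avg(q):                                 # Lax-Friedrichs average
    return 0.5 * (np.roll(q, -1) + np.roll(q, 1))

def step(h, hu, well_balanced):
    """One step for h_t + (hu)_x = 0, (hu)_t + (hu^2/h + g h^2/2)_x = -g h b_x."""
    flux2 = hu ** 2 / h + 0.5 * g * h ** 2
    if well_balanced:
        h_bar = avg(h)                      # same h-average as the pressure flux
        h_new = avg(h + b) - b - dt * central(hu)
    else:
        h_bar = h                           # naive pointwise source
        h_new = avg(h) - dt * central(hu)
    hu_new = avg(hu) - dt * (central(flux2) + g * h_bar * central(b))
    return h_new, hu_new

h_naive, hu_naive = step(h0, hu0, well_balanced=False)
h_wb, hu_wb = step(h0, hu0, well_balanced=True)
```

With the balanced source the lake-at-rest state is preserved exactly, because the discrete pressure gradient reduces to g·avg(h)·central(h+b), which vanishes when h + b is constant; the naive source instead generates a small spurious flow.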
A hybrid method for optimization of the adaptive Goldstein filter
NASA Astrophysics Data System (ADS)
Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue
2014-12-01
The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is the exponent applied to the filtering function; its value determines whether a given area is strongly or weakly filtered. Several variants have been developed to determine alpha adaptively using different indicators, such as the coherence and the phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in practice, and the optimal model for accurately determining the functional relationship between the indicators and alpha is also not clear. As a result, the filter tends to under- or over-filter. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance than existing approaches.
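A minimal single-patch sketch of the Goldstein-style weighting in Python (synthetic data; production filters smooth the spectrum and process overlapping, tapered patches, which is omitted here). The spectrum of the interferogram patch is weighted by its own normalized magnitude raised to the power alpha, so alpha = 0 leaves the patch unchanged and larger alpha filters more strongly:

```python
import numpy as np

def goldstein_filter(patch, alpha=0.5):
    """Goldstein-style spectral weighting of one complex interferogram patch."""
    Z = np.fft.fft2(patch)
    H = np.abs(Z)
    H /= H.max() + 1e-12          # normalized spectral magnitude
    return np.fft.ifft2(Z * H ** alpha)

rng = np.random.default_rng(0)
# Constant-phase patch corrupted by phase noise (synthetic data).
patch = np.exp(1j * (0.3 + 0.5 * rng.standard_normal((64, 64))))
filtered = goldstein_filter(patch, alpha=1.0)
```

On a noisy constant-phase patch the dominant spectral peak is emphasized and the phase scatter drops; the adaptive variants discussed above choose alpha per patch from indicators such as coherence.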
LDRD Final Report: Adaptive Methods for Laser Plasma Simulation
Dorr, M R; Garaizar, F X; Hittinger, J A
2003-01-29
The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are "hydrodynamically large", i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an
Nishimaru, Eiji; Ichikawa, Katsuhiro; Hara, Takanori; Terakawa, Shoichi; Yokomachi, Kazushi; Fujioka, Chikako; Kiguchi, Masao; Ishifuro, Minoru
2012-01-01
Adaptive iterative reconstruction techniques (IRs) can decrease image noise in computed tomography (CT) and are expected to contribute to reduction of the radiation dose. To evaluate the performance of IRs, the conventional two-dimensional (2D) noise power spectrum (NPS) is widely used. However, when an IR produces an NPS drop at all spatial frequencies (similar to the NPS change produced by a dose increase), the conventional method cannot evaluate the noise property correctly, because it does not account for the volumetric nature of CT image data. The purpose of our study was to develop a new method for NPS measurement that can be adapted to IRs. Our method utilizes thick multi-planar reconstruction (MPR) images. Thick images are generally made by averaging CT volume data in the direction perpendicular to the MPR plane (e.g. the z-direction for an axial MPR plane). By using this averaging technique as a cutter for the 3D NPS, we can obtain an adequate 2D extracted NPS (eNPS) from the 3D NPS. We applied this method to IR images generated with adaptive iterative dose reduction 3D (AIDR 3D, Toshiba) to investigate its validity. A water phantom with a 24 cm diameter was scanned at 120 kV and 200 mAs with a 320-row CT scanner (Aquilion ONE, Toshiba). The results showed that the adequate thickness of MPR images for the eNPS was more than 25.0 mm. Our new NPS measurement method utilizing thick MPR images was accurate and effective for evaluating the noise reduction effects of IRs.
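The effect exploited by the thick-MPR extraction can be sketched in Python with a hypothetical white-noise volume (real CT noise is correlated in 3-D, which is exactly why the volumetric treatment matters; all names and parameters below are illustrative):

```python
import numpy as np

def nps_2d(rois, pixel_size):
    """2-D noise power spectrum from an ensemble of noise-only ROIs (sketch)."""
    n = rois[0].shape[0]
    spectra = [np.abs(np.fft.fft2(r - r.mean())) ** 2 for r in rois]
    return (pixel_size ** 2 / n ** 2) * np.mean(spectra, axis=0)

def thick_mpr(volume, thickness):
    """Average `thickness` consecutive slices, mimicking a thick MPR image."""
    return volume[:thickness].mean(axis=0)

rng = np.random.default_rng(4)
vol = rng.standard_normal((32, 64, 64))     # hypothetical noise-only volume
nps_thin = nps_2d([vol[0]], pixel_size=0.5)
nps_thick = nps_2d([thick_mpr(vol, 16)], pixel_size=0.5)
```

Averaging 16 uncorrelated slices reduces the noise variance, and hence the integral of the measured 2-D NPS, by roughly a factor of 16; slice averaging thus probes the through-plane structure of the 3-D noise that a single thin-slice 2-D NPS cannot see.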
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
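The Lax-Friedrichs splitting underlying the shock-capturing schemes above can be illustrated with a first-order sketch for 1-D linear advection in Python (the paper itself uses second- and fourth-order schemes for the 2-D Euler equations; this is only the base building block, with assumed grid parameters):

```python
import numpy as np

def lax_friedrichs_step(u, c, dx, dt):
    """One Lax-Friedrichs step for u_t + c*u_x = 0 with periodic BCs."""
    up, um = np.roll(u, -1), np.roll(u, 1)          # u_{i+1}, u_{i-1}
    return 0.5 * (up + um) - 0.5 * c * dt / dx * (up - um)

N = 200
x = np.linspace(0.0, 1.0, N, endpoint=False)
u = np.exp(-100.0 * (x - 0.5) ** 2)                 # smooth pulse
dx, c = 1.0 / N, 1.0
dt = 0.5 * dx / c                                   # CFL number 0.5
u1 = lax_friedrichs_step(u, c, dx, dt)
```

Under the CFL condition the update is a convex combination of the neighboring values, so it is conservative and monotone at the cost of strong numerical dissipation, which is what higher-order variants and grid adaptation aim to reduce.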
NASA Technical Reports Server (NTRS)
Wang, Ray (Inventor)
2009-01-01
A method and system for spatial data manipulation input and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver for automatically and adaptively controlling wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both the short and long distances. The wireless transceiver is automatically adaptive and wireless devices can send and receive wireless digital and analog data from various sources rapidly in real-time via available networks and network services.
Adaptive L₁/₂ shooting regularization method for survival analysis using gene expression data.
Liu, Xiao-Ying; Liang, Yong; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak
2013-01-01
A new adaptive L₁/₂ shooting regularization method for variable selection based on the Cox proportional hazards model is proposed. This adaptive L₁/₂ shooting algorithm can be easily obtained by optimizing a reweighted iterative series of L₁ penalties with a shooting strategy for the L₁/₂ penalty. Simulation results based on high-dimensional artificial data show that the adaptive L₁/₂ shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. The results from a real gene expression dataset (DLBCL) also indicate that the L₁/₂ regularization method performs competitively.
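The "shooting" ingredient is coordinate descent with soft thresholding on an L₁ penalty; the adaptive L₁/₂ scheme described above reweights such L₁ problems iteratively. A minimal sketch of plain L₁ shooting on a linear model in Python (synthetic data and hypothetical parameter values; not the authors' full Cox-model algorithm):

```python
import numpy as np

def lasso_shooting(X, y, lam, n_sweeps=100):
    """Shooting (coordinate descent with soft thresholding) for the L1 problem."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]    # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 10))
beta_true = np.zeros(10)
beta_true[:2] = [3.0, -2.0]                         # sparse ground truth
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta = lasso_shooting(X, y, lam=5.0)
```

With a sparse truth, the soft threshold drives the irrelevant coefficients to (near) zero while the active ones are recovered with a small shrinkage bias of order lam/‖x_j‖².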
Adaptation of a-Stratified Method in Variable Length Computerized Adaptive Testing.
ERIC Educational Resources Information Center
Wen, Jian-Bing; Chang, Hua-Hua; Hau, Kit-Tai
Test security has often been a problem in computerized adaptive testing (CAT) because the traditional wisdom of item selection overly exposes high discrimination items. The a-stratified (STR) design advocated by H. Chang and his collaborators, which uses items of less discrimination in earlier stages of testing, has been shown to be very…
Systems and Methods for Derivative-Free Adaptive Control
NASA Technical Reports Server (NTRS)
Yucelen, Tansel (Inventor); Kim, Kilsoo (Inventor); Calise, Anthony J. (Inventor)
2015-01-01
An adaptive control system is disclosed. The control system can control uncertain dynamic systems. The control system can employ one or more derivative-free adaptive control architectures. The control system can further employ one or more derivative-free weight update laws. The derivative-free weight update laws can comprise a time-varying estimate of an ideal vector of weights. The control system of the present invention can therefore quickly stabilize systems that undergo sudden changes in dynamics, caused by, for example, sudden changes in weight. Embodiments of the present invention can also provide a less complex control system than existing adaptive control systems. The control system can control aircraft and other dynamic systems, such as, for example, those with non-minimum phase dynamics.
Study of adaptive methods for data compression of scanner data
NASA Technical Reports Server (NTRS)
1977-01-01
The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.
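The DPCM idea named above can be sketched in one dimension in Python (the study's techniques are adaptive and two-dimensional; this fixed-step sketch only shows the prediction/quantization loop, with hypothetical pixel values):

```python
import numpy as np

def dpcm_encode(row, step):
    """1-D DPCM: predict each pixel by the previous *reconstructed* pixel
    and transmit the quantized prediction error."""
    recon = np.empty(len(row))
    codes = []
    prev = 0.0
    for i, px in enumerate(row):
        q = int(round((px - prev) / step))    # quantized prediction error
        codes.append(q)
        prev += q * step                      # decoder-matched reconstruction
        recon[i] = prev
    return codes, recon

row = np.array([100.0, 102.0, 101.0, 105.0, 110.0, 111.0])
codes, recon = dpcm_encode(row, step=2.0)
```

Because the predictor uses the reconstructed (decoder-side) value rather than the original, quantization error does not accumulate: each reconstructed pixel stays within half a quantizer step of the original.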
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding assess URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
Inner string cementing adapter and method of use
Helms, L.C.
1991-08-20
This patent describes an inner string cementing adapter for use on a work string in a well casing having floating equipment therein. It comprises mandrel means for connecting to a lower end of the work string; and sealing means adjacent to the mandrel means for substantially flatly sealing against a surface of the floating equipment without engaging a central opening in the floating equipment.
An adaptive precision gradient method for optimal control.
NASA Technical Reports Server (NTRS)
Klessig, R.; Polak, E.
1973-01-01
This paper presents a gradient algorithm for unconstrained optimal control problems. The algorithm is stated in terms of numerical integration formulas, the precision of which is controlled adaptively by a test that ensures convergence. Empirical results show that this algorithm is considerably faster than its fixed-precision counterpart.
A New Method to Cancel RFI---The Adaptive Filter
NASA Astrophysics Data System (ADS)
Bradley, R.; Barnbaum, C.
1996-12-01
An increasing amount of precious radio frequency spectrum in the VHF, UHF, and microwave bands is being utilized each year to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Some radio spectral lines of astronomical interest occur outside the protected radio astronomy bands and are unobservable due to heavy interference. Conventional approaches to deal with RFI include legislation, notch filters, RF shielding, and post-processing techniques. Although these techniques are somewhat successful, each suffers from insufficient interference cancellation. One concept of interference excision that has not been used before in radio astronomy is adaptive interference cancellation. The concept of adaptive interference canceling was first introduced in the mid-1970s as a way to reduce unwanted noise in low frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartments of automobiles. Only recently have high-speed digital filter chips made adaptive filtering possible in a bandwidth as large as a few megahertz, finally opening the door to astronomical uses. The system consists of two receivers: the main beam of the radio telescope receives the desired signal corrupted by RFI coming in the sidelobes, and the reference antenna receives only the RFI. The reference-antenna signal is processed by a digital adaptive filter and then subtracted from the signal in the main beam, thus producing the system output. The weights of the digital filter are adjusted by an algorithm that minimizes, in a least-squares sense, the power output of the system. Through an adaptive-iterative process, the interference canceler will lock onto the RFI and the filter will adjust itself to minimize the effect of the RFI at the system output. We are building a prototype 100 MHz receiver and will measure the cancellation
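A minimal LMS sketch of the two-receiver canceller in Python (hypothetical signals and step size; the prototype described above operates on megahertz bandwidths in hardware). The filter is driven to minimize output power, which removes the reference-correlated RFI and leaves the astronomy signal:

```python
import numpy as np

def lms_canceller(primary, reference, n_taps=8, mu=0.01):
    """Adaptive noise canceller: subtract the filtered reference (RFI-only)
    channel from the primary (signal + RFI) channel, adapting by LMS."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]     # tapped delay line
        y = w @ x                             # current estimate of the RFI
        e = primary[n] - y                    # system output
        w += 2.0 * mu * e * x                 # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(1)
t = np.arange(4000)
rfi = np.sin(2.0 * np.pi * 0.05 * t)          # narrowband interferer
signal = 0.1 * rng.standard_normal(len(t))    # weak noise-like signal of interest
primary = signal + 0.8 * rfi
cleaned = lms_canceller(primary, rfi)
```

After convergence the output variance approaches that of the underlying signal alone; the interferer, being correlated with the reference, is the only component the filter can remove.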
The use of the spectral method within the fast adaptive composite grid method
McKay, S.M.
1994-12-31
Efficient algorithms for the solution of partial differential equations have been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on different grids with varying discretizations and using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the ensuing accuracy of this hybrid method outside of the subdomain will be investigated.
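The accuracy gap motivating the hybrid is easy to demonstrate: for smooth periodic data a Fourier pseudo-spectral derivative is accurate to machine precision where a second-order finite difference is not. A short Python sketch (illustrative grid only):

```python
import numpy as np

def spectral_derivative(u, L):
    """Fourier pseudo-spectral derivative of a periodic sample u on [0, L)."""
    k = 2.0 * np.pi * np.fft.fftfreq(len(u), d=L / len(u))
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

N, L = 32, 2.0 * np.pi
x = np.arange(N) * L / N
u = np.sin(x)
du_spec = spectral_derivative(u, L)
du_fd = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * L / N)   # 2nd-order centered
```

On 32 points the centered difference carries an O(h²) error of a few times 1e-3, while the spectral derivative of a fully resolved mode is exact to rounding.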
Adaptive finite element methods for two-dimensional problems in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1994-01-01
Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.
Method and apparatus for adaptive force and position control of manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1989-01-01
The present invention discloses systematic methods and apparatus for the design of real-time controllers. Real-time control employs adaptive force/position control by use of feedforward and feedback controllers; the feedforward controller is the inverse of the linearized model of robot dynamics and contains only proportional-double-derivative terms. The feedback controller, of the proportional-integral-derivative type, ensures that manipulator joints follow reference trajectories and achieves robust tracking of step-plus-exponential trajectories, all in real time. The adaptive controller includes adaptive force and position control within a hybrid control architecture. For force control, the adaptive controller achieves tracking of desired force setpoints, and the adaptive position controller accomplishes tracking of desired position trajectories. Circuits in the adaptive feedback and feedforward controllers are varied by adaptation laws.
A new adaptive time step method for unsteady flow simulations in a human lung.
Fernández-Tena, Ana; Marcos, Alfonso C; Martínez, Cristina; Keith Walters, D
2017-04-07
The innovation presented is a method for adaptive time-stepping that allows clustering of time steps in portions of the cycle for which flow variables are rapidly changing, based on the concept of using a uniform step in a relevant dependent variable rather than a uniform step in the independent variable time. A user-defined function was developed to adapt the magnitude of the time step (adaptive time step) to a defined rate of change in inlet velocity. Quantitative comparison indicates that the new adaptive time stepping method significantly improves accuracy for simulations using an equivalent number of time steps per cycle.
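The uniform-step-in-the-dependent-variable idea can be sketched in Python (a hypothetical sinusoidal inlet-velocity cycle and made-up bounds; the paper implements this as a user-defined function inside a CFD solver): each step is chosen so the inlet velocity changes by roughly a fixed increment, which clusters steps where the velocity changes fastest.

```python
import numpy as np

def adaptive_time_steps(dvdt, t_end, dv_target, dt_min, dt_max):
    """Choose each dt so the velocity changes by about dv_target per step."""
    t, times = 0.0, [0.0]
    while t < t_end:
        rate = abs(dvdt(t))
        dt = float(np.clip(dv_target / (rate + 1e-12), dt_min, dt_max))
        t += dt
        times.append(min(t, t_end))
    return np.array(times)

T = 4.0                                           # breathing period, s (assumed)
def dvdt(t):                                      # d/dt of v(t) = sin(2*pi*t/T)
    return (2.0 * np.pi / T) * np.cos(2.0 * np.pi * t / T)

ts = adaptive_time_steps(dvdt, T, dv_target=0.02, dt_min=1e-3, dt_max=0.1)
d = np.diff(ts)
```

Steps are small where the velocity ramps steeply (near the zero crossings of the cycle) and grow toward dt_max near the flow peaks, where the velocity is nearly stationary.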
NASA Astrophysics Data System (ADS)
Bussetta, Philippe; Marceau, Daniel; Ponthot, Jean-Philippe
2012-02-01
The aim of this work is to propose a new numerical method for solving the mechanical frictional contact problem in the general case of multiple bodies in three-dimensional space. This method, called the adapted augmented Lagrangian method (AALM), can be used in a multiphysics context (such as thermo-electro-mechanical problems). This paper presents the new method and its advantages over classical methods such as the penalty method (PM), the adapted penalty method (APM), and the augmented Lagrangian method (ALM). In addition, the efficiency and reliability of the AALM are demonstrated on some academic problems and on an industrial thermo-electro-mechanical problem.
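The classical augmented Lagrangian iteration that AALM builds on can be sketched on a one-degree-of-freedom frictionless contact in Python (a linear spring pushed against a rigid wall; AALM's adaptive parameter handling is not reproduced, and all values are illustrative):

```python
def augmented_lagrangian_contact(k, f, g, r=1000.0, n_iter=50):
    """Minimize 0.5*k*x**2 - f*x subject to x <= g (rigid wall at g)."""
    lam = 0.0                                  # contact-force multiplier
    x = 0.0
    for _ in range(n_iter):
        # Trial solution assuming the contact is active:
        x_active = (f - lam + r * g) / (k + r)
        if lam + r * (x_active - g) > 0.0:     # contact pressure positive?
            x = x_active
        else:
            x = f / k                          # unconstrained minimum
        lam = max(0.0, lam + r * (x - g))      # multiplier update
    return x, lam

x, lam = augmented_lagrangian_contact(k=100.0, f=500.0, g=1.0)
```

Unlike a pure penalty method, the multiplier update enforces the contact constraint exactly in the limit (here x → g = 1 and lam → f − k·g = 400) without driving the penalty parameter r to infinity.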
Surface estimation methods with phased-arrays for adaptive ultrasonic imaging in complex components
NASA Astrophysics Data System (ADS)
Robert, S.; Calmon, P.; Calvo, M.; Le Jeune, L.; Iakovleva, E.
2015-03-01
Immersion ultrasonic testing of structures with complex geometries may be significantly improved by using phased arrays and specific adaptive algorithms that make it possible to image flaws under a complex and unknown interface. In this context, this paper presents a comparative study of the different Surface Estimation Methods (SEM) available in the CIVA software and used for adaptive imaging. These methods are based either on time-of-flight measurements or on image processing. We also introduce a generalized adaptive method in which flaws may be fully imaged with half-skip modes. In this method, both the surface and the back wall of a complex structure are estimated before imaging the flaws.
Lingel, Christian; Haist, Tobias; Osten, Wolfgang
2016-12-20
We propose an adaptive optical setup using a spatial light modulator (SLM), which is suitable for performing different phase retrieval methods with varying optical features and without mechanical movement. With this approach, it is possible to test many different phase retrieval methods and their parameters (optical and algorithmic) using one stable setup and without hardware adaptation. We show exemplary results for the well-known transport of intensity equation (TIE) method and a new iterative adaptive phase retrieval method, in which the object phase is canceled by an inverse phase written into part of the SLM. The measurement results are compared to white-light interferometric measurements.
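For the TIE method mentioned above, a minimal Fourier-based sketch in Python under the uniform-intensity assumption, where −k·∂I/∂z = I₀·∇²φ and the Laplacian is inverted spectrally (the intensity derivative is synthesized here from a known phase for a self-consistent check; wavelength, pixel size, and the phase itself are made up):

```python
import numpy as np

def tie_phase(didz, i0, wavelength, dx):
    """Recover phase from the transport of intensity equation (sketch),
    assuming uniform intensity i0:  -k * dI/dz = i0 * laplacian(phi)."""
    k = 2.0 * np.pi / wavelength
    n = didz.shape[0]
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    k2 = kx[:, None] ** 2 + kx[None, :] ** 2
    k2[0, 0] = 1.0                       # avoid /0; the DC phase is arbitrary
    rhs = -k * didz / i0                 # equals laplacian(phi)
    phi = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (-k2)))
    return phi - phi.mean()

# Synthesize a consistent dI/dz from a known periodic phase (assumed setup).
n, dx, wl, i0 = 64, 1.0e-6, 0.5e-6, 1.0
xx = np.arange(n) * dx
X, Y = np.meshgrid(xx, xx, indexing="ij")
phi_true = np.sin(2 * np.pi * X / (n * dx)) + 0.5 * np.cos(4 * np.pi * Y / (n * dx))
kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
k2 = kx[:, None] ** 2 + kx[None, :] ** 2
lap = np.real(np.fft.ifft2(-k2 * np.fft.fft2(phi_true)))   # spectral laplacian
didz = -(i0 / (2.0 * np.pi / wl)) * lap
phi = tie_phase(didz, i0, wl, dx)
```

In practice ∂I/∂z is estimated from two or more defocused intensity images, and the unknown DC term means the phase is recovered only up to an additive constant.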
Nonlinear mode decomposition: A noise-robust, adaptive decomposition method
NASA Astrophysics Data System (ADS)
Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta
2015-09-01
The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.
Investigating Item Exposure Control Methods in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Ozturk, Nagihan Boztunc; Dogan, Nuri
2015-01-01
This study aims to investigate the effects of item exposure control methods on measurement precision and on test security under various item selection methods and item pool characteristics. In this study, the Randomesque (with item group sizes of 5 and 10), Sympson-Hetter, and Fade-Away methods were used as item exposure control methods. Moreover,…
NASA Astrophysics Data System (ADS)
Ng, C. S.; Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Bhattacharjee, A.
2009-04-01
A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] is applied to simulate the problem of MHD island coalescence instability in two dimensions. Island coalescence is a fundamental MHD process that can produce sharp current layers and subsequent reconnection and heating in a high-Lundquist-number plasma such as the solar corona [Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Due to the formation of thin current layers, it is highly desirable to use adaptively or statically refined grids to resolve them while maintaining accuracy. The output of the spectral-element static adaptive refinement simulations is compared with simulations using a finite difference method on the same refinement grids, and both methods are compared to pseudo-spectral simulations with uniform grids as baselines. It is shown that with statically refined grids scaling roughly linearly with effective resolution, spectral-element runs can maintain accuracy significantly higher than that of the finite difference runs, in some cases achieving close to full spectral accuracy.
An examination of an adapter method for measuring the vibration transmitted to the human arms.
Xu, Xueyan S; Dong, Ren G; Welcome, Daniel E; Warren, Christopher; McDowell, Thomas W
2015-09-01
The objective of this study is to evaluate an adapter method for measuring the vibration on the human arms. Four instrumented adapters with different weights were used to measure the vibration transmitted to the wrist, forearm, and upper arm of each subject. Each adapter was attached at each location on the subjects using an elastic cloth wrap. Two laser vibrometers were also used to measure the transmitted vibration at each location to evaluate the validity of the adapter method. The apparent mass at the palm of the hand along the forearm direction was also measured to enhance the evaluation. This study found that the adapter and laser-measured transmissibility spectra were comparable with some systematic differences. While increasing the adapter mass reduced the resonant frequency at the measurement location, increasing the tightness of the adapter attachment increased the resonant frequency. However, the use of lightweight (≤15 g) adapters under medium attachment tightness did not change the basic trends of the transmissibility spectrum. The resonant features observed in the transmissibility spectra were also correlated with those observed in the apparent mass spectra. Because the local coordinate systems of the adapters may be significantly misaligned relative to the global coordinates of the vibration test systems, large errors were observed for the adapter-measured transmissibility in some individual orthogonal directions. This study, however, also demonstrated that the misalignment issue can be resolved by either using the total vibration transmissibility or by measuring the misalignment angles to correct the errors. Therefore, the adapter method is acceptable for understanding the basic characteristics of the vibration transmission in the human arms, and the adapter-measured data are acceptable for approximately modeling the system.
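The misalignment fix via total transmissibility follows from rotation invariance of the vector norm: the three axis components change with adapter orientation, but their root-sum-square does not. A small Python sketch treating the per-axis responses as components of a vector at each frequency (synthetic spectra, illustrative only):

```python
import numpy as np

def total_transmissibility(axis_spectra):
    """Root-sum-square of the three orthogonal axis spectra at each frequency."""
    return np.sqrt(np.sum(np.abs(axis_spectra) ** 2, axis=0))

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

rng = np.random.default_rng(3)
spectra = rng.random((3, 50))               # x, y, z spectra in the adapter frame
misaligned = rotation_z(0.4) @ spectra      # same motion, misaligned frame
```

Per-axis values can therefore be badly wrong when the adapter's local coordinate system is rotated relative to the test system, while the total transmissibility is unaffected.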
A new and efficient method to obtain benzalkonium chloride adapted cells of Listeria monocytogenes.
Saá Ibusquiza, Paula; Herrera, Juan J R; Vázquez-Sánchez, Daniel; Parada, Adelaida; Cabo, Marta L
2012-10-01
A new method to obtain benzalkonium chloride (BAC) adapted L. monocytogenes cells was developed. A factorial design was used to assess the effects of the inoculum size and BAC concentration on the adaptation (measured in terms of the lethal dose 50, LD50) of 6 strains of Listeria monocytogenes after only one exposure. The proposed method could be applied successfully to the L. monocytogenes strains with higher adaptive capacity to BAC. In those cases, a significant empirical equation was obtained, showing a positive effect of the inoculum size and a positive interaction between the effects of BAC and inoculum size on the level of adaptation achieved. However, a slight negative effect of BAC itself was also significant. The proposed method improves on the classical method based on successive stationary-phase cultures in sublethal BAC concentrations because it is less time-consuming and more effective. For the laboratory strain L. monocytogenes 5873, the new procedure increased BAC adaptation 3.69-fold in only 33 h, whereas the classical procedure reached a 2.61-fold increase after 5 days. Moreover, with the new method, the maximum level of adaptation was determined for all the strains, surprisingly reaching almost the same concentration of BAC (mg/l) for 5 out of 6 strains. Thus, a good reference for establishing the effective concentrations of biocides to ensure the maximum level of adaptation was also determined.
Analysis of modified SMI method for adaptive array weight control
NASA Technical Reports Server (NTRS)
Dilsavor, R. L.; Moses, R. L.
1989-01-01
An adaptive array is applied to the problem of receiving a desired signal in the presence of weak interference signals which need to be suppressed. A modification, suggested by Gupta, of the sample matrix inversion (SMI) algorithm controls the array weights. In the modified SMI algorithm, interference suppression is increased by subtracting a fraction F of the noise power from the diagonal elements of the estimated covariance matrix. Given the true covariance matrix and the desired signal direction, the modified algorithm is shown to maximize a well-defined, intuitive output power ratio criterion. Expressions are derived for the expected value and variance of the array weights and output powers as a function of the fraction F and the number of snapshots used in the covariance matrix estimate. These expressions are compared with computer simulation and good agreement is found. A trade-off is found to exist between the desired level of interference suppression and the number of snapshots required in order to achieve that level with some certainty. The removal of noise eigenvectors from the covariance matrix inverse is also discussed with respect to this application. Finally, the type and severity of errors which occur in the covariance matrix estimate are characterized through simulation.
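The core of the modified algorithm, subtracting a fraction F of the noise power from the diagonal of the estimated covariance before forming the weights, can be sketched in a few lines. The array size, noise-only snapshot model, and unit-gain normalization below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def modified_smi_weights(snapshots, steering, noise_power, F):
    """Modified SMI (after Gupta): subtract a fraction F of the noise power
    from the diagonal of the estimated covariance matrix before inversion,
    which increases interference suppression."""
    K = snapshots.shape[1]
    # Sample covariance estimated from K array snapshots (columns).
    R_hat = snapshots @ snapshots.conj().T / K
    # Diagonal modification: remove F times the noise power.
    R_mod = R_hat - F * noise_power * np.eye(R_hat.shape[0])
    w = np.linalg.solve(R_mod, steering)
    # Normalize for unit response toward the desired signal direction.
    return w / (steering.conj() @ w)

rng = np.random.default_rng(0)
N, K = 4, 200
s = np.ones(N, dtype=complex)   # steering vector for a broadside desired signal
noise = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
w = modified_smi_weights(noise, s, noise_power=1.0, F=0.5)
gain = abs(s.conj() @ w)        # equals 1 by construction of the normalization
```

With interference present in the snapshots, the same weights place deeper nulls on the interferers as F is raised toward 1.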
Parallel architectures for iterative methods on adaptive, block structured grids
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1983-01-01
A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism, but this parallelism can be difficult to exploit, particularly on complex problems. One approach to extracting this parallelism is the use of special-purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions; all local parallelism can be extracted by this approach. Second, even though the constructed grids may lack a regular global structure, there is still parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.
Mixed Methods in Intervention Research: Theory to Adaptation
ERIC Educational Resources Information Center
Nastasi, Bonnie K.; Hitchcock, John; Sarkar, Sreeroopa; Burkholder, Gary; Varjas, Kristen; Jayasena, Asoka
2007-01-01
The purpose of this article is to demonstrate the application of mixed methods research designs to multiyear programmatic research and development projects whose goals include integration of cultural specificity when generating or translating evidence-based practices. The authors propose a set of five mixed methods designs related to different…
Adaptive Discontinuous Evolution Galerkin Method for Dry Atmospheric Flow
2013-04-02
Comparisons with the standard one-dimensional approximate Riemann solver used for the flux integration demonstrate better stability, accuracy, and reliability of the adaptive discontinuous evolution Galerkin method for dry atmospheric convection. Instead of a standard one-dimensional approximate Riemann solver, the flux integration within the discontinuous Galerkin method is realized by the evolution Galerkin operator.
Speckle reduction in optical coherence tomography by adaptive total variation method
NASA Astrophysics Data System (ADS)
Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun
2015-12-01
An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT images is investigated and measured. With the measured parameters, such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
An adaptation of Krylov subspace methods to path following
Walker, H.F.
1996-12-31
Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
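The predictor-corrector structure with the tangent-orthogonality condition can be made concrete with a small dense-algebra sketch. A direct solve stands in here for the Krylov iteration the abstract is actually about, and the circle test curve is our own example:

```python
import numpy as np

def trace_curve(F, J, z0, steps=50, h=0.1):
    """Predictor-corrector path following for F(z) = 0 with z in R^(n+1):
    Euler predictor along the tangent, then Newton correctors whose steps are
    constrained to be orthogonal to that tangent (the augmenting condition)."""
    z = np.asarray(z0, dtype=float)
    path = [z.copy()]
    t_prev = None
    for _ in range(steps):
        # Tangent: null vector of the n x (n+1) Jacobian, via SVD.
        _, _, Vt = np.linalg.svd(J(z))
        t = Vt[-1]
        if t_prev is not None and t @ t_prev < 0:
            t = -t                          # keep a consistent orientation
        z_pred = z + h * t                  # predictor step
        for _ in range(10):                 # correctors: [J; t^T] dz = [-F; 0]
            A = np.vstack([J(z_pred), t])
            rhs = np.append(-F(z_pred), 0.0)
            dz = np.linalg.solve(A, rhs)
            z_pred = z_pred + dz
            if np.linalg.norm(dz) < 1e-12:
                break
        z, t_prev = z_pred, t
        path.append(z.copy())
    return np.array(path)

# Example curve: the unit circle F(x, y) = x^2 + y^2 - 1 = 0.
F = lambda z: np.array([z[0] ** 2 + z[1] ** 2 - 1.0])
J = lambda z: np.array([[2.0 * z[0], 2.0 * z[1]]])
path = trace_curve(F, J, [1.0, 0.0])
```

In the work described above, the augmented system would instead be solved (and the constraint imposed) inside a preconditioned Krylov iteration.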
USEPA ambient air monitoring methods for volatile organic compounds (VOCs) using specially-prepared canisters and solid adsorbents are directly adaptable to monitoring for vapors in the indoor environment. The draft Method TO-15 Supplement, an extension of the USEPA Method TO-15,...
Adapting Western research methods to indigenous ways of knowing.
Simonds, Vanessa W; Christopher, Suzanne
2013-12-01
Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid.
Automatic multirate methods for ordinary differential equations. [Adaptive time steps
Gear, C.W.
1980-01-01
A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if a system is sparse; however, the estimation and control of errors is very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising. 2 figures.
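A toy two-rate integrator illustrates the idea of using different step sizes for different members of a system. The forward-Euler base method, the frozen-coupling strategy, and the test system are illustrative choices, not Gear's algorithm:

```python
def multirate_euler(f_slow, f_fast, ys, yf, t_end, H, m):
    """Two-rate forward Euler: the slow variable advances with macro-steps of
    size H while the fast variable takes m micro-steps of size H/m inside each
    macro-step, with the slow value held frozen during the micro-steps. The
    saving comes from evaluating the expensive slow derivative only once per
    macro-step."""
    n_macro = round(t_end / H)
    h = H / m
    for _ in range(n_macro):
        ys_new = ys + H * f_slow(ys, yf)     # one macro-step for the slow part
        for _ in range(m):                   # m cheap micro-steps for the fast part
            yf = yf + h * f_fast(ys, yf)     # coupling to the slow value frozen
        ys = ys_new
    return ys, yf

# Stiffly separated test system: ys' = -ys (slow), yf' = -50*yf (fast).
ys, yf = multirate_euler(lambda s, f: -s, lambda s, f: -50.0 * f,
                         ys=1.0, yf=1.0, t_end=1.0, H=0.01, m=10)
```

The error-control difficulty the abstract mentions arises because the frozen-coupling and interpolation errors between the two rates must be estimated on top of the usual local truncation errors.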
Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control
NASA Technical Reports Server (NTRS)
Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)
2015-01-01
Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.
ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve
Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk
2014-01-01
In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725
Adaptive error covariances estimation methods for ensemble Kalman filters
Zhen, Yicun; Harlim, John
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, for use in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computation of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When only products of innovation processes up to one lag are used, the computational cost is comparable to the recently proposed method of Berry and Sauer. Our method is more flexible, however, since it allows information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and the Berry-Sauer schemes are shown for various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of accurate estimates than the Berry-Sauer method on the L-96 example.
Adaptive entropy-constrained discontinuous Galerkin method for simulation of turbulent flows
NASA Astrophysics Data System (ADS)
Lv, Yu; Ihme, Matthias
2015-11-01
A robust and adaptive computational framework is presented for high-fidelity simulations of turbulent flows based on the discontinuous Galerkin (DG) scheme. For this, an entropy-residual based adaptation indicator is proposed to enable adaptation in polynomial and physical space. The performance and generality of this entropy-residual indicator are evaluated through direct comparisons with classical indicators. In addition, a dynamic load balancing procedure is developed to improve computational efficiency. The adaptive framework is tested on a series of turbulent cases, including homogeneous isotropic turbulence, channel flow, and flow over a cylinder. The accuracy, performance, and scalability are assessed, and the benefit of this adaptive high-order method is discussed. Funding from an NSF CAREER award is gratefully acknowledged.
A high-throughput multiplex method adapted for GMO detection.
Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique
2008-12-24
A high-throughput multiplex assay for the detection of genetically modified organisms (GMO) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") from taxa endogenous reference genes, from GMO constructions, screening targets, construct-specific, and event-specific targets, and finally from donor organisms. This assay avoids certain shortcomings of multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods
Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
An Adaptive Kalman Filter using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
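The residual tuning idea, running a correction for the tuning parameters in parallel with the filter, can be sketched for a scalar random-walk model. The stochastic-approximation update for the process noise below is an illustrative stand-in, not the flight algorithm:

```python
import numpy as np

def residual_tuned_kf(zs, q0=1e-4, r=0.01, alpha=0.01):
    """Scalar random-walk Kalman filter whose process noise q is tuned online
    from the measurement residuals: mismodeled tuning parameters leave the
    innovations inconsistent with their predicted variance s, and q is nudged
    toward consistency in parallel with the filter recursion."""
    x, p, q = zs[0], 1.0, q0
    estimates = []
    for z in zs:
        p = p + q                      # predict (random-walk state model)
        s = p + r                      # predicted innovation variance
        nu = z - x                     # innovation (measurement residual)
        # Residual-tuning update: drive E[nu^2] toward s (assumed scheme).
        q = max(1e-12, q + alpha * (nu ** 2 - s))
        k = p / s                      # Kalman gain
        x = x + k * nu                 # measurement update
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates), q

# Truth: random walk with process std 0.2, measured with noise std 0.1,
# filtered starting from a badly underestimated process noise q0.
rng = np.random.default_rng(0)
truth = np.cumsum(0.2 * rng.standard_normal(2000))
zs = truth + 0.1 * rng.standard_normal(2000)
est, q_adapted = residual_tuned_kf(zs)
```

In the star tracker/gyro application, analogous sequential equations estimate corrections to both the process and measurement noise matrices.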
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
The Pilates method and cardiorespiratory adaptation to training.
Tinoco-Fernández, Maria; Jiménez-Martín, Miguel; Sánchez-Caravaca, M Angeles; Fernández-Pérez, Antonio M; Ramírez-Rodrigo, Jesús; Villaverde-Gutiérrez, Carmen
2016-01-01
Although all authors report beneficial health changes following training based on the Pilates method, no explicit analysis has been performed of its cardiorespiratory effects. The objective of this study was to evaluate possible changes in cardiorespiratory parameters with the Pilates method. A total of 45 university students aged 18-35 years (77.8% female and 22.2% male), who did not routinely practice physical exercise or sports, volunteered for the study and signed informed consent. The Pilates training was conducted over 10 weeks, with three 1-hour sessions per week. Physiological cardiorespiratory responses were assessed using a MasterScreen CPX apparatus. After the 10-week training, statistically significant improvements were observed in mean heart rate (135.4-124.2 beats/min), respiratory exchange ratio (1.1-0.9) and oxygen equivalent (30.7-27.6) values, among other spirometric parameters, in submaximal aerobic testing. These findings indicate that practice of the Pilates method has a positive influence on cardiorespiratory parameters in healthy adults who do not routinely practice physical exercise activities.
Restrictive Stochastic Item Selection Methods in Cognitive Diagnostic Computerized Adaptive Testing
ERIC Educational Resources Information Center
Wang, Chun; Chang, Hua-Hua; Huebner, Alan
2011-01-01
This paper proposes two new item selection methods for cognitive diagnostic computerized adaptive testing: the restrictive progressive method and the restrictive threshold method. They are built upon the posterior weighted Kullback-Leibler (KL) information index but include additional stochastic components either in the item selection index or in…
Self-Adaptive Filon's Integration Method and Its Application to Computing Synthetic Seismograms
NASA Astrophysics Data System (ADS)
Zhang, Hai-Ming; Chen, Xiao-Fei
2001-03-01
Based on the principle of the self-adaptive Simpson integration method, and by incorporating the "fifth-order" Filon's integration algorithm [Bull. Seism. Soc. Am. 73 (1983) 913], we have proposed a simple and efficient numerical integration method, the self-adaptive Filon's integration method (SAFIM), for computing synthetic seismograms at large epicentral distances. With numerical examples, we have demonstrated that the SAFIM is not only accurate but also very efficient. This new integration method is expected to be very useful in seismology, as well as in computing similar oscillatory integrals in other branches of physics.
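The self-adaptive subdivision principle borrowed from Simpson integration is easy to show in code. The sketch below adapts plain Simpson panels; SAFIM itself replaces the panel rule with fifth-order Filon quadrature for oscillatory integrands:

```python
import math

def adaptive_simpson(f, a, b, tol=1e-10):
    """Self-adaptive Simpson quadrature: each interval is split until the
    two-panel estimate agrees with the one-panel estimate to the local
    tolerance; accepted panels receive a Richardson correction term."""
    def panel(a, b):
        c = 0.5 * (a + b)
        return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

    def recurse(a, b, whole, tol):
        c = 0.5 * (a + b)
        left, right = panel(a, c), panel(c, b)
        err = left + right - whole
        if abs(err) < 15.0 * tol:          # standard Simpson error heuristic
            return left + right + err / 15.0
        # Subdivide, halving the tolerance budget for each half.
        return recurse(a, c, left, 0.5 * tol) + recurse(c, b, right, 0.5 * tol)

    return recurse(a, b, panel(a, b), tol)

val = adaptive_simpson(math.sin, 0.0, math.pi)   # exact value is 2
```

The adaptivity concentrates subdivisions where the integrand varies rapidly, which is what makes the Filon variant efficient for the slowly convergent oscillatory integrals in synthetic seismogram computation.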
NASA Astrophysics Data System (ADS)
Tanizawa, Ken; Hirose, Akira
Adaptive polarization mode dispersion (PMD) compensation is required for higher speeds and continued advancement of present optical communications. The combination of a tunable PMD compensator and an adaptive control method achieves adaptive PMD compensation. In this paper, we report an effective search-control algorithm for the feedback control of the PMD compensator. The algorithm is based on the hill-climbing method; unlike the conventional hill-climbing method, however, the step size changes randomly to prevent the convergence from being trapped at a local maximum or on a flat region. The randomness follows Gaussian probability density functions. We conducted transmission simulations at 160 Gb/s, and the results show that the proposed method controls the compensator more effectively than the conventional hill-climbing method.
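A generic sketch of hill climbing with Gaussian-random step sizes follows. The toy objective and parameter values are placeholders; in the paper the feedback signal is a measured PMD-compensation metric:

```python
import numpy as np

def gaussian_hill_climb(f, x0, sigma=0.3, iters=3000, seed=0):
    """Hill climbing with Gaussian-random step sizes: each trial step is drawn
    from a zero-mean normal distribution instead of using a fixed step, so
    occasional large steps help escape flat regions and local maxima that trap
    fixed-step hill climbing."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        candidate = x + rng.normal(0.0, sigma, size=x.shape)
        fc = f(candidate)
        if fc > fx:                       # greedy: keep only improvements
            x, fx = candidate, fc
    return x, fx

# Toy feedback signal with a maximum of 1.0 at x = (2, -1).
target = np.array([2.0, -1.0])
f = lambda x: 1.0 / (1.0 + np.sum((x - target) ** 2))
x_best, f_best = gaussian_hill_climb(f, x0=[0.0, 0.0])
```

In the compensator setting, x would be the tunable control voltages and f the monitored signal quality, evaluated on the live system at each trial step.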
A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures
Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George
2012-01-01
We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.
Adaptive bit truncation and compensation method for EZW image coding
NASA Astrophysics Data System (ADS)
Dai, Sheng-Kui; Zhu, Guangxi; Wang, Yao
2003-09-01
The embedded zero-tree wavelet (EZW) algorithm is widely adopted to compress wavelet coefficients of images, with the property that the bit stream can be truncated at any point. The lower bit planes of the wavelet coefficients are verified to be less important than the higher bit planes, and can therefore be truncated rather than encoded. Based on experiments, a generalized function is deduced in this paper that gives the EZW encoder a rough guide for intelligently deciding how many low bit planes to truncate. In the EZW decoder, a simple method is presented to compensate for the truncated wavelet coefficients; surprisingly, it enhances the quality of the reconstructed image while incurring scarcely any additional cost.
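The truncation and compensation idea can be sketched on integer coefficients. The plane count, coefficient range, and half-range (midpoint) compensation rule are illustrative assumptions rather than the paper's deduced function:

```python
import numpy as np

def truncate_bit_planes(coeffs, k):
    """Drop the k (>= 1) least-significant bit planes of integer coefficients,
    magnitude-wise with the sign preserved; these planes are simply not
    encoded."""
    return np.sign(coeffs) * (np.abs(coeffs) >> k)

def compensate(truncated, k):
    """Decoder-side compensation: shift magnitudes back up and add half of the
    dropped range (midpoint reconstruction) to every nonzero magnitude."""
    mag = np.abs(truncated) << k
    mag = np.where(mag > 0, mag + (1 << (k - 1)), 0)
    return np.sign(truncated) * mag

rng = np.random.default_rng(0)
coeffs = rng.integers(-255, 256, size=4096)      # stand-in wavelet coefficients
k = 3
trunc = truncate_bit_planes(coeffs, k)
plain = np.sign(trunc) * (np.abs(trunc) << k)    # reconstruction w/o compensation
comp = compensate(trunc, k)
mse = lambda a, b: float(np.mean((a - b) ** 2))
```

Midpoint compensation halves the worst-case rounding error of plain zero-filling, which is why the decoder-side fix improves quality at essentially no cost.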
An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
1999-01-01
An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.
Impedance adaptation methods of the piezoelectric energy harvesting
NASA Astrophysics Data System (ADS)
Kim, Hyeoungwoo
In this study, the important issues of energy recovery were addressed and a comprehensive investigation was performed on harvesting electrical power from an ambient mechanical vibration source. Also discussed are the impedance matching methods used to increase the efficiency of energy transfer from the environment to the application. Initially, the mechanical impedance matching method was investigated to increase the mechanical energy transferred from the environment to the transducer. This was done by reducing mechanical impedance factors such as the damping factor and the energy reflection ratio. The vibration source and the transducer were modeled as a two-degree-of-freedom dynamic system with mass, spring constant, and damper. The transmissibility, employed to show how much mechanical energy was transferred in this system, was affected by the damping ratio and the stiffness of the elastic materials. The mechanical impedance of the system was described by an equivalent electrical system, using the analogy between the two systems, in order to simplify the total mechanical impedance. Secondly, the transduction rate from mechanical to electrical energy was improved by using a PZT material with a high figure of merit and a high electromechanical coupling factor for electrical power generation, and a piezoelectric transducer with a high transduction rate was designed and fabricated. A high-g material (g33 = 40 [10^-3 Vm/N]) was developed to improve the figure of merit of the PZT ceramics. The cymbal composite transducer has been found to be a promising structure for piezoelectric energy harvesting under high force at cyclic conditions (10-200 Hz), because it has an almost 40 times higher effective strain coefficient than PZT ceramics. The endcap of the cymbal also enhances the endurance of the ceramic to sustain AC loading, along with stress amplification. In addition, a macro fiber composite (MFC) was employed as a strain component because of its flexibility and high electromechanical coupling
A self-adaptive-grid method with application to airfoil flow
NASA Technical Reports Server (NTRS)
Nakahashi, K.; Deiwert, G. S.
1985-01-01
A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
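A one-dimensional sketch shows the spring analogy with user-specified minimum and maximum spacings. The weight function, relaxation scheme, and endpoint rescaling step are our assumptions:

```python
import numpy as np

def spring_redistribute(x, weight, iters=200, relax=0.5, dmin=0.005, dmax=0.2):
    """1D spring-analogy grid adaptation: each interval acts as a spring whose
    stiffness is the local weight (an error indicator), so relaxing toward
    spring equilibrium clusters points where the weight is large. The
    user-specified minimum/maximum spacings constrain the adaptation, and the
    endpoints stay fixed."""
    x = np.asarray(x, dtype=float).copy()
    a, b = x[0], x[-1]
    for _ in range(iters):
        # Spring stiffness per interval from the weight at interval endpoints.
        w = 0.5 * (weight(x[:-1]) + weight(x[1:]))
        # Relax interior nodes toward the equilibrium of their two springs.
        x_eq = (w[:-1] * x[:-2] + w[1:] * x[2:]) / (w[:-1] + w[1:])
        x[1:-1] += relax * (x_eq - x[1:-1])
        # Enforce the spacing bounds, then restore the fixed endpoints.
        d = np.clip(np.diff(x), dmin, dmax)
        x = a + np.concatenate(([0.0], np.cumsum(d)))
        x = a + (x - a) * (b - a) / (x[-1] - a)
    return x

# Cluster the points of an initially uniform grid around a "shock" at 0.5.
weight = lambda s: 1.0 + 50.0 * np.exp(-200.0 * (s - 0.5) ** 2)
x_adapted = spring_redistribute(np.linspace(0.0, 1.0, 41), weight)
```

The min/max spacing bounds play the role of the user-specified constants in the abstract: they define the constraint limits so the redistribution stays self-adaptive rather than empirically tuned.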
NASA Astrophysics Data System (ADS)
Susanti, D.; Hartini, E.; Permana, A.
2017-01-01
Growing sales competition among companies in Indonesia means that every company needs proper planning in order to win the competition with other companies. One way to support such planning is to forecast car sales for the next few periods, so that the inventory of cars stocked is proportional to the number of cars needed. One method that can be used to obtain accurate forecasts is Adaptive Spline Threshold Autoregression (ASTAR). This discussion therefore focuses on the use of the ASTAR method to forecast the volume of car sales at PT. Srikandi Diamond Motors using time series data. In this research, forecasting with the ASTAR method produces reasonably accurate values.
Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D
2012-10-01
Accurate predictive simulations of complex real-world applications require numerical approximations that, first, counter the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations, or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus, optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.
Anderson, R W; Pember, R B; Elliott, N S
2001-10-22
A new method that combines staggered-grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. By focusing computational resources where they are required through dynamic adaptation, this method facilitates the solution of problems at and beyond the current limits of traditional ALE methods. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered-grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.
ERIC Educational Resources Information Center
Wang, Ze; Rohrer, David; Chuang, Chi-ching; Fujiki, Mayo; Herman, Keith; Reinke, Wendy
2015-01-01
This study compared 5 scoring methods in terms of their statistical assumptions. They were then used to score the Teacher Observation of Classroom Adaptation Checklist, a measure consisting of 3 subscales and 21 Likert-type items. The 5 methods used were (a) sum/average scores of items, (b) latent factor scores with continuous indicators, (c)…
Recent advances in high-performance modeling of plasma-based acceleration using the full PIC method
NASA Astrophysics Data System (ADS)
Vay, J.-L.; Lehe, R.; Vincenti, H.; Godfrey, B. B.; Haber, I.; Lee, P.
2016-09-01
Numerical simulations have been critical in the recent rapid developments of plasma-based acceleration concepts. Among the various available numerical techniques, the particle-in-cell (PIC) approach is the method of choice for self-consistent simulations from first principles. The fundamentals of the PIC method were established decades ago, but improvements or variations are continuously being proposed. We report on several recent advances in PIC-related algorithms that are of interest for application to plasma-based accelerators, including (a) detailed analysis of the numerical Cherenkov instability and its remediation for the modeling of plasma accelerators in laboratory and Lorentz boosted frames, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, and (c) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of perfectly matched layers in high-order and pseudo-spectral solvers.
An adaptive, formally second order accurate version of the immersed boundary method
NASA Astrophysics Data System (ADS)
Griffith, Boyce E.; Hornung, Richard D.; McQueen, David M.; Peskin, Charles S.
2007-04-01
Like many problems in biofluid mechanics, cardiac mechanics can be modeled as the dynamic interaction of a viscous incompressible fluid (the blood) and a (visco-)elastic structure (the muscular walls and the valves of the heart). The immersed boundary method is a mathematical formulation and numerical approach to such problems that was originally introduced to study blood flow through heart valves, and extensions of this work have yielded a three-dimensional model of the heart and great vessels. In the present work, we introduce a new adaptive version of the immersed boundary method. This adaptive scheme employs the same hierarchical structured grid approach (but a different numerical scheme) as the two-dimensional adaptive immersed boundary method of Roma et al. [A multilevel self adaptive version of the immersed boundary method, Ph.D. Thesis, Courant Institute of Mathematical Sciences, New York University, 1996; An adaptive version of the immersed boundary method, J. Comput. Phys. 153 (2) (1999) 509-534] and is based on a formally second order accurate (i.e., second order accurate for problems with sufficiently smooth solutions) version of the immersed boundary method that we have recently described [B.E. Griffith, C.S. Peskin, On the order of accuracy of the immersed boundary method: higher order convergence rates for sufficiently smooth problems, J. Comput. Phys. 208 (1) (2005) 75-105]. Actual second order convergence rates are obtained for both the uniform and adaptive methods by considering the interaction of a viscous incompressible flow and an anisotropic incompressible viscoelastic shell. We also present initial results from the application of this methodology to the three-dimensional simulation of blood flow in the heart and great vessels. The results obtained by the adaptive method show good qualitative agreement with simulation results obtained by earlier non-adaptive versions of the method, but the flow in the vicinity of the model heart valves
Adaptive methods: when and how should they be used in clinical trials?
Porcher, Raphaël; Lecocq, Brigitte; Vray, Muriel
2011-01-01
Adaptive clinical trial designs are defined as designs that use data accumulated during the trial to possibly modify certain aspects without compromising the validity and integrity of the said trial. Compared to more traditional trials, in theory, adaptive designs allow the same information to be generated but in a more efficient manner. The advantages and limits of this type of design, together with the weight of the constraints (in particular logistic ones) that their use implies, differ depending on whether the trial is exploratory or confirmatory with a view to registration. One of the key elements ensuring trial integrity is the involvement of an independent committee to determine adaptations of the experimental design during the study. Adaptive methods for clinical trials are appealing and may be accepted by the relevant authorities. However, the constraints that they impose must be determined well in advance.
An h-adaptive local discontinuous Galerkin method for the Navier-Stokes-Korteweg equations
NASA Astrophysics Data System (ADS)
Tian, Lulu; Xu, Yan; Kuerten, J. G. M.; van der Vegt, J. J. W.
2016-08-01
In this article, we develop a mesh adaptation algorithm for a local discontinuous Galerkin (LDG) discretization of the (non)-isothermal Navier-Stokes-Korteweg (NSK) equations modeling liquid-vapor flows with phase change. This work is a continuation of our previous research, where we proposed LDG discretizations for the (non)-isothermal NSK equations with a time-implicit Runge-Kutta method. To save computing time and to capture the thin interfaces more accurately, we extend the LDG discretization with a mesh adaptation method. Given the current adapted mesh, a criterion for selecting candidate elements for refinement and coarsening is adopted based on the locally largest value of the density gradient. A strategy to refine and coarsen the candidate elements is then provided. We emphasize that the adaptive LDG discretization is relatively simple and does not require additional stabilization. The use of a locally refined mesh in combination with an implicit Runge-Kutta time method is, however, non-trivial, but results in an efficient time integration method for the NSK equations. Computations, including cases with solid wall boundaries, are provided to demonstrate the accuracy, efficiency and capabilities of the adaptive LDG discretizations.
Adaptive remeshing method in 2D based on refinement and coarsening techniques
NASA Astrophysics Data System (ADS)
Giraud-Moreau, L.; Borouchaki, H.; Cherouat, A.
2007-04-01
The analysis of mechanical structures using the Finite Element Method, in the framework of large elastoplastic strains, needs frequent remeshing of the deformed domain during computation. Remeshing is necessary for two main reasons, the large geometric distortion of finite elements and the adaptation of the mesh size to the physical behavior of the solution. This paper presents an adaptive remeshing method to remesh a mechanical structure in two dimensions subjected to large elastoplastic deformations with damage. The proposed remeshing technique includes adaptive refinement and coarsening procedures, based on geometrical and physical criteria. The proposed method has been integrated in a computational environment using the ABAQUS solver. Numerical examples show the efficiency of the proposed approach.
NASA Astrophysics Data System (ADS)
Moore, F.; Burke, M.
2015-12-01
A wide range of studies using a variety of methods strongly suggest that climate change will have a negative impact on agricultural production in many areas. Farmers, though, should be able to learn about a changing climate and to adjust what they grow and how they grow it in order to reduce these negative impacts. However, it remains unclear how effective these private (autonomous) adaptations will be, or how quickly they will be adopted. Constraining the uncertainty on this adaptation is important for understanding the impacts of climate change on agriculture. Here we review a number of empirical methods that have been proposed for understanding the rate and effectiveness of private adaptation to climate change. We compare these methods using data on agricultural yields in the United States and western Europe.
Fast multipole and space adaptive multiresolution methods for the solution of the Poisson equation
NASA Astrophysics Data System (ADS)
Bilek, Petr; Duarte, Max; Nečas, David; Bourdon, Anne; Bonaventura, Zdeněk
2016-09-01
This work focuses on the conjunction of the fast multipole method (FMM) with the space adaptive multiresolution (MR) technique for grid adaptation. Since both methods, MR and FMM provide a priori error estimates, both achieve O(N) computational complexity, and both operate on the same hierarchical space division, their conjunction represents a natural choice when designing a numerically efficient and robust strategy for time dependent problems. Special attention is given to the use of these methods in the simulation of streamer discharges in air. We have designed a FMM Poisson solver on multiresolution adapted grid in 2D. The accuracy and the computation complexity of the solver has been verified for a set of manufactured solutions. We confirmed that the developed solver attains desired accuracy and this accuracy is controlled only by the number of terms in the multipole expansion in combination with the multiresolution accuracy tolerance. The implementation has a linear computation complexity O(N).
NASA Astrophysics Data System (ADS)
Ran, Qiwen; Yang, Zhonghua; Ma, Jing; Tan, Liying; Liao, Huixi; Liu, Qingfeng
2013-02-01
In this paper, a weighted adaptive threshold estimating method is proposed to deal with long and deep channel fades in Satellite-to-Ground optical communications. During the channel correlation interval, where there are sufficient correlations in adjacent signal samples, the correlations in the change rates are described by weighted equations in the form of a Toeplitz matrix. As vital inputs to the proposed adaptive threshold estimator, the optimal values of the change rates can be obtained by solving the weighted equation systems. The effect of channel fades and aberrant samples can be mitigated by joint use of weighted equation systems and Kalman estimation. Based on the channel information data from star observation trails, simulations are made, and the numerical results show that the proposed method has better anti-fade performance than the D-value adaptive threshold estimating method in both weak and strong turbulence conditions.
The adaptive problems of female teenage refugees and their behavioral adjustment methods for coping
Mhaidat, Fatin
2016-01-01
This study aimed at identifying the levels of adaptive problems among teenage female refugees in the government schools and explored the behavioral methods that were used to cope with the problems. The sample was composed of 220 Syrian female students (seventh to first secondary grades) enrolled at government schools within the Zarqa Directorate and who came to Jordan due to the war conditions in their home country. The study used the scale of adaptive problems that consists of four dimensions (depression, anger and hostility, low self-esteem, and feeling insecure) and a questionnaire of the behavioral adjustment methods for dealing with the problem of asylum. The results indicated that the Syrian teenage female refugees suffer a moderate degree of adaptation problems, and that they used positive adjustment methods more than negative ones. PMID:27175098
Recent advances in the modeling of plasmas with the Particle-In-Cell methods
NASA Astrophysics Data System (ADS)
Vay, Jean-Luc; Lehe, Remi; Vincenti, Henri; Godfrey, Brendan; Lee, Patrick; Haber, Irv
2015-11-01
The Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations of plasmas from first principles. The fundamentals of the PIC method were established decades ago but improvements or variations are continuously being proposed. We report on several recent advances in PIC related algorithms, including: (a) detailed analysis of the numerical Cherenkov instability and its remediation, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, (c) arbitrary-order finite-difference and generalized pseudo-spectral Maxwell solvers, (d) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of Perfectly Matched Layers in high-order and pseudo-spectral solvers. Work supported by US-DOE Contracts DE-AC02-05CH11231 and the US-DOE SciDAC program ComPASS. Used resources of NERSC, supported by US-DOE Contract DE-AC02-05CH11231.
Lei, Xusheng; Li, Jingjing
2012-01-01
This paper presents an adaptive information fusion method to improve the accuracy and reliability of the altitude measurement information for small unmanned aerial rotorcraft during the landing process. Focusing on the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high frequency noises in the sensor output. Furthermore, to improve altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is proved by static tests, hovering flight and autonomous landing flight tests. PMID:23201993
A comparison of locally adaptive multigrid methods: LDC, FAC and FIC
NASA Technical Reports Server (NTRS)
Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul
1993-01-01
This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction), which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.
Software for the parallel adaptive solution of conservation laws by discontinuous Galerkin methods.
Flaherty, J. E.; Loy, R. M.; Shephard, M. S.; Teresco, J. D.
1999-08-17
The authors develop software tools for the solution of conservation laws using parallel adaptive discontinuous Galerkin methods. In particular, the Rensselaer Partition Model (RPM) provides parallel mesh structures within an adaptive framework to solve the Euler equations of compressible flow by a discontinuous Galerkin method (LOCO). Results are presented for a Rayleigh-Taylor flow instability for computations performed on 128 processors of an IBM SP computer. In addition to managing the distributed data and maintaining a load balance, RPM provides information about the parallel environment that can be used to tailor partitions to a specific computational environment.
The block adaptive multigrid method applied to the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Pantelelis, Nikos
1993-01-01
In the present study, a scheme capable of solving very fast and robust complex nonlinear systems of equations is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on the prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (18-fold acceleration of the solution) using one fourth of the volumes of a global grid with the same solution accuracy for two test cases.
Adaptive-Anisotropic Wavelet Collocation Method on general curvilinear coordinate systems
NASA Astrophysics Data System (ADS)
Brown-Dymkoski, Eric; Vasilyev, Oleg V.
2017-03-01
A new general framework for an Adaptive-Anisotropic Wavelet Collocation Method (A-AWCM) for the solution of partial differential equations is developed. This proposed framework addresses two major shortcomings of existing wavelet-based adaptive numerical methodologies, namely the reliance on a rectangular domain and the "curse of anisotropy", i.e. drastic over-resolution of sheet- and filament-like features arising from the inability of the wavelet refinement mechanism to distinguish highly correlated directional information in the solution. The A-AWCM addresses both of these challenges by incorporating coordinate transforms into the Adaptive Wavelet Collocation Method for the solution of PDEs. The resulting integrated framework leverages the advantages of both the curvilinear anisotropic meshes and wavelet-based adaptive refinement in a complementary fashion, resulting in greatly reduced cost of resolution for anisotropic features. The proposed Adaptive-Anisotropic Wavelet Collocation Method retains the a priori error control of the solution and fully automated mesh refinement, while offering new capabilities through the flexible mesh geometry, including body-fitting. The new A-AWCM is demonstrated for a variety of cases, including parabolic diffusion, acoustic scattering, and unsteady external flow.
A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES
Druckmueller, M.
2013-08-15
A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.
FLIP: A method for adaptively zoned, particle-in-cell calculations of fluid in two dimensions
Brackbill, J.U.; Ruppel, H.M.
1986-08-01
A method is presented for calculating fluid flow in two dimensions using a full particle-in-cell representation on an adaptively zoned grid. The method has many interesting properties, among them an almost total absence of numerical dissipation and the ability to represent large variations in the data. The method is described using a standard formalism and its properties are illustrated by supersonic flow over a step and the interaction of a shock with a thin foil.
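As a concrete illustration of the particle-to-grid deposition at the heart of particle-in-cell representations such as FLIP's, here is a minimal 1D cloud-in-cell sketch; the function name and the linear weighting are illustrative, not taken from the paper:

```python
import numpy as np

def deposit_to_grid(xp, mp, dx, nx):
    """Deposit particle masses onto a 1D grid with linear (cloud-in-cell) weights.

    xp : particle positions, mp : particle masses,
    dx : cell width, nx : number of grid nodes.
    """
    rho = np.zeros(nx)
    for x, m in zip(xp, mp):
        i = int(x // dx)              # index of the node to the left
        w = x / dx - i                # fractional distance to that node
        rho[i] += m * (1.0 - w)       # share to the left node
        rho[min(i + 1, nx - 1)] += m * w  # remainder to the right node
    return rho
```

Because each particle's mass is split between exactly two nodes, total mass on the grid is conserved, which is one reason such schemes show very little numerical dissipation.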
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
Automatic off-body overset adaptive Cartesian mesh method based on an octree approach
NASA Astrophysics Data System (ADS)
Péron, Stéphanie; Benoit, Christophe
2013-01-01
This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for the flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, assuming each octree leaf node defines a structured Cartesian block. This enables one to take into account the large scale discrepancies in terms of resolution between the different bodies involved in the simulation, with minimum memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first one generates Adaptive Mesh Refinement (AMR) type grid systems, and the second one generates abutting or minimally overlapping Cartesian grid sets. We also introduce an algorithm that controls the number of points at each adaptation by automatically determining relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing tip vortex computation assesses the capability of the method to accurately capture the flow features.
A GPU-accelerated adaptive discontinuous Galerkin method for level set equation
NASA Astrophysics Data System (ADS)
Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.
2016-01-01
This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of two- and three-dimensional level set (LS) equation on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. Small global time step size resulting from the local adaptivity is avoided by local time-stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass conservative numerical scheme that preserves the simplicity of LS formulation is obtained. Efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.
Method study on fuzzy-PID adaptive control of electric-hydraulic hitch system
NASA Astrophysics Data System (ADS)
Li, Mingsheng; Wang, Liubu; Liu, Jian; Ye, Jin
2017-03-01
In this paper, the fuzzy-PID adaptive control method is applied to the control of a tractor electric-hydraulic hitch system. According to the characteristics of the system, a fuzzy-PID adaptive controller is designed and the electric-hydraulic hitch system model is established. Simulations of traction control and position control performance are carried out with the common PID control method for comparison. A field test rig was set up to test the electric-hydraulic hitch system. The test results showed that, after fuzzy-PID adaptive control is adopted, when the tillage depth steps from 0.1 m to 0.3 m, the system transition process time is 4 s without overshoot, and when the tractive force steps from 3000 N to 7000 N, the system transition process time is 5 s with an overshoot of 25%.
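The PID core that a fuzzy-adaptive scheme builds on can be sketched as follows. This is a generic discrete PID controller, not the paper's controller; a fuzzy layer would adjust kp, ki, and kd online from membership rules on the error and its rate of change:

```python
class PID:
    """Minimal discrete PID controller (generic sketch).

    A fuzzy-PID adaptive variant would update kp, ki, kd at each step
    from fuzzy rules on the error and error rate; here the gains are fixed.
    """
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt              # accumulate integral term
        deriv = (err - self.prev_err) / self.dt     # backward-difference derivative
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Driving a simple integrator plant (x' = u) with this controller and gains kp = 2, ki = 1 gives a critically damped closed loop that settles at the setpoint without steady-state error.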
Three-dimensional self-adaptive grid method for complex flows
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Deiwert, George S.
1988-01-01
A self-adaptive grid procedure for efficient computation of three-dimensional complex flow fields is described. The method is based on variational principles to minimize the energy of a spring system analogy which redistributes the grid points. Grid control parameters are determined by specifying maximum and minimum grid spacing. Multidirectional adaptation is achieved by splitting the procedure into a sequence of successive applications of a unidirectional adaptation. One-sided, two-directional constraints for orthogonality and smoothness are used to enhance the efficiency of the method. Feasibility of the scheme is demonstrated by application to a multinozzle, afterbody, plume flow field. Application of the algorithm for initial grid generation is illustrated by constructing a three-dimensional grid about a bump-like geometry.
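The spring-analogy redistribution can be illustrated in one dimension by equidistributing a weight function along the grid. This simplified sketch (names are illustrative) omits the paper's multidirectional splitting and its orthogonality and smoothness constraints:

```python
import numpy as np

def adapt_grid_1d(x_old, weight, n):
    """Redistribute n grid points on [x_old[0], x_old[-1]] so that each
    cell carries an equal share of a positive weight function, a simple
    1D analogue of spring-system grid adaptation."""
    # cumulative "spring energy" along the old grid (trapezoidal rule)
    w = 0.5 * (weight[1:] + weight[:-1]) * np.diff(x_old)
    W = np.concatenate([[0.0], np.cumsum(w)])
    # place the new points at equal increments of cumulative weight
    targets = np.linspace(0.0, W[-1], n)
    return np.interp(targets, W, x_old)
```

With a uniform weight the grid stays uniform; concentrating the weight near a flow feature pulls grid points toward it, which is the behavior the spring analogy is designed to produce.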
Method and system for training dynamic nonlinear adaptive filters which have embedded memory
NASA Technical Reports Server (NTRS)
Rabinowitz, Matthew (Inventor)
2002-01-01
Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
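A single Gauss-Newton update of the kind such training algorithms refine can be sketched as follows. This is the textbook least-squares step, with a scalar learning rate standing in for the paper's adaptive modified Gauss-Newton machinery; the function names are illustrative:

```python
import numpy as np

def gauss_newton_step(params, x, y, model, jacobian, lr=1.0):
    """One Gauss-Newton update for least-squares fitting of model(x, params).

    jacobian(x, params) returns the residual Jacobian J (n_samples x n_params).
    lr is a learning rate; an adaptive scheme would tune it each step,
    here it is a fixed scalar for illustration."""
    r = y - model(x, params)                       # residuals
    J = jacobian(x, params)
    # solve the normal equations J^T J dp = J^T r via a least-squares solve
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    return params + lr * step
```

For a model that is linear in its parameters, a single step with lr = 1 lands exactly on the least-squares solution, which is why Gauss-Newton updates converge much faster than steepest descent near a minimum.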
A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection
Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.
2015-06-24
This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Solution of the Euler Equations
Anderson, R W; Elliott, N S; Pember, R B
2003-02-14
A new method that combines staggered grid arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the method are driven by the need to reconcile traditional AMR techniques with the staggered variables and moving, deforming meshes associated with Lagrange based ALE schemes. We develop interlevel solution transfer operators and interlevel boundary conditions first in the case of purely Lagrangian hydrodynamics, and then extend these ideas into an ALE method by developing adaptive extensions of elliptic mesh relaxation techniques. Conservation properties of the method are analyzed, and a series of test problem calculations are presented which demonstrate the utility and efficiency of the method.
Adaptive iteration method for star centroid extraction under highly dynamic conditions
NASA Astrophysics Data System (ADS)
Gao, Yushan; Qin, Shiqiao; Wang, Xingshu
2016-10-01
Star centroiding accuracy decreases significantly when a star sensor works under highly dynamic conditions or star images are corrupted by severe noise, reducing the output attitude precision. Herein, an adaptive iteration method is proposed to solve this problem. Firstly, initial star centroids are predicted by a traditional method; then, based on the initial reported star centroids and the angular velocities of the star sensor, adaptive centroiding windows are generated to cover the star area, and an iterative method optimizing the location of the centroiding window is used to obtain the final star spot extraction results. Simulation results show that, compared with the traditional star image restoration method and the Iteratively Weighted Center of Gravity method, the AWI algorithm maintains higher extraction accuracy as rotation velocities or noise level increases.
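A simplified version of window-based centroid iteration can be sketched as follows; the recentring rule and fixed window size here are illustrative, not the paper's AWI algorithm:

```python
import numpy as np

def centroid_cog(img):
    """Center-of-gravity centroid of an image patch, returned as (row, col)."""
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

def iterative_centroid(img, guess, half=3, iters=5):
    """Refine a star centroid by repeatedly recentring a small window on
    the current estimate, a simplified window-iteration scheme."""
    r, c = guess
    for _ in range(iters):
        r0 = max(int(round(r)) - half, 0)           # window corner (clipped)
        c0 = max(int(round(c)) - half, 0)
        win = img[r0:r0 + 2 * half + 1, c0:c0 + 2 * half + 1]
        dr, dc = centroid_cog(win)                   # centroid within window
        r, c = r0 + dr, c0 + dc                      # back to image coordinates
    return r, c
```

Recentring the window suppresses the bias that a misplaced window introduces into the center-of-gravity estimate, which is the basic motivation for iterating on the window location.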
A numerical study of 2D detonation waves with adaptive finite volume methods on unstructured grids
NASA Astrophysics Data System (ADS)
Hu, Guanghui
2017-02-01
In this paper, a framework of adaptive finite volume solutions for the reactive Euler equations on unstructured grids is proposed. The main ingredients of the algorithm include a second order total variation diminishing Runge-Kutta method for temporal discretization, and the finite volume method with piecewise linear solution reconstruction of the conservative variables for the spatial discretization, in which the least square method is employed for the reconstruction and a weighted essentially nonoscillatory strategy is used to restrain potential numerical oscillations. To cope with the high demand on computational resources due to the stiffness of the system caused by the reaction term and the shock structure in the solutions, the h-adaptive method is introduced. OpenMP parallelization of the algorithm is also adopted to further improve the efficiency of the implementation. Several one and two dimensional benchmark tests on the ZND model are studied in detail, and numerical results successfully show the effectiveness of the proposed method.
Development and evaluation of a method of calibrating medical displays based on fixed adaptation
Sund, Patrik; Månsson, Lars Gunnar; Båth, Magnus
2015-04-15
Purpose: The purpose of this work was to develop and evaluate a new method for calibration of medical displays that includes the effect of fixed adaptation, using equipment and luminance levels typical for a modern radiology department. Methods: Low contrast sinusoidal test patterns were derived at nine luminance levels from 2 to 600 cd/m² and used in a two-alternative forced-choice observer study, where the adaptation level was fixed at the logarithmic average of 35 cd/m². The contrast sensitivity at each luminance level was derived by establishing a linear relationship between the ten pattern contrast levels used at every luminance level and a detectability index (d′) calculated from the fraction of correct responses. A Gaussian function was fitted to the data and normalized to the adaptation level. The corresponding equation was used in a display calibration method that included the grayscale standard display function (GSDF) but compensated for fixed adaptation. In the evaluation study, the contrast of circular objects with a fixed pixel contrast was displayed using both calibration methods and was rated on a five-grade scale. Results were calculated using a visual grading characteristics method. Error estimations in both observer studies were derived using a bootstrap method. Results: The contrast sensitivities for the darkest and brightest patterns compared to the contrast sensitivity at the adaptation luminance were 37% and 56%, respectively. The obtained Gaussian fit corresponded well with similar studies. The evaluation study showed a higher degree of equally distributed contrast throughout the luminance range with the calibration method compensated for fixed adaptation than for the GSDF. The two lowest scores for the GSDF were obtained for the darkest and brightest patterns. These scores were significantly lower than the lowest score obtained for the compensated GSDF. For the GSDF, the scores for all luminance levels were statistically
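The detectability index mentioned in the abstract can be computed from the fraction of correct responses using the standard equal-variance relation for a two-alternative forced-choice task, d′ = √2 · z(Pc); a minimal sketch (the abstract's own d′ estimation may differ in detail):

```python
from statistics import NormalDist

def dprime_2afc(p_correct):
    """Detectability index d' from the fraction correct in a
    two-alternative forced-choice (2AFC) experiment, using the
    standard equal-variance relation d' = sqrt(2) * z(Pc)."""
    return 2 ** 0.5 * NormalDist().inv_cdf(p_correct)
```

Chance performance (Pc = 0.5) maps to d′ = 0, and d′ grows with the fraction correct, which is what makes it a convenient linear axis against pattern contrast.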
Method for reducing the drag of blunt-based vehicles by adaptively increasing forebody roughness
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A. (Inventor); Saltzman, Edwin J. (Inventor); Moes, Timothy R. (Inventor); Iliff, Kenneth W. (Inventor)
2005-01-01
A method is presented for reducing drag on a blunt-based vehicle by adaptively increasing forebody roughness. Increasing drag at the roughened area of the forebody results in a decrease in drag at the base of the vehicle and in total vehicle drag.
Kornilova, L N; Cowings, P S; Toscano, W B; Arlashchenko, N I; Korneev, D Iu; Ponomarenko, A V; Salagovich, S V; Sarantseva, A V; Kozlovskaia, I B
2000-01-01
Presented are results of testing the method of adaptive biocontrol during preflight training of cosmonauts. Within the MIR-25 crew, a high level of controllability of the autonomous reactions was characteristic of Flight Commanders MIR-23 and MIR-25 and Flight Engineer MIR-23, while Flight Engineer MIR-25 displayed a weak, intricate dependence of these reactions on the depth of relaxation or strain.
New cardiac MRI gating method using event-synchronous adaptive digital filter.
Park, Hodong; Park, Youngcheol; Cho, Sungpil; Jang, Bongryoel; Lee, Kyoungjoung
2009-11-01
When imaging the heart using MRI, an artefact-free electrocardiogram (ECG) signal is not only important for monitoring the patient's heart activity but also essential for cardiac gating to reduce noise in MR images induced by moving organs. The fundamental problem in conventional ECG is the distortion induced by electromagnetic interference. Here, we propose an adaptive algorithm for the suppression of MR gradient artefacts (MRGAs) in ECG leads of a cardiac MRI gating system. We have modeled MRGAs by assuming a source of strong pulses used for dephasing the MR signal. The modeled MRGAs are rectangular pulse-like signals. We used an event-synchronous adaptive digital filter whose reference signal is synchronous to the gradient peaks of MRI. The event detection processor for the event-synchronous adaptive digital filter was implemented using the phase space method (a type of topology mapping) and a least-squares acceleration filter. For evaluating the efficiency of the proposed method, the filter was tested using simulation and actual data. The proposed method requires a simple experimental setup that does not need extra hardware connections to obtain the reference signals of the adaptive digital filter. The proposed algorithm was more effective than the multichannel approach.
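As a generic illustration of the cancellation idea behind this record (not the authors' implementation), the sketch below uses a plain LMS adaptive filter whose reference is a rectangular pulse train synchronized to the artefact events, subtracting pulse-like interference from a corrupted signal. The signal shapes, filter length, and step size are all illustrative assumptions.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.01):
    """Adaptive noise cancellation with the LMS algorithm.

    primary   : signal corrupted by pulse-like artefacts (e.g., ECG + MRGA)
    reference : pulse train synchronous to the artefact events
    Returns the error signal of the adaptive filter, i.e., the cleaned signal.
    """
    w = np.zeros(n_taps)
    out = np.zeros_like(primary, dtype=float)
    for n in range(len(primary)):
        # Most recent n_taps reference samples, newest first (zero-padded).
        x = reference[max(0, n - n_taps + 1):n + 1][::-1]
        x = np.pad(x, (0, n_taps - len(x)))
        y = w @ x                  # filter's estimate of the artefact
        e = primary[n] - y         # cleaned sample
        w += 2 * mu * e * x        # LMS weight update
        out[n] = e
    return out

# Toy demo: sinusoidal "ECG" plus rectangular artefact pulses.
t = np.arange(2000)
ecg = np.sin(2 * np.pi * t / 200)
ref = ((t % 250) < 20).astype(float)        # event-synchronous reference
corrupted = ecg + 3.0 * ref
clean = lms_cancel(corrupted, ref)
# After convergence, the residual artefact in `clean` is much smaller
# than in `corrupted`.
```

Because the reference is zero between events, the filter only adapts (and only acts) during the artefact pulses, which is the appeal of an event-synchronous reference: the signal is left untouched elsewhere.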
An adaptive multiresolution gradient-augmented level set method for advection problems
NASA Astrophysics Data System (ADS)
Schneider, Kai; Kolomenskiy, Dmitry; Nave, Jean-Christophe
2014-11-01
Advection problems are encountered in many applications, such as transport of passive scalars modeling pollution or mixing in chemical engineering. In some problems, the solution develops small-scale features localized in a part of the computational domain. If the location of these features changes in time, the efficiency of the numerical method can be significantly improved by adapting the partition dynamically to the solution. We present a space-time adaptive scheme for solving advection equations in two space dimensions. The third order accurate gradient-augmented level set method using a semi-Lagrangian formulation with backward time integration is coupled with a point value multiresolution analysis using Hermite interpolation. Thus locally refined dyadic spatial grids are introduced which are efficiently implemented with dynamic quad-tree data structures. For adaptive time integration, an embedded Runge-Kutta method is employed. The precision of the new fully adaptive method is analysed and speed up of CPU time and memory compression with respect to the uniform grid discretization are reported.
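The backward semi-Lagrangian idea underlying the scheme above can be sketched in one dimension. This is a deliberately minimal version: constant velocity, periodic domain, linear rather than Hermite interpolation, and no multiresolution adaptivity; all parameters are illustrative.

```python
import numpy as np

def semi_lagrangian_step(u, velocity, dx, dt):
    """One semi-Lagrangian advection step with backward time integration:
    trace each grid point back along the (constant) velocity field and
    interpolate the old solution at the departure point (linear here)."""
    n = len(u)
    x = np.arange(n) * dx
    x_dep = (x - velocity * dt) % (n * dx)     # departure points, periodic
    i0 = np.floor(x_dep / dx).astype(int) % n
    i1 = (i0 + 1) % n
    w = x_dep / dx - np.floor(x_dep / dx)      # interpolation weight
    return (1 - w) * u[i0] + w * u[i1]

# Advect a Gaussian bump one full period around a periodic unit domain.
n, c = 200, 1.0
dx, dt = 1.0 / n, 1.0 / 400
x = np.arange(n) * dx
u = np.exp(-200 * (x - 0.5) ** 2)
for _ in range(400):                            # total time 1.0 = one period
    u = semi_lagrangian_step(u, c, dx, dt)
# The bump returns near x = 0.5, smoothed somewhat by the
# linear interpolation; gradient-augmented (Hermite) interpolation,
# as in the paper, would reduce this numerical diffusion.
```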
ERIC Educational Resources Information Center
Zwick, Rebecca; And Others
1994-01-01
Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel method of differential item functioning (DIF) analysis in computerized adaptive tests (CAT). Results indicate that CAT-based DIF procedures perform well and support the use of item response theory-based matching variables in DIF analysis. (SLD)
ERIC Educational Resources Information Center
Zwick, Rebecca; And Others
Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel and standardization methods of differential item functioning (DIF) analysis in computer-adaptive tests (CATs). Each "examinee" received 25 items out of a 75-item pool. A three-parameter logistic item response model was assumed, and…
Matthews, Devin A.; Stanton, John F.
2015-02-14
The theory of non-orthogonal spin-adaptation for closed-shell molecular systems is applied to coupled cluster methods with quadruple excitations (CCSDTQ). Calculations at this level of detail are of critical importance in describing the properties of molecular systems to an accuracy which can meet or exceed modern experimental techniques. Such calculations are of significant (and growing) importance in such fields as thermodynamics, kinetics, and atomic and molecular spectroscopies. With respect to the implementation of CCSDTQ and related methods, we show that there are significant advantages to non-orthogonal spin-adaptation with respect to simplification and factorization of the working equations and to creating an efficient implementation. The resulting algorithm is implemented in the CFOUR program suite for CCSDT, CCSDTQ, and various approximate methods (CCSD(T), CC3, CCSDT-n, and CCSDT(Q)).
Cochard, E; Aubry, J F; Tanter, M; Prada, C
2011-08-01
An adaptive projection method for ultrasonic focusing through the rib cage, with minimal energy deposition on the ribs, was evaluated experimentally in 3D geometry. Adaptive projection is based on decomposition of the time-reversal operator (DORT method) and projection on the "noise" subspace. It is shown that 3D implementation of this method is straightforward, and not more time-consuming than 2D. Comparisons are made between adaptive projection, spherical focusing, and a previously proposed time-reversal focusing method, by measuring pressure fields in the focal plane and rib region using the three methods. The ratio of the specific absorption rate at the focus over that at the ribs was found to be increased by a factor of up to eight versus spherical emission. Beam steering out of the geometric focus was also investigated. For all configurations, projecting steered emissions was found to deposit less energy on the ribs than steering time-reversed emissions; thus the non-invasive method presented here is more efficient than state-of-the-art invasive techniques. In fact, this method could be used for real-time treatment, because a single acquisition of back-scattered echoes from the ribs is enough to treat a large volume around the focus, thanks to real-time projection of the steered beams.
An edge-based solution-adaptive method applied to the AIRPLANE code
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.
1995-01-01
Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.
Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics
NASA Technical Reports Server (NTRS)
Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy
2006-01-01
This viewgraph presentation reviews the use of Adaptive Mesh Refinement (AMR) in simulating the crustal dynamics of Earth's surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements when compared to generating/running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's surface is spurred by proposed future NASA missions, such as InSAR, for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computational fluid dynamics for predictive simulation of complex flows around complex structures.
An Adaptive Instability Suppression Controls Method for Aircraft Gas Turbine Engine Combustors
NASA Technical Reports Server (NTRS)
Kopasakis, George; DeLaat, John C.; Chang, Clarence T.
2008-01-01
An adaptive controls method for instability suppression in gas turbine engine combustors has been developed and successfully tested with a realistic aircraft engine combustor rig. This testing was part of a program that demonstrated, for the first time, successful active combustor instability control in an aircraft gas turbine engine-like environment. The controls method is called Adaptive Sliding Phasor Averaged Control. Testing of the control method has been conducted in an experimental rig with different configurations designed to simulate combustors with instabilities of about 530 and 315 Hz. Results demonstrate the effectiveness of this method in suppressing combustor instabilities. In addition, a dramatic improvement in suppression of the instability was achieved by focusing control on the second harmonic of the instability. This is believed to be due to a phenomenon discovered and reported earlier, the so-called Intra-Harmonic Coupling. These results may have implications for future research in combustor instability control.
NASA Astrophysics Data System (ADS)
Chai, Runqi; Savvaris, Al; Tsourdos, Antonios
2016-06-01
In this paper, a fuzzy physical programming (FPP) method is introduced for solving the multi-objective Space Manoeuvre Vehicle (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem is transformed into a nonlinear programming (NLP) problem. According to the mission requirements, the solutions were calculated for each single-objective scenario. To get a compromised solution for each target, the fuzzy physical programming model is proposed. The preference function is established with consideration of the fuzzy factor of the system, such that a proper compromised trajectory can be acquired. In addition, the NSGA-II is tested to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible in dealing with the multi-objective skip trajectory optimization for the SMV.
Framework for Instructional Technology: Methods of Implementing Adaptive Training and Education
2014-01-01
business, or the military. With Role Adaptation, trainees select their role (e.g., tank driver vs. tank gunner) and are then presented with different…one-size-fits-all, non-mastery based methods (for a review see Durlach & Ray, 2011). After conducting a meta-analysis of various tutoring methods…verbal), and/or to challenge or stimulate learners with above average aptitude. Multiple versions might also be created to suit students with
Xia, Kelin; Zhan, Meng; Wan, Decheng; Wei, Guo-Wei
2011-01-01
Mesh deformation methods are a versatile strategy for solving partial differential equations (PDEs) with a vast variety of practical applications. However, these methods break down for elliptic PDEs with discontinuous coefficients, namely, elliptic interface problems. For this class of problems, additional interface jump conditions are required to maintain the well-posedness of the governing equation. Consequently, in order to achieve high accuracy and high order convergence, additional numerical algorithms are required to enforce the interface jump conditions in solving elliptic interface problems. The present work introduces an interface technique based adaptively deformed mesh strategy for resolving elliptic interface problems. We take advantage of the high accuracy, flexibility and robustness of the matched interface and boundary (MIB) method to construct an adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients. The proposed method generates deformed meshes in the physical domain and solves the transformed governing equations in the computational domain, which maintains regular Cartesian meshes. The mesh deformation is realized by a mesh transformation PDE, which controls the mesh redistribution by a source term. The source term consists of a monitor function, which builds in mesh contraction rules. Both interface geometry based deformed meshes and solution gradient based deformed meshes are constructed to reduce the L∞ and L2 errors in solving elliptic interface problems. The proposed adaptively deformed mesh based interface method is extensively validated by many numerical experiments. Numerical results indicate that the adaptively deformed mesh based interface method outperforms the original MIB method for dealing with elliptic interface problems. PMID:22586356
Refinement trajectory and determination of eigenstates by a wavelet based adaptive method
Pipek, Janos; Nagy, Szilvia
2006-11-07
The detail structure of the wave function is analyzed at various refinement levels using the methods of wavelet analysis. The eigenvalue problem of a model system is solved in granular Hilbert spaces, and the trajectory of the eigenstates is traced in terms of the resolution. An adaptive method is developed for identifying the fine structure localization regions, where further refinement of the wave function is necessary.
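The core mechanism described here (large wavelet detail coefficients flag where refinement is needed) can be sketched with a single Haar level rather than the authors' full wavelet machinery; the test function and threshold below are illustrative assumptions.

```python
import numpy as np

def haar_detail(f):
    """One level of the Haar transform: local averages and details.
    Large detail coefficients mark regions of fine structure."""
    avg = (f[0::2] + f[1::2]) / 2
    det = (f[0::2] - f[1::2]) / 2
    return avg, det

# Sample a function with a sharp localized feature near x = 0.7.
x = np.linspace(0, 1, 256)
f = np.tanh(100 * (x - 0.7))

avg, det = haar_detail(f)
# Cells whose detail coefficient exceeds the threshold are the
# candidates for further refinement of the representation.
refine = np.nonzero(np.abs(det) > 1e-3)[0]
# `refine` clusters around the steep transition near x = 0.7.
```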
A wavelet-optimized, very high order adaptive grid and order numerical method
NASA Technical Reports Server (NTRS)
Jameson, Leland
1996-01-01
Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data, differentiating this polynomial, and finally evaluating the polynomial at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or perhaps exponential polynomials. This paper begins with a comparison of such differencing operator constructions. Next, the issue of proper grids for high order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid. That is, at each level of refinement the grid is a Chebyshev grid, and this grid is refined locally based on wavelet analysis.
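The interpolate-differentiate-evaluate construction described above can be sketched with NumPy's Chebyshev utilities; the grid size, test function, and evaluation point are illustrative choices.

```python
import numpy as np

def poly_derivative(xs, fs, x_eval):
    """Differencing by interpolation: fit a polynomial through the data
    (in the Chebyshev basis for numerical stability), differentiate its
    coefficients, and evaluate the derivative where it is needed."""
    deg = len(xs) - 1
    c = np.polynomial.chebyshev.chebfit(xs, fs, deg)   # interpolating polynomial
    dc = np.polynomial.chebyshev.chebder(c)            # differentiated coefficients
    return np.polynomial.chebyshev.chebval(x_eval, dc)

# Chebyshev (Gauss-Lobatto) grid on [-1, 1]: points clustered near the
# endpoints, which keeps high-order interpolation well conditioned.
N = 16
xs = np.cos(np.pi * np.arange(N + 1) / N)
fs = np.exp(xs)

# d/dx exp(x) = exp(x); on a Chebyshev grid the error decays
# spectrally fast for smooth functions.
approx = poly_derivative(xs, fs, 0.3)
exact = np.exp(0.3)
```

On an equispaced grid the same high-degree construction would suffer from the Runge phenomenon, which is the motivation for the Chebyshev point clustering mentioned in the abstract.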
A Digitalized Gyroscope System Based on a Modified Adaptive Control Method.
Xia, Dunzhu; Hu, Yiwei; Ni, Peizhen
2016-03-04
In this work we investigate the possibility of applying the adaptive control algorithm to Micro-Electro-Mechanical System (MEMS) gyroscopes. Through comparing the gyroscope working conditions with the reference model, the adaptive control method can provide online estimation of the key parameters and the proper control strategy for the system. The digital second-order oscillators in the reference model are substituted for two phase locked loops (PLLs) to achieve a more steady amplitude and frequency control. The adaptive law is modified to satisfy the condition of unequal coupling stiffness and coupling damping coefficient. The rotation mode of the gyroscope system is considered in our work and a rotation elimination section is added to the digitalized system. Before implementing the algorithm in the hardware platform, different simulations are conducted to ensure the algorithm can meet the requirement of the angular rate sensor, and some of the key adaptive law coefficients are optimized. The coupling components are detected and suppressed respectively, and the Lyapunov criterion is applied to prove the stability of the system. The modified adaptive control algorithm is verified in a digitalized gyroscope system; the control system is realized in the digital domain with a Field Programmable Gate Array (FPGA). Key structure parameters are measured and compared with the estimation results, validating that the algorithm is feasible in the setup. Extra gyroscopes are used in repeated experiments to prove the commonality of the algorithm.
Adaptive Kalman filtering methods for tracking GPS signals in high noise/high dynamic environments
NASA Astrophysics Data System (ADS)
Zuo, Qiyao; Yuan, Hong; Lin, Baojun
2007-11-01
GPS C/A signal tracking algorithms have been developed based on adaptive Kalman filtering theory. In the research, an adaptive Kalman filter is used to substitute for standard tracking loop filters. The goal is to improve estimation accuracy and tracking stabilization in high noise and high dynamic environments. The linear dynamics model and the measurements model are designed to estimate code phase, carrier phase, Doppler shift, and rate of change of Doppler shift. Two adaptive algorithms are applied to improve the robustness and adaptability of the tracking: one is the Sage adaptive filtering approach and the other is the strong tracking method. Both the new algorithms and the conventional tracking loop have been tested by using simulation data. In the simulation experiment, the highest jerk of the receiver is set to 10G m/s³ with the lowest C/N0 of 30 dB-Hz. The results indicate that the Kalman filtering algorithms are more robust than the standard tracking loop, and performance of the tracking loop using the algorithms is satisfactory in such extremely adverse circumstances.
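A minimal sketch of innovation-based adaptive Kalman filtering in the spirit of the Sage approach is shown below, on a toy one-dimensional constant-velocity model rather than the paper's code/carrier tracking states; the forgetting factor and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Constant-velocity state [position, velocity]; noisy position measurements.
dt = 0.1
F = np.array([[1, dt], [0, 1]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)

x = np.array([0.0, 1.0])            # filter state
P = np.eye(2)
R_hat = np.array([[1.0]])           # adaptive estimate of measurement noise
b = 0.98                            # Sage-style forgetting factor

true_R = 0.25                       # true (unknown) measurement variance
truth = np.array([0.0, 1.0])
errs = []
for k in range(1, 500):
    truth = F @ truth
    z = H @ truth + rng.normal(0, np.sqrt(true_R), 1)

    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q

    # Innovation-based adaptive update of R before the measurement update.
    v = z - H @ x                                   # innovation
    d = (1 - b) / (1 - b ** k)                      # fading weight
    R_hat = (1 - d) * R_hat + d * (np.outer(v, v) - H @ P @ H.T)
    R_hat = np.maximum(R_hat, 1e-6)                 # keep positive

    # Standard Kalman measurement update.
    S = H @ P @ H.T + R_hat
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ v).ravel()
    P = (np.eye(2) - K @ H) @ P
    errs.append(abs(x[0] - truth[0]))

# R_hat drifts toward the true measurement variance (0.25), so the
# gain adapts even though R was initialized far too large.
```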
Huttunen, Sanna; Olsson, Sanna; Buchbender, Volker; Enroth, Johannes; Hedenäs, Lars; Quandt, Dietmar
2012-01-01
Adaptive evolution has often been proposed to explain correlations between habitats and certain phenotypes. In mosses, a high frequency of species with specialized sporophytic traits in exposed or epiphytic habitats was, already 100 years ago, suggested as due to adaptation. We tested this hypothesis by contrasting phylogenetic and morphological data from two moss families, Neckeraceae and Lembophyllaceae, both of which show parallel shifts to a specialized morphology and to exposed epiphytic or epilithic habitats. Phylogeny-based tests for correlated evolution revealed that evolution of four sporophytic traits is correlated with a habitat shift. For three of them, evolutionary rates of dual character-state changes suggest that habitat shifts appear prior to changes in morphology. This suggests that they could have evolved as adaptations to new habitats. Regarding the fourth correlated trait the specialized morphology had already evolved before the habitat shift. In addition, several other specialized "epiphytic" traits show no correlation with a habitat shift. Besides adaptive diversification, other processes thus also affect the match between phenotype and environment. Several potential factors such as complex genetic and developmental pathways yielding the same phenotypes, differences in strength of selection, or constraints in phenotypic evolution may lead to an inability of phylogeny-based comparative methods to detect potential adaptations.
Menkir, A; Bramel-Cox, P J; Witt, M D
1994-08-01
The association among six traits in the F2 lines derived from adapted × exotic backcrosses of sorghum developed via two introgression methods was studied using principal component analysis. The first principal component defined a hybrid index in matings of the wild accession ('12-26') but not in matings of the cultivated sorghum genotypes ('Segeolane' and 'SC408'), no matter which adapted parent was used. This component accounted for 27-42% of the total variation in each mating. The 'recombination spindle' was wide in all matings of CK60 and KP9B, which indicated that the relationships among traits were not strong enough to restrict recombination among the parental characters. The index scores of both CK60 and KP9B matings showed clear differentiation of the backcross generations only when the exotic parent was the undomesticated wild accession ('12-26'). None of the distributions of the first principal component scores in any backcross population was bimodal. The frequency of recombinant genotypes derived from a mating was determined by the level of domestication and adaptation of the exotic parent and the genetic background of the adapted parent. Backcrossing to a population (KP9B) was found to be superior to backcrossing to an inbred line (CK60) to produce lines with an improved adapted phenotype.
An h-adaptive finite element method for turbulent heat transfer
Carrington, David B.
2009-01-01
A two-equation turbulence closure model (k-ω) using an h-adaptive grid technique and the finite element method (FEM) has been developed to simulate low Mach number flow and heat transfer. These flows are applicable to many flows in engineering and environmental sciences. Of particular interest in the engineering modeling areas are combustion, solidification, and heat exchanger design. Flows for indoor air quality modeling and atmospheric pollution transport are typical types of environmental flows modeled with this method. The numerical method is based on a hybrid finite element model using an equal-order projection process. The model includes thermal and species transport, localized mesh refinement (h-adaptive) and Petrov-Galerkin weighting for stabilizing the advection. This work develops the continuum model of a two-equation turbulence closure method. The fractional step solution method is stated along with the h-adaptive grid method (Carrington and Pepper, 2002). Solutions are presented for 2D flow over a backward-facing step.
High-precision self-adaptive phase-calibration method for wavelength-tuning interferometry
NASA Astrophysics Data System (ADS)
Zhu, Xueliang; Zhao, Huiying; Dong, Longchao; Wang, Hongjun; Liu, Bingcai; Yuan, Daocheng; Tian, Ailing; Wang, Fangjie; Zhang, Chupeng; Ban, Xinxing
2017-03-01
We introduce a high-precision self-adaptive phase-calibration method for performing wavelength-tuning interferometry. Our method is insensitive to the nonlinearity of the phase shifter, even under random control. Intensity errors derived from laser voltage changes can be restrained by adopting this approach. Furthermore, this method can effectively overcome the influences from the background and modulation intensities in the interferogram, regardless of the phase structure. Numerical simulations and experiments are implemented to verify the validity of this high-precision calibration method.
An adaptive subspace trust-region method for frequency-domain seismic full waveform inversion
NASA Astrophysics Data System (ADS)
Zhang, Huan; Li, Xiaofan; Song, Hanjie; Liu, Shaolin
2015-05-01
Full waveform inversion is currently considered a promising seismic imaging method for obtaining high-resolution, quantitative images of the subsurface. It is a nonlinear, ill-posed inverse problem, and the main difficulty that prevents full waveform inversion from being widely applied to real data is its sensitivity to incorrect initial models and noisy data. Local optimization methods, including Newton's method and gradient methods, tend to converge to local minima, while global optimization algorithms such as simulated annealing are computationally costly. To confront this issue, in this paper we investigate the possibility of applying the trust-region method to the full waveform inversion problem. Different from line search methods, trust-region methods force the new trial step within a certain neighborhood of the current iterate. Theoretically, trust-region methods are reliable and robust, and they have very strong convergence properties. The capability of this inversion technique is tested with the synthetic Marmousi velocity model and the SEG/EAGE salt model. Numerical examples demonstrate that the adaptive subspace trust-region method can provide solutions closer to the global minima than the conventional approximate Hessian approach and the L-BFGS method, with a higher convergence rate. In addition, the match between the inverted model and the true model remains excellent even when the initial model deviates far from the true model. Inversion results with noisy data also exhibit the remarkable capability of the adaptive subspace trust-region method for low signal-to-noise data inversions. These promising numerical results suggest that the adaptive subspace trust-region method is suitable for full waveform inversion, as it has stronger convergence and a higher convergence rate.
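The trust-region mechanics described in this abstract can be sketched on a toy problem: a dogleg step with a simple radius-update rule applied to the Rosenbrock function, not the seismic inversion itself; the acceptance thresholds are conventional illustrative choices.

```python
import numpy as np

def rosenbrock(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                     200 * (x[1] - x[0] ** 2)])

def hess(x):
    return np.array([[2 - 400 * (x[1] - 3 * x[0] ** 2), -400 * x[0]],
                     [-400 * x[0], 200.0]])

def dogleg(g, B, delta):
    """Dogleg step: blend the steepest-descent (Cauchy) point with the
    Newton step, truncated to the trust-region radius delta."""
    gBg = g @ B @ g
    if gBg <= 0:                                # non-convex curvature:
        return -delta * g / np.linalg.norm(g)   # fall back to boundary step
    pB = -np.linalg.solve(B, g)                 # full Newton step
    if np.linalg.norm(pB) <= delta:
        return pB
    pU = -(g @ g) / gBg * g                     # Cauchy point
    if np.linalg.norm(pU) >= delta:
        return -delta * g / np.linalg.norm(g)
    d = pB - pU                                 # walk toward pB until the
    a, b, c = d @ d, 2 * pU @ d, pU @ pU - delta ** 2   # boundary is hit
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return pU + tau * d

x = np.array([-1.2, 1.0])
delta = 1.0
for _ in range(200):
    g, B = grad(x), hess(x)
    if np.linalg.norm(g) < 1e-12:
        break
    p = dogleg(g, B, delta)
    predicted = -(g @ p + 0.5 * p @ B @ p)      # quadratic model decrease
    actual = rosenbrock(x) - rosenbrock(x + p)
    rho = actual / predicted if predicted > 0 else -1.0
    if rho < 0.25:
        delta *= 0.25                           # poor model fit: shrink
    elif rho > 0.75 and np.linalg.norm(p) > 0.99 * delta:
        delta = min(2 * delta, 10.0)            # good fit at boundary: grow
    if rho > 1e-3:
        x = x + p                               # accept improving steps only
# x converges to the global minimizer (1, 1)
```

The key contrast with line search: the step direction and length are chosen jointly inside the current trust region, and the radius itself adapts to how well the quadratic model predicted the actual decrease.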
Long-time simulations of the Kelvin-Helmholtz instability using an adaptive vortex method.
Sohn, Sung-Ik; Yoon, Daeki; Hwang, Woonjae
2010-10-01
The nonlinear evolution of an interface subject to a parallel shear flow is studied by the vortex sheet model. We perform long-time computations for the vortex sheet in density-stratified fluids by using the point vortex method and investigate late-time dynamics of the Kelvin-Helmholtz instability. We apply an adaptive point insertion procedure and a high-order shock-capturing scheme to the vortex method to handle the nonuniform distribution of point vortices and enhance the resolution. Our adaptive vortex method successfully simulates chaotically distorted interfaces of the Kelvin-Helmholtz instability with fine resolution. The numerical results show that the Kelvin-Helmholtz instability develops a secondary instability at late times, distorting the internal rollup, and eventually evolves into a disordered structure.
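The basic point vortex method underlying this model can be sketched as follows: a desingularized Biot-Savart sum with RK4 time stepping. The smoothing parameter and two-vortex configuration are illustrative, and the adaptive point insertion of the paper is not included.

```python
import numpy as np

def velocity(pos, gamma, delta2=1e-6):
    """Velocity induced on every vortex by all the others (Biot-Savart
    sum, with a small smoothing parameter delta2 regularizing close pairs)."""
    vel = np.zeros_like(pos)
    for i in range(len(pos)):
        dx = pos[i, 0] - pos[:, 0]
        dy = pos[i, 1] - pos[:, 1]
        r2 = dx ** 2 + dy ** 2 + delta2
        r2[i] = np.inf                     # no self-induction
        vel[i, 0] = np.sum(-gamma * dy / (2 * np.pi * r2))
        vel[i, 1] = np.sum(gamma * dx / (2 * np.pi * r2))
    return vel

def rk4_step(pos, gamma, dt):
    """Classical fourth-order Runge-Kutta step for the vortex positions."""
    k1 = velocity(pos, gamma)
    k2 = velocity(pos + 0.5 * dt * k1, gamma)
    k3 = velocity(pos + 0.5 * dt * k2, gamma)
    k4 = velocity(pos + dt * k3, gamma)
    return pos + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Two equal vortices rotate about their midpoint at constant separation,
# a standard sanity check for the induced-velocity sum.
pos = np.array([[-0.5, 0.0], [0.5, 0.0]])
gamma = np.array([1.0, 1.0])
for _ in range(1000):
    pos = rk4_step(pos, gamma, 0.01)
sep = np.linalg.norm(pos[0] - pos[1])      # stays close to 1.0
```

A vortex sheet is then represented by many such points along the interface; regions where neighboring points separate during roll-up are exactly where the paper's adaptive point insertion restores resolution.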
NASA Astrophysics Data System (ADS)
Lee, Sanghyun; Wheeler, Mary F.
2017-02-01
We present a novel approach to the simulation of miscible displacement by employing adaptive enriched Galerkin finite element methods (EG) coupled with entropy residual stabilization for transport. In particular, numerical simulations of viscous fingering instabilities in heterogeneous porous media and Hele-Shaw cells are illustrated. EG is formulated by enriching the conforming continuous Galerkin finite element method (CG) with piecewise constant functions. The method provides locally and globally conservative fluxes, which are crucial for coupled flow and transport problems. Moreover, EG has fewer degrees of freedom in comparison with discontinuous Galerkin (DG) and an efficient flow solver has been derived which allows for higher order schemes. Dynamic adaptive mesh refinement is applied in order to reduce computational costs for large-scale three dimensional applications. In addition, entropy residual based stabilization for high order EG transport systems prevents spurious oscillations. Numerical tests are presented to show the capabilities of EG applied to flow and transport.
An adaptive method for determining an acquisition parameter t0 in a modified CPMG sequence
NASA Astrophysics Data System (ADS)
Xing, Donghui; Fan, Yiren; Hao, Jianfei; Ge, Xinmin; Li, Chaoliu; Xiao, Yufeng; Wu, Fei
2017-03-01
The modified CPMG (Carr-Purcell-Meiboom-Gill) pulse sequence is commonly used to measure the internal magnetic-field gradient distribution of formation rocks, and t0 (the duration of the first window) is a key acquisition parameter. To obtain the optimal t0, an adaptive method is proposed in this paper. By studying the factors influencing the discriminant factor σ and its variation trend using T2-G forward numerical simulation, it is found that the optimal t0 corresponds to the maximum value of σ. Combining this with a constraint on the SNR (signal-to-noise ratio) of the spin echoes, the optimal t0 for the modified CPMG pulse sequence is determined. This method reduces the difficulty of performing T2-G experiments. Finally, the adaptive method is verified against T2-G experiments on four water-saturated sandstone samples.
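The selection rule this abstract describes (pick the t0 that maximizes σ, subject to an echo-SNR floor) can be sketched as follows; the `sigma` and `snr` models below are illustrative placeholders, not the paper's T2-G forward model.

```python
# Hedged sketch: among candidate first-window durations, keep those meeting
# a minimum echo SNR, then choose the one maximizing the discriminant factor.

def select_t0(candidates, sigma_of, snr_of, snr_min):
    """Return the candidate t0 with maximal sigma among SNR-feasible ones."""
    feasible = [t for t in candidates if snr_of(t) >= snr_min]
    if not feasible:
        raise ValueError("no candidate t0 satisfies the SNR constraint")
    return max(feasible, key=sigma_of)

# Toy models (assumptions): sigma peaks at t0 = 3 ms; SNR decays with t0.
sigma = lambda t: -(t - 3.0) ** 2
snr = lambda t: 100.0 / (1.0 + t)

best = select_t0([1.0, 2.0, 3.0, 4.0, 5.0], sigma, snr, snr_min=10.0)
```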
A novel adaptive 3D medical image interpolation method based on shape
NASA Astrophysics Data System (ADS)
Chen, Jiaxin; Ma, Wei
2013-03-01
Image interpolation between cross-sections is one of the key steps in medical visualization. To address the fuzzy boundaries and heavy computation associated with traditional interpolation, a novel adaptive 3-D medical image interpolation method is proposed in this paper. First, the contour is obtained by edge interpolation, and corresponding points are found from the relation between the contour and points on the original images. Second, the algorithm uses volume relativity to select the best point pair adaptively. Finally, the grey value of the interpolated pixel is obtained by matching-point interpolation. Experimental results show that the proposed method not only meets interpolation accuracy requirements but is also effective for 3D reconstruction of medical images.
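The final step of the pipeline above can be sketched as a distance-weighted blend of the grey values of a matched point pair; the matching itself is the hard part and is not reproduced here, and the function name is an illustrative assumption.

```python
# Minimal sketch: once a best point pair (grey values g0 on slice z0, g1 on
# slice z1) is matched, the interpolated pixel at intermediate depth z gets
# a linearly weighted grey value.

def interp_grey(g0, g1, z0, z1, z):
    """Linear grey-value interpolation between matched points."""
    w = (z - z0) / (z1 - z0)
    return (1.0 - w) * g0 + w * g1

value = interp_grey(g0=100.0, g1=140.0, z0=0.0, z1=2.0, z=0.5)
```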
[Adaptation of a method for determining serum iron after deproteinization on a parallel analyzer].
Pontézière, C; Meneguzzer, E; Succari, M; Miocque, M
1989-04-01
The determination of iron in serum by a bathophenanthroline method after deproteinization was studied according to the Valtec protocol designed by the SFBC, after adaptation to an FP9 parallel analyzer. The critical study of this adaptation included within-run precision trials (CV of 1.25%), total precision (CV 2.29 to 4.66%), and an evaluation of the analytical range: the limit of linearity is 140 µmol/l. The evaluation of inaccuracy, performed with patient specimens, led to the establishment of follow-up norms and interpretation norms for the allometry line. All of our results agree with the performance standards of the protocol for the validation of methods published by the Société Française de Biologie Clinique. Finally, the described method is fast, reliable and very inexpensive.
Functional phase response curves: a method for understanding synchronization of adapting neurons.
Cui, Jianxia; Canavier, Carmen C; Butera, Robert J
2009-07-01
Phase response curves (PRCs) for a single neuron are often used to predict the synchrony of mutually coupled neurons. Previous theoretical work on pulse-coupled oscillators used single-pulse perturbations. We propose an alternate method in which functional PRCs (fPRCs) are generated using a train of pulses applied at a fixed delay after each spike, with the PRC measured when the phasic relationship between the stimulus and the subsequent spike in the neuron has converged. The essential information is the dependence of the recovery time from pulse onset until the next spike as a function of the delay between the previous spike and the onset of the applied pulse. Experimental fPRCs in Aplysia pacemaker neurons were different from single-pulse PRCs, principally due to adaptation. In the biological neuron, convergence to the fully adapted recovery interval was slower at some phases than that at others because the change in the effective intrinsic period due to adaptation changes the effective phase resetting in a way that opposes and slows the effects of adaptation. The fPRCs for two isolated adapting model neurons were used to predict the existence and stability of 1:1 phase-locked network activity when the two neurons were coupled. A stability criterion was derived by linearizing a coupled map based on the fPRC and the existence and stability criteria were successfully tested in two-simulated-neuron networks with reciprocal inhibition or excitation. The fPRC is the first PRC-based tool that can account for adaptation in analyzing networks of neural oscillators.
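The stability test mentioned above (linearizing a map built from the PRC and checking the fixed point) can be illustrated with a toy single-oscillator return map; the sinusoidal PRC and the reduction to one map are illustrative assumptions, not the paper's two-neuron fPRC derivation.

```python
# Illustrative sketch: a locked mode corresponds to a fixed point of the
# return map phi -> phi - prc(phi), and it is linearly stable when the
# magnitude of the map's derivative at the fixed point is below 1.

import math

def prc(phi):
    """Toy phase response curve on [0, 1) (an assumption, not measured data)."""
    return 0.2 * math.sin(2 * math.pi * phi)

def map_derivative(phi, h=1e-6):
    """Numerical derivative of the return map phi -> phi - prc(phi)."""
    g = lambda p: p - prc(p)
    return (g(phi + h) - g(phi - h)) / (2 * h)

def is_stable(phi_star):
    """Linear stability test from the linearized map."""
    return abs(map_derivative(phi_star)) < 1.0

# prc vanishes at phi = 0.0 and phi = 0.5, so both are fixed points;
# the slope of prc decides which one is stable.
```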
A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction
NASA Technical Reports Server (NTRS)
Bockelie, Michael J.; Eiseman, Peter R.
1990-01-01
A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.
A method for online verification of adapted fields using an independent dose monitor
Chang Jina; Norrlinger, Bernhard D.; Heaton, Robert K.; Jaffray, David A.; Cho, Young-Bin; Islam, Mohammad K.; Mahon, Robert
2013-07-15
Purpose: Clinical implementation of online adaptive radiotherapy requires generation of modified fields and a method of dosimetric verification in a short time. We present a method of treatment field modification to account for patient setup error, and an online method of verification using an independent monitoring system.Methods: The fields are modified by translating each multileaf collimator (MLC) defined aperture in the direction of the patient setup error, and magnifying to account for distance variation to the marked isocentre. A modified version of a previously reported online beam monitoring system, the integral quality monitoring (IQM) system, was investigated for validation of adapted fields. The system consists of a large area ion-chamber with a spatial gradient in electrode separation to provide a spatially sensitive signal for each beam segment, mounted below the MLC, and a calculation algorithm to predict the signal. IMRT plans of ten prostate patients have been modified in response to six randomly chosen setup errors in three orthogonal directions.Results: A total of approximately 49 beams for the modified fields were verified by the IQM system, of which 97% of measured IQM signal agree with the predicted value to within 2%.Conclusions: The modified IQM system was found to be suitable for online verification of adapted treatment fields.
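The field-modification rule summarized above (translate each MLC-defined aperture by the setup error, then magnify for the distance variation to the isocentre) can be sketched as a 2D point transform; the parameter names `sad_nominal` and `sad_actual` are illustrative assumptions.

```python
# Hedged sketch: translate aperture points by the in-plane setup error,
# then scale by the ratio of actual to nominal source-to-isocentre distance.

def adapt_aperture(points, shift, sad_nominal, sad_actual):
    """Translate then magnify 2D aperture points (projected at isocentre)."""
    scale = sad_actual / sad_nominal  # distance-variation magnification
    return [((x + shift[0]) * scale, (y + shift[1]) * scale) for x, y in points]

square = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]
moved = adapt_aperture(square, shift=(0.5, 0.0), sad_nominal=100.0, sad_actual=101.0)
```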
A multigrid method for steady Euler equations on unstructured adaptive grids
NASA Technical Reports Server (NTRS)
Riemslagh, Kris; Dick, Erik
1993-01-01
A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first-order accurate inner iteration and a second-order correction, performed only on the finest grid, is used. A multi-stage Jacobi relaxation method is employed as a smoother; a Jacobi type is chosen because the grid is unstructured, and the multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with more or less uniform distributions of nodes but with different resolutions are generated by successive refinement of the coarsest grid; nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure whose coarser grids are generated by successive coarsening through point removal. The adaptation cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.
Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement
NASA Astrophysics Data System (ADS)
Shervani-Tabar, Navid; Vasilyev, Oleg V.
2016-11-01
This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the normal direction to the interface, thus, preserving the conservative level set properties, while away from the interfaces the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to need for a finer resolution in the vicinity of the interface in comparison with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.
Vivid Motor Imagery as an Adaptation Method for Head Turns on a Short-Arm Centrifuge
NASA Technical Reports Server (NTRS)
Newby, N. J.; Mast, F. W.; Natapoff, A.; Paloski, W. H.
2006-01-01
from one another. For the perceived duration of sensations, the CG group again exhibited the least amount of adaptation. However, the rates of adaptation of the PA and the MA groups were indistinguishable, suggesting that the imagined pseudostimulus appeared to be just as effective a means of adaptation as the actual stimulus. The MA group's rate of adaptation to motion sickness symptoms was also comparable to the PA group. The use of vivid motor imagery may be an effective method for adapting to the illusory sensations and motion sickness symptoms produced by cross-coupled stimuli. For space-based AG applications, this technique may prove quite useful in retaining astronauts considered highly susceptible to motion sickness as it reduces the number of actual CCS required to attain adaptation.
Adaptation of LASCA method for diagnostics of malignant tumours in laboratory animals
NASA Astrophysics Data System (ADS)
Ul'yanov, S. S.; Laskavyi, V. N.; Glova, Alina B.; Polyanina, T. I.; Ul'yanova, O. V.; Fedorova, V. A.; Ul'yanov, A. S.
2012-05-01
The LASCA method is adapted for diagnostics of malignant neoplasms in laboratory animals. Tumours are studied in mice of the inbred Balb/c line after inoculation of cells of the syngeneic myeloma cell line Sp.2/0-Ag.8. The appropriateness of using the tLASCA method in tumour investigations is substantiated, and its advantages over the sLASCA method are demonstrated. The most informative characteristic indicating the presence of a tumour is found to be the fractal dimension of the LASCA images.
Adaptive stochastic resonance method for impact signal detection based on sliding window
NASA Astrophysics Data System (ADS)
Li, Jimeng; Chen, Xuefeng; He, Zhengjia
2013-04-01
To address outstanding problems in impact-signal detection using stochastic resonance (SR) in the fault diagnosis of rotating machinery, such as selecting the SR measurement index and detecting impact signals of different amplitudes, the present study proposes an adaptive SR method for impact-signal detection based on a sliding window, derived from an analysis of the SR characteristics of impact signals. The method not only achieves optimal selection of the system parameters by means of a weighted kurtosis index constructed from the kurtosis index and the correlation coefficient, but also detects weak impact signals through a data-segmentation algorithm based on a sliding window, even when the differences between impact amplitudes are large. The algorithm flow of the adaptive SR method is given, and its effectiveness is verified by comparing it with the traditional SR method in simulation experiments. Finally, the proposed method is applied to gearbox fault diagnosis in a hot strip finishing mill, where two local faults located on the pinion are successfully identified. The proposed method is therefore of great practical value in engineering.
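The sliding-window segmentation above can be sketched as scoring each window with a kurtosis-type index (impacts are impulsive, so their windows score high); only the kurtosis part of the paper's weighted index is shown here, and the window sizes are illustrative assumptions.

```python
# Hedged sketch: split the signal into sliding windows and score each with
# kurtosis; windows containing an impact stand out against Gaussian-like noise.

def kurtosis(x):
    """Pearson kurtosis (fourth standardized moment); 0.0 for a flat window."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    if var == 0:
        return 0.0
    return (sum((v - m) ** 4 for v in x) / n) / var ** 2

def sliding_windows(x, width, step):
    for i in range(0, len(x) - width + 1, step):
        yield x[i:i + width]

signal = [0.0] * 8 + [5.0] + [0.0] * 7   # one impact, in the second window
scores = [kurtosis(w) for w in sliding_windows(signal, width=8, step=8)]
```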
NASA Astrophysics Data System (ADS)
Kim, Nakwan
Utilizing the universal approximation property of neural networks, we develop several novel approaches to neural network-based adaptive output feedback control of nonlinear systems, and illustrate these approaches for several flight control applications. In particular, we address the problem of non-affine systems and eliminate the fixed point assumption present in earlier work. All of the stability proofs are carried out in a form that eliminates an algebraic loop in the neural network implementation. An approximate input/output feedback linearizing controller is augmented with a neural network using input/output sequences of the uncertain system. These approaches permit adaptation to both parametric uncertainty and unmodeled dynamics. All physical systems also have control position and rate limits, which may either deteriorate performance or cause instability for a sufficiently high control bandwidth. Here we apply a method for protecting an adaptive process from the effects of input saturation and time delays, known as "pseudo control hedging". This method was originally developed for the state feedback case, and we provide a stability analysis that extends its domain of applicability to the case of output feedback. The approach is illustrated by the design of a pitch-attitude flight control system for a linearized model of an R-50 experimental helicopter, and by the design of a pitch-rate control system for a 58-state model of a flexible aircraft consisting of rigid body dynamics coupled with actuator and flexible modes. A new approach to augmentation of an existing linear controller is introduced. It is especially useful when there is limited information concerning the plant model, and the existing controller. The approach is applied to the design of an adaptive autopilot for a guided munition. Design of a neural network adaptive control that ensures asymptotically stable tracking performance is also addressed.
NASA Astrophysics Data System (ADS)
Sheng, Qin; Sun, Hai-wei
2016-11-01
This study concerns the asymptotic stability of an eikonal, or ray, transformation based Peaceman-Rachford splitting method for solving the paraxial Helmholtz equation with high wave numbers. Arbitrary nonuniform grids are considered in transverse and beam propagation directions. The differential equation targeted has been used for modeling propagations of high intensity laser pulses over a long distance without diffractions. Self-focusing of high intensity beams may be balanced with the de-focusing effect of created ionized plasma channel in the situation, and applications of grid adaptations are frequently essential. It is shown rigorously that the fully discretized oscillation-free decomposition method on arbitrary adaptive grids is asymptotically stable with a stability index one. Simulation experiments are carried out to illustrate our concern and conclusions.
Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics
NASA Technical Reports Server (NTRS)
Stowers, S. T.; Bass, J. M.; Oden, J. T.
1993-01-01
A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies developed are classified as adaptive methods: error-estimation techniques that approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme that attempts to produce the best possible results with the fewest grid points, degrees of freedom, and operations. Such schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai
2015-01-01
The key problem in computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues quickly and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast, self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and single neighboring pixels and the spatial similarity between central pixels and their neighborhood, and it effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method achieves more accurate segmentation of vascular-adhesion, pleural-adhesion, and ground-glass-opacity (GGO) pulmonary nodules than other typical algorithms. PMID:25945120
An a posteriori-driven adaptive Mixed High-Order method with application to electrostatics
NASA Astrophysics Data System (ADS)
Di Pietro, Daniele A.; Specogna, Ruben
2016-12-01
In this work we propose an adaptive version of the recently introduced Mixed High-Order method and showcase its performance on a comprehensive set of academic and industrial problems in computational electromagnetism, including the numerical modeling of comb-drive and MEMS devices. Mesh adaptation is driven by newly derived, residual-based error estimators. The resulting method has several advantageous features: it supports fairly general meshes, it enables arbitrary approximation orders, and it has a moderate computational cost thanks to hybridization and static condensation. The a posteriori-driven mesh refinement is shown to significantly enhance performance on problems featuring singular solutions, making it possible to fully exploit the high approximation order.
An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems
NASA Astrophysics Data System (ADS)
Padgett, Jill M. A.; Ilie, Silvana
2016-03-01
Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive on realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
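The adaptive time-stepping idea above can be illustrated on a one-species toy problem; this is not the paper's reaction-diffusion scheme, just the core tau-leaping step with a step size shrunk to bound the expected relative change, under illustrative parameter choices.

```python
# Toy sketch of an adaptive tau-leap for the decay reaction A -> 0:
# choose tau so the expected number of firings a*tau stays below
# epsilon * n, then draw the firings from Poisson(a * tau).

import math
import random

def rng_poisson(rng, lam):
    """Poisson draw via Knuth's method (adequate for the modest rates here)."""
    l, p, k = math.exp(-lam), 1.0, 0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def tau_leap_decay(n, k, epsilon, rng):
    """One adaptive tau-leap step; returns (new population, tau used)."""
    a = k * n                      # propensity of A -> 0
    tau = epsilon * n / a          # keeps expected change a*tau <= epsilon*n
    fires = min(n, rng_poisson(rng, a * tau))
    return n - fires, tau

rng = random.Random(0)
n, t = 1000, 0.0
while n > 0 and t < 5.0:           # leap until extinction or final time
    n, tau = tau_leap_decay(n, k=1.0, epsilon=0.03, rng=rng)
    t += tau
```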
System and method for adaptively deskewing parallel data signals relative to a clock
Jenkins, Philip Nord; Cornett, Frank N.
2008-10-07
A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.
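The adapt-to-changing-skew behavior described in this abstract can be sketched as a per-lane delay that is repeatedly nudged toward the measured skew; the update gain and units are illustrative assumptions, not details of the patented circuit.

```python
# Hedged sketch: each data lane's delay tracks its measured skew relative
# to the clock via a fractional update, so slow skew drift is followed.

def update_delays(delays, measured_skew, gain=0.5):
    """Move each lane delay a fraction of the way toward its measured skew."""
    return [d + gain * (s - d) for d, s in zip(delays, measured_skew)]

lanes = [0.0, 0.0, 0.0]
for _ in range(20):                 # repeated updates converge on the skew
    lanes = update_delays(lanes, measured_skew=[1.2, -0.4, 0.7])
```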
A Mass Conservation Algorithm for Adaptive Unrefinement Meshes Used by Finite Element Methods
2012-01-01
Nguyen, Hung V.
International Conference on Computational Science, ICCS 2012
…velocity fields, and chemical distribution, as well as conserve mass, especially for water quality applications. Solution accuracy depends highly on mesh …
Cox-Davenport, Rebecca A; Phelan, Julia C
2015-05-01
First-time NCLEX-RN pass rates are an important indicator of nursing school success and quality. Nursing schools use different methods to anticipate NCLEX outcomes and help prevent student failure and possible threat to accreditation. This study evaluated the impact of a shift in NCLEX preparation policy at a BSN program in the southeast United States. The policy shifted from the use of predictor score thresholds to determine graduation eligibility to a more proactive remediation strategy involving adaptive quizzing. A descriptive correlational design evaluated the impact of an adaptive quizzing system designed to give students ongoing active practice and feedback and explored the relationship between predictor examinations and NCLEX success. Data from student usage of the system as well as scores on predictor tests were collected for three student cohorts. Results revealed a positive correlation between adaptive quizzing system usage and content mastery. Two of the 69 students in the sample did not pass the NCLEX. With so few students failing the NCLEX, predictability of any course variables could not be determined. The power of predictor examinations to predict NCLEX failure could also not be supported. The most consistent factor among students, however, was their content mastery level within the adaptive quizzing system. Implications of these findings are discussed.
Anderson, R W; Pember, R B; Elliot, N S
2000-09-26
A new method for the solution of the unsteady Euler equations has been developed. The method combines staggered grid Lagrangian techniques with structured local adaptive mesh refinement (AMR). This method is a precursor to a more general adaptive arbitrary Lagrangian Eulerian (ALE-AMR) algorithm under development, which will facilitate the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required. Many of the core issues involved in the development of the ALE-AMR method hinge upon the integration of AMR with a Lagrange step, which is the focus of the work described here. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. These new algorithmic components are first developed in one dimension and are then generalized to two dimensions. Solutions of several model problems involving shock hydrodynamics are presented and discussed.
NASA Astrophysics Data System (ADS)
Pedretti, Daniele; Fernàndez-Garcia, Daniel
2013-09-01
Particle tracking methods to simulate solute transport deal with the issue of having to reconstruct smooth concentrations from a limited number of particles. This is an error-prone process that typically leads to large fluctuations in the determined late-time behavior of breakthrough curves (BTCs). Kernel density estimators (KDE) can be used to automatically reconstruct smooth BTCs from a small number of particles. The kernel approach incorporates the uncertainty associated with subsampling a large population by equipping each particle with a probability density function. Two broad classes of KDE methods can be distinguished depending on the parametrization of this function: global and adaptive methods. This paper shows that each method is likely to estimate a specific portion of the BTCs. Although global methods offer a valid approach to estimate the early-time behavior and peak of BTCs, they exhibit important fluctuations at the tails, where fewer particles exist. In contrast, locally adaptive methods improve tail estimation while oversmoothing both early-time and peak concentrations. Therefore, a new method is proposed that combines the strengths of both KDE approaches. The proposed approach is universal and needs only one parameter (α), which depends slightly on the shape of the BTCs. Results show that, for the tested cases, heavily-tailed BTCs are properly reconstructed with α ≈ 0.5.
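The global-versus-adaptive distinction above can be illustrated with a per-particle-bandwidth Gaussian KDE: a global method gives every particle the same bandwidth, while an adaptive one widens kernels where particles are sparse. The arrival times and bandwidths below are illustrative assumptions.

```python
# Minimal Gaussian KDE sketch for reconstructing a breakthrough curve from
# particle arrival times, with one bandwidth per particle.

import math

def kde(times, t, bandwidths):
    """Average of Gaussian kernels, one per particle, each with its own h."""
    total = 0.0
    for ti, h in zip(times, bandwidths):
        total += math.exp(-0.5 * ((t - ti) / h) ** 2) / (h * math.sqrt(2 * math.pi))
    return total / len(times)

arrivals = [1.0, 1.1, 1.2, 1.3, 5.0]    # dense early peak, one late tail particle
global_h = [0.2] * len(arrivals)         # global method: one bandwidth for all
adaptive_h = [0.2, 0.2, 0.2, 0.2, 1.0]   # adaptive: wider kernel in the sparse tail
```

With these values the adaptive estimate assigns visibly more density to late times (e.g. t = 6) than the global one, while both resolve the early peak.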
Adaptive reproducing kernel particle method for extraction of the cortical surface.
Xu, Meihe; Thompson, Paul M; Toga, Arthur W
2006-06-01
We propose a novel adaptive approach based on the Reproducing Kernel Particle Method (RKPM) to extract the cortical surfaces of the brain from three-dimensional (3-D) magnetic resonance images (MRIs). To formulate the discrete equations of the deformable model, a flexible particle shape function is employed in the Galerkin approximation of the weak form of the equilibrium equations. The proposed support generation method ensures that support of all particles cover the entire computational domains. The deformable model is adaptively adjusted by dilating the shape function and by inserting or merging particles in the high curvature regions or regions stopped by the target boundary. The shape function of the particle with a dilation parameter is adaptively constructed in response to particle insertion or merging. The proposed method offers flexibility in representing highly convolved structures and in refining the deformable models. Self-intersection of the surface, during evolution, is prevented by tracing backward along gradient descent direction from the crest interface of the distance field, which is computed by fast marching. These operations involve a significant computational cost. The initial model for the deformable surface is simple and requires no prior knowledge of the segmented structure. No specific template is required, e.g., an average cortical surface obtained from many subjects. The extracted cortical surface efficiently localizes the depths of the cerebral sulci, unlike some other active surface approaches that penalize regions of high curvature. Comparisons with manually segmented landmark data are provided to demonstrate the high accuracy of the proposed method. We also compare the proposed method to the finite element method, and to a commonly used cortical surface extraction approach, the CRUISE method. We also show that the independence of the shape functions of the RKPM from the underlying mesh enhances the convergence speed of the deformable
Daneshmand, Saeed; Marathe, Thyagaraja; Lachapelle, Gérard
2016-10-31
The use of antenna arrays in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its superior capability to suppress both narrowband and wideband interference. However, the phase distortions resulting from array processing may limit the applicability of these methods for high precision applications using carrier phase based positioning techniques. This paper studies the phase distortions occurring with the adaptive blind beamforming method in which satellite angle of arrival (AoA) information is not employed in the optimization problem. To cater to non-stationary interference scenarios, the array weights of the adaptive beamformer are continuously updated. The effects of these continuous updates on the tracking parameters of a GNSS receiver are analyzed. The second part of this paper focuses on reducing the phase distortions during the blind beamforming process in order to allow the receiver to perform carrier phase based positioning by applying a constraint on the structure of the array configuration and by compensating the array uncertainties. Limitations of the previous methods are studied and a new method is proposed that keeps the simplicity of the blind beamformer structure and, at the same time, reduces tracking degradations while achieving millimetre level positioning accuracy in interference environments. To verify the applicability of the proposed method and analyze the degradations, array signals corresponding to the GPS L1 band are generated using a combination of hardware and software simulators. Furthermore, the amount of degradation and performance of the proposed method under different conditions are evaluated based on Monte Carlo simulations.
Atzberger, Paul J.
2010-05-01
Stochastic partial differential equations are introduced for the continuum concentration fields of reaction-diffusion systems. The stochastic partial differential equations account for fluctuations arising from the finite number of molecules which diffusively migrate and react. Spatially adaptive stochastic numerical methods are developed for approximation of the stochastic partial differential equations. The methods allow for adaptive meshes with multiple levels of resolution, Neumann and Dirichlet boundary conditions, and domains having geometries with curved boundaries. A key issue addressed by the methods is the formulation of consistent discretizations for the stochastic driving fields at coarse-refined interfaces of the mesh and at boundaries. Methods are also introduced for the efficient generation of the required stochastic driving fields on such meshes. As a demonstration of the methods, investigations are made of the role of fluctuations in a biological model for microorganism direction sensing based on concentration gradients. Also investigated is a mechanism for spatial pattern formation induced by fluctuations. The discretization approaches introduced for SPDEs have the potential to be widely applicable in the development of numerical methods for the study of spatially extended stochastic systems.
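The finite-molecule-number fluctuations described in this abstract can be illustrated with a minimal one-dimensional Euler-Maruyama step; the decay-reaction model, the 1/sqrt(volume) noise amplitude, and all parameter names below are illustrative assumptions, not the paper's discretization:

```python
import math
import random

def rd_step(c, dt, dx, diff, rate, vol, rng):
    """One explicit Euler-Maruyama step for a 1D reaction-diffusion SPDE
    sketch: dc = D*c_xx dt - k*c dt + sqrt(2*k*c/vol) dW. The fluctuation
    amplitude scales with 1/sqrt(vol), mimicking finite-molecule-number
    noise; the specific noise model is an illustrative assumption."""
    n = len(c)
    out = []
    for i in range(n):
        # periodic second-difference Laplacian
        lap = (c[(i - 1) % n] - 2 * c[i] + c[(i + 1) % n]) / dx ** 2
        drift = diff * lap - rate * c[i]
        noise = math.sqrt(max(2 * rate * c[i] / vol, 0.0) * dt) * rng.gauss(0, 1)
        out.append(max(c[i] + drift * dt + noise, 0.0))  # keep concentrations non-negative
    return out
```

With a very large `vol` the noise vanishes and the step reduces to deterministic explicit Euler, which is a convenient sanity check.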
NASA Astrophysics Data System (ADS)
Abedini, Mohammad; Nojoumian, Mohammad Ali; Salarieh, Hassan; Meghdari, Ali
2015-08-01
In this paper, model reference control of a fractional order system is discussed. In order to control the fractional order plant, discrete-time approximation methods are applied. The plant and the reference model are discretized by the Grünwald-Letnikov definition of the fractional order derivative using the short memory principle. Unknown parameters of the fractional order system appear in the discrete-time approximate model as combinations of the parameters of the original system. The discrete-time MRAC scheme with RLS identification is modified to estimate the parameters and control the fractional order plant. Numerical results show the effectiveness of the proposed model reference adaptive control method.
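The Grünwald-Letnikov discretization with the short memory principle mentioned above can be sketched as follows; the `memory` parameter and the sampling setup are illustrative assumptions:

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov binomial weights w_j = (-1)^j * C(alpha, j),
    built with the standard recurrence w_j = w_{j-1} * (1 - (alpha+1)/j)."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_derivative(samples, alpha, h, memory=None):
    """Approximate the order-alpha GL derivative at the last sample:
    D^alpha f(t) ~ h^(-alpha) * sum_j w_j * f(t - j*h).
    `memory` truncates the sum to the most recent samples, which is the
    short memory principle used by the paper."""
    n = len(samples) - 1
    if memory is not None:
        n = min(n, memory)
    w = gl_weights(alpha, n)
    s = sum(w[j] * samples[-1 - j] for j in range(n + 1))
    return s / h ** alpha
```

For `alpha = 1` the weights collapse to `[1, -1, 0, ...]`, so the formula reduces to the ordinary backward difference, a useful correctness check.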
Souza-Junior, Eduardo José; de Souza-Régis, Marcos Ribeiro; Alonso, Roberta Caroline Bruschi; de Freitas, Anderson Pinheiro; Sinhoreti, Mario Alexandre Coelho; Cunha, Leonardo Gonçalves
2011-01-01
The aim of the present study was to evaluate the influence of curing methods and composite volumes on the marginal and internal adaptation of composite restoratives. Two cavities with different volumes (lower volume: 12.6 mm³; higher volume: 24.5 mm³) were prepared on the buccal surface of 60 bovine teeth and restored using Filtek Z250 in bulk filling. For each cavity, specimens were randomly assigned into three groups according to the curing method (n=10): 1) continuous light (CL: 27 seconds at 600 mW/cm²); 2) soft-start (SS: 10 seconds at 150 mW/cm² + 24 seconds at 600 mW/cm²); and 3) pulse delay (PD: five seconds at 150 mW/cm² + three minutes with no light + 25 seconds at 600 mW/cm²). The radiant exposure for all groups was 16 J/cm². Marginal adaptation was measured with the dye-staining gap procedure, using Caries Detector. Outer margins were stained for five seconds and the gap percentage was determined using digital images in a computer measurement program (Image Tool). Then, specimens were sectioned in slices and stained for five seconds, and the internal gaps were measured using the same method. Data were submitted to two-way analysis of variance and the Tukey test (p<0.05). Composite volume had a significant influence on superficial and internal gap formation, depending on the curing method. For CL groups, restorations with higher volume showed higher marginal gap incidence than did the lower volume restorations. Additionally, the effect of the curing method depended on the volume. Regarding marginal adaptation, SS resulted in a significant reduction of gap formation, when compared to CL, for higher volume restorations. For lower volume restorations, there was no difference among the curing methods. For internal adaptation, the modulated curing methods SS and PD promoted a significant reduction of gap formation, when compared to CL, only for the lower volume restoration. Therefore, under similar cavity-configuration conditions, the larger the composite volume, the greater the gap formation.
An Adaptive INS-Aided PLL Tracking Method for GNSS Receivers in Harsh Environments.
Cong, Li; Li, Xin; Jin, Tian; Yue, Song; Xue, Rui
2016-01-23
As the weak link in global navigation satellite system (GNSS) signal processing, the phase-locked loop (PLL) is easily affected by frequent cycle slips and loss of lock as a result of higher vehicle dynamics and lower signal-to-noise ratios. With inertial navigation system (INS) aid, the tracking performance of PLLs can be improved. However, for harsh environments with high dynamics and signal attenuation, the traditional INS-aided PLL with fixed loop parameters offers limited tracking adaptability. In this paper, an adaptive INS-aided PLL capable of adjusting its noise bandwidth and coherent integration time is proposed. Through theoretical analysis, the relation between the INS-aided PLL phase tracking error and the carrier-to-noise density ratio (C/N₀), vehicle dynamics, aiding information update time, noise bandwidth, and coherent integration time has been established. The relation formulae are used to choose the optimal integration time and bandwidth for a given application under the minimum tracking error criterion. Software and hardware simulation results verify the correctness of the theoretical analysis, and demonstrate that the adaptive tracking method can effectively improve the PLL tracking ability and integrated GNSS/INS navigation performance. For harsh environments, the tracking sensitivity is increased by 3 to 5 dB, velocity errors are decreased by 36% to 50%, and position errors are decreased by 6% to 24% when compared with other INS-aided PLL methods.
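The minimum-tracking-error selection of bandwidth and integration time can be sketched with textbook PLL jitter formulas; the error model below (thermal jitter plus a third-order dynamic-stress term) and every constant in it are illustrative assumptions, not the paper's derived formulae:

```python
import math

def pll_error_deg(cn0_dbhz, bn_hz, t_s, jerk_residual=1.0):
    """Illustrative 1-sigma PLL phase error (degrees): the standard thermal
    jitter term plus a third-order-loop dynamic stress term for a residual
    line-of-sight jerk (units folded into `jerk_residual`). Assumed model,
    not the paper's exact relation."""
    cn0 = 10 ** (cn0_dbhz / 10.0)  # C/N0 in linear units (Hz)
    thermal = (360.0 / (2 * math.pi)) * math.sqrt(
        (bn_hz / cn0) * (1.0 + 1.0 / (2.0 * t_s * cn0)))
    dynamic = 0.4828 * jerk_residual / bn_hz ** 3 / 3.0  # stress error / 3 (1-sigma)
    return thermal + dynamic

def best_loop_settings(cn0_dbhz, bandwidths, times):
    """Grid-search the (noise bandwidth, integration time) pair that
    minimizes the modeled tracking error -- the selection criterion the
    abstract describes."""
    return min(((b, t) for b in bandwidths for t in times),
               key=lambda bt: pll_error_deg(cn0_dbhz, *bt))
```

At low C/N₀ the search favors a narrow bandwidth and a long coherent integration time, matching the qualitative behavior the abstract reports.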
Classical FEM-BEM coupling methods: nonlinearities, well-posedness, and adaptivity
NASA Astrophysics Data System (ADS)
Aurada, Markus; Feischl, Michael; Führer, Thomas; Karkulik, Michael; Melenk, Jens Markus; Praetorius, Dirk
2013-04-01
We consider a (possibly) nonlinear interface problem in 2D and 3D, which is solved by use of various adaptive FEM-BEM coupling strategies, namely the Johnson-Nédélec coupling, the Bielak-MacCamy coupling, and Costabel's symmetric coupling. We provide a framework to prove that the continuous as well as the discrete Galerkin solutions of these coupling methods additionally solve an appropriate operator equation with a Lipschitz continuous and strongly monotone operator. Therefore, the original coupling formulations are well-defined, and the Galerkin solutions are quasi-optimal in the sense of a Céa-type lemma. For the respective Galerkin discretizations with lowest-order polynomials, we provide reliable residual-based error estimators. Together with an estimator reduction property, we prove convergence of the adaptive FEM-BEM coupling methods. A key ingredient in the proof of the estimator reduction is a set of novel inverse-type estimates for the involved boundary integral operators. Numerical experiments conclude the work and compare the performance and effectivity of the three adaptive coupling procedures in the presence of generic singularities.
Parallel level-set methods on adaptive tree-based grids
NASA Astrophysics Data System (ADS)
Mirzadeh, Mohammad; Guittet, Arthur; Burstedde, Carsten; Gibou, Frederic
2016-10-01
We present scalable algorithms for the level-set method on dynamic, adaptive Quadtree and Octree Cartesian grids. The algorithms are fully parallelized and implemented using the MPI standard and the open-source p4est library. We solve the level set equation with a semi-Lagrangian method which, similar to its serial implementation, is free of any time-step restrictions. This is achieved by introducing a scalable global interpolation scheme on adaptive tree-based grids. Moreover, we present a simple parallel reinitialization scheme using the pseudo-time transient formulation. Both parallel algorithms scale on the Stampede supercomputer, where we are currently using up to 4096 CPU cores, the limit of our current account. Finally, a relevant application of the algorithms is presented in modeling a crystallization phenomenon by solving a Stefan problem, illustrating a level of detail that would be impossible to achieve without a parallel adaptive strategy. We believe that the algorithms presented in this article will be of interest and useful to researchers working with the level-set framework and modeling multi-scale physics in general.
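The time-step-unconstrained semi-Lagrangian update mentioned above can be illustrated in 1D (serial, uniform grid, constant velocity; the parallel adaptive machinery built on p4est is omitted):

```python
def semi_lagrangian_step(phi, u, dt, dx):
    """One semi-Lagrangian advection step for a 1D level-set function `phi`
    on a uniform grid with velocity `u` (constant here for clarity). The
    departure point of each node is traced back and `phi` is linearly
    interpolated there, which is why no CFL time-step restriction applies."""
    n = len(phi)
    out = [0.0] * n
    for i in range(n):
        x = i * dx - u * dt            # backtraced departure point
        j = int(x // dx)
        a = (x - j * dx) / dx          # linear interpolation weight
        j0 = min(max(j, 0), n - 1)     # clamp stencil at domain boundaries
        j1 = min(max(j + 1, 0), n - 1)
        out[i] = (1 - a) * phi[j0] + a * phi[j1]
    return out
```

With `u * dt == dx` the update is an exact one-cell shift (boundary values are held), which makes the scheme easy to verify.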
Lee, W H; Kim, T-S; Cho, M H; Ahn, Y B; Lee, S Y
2006-12-07
In studying bioelectromagnetic problems, finite element analysis (FEA) offers several advantages over conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropic conductivity. For FEA, mesh generation is the first critical requirement and there exist many different approaches. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes (cMeshes), resulting in numerous nodes and elements in modelling the conducting domain, and thereby increasing computational load and demand. In this work, we present efficient content-adaptive mesh generation schemes for complex biological volumes of MR images. The presented methodology is fully automatic and generates FE meshes that are adaptive to the geometrical contents of MR images, allowing optimal representation of conducting domain for FEA. We have also evaluated the effect of cMeshes on FEA in three dimensions by comparing the forward solutions from various cMesh head models to the solutions from the reference FE head model in which fine and equidistant FEs constitute the model. The results show that there is a significant gain in computation time with minor loss in numerical accuracy. We believe that cMeshes should be useful in the FEA of bioelectromagnetic problems.
Pulse front adaptive optics: a new method for control of ultrashort laser pulses.
Sun, Bangshan; Salter, Patrick S; Booth, Martin J
2015-07-27
Ultrafast lasers enable a wide range of physics research and the manipulation of short pulses is a critical part of the ultrafast tool kit. Current methods of laser pulse shaping are usually considered separately in either the spatial or the temporal domain, but laser pulses are complex entities existing in four dimensions, so full freedom of manipulation requires advanced forms of spatiotemporal control. We demonstrate that, through a combination of adaptable diffractive and reflective optical elements - a liquid crystal spatial light modulator (SLM) and a deformable mirror (DM) - decoupled spatial control over the pulse front (temporal group delay) and the phase front of an ultra-short pulse is enabled. Pulse front modulation was confirmed through autocorrelation measurements. This new adaptive optics technique, enabling for the first time in-principle arbitrary shaping of the pulse front, promises to offer a further level of control for ultrafast lasers.
A DAFT DL_POLY distributed memory adaptation of the Smoothed Particle Mesh Ewald method
NASA Astrophysics Data System (ADS)
Bush, I. J.; Todorov, I. T.; Smith, W.
2006-09-01
The Smoothed Particle Mesh Ewald method [U. Essmann, L. Perera, M.L. Berkowtz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys. 103 (1995) 8577] for calculating long ranged forces in molecular simulation has been adapted for the parallel molecular dynamics code DL_POLY_3 [I.T. Todorov, W. Smith, Philos. Trans. Roy. Soc. London 362 (2004) 1835], making use of a novel 3D Fast Fourier Transform (DAFT) [I.J. Bush, The Daresbury Advanced Fourier transform, Daresbury Laboratory, 1999] that perfectly matches the Domain Decomposition (DD) parallelisation strategy [W. Smith, Comput. Phys. Comm. 62 (1991) 229; M.R.S. Pinches, D. Tildesley, W. Smith, Mol. Sim. 6 (1991) 51; D. Rapaport, Comput. Phys. Comm. 62 (1991) 217] of the DL_POLY_3 code. In this article we describe software adaptations undertaken to import this functionality and provide a review of its performance.
An adaptive two-stage dose-response design method for establishing Proof of Concept
Franchetti, Yoko; Anderson, Stewart J.; Sampson, Allan R.
2013-01-01
We propose an adaptive two-stage dose-response design where a pre-specified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish ‘global’ PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs. PMID:23957520
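Combining stage-wise results can be illustrated with the standard weighted inverse-normal combination test; the paper combines stages through a conditional error function, so treat this only as a related sketch:

```python
from statistics import NormalDist

def inverse_normal_combination(p1, p2, w1=0.5 ** 0.5, w2=0.5 ** 0.5):
    """Combine one-sided stage-wise p-values with the weighted
    inverse-normal rule (pre-specified weights with w1^2 + w2^2 = 1):
    z = w1*z1 + w2*z2, where z_k = Phi^{-1}(1 - p_k). This is a standard
    two-stage combination test, used here as an illustrative stand-in for
    the paper's conditional-error approach."""
    nd = NormalDist()
    z = w1 * nd.inv_cdf(1.0 - p1) + w2 * nd.inv_cdf(1.0 - p2)
    return 1.0 - nd.cdf(z)  # combined one-sided p-value
```

Two moderately significant stages reinforce each other: equal p-values of 0.05 combine to roughly 0.01, while two null stages (p = 0.5 each) combine back to 0.5.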
Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods
NASA Astrophysics Data System (ADS)
Bause, M.; Knabner, P.
2004-06-01
We present adaptive mixed hybrid finite element discretizations of the Richards equation, a nonlinear parabolic partial differential equation modeling the flow of water into a variably saturated porous medium. The approach simultaneously constructs approximations of the flux and the pressure head in Raviart-Thomas spaces. The resulting nonlinear systems of equations are solved by a Newton method. For the linear problems of the Newton iteration a multigrid algorithm is used. We consider two different kinds of error indicators for space adaptive grid refinement: superconvergence and residual based indicators. They can be calculated easily by means of the available finite element approximations. This seems attractive for computations since no additional (sub-)problems have to be solved. Computational experiments conducted for realistic water table recharge problems illustrate the effectiveness and robustness of the approach.
An adaptive segment method for smoothing lidar signal based on noise estimation
NASA Astrophysics Data System (ADS)
Wang, Yuzhao; Luo, Pingping
2014-10-01
An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise level is defined as 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead determines end points adaptively for each signal, so the smoothing windows can be set adaptively. The windows are always set to half of the segment length, and average smoothing is then applied within each segment. An iterative process is required to reduce the end-point aberration effect of the average smoothing, and two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, so frequency-domain disturbances are avoided. A lidar echo was simulated in the experimental work. The echo was assumed to be produced by a space-borne lidar (e.g., CALIOP), and white Gaussian noise was added to represent the random noise arising from the environment and the detector. The ASSM was applied to the noisy echo to filter the noise. In the test, N was set to 3 and the number of iterations to two. The results show that the signal can be smoothed adaptively by the ASSM, although N and the number of iterations might need to be optimized when the ASSM is applied to a different lidar.
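A minimal sketch of the ASSM segmentation-and-smoothing loop described above (the window construction and parameter choices are illustrative):

```python
def assm_smooth(signal, sigma, n=3, iterations=2):
    """Sketch of the adaptive segment smoothing method (ASSM): split the
    signal where adjacent samples differ by more than 3*n*sigma, then
    moving-average each segment separately with a window derived from half
    the segment length. `sigma` is the background-noise standard deviation."""
    # locate segment end points at abrupt changes
    ends = [0]
    for i in range(1, len(signal)):
        if abs(signal[i] - signal[i - 1]) > 3 * n * sigma:
            ends.append(i)
    ends.append(len(signal))
    out = list(signal)
    for _ in range(iterations):                    # two or three passes suffice
        for s, e in zip(ends, ends[1:]):
            seg = out[s:e]
            half = max(1, len(seg) // 2)
            sm = []
            for i in range(len(seg)):
                lo = max(0, i - half // 2)
                hi = min(len(seg), i + half // 2 + 1)
                sm.append(sum(seg[lo:hi]) / (hi - lo))
            out[s:e] = sm
    return out
```

Because segments are smoothed independently, a genuine step larger than the 3Nσ threshold survives intact instead of being blurred across the edge.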
Adaptive Projection Subspace Dimension for the Thick-Restart Lanczos Method
Yamazaki, Ichitaro; Bai, Zhaojun; Simon, Horst; Wang, Lin-Wang; Wu, K.
2008-10-01
The Thick-Restart Lanczos (TRLan) method is an effective method for solving large-scale Hermitian eigenvalue problems. However, its performance strongly depends on the dimension of the projection subspace. In this paper, we propose an objective function to quantify the effectiveness of a chosen subspace dimension, and then introduce an adaptive scheme to dynamically adjust the dimension at each restart. An open-source software package, nu-TRLan, which implements the TRLan method with this adaptive projection subspace dimension is available in the public domain. The numerical results of synthetic eigenvalue problems are presented to demonstrate that nu-TRLan achieves speedups of between 0.9 and 5.1 over the static method using a default subspace dimension. To demonstrate the effectiveness of nu-TRLan in a real application, we apply it to the electronic structure calculations of quantum dots. We show that nu-TRLan can achieve speedups of greater than 1.69 over the state-of-the-art eigensolver for this application, which is based on the Conjugate Gradient method with a powerful preconditioner.
Cen, Guanjun; Yu, Yonghao; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao
2015-01-01
In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks' rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby's growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods.
Jokinen, Emma; Yrttiaho, Santeri; Pulakka, Hannu; Vainio, Martti; Alku, Paavo
2012-12-01
Post-filtering can be utilized to improve the quality and intelligibility of telephone speech. Previous studies have shown that energy reallocation with a high-pass type filter works effectively in improving the intelligibility of speech in difficult noise conditions. The present study introduces a signal-to-noise ratio adaptive post-filtering method that utilizes energy reallocation to transfer energy from the first formant to higher frequencies. The proposed method adapts to the level of the background noise so that, in favorable noise conditions, the post-filter has a flat frequency response and the effect of the post-filtering is increased as the level of the ambient noise increases. The performance of the proposed method is compared with a similar post-filtering algorithm and unprocessed speech in subjective listening tests which evaluate both intelligibility and listener preference. The results indicate that both of the post-filtering methods maintain the quality of speech in negligible noise conditions and are able to provide intelligibility improvement over unprocessed speech in adverse noise conditions. Furthermore, the proposed post-filtering algorithm performs better than the other post-filtering method under evaluation in moderate to difficult noise conditions, where intelligibility improvement is mostly required.
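The SNR-adaptive energy-reallocation idea can be sketched as a first-order pre-emphasis (high-pass) filter whose strength grows as the ambient noise level rises; the SNR-to-coefficient mapping and all constants are assumptions for illustration, not the paper's filter design:

```python
def snr_adaptive_postfilter(x, snr_db, snr_flat=30.0, snr_full=0.0, beta_max=0.9):
    """Sketch of an SNR-adaptive high-pass post-filter: first-order
    pre-emphasis y[n] = x[n] - beta * x[n-1], where beta grows from 0
    (flat response in clean conditions) to beta_max as the SNR drops,
    shifting energy from low to high frequencies."""
    t = (snr_flat - snr_db) / (snr_flat - snr_full)  # 0 when clean, 1 in noise
    beta = beta_max * min(max(t, 0.0), 1.0)
    y, prev = [], 0.0
    for s in x:
        y.append(s - beta * prev)
        prev = s
    return y
```

In favorable conditions the filter is the identity, matching the abstract's requirement that speech quality be preserved when post-filtering is not needed.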
A Parallel Adaptive Wavelet Method for the Simulation of Compressible Reacting Flows
NASA Astrophysics Data System (ADS)
Zikoski, Zachary; Paolucci, Samuel
2011-11-01
The Wavelet Adaptive Multiresolution Representation (WAMR) method provides a robust method for controlling spatial grid adaptation: fine grid spacing in regions of a solution requiring high resolution (i.e., near steep gradients, singularities, or near-singularities) and much coarser grid spacing where the solution is slowly varying. The sparse grids produced using the WAMR method exhibit very high compression ratios compared to uniform grids of equivalent resolution. Consequently, the wide range of spatial scales often occurring in continuum physics models can be captured efficiently. Furthermore, the wavelet transform provides a direct measure of local error at each grid point, effectively producing automatically verified solutions. The algorithm is parallelized using an MPI-based domain decomposition approach suitable for a wide range of distributed-memory parallel architectures. The method is applied to the solution of the compressible, reactive Navier-Stokes equations and includes multi-component diffusive transport and chemical kinetics models. Results for the method's parallel performance are reported, and its effectiveness on several challenging compressible reacting flow problems is highlighted.
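The compression at the heart of a wavelet-adaptive method can be illustrated with a multilevel Haar transform and a threshold that drops small-magnitude coefficients; this is a generic sketch, not the WAMR implementation:

```python
def haar_forward(x):
    """Full multilevel Haar transform (input length must be a power of two).
    Pairwise averages descend the levels; pairwise half-differences are the
    detail coefficients that measure local variation."""
    x = list(x)
    n = len(x)
    while n > 1:
        half = n // 2
        avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(half)]
        det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(half)]
        x[:n] = avg + det
        n = half
    return x

def compression_ratio(coeffs, eps):
    """Fraction of coefficients below the threshold `eps` -- the ones an
    adaptive method would drop, keeping the grid sparse where the solution
    is smooth. Illustrative only."""
    dropped = sum(1 for c in coeffs if abs(c) < eps)
    return dropped / len(coeffs)
```

A constant signal compresses perfectly: every detail coefficient is zero, so only the single coarse average survives thresholding.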
A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection
Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; ...
2015-06-24
This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.
Johansson, A Torbjorn; White, Paul R
2011-08-01
This paper proposes an adaptive filter-based method for detection and frequency estimation of whistle calls, such as the calls of birds and marine mammals, which are typically analyzed in the time-frequency domain using a spectrogram. The approach taken here is based on adaptive notch filtering, which is an established technique for frequency tracking. For application to automatic whistle processing, methods for detection and improved frequency tracking through frequency crossings as well as interfering transients are developed and coupled to the frequency tracker. Background noise estimation and compensation is accomplished using order statistics and pre-whitening. Using simulated signals as well as recorded calls of marine mammals and a human whistled speech utterance, it is shown that the proposed method can detect more simultaneous whistles than two competing spectrogram-based methods while not reporting any false alarms on the example datasets. In one example, it extracts complete 1.4 and 1.8 s bottlenose dolphin whistles successfully through frequency crossings. The method performs detection and estimates frequency tracks even at high sweep rates. The algorithm is also shown to be effective on human whistled utterances.
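The core frequency-tracking recursion can be sketched with an LMS update on the resonator identity x[n] + x[n-2] = 2*cos(w)*x[n-1]; this is a minimal stand-in for the paper's adaptive notch filter and omits its detection, frequency-crossing, and noise-compensation logic:

```python
import math

def track_tone(x, mu=0.01, a0=1.0):
    """Track the frequency of a dominant tone: `a` estimates 2*cos(w) via
    an LMS recursion on the prediction error of the resonator identity.
    Returns the per-sample frequency estimates in radians/sample."""
    a = a0
    freqs = []
    for n in range(2, len(x)):
        err = x[n] + x[n - 2] - a * x[n - 1]  # prediction error
        a += mu * err * x[n - 1]              # LMS update toward 2*cos(w)
        a = min(max(a, -2.0), 2.0)            # keep acos argument in range
        freqs.append(math.acos(a / 2.0))
    return freqs
```

For a clean sinusoid the recursion has a fixed point at exactly 2*cos(w), so the estimate converges to the true frequency.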
NASA Astrophysics Data System (ADS)
Bu, Guochao; Wang, Pei
2016-04-01
Terrestrial laser scanning (TLS) has been used to extract accurate forest biophysical parameters for inventory purposes. The diameter at breast height (DBH) is a key parameter for individual trees because it has the potential for modeling the height, volume, biomass, and carbon sequestration potential of the tree based on empirical allometric scaling equations. In order to extract the DBH from the single-scan data of TLS automatically and accurately within a certain range, we proposed an adaptive circle-ellipse fitting method based on the point cloud transect. This proposed method can correct the error caused by the simple circle fitting method when a tree is slanted. A slanted tree was detected by the circle-ellipse fitting analysis, and the corresponding slant angle was found based on the ellipse fitting result. With this information, the DBH of the trees could be recalculated by reslicing the point cloud data at breast height. Artificial stem data simulated by a cylindrical model of leaning trees and the scanning data acquired with the RIEGL VZ-400 were used to test the proposed adaptive fitting method. The results show that the proposed method can detect the trees and accurately estimate the DBH for leaning trees.
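The slant correction rests on a geometric fact: a horizontal slice through a circular cylinder of radius r tilted by angle t is an ellipse with semi-minor axis b = r and semi-major axis a = r / cos(t). A hedged sketch of the resulting correction (the eccentricity threshold is an assumption):

```python
import math

def corrected_dbh(semi_major, semi_minor, ecc_threshold=1.05):
    """If the fitted cross-section is noticeably elliptical, treat the stem
    as slanted: recover the slant angle from cos(t) = b / a and take the
    circular diameter 2*b as the DBH. Returns (DBH, slant angle in radians).
    Near-circular fits fall back to the mean diameter."""
    if semi_major / semi_minor < ecc_threshold:
        return semi_major + semi_minor, 0.0   # near-circular: mean diameter
    slant = math.acos(semi_minor / semi_major)
    return 2.0 * semi_minor, slant
```

With a = 0.20 m and b = 0.15 m the recovered slant is acos(0.75) (about 41 degrees) while the DBH stays at the true 0.30 m, instead of the inflated value a plain circle fit would give.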
A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection
Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D; Burkardt, John V
2014-03-01
This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest by virtue of a hyper-spherical transformation. A sparse-grid approximation of the transformed function is then built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost compared to existing methods. Moreover, hierarchical acceleration techniques are incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided, as are several numerical examples that illustrate the effectiveness of the approach.
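The one-dimensional detection step at the heart of the approach can be sketched as bisection along rays in (hyper-)spherical coordinates: each ray crosses the discontinuity surface once, and the crossing radius is the value of the transformed function at that angle. The 2-D star-shaped domain and the `inside` indicator below are illustrative stand-ins, and the sparse-grid approximation of r(theta) is omitted.

```python
import math

def jump_radius(indicator, theta, r_max=1.0, tol=1e-8):
    """Locate the jump of `indicator` along the ray at angle `theta`
    by bisection, assuming exactly one inside-to-outside crossing."""
    lo, hi = 0.0, r_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if indicator(mid * math.cos(theta), mid * math.sin(theta)):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Star-shaped discontinuity surface r(theta) = 0.5 + 0.1*cos(3*theta)
true_r = lambda th: 0.5 + 0.1 * math.cos(3.0 * th)
inside = lambda x, y: math.hypot(x, y) < true_r(math.atan2(y, x))

thetas = [2.0 * math.pi * k / 64.0 for k in range(64)]
errs = [abs(jump_radius(inside, th) - true_r(th)) for th in thetas]
```

Each 1-D solve costs a handful of indicator evaluations, which is where the claimed cost reduction over flagging cells in the full N-dimensional space comes from.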
Locomotor adaptation to a powered ankle-foot orthosis depends on control method
Cain, Stephen M; Gordon, Keith E; Ferris, Daniel P
2007-01-01
Background We studied human locomotor adaptation to powered ankle-foot orthoses with the intent of identifying differences between two different orthosis control methods. The first orthosis control method used a footswitch to provide bang-bang control (a kinematic control) and the second orthosis control method used a proportional myoelectric signal from the soleus (a physiological control). Both controllers activated an artificial pneumatic muscle providing plantar flexion torque. Methods Subjects walked on a treadmill for two thirty-minute sessions spaced three days apart under either footswitch control (n = 6) or myoelectric control (n = 6). We recorded lower limb electromyography (EMG), joint kinematics, and orthosis kinetics. We compared stance phase EMG amplitudes, correlation of joint angle patterns, and mechanical work performed by the powered orthosis between the two controllers over time. Results During steady state at the end of the second session, subjects using proportional myoelectric control had much lower soleus and gastrocnemius activation than the subjects using footswitch control. The substantial decrease in triceps surae recruitment allowed the proportional myoelectric control subjects to walk with ankle kinematics close to normal and reduce negative work performed by the orthosis. The footswitch control subjects walked with substantially perturbed ankle kinematics and performed more negative work with the orthosis. Conclusion These results provide evidence that the choice of orthosis control method can greatly alter how humans adapt to powered orthosis assistance during walking. Specifically, proportional myoelectric control results in larger reductions in muscle activation and gait kinematics more similar to normal compared to footswitch control. PMID:18154649
The stochastic control of the F-8C aircraft using the Multiple Model Adaptive Control (MMAC) method
NASA Technical Reports Server (NTRS)
Athans, M.; Dunn, K. P.; Greene, E. S.; Lee, W. H.; Sandel, N. R., Jr.
1975-01-01
The purpose of this paper is to summarize results obtained for the adaptive control of the F-8C aircraft using the so-called Multiple Model Adaptive Control method. The discussion includes the selection of the performance criteria for both the lateral and the longitudinal dynamics, the design of the Kalman filters for different flight conditions, the 'identification' aspects of the design using hypothesis testing ideas, and the performance of the closed loop adaptive system.
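A toy sketch of the MMAC idea for a scalar plant: a bank of candidate models produces one-step residuals, hypothesis-testing (Bayesian) weights are updated from Gaussian residual likelihoods, and the control blends the per-model laws by posterior weight. The candidate pole values, `SIGMA`, and the deadbeat control law are invented for illustration and are unrelated to the F-8C design.

```python
import math

# Candidate models x[k+1] = a_i*x[k] + u[k]; the true plant uses a = 0.9.
A_CANDIDATES = [-0.5, 0.3, 0.9]
A_TRUE = 0.9
SIGMA = 0.1                       # assumed residual standard deviation

weights = [1.0 / len(A_CANDIDATES)] * len(A_CANDIDATES)
x = 1.0
for k in range(30):
    # Deadbeat control using the probability-weighted model estimate
    a_hat = sum(w * a for w, a in zip(weights, A_CANDIDATES))
    u = -a_hat * x
    x_next = A_TRUE * x + u
    # Bayesian weight update from each model's one-step residual
    likes = [math.exp(-((x_next - (a * x + u)) ** 2) / (2.0 * SIGMA ** 2))
             for a in A_CANDIDATES]
    total = sum(w * l for w, l in zip(weights, likes))
    weights = [w * l / total for w, l in zip(weights, likes)]
    x = x_next
best = A_CANDIDATES[max(range(len(weights)), key=lambda i: weights[i])]
```

In the full method each hypothesis carries its own Kalman filter and LQ gain per flight condition; here a single scalar residual plays that role.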
Calvo, Juan Francisco; San José, Sol; Garrido, LLuís; Puertas, Enrique; Moragues, Sandra; Pozo, Miquel; Casals, Joan
2013-10-01
To introduce an approach for online adaptive replanning (i.e., dose-guided radiosurgery) in frameless stereotactic radiosurgery, when a 6-dimensional (6D) robotic couch is not available in the linear accelerator (linac). Cranial radiosurgical treatments are planned in our department using intensity-modulated technique. Patients are immobilized using thermoplastic mask. A cone-beam computed tomography (CBCT) scan is acquired after the initial laser-based patient setup (CBCT{sub setup}). The online adaptive replanning procedure we propose consists of a 6D registration-based mapping of the reference plan onto actual CBCT{sub setup}, followed by a reoptimization of the beam fluences (“6D plan”) to achieve similar dosage as originally was intended, while the patient is lying in the linac couch and the original beam arrangement is kept. The goodness of the online adaptive method proposed was retrospectively analyzed for 16 patients with 35 targets treated with CBCT-based frameless intensity modulated technique. Simulation of reference plan onto actual CBCT{sub setup}, according to the 4 degrees of freedom, supported by linac couch was also generated for each case (4D plan). Target coverage (D99%) and conformity index values of 6D and 4D plans were compared with the corresponding values of the reference plans. Although the 4D-based approach does not always assure the target coverage (D99% between 72% and 103%), the proposed online adaptive method gave a perfect coverage in all cases analyzed as well as a similar conformity index value as was planned. Dose-guided radiosurgery approach is effective to assure the dose coverage and conformity of an intracranial target volume, avoiding resetting the patient inside the mask in a “trial and error” way so as to remove the pitch and roll errors when a robotic table is not available.
Self-adaptive method for high frequency multi-channel analysis of surface wave method
Technology Transfer Automated Retrieval System (TEKTRAN)
When the high frequency multi-channel analysis of surface waves (MASW) method is conducted to explore soil properties in the vadose zone, existing rules for selecting the near offset and spread lengths cannot satisfy the requirements of planar dominant Rayleigh waves for all frequencies of interest ...
Adaptation to environmental change is not a new concept. Humans have shown throughout history a capacity for adapting to different climates and environmental changes. Farmers, foresters, civil engineers, have all been forced to adapt to numerous challenges to overcome adversity...
Adaptive method for quantifying uncertainty in discharge measurements using velocity-area method.
NASA Astrophysics Data System (ADS)
Despax, Aurélien; Favre, Anne-Catherine; Belleville, Arnaud
2015-04-01
Streamflow information provided by hydrometric services such as EDF-DTG allows real-time monitoring of rivers, streamflow forecasting, paramount hydrological studies and engineering design. In open channels, the traditional approach to measuring flow uses a rating curve, an indirect method to estimate the discharge in rivers based on water level and punctual discharge measurements. A large proportion of these discharge measurements are performed using the velocity-area method; it consists in integrating flow velocities and depths through the cross-section [1]. The velocity field is estimated by choosing a number m of verticals, distributed across the river, where the vertical velocity profile is sampled by a current-meter at ni different depths. Uncertainties coming from several sources are related to the measurement process. To date, the framework for assessing uncertainty in velocity-area discharge measurements is the method presented in the ISO 748 standard [2], which follows the GUM [3] approach. The equation for the combined uncertainty in measured discharge u(Q), at 68% level of confidence, proposed by the ISO 748 standard is expressed as: u^2(Q) = [ Σ_i q_i^2 ( u^2(B_i) + u^2(D_i) + u_p^2(V_i) + (1/n_i)(u_c^2(V_i) + u_exp^2(V_i)) ) ] / ( Σ_i q_i )^2, where q_i, B_i, D_i and V_i are the partial discharge, width, depth and mean velocity of vertical i.
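A sketch of this ISO 748-style combination in code, hedged to the per-vertical terms visible in the abstract plus the standard's overall terms u_m and u_s (passed in as optional extras); the dictionary field names are illustrative.

```python
import math

def iso748_uncertainty(verticals, u_m=0.0, u_s=0.0):
    """Combined relative uncertainty u(Q) of a velocity-area gauging.

    Each vertical i carries its partial discharge q and relative
    uncertainties: width uB, depth uD, limited-number-of-points up,
    and per-point current-meter uc and exposure-time uexp components
    averaged over the n sampled depths.
    """
    num = sum(v["q"] ** 2 * (v["uB"] ** 2 + v["uD"] ** 2 + v["up"] ** 2
              + (v["uc"] ** 2 + v["uexp"] ** 2) / v["n"])
              for v in verticals)
    den = sum(v["q"] for v in verticals) ** 2
    return math.sqrt(u_m ** 2 + u_s ** 2 + num / den)

# Two identical verticals, 4% current-meter uncertainty, everything else zero
verticals = [dict(q=1.0, uB=0.0, uD=0.0, up=0.0, uc=0.04, uexp=0.0, n=1)
             for _ in range(2)]
uQ = iso748_uncertainty(verticals)
```

Because the per-vertical terms are divided by the squared total discharge, adding verticals with independent errors drives the combined uncertainty down, which is the behaviour the adaptive method above exploits.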
An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1994-01-01
This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than in many conventional approaches. Discontinuous Galerkin methods are uncommon in applications, but they rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. The plan of this work is to first and primarily focus on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and to then briefly explore some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.
A vertical parallax reduction method for stereoscopic video based on adaptive interpolation
NASA Astrophysics Data System (ADS)
Li, Qingyu; Zhao, Yan
2016-10-01
The existence of vertical parallax is the main factor affecting the viewing comfort of stereo video, and visual fatigue is gaining widespread attention with the booming development of 3D stereoscopic video technology. In order to reduce the vertical parallax without affecting the horizontal parallax, a self-adaptive image scaling algorithm is proposed that uses edge characteristics efficiently. In addition, the nonlinear Levenberg-Marquardt (L-M) algorithm is introduced to improve the accuracy of the transformation matrix. Firstly, the self-adaptive scaling algorithm is used to interpolate the original image: when a pixel of the original image lies in an edge area, the interpolation is performed adaptively along the edge direction obtained by the Sobel operator. Secondly, the SIFT algorithm, which is invariant to scaling, rotation and affine transformation, is used to detect matching feature points in the binocular images. Then, from the coordinates of the matching points, the transformation matrix that reduces the vertical parallax is computed using the Levenberg-Marquardt algorithm. Finally, the transformation matrix is applied to the target image to calculate the new coordinate position of each pixel of the view image. The experimental results show that, compared with a method that reduces vertical parallax by computing a two-dimensional projective transformation with a linear algorithm, the proposed method reduces vertical parallax noticeably better. At the same time, its impact on horizontal parallax is smaller: after vertical parallax reduction, the horizontal parallax remains closer to that of the original image. The proposed method can therefore optimize vertical parallax reduction.
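As a simplified, hypothetical stand-in for the transform estimation step, one can fit just a vertical scale and offset to the matched point ordinates by linear least squares (the paper estimates a fuller transformation with Levenberg-Marquardt); the pixel coordinates below are invented test data.

```python
# Hypothetical simplification: align vertical coordinates of matched
# SIFT pairs with y_left ~ s*y_right + t, fitted by least squares.
def fit_vertical_alignment(matches):
    n = len(matches)
    Sy = sum(yr for _, yr in matches)
    Sl = sum(yl for yl, _ in matches)
    Syy = sum(yr * yr for _, yr in matches)
    Syl = sum(yr * yl for yl, yr in matches)
    s = (n * Syl - Sy * Sl) / (n * Syy - Sy * Sy)
    t = (Sl - s * Sy) / n
    return s, t

# Matched ordinates (y_left, y_right): the right view is shifted down
# 4 px and scaled by 2 percent relative to the left view.
matches = [(y, (y - 4.0) / 1.02) for y in [10.0, 55.0, 120.0, 260.0, 333.0]]
s, t = fit_vertical_alignment(matches)
residual = max(abs(yl - (s * yr + t)) for yl, yr in matches)
```

Applying `s` and `t` to the right view's ordinates drives the vertical disparity of the matches toward zero while leaving horizontal coordinates untouched.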
Investigation of self-adaptive LED surgical lighting based on entropy contrast enhancing method
NASA Astrophysics Data System (ADS)
Liu, Peng; Wang, Huihui; Zhang, Yaqin; Shen, Junfei; Wu, Rengmao; Zheng, Zhenrong; Li, Haifeng; Liu, Xu
2014-05-01
An investigation was performed to explore the possibility of enhancing contrast by varying the spectral power distribution (SPD) of the surgical lighting. Illumination scenes with different SPDs were generated by combining a self-adaptive white-light optimization method with the LED ceiling system; images of a biological sample were taken by a CCD camera and then processed by an entropy-based contrast evaluation model proposed specifically for surgical settings. Compared with the neutral-white-LED-based and traditional algorithm-based image enhancing methods, the illumination-based enhancing method shows better contrast-enhancing performance, improving the average contrast value by about 9% and 6%, respectively. This low-cost method is simple and practicable, and thus may provide an alternative to expensive visual-facility medical instruments.
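The entropy score used to rank illumination scenes can be sketched as plain histogram entropy; this is a generic Shannon-entropy contrast measure, not necessarily the paper's exact evaluation model.

```python
import math

def image_entropy(pixels, levels=256):
    """Shannon entropy (bits) of the gray-level histogram, used as a
    simple contrast score: flat images score 0, richer tonal spreads
    score higher."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = float(len(pixels))
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

flat = [128] * 1024                   # no contrast at all
binary = [0] * 512 + [255] * 512      # two equally likely gray levels
```

An SPD that spreads the sample's reflectances over more distinct gray levels raises this score, which is the sense in which the lighting itself "enhances" contrast.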
Motion correction of magnetic resonance imaging data by using adaptive moving least squares method.
Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Park, Hae-Jeong; Yoon, Jungho
2015-06-01
Image artifacts caused by subject motion during the imaging sequence are one of the most common problems in magnetic resonance imaging (MRI) and often degrade the image quality. In this study, we develop a motion correction algorithm for the interleaved-MR acquisition. An advantage of the proposed method is that it does not require either additional equipment or redundant over-sampling. The general framework of this study is similar to that of Rohlfing et al. [1], except for the introduction of the following fundamental modification. The three-dimensional (3-D) scattered data approximation method is used to correct the artifacted data as a post-processing step. In order to obtain a better match to the local structures of the given image, we use the data-adapted moving least squares (MLS) method that can improve the performance of the classical method. Numerical results are provided to demonstrate the advantages of the proposed algorithm.
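A 1-D sketch of moving least squares with Gaussian weights centered on the query point (the study uses a data-adapted 3-D variant); the bandwidth `h` and the sample profile are illustrative.

```python
import math

def mls_eval(xq, xs, ys, h=0.3):
    """Moving least squares with a local linear basis and Gaussian
    weights centered on the query point xq."""
    Sw = Sx = Sy = Sxx = Sxy = 0.0
    for x, y in zip(xs, ys):
        w = math.exp(-((x - xq) / h) ** 2)
        Sw += w; Sx += w * x; Sy += w * y
        Sxx += w * x * x; Sxy += w * x * y
    # Closed-form weighted linear fit evaluated at xq
    slope = (Sw * Sxy - Sx * Sy) / (Sw * Sxx - Sx * Sx)
    intercept = (Sy - slope * Sx) / Sw
    return slope * xq + intercept

xs = [k / 10.0 for k in range(11)]        # scattered sample sites
ys = [2.0 * x + 1.0 for x in xs]          # linear "intensity" profile
```

Because the basis is linear, the estimator reproduces linear data exactly; on motion-corrupted scattered samples the local weighting is what lets it adapt to nearby structure.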
A Cartesian Adaptive Level Set Method for Two-Phase Flows
NASA Technical Reports Server (NTRS)
Ham, F.; Young, Y.-N.
2003-01-01
In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to other free surface methods are reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases, including the 3D drop breakup in an impulsively accelerated free stream and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.
NASA Astrophysics Data System (ADS)
Yao, Zhenjian; Wang, Zhongyu; Yi-Lin Forrest, Jeffrey; Wang, Qiyue; Lv, Jing
2017-04-01
In this paper, an approach combining empirical mode decomposition (EMD) with adaptive least squares (ALS) is proposed to improve the dynamic calibration accuracy of pressure sensors. With EMD, the original output of the sensor can be represented as a sum of zero-mean amplitude-modulation frequency-modulation components. By identifying and excluding the components dominated by noise, a noise-free output can be reconstructed from the useful frequency-modulation components. The least squares method is then iteratively performed to estimate the optimal order and parameters of the mathematical model. The dynamic characteristic parameters of the sensor can be derived from the model in both the time and frequency domains. A series of shock tube calibration tests are carried out to validate the performance of this method. Experimental results show that the proposed method works well in reducing the influence of noise and yields an appropriate mathematical model. Furthermore, comparative experiments also demonstrate the superiority of the proposed method over existing ones.
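Assuming the IMFs have already been computed by EMD (which is omitted here), the identify-and-exclude step can be sketched with a simple correlation criterion: keep the components that correlate well with the raw output and sum them. The 0.5 threshold and the two stand-in components are invented for illustration, and the paper's actual selection criterion may differ.

```python
import math

def corrcoef(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def denoise_from_imfs(signal, imfs, threshold=0.5):
    """Keep only IMFs well correlated with the raw signal, sum them."""
    kept = [imf for imf in imfs if abs(corrcoef(signal, imf)) >= threshold]
    return [sum(vals) for vals in zip(*kept)]

# Stand-ins for EMD output: a weak broadband "noise" IMF and a tone IMF.
n = 400
tone = [math.sin(2.0 * math.pi * 5.0 * k / n) for k in range(n)]
noise = [0.3 * math.sin(2.0 * math.pi * 97.0 * k / n + 1.0) for k in range(n)]
signal = [t + w for t, w in zip(tone, noise)]
clean = denoise_from_imfs(signal, [noise, tone])
```

The retained sum then feeds the iterative least-squares model fit described above.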
Novel Multistatic Adaptive Microwave Imaging Methods for Early Breast Cancer Detection
NASA Astrophysics Data System (ADS)
Xie, Yao; Guo, Bin; Li, Jian; Stoica, Petre
2006-12-01
Multistatic adaptive microwave imaging (MAMI) methods are presented and compared for early breast cancer detection. Due to the significant contrast between the dielectric properties of normal and malignant breast tissues, developing microwave imaging techniques for early breast cancer detection has attracted much interest lately. MAMI is one of the microwave imaging modalities and employs multiple antennas that take turns to transmit ultra-wideband (UWB) pulses while all antennas are used to receive the reflected signals. MAMI can be considered as a special case of the multi-input multi-output (MIMO) radar with the multiple transmitted waveforms being either UWB pulses or zeros. Since the UWB pulses transmitted by different antennas are displaced in time, the multiple transmitted waveforms are orthogonal to each other. The challenge to microwave imaging is to improve resolution and suppress strong interferences caused by the breast skin, nipple, and so forth. The MAMI methods we investigate herein utilize the data-adaptive robust Capon beamformer (RCB) to achieve high resolution and interference suppression. We will demonstrate the effectiveness of our proposed methods for breast cancer detection via numerical examples with data simulated using the finite-difference time-domain method based on a 3D realistic breast model.
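The data-adaptive beamforming core can be sketched with a standard Capon spectrum plus diagonal loading as a simple stand-in for the robust Capon beamformer (RCB); the 4-element half-wavelength linear array, the 20-degree source, and the loading level are illustrative, not the paper's UWB multistatic setup.

```python
import cmath, math

def csolve(A, b):
    # Gaussian elimination with partial pivoting for complex systems
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def steering(theta, n=4):
    # Uniform linear array with half-wavelength element spacing
    return [cmath.exp(1j * math.pi * k * math.sin(theta)) for k in range(n)]

def capon_power(R, theta):
    # Capon spectrum P(theta) = 1 / (a^H R^-1 a)
    a = steering(theta)
    Rinv_a = csolve(R, a)
    return 1.0 / sum(ai.conjugate() * ri for ai, ri in zip(a, Rinv_a)).real

# Covariance of one source at 20 degrees plus diagonal loading
# (the loading is the simple robustness device used in this sketch).
n_el, theta0 = 4, math.radians(20.0)
a0 = steering(theta0)
R = [[a0[i] * a0[j].conjugate() + (0.1 if i == j else 0.0)
     for j in range(n_el)] for i in range(n_el)]

grid = [math.radians(d) for d in range(-60, 61, 5)]
peak = max(grid, key=lambda th: capon_power(R, th))
```

The minimum-variance weights pass the look direction undistorted while suppressing returns from elsewhere, which is how skin and nipple clutter are attenuated in the imaging application.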
Patched based methods for adaptive mesh refinement solutions of partial differential equations
Saltzman, J.
1997-09-02
This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs, with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes; these schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundaries and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods, and elliptically constrained flow solution methods with diffusion.
Validation of an Adaptive Combustion Instability Control Method for Gas-Turbine Engines
NASA Technical Reports Server (NTRS)
Kopasakis, George; DeLaat, John C.; Chang, Clarence T.
2004-01-01
This paper describes ongoing testing of an adaptive control method to suppress high frequency thermo-acoustic instabilities like those found in the lean-burning, low-emission combustors being developed for future aircraft gas turbine engines. The method, called Adaptive Sliding Phasor Averaged Control, was previously tested in an experimental rig designed to simulate a combustor with an instability of about 530 Hz. Results published earlier, and briefly presented here, demonstrated that this method was effective in suppressing the instability. Because this test rig did not exhibit a well pronounced instability, a question remained regarding the effectiveness of the control methodology when applied to a more coherent instability. To answer this question, a modified combustor rig was assembled at the NASA Glenn Research Center in Cleveland, Ohio. The modified rig exhibited a more coherent, higher amplitude instability, but at a lower frequency of about 315 Hz. Test results show that this control method successfully reduced the instability pressure of the lower frequency test rig. In addition, due to a phenomenon discovered and reported earlier, so-called Intra-Harmonic Coupling, a dramatic suppression of the instability was achieved by focusing control on the second harmonic of the instability. These results and their implications are discussed, as well as a hypothesis describing the mechanism of intra-harmonic coupling.
Adaptation of the Conditions of US EPA Method 538 for the ...
The objective of this study was to evaluate U.S. EPA's Method 538 for the assessment of drinking water exposure to EA2192, the most toxic degradation product of the nerve agent VX. Because the sample preparation and analysis procedures of Method 538 are designed for nonvolatile chemicals, the method is applicable to the nonvolatile Chemical Warfare Agent (CWA) degradation product EA2192 in drinking water. The method may be applicable to other nonvolatile CWAs and their respective degradation products as well, but it will need extensive testing to verify compatibility. Gaps associated with the need for methods capable of analyzing such analytes were addressed by adapting Method 538 for this CWA degradation product. Many laboratories have the experience and capability to run this already rigorous method for nonvolatile compounds in drinking water; increasing the number of laboratories capable of carrying out such methods significantly increases the surge laboratory capacity for sample throughput during a large exposure event. The approach taken in this study was to start with a proven high performance liquid chromatography tandem mass spectrometry (HPLC/MS/MS) method for nonvolatile chemicals in drinking water and assess the inclusion of a similar nonvolatile chemical, EA2192.
A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method
Tuta, Jure; Juric, Matjaz B.
2016-01-01
This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments—some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive thus maintenance free and based on Wi-Fi only. We have employed two well-known propagation models—free space path loss and ITU models—which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2–3 and 3–4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements. PMID:27929453
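A minimal sketch of the model-based pipeline: convert RSSI to distance with a log-distance path-loss model (free space when the exponent is 2), then trilaterate from known AP positions. `P0`, the exponent, and the AP layout are invented, readings are noiseless, and the paper's ITU model, wall parameters, and self-calibration procedure are omitted.

```python
import math

P0, N_EXP = -40.0, 2.0      # assumed RSSI at 1 m and path-loss exponent

def rssi_from_distance(d):
    # Log-distance path-loss model
    return P0 - 10.0 * N_EXP * math.log10(d)

def distance_from_rssi(rssi):
    return 10.0 ** ((P0 - rssi) / (10.0 * N_EXP))

def trilaterate(aps, dists):
    # Linearize by subtracting the first range equation, solve the 2x2 system
    (x1, y1), d1 = aps[0], dists[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(aps[1:], dists[1:]):
        rows.append((2.0 * (xi - x1), 2.0 * (yi - y1)))
        rhs.append(d1 ** 2 - di ** 2 + xi ** 2 - x1 ** 2 + yi ** 2 - y1 ** 2)
    (a, b), (c, d) = rows
    det = a * d - b * c
    return ((rhs[0] * d - b * rhs[1]) / det,
            (a * rhs[1] - rhs[0] * c) / det)

aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
true_pos = (4.0, 3.0)
readings = [rssi_from_distance(math.dist(ap, true_pos)) for ap in aps]
est = trilaterate(aps, [distance_from_rssi(r) for r in readings])
```

In practice the model parameters drift with the environment, which is exactly what the paper's self-calibration step re-infers from live measurements.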
Comparative adaptation accuracy of acrylic denture bases evaluated by two different methods.
Lee, Chung-Jae; Bok, Sung-Bem; Bae, Ji-Young; Lee, Hae-Hyoung
2010-08-01
This study examined the adaptation accuracy of acrylic denture bases processed using fluid-resin (PERform), injection-molding (SR-Ivocap, Success, Mak Press), and two compression-molding techniques. The adaptation accuracy was measured primarily by the posterior border gaps at the mid-palatal area using a microscope, and subsequently by weighing the impression material trapped between the denture base and the master cast, using hand-mixed and automixed silicone. The correlation between the data measured using these two test methods was examined. The PERform and Mak Press produced significantly smaller maximum palatal gap dimensions than the other groups (p<0.05). Mak Press also showed a significantly smaller weight of automixed silicone material than the other groups (p<0.05), while SR-Ivocap and Success showed adaptation accuracy similar to the compression-molding dentures. The correlation between the magnitude of the posterior border gap and the weight of the silicone impression material was affected by the material and mixing variables.
Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J
2015-09-01
Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation to this technique. Aberrations induced by the optical setup, especially the mismatch of the refractive index and the biological sample itself, distort the point spread function and further reduce the amount of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module overcomes undesired anomalies by correcting for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples.
Ergün, Ayla; Barbieri, Riccardo; Eden, Uri T; Wilson, Matthew A; Brown, Emery N
2007-03-01
The stochastic state point process filter (SSPPF) and steepest descent point process filter (SDPPF) are adaptive filter algorithms for state estimation from point process observations that have been used to track neural receptive field plasticity and to decode the representations of biological signals in ensemble neural spiking activity. The SSPPF and SDPPF are constructed using, respectively, Gaussian and steepest descent approximations to the standard Bayes and Chapman-Kolmogorov (BCK) system of filter equations. To extend these approaches for constructing point process adaptive filters, we develop sequential Monte Carlo (SMC) approximations to the BCK equations in which the SSPPF and SDPPF serve as the proposal densities. We term the two new SMC point process filters SMC-PPF_S and SMC-PPF_D, respectively. We illustrate the new filter algorithms by decoding the wind stimulus magnitude from simulated neural spiking activity in the cricket cercal system. The SMC-PPF_S and SMC-PPF_D provide more accurate state estimates with a low number of particles than a conventional bootstrap SMC filter algorithm in which the state transition probability density is the proposal density. We also use the SMC-PPF_S algorithm to track the temporal evolution of a spatial receptive field of a rat hippocampal neuron recorded while the animal foraged in an open environment. Our results suggest an approach for constructing point process adaptive filters using SMC methods.
Zhou, Hui; Kunz, Thomas; Schwartz, Howard
2011-01-01
Traditional oscillators used in timing modules of CDMA and WiMAX base stations are large and expensive. Applying cheaper and smaller, albeit more inaccurate, oscillators in timing modules is an interesting research challenge. An adaptive control algorithm is presented to enhance the oscillators to meet the requirements of base stations during holdover mode. An oscillator frequency stability model is developed for the adaptive control algorithm. This model takes into account the control loop which creates the correction signal when the timing module is in locked mode. A recursive prediction error method is used to identify the system model parameters. Simulation results show that an oscillator enhanced by our adaptive control algorithm improves the oscillator performance significantly, compared with uncorrected oscillators. Our results also show the benefit of explicitly modeling the control loop. Finally, the cumulative time error upper bound of such enhanced oscillators is investigated analytically and comparison results between the analytical and simulated upper bound are provided. The results show that the analytical upper bound can serve as a practical guide for system designers.
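The system-identification step can be sketched with recursive least squares on a first-order model, playing the role the recursive prediction error method has above; the plant coefficients and excitation are illustrative, not an oscillator stability model.

```python
import random

# Identify y[k] = a*y[k-1] + b*u[k-1] with recursive least squares.
A_TRUE, B_TRUE = 0.95, 0.5
theta = [0.0, 0.0]                       # estimates of [a, b]
P = [[1000.0, 0.0], [0.0, 1000.0]]       # inverse information matrix
rng = random.Random(0)                   # seeded for reproducibility

y_prev = 0.0
for _ in range(500):
    u = rng.uniform(-1.0, 1.0)           # persistently exciting input
    y = A_TRUE * y_prev + B_TRUE * u
    phi = [y_prev, u]
    # Gain K = P*phi / (1 + phi' P phi)
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]
    # Parameter update from the prediction error
    err = y - (theta[0] * phi[0] + theta[1] * phi[1])
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    # Covariance update P <- P - K * (phi' P)
    phiP = [phi[0] * P[0][0] + phi[1] * P[1][0],
            phi[0] * P[0][1] + phi[1] * P[1][1]]
    P = [[P[i][j] - K[i] * phiP[j] for j in range(2)] for i in range(2)]
    y_prev = y
```

Once the model parameters are identified, the enhanced correction signal can be synthesized from the model during holdover, which is the paper's use of the identified loop model.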
Application of a self-adaptive grid method to complex flows
NASA Technical Reports Server (NTRS)
Deiwert, G. S.; Venkatapathy, E.; Davies, C.; Djomehri, J.; Abrahamson, K.
1989-01-01
A directional-split, modular, user-friendly grid point distribution code is applied to several test problems. The code is self-adaptive in the sense that grid point spacing is determined by user-specified constants denoting maximum and minimum grid spacings and constants relating the relative influence of smoothness and orthogonality. Estimates of truncation error, in terms of flow-field gradients and/or geometric features, are used to determine the point distribution. Points are redistributed along grid lines in a specified direction in an elliptic manner over a user-specified subdomain, while orthogonality and smoothness are controlled in a parabolic (marching) manner in the remaining directions. Multidirectional adaption is achieved by sequential application of the method in each coordinate direction. The flow-field solution is redistributed onto the newly distributed grid points after each unidirectional adaption by a simple one-dimensional interpolation scheme. For time-accurate schemes such interpolation is not necessary and time-dependent metrics are carried in the fluid dynamic equations to account for grid movement.
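The 1-D redistribution step (points drawn toward high-gradient regions, then the solution re-interpolated) can be sketched by equidistributing a gradient-based weight; the weight w = 1 + alpha*|f'| and the tanh test profile are illustrative stand-ins for the code's truncation-error estimates and smoothness/orthogonality controls.

```python
import math

def adapt_grid(f, npts=41, alpha=20.0, nfine=2001):
    """Redistribute grid points on [0,1] by equidistributing the
    weight w = 1 + alpha*|f'|, evaluated on a fine background grid."""
    xs = [k / (nfine - 1.0) for k in range(nfine)]
    fs = [f(x) for x in xs]
    # Cumulative integral of the weight (trapezoid-style sum)
    W = [0.0]
    for i in range(1, nfine):
        df = abs(fs[i] - fs[i - 1]) / (xs[i] - xs[i - 1])
        W.append(W[-1] + (1.0 + alpha * df) * (xs[i] - xs[i - 1]))
    # Invert W by linear interpolation at equal weight increments
    new_xs, j = [], 0
    for k in range(npts):
        target = W[-1] * (k / (npts - 1.0))
        while W[j + 1] < target:
            j += 1
        frac = (target - W[j]) / (W[j + 1] - W[j])
        new_xs.append(xs[j] + frac * (xs[j + 1] - xs[j]))
    return new_xs

f = lambda x: math.tanh(20.0 * (x - 0.5))    # steep layer at x = 0.5
grid = adapt_grid(f)
in_layer = sum(1 for x in grid if 0.4 <= x <= 0.6)
```

A uniform 41-point grid would put only about 8 points in the layer region; the adapted grid concentrates most of its points there, which is the intended clustering behaviour.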
Adaptive control system having hedge unit and related apparatus and methods
NASA Technical Reports Server (NTRS)
Johnson, Eric Norman (Inventor); Calise, Anthony J. (Inventor)
2003-01-01
The invention includes an adaptive control system used to control a plant. The adaptive control system includes a hedge unit that receives at least one control signal and a plant state signal. The hedge unit generates a hedge signal based on the control signal, the plant state signal, and a hedge model including a first model having one or more characteristics to which the adaptive control system is not to adapt, and a second model not having the characteristic(s) to which the adaptive control system is not to adapt. The hedge signal is used in the adaptive control system to remove the effect of the characteristic from a signal supplied to an adaptation law unit of the adaptive control system so that the adaptive control system does not adapt to the characteristic in controlling the plant.
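The hedging idea can be illustrated with a scalar toy problem in which actuator saturation is the "characteristic" the controller should not adapt to. The plant, gains, and integral adaptation law below are made-up minimal choices, not the patent's architecture:

```python
def simulate(hedging, steps=400, dt=0.02):
    """Toy first-order plant xdot = u_actual with a reference model xdot_rm = nu.

    The hedge signal is the commanded control minus what the saturated
    actuator delivered; subtracting it from the reference model keeps the
    model-tracking error free of the saturation characteristic, so the
    adaptive (integral) element does not wind up against it.
    """
    x = x_rm = z = 0.0
    e_max = 0.0
    r = 1.5                                   # step command
    for _ in range(steps):
        nu = 4.0 * (r - x_rm)                 # desired (pseudo) control
        u_cmd = nu + z                        # command incl. adaptive correction
        u_act = max(-1.0, min(1.0, u_cmd))    # actuator saturation: the characteristic
        nu_hedge = u_cmd - u_act              # hedge: commanded minus achievable
        x += dt * u_act                       # plant driven by achievable control
        x_rm += dt * (nu - nu_hedge if hedging else nu)
        z += -dt * 0.5 * (x - x_rm)           # adaptation on model-tracking error
        e_max = max(e_max, abs(x - x_rm))
    return x, e_max

x_h, e_h = simulate(hedging=True)
x_n, e_n = simulate(hedging=False)
```

With hedging, the reference model never demands what the actuator cannot deliver, so the model-tracking error driving adaptation stays at zero; without it, the model runs far ahead of the saturated plant and the adaptation law sees a large spurious error.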
Adaptive control system having hedge unit and related apparatus and methods
NASA Technical Reports Server (NTRS)
Johnson, Eric Norman (Inventor); Calise, Anthony J. (Inventor)
2007-01-01
The invention includes an adaptive control system used to control a plant. The adaptive control system includes a hedge unit that receives at least one control signal and a plant state signal. The hedge unit generates a hedge signal based on the control signal, the plant state signal, and a hedge model including a first model having one or more characteristics to which the adaptive control system is not to adapt, and a second model not having the characteristic(s) to which the adaptive control system is not to adapt. The hedge signal is used in the adaptive control system to remove the effect of the characteristic from a signal supplied to an adaptation law unit of the adaptive control system so that the adaptive control system does not adapt to the characteristic in controlling the plant.
1983-03-01
AN ANALYSIS OF A FINITE ELEMENT METHOD FOR CONVECTION- DIFFUSION PROBLEMS PART II: A POSTERIORI ERROR ESTIMATES AND ADAPTIVITY by W. G. Szymczak Y 6a...PERIOD COVERED AN ANALYSIS OF A FINITE ELEMENT METHOD FOR final life of the contract CONVECTION- DIFFUSION PROBLEM S. Part II: A POSTERIORI ERROR ...Element Method for Convection- Diffusion Problems. Part II: A Posteriori Error Estimates and Adaptivity W. G. Szvmczak and I. Babu~ka# Laboratory for
Adaptive Filtering Methods for Identifying Cross-Frequency Couplings in Human EEG
Van Zaen, Jérôme; Murray, Micah M.; Meuli, Reto A.; Vesin, Jean-Marc
2013-01-01
Oscillations have been increasingly recognized as a core property of neural responses that contribute to spontaneous, induced, and evoked activities within and between individual neurons and neural ensembles. They are considered as a prominent mechanism for information processing within and communication between brain areas. More recently, it has been proposed that interactions between periodic components at different frequencies, known as cross-frequency couplings, may support the integration of neuronal oscillations at different temporal and spatial scales. The present study details methods based on an adaptive frequency tracking approach that improve the quantification and statistical analysis of oscillatory components and cross-frequency couplings. This approach allows for time-varying instantaneous frequency, which is particularly important when measuring phase interactions between components. We compared this adaptive approach to traditional band-pass filters in their measurement of phase-amplitude and phase-phase cross-frequency couplings. Evaluations were performed with synthetic signals and EEG data recorded from healthy humans performing an illusory contour discrimination task. First, the synthetic signals in conjunction with Monte Carlo simulations highlighted two desirable features of the proposed algorithm vs. classical filter-bank approaches: resilience to broad-band noise and oscillatory interference. Second, the analyses with real EEG signals revealed statistically more robust effects (i.e. improved sensitivity) when using an adaptive frequency tracking framework, particularly when identifying phase-amplitude couplings. This was further confirmed after generating surrogate signals from the real EEG data. Adaptive frequency tracking appears to improve the measurements of cross-frequency couplings through precise extraction of neuronal oscillations. PMID:23560098
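The classical band-pass baseline that the adaptive frequency tracking is compared against can be sketched as a mean-vector-length phase-amplitude coupling estimate. The synthetic 6 Hz/60 Hz coupling, filter bands, and noise level are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def pac_mvl(sig, fs, f_phase, f_amp):
    """Mean-vector-length phase-amplitude coupling with fixed band-pass
    filters (the classical approach; the paper replaces these filters with
    adaptive frequency tracking)."""
    def band(x, lo, hi):
        sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
        return sosfiltfilt(sos, x)
    phase = np.angle(hilbert(band(sig, *f_phase)))   # low-frequency phase
    amp = np.abs(hilbert(band(sig, *f_amp)))         # high-frequency envelope
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 500.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(1)
theta = np.sin(2 * np.pi * 6 * t)                    # 6 Hz phase driver
pac_sig = theta + 0.5 * (1 + theta) * np.sin(2 * np.pi * 60 * t) \
    + 0.1 * rng.standard_normal(t.size)              # 60 Hz amplitude locked to theta
no_pac = theta + 0.5 * np.sin(2 * np.pi * 60 * t) \
    + 0.1 * rng.standard_normal(t.size)              # constant 60 Hz amplitude
mvl_c = pac_mvl(pac_sig, fs, (4, 8), (40, 80))
mvl_u = pac_mvl(no_pac, fs, (4, 8), (40, 80))
```

The coupled signal yields a much larger coupling value than the uncoupled control; the adaptive tracker in the paper improves on this baseline when the component frequencies drift over time.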
Adapting and Evaluating a Rapid, Low-Cost Method to Enumerate Flies in the Household Setting.
Wolfe, Marlene K; Dentz, Holly N; Achando, Beryl; Mureithi, MaryAnne; Wolfe, Tim; Null, Clair; Pickering, Amy J
2017-02-08
Diarrhea is a leading cause of death among children under 5 years of age worldwide. Flies are important vectors of diarrheal pathogens in settings lacking networked sanitation services. There is no standardized method for measuring fly density in households; many methods are cumbersome and unvalidated. We adapted a rapid, low-cost fly enumeration technique previously developed for industrial settings, the Scudder fly grill, for field use in household settings. We evaluated its performance in comparison to a sticky tape fly trapping method at latrine and food preparation areas among households in rural Kenya. The grill method was more sensitive; it detected the presence of any flies at 80% (433/543) of sampling locations versus 64% (348/543) of locations by the sticky tape. We found poor concordance between the two methods, suggesting that standardizing protocols is important for comparison of fly densities between studies. Fly species identification was feasible with both methods; however, the sticky tape trap allowed for more nuanced identification. Both methods detected a greater presence of bottle flies near latrines compared with food preparation areas (P < 0.01). The grill method detected more flies at the food preparation area compared with near the latrine (P = 0.014) while the sticky tape method detected no difference. We recommend the Scudder grill as a sensitive fly enumeration tool that is rapid and low cost to implement.
Data-adapted moving least squares method for 3-D image interpolation
NASA Astrophysics Data System (ADS)
Jang, Sumi; Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Lee, Rena; Yoon, Jungho
2013-12-01
In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown and then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons.
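The core moving least squares step can be sketched in one dimension: at each evaluation point, fit a local polynomial by weighted least squares and read off its value. The Gaussian weight and its scale are illustrative; the paper's data-adapted variant would tune these weights to local image structure:

```python
import numpy as np

def mls_eval(x_eval, x_data, f_data, h=0.15, degree=2):
    """Moving least squares: at each evaluation point, fit a local polynomial
    by weighted least squares (Gaussian weights of scale h) and evaluate it.
    Reproduces polynomials up to `degree` exactly."""
    out = np.empty_like(x_eval)
    for i, xe in enumerate(x_eval):
        w = np.exp(-((x_data - xe) / h) ** 2)            # locality weights
        # Vandermonde in (x - xe): the value at xe is just coeff[0]
        V = np.vander(x_data - xe, degree + 1, increasing=True)
        A = V.T * w @ V                                  # weighted normal equations
        b = V.T @ (w * f_data)
        coeff = np.linalg.solve(A, b)
        out[i] = coeff[0]
    return out

x = np.linspace(0.0, 1.0, 21)
f = x**2                         # a quadratic: degree-2 MLS reproduces it exactly
xe = np.linspace(0.1, 0.9, 17)
approx = mls_eval(xe, x, f)
```

The polynomial-reproduction property shown here is exactly what the abstract refers to as finding "the local best approximation by reproducing polynomials of a certain degree".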
System and method for adaptively deskewing parallel data signals relative to a clock
Jenkins, Philip Nord; Cornett, Frank N.
2006-04-18
A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. Each of the plurality of delayed signals is compared to a reference signal to detect changes in the skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in the detected skew.
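The skew-detection step can be illustrated by locating the peak of a cross-correlation between the received clock and a data signal; the patent's hardware would then insert that many delay taps. The waveform and delay below are illustrative:

```python
import numpy as np

def detect_skew(clock, signal):
    """Estimate skew (in samples) between a reference clock waveform and a
    data signal from the peak of their circular cross-correlation."""
    xc = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(clock))).real
    return int(np.argmax(xc))

rng = np.random.default_rng(4)
clock = rng.choice([-1.0, 1.0], size=256)   # aperiodic reference pattern
delayed = np.roll(clock, 5)                 # data signal lags by 5 samples
skew = detect_skew(clock, delayed)
```

Repeating the comparison against the reference signal over time, as the patent describes, lets the added delay track slow changes in the skew.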
Adaptive Forward Modeling Method for Analysis and Reconstructions of Orientation Image Map
Frankie Li, Shiu Fai
2014-06-01
IceNine is an MPI-parallel orientation reconstruction and microstructure analysis code. Its primary purpose is to reconstruct a spatially resolved orientation map given a set of diffraction images from a high energy x-ray diffraction microscopy (HEDM) experiment (1). In particular, IceNine implements the adaptive version of the forward modeling method (2, 3). Part of IceNine is a library used for combined analysis of the microstructure with the experimentally measured diffraction signal. The library is also designed for rapid prototyping of new reconstruction and analysis algorithms. IceNine also includes a simulator that generates diffraction images from an input microstructure.
Adaptive Low Dissipative High Order Filter Methods for Multiscale MHD Flows
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern
2004-01-01
Adaptive low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way to minimize the numerical error in the divergence of the magnetic field, in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, divergence-free preservation of the magnetic fields has been achieved with these filter schemes.
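The action of a dissipative high-order filter can be sketched with an 8th-difference filter on a periodic grid; the constant filter strength below is illustrative, whereas the adaptive schemes above set it locally from a flow sensor:

```python
import numpy as np

def high_order_filter(u, sigma=1.0 / 256.0):
    """One pass of a dissipative 8th-difference low-pass filter (periodic grid).

    The filter's Fourier symbol is 1 - 256*sigma*sin^8(k*h/2): grid-scale
    oscillations are removed while well-resolved waves are barely touched.
    """
    d = u.copy()
    for _ in range(4):                 # (2nd difference)^4 = 8th difference
        d = np.roll(d, -1) - 2.0 * d + np.roll(d, 1)
    return u - sigma * d

n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
smooth = np.sin(x)                                   # well-resolved wave
noisy = smooth + 0.2 * (-1.0) ** np.arange(n)        # plus grid-scale oscillation
filtered = high_order_filter(noisy)
```

With sigma = 1/256 the odd-even mode is annihilated in a single pass, while the resolved sine is damped by only about 1e-13, which is the low-dissipation property the abstract emphasizes.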
Overcoming the Curse of Dimension: Methods Based on Sparse Representation and Adaptive Sampling
2011-02-28
carried out mainly by him, together with our joint post-doc Haijun Yu. Please refer to his report for the progress made in this direction.
Adaptive Wavelet Galerkin Methods on Distorted Domains: Setup of the Algebraic System
2000-01-01
The first author is extremely grateful to the Dipartimento di Matematica of the Politecnico di Torino.
FALCON: A method for flexible adaptation of local coordinates of nuclei
NASA Astrophysics Data System (ADS)
König, Carolin; Hansen, Mads Bøttger; Godtliebsen, Ian H.; Christiansen, Ove
2016-02-01
We present a flexible scheme for calculating vibrational rectilinear coordinates with well-defined strict locality on a certain set of atoms. Introducing a method for Flexible Adaption of Local COordinates of Nuclei (FALCON) we show how vibrational subspaces can be "grown" in an adaptive manner. Subspace Hessian matrices are set up and used to calculate and analyze vibrational modes and frequencies. FALCON coordinates can more generally be used to construct vibrational coordinates for describing local and (semi-local) interacting modes with desired features. For instance, spatially local vibrations can be approximately described as internal motion within only a group of atoms and delocalized modes can be approximately expressed as relative motions of rigid groups of atoms. The FALCON method can support efficiency in the calculation and analysis of vibrational coordinates and energies in the context of harmonic and anharmonic calculations. The features of this method are demonstrated on a few small molecules, i.e., formylglycine, coumarin, and dimethylether as well as for the amide-I band and low-frequency modes of alanine oligomers and alpha conotoxin.
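The subspace-Hessian idea can be illustrated on a toy system: restrict the Hessian to a chosen set of coordinates and diagonalize it to obtain strictly local modes. The 1-D three-mass chain below is a made-up minimal example, not a FALCON calculation:

```python
import numpy as np

def subspace_modes(H, idx):
    """Diagonalize the Hessian restricted to a chosen subset of coordinates,
    giving modes that are strictly local to those coordinates."""
    Hs = H[np.ix_(idx, idx)]
    return np.linalg.eigh(Hs)      # eigenvalues are squared frequencies

# Toy 1-D chain of 3 unit masses coupled by unit springs (mass-weighted Hessian)
H = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
w2_full, _ = np.linalg.eigh(H)            # full modes: 0 (translation), 1, 3
w2_loc, _ = subspace_modes(H, [0, 1])     # modes "grown" on atoms 0 and 1 only
```

Growing the subspace, as in the FALCON scheme, amounts to enlarging `idx` adaptively until the local modes are converged with respect to the surrounding atoms.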
NASA Astrophysics Data System (ADS)
Danaila, Ionut; Moglan, Raluca; Hecht, Frédéric; Le Masson, Stéphane
2014-10-01
We present a new numerical system using finite elements with mesh adaptivity for the simulation of solid-liquid phase change systems. In the liquid phase, the natural convection flow is simulated by solving the incompressible Navier-Stokes equations with the Boussinesq approximation. A variable viscosity model allows the velocity to progressively vanish in the solid phase, through an intermediate mushy region. The phase change is modeled by introducing an implicit enthalpy source term in the heat equation. The final system of equations describing the liquid-solid system by a single domain approach is solved using a Newton iterative algorithm. The space discretization is based on P2-P1 Taylor-Hood finite elements, and mesh adaptivity by metric control is used to accurately track the solid-liquid interface or the density inversion interface for water flows. The numerical method is validated against classical benchmarks that progressively add strong non-linearities in the system of equations: natural convection of air, natural convection of water, melting of a phase-change material and water freezing. Very good agreement with experimental data is obtained for each test case, proving the capability of the method to deal with both melting and solidification problems with convection. The presented numerical method is easy to implement in the FreeFem++ software, with a syntax close to the mathematical formulation.
Wagner, Roland; Helin, Tapio; Obereder, Andreas; Ramlau, Ronny
2016-02-20
The imaging quality of modern ground-based telescopes such as the planned European Extremely Large Telescope is affected by atmospheric turbulence. In consequence, they heavily depend on stable and high-performance adaptive optics (AO) systems. Using measurements of incoming light from guide stars, an AO system compensates for the effects of turbulence by adjusting so-called deformable mirror(s) (DMs) in real time. In this paper, we introduce a novel reconstruction method for ground layer adaptive optics. In the literature, a common approach to this problem is to use Bayesian inference in order to model the specific noise structure appearing due to spot elongation. This approach leads to large coupled systems with high computational effort. Recently, fast solvers of linear order, i.e., with computational complexity O(n), where n is the number of DM actuators, have emerged. However, the quality of such methods typically degrades in low flux conditions. Our key contribution is to achieve the high quality of the standard Bayesian approach while at the same time maintaining the linear order speed of the recent solvers. Our method is based on performing a separate preprocessing step before applying the cumulative reconstructor (CuReD). The efficiency and performance of the new reconstructor are demonstrated using the OCTOPUS, the official end-to-end simulation environment of the ESO for extremely large telescopes. For more specific simulations we also use the MOST toolbox.
NASA Technical Reports Server (NTRS)
Kim, Hyoungin; Liou, Meng-Sing
2011-01-01
In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage between the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which is in a form similar to the conventional re-initialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.
Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding
NASA Astrophysics Data System (ADS)
Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz
1997-10-01
An efficient image compression technique, especially for medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor bank filters is followed by adaptive space-frequency quantization and zerotree-based entropy coding of the wavelet coefficients. Threshold selection and uniform quantization are made on the basis of a spatial variance estimate built on the lowest-frequency subband data set. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning and arithmetic coding are applied for efficient lossless data coding. The presented compression method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency. Specifically, our method has efficiency similar to SPIHT in MR image compression, slightly better for CT images and significantly better in US image compression. Thus the compression efficiency of the presented method is competitive with the best published algorithms in the literature across diverse classes of medical images.
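The decompose-threshold-reconstruct core of such a coder can be sketched with a Haar transform and a single hard threshold; the Antonini/Villasenor filter bank, context-based adaptive thresholds, and zerotree coding of the paper are replaced here by deliberately simple stand-ins:

```python
import numpy as np

def haar_step(a):
    s = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # scaling (low-pass) coefficients
    d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail (high-pass) coefficients
    return s, d

def haar_inv(s, d):
    a = np.empty(2 * s.size)
    a[0::2] = (s + d) / np.sqrt(2.0)
    a[1::2] = (s - d) / np.sqrt(2.0)
    return a

def compress(x, thresh):
    """Full Haar decomposition, hard thresholding of detail coefficients,
    then reconstruction. A real coder would entropy-code the surviving
    coefficients (e.g. with zerotrees) instead of reconstructing directly."""
    s = x.copy()
    details = []
    while s.size > 1:
        s, d = haar_step(s)
        details.append(np.where(np.abs(d) >= thresh, d, 0.0))  # hard threshold
    for d in reversed(details):
        s = haar_inv(s, d)
    return s

x = np.repeat([1.0, 4.0, 2.0, 2.0], 64)          # piecewise-constant signal
noisy = x + 0.01 * np.sin(np.arange(256))        # small oscillatory perturbation
rec = compress(noisy, thresh=0.1)
```

With a zero threshold the transform reconstructs perfectly; with a positive threshold the small perturbation coefficients are dropped while the large edge coefficients survive, so the reconstruction is no farther from the clean signal than the input was.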
NASA Technical Reports Server (NTRS)
Demkowicz, L.; Oden, J. T.; Rachowicz, W.
1990-01-01
A new finite element method solving compressible Navier-Stokes equations is proposed. The method is based on a version of Strang's operator splitting and an h-p adaptive finite element approximation in space. This paper contains the formulation of the method with a detailed discussion of boundary conditions, a sample adaptive strategy and numerical examples involving compressible viscous flow over a flat plate with Reynolds number Re = 1000 and Re = 10,000.
Rodrigues, Daniele Bobrowski; Mariutti, Lilian Regina Barros; Mercadante, Adriana Zerlotti
2016-12-07
In vitro digestion methods are a useful approach to predict the bioaccessibility of food components and overcome some limitations or disadvantages associated with in vivo methodologies. Recently, the INFOGEST network published a static method of in vitro digestion with a proposal for assay standardization. The INFOGEST method is not specific for any food component; therefore, we aimed to adapt this method to assess the in vitro bioaccessibility of carotenoids and carotenoid esters in a model fruit (Byrsonima crassifolia). Two additional steps were coupled to the in vitro digestion procedure, centrifugation at 20 000g for the separation of the aqueous phase containing mixed micelles and exhaustive carotenoid extraction with an organic solvent. The effect of electrolytes, enzymes and bile acids on carotenoid micellarization and stability was also tested. The results were compared with those found with a simpler method that has already been used for carotenoid bioaccessibility analysis. These values were in the expected range for free carotenoids (5-29%), monoesters (9-26%) and diesters (4-28%). In general, the in vitro bioaccessibility of carotenoids assessed by the adapted INFOGEST method was significantly higher (p < 0.05) than those assessed by the simplest protocol, with or without the addition of simulated fluids. Although no trend was observed, differences in bioaccessibility values depended on the carotenoid form (free, monoester or diester), isomerization (Z/E) and the in vitro digestion protocol. To the best of our knowledge, it was the first time that a systematic identification of carotenoid esters by HPLC-DAD-MS/MS after in vitro digestion using the INFOGEST protocol was carried out.
An adaptive distance-based group contribution method for thermodynamic property prediction.
He, Tanjin; Li, Shuang; Chi, Yawei; Zhang, Hong-Bo; Wang, Zhi; Yang, Bin; He, Xin; You, Xiaoqing
2016-09-14
In the search for an accurate yet inexpensive method to predict thermodynamic properties of large hydrocarbon molecules, we have developed an automatic and adaptive distance-based group contribution (DBGC) method. The method characterizes the group interaction within a molecule with an exponential decay function of the group-to-group distance, defined as the number of bonds between the groups. A database containing the molecular bonding information and the standard enthalpy of formation (Hf,298K) for alkanes, alkenes, and their radicals at the M06-2X/def2-TZVP//B3LYP/6-31G(d) level of theory was constructed. Multiple linear regression (MLR) and artificial neural network (ANN) fitting were used to obtain the contributions from individual groups and group interactions for further predictions. Compared with the conventional group additivity (GA) method, the DBGC method predicts Hf,298K for alkanes more accurately using the same training sets. Particularly for some highly branched large hydrocarbons, the discrepancy with the literature data is smaller for the DBGC method than the conventional GA method. When extended to other molecular classes, including alkenes and radicals, the overall accuracy level of this new method is still satisfactory.
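The structure of the DBGC prediction can be sketched as group terms plus pairwise interactions decaying exponentially with the bond-count distance, as defined above. The group values and interaction strengths below are made-up illustration numbers, not the paper's fitted parameters:

```python
import numpy as np

def dbgc_enthalpy(groups, dist, contrib, interact, alpha=1.0):
    """Distance-based group contribution: sum of group terms plus pairwise
    interaction terms decaying as exp(-alpha * d), where d is the number of
    bonds between the two groups (the paper's distance definition)."""
    h = sum(contrib[g] for g in groups)
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            pair = frozenset([groups[i], groups[j]])
            h += interact[pair] * np.exp(-alpha * dist[i][j])
    return h

# Toy n-butane: CH3-CH2-CH2-CH3, with a bond-count distance matrix
groups = ["CH3", "CH2", "CH2", "CH3"]
dist = [[0, 1, 2, 3],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [3, 2, 1, 0]]
contrib = {"CH3": -42.7, "CH2": -20.6}            # illustrative kJ/mol values
interact = {frozenset(["CH3", "CH2"]): 1.5,       # made-up interaction strengths
            frozenset(["CH3"]): 2.0,
            frozenset(["CH2"]): 1.0}
hf = dbgc_enthalpy(groups, dist, contrib, interact)
```

The conventional GA method corresponds to dropping the interaction sum entirely; the paper fits the group and interaction parameters by multiple linear regression or an artificial neural network.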
NASA Astrophysics Data System (ADS)
Coleman, S.; Hurley, S.; Koliba, C.; Zia, A.; Exler, S.
2014-12-01
Eutrophication and nutrient pollution of surface waters occur within complex governance, social, hydrologic and biophysical basin contexts. The pervasive and perennial nutrient pollution in Lake Champlain Basin, despite decades of efforts, exemplifies problems found across the world's surface waters. Stakeholders with diverse values, interests, and forms of explicit and tacit knowledge determine water quality impacts through land use, agricultural and water resource decisions. Uncertainty, ambiguity and dynamic feedback further complicate the ability to promote the continual provision of water quality and ecosystem services. Adaptive management of water resources and land use requires mechanisms to allow for learning and integration of new information over time. The transdisciplinary Research on Adaptation to Climate Change (RACC) team is working to build regional adaptive capacity in Lake Champlain Basin while studying and integrating governance, land use, hydrological, and biophysical systems to evaluate implications for adaptive management. The RACC team has engaged stakeholders through mediated modeling workshops, online forums, surveys, focus groups and interviews. In March 2014, CSS2CC.org, an interactive online forum to source and identify adaptive interventions from a group of stakeholders across sectors, was launched. The forum, based on the Delphi Method, brings forward the collective wisdom of stakeholders and experts to identify potential interventions and governance designs in response to scientific uncertainty and ambiguity surrounding the effectiveness of any strategy, climate change impacts, and the social and natural systems governing water quality and eutrophication. A Mediated Modeling Workshop followed the forum in May 2014, where participants refined and identified plausible interventions under different governance, policy and resource scenarios. Results from the online forum and workshop can identify emerging consensus across scales and sectors.
The adaptive EVP method for solving the sea ice momentum equation
NASA Astrophysics Data System (ADS)
Kimmritz, Madlen; Danilov, Sergey; Losch, Martin
2016-04-01
Most dynamic sea ice models for climate-type simulations are based on the viscous-plastic (VP) rheology. The resulting stiff system of partial differential equations for the sea ice velocity is either solved implicitly at great computational cost, or explicitly with added pseudo-elasticity (elastic-viscous-plastic, EVP). Bouillon et al. (Ocean Modell., 2013) reinterpreted the EVP method for solving the sea ice momentum equation as an iterative pseudotime VP solver with improved convergence properties. In Kimmritz et al. (J. Comput. Physics, 2015) we showed that this modified EVP (mEVP) scheme should warrant converging solutions if its stability is maintained and the number of pseudotime iterations is sufficiently high. Here, we focus on the role of spatial discretizations. We analyze stability and convergence of mEVP on B- and C-grids. We show that the implementation on B-grids is less restrictive with respect to stability constraints than on C-grids. Additionally, convergence on C-grids is sensitive to the discretization of the viscosities and can be lost for some variants of discretization. Building on these findings we present an adaptive version of the mEVP scheme, which satisfies local stability constraints and aims to accelerate convergence where possible. This is achieved by local adaptation of the parameters governing the pseudotime subcycling of the scheme. We analyze the performance of this new "adaptive EVP" approach in a series of experiments with the sea ice component of the general circulation model MITgcm, which is formulated on a C-grid. We show that convergence in realistic settings is sensitive to the details of the implementation of the rheology. In particular, the use of the pressure replacement method deteriorates convergence.
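The skeleton of pseudotime subcycling with locally adapted parameters can be illustrated on a linear toy problem: a Richardson iteration whose relaxation parameter beta is chosen per grid point from a stability bound. The matrix and the row-sum bound below are illustrative stand-ins for the sea ice momentum operator and the scheme's stability constraints:

```python
import numpy as np

def pseudotime_solve(A, b, beta, iters=300):
    """Richardson pseudotime iteration v <- v + (b - A v) / beta: the skeleton
    of EVP-style subcycling, with beta playing the role of the parameters
    governing the pseudotime subcycling."""
    v = np.zeros_like(b)
    for _ in range(iters):
        v = v + (b - A @ v) / beta
    return v

n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D diffusion stencil
A += np.diag(np.linspace(1.0, 3.0, n))                   # spatially varying coefficient
b = np.ones(n)
beta_local = np.abs(A).sum(axis=1)   # local, stability-based choice of beta
v = pseudotime_solve(A, b, beta_local)
```

Choosing beta from a local bound keeps every point stable while letting well-conditioned regions converge faster than a single conservative global beta would allow, which is the essence of the adaptive EVP idea.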
The Adaptive Biasing Force Method: Everything You Always Wanted To Know but Were Afraid To Ask
2014-01-01
In the host of numerical schemes devised to calculate free energy differences by way of geometric transformations, the adaptive biasing force algorithm has emerged as a promising route to map complex free-energy landscapes. It relies upon the simple concept that as a simulation progresses, a continuously updated biasing force is added to the equations of motion, such that in the long-time limit it yields a Hamiltonian devoid of an average force acting along the transition coordinate of interest. This means that sampling proceeds uniformly on a flat free-energy surface, thus providing reliable free-energy estimates. Much of the appeal of the algorithm to the practitioner is in its physically intuitive underlying ideas and the absence of any requirements for prior knowledge about free-energy landscapes. Since its inception in 2001, the adaptive biasing force scheme has been the subject of considerable attention, from in-depth mathematical analysis of convergence properties to novel developments and extensions. The method has also been successfully applied to many challenging problems in chemistry and biology. In this contribution, the method is presented in a comprehensive, self-contained fashion, discussing with a critical eye its properties, applicability, and inherent limitations, as well as introducing novel extensions. Through free-energy calculations of prototypical molecular systems, many methodological aspects are examined, from stratification strategies to overcoming the so-called hidden barriers in orthogonal space, relevant not only to the adaptive biasing force algorithm but also to other importance-sampling schemes. On the basis of the discussions in this paper, a number of good practices for improving the efficiency and reliability of the computed free-energy differences are proposed. PMID:25247823
The adaptive biasing force method: everything you always wanted to know but were afraid to ask.
Comer, Jeffrey; Gumbart, James C; Hénin, Jérôme; Lelièvre, Tony; Pohorille, Andrew; Chipot, Christophe
2015-01-22
In the host of numerical schemes devised to calculate free energy differences by way of geometric transformations, the adaptive biasing force algorithm has emerged as a promising route to map complex free-energy landscapes. It relies upon the simple concept that as a simulation progresses, a continuously updated biasing force is added to the equations of motion, such that in the long-time limit it yields a Hamiltonian devoid of an average force acting along the transition coordinate of interest. This means that sampling proceeds uniformly on a flat free-energy surface, thus providing reliable free-energy estimates. Much of the appeal of the algorithm to the practitioner is in its physically intuitive underlying ideas and the absence of any requirements for prior knowledge about free-energy landscapes. Since its inception in 2001, the adaptive biasing force scheme has been the subject of considerable attention, from in-depth mathematical analysis of convergence properties to novel developments and extensions. The method has also been successfully applied to many challenging problems in chemistry and biology. In this contribution, the method is presented in a comprehensive, self-contained fashion, discussing with a critical eye its properties, applicability, and inherent limitations, as well as introducing novel extensions. Through free-energy calculations of prototypical molecular systems, many methodological aspects are examined, from stratification strategies to overcoming the so-called hidden barriers in orthogonal space, relevant not only to the adaptive biasing force algorithm but also to other importance-sampling schemes. On the basis of the discussions in this paper, a number of good practices for improving the efficiency and reliability of the computed free-energy differences are proposed.
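The core of the algorithm can be sketched on a 1-D double well with overdamped Langevin dynamics: accumulate the running mean force in bins along the transition coordinate and apply its negative as the bias. The potential, bin layout, and reflecting walls are illustrative toy choices omitting the extended-system and stratification machinery discussed above:

```python
import numpy as np

def abf_sample(steps=200000, dt=1e-3, kT=1.0, nbins=40, seed=2):
    """Overdamped Langevin sampling of U(x) = 4*(x^2-1)^2 (barrier ~4 kT)
    with an adaptive biasing force: the running mean force in each bin is
    subtracted, flattening the free-energy surface along x."""
    rng = np.random.default_rng(seed)
    grad = lambda x: 16.0 * x**3 - 16.0 * x
    edges = np.linspace(-1.6, 1.6, nbins + 1)
    width = edges[1] - edges[0]
    f_sum = np.zeros(nbins)
    count = np.zeros(nbins)
    visits = np.zeros(nbins)
    x = -1.0
    noise = np.sqrt(2.0 * kT * dt)
    for _ in range(steps):
        b = min(max(int((x - edges[0]) / width), 0), nbins - 1)
        f = -grad(x)                     # instantaneous force along x
        f_sum[b] += f
        count[b] += 1.0
        bias = -f_sum[b] / count[b]      # oppose the current mean-force estimate
        x += dt * (f + bias) + noise * rng.standard_normal()
        x = min(max(x, -1.59), 1.59)     # crude reflecting walls
        visits[b] += 1.0
    mean_force = f_sum / np.maximum(count, 1.0)
    pmf = -np.cumsum(mean_force) * width # free-energy profile up to a constant
    return visits, pmf

visits, pmf = abf_sample()
```

As the bias converges, sampling spreads over both wells, and integrating the negative of the accumulated mean force recovers the free-energy profile, here with its roughly 4 kT barrier.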
Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P
2015-03-01
Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method, which smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical.
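The leveling step described here, a robust trend fit that automatically downweights features and outliers, can be illustrated with an iteratively reweighted least-squares plane fit. This is a sketch of the general idea only: the paper uses local regression rather than a global plane, and the Tukey biweight below is our assumption:

```python
import numpy as np

def robust_level(img, iters=10):
    """Remove a planar trend, downweighting outlier pixels (IRLS sketch)."""
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(img.size)])
    z = img.ravel().astype(float)
    w = np.ones(img.size)
    for _ in range(iters):
        coef, *_ = np.linalg.lstsq(A * w[:, None], z * w, rcond=None)
        r = z - A @ coef
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale
        u = np.clip(r / (4.685 * s), -1.0, 1.0)                   # Tukey biweight
        w = (1.0 - u * u) ** 2
    return img - (A @ coef).reshape(ny, nx), coef

# Synthetic tilted image with noise and two spike artifacts.
yy, xx = np.mgrid[0:40, 0:50]
img = 0.03 * xx + 0.01 * yy + 5.0 \
    + 0.1 * np.random.default_rng(1).standard_normal((40, 50))
img[10, 12] += 40.0
img[30, 33] -= 25.0
flat, coef = robust_level(img)
```

The spikes end up with zero weight, so the recovered tilt matches the true plane rather than being pulled by the artifacts.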
Compact integration factor methods for complex domains and adaptive mesh refinement.
Liu, Xinfeng; Nie, Qing
2010-08-10
The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation for the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF for other curvilinear coordinates through examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has similar computational efficiency and stability properties as the cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition for cIIF. Because the second order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulate a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed.
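The integration-factor idea, treating the stiff diffusion term exactly through its exponential while stepping the reaction explicitly, can be shown in a minimal first-order, one-dimensional Fourier version. The paper's cIIF is the compact two- and three-dimensional matrix form; this sketch only conveys the principle:

```python
import numpy as np

def iif_step(u, dt, D, L, reaction):
    """One integration-factor step for u_t = D u_xx + reaction(u) on a
    periodic domain of length L; diffusion is handled exactly in Fourier space."""
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=L / u.size)
    E = np.exp(-D * k * k * dt)            # exact diffusion propagator
    # First-order IIF: advance the reaction explicitly, then apply the factor.
    return np.fft.ifft(E * np.fft.fft(u + dt * reaction(u))).real

# Pure-diffusion check: a single Fourier mode decays as exp(-D k^2 t).
L, N, D, dt = 2 * np.pi, 64, 0.5, 0.1
x = np.linspace(0, L, N, endpoint=False)
u = np.sin(3 * x)
for _ in range(10):
    u = iif_step(u, dt, D, L, reaction=lambda v: 0.0 * v)
```

Because the diffusion propagator is exact, the step is stable for any dt; the stability restriction the abstract mentions for explicit schemes never enters.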
NASA Astrophysics Data System (ADS)
Cheng, Junsheng; Peng, Yanfeng; Yang, Yu; Wu, Zhantao
2017-02-01
Inspired by the ASTFA method, an adaptive sparsest narrow-band decomposition (ASNBD) method is proposed in this paper. In the ASNBD method, an optimized filter is first established; the parameters of the filter are determined by solving a nonlinear optimization problem. A regulated differential operator is used as the objective function so that each component is constrained to be a local narrow-band signal. Afterwards, the signal is filtered by the optimized filter to generate an intrinsic narrow-band component (INBC). ASNBD is proposed to solve problems present in ASTFA: the Gauss-Newton-type method applied to the optimization problem in ASTFA is irreplaceable and very sensitive to initial values, whereas a more appropriate optimization method, such as a genetic algorithm (GA), can be used to solve the optimization problem in ASNBD. Meanwhile, compared with ASTFA, the decomposition results generated by ASNBD have better physical meaning because the components are constrained to be local narrow-band signals. Comparisons are made between ASNBD, ASTFA and EMD by analyzing simulated and experimental signals. The results indicate that the ASNBD method is superior to the other two methods in generating more accurate components from noisy signals, restraining the boundary effect, possessing better orthogonality and diagnosing rolling element bearing faults.
A Bayesian adaptive blinded sample size adjustment method for risk differences.
Hartley, Andrew Montgomery
2015-01-01
Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non-inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed sample size study design, in terms of maximization of an expected utility function. The new method maximizes the expected utility better than the comparators do, under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study regarding the proposed methods.
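For contrast with the Bayesian approach proposed here, the classical blinded re-estimation for a risk difference splits the blinded pooled event rate into arm rates under an assumed effect and applies the standard two-proportion sample size formula. This is our sketch of the conventional comparator, not the paper's mixture-likelihood method:

```python
from math import ceil
from statistics import NormalDist

def blinded_n_per_arm(pooled_rate, assumed_delta, alpha=0.025, power=0.9):
    """Blinded sample-size re-estimate for a risk-difference test (sketch).
    The blinded pooled rate is split into arm rates under the assumed effect,
    which presumes 1:1 randomization."""
    p1 = pooled_rate + assumed_delta / 2.0
    p2 = pooled_rate - assumed_delta / 2.0
    z = NormalDist().inv_cdf(1 - alpha) + NormalDist().inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(z * z * var / assumed_delta ** 2)

n = blinded_n_per_arm(pooled_rate=0.55, assumed_delta=0.10)
```

Note that only the pooled rate comes from the data; the treatment effect itself stays an assumption, which is exactly the limitation the abstract says the proposed Bayesian method addresses.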
An adaptive multifluid interface-capturing method for compressible flow in complex geometries
Greenough, J.A.; Beckner, V.; Pember, R.B.; Crutchfield, W.Y.; Bell, J.B.; Colella, P.
1995-04-01
We present a numerical method for solving the multifluid equations of gas dynamics using an operator-split second-order Godunov method for flow in complex geometries in two and three dimensions. The multifluid system treats the fluid components as thermodynamically distinct entities and correctly models fluids with different compressibilities. This treatment allows a general equation-of-state (EOS) specification and the method is implemented so that the EOS references are minimized. The current method is complementary to volume-of-fluid (VOF) methods in the sense that a VOF representation is used, but no interface reconstruction is performed. The Godunov integrator captures the interface during the solution process. The basic multifluid integrator is coupled to a Cartesian grid algorithm that also uses a VOF representation of the fluid-body interface. This representation of the fluid-body interface allows the algorithm to easily accommodate arbitrarily complex geometries. The resulting single grid multifluid-Cartesian grid integration scheme is coupled to a local adaptive mesh refinement algorithm that dynamically refines selected regions of the computational grid to achieve a desired level of accuracy. The overall method is fully conservative with respect to the total mixture. The method will be used for a simple nozzle problem in two-dimensional axisymmetric coordinates.
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of the lenses' limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem, but block-based multi-focus image fusion methods often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed; it utilizes the characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm with LUE-SSIM as the objective function is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus blur images. In addition, a multi-focus image fusion experiment is carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and it effectively preserves the undistorted edge details in the focus regions of the source images.
Optimal energy-splitting method for an open-loop liquid crystal adaptive optics system.
Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Liu, Yonggang; Peng, Zenghui; Yang, Qingyun; Meng, Haoran; Yao, Lishuang; Xuan, Li
2012-08-13
A waveband-splitting method is proposed for open-loop liquid crystal adaptive optics systems (LC AOSs). The proposed method extends the working waveband, splits energy flexibly, and improves detection capability. Simulated analysis is performed for a waveband in the range of 350 nm to 950 nm. The results show that the optimal energy split is 7:3 for the wavefront sensor (WFS) and for the imaging camera with the waveband split into 350 nm to 700 nm and 700 nm to 950 nm, respectively. A validation experiment is conducted by measuring the signal-to-noise ratio (SNR) of the WFS and the imaging camera. The results indicate that for the waveband-splitting method, the SNR of WFS is approximately equal to that of the imaging camera with a variation in the intensity. On the other hand, the SNR of the WFS is significantly different from that of the imaging camera for the polarized beam splitter energy splitting scheme. Therefore, the waveband-splitting method is more suitable for an open-loop LC AOS. An adaptive correction experiment is also performed on a 1.2-meter telescope. A star with a visual magnitude of 4.45 is observed and corrected and an angular resolution ability of 0.31″ is achieved. A double star with a combined visual magnitude of 4.3 is observed as well, and its two components are resolved after correction. The results indicate that the proposed method can significantly improve the detection capability of an open-loop LC AOS.
Automatic white matter lesion segmentation using an adaptive outlier detection method.
Ong, Kok Haur; Ramachandram, Dhanesh; Mandava, Rajeswari; Shuaib, Ibrahim Lutfi
2012-07-01
White matter (WM) lesions are diffuse WM abnormalities that appear as hyperintense (bright) regions in cranial magnetic resonance imaging (MRI). WM lesions are often observed in older populations and are important indicators of stroke, multiple sclerosis, dementia and other brain-related disorders. In this paper, a new automated method for WM lesion segmentation is presented. In the proposed method, the presence of WM lesions is detected as outliers in the intensity distribution of the fluid-attenuated inversion recovery (FLAIR) MR images using an adaptive outlier detection approach. Outliers are detected using a novel adaptive trimmed mean algorithm and box-whisker plot. In addition, pre- and postprocessing steps are implemented to reduce false positives attributed to MRI artifacts commonly observed in FLAIR sequences. The approach is validated using the cranial MRI sequences of 38 subjects. A significant correlation (R = 0.9641, P = 3.12 × 10⁻³) is observed between the automated approach and manual segmentation by a radiologist. The accuracy of the proposed approach was further validated by comparing the lesion volumes computed using the automated approach and lesions manually segmented by an expert radiologist. Finally, the proposed approach is compared against leading lesion segmentation algorithms using a benchmark dataset.
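The outlier idea, combining a trimmed-mean threshold with the box-and-whisker fence, can be sketched as follows. The trim fraction, the 3-sigma multiplier, and the rule for combining the two thresholds are our illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def detect_hyperintense(values, trim=0.2, k=1.5):
    """Flag high-intensity outliers using a trimmed-mean threshold
    combined with the classic upper box-whisker fence (sketch)."""
    values = np.asarray(values, dtype=float)
    v = np.sort(values)
    lo, hi = int(trim * v.size), int((1 - trim) * v.size)
    core = v[lo:hi]                              # central values only
    t_thresh = core.mean() + 3.0 * core.std()    # trimmed-mean rule
    q1, q3 = np.percentile(values, [25, 75])
    whisker = q3 + k * (q3 - q1)                 # upper box-whisker fence
    return values > max(whisker, t_thresh)       # conservative of the two

# 95 normal-tissue intensities plus 5 hyperintense lesion voxels.
x = np.concatenate([np.full(95, 100.0) + np.arange(95) * 0.1,
                    np.array([300.0, 280.0, 310.0, 290.0, 305.0])])
mask = detect_hyperintense(x)
```

Trimming makes the mean and standard deviation insensitive to the very outliers being sought, which is what lets the threshold adapt to each image's intensity distribution.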
A texture-analysis-based design method for self-adaptive focus criterion function.
Liang, Q; Qu, Y F
2012-05-01
Autofocusing (AF) criterion functions are critical to the performance of a passive autofocusing system in automatic video microscopy. Most autofocusing criterion functions proposed to date depend on the imaging system and on the image captured by the objective being focused or ranged. This dependence destabilizes the performance of the system when the criterion functions are applied to objectives with different characteristics. In this paper, a new design method for autofocusing criterion functions is introduced. This method enables the system to determine the texture direction information of the objective. Based on this information, the optimal focus criterion function specific to one texture direction is designed, avoiding the blind use of autofocusing functions that perform poorly on certain surfaces and can even lead to failure of the whole process. In this way, we improve the self-adaptability, robustness, reliability and focusing accuracy of the algorithm. First, the grey-level co-occurrence matrices of real-time images are calculated in four directions. Next, the contrast values of the four matrices are computed and compared; the result reflects the directional information of the measured objective surfaces. Finally, with the directional information, an adaptive criterion function is constructed. To demonstrate the effectiveness of the new focus algorithm, we conducted experiments on different texture surfaces and compared the results with those obtained by existing algorithms. The proposed algorithm performs excellently with different measured objectives.
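The directional texture probe described above, grey-level co-occurrence matrices in four directions compared by their contrast, can be sketched directly. The pixel offsets and level count below are illustrative:

```python
import numpy as np

def glcm_contrast(img, offset, levels=8):
    """Contrast of the grey-level co-occurrence matrix for one pixel offset."""
    dy, dx = offset
    a = img[max(0, -dy):img.shape[0] - max(0, dy),
            max(0, -dx):img.shape[1] - max(0, dx)]
    b = img[max(0, dy):, max(0, dx):][:a.shape[0], :a.shape[1]]
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)      # count co-occurring pairs
    P /= P.sum()
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())

# Probe at 0, 45, 90 and 135 degrees on a vertically striped test image.
stripes = np.tile(np.array([0, 1, 0, 1, 0, 1, 0, 1]), (8, 1))
offsets = {0: (0, 1), 45: (-1, 1), 90: (1, 0), 135: (1, 1)}
contrasts = {deg: glcm_contrast(stripes, off) for deg, off in offsets.items()}
```

For the striped image, contrast vanishes along the stripes (90°) and is maximal across them, which is exactly the directional signature the criterion-function design exploits.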
NASA Technical Reports Server (NTRS)
Kopasakis, George
2004-01-01
An adaptive feedback control method was demonstrated that suppresses thermoacoustic instabilities in a liquid-fueled combustor of a type used in aircraft engines. Extensive research has been done to develop lean-burning (low fuel-to-air ratio) combustors that can reduce emissions throughout the mission cycle to reduce the environmental impact of aerospace propulsion systems. However, these lean-burning combustors are susceptible to thermoacoustic instabilities (high-frequency pressure waves), which can fatigue combustor components and even the downstream turbine blades. This can significantly decrease the safe operating lives of the combustor and turbine. Thus, suppressing the thermoacoustic combustor instabilities is an enabling technology for lean, low-emissions combustors under NASA's Propulsion and Power Program. This control methodology has been developed and tested in a partnership of the NASA Glenn Research Center, Pratt & Whitney, United Technologies Research Center, and the Georgia Institute of Technology. Initial combustor rig testing of the controls algorithm was completed during 2002. Subsequently, the test results were analyzed and improvements to the method were incorporated in 2003, which culminated in the final status of this controls algorithm. This control methodology is based on adaptive phase shifting. The combustor pressure oscillations are sensed and phase shifted, and a high-frequency fuel valve is actuated to put pressure oscillations into the combustor to cancel pressure oscillations produced by the instability.
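The adaptive phase-shifting principle, sensing the pressure oscillation, shifting its phase, and injecting the result so that it cancels the instability, can be illustrated on a single-tone toy model. A grid search over the phase stands in for the adaptive law, and none of the numbers below are from the combustor rig:

```python
import numpy as np

# Toy single-tone instability: one dominant pressure oscillation.
fs, f0 = 10_000.0, 500.0                  # sample rate and tone frequency (Hz)
t = np.arange(0, 0.1, 1 / fs)
pressure = np.sin(2 * np.pi * f0 * t)

def residual_amplitude(phi):
    """Residual oscillation after injecting an inverted, phase-shifted copy."""
    delay = int(round((phi / (2 * np.pi)) * fs / f0))  # phase -> sample delay
    actuation = -np.roll(pressure, delay)
    return np.abs(pressure + actuation)[delay:].max()  # skip wrapped samples

# 'Adaptation' as a brute-force search over candidate phase shifts.
best = min(np.linspace(0, 2 * np.pi, 64), key=residual_amplitude)
```

At the correct phase the injected signal cancels the oscillation almost completely, while a half-period error doubles it; a real controller tracks this phase online against a drifting instability frequency.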
An Adaptive Fast Multipole Boundary Element Method for Poisson−Boltzmann Electrostatics
Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, Jonathan
2009-01-01
The numerical solution of the Poisson−Boltzmann (PB) equation is a useful but a computationally demanding tool for studying electrostatic solvation effects in chemical and biomolecular systems. Recently, we have described a boundary integral equation-based PB solver accelerated by a new version of the fast multipole method (FMM). The overall algorithm shows an order N complexity in both the computational cost and memory usage. Here, we present an updated version of the solver by using an adaptive FMM for accelerating the convolution type matrix-vector multiplications. The adaptive algorithm, when compared to our previous nonadaptive one, not only significantly improves the performance of the overall memory usage but also remarkably speeds the calculation because of an improved load balancing between the local- and far-field calculations. We have also implemented a node-patch discretization scheme that leads to a reduction of unknowns by a factor of 2 relative to the constant element method without sacrificing accuracy. As a result of these improvements, the new solver makes the PB calculation truly feasible for large-scale biomolecular systems such as a 30S ribosome molecule even on a typical 2008 desktop computer. PMID:19517026
Florio, C S
2015-04-01
Improved methods to analyze and compare the muscle-based influences that drive bone strength adaptation can aid in understanding the wide array of experimental observations about the effectiveness of various mechanical countermeasures to the losses in bone strength that result from age, disuse, and reduced-gravity environments. The coupling of gradient-based and gradientless numerical optimization routines with finite element methods in this work results in a modeling technique that determines the individual magnitudes of the muscle forces acting in a multisegment musculoskeletal system and predicts the improvement in the stress state uniformity, and therefore strength, of a targeted bone through simulated local cortical material accretion and resorption. With a performance-based stopping criterion, no experimentally based or system-based parameters, and a design that includes the direct and indirect effects of muscles attached to the targeted bone as well as to its neighbors, shape and strength alterations resulting from a wide range of boundary conditions can be consistently quantified. As demonstrated in a representative parametric study, the developed technique provides a clearer foundation for the study of the relationships between muscle forces and the induced changes in bone strength. Its use can lead to better control of such adaptive phenomena.
A goal-oriented adaptive procedure for the quasi-continuum method with cluster approximation
NASA Astrophysics Data System (ADS)
Memarnahavandi, Arash; Larsson, Fredrik; Runesson, Kenneth
2015-04-01
We present a strategy for adaptive error control for the quasi-continuum (QC) method applied to molecular statics problems. The QC method is introduced in two steps: First, introducing QC interpolation while accounting for the exact summation of all the bond energies, we compute goal-oriented error estimators in a straightforward fashion based on the pertinent adjoint (dual) problem. Second, for large QC elements the bond energy and its derivatives are typically computed using an appropriate discrete quadrature with cluster approximations, which introduces a model error. The combined error is estimated approximately based on the same dual problem in conjunction with a hierarchical strategy for approximating the residual. As a model problem, we carry out atomistic-to-continuum homogenization of a graphene monolayer, where the carbon-carbon energy bonds are modeled via the Tersoff-Brenner potential, which involves next-nearest neighbor couplings. In particular, we are interested in computing the representative response for an imperfect lattice. Within the goal-oriented framework it becomes natural to choose the macro-scale (continuum) stress as the "quantity of interest". Two different formulations are adopted: the Basic formulation and the Global formulation. The presented numerical investigation shows the accuracy and robustness of the proposed error estimator and the pertinent adaptive algorithm.
An Efficient Adaptive Window Size Selection Method for Improving Spectrogram Visualization
Nisar, Shibli; Khan, Omar Usman; Tariq, Muhammad
2016-01-01
Short Time Fourier Transform (STFT) is an important technique for the time-frequency analysis of a time varying signal. The basic approach behind it involves the application of a Fast Fourier Transform (FFT) to a signal multiplied with an appropriate window function with fixed resolution. The selection of an appropriate window size is difficult when no background information about the input signal is known. In this paper, a novel empirical model is proposed that adaptively adjusts the window size for a narrow-band signal using a spectrum sensing technique. For wide-band signals, where a fixed time-frequency resolution is undesirable, the approach adopts the constant-Q transform (CQT). Unlike the STFT, the CQT provides a varying time-frequency resolution. This results in a high spectral resolution at low frequencies and high temporal resolution at high frequencies. In this paper, a simple but effective switching framework is provided between both STFT and CQT. The proposed method also allows for the dynamic construction of a filter bank according to user-defined parameters. This helps in reducing redundant entries in the filter bank. Results obtained from the proposed method not only improve the spectrogram visualization but also reduce the computation cost, achieving an appropriate window length selection rate of 87.71%. PMID:27642291
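A crude version of the narrow-band/wide-band decision, sensing how much of the spectrum is occupied and picking a window length accordingly, can be sketched as follows. The occupancy threshold and the two window sizes are our illustrative stand-ins for the paper's empirical model:

```python
import numpy as np

def pick_window(x, occ_thresh=0.02, long_n=4096, short_n=256):
    """Spectrum-sensing heuristic (sketch): narrow-band signals get a long
    STFT window for fine frequency resolution; wide-band signals get a
    short one (the paper switches to a CQT instead)."""
    X = np.abs(np.fft.rfft(x))
    occupancy = np.mean(X > 0.1 * X.max())   # fraction of active FFT bins
    return long_n if occupancy < occ_thresh else short_n

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)                     # narrow-band tone
chirp = np.sin(2 * np.pi * (100 * t + 1500 * t ** 2))  # wide-band sweep
```

The pure tone occupies a single bin and so earns the long window; the chirp smears across thousands of bins and gets the short one.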
NASA Astrophysics Data System (ADS)
Le Jeune, L.; Robert, S.; Dumas, P.; Membre, A.; Prada, C.
2015-03-01
In this paper, we propose an ultrasonic adaptive imaging method based on phased-array technology and the synthetic focusing algorithm known as the Total Focusing Method (TFM). The general principle is to image the surface by applying the TFM algorithm in a semi-infinite water medium. Then, the reconstructed surface is taken into account to compute a second TFM image inside the component. In the surface reconstruction step, the TFM algorithm has been optimized to decrease computation time and to limit noise in water. In the second step, the ultrasonic paths through the reconstructed surface are calculated by Fermat's principle and an iterative algorithm, and the classical TFM is applied to obtain an image inside the component. This paper presents several results of TFM imaging in components of different geometries, and a result obtained with a new technology of probes equipped with a flexible wedge filled with water (manufactured by Imasonic).
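The TFM backbone used in both steps, delay-and-sum focusing of full-matrix-capture data at every image point, can be sketched for the simplest case of a homogeneous medium with no interface. The paper's contribution is the refracted two-medium version; all parameters below are illustrative:

```python
import numpy as np

c, fs = 1500.0, 50e6                        # wave speed (m/s), sampling (Hz)
elems = np.stack([np.linspace(-8e-3, 8e-3, 16), np.zeros(16)], axis=1)
scat = np.array([2e-3, 10e-3])              # true scatterer position (x, z)

def tof(p, q):
    """Time of flight between two points in the homogeneous medium."""
    return np.hypot(*(p - q)) / c

# Synthetic full-matrix capture: a Gaussian echo per transmit/receive pair.
nt = 2048
n = np.arange(nt)
sig = np.zeros((16, 16, nt))
for i in range(16):
    for j in range(16):
        tt = tof(elems[i], scat) + tof(scat, elems[j])
        sig[i, j] = np.exp(-((n / fs - tt) * 5e6) ** 2)

# TFM: for each pixel, sum every A-scan at that pixel's round-trip delay.
xs = np.linspace(-8e-3, 8e-3, 33)
zs = np.linspace(5e-3, 15e-3, 21)
img = np.zeros((zs.size, xs.size))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        p = np.array([x, z])
        for i in range(16):
            for j in range(16):
                s = int((tof(elems[i], p) + tof(p, elems[j])) * fs)
                if s < nt:
                    img[iz, ix] += sig[i, j, s]
```

All 256 transmit/receive pairs add coherently only at the scatterer, so the image peaks at its true position; the adaptive method replaces `tof` with Fermat-principle paths through the reconstructed surface.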
Adaptive-Grid Methods for Phase Field Models of Microstructure Development
NASA Technical Reports Server (NTRS)
Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.
1999-01-01
In this work the authors show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model used by Karma and Rappel.
Numerical simulation of diffusion MRI signals using an adaptive time-stepping method
NASA Astrophysics Data System (ADS)
Li, Jing-Rebecca; Calhoun, Donna; Poupon, Cyril; Le Bihan, Denis
2014-01-01
The effect on the MRI signal of water diffusion in biological tissues in the presence of applied magnetic field gradient pulses can be modelled by a multiple compartment Bloch-Torrey partial differential equation. We present a method for the numerical solution of this equation by coupling a standard Cartesian spatial discretization with an adaptive time discretization. The time discretization is done using the explicit Runge-Kutta-Chebyshev method, which is more efficient than the forward Euler time discretization for diffusive-type problems. We use this approach to simulate the diffusion MRI signal from the extra-cylindrical compartment in a tissue model of the brain gray matter consisting of cylindrical and spherical cells and illustrate the effect of cell membrane permeability.
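The appeal of the Runge-Kutta-Chebyshev family used here is that an s-stage first-order step is stable on a real interval growing like 2s², so the time step is no longer tied to the finest mesh spacing. A minimal undamped RKC1 step is sketched below; the production solvers the paper builds on add damping and adapt both s and dt:

```python
import numpy as np

def rkc1_step(f, u, dt, s):
    """One undamped first-order Runge-Kutta-Chebyshev step with s stages.
    The stage recurrence mirrors the Chebyshev three-term recurrence, which
    is what stretches the real stability interval to about 2*s**2."""
    w0 = 1.0
    w1 = 1.0 / s ** 2                  # = T_s(w0) / T_s'(w0) for w0 = 1
    b = np.ones(s + 1)                 # b_j = 1 / T_j(w0) = 1 when w0 = 1
    y_prev, y = u, u + w1 * dt * f(u)  # stages Y_0 and Y_1
    for j in range(2, s + 1):
        mu = 2.0 * w0 * b[j] / b[j - 1]
        nu = -b[j] / b[j - 2]
        y_new = mu * y + nu * y_prev + 2.0 * w1 * (b[j] / b[j - 1]) * dt * f(y)
        y_prev, y = y, y_new
    return y

# Stability demo on u' = -150 u with dt = 1: forward Euler's amplification
# factor would be |1 - 150| = 149, but 10 RKC stages remain stable.
u1 = rkc1_step(lambda u: -150.0 * u, 1.0, 1.0, s=10)
```

Here lambda·dt = -150 lies inside the 10-stage stability interval of about [-200, 0], and the amplification factor equals T₁₀(-0.5) = -0.5.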
Adaptation of an ethnographic method for investigation of the task domain in diagnostic radiology
NASA Astrophysics Data System (ADS)
Ramey, Judith A.; Rowberg, Alan H.; Robinson, Carol
1992-07-01
A number of user-centered methods for designing radiology workstations have been described by researchers at Carleton University (Ottawa), Georgetown University, George Washington University, and University of Arizona, among others. The approach described here differs in that it enriches standard human-factors practices with methods adapted from ethnography to study users (in this case, diagnostic radiologists) as members of a distinct culture. The overall approach combines several methods; the core method, based on ethnographic "stream of behavior chronicles" and their analysis, has four phases: (1) first, we gather the stream of behavior by videotaping a radiologist as he or she works; (2) we view the tape ourselves and formulate questions and hypotheses about the work; then (3) in a second videotaped session, we show the radiologist the original tape and ask for a running commentary on the work, into which, at the appropriate points, we interject our questions for clarification. We then (4) categorize/index the behavior on the "raw data" tapes for various kinds of follow-on analysis. We describe and illustrate this method in detail, describe how we analyze the "raw data" videotapes and the commentary tapes, and explain how the method can be integrated into an overall user-centered design process based on standard human-factors techniques.
Liang, Xiaoyu; Wang, Zhenchuan; Sha, Qiuying; Zhang, Shuanglin
2016-01-01
Currently, the analyses of most genome-wide association studies (GWAS) have been performed on a single phenotype. There is increasing evidence showing that pleiotropy is a widespread phenomenon in complex diseases. Therefore, using only a single phenotype may lack the statistical power to identify the underlying genetic mechanism. There is an increasing need to develop and apply powerful statistical tests to detect association between multiple phenotypes and a genetic variant. In this paper, we develop an Adaptive Fisher's Combination (AFC) method for joint analysis of multiple phenotypes in association studies. The AFC method combines p-values obtained in standard univariate GWAS by using an optimal number of p-values that is determined by the data. We perform extensive simulations to evaluate the performance of the AFC method and compare the power of our method with the powers of TATES, Tippett's method, Fisher's combination test, MANOVA, MultiPhen, and SUMSCORE. Our simulation studies show that the proposed method has correct type I error rates and is either the most powerful test or comparable with the most powerful test. Finally, we illustrate our proposed methodology by analyzing whole-genome genotyping data from a lung function study. PMID:27694844
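The adaptive-combination statistic, Fisher's method applied to the k smallest p-values for every k, keeping the best, can be sketched in a few lines. The closed-form chi-square tail works because the degrees of freedom are even; in practice the minimized statistic must still be calibrated (e.g. by permutation), which is omitted here:

```python
from math import exp, log

def chi2_sf_even(x, k):
    """Survival function of a chi-square with 2k degrees of freedom
    (closed form: exp(-x/2) * sum_{i<k} (x/2)^i / i!)."""
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2.0) / i
        total += term
    return exp(-x / 2.0) * total

def adaptive_fisher(pvals):
    """Fisher-combine the k smallest p-values for every k; keep the best.
    The returned value is a statistic, not a calibrated p-value."""
    p = sorted(pvals)
    best = (1.0, 0)
    for k in range(1, len(p) + 1):
        stat = -2.0 * sum(log(v) for v in p[:k])
        comb = chi2_sf_even(stat, k)
        if comb < best[0]:
            best = (comb, k)
    return best   # (smallest combined value, number of p-values used)

best_p, best_k = adaptive_fisher([1e-6, 0.8, 0.9, 0.7, 0.6])
```

With one very small p-value among large ones, the data-driven choice is k = 1, illustrating how the adaptive cut avoids diluting a strong single-phenotype signal.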
Trinh, Quoclinh; Xu, Wentao; Shi, Hui; Luo, Yunbo; Huang, Kunlun
2012-06-01
A-T linker adapter polymerase chain reaction (PCR) was modified and employed for the isolation of genomic fragments adjacent to a known DNA sequence. The improvements in the method focus on two points. The first is the modification of the PO₄ and NH₂ groups in the adapter to inhibit the self-ligation of the adapter or the generation of nonspecific products. The second improvement is the use of the capacity of rTaq DNA polymerase to add an adenosine overhang at the 3' ends of digested DNA to suppress self-ligation in the digested DNA and simultaneously resolve restriction site clone bias. The combination of modifications in the adapter and in the digested DNA leads to T/A-specific ligation, which enhances the flexibility of this method and makes it feasible to use many different restriction enzymes with a single adapter. This novel A-T linker adapter PCR overcomes the inherent limitations of the original ligation-mediated PCR method such as low specificity and a lack of restriction enzyme choice. Moreover, this method also offers higher amplification efficiency, greater flexibility, and easier manipulation compared with other PCR methods for chromosome walking. Experimental results from 143 Arabidopsis mutants illustrate that this method is reliable and efficient in high-throughput experiments.
Ritter, André V; Cavalcante, Larissa M; Swift, Edward J; Thompson, Jeffrey Y; Pimenta, Luiz A
2006-08-01
The objective of this study was to investigate the effects of different light-curing methods on microleakage, marginal adaptation, and microhardness of composite restorations. Slot-type preparations were made in bovine teeth, with gingival margins on dentin. Specimens were divided into 12 groups (n = 12) according to composite-light-curing unit (LCU) combinations. Three composites were used: Filtek Supreme, Herculite XRV, and Heliomolar. All restorations were placed using the same adhesive. Four LCUs were used: a quartz-tungsten-halogen (QTH) LCU (Optilux 501), a first-generation light-emitting diode (LED) LCU (FreeLight 1), and two second-generation LED LCUs (FreeLight 2 and Translux Power Blue). After finishing and polishing, specimens were subjected to mechanical load cycling (100,000 cycles). Gingival margin adaptation was determined as a function of gap formation using epoxy replicas. Microleakage was evaluated by measuring dye penetration across the gingival wall in cross-sectioned specimens. Microhardness was measured as Knoop Hardness number (KHN) at different occluso-gingival locations in cross-sectioned specimens. Data were analyzed for statistical significance (p = 0.05) using appropriate statistical tests. Marginal adaptation was affected by load-cycling in most specimens, but no significant differences were observed among composites and LCUs. Microleakage was not affected by LCU, except for Heliomolar specimens which when cured with Optilux 501 resulted in higher microleakage scores than those obtained with the other LCUs. For microhardness, Translux Power Blue generally produced the highest values and the FreeLight 1 produced the lowest. The performance of the second-generation LED LCUs generally was similar to that of the QTH control, and better than that of the first-generation LED unit.
Adaptive Finite Element Method for Solving the Exact Kohn-Sham Equation of Density Functional Theory
Bylaska, Eric J.; Holst, Michael; Weare, John H.
2009-04-14
Results of the application of an adaptive finite element (FE) based solution, using the FETK library of M. Holst, to the Density Functional Theory (DFT) approximation of the electronic structure of atoms and molecules are reported. The severe problem associated with the rapid variation of the electronic wave functions in the near-singular regions of the atomic centers is treated by implementing completely unstructured simplex meshes that resolve these features around atomic nuclei. This concentrates the computational work in the regions in which the shortest length scales are necessary and provides low resolution in regions with no electron density. The accuracy of the solutions improved significantly when adaptive mesh refinement was applied, and it was found that the essential difficulties of the Kohn-Sham eigenvalue equation result from the singular behavior of the atomic potentials. Even though the matrix representations of the discrete Hamiltonian operator in the adaptive finite element basis are always sparse, with a linear complexity in the number of discretization points, the overall memory and computational requirements for the solver implemented were found to be quite high. The number of mesh vertices per atom as a function of the atomic number Z and the required accuracy ε (in atomic units) was estimated to be v(ε; Z) = 122.37 · Z^2.2346 / ε^1.1173, and the number of floating point operations per minimization step for a system of N_A atoms was found to be O(N_A^3 · v(ε, Z)) (e.g., for Z = 26, ε = 0.0015 au, and N_A = 100, the memory requirement and computational cost would be ~0.2 terabytes and ~25 petaflops). It was found that the high cost of the method could be reduced somewhat by using a geometry-based refinement strategy to fix the error near the singularities.
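The reported scaling can be checked by direct arithmetic. The Python sketch below evaluates the fitted vertex count v(ε; Z) = 122.37 · Z^2.2346 / ε^1.1173 (as reconstructed from the garbled abstract, reading the colons as decimal points) and reproduces the ~0.2 terabyte memory example, assuming one double-precision value per mesh vertex — the per-vertex storage is not stated in the abstract.

```python
import math

def vertices_per_atom(eps: float, Z: int) -> float:
    """Fitted vertex count per atom, v(eps; Z) = 122.37 * Z**2.2346 / eps**1.1173,
    as reconstructed from the abstract."""
    return 122.37 * Z**2.2346 / eps**1.1173

# Reproduce the abstract's example: Z = 26, eps = 0.0015 au, N_A = 100 atoms.
v = vertices_per_atom(0.0015, 26)
n_atoms = 100
memory_bytes = n_atoms * v * 8  # one double per mesh vertex (assumption)

print(f"vertices per atom ~ {v:.3g}")
print(f"memory ~ {memory_bytes / 1e12:.2f} TB")  # ~0.2 TB, matching the abstract
```

The memory figure matches the abstract's example, which supports the reconstructed formula; the ~25 petaflop figure depends on constants the abstract does not give.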
NASA Astrophysics Data System (ADS)
Zhang, Lin-Lin; Yuan, Shi-Jin; Mu, Bin; Zhou, Fei-Fan
2017-02-01
In this paper, conditional nonlinear optimal perturbation (CNOP) was investigated to identify sensitive areas for tropical cyclone adaptive observations using a principal component analysis based genetic algorithm (PCAGA) method; two tropical cyclones, Fitow (2013) and Matmo (2014), were studied at 120 km resolution using the fifth-generation Mesoscale Model (MM5). To verify the effectiveness of the PCAGA method, CNOPs were also calculated by an adjoint-based method as a benchmark for comparison of patterns, energies, and vertical distributions of temperature. Compared with the benchmark, the CNOPs obtained from PCAGA had similar patterns for Fitow and slightly different ones for Matmo; the vertically integrated energies were located closer to the verification areas and the initial tropical cyclones. Experimental results also showed that the CNOPs of PCAGA had a more positive impact on forecast improvement, a benefit gained from the reductions of the CNOPs over the whole domain containing the sensitive areas. Furthermore, the PCAGA program was executed 40 times for each case, and all the averages of the benefits were larger than the benchmark. This also demonstrated the validity and stability of the PCAGA method. All results showed that the PCAGA method can approximately solve the CNOP of complicated models without computing adjoint models, and obtain greater benefits from reducing the CNOPs over the whole domain.
Dynamic Adaptive Runtime Systems for Advanced Multipole Method-based Science Achievement
NASA Astrophysics Data System (ADS)
Debuhr, Jackson; Anderson, Matthew; Sterling, Thomas; Zhang, Bo
2015-04-01
Multipole methods are a key computational kernel for a large class of scientific applications spanning multiple disciplines. Yet many of these applications are strong scaling constrained when using conventional programming practices. Hardware parallelism continues to grow, emphasizing medium and fine-grained thread parallelism rather than the coarse-grained process parallelism favored by conventional programming practices. Emerging, dynamic task management execution models can go beyond these conventional practices to significantly improve both efficiency and scalability for algorithms like multipole methods which exhibit irregular and time-varying execution properties. We present a new scientific library, DASHMM, built on the ParalleX HPX-5 runtime system, which explores the use of dynamic adaptive runtime techniques to improve scalability and efficiency for multipole-method based scientific computing. DASHMM allows application scientists to rapidly create custom, scalable, and efficient multipole methods, especially targeting the Fast Multipole Method and the Barnes-Hut N-body algorithm. After a discussion of the system and its goals, some application examples will be presented.
NASA Astrophysics Data System (ADS)
Hong, Wien; Chen, Tung-Shou; Wu, Mei-Chen
2013-03-01
Jung et al. (IEEE Signal Processing Letters, 18(2), 95, 2011) proposed a reversible data hiding method considering the human visual system (HVS). They employed the mean of visited neighboring pixels to predict the current pixel value, and estimated the just noticeable difference (JND) of the current pixel. Message bits are then embedded by adjusting the embedding level according to the calculated JND. Jung et al.'s method achieved excellent image quality. However, the embedding algorithm they used may result in over-modification of pixel values and a large location map, which may deteriorate the image quality and decrease the pure payload. The proposed method exploits the nearest neighboring pixels to predict the visited pixel value and to estimate the corresponding JND. The cover pixels are preprocessed adaptively to reduce the size of the location map. We also employ an embedding-level selection mechanism to prevent near-saturated pixels from being over-modified. Experimental results show that the image quality of the proposed method is higher than that of Jung et al.'s method, and the payload can also be increased due to the reduction of the location map.
P-method post hoc test for adaptive trimmed mean, HQ
NASA Astrophysics Data System (ADS)
Low, Joon Khim; Yahaya, Sharipah Soaad Syed; Abdullah, Suhaida; Yusof, Zahayu Md; Othman, Abdul Rahman
2014-12-01
The adaptive trimmed mean, HQ, one of the latest additions to the robust estimators, had been proven to be good at controlling Type I error in omnibus tests. However, a post hoc (pairwise multiple comparison) procedure for HQ had yet to be developed. Thus, we have taken the initiative to develop a post hoc procedure for HQ. The percentile bootstrap method, or P-Method, was proposed as it has proven effective in controlling the Type I error rate even when the sample size is small. This paper deliberates on the effectiveness of the P-Method on HQ, denoted as P-HQ. The strengths and weaknesses of the proposed method were put to the test under various conditions created by manipulating several variables such as the shape of the distributions, number of groups, sample sizes, degree of variance heterogeneity, and pairing of sample sizes and group variances. To this end, a simulation study on 2000 datasets was conducted using SAS/IML Version 9.2. The performance of the method under various conditions was based on its ability to control Type I error, which was benchmarked using Bradley's criterion of robustness. The findings revealed that P-HQ could effectively control Type I error for almost all the conditions investigated.
TU-C-17A-07: FusionARC Treatment with Adaptive Beam Selection Method
Kim, H; Li, R; Xing, L; Lee, R
2014-06-15
Purpose: Recently, a new treatment scheme, FusionARC, has been introduced to compensate for the pitfalls of single-arc VMAT planning. It allows for static-field treatment at selected locations, while the remainder is treated by a single rotational arc delivery. The important issue is how to choose the directions for static-field treatment. This study presents an adaptive beam selection method to formulate the fusionARC treatment scheme. Methods: The optimal plan for single rotational arc treatment is obtained from a two-step approach based on reweighted total-variation (TV) minimization. To choose the directions for static-field treatment with extra segments, the value of our proposed cost function at each field is computed on the new fluence map, which adds an extra segment to the designated field location only. The cost function is defined as a summation of the equivalent uniform dose (EUD) of all structures given the fluence map, under the assumption that a lower cost function value implies an enhancement of plan quality. Finally, the extra segments for static-field treatment are added to the selected directions with low cost function values. Data from a prostate patient were used for evaluation with three different plans: conventional VMAT, fusionARC, and static IMRT. Results: The seven field locations corresponding to the lowest cost function values were chosen for insertion of extra segments for step-and-shoot dose delivery. Our proposed fusionARC plan with the selected angles improves dose sparing of the critical organs relative to the static IMRT and conventional VMAT plans. The dose conformity to the target is significantly enhanced at a small expense of treatment time compared with the VMAT plan. Its estimated treatment time, however, is still much shorter than that of IMRT. Conclusion: The fusionARC treatment with the adaptive beam selection method can improve plan quality with only a negligible increase in treatment time relative to conventional VMAT.
The rejection of vibrations in adaptive optics systems using a DFT-based estimation method
NASA Astrophysics Data System (ADS)
Kania, Dariusz; Borkowski, Józef
2016-04-01
Adaptive optics systems are commonly used in many optical structures to reduce perturbations and to increase system performance. A problem in such systems is undesirable vibrations caused by effects such as shaking of the whole structure or the tracking process. This paper presents a frequency, amplitude, and phase estimation method for a multifrequency signal that can be used to reject these vibrations adaptively. The estimation method is based on the FFT procedure. The undesirable signals are usually exponentially damped harmonic oscillations. The estimation error depends on several parameters and consists of a systematic component and a random component. The systematic error depends on the signal phase, the number of samples N in a measurement window, the value of CiR (the number of signal periods in a measurement window), the THD value, and the time window order H. The random error depends mainly on the variance of the noise and the SNR value. This paper presents research on the sinusoidal signal phase and the estimation of the parameters of exponentially damped sinusoids. The error signals are periodic, with a shape associated with the signal period and the sliding measurement window. For CiR = 1.6 and a damping ratio of 0.1%, the error was on the order of 10^-5 Hz/Hz, 10^-4 V/V, and 10^-4 rad for the frequency, amplitude, and phase estimation, respectively. The information provided in this paper can be used to determine the approximate efficiency of the vibration elimination process before starting it.
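A minimal Python sketch of the FFT-based estimation step is shown below for an undamped tone: window the record, take the FFT, and read frequency, amplitude, and phase off the peak bin. It omits the interpolation, damping correction, and error-analysis machinery the paper actually studies, and the Hann window and normalization are assumptions for the sketch.

```python
import numpy as np

def estimate_tone(x, fs):
    """Estimate frequency, amplitude and phase of the dominant sinusoid
    from a Hann-windowed FFT (peak-bin estimate only; no interpolation
    or damping correction)."""
    n = len(x)
    w = np.hanning(n)
    spec = np.fft.rfft(w * x)
    k = np.argmax(np.abs(spec[1:])) + 1        # skip the DC bin
    freq = k * fs / n
    amp = 2 * np.abs(spec[k]) / w.sum()        # undo the window's coherent gain
    phase = np.angle(spec[k])
    return freq, amp, phase

fs, n = 1000.0, 1000
t = np.arange(n) / fs
x = 1.5 * np.cos(2 * np.pi * 50.0 * t + 0.7)   # CiR = 50 periods in the window
f, a, p = estimate_tone(x, fs)
print(f, a, p)
```

With an integer CiR, as here, the peak-bin estimates are nearly exact; the paper's systematic-error analysis concerns the general non-integer-CiR, damped case.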
An Adaptive Finite Difference Method for Hyperbolic Systems in One Space Dimension
Bolstad, John H.
1982-06-01
Many problems of physical interest have solutions which are generally quite smooth in a large portion of the region of interest, but have local phenomena such as shocks, discontinuities or large gradients which require much more accurate approximations or finer grids for reasonable accuracy. Examples are atmospheric fronts, ocean currents, and geological discontinuities. In this thesis we develop and partially analyze an adaptive finite difference mesh refinement algorithm for the initial boundary value problem for hyperbolic systems in one space dimension. The method uses clusters of uniform grids which can "move" along with pulses or steep gradients appearing in the calculation, and which are superimposed over a uniform coarse grid. Such refinements are created, destroyed, merged, separated, recursively nested or moved based on estimates of the local truncation error. We use a four-way linked tree and sequentially allocated deques (double-ended queues) to perform these operations efficiently. The local truncation error in the interior of the region is estimated using a three-step Richardson extrapolation procedure, which can also be considered a deferred correction method. At the boundaries we employ differences to estimate the error. Our algorithm was implemented using a portable, extensible Fortran preprocessor, to which we added records and pointers. The method is applied to three model problems: the first order wave equation, the second order wave equation, and the inviscid Burgers equation. For the first two model problems our algorithm is shown to be three to five times more efficient (in computing time) than the use of a uniform coarse mesh, for the same accuracy. Furthermore, to our knowledge, our algorithm is the only one which adaptively treats time-dependent boundary conditions for hyperbolic systems.
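The Richardson-extrapolation error estimate that drives the refinement decisions can be sketched in a few lines. The Python example below advances the first-order wave equation (first-order upwind) two steps on a fine grid and one double-size step on a 2×-coarsened grid, and differences the results at shared points; this is a miniature of the three-step procedure, not the thesis's exact deferred-correction form, and the model problem and parameters are chosen for illustration.

```python
import numpy as np

def upwind_step(u, courant):
    """One first-order upwind step for u_t + u_x = 0 on a periodic grid."""
    return u - courant * (u - np.roll(u, 1))

def richardson_estimate(u0_fine, courant, order=1):
    """Richardson-style local error estimate: two fine steps vs one
    double-size coarse step at the same Courant number, differenced at
    the shared (coarse) points and scaled by 1 / (2**order - 1)."""
    fine = upwind_step(upwind_step(u0_fine, courant), courant)
    coarse = upwind_step(u0_fine[::2], courant)   # 2*dx and 2*dt together
    return (coarse - fine[::2]) / (2**order - 1)

x = np.linspace(0.0, 1.0, 200, endpoint=False)
est = richardson_estimate(np.sin(2 * np.pi * x), courant=0.5)
print(np.abs(est).max())   # small on this smooth profile; spikes would trigger refinement
```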
Adaptive spacetime method using Riemann jump conditions for coupled atomistic-continuum dynamics
Kraczek, B.; Miller, S.T.; Haber, R.B.; Johnson, D.D.
2010-03-20
We combine the Spacetime Discontinuous Galerkin (SDG) method for elastodynamics with the mathematically consistent Atomistic Discontinuous Galerkin (ADG) method in a new scheme that concurrently couples continuum and atomistic models of dynamic response in solids. The formulation couples non-overlapping continuum and atomistic models across sharp interfaces by weakly enforcing jump conditions, for both momentum balance and kinematic compatibility, using Riemann values to preserve the characteristic structure of the underlying hyperbolic system. Momentum balances hold to within machine-precision accuracy over every element, on each atom, and over the coupled system, with small, controllable energy dissipation in the continuum region that ensures numerical stability. When implemented on suitable unstructured spacetime grids, the continuum SDG model offers linear computational complexity in the number of elements and powerful adaptive analysis capabilities that readily bridge between atomic and continuum scales in both space and time. A special trace operator for the atomic velocities and an associated atomistic traction field enter the jump conditions at the coupling interface. The trace operator depends on parameters that specify, at the scale of the atomic spacing, the position of the coupling interface relative to the atoms. In a key finding, we demonstrate that optimizing these parameters suppresses spurious reflections at the coupling interface without the use of non-physical damping or special boundary conditions. We formulate the implicit SDG-ADG coupling scheme in up to three spatial dimensions, and describe an efficient iterative solution scheme that outperforms common explicit schemes, such as the Velocity Verlet integrator. Numerical examples, in 1d×time and employing both linear and nonlinear potentials, demonstrate the performance of the SDG-ADG method and show how adaptive spacetime meshing reconciles disparate time steps and resolves atomic-scale signals in
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.
2000-01-01
Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrates that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
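The space-filling-curve partitioning idea can be sketched in a few lines of Python: give each cell a curve key, sort, and cut the resulting 1-D ordering into equal chunks, which tends to keep each chunk spatially compact. The abstract does not state which curve family the solver uses, so the Morton (Z-order) curve here is an assumption and the helper names are illustrative.

```python
def morton_key(ix, iy, iz, bits=10):
    """Interleave the bits of 3-D cell indices to form a Morton (Z-order)
    key, one simple space-filling curve usable for on-the-fly partitioning."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

def partition(cells, n_parts):
    """Sort cells along the curve, then cut the 1-D ordering into
    near-equal contiguous chunks (one per processor)."""
    order = sorted(cells, key=lambda c: morton_key(*c))
    size = -(-len(order) // n_parts)                 # ceiling division
    return [order[i * size:(i + 1) * size] for i in range(n_parts)]

cells = [(x, y, z) for x in range(8) for y in range(8) for z in range(8)]
parts = partition(cells, 4)
print([len(p) for p in parts])   # 512 cells split into 4 chunks of 128
```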
A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines
Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus
2016-01-01
The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213
Zheng, Xiao-Min; Tao, Yun-Li; Chi, Hsin; Wan, Fang-Hao; Chu, Dong
2017-01-01
In this study, we evaluated the adaptability of the small brown planthopper (SBPH), Laodelphax striatellus (Hemiptera: Delphacidae) to four rice cultivars including Shengdao13 (SD13), Shengdao14 (SD14), Shengdao15 (SD15), and Zixiangnuo (ZXN) using the age-stage, two-sex life table with a simplified method for recording egg production (i.e., every five days vs. daily). The intrinsic rate of increase (r) of the SBPH was the highest (0.1067 d−1) on cultivar SD15, which was similar to the rate on SD14 (0.1029 d−1), but was significantly higher than that occurring on ZXN (0.0897 d−1) and SD13 (0.0802 d−1). The differences of the finite rate of increase (λ) on the four rice cultivars were consistent with the r values. Population projection predicted an explosive population growth of the SBPH occurring in a relatively short time when reared on SD14 and SD15. These findings demonstrated that the SBPH can successfully survive on the four rice cultivars, although there were varying host adaptabilities. PMID:28205522
An adaptive 6-DOF tracking method by hybrid sensing for ultrasonic endoscopes.
Du, Chengyang; Chen, Xiaodong; Wang, Yi; Li, Junwei; Yu, Daoyin
2014-06-06
In this paper, a novel hybrid sensing method for tracking an ultrasonic endoscope within the gastrointestinal (GI) tract is presented, and a prototype of the tracking system is also developed. We implement 6-DOF localization by sensing integration and information fusion. On the hardware level, a tri-axis gyroscope and accelerometer and a magnetic angular rate and gravity (MARG) sensor array are attached at the end of the endoscope, and three symmetric cylindrical coils are placed around the patient's abdomen. On the algorithm level, an adaptive fast quaternion convergence (AFQC) algorithm is introduced to determine the orientation by fusing inertial/magnetic measurements, in which the effects of magnetic disturbance and acceleration are estimated to obtain an adaptive convergence output. A simplified electromagnetic tracking (SEMT) algorithm for three-dimensional position is also implemented, which can easily integrate the AFQC's results and magnetic measurements. With reasonable settings, the average position error is under 0.3 cm, and the average orientation error is 1° without noise. If magnetic disturbance or acceleration exists, the average orientation error can be kept below 3.5°.
Jacobi-like method for a control algorithm in adaptive-optics imaging
NASA Astrophysics Data System (ADS)
Pitsianis, Nikos P.; Ellerbroek, Brent L.; Van Loan, Charles; Plemmons, Robert J.
1998-10-01
A study is made of a non-smooth optimization problem arising in adaptive optics, which involves the real-time control of a deformable mirror designed to compensate for atmospheric turbulence and other dynamic image degradation factors. One formulation of this problem yields a functional f(U) = Σ_{i=1}^{n} max_j [(U^T M_j U)_{ii}] to be maximized over orthogonal matrices U for a fixed collection of n × n symmetric matrices M_j. We first consider the situation, which can arise in practical applications, where the matrices M_j are nearly pairwise commutative. Besides giving useful bounds, results for this case lead to a simple corollary providing a theoretical closed-form solution for globally maximizing f if the M_j are simultaneously diagonalizable. However, even here conventional optimization methods for maximizing f are not practical in a real-time environment. The general optimization problem is quite difficult and is approached using a heuristic Jacobi-like algorithm. Numerical tests indicate that the algorithm provides an effective means to optimize performance for some important adaptive optics systems.
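The functional can be evaluated directly, and the closed-form corollary for the simultaneously diagonalizable case checked numerically: the common eigenvector matrix should score at least as high as any random orthogonal trial. The Python sketch below is an illustration of that stated result, not an implementation of the paper's Jacobi-like algorithm.

```python
import numpy as np

def f(U, Ms):
    """The adaptive-optics objective f(U) = sum_i max_j (U^T M_j U)_{ii}."""
    diags = np.array([np.diag(U.T @ M @ U) for M in Ms])  # shape (m, n)
    return diags.max(axis=0).sum()

rng = np.random.default_rng(0)
n = 5
# Simultaneously diagonalizable symmetric matrices: M_j = Q D_j Q^T.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
Ms = [Q @ np.diag(rng.normal(size=n)) @ Q.T for _ in range(3)]

# Closed-form optimum for the commuting case: U = Q, which for each common
# eigenvector picks up the largest eigenvalue across the M_j.
best = f(Q, Ms)
trial = max(f(np.linalg.qr(rng.normal(size=(n, n)))[0], Ms) for _ in range(200))
print(best, trial)   # best should dominate every random trial
```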
NASA Astrophysics Data System (ADS)
Zhou, Peng; Lu, Siliang; Liu, Fang; Liu, Yongbin; Li, Guihua; Zhao, Jiwen
2017-03-01
Stochastic resonance (SR), characterized by the fact that proper noise can be utilized to enhance weak periodic signals, has been widely applied in weak signal detection. SR acts as a nonlinear parameterized filter, and for a given deterministic input signal the output depends on the system parameters. The most commonly used index for parameter tuning in the SR procedure is the signal-to-noise ratio (SNR). However, using the SNR index to evaluate the denoising effect of SR quantitatively is insufficient when the target signal frequency cannot be estimated accurately. To address this issue, six different indexes, namely, the power spectral kurtosis of the SR output signal, the correlation coefficient between the SR output and the original signal, the peak SNR, the structural similarity, the root mean square error, and the smoothness, are constructed in this study to measure the SR output quantitatively. These six quantitative indexes are fused into a new synthetic quantitative index (SQI) via a back-propagation neural network to guide the adaptive parameter selection of the SR procedure. The index fusion procedure reduces the instability of each individual index and thus improves the robustness of parameter tuning. In addition, a genetic algorithm is utilized to quickly select the optimal SR parameters. The efficiency of bearing fault diagnosis is thus further improved. The effectiveness and efficiency of the proposed SQI-based adaptive SR method for bearing fault diagnosis are verified through numerical and experimental analyses.
Lesage, Adrien; Lelièvre, Tony; Stoltz, Gabriel; Hénin, Jérôme
2016-12-27
We report a theoretical description and numerical tests of the extended-system adaptive biasing force method (eABF), together with an unbiased estimator of the free energy surface from eABF dynamics. Whereas the original ABF approach uses its running estimate of the free energy gradient as the adaptive biasing force, eABF is built on the idea that the exact free energy gradient is not necessary for efficient exploration, and that it is still possible to recover the exact free energy separately with an appropriate estimator. eABF does not directly bias the collective coordinates of interest, but rather fictitious variables that are harmonically coupled to them; therefore it does not require second derivative estimates, making it easily applicable to a wider range of problems than ABF. Furthermore, the extended variables present a smoother, coarse-grain-like sampling problem on a mollified free energy surface, leading to faster exploration and convergence. We also introduce CZAR, a simple, unbiased free energy estimator from eABF trajectories. eABF/CZAR converges to the physical free energy surface faster than standard ABF for a wide range of parameters.
Adaptive model-based control systems and methods for controlling a gas turbine
NASA Technical Reports Server (NTRS)
Brunell, Brent Jerome (Inventor); Mathews, Jr., Harry Kirk (Inventor); Kumar, Aditya (Inventor)
2004-01-01
Adaptive model-based control systems and methods are described so that performance and/or operability of a gas turbine in an aircraft engine, power plant, marine propulsion, or industrial application can be optimized under normal, deteriorated, faulted, failed and/or damaged operation. First, a model of each relevant system or component is created, and the model is adapted to the engine. Then, if/when deterioration, a fault, a failure or some kind of damage to an engine component or system is detected, that information is input to the model-based control as changes to the model, constraints, objective function, or other control parameters. With all the information about the engine condition, and state and directives on the control goals in terms of an objective function and constraints, the control then solves an optimization so the optimal control action can be determined and taken. This model and control may be updated in real-time to account for engine-to-engine variation, deterioration, damage, faults and/or failures using optimal corrective control action command(s).
A Fast Variational Method for the Construction of Resolution Adaptive C(2)-Smooth Molecular Surfaces.
Bajaj, Chandrajit L; Xu, Guoliang; Zhang, Qin
2009-05-01
We present a variational approach to smooth molecular (protein, nucleic-acid) surface constructions, starting from atomic coordinates as available from the protein and nucleic-acid data banks. Molecular dynamics (MD) simulations, traditionally used in understanding protein and nucleic-acid folding processes, are based on molecular force fields and require smooth models of these molecular surfaces. To accelerate MD simulations, a popular methodology is to employ coarse-grained molecular models, which represent clusters of atoms with similar physical properties by pseudo-atoms, resulting in coarser-resolution molecular surfaces. We consider the generation of these mixed-resolution or adaptive molecular surfaces. Our approach starts from deriving a general-form second order geometric partial differential equation in the level-set formulation, by minimizing a first order energy functional which additionally includes a regularization term to minimize the occurrence of chemically infeasible molecular surface pockets or tunnel-like artifacts. To achieve even higher computational efficiency, a fast cubic B-spline C(2) interpolation algorithm is also utilized. A narrow-band, tri-cubic B-spline level-set method is then used to provide C(2)-smooth and resolution-adaptive molecular surfaces.
ERIC Educational Resources Information Center
Cheng, Ying
2010-01-01
This article proposes a new item selection method, namely, the modified maximum global discrimination index (MMGDI) method, for cognitive diagnostic computerized adaptive testing (CD-CAT). The new method captures two aspects of the appeal of an item: (a) the amount of contribution it can make toward adequate coverage of every attribute and (b) the…
Experimental Design and Primary Data Analysis Methods for Comparing Adaptive Interventions
ERIC Educational Resources Information Center
Nahum-Shani, Inbal; Qian, Min; Almirall, Daniel; Pelham, William E.; Gnagy, Beth; Fabiano, Gregory A.; Waxmonsky, James G.; Yu, Jihnhee; Murphy, Susan A.
2012-01-01
In recent years, research in the area of intervention development has been shifting from the traditional fixed-intervention approach to "adaptive interventions," which allow greater individualization and adaptation of intervention options (i.e., intervention type and/or dosage) over time. Adaptive interventions are operationalized via a sequence…
ERIC Educational Resources Information Center
Park, Eunjeong
2016-01-01
Despite the contribution to economic and social impact on the institutions in the United States, international students' academic adaptation has been always challenging. The study investigated international graduate students' academic adaptation scales via a survey questionnaire and explored how international students are academically adapted in…
Q-Learning: A Data Analysis Method for Constructing Adaptive Interventions
ERIC Educational Resources Information Center
Nahum-Shani, Inbal; Qian, Min; Almirall, Daniel; Pelham, William E.; Gnagy, Beth; Fabiano, Gregory A.; Waxmonsky, James G.; Yu, Jihnhee; Murphy, Susan A.
2012-01-01
Increasing interest in individualizing and adapting intervention services over time has led to the development of adaptive interventions. Adaptive interventions operationalize the individualization of a sequence of intervention options over time via the use of decision rules that input participant information and output intervention…
Evaluation of intrinsic respiratory signal determination methods for 4D CBCT adapted for mice
Martin, Rachael; Pan, Tinsu; Rubinstein, Ashley; Court, Laurence; Ahmad, Moiz
2015-01-15
Purpose: 4D CT imaging in mice is important in a variety of areas, including studies of lung function and tumor motion. A necessary step in 4D imaging is obtaining a respiratory signal, which can be done through an external system or intrinsically through the projection images. A number of methods have been developed that can successfully determine the respiratory signal from cone-beam projection images of humans; however, only a few have been utilized in a preclinical setting, and most of these rely on step-and-shoot style imaging. The purpose of this work is to assess, and adapt where necessary, several methods developed for humans for an image-guided preclinical radiation therapy system. Methods: Respiratory signals were determined from the projection images of free-breathing mice scanned on the X-RAD system using four methods: the so-called Amsterdam shroud method, a method based on the phase of the Fourier transform, a pixel intensity method, and a center of mass method. The Amsterdam shroud method was modified so the sharp inspiration peaks associated with anesthetized mouse breathing could be detected. Respiratory signals were used to sort projections into phase bins, and 4D images were reconstructed. Error and standard deviation in the assignment of phase bins for the four methods, compared to a manual method considered to be ground truth, were calculated for a range of region of interest (ROI) sizes. Qualitative comparisons were additionally made between the 4D images obtained using each of the methods and the manual method. Results: 4D images were successfully created for all mice with each of the respiratory signal extraction methods. Only minimal qualitative differences were noted between each of the methods and the manual method. The average error (and standard deviation) in phase bin assignment was 0.24 ± 0.08 (0.49 ± 0.11) phase bins for the Fourier transform method, 0.09 ± 0.03 (0.31 ± 0.08) phase bins for the modified Amsterdam shroud method, 0
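Of the four signal-extraction approaches, the pixel-intensity method is the simplest to sketch: average the intensity inside a fixed ROI of every projection and read the breathing cycle off the resulting trace. The Python example below does this on a synthetic moving-band phantom; the ROI placement and phantom are invented for illustration and this is not the X-RAD data pipeline.

```python
import numpy as np

def intensity_signal(projections, roi):
    """Pixel-intensity respiratory surrogate: mean intensity inside a
    fixed ROI of each projection."""
    r0, r1, c0, c1 = roi
    return np.array([p[r0:r1, c0:c1].mean() for p in projections])

# Synthetic phantom: a bright band whose position oscillates like breathing.
n_proj, h, w = 120, 64, 64
phase = 2 * np.pi * np.arange(n_proj) / 40.0          # 40 projections per breath
projections = np.zeros((n_proj, h, w))
for k in range(n_proj):
    row = int(32 + 10 * np.sin(phase[k]))              # diaphragm-like motion
    projections[k, row - 2:row + 2, :] = 1.0

# ROI over the lower half, so band motion modulates the mean intensity.
sig = intensity_signal(projections, (32, 64, 0, 64))
spec = np.abs(np.fft.rfft(sig - sig.mean()))
print(np.argmax(spec))   # dominant cycle count: 3 breaths over 120 projections
```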
SuBSENSE: a universal change detection method with local adaptive sensitivity.
St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert
2015-01-01
Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Moreover, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instructions, reached real-time processing speed on a mid-level desktop CPU. A complete C++ implementation based on OpenCV is available online.
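The pixel-level feedback idea can be illustrated with a heavily simplified grayscale sketch. The real method uses spatiotemporal binary features and several coupled feedback maps; here a single intensity-distance test stands in for the change test, and the update constants are arbitrary assumptions.

```python
import numpy as np

def update_segmentation(frame, bg_model, R, T, r_step=0.05):
    """One step of a simplified pixel-feedback background subtractor.

    frame, bg_model: 2-D grayscale arrays; R: per-pixel distance thresholds;
    T: per-pixel update-rate map (higher T -> slower model adaptation).
    Returns the foreground mask and the updated R, T maps.
    """
    dist = np.abs(frame.astype(float) - bg_model)
    fg = dist > R                      # pixel-level change test
    # feedback: widen thresholds where change is detected, tighten elsewhere
    R = np.where(fg, R * (1 + r_step), np.maximum(R * (1 - r_step), 10.0))
    # slow down model updates in active regions, speed them up in calm ones
    T = np.where(fg, np.minimum(T + 1.0, 256.0), np.maximum(T - 0.5, 2.0))
    return fg, R, T
```

Run per frame, this loop lets each pixel settle on its own sensitivity without any frame-wide constants, which is the core idea the abstract describes.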
An efficient contents-adaptive backlight control method for mobile devices
NASA Astrophysics Data System (ADS)
Chen, Qiao Song; Yan, Ya Xing; Zhang, Xiao Mou; Cai, Hua; Deng, Xin; Wang, Jin
2015-03-01
For most mobile devices with a large screen, image quality and power consumption are two of the major factors affecting consumers' preference. A contents-adaptive backlight control (CABC) method can be utilized to adjust the backlight and improve the performance of mobile devices. Unlike previous works, which mostly focus on the reduction of power consumption, both image quality and power consumption are taken into account in the proposed method. First, the region of interest (ROI) is detected to divide the image into two parts: ROI and non-ROI. Then, three attributes, including entropy, luminance, and saturation information in the ROI, are calculated. To achieve high perceived image quality in mobile devices, the optimal backlight value can be calculated as a linear combination of the aforementioned attributes. Coefficients of the linear combination are determined by applying linear regression to the subjective scores of human visual experiments and the objective values of the attributes. Based on the optimal backlight value, the displayed image data are brightened and the backlight is correspondingly dimmed to reduce backlight power consumption. Here, the ratios of increasing image data and decreasing backlight depend on the luminance information of the displayed image. The proposed method is also implemented in hardware. Experimental results indicate that the proposed technique exhibits better performance compared to conventional methods.
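The linear-combination step can be sketched directly. The weights and bias below are placeholders for the regression coefficients the authors fit to subjective scores, and the image is assumed to be an already-cropped ROI; none of the numeric constants come from the paper.

```python
import numpy as np

def backlight_level(image, weights=(0.4, 0.4, 0.2), bias=0.0):
    """Estimate a backlight level in [0, 1] from three ROI attributes:
    normalized histogram entropy, mean luminance, and mean saturation.

    weights/bias stand in for coefficients fitted by linear regression
    to subjective image-quality scores (assumed values, not the paper's).
    """
    img = image.astype(float) / 255.0
    lum = img.mean(axis=2)                       # crude luminance proxy
    # normalized Shannon entropy of the luminance histogram (8 bits max)
    hist, _ = np.histogram(lum, bins=256, range=(0, 1))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum() / 8.0
    # per-pixel (max - min) over channels as a simple saturation measure
    saturation = (img.max(axis=2) - img.min(axis=2)).mean()
    w_e, w_l, w_s = weights
    level = w_e * entropy + w_l * lum.mean() + w_s * saturation + bias
    return float(np.clip(level, 0.0, 1.0))
```

The pixel data would then be scaled up by the inverse of this level (clipped) so perceived brightness is roughly preserved while the backlight draws less power.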
An Adaptive and Implicit Immersed Boundary Method for Cardiovascular Device Modeling
NASA Astrophysics Data System (ADS)
Bhalla, Amneet Pal S.; Griffith, Boyce E.
2015-11-01
Computer models and numerical simulations are playing an increasingly important role in understanding the mechanics of fluid-structure interaction (FSI) in cardiovascular devices. To model cardiac devices realistically, there is a need to solve the classical fluid-structure interaction equations efficiently. Peskin's explicit immersed boundary (IB) method is one such approach to modeling the FSI equations efficiently for elastic structures. However, in the presence of rigid structures the IB method faces a severe timestep restriction. To overcome this limitation, we are developing an implicit version of the immersed boundary method on adaptive Cartesian grids. Higher grid resolution is employed in spatial regions occupied by the structure, while relatively coarser discretization is used elsewhere. The resulting discrete system is solved using a geometric multigrid solver for the combined Stokes and elasticity operators. We use a rediscretization approach for standard finite difference approximations to the divergence, gradient, and viscous stress. In contrast, coarse grid versions of the Eulerian elasticity operator are constructed via a Galerkin approach. The implicit IB method is tested for a pulse duplicator cardiac device system that consists of both rigid mountings and an elastic membrane.
A Time-Adaptive Integrator Based on Radau Methods for Advection Diffusion Reaction PDEs
NASA Astrophysics Data System (ADS)
Gonzalez-Pinto, S.; Perez-Rodriguez, S.
2009-09-01
The numerical integration of time-dependent PDEs, especially of Advection Diffusion Reaction type, for two and three spatial variables (in short, 2D and 3D problems) in the MoL framework is considered. The spatial discretization is carried out using Finite Differences, and the time integration by means of the L-stable, third-order formula known as the two-stage Radau IIA method. The key point in solving the resulting large-dimensional ODE systems is not to iterate the stage values of the Radau method to convergence (convergence is very slow on the stiff components), but to perform only a few iterations and take the last computed stage value as the advancing solution. The iterations are carried out using Approximate Matrix Factorization (AMF) coupled to a Newton-type iteration (SNI) as indicated in [5], which yields an acceptably cheap iteration, similar to the Alternating Direction Implicit (ADI) methods of Peaceman and Rachford (1955). Some stability results for the whole (AMF)-(SNI) process and a local error estimate for adaptive time integration are also given. Numerical results on two standard PDEs are presented and some conclusions about our method and other well-known solvers are drawn.
NASA Technical Reports Server (NTRS)
Kopasakis, George
2005-01-01
This year, an improved adaptive-feedback control method was demonstrated that suppresses thermoacoustic instabilities in a liquid-fueled combustor of a type used in aircraft engines. Extensive research has been done to develop lean-burning (low fuel-to-air ratio) combustors that can reduce emissions throughout the mission cycle to reduce the environmental impact of aerospace propulsion systems. However, these lean-burning combustors are susceptible to thermoacoustic instabilities (high-frequency pressure waves), which can fatigue combustor components and even downstream turbine blades. This can significantly decrease the safe operating life of the combustor and turbine. Thus, suppressing the thermoacoustic combustor instabilities is an enabling technology for meeting the low-emission goals of the NASA Ultra-Efficient Engine Technology (UEET) Project.
Natarajan, Annamalai; Angarita, Gustavo; Gaiser, Edward; Malison, Robert; Ganesan, Deepak; Marlin, Benjamin M.
2016-01-01
Mobile health research on illicit drug use detection typically involves a two-stage study design where data to learn detectors is first collected in lab-based trials, followed by a deployment to subjects in a free-living environment to assess detector performance. While recent work has demonstrated the feasibility of wearable sensors for illicit drug use detection in the lab setting, several key problems can limit lab-to-field generalization performance. For example, lab-based data collection often has low ecological validity, the ground-truth event labels collected in the lab may not be available at the same level of temporal granularity in the field, and there can be significant variability between subjects. In this paper, we present domain adaptation methods for assessing and mitigating potential sources of performance loss in lab-to-field generalization and apply them to the problem of cocaine use detection from wearable electrocardiogram sensor data. PMID:28090605
NASA Astrophysics Data System (ADS)
Sachau, Delf; Baschke, Manuel
2017-04-01
Acoustic transmissibility of aircraft panels is measured in full-scale test rigs. The panels are supported at their frames. These boundary conditions do not take into account the dynamic influence of the fuselage, which is significant in the frequency range below 300 Hz. This paper introduces a new adaptive boundary system (ABS). It combines accelerometers and electrodynamic shakers with real-time signal processing. The ABS considers the dynamic effect of the fuselage on the panel. The frames are dominating the dynamic behaviour of a fuselage in the low-frequency range. Therefore, the new method is applied to a beam representing a frame of the aircraft structure. The experimental results are evaluated and the precision of the ABS is discussed. The theoretical apparent mass representing the cut-off part of a frame is calculated and compared with the apparent mass, as provided by the ABS. It is explained how the experimental set-up limits the precision of the ABS.
An Adaptive Sensor Data Segments Selection Method for Wearable Health Care Services.
Chen, Shih-Yeh; Lai, Chin-Feng; Hwang, Ren-Hung; Lai, Ying-Hsun; Wang, Ming-Shi
2015-12-01
As cloud computing and wearable device technologies mature, related services have grown more and more popular in recent years. Healthcare is one of the popular application fields for this technology, adopting wearable devices to sense signals of negative physiological events and to notify users. The development and implementation of long-term healthcare monitoring that can prevent, or quickly respond to, the occurrence of disease and accidents presents an interesting challenge given computing power and energy limits. This study proposes an adaptive sensor data segment selection method for wearable health care services, considering the sensing frequency of the various signals from the human body as well as the data transmission among the devices. The healthcare service regulates the sensing frequency of devices by considering the overall cloud computing environment and the sensing variations of wearable health care services. The experimental results show that the proposed service can effectively transmit the sensing data and prolong the overall lifetime of health care services.
Predictive wind turbine simulation with an adaptive lattice Boltzmann method for moving boundaries
NASA Astrophysics Data System (ADS)
Deiterding, Ralf; Wood, Stephen L.
2016-09-01
Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and that are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The paper describes the employed computational techniques and presents validation simulations for the Mexnext benchmark experiments as well as simulations of the wake propagation in the Scaled Wind Farm Technology (SWIFT) array consisting of three Vestas V27 turbines in triangular arrangement.
A Review on Effectiveness and Adaptability of the Design-Build Method
NASA Astrophysics Data System (ADS)
Kudo, Masataka; Miyatake, Ichiro; Baba, Kazuhito; Yokoi, Hiroyuki; Fueta, Toshiharu
In the Ministry of Land, Infrastructure, Transport and Tourism (MLIT), various approaches have been taken for efficient implementation of public works projects, one of which is the ongoing use of the design-build method on a trial basis, as a means to utilize the technical skills and knowledge of private companies. In 2005, MLIT further introduced the advanced technical proposal type, a kind of comprehensive evaluation method, as part of its efforts to improve tendering and contracting systems. Meanwhile, although positive effects of the design-build method have been reported, they have not been widely publicized, which may be one of the reasons that the number of MLIT projects using the design-build method is declining year by year. In this context, this paper discusses the results and review of a study concerning the extent of flexibility allowed for the process and design (proposal) of public works projects, and follow-up surveys of actual test case projects, conducted as basic research to examine measures to expand and promote the use of the design-build method. The study objects were selected from tunnel construction projects using the shield tunneling method for developing a common utility duct, and bridge construction projects ordering construction of superstructure work and substructure work in a single contract. In presenting the results and review of the studies, the structures and the temporary installations were examined separately, and the effectiveness and adaptability of the design-build method was discussed for each, respectively.
1984-06-01
Reaction-diffusion processes occur in many branches of biology and physical chemistry, and the method of lines is used to model reaction-diffusion phenomena. The primary goal of this adaptive method is to keep a particular norm of the space discretization error below a prescribed tolerance. (AD-A142 253, "An Adaptive Method of Lines with Error Control," Babuska et al., Institute for Physical Science and Technology)
NASA Astrophysics Data System (ADS)
Jin, Seung-Seop; Jung, Hyung-Jo
2014-03-01
It is well known that the dynamic properties of a structure, such as natural frequencies, depend not only on damage but also on environmental conditions (e.g., temperature). The variation in the dynamic characteristics of a structure due to environmental conditions may mask damage to the structure. Without taking changes in environmental condition into account, false-positive or false-negative damage diagnoses may occur, making structural health monitoring unreliable. In order to address this problem, an approach that constructs a regression model based on structural responses considering environmental factors has commonly been used by many researchers. The key to the success of this approach is the formulation of the input and output variables of the regression model so as to take the environmental variations into account. However, it is quite challenging to determine in advance proper environmental variables and measurement locations that fully represent the relationship between the structural responses and the environmental variations. One alternative (i.e., novelty detection) is to remove the variations caused by environmental factors from the structural responses by using multivariate statistical analysis (e.g., principal component analysis (PCA), factor analysis, etc.). The success of this method depends strongly on the accuracy of the description of the normal condition. Generally, there is no prior information on the normal condition during data acquisition, so the normal condition is determined subjectively, with human intervention. The proposed method is a novel adaptive multivariate statistical analysis for monitoring structural damage under environmental change. One advantage of this method is the ability of generative learning to capture the intrinsic characteristics of the normal condition. The proposed method is tested on numerically simulated data for a range of measurement noise levels under environmental variation. A comparative
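The novelty-detection alternative mentioned above can be sketched with a standard PCA construction: principal components learned on normal-condition data absorb the environmentally driven variation, and the residual of projecting new responses onto that subspace serves as a damage-sensitive feature. This is the generic textbook version, not the adaptive procedure the abstract proposes.

```python
import numpy as np

def novelty_residual(baseline, test, n_components=2):
    """PCA-based novelty detection sketch.

    baseline: (n_samples, n_features) normal-condition responses;
    test: (m_samples, n_features) new responses to score.
    Returns per-sample residual norms; large values flag novelty/damage.
    """
    mu = baseline.mean(axis=0)
    B = baseline - mu
    # principal directions via SVD of the centered baseline data
    _, _, Vt = np.linalg.svd(B, full_matrices=False)
    P = Vt[:n_components]                      # (k, d) loading matrix
    X = test - mu
    recon = X @ P.T @ P                        # projection onto PC subspace
    return np.linalg.norm(X - recon, axis=1)   # per-sample residual norm
```

A threshold on this residual (e.g., a quantile of baseline residuals) would then separate environmental variation, which stays in the subspace, from structural change, which does not.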
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-01
A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to extract weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006
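The frequency-domain similarity removal at the heart of ASTF can be illustrated with a toy filter. The actual method uses a formal statistical hypothesis test with a PSO-tuned significance level; this sketch substitutes a simple magnitude-ratio test, and alpha is an assumed tuning constant.

```python
import numpy as np

def astf_filter(signal, noise_ref, alpha=2.0):
    """Toy statistic-test-style filter: compare the magnitude spectrum
    of the measured signal with that of a reference noise record, and
    zero out frequency bins that do not stand out above the noise.

    alpha plays the role of the significance threshold (here a plain
    magnitude ratio rather than the paper's hypothesis test).
    """
    S = np.fft.rfft(signal)
    N = np.fft.rfft(noise_ref)
    keep = np.abs(S) > alpha * np.abs(N)   # retain bins that exceed noise
    return np.fft.irfft(np.where(keep, S, 0), n=len(signal))
```

On a noisy signal containing a weak periodic fault component, the surviving bins are dominated by that component, which is the "weak fault feature under background noise" the abstract refers to.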
Automatic barcode recognition method based on adaptive edge detection and a mapping model
NASA Astrophysics Data System (ADS)
Yang, Hua; Chen, Lianzheng; Chen, Yifan; Lee, Yong; Yin, Zhouping
2016-09-01
An adaptive edge detection and mapping (AEDM) algorithm to address the challenging one-dimensional barcode recognition task with the existence of both image degradation and barcode shape deformation is presented. AEDM is an edge detection-based method that has three consecutive phases. The first phase extracts the scan lines from a cropped image. The second phase involves detecting the edge points in a scan line. The edge positions are assumed to be the intersecting points between a scan line and a corresponding well-designed reference line. The third phase involves adjusting the preliminary edge positions to more reasonable positions by employing prior information of the coding rules. Thus, a universal edge mapping model is established to obtain the coding positions of each edge in this phase, followed by a decoding procedure. The Levenberg-Marquardt method is utilized to solve this nonlinear model. The computational complexity and convergence analysis of AEDM are also provided. Several experiments were implemented to evaluate the performance of AEDM algorithm. The results indicate that the efficient AEDM algorithm outperforms state-of-the-art methods and adequately addresses multiple issues, such as out-of-focus blur, nonlinear distortion, noise, nonlinear optical illumination, and situations that involve the combinations of these issues.
Structural break detection method based on the Adaptive Regression Splines technique
NASA Astrophysics Data System (ADS)
Kucharczyk, Daniel; Wyłomańska, Agnieszka; Zimroz, Radosław
2017-04-01
For many real data sets, a long-term observation consists of different processes that coexist or occur one after the other. These processes very often exhibit different statistical properties, and thus the observed data should be segmented before further analysis. This problem arises in many applications, and therefore new segmentation techniques have appeared in the literature in recent years. In this paper we propose a new method of time series segmentation, i.e. the extraction from the analysed vector of observations of homogeneous parts with similar behaviour. This method is based on the absolute deviation about the median of the signal and is an extension of previously proposed techniques also based on simple statistics. We introduce a structural break point detection method based on the Adaptive Regression Splines technique, a form of regression analysis. Moreover, we also propose a statistical test which allows testing hypotheses about behaviour related to different regimes. First, we apply the methodology to simulated signals with different distributions in order to show the effectiveness of the new technique. Next, in the application part, we analyse a real data set representing the vibration signal from a heavy-duty crusher used in a mineral processing plant.
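The deviation-about-the-median statistic lends itself to a compact illustration. The sketch below is a simplified stand-in: a moving-average smoother and a largest-jump search replace the regression-spline fit of the paper, and the window size is an assumed parameter.

```python
import numpy as np

def break_point(signal, window=50):
    """Locate a single structural break as the sample where the absolute
    deviation about the median changes most sharply.

    The deviation statistic is smoothed with a moving average, and the
    break is taken where the means of adjacent windows differ the most.
    """
    x = np.asarray(signal, dtype=float)
    dev = np.abs(x - np.median(x))             # deviation about the median
    kernel = np.ones(window) / window
    smooth = np.convolve(dev, kernel, mode="valid")
    n = len(smooth)
    best, best_gap = None, -np.inf
    for i in range(window, n - window):
        # difference between the left and right window means at i
        gap = abs(smooth[i:i + window].mean() - smooth[i - window:i].mean())
        if gap > best_gap:
            best, best_gap = i, gap
    return best
```

Because the statistic tracks dispersion rather than level, this detects regime changes in signal scale, e.g. a vibration signal whose amplitude jumps when the machine condition changes.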
Evaluating monitoring methods to guide adaptive management of a threatened amphibian (Litoria aurea)
Bower, Deborah S; Pickett, Evan J; Stockwell, Michelle P; Pollard, Carla J; Garnham, James I; Sanders, Madeleine R; Clulow, John; Mahony, Michael J
2014-01-01
Prompt detection of declines in abundance or distribution of populations is critical when managing threatened species that have high population turnover. Population monitoring programs provide the tools necessary to identify and detect decreases in abundance that will threaten the persistence of key populations and should occur in an adaptive management framework which designs monitoring to maximize detection and minimize effort. We monitored a population of Litoria aurea at Sydney Olympic Park over 5 years using mark–recapture, capture encounter, noncapture encounter, auditory, tadpole trapping, and dip-net surveys. The methods differed in the cost, time, and ability to detect changes in the population. Only capture encounter surveys were able to simultaneously detect a decline in the occupancy, relative abundance, and recruitment of frogs during the surveys. The relative abundance of L. aurea during encounter surveys correlated with the population size obtained from mark–recapture surveys, and the methods were therefore useful for detecting a change in the population. Tadpole trapping and auditory surveys did not predict overall abundance and were therefore not useful in detecting declines. Monitoring regimes should determine optimal survey times to identify periods where populations have the highest detectability. Once this has been achieved, capture encounter surveys provide a cost-effective method of effectively monitoring trends in occupancy, changes in relative abundance, and detecting recruitment in populations. PMID:24834332
Webster, Michael A.
2015-01-01
Sensory systems continuously mold themselves to the widely varying contexts in which they must operate. Studies of these adaptations have played a long and central role in vision science. In part this is because the specific adaptations remain a powerful tool for dissecting vision, by exposing the mechanisms that are adapting. That is, “if it adapts, it's there.” Many insights about vision have come from using adaptation in this way, as a method. A second important trend has been the realization that the processes of adaptation are themselves essential to how vision works, and thus are likely to operate at all levels. That is, “if it's there, it adapts.” This has focused interest on the mechanisms of adaptation as the target rather than the probe. Together both approaches have led to an emerging insight of adaptation as a fundamental and ubiquitous coding strategy impacting all aspects of how we see. PMID:26858985
Long-Time Convergence of an Adaptive Biasing Force Method: The Bi-Channel Case
NASA Astrophysics Data System (ADS)
Lelièvre, T.; Minoukadeh, K.
2011-10-01
We present convergence results for an adaptive algorithm to compute free energies, namely the adaptive biasing force (ABF) method (Darve and Pohorille in J Chem Phys 115(20):9169-9183, 2001; Hénin and Chipot in J Chem Phys 121:2904, 2004). The free energy is the effective potential associated to a so-called reaction coordinate ξ(q), where q = (q1, …, q3N) is the position vector of an N-particle system. Computing free energy differences remains an important challenge in molecular dynamics due to the presence of metastable regions in the potential energy surface. The ABF method uses an on-the-fly estimate of the free energy to bias dynamics and overcome metastability. Using entropy arguments and logarithmic Sobolev inequalities, previous results have shown that the rate of convergence of the ABF method is limited by the metastable features of the canonical measures conditioned to being at fixed values of ξ (Lelièvre et al. in Nonlinearity 21(6):1155-1181, 2008). In this paper, we present an improvement on the existing results in the presence of such metastabilities, which is a generic case encountered in practice. More precisely, we study the so-called bi-channel case, where two channels along the reaction coordinate direction exist between an initial and final state, the channels being separated from each other by a region of very low probability. With hypotheses made on 'channel-dependent' conditional measures, we show on a bi-channel model, which we introduce, that the convergence of the ABF method is, in fact, not limited by metastabilities in directions orthogonal to ξ under two crucial assumptions: (i) exchange between the two channels is possible for some values of ξ and (ii) the free energy is a good bias in each channel. This theoretical result supports recent numerical experiments (Minoukadeh et al. in J Chem Theory Comput 6:1008-1017, 2010), where the efficiency of the ABF approach is demonstrated for such a multiple-channel situation.
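The on-the-fly bias the abstract refers to can be sketched as simple bookkeeping on a 1-D reaction coordinate: accumulate samples of the instantaneous force in bins of ξ and apply the negative of the running mean, so the free-energy gradient is progressively flattened. This is a schematic of the ABF idea only, not a full molecular dynamics implementation; all names are illustrative.

```python
import numpy as np

class ABFBias:
    """Minimal adaptive biasing force bookkeeping on a 1-D reaction
    coordinate: bin-wise running mean of the instantaneous force,
    whose negative is returned as the bias."""

    def __init__(self, xi_min, xi_max, n_bins):
        self.edges = np.linspace(xi_min, xi_max, n_bins + 1)
        self.force_sum = np.zeros(n_bins)
        self.counts = np.zeros(n_bins, dtype=int)

    def _bin(self, xi):
        return int(np.clip(np.searchsorted(self.edges, xi) - 1,
                           0, len(self.counts) - 1))

    def update(self, xi, instantaneous_force):
        b = self._bin(xi)
        self.force_sum[b] += instantaneous_force
        self.counts[b] += 1

    def bias_force(self, xi):
        b = self._bin(xi)
        if self.counts[b] == 0:
            return 0.0                         # no estimate yet: no bias
        return -self.force_sum[b] / self.counts[b]  # oppose the mean force
```

As sampling accumulates, the bias converges to minus the mean force, i.e. the free-energy gradient along ξ, which is what lets the dynamics cross metastable barriers.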
An adaptive MR-CT registration method for MRI-guided prostate cancer radiotherapy
NASA Astrophysics Data System (ADS)
Zhong, Hualiang; Wen, Ning; Gordon, James J.; Elshaikh, Mohamed A.; Movsas, Benjamin; Chetty, Indrin J.
2015-04-01
Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ cm-3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. In summary, the developed adaptive FEM method preserves the prostate volume
André, L; Pauss, A; Ribeiro, T
2017-03-01
The chemical oxygen demand (COD) is an essential parameter in waste management, particularly when monitoring wet anaerobic digestion processes. An adapted method to determine COD was developed for solid waste (total solids >15%). This method used commercial COD tubes and did not require sample dilution. A homemade plastic weighing support was used to transfer the solid sample into COD tubes. Potassium hydrogen phthalate and glucose used as standards showed an excellent repeatability. A small underestimation of the theoretical COD value (standard values around 5% lower than theoretical values) was also observed, mainly due to the intrinsic COD of the weighing support and to measurement uncertainties. The adapted COD method was tested using various solid wastes in the range of 1-8 mgCOD, determining the COD of dried and ground cellulose, cattle manure, straw and a mixed-substrate sample. This new adapted method could be used to monitor and design dry anaerobic digestion processes.
Method and apparatus for adaptive force and position control of manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1995-01-01
The described and improved multi-arm invention of this application presents three strategies for adaptive control of cooperative multi-arm robots which coordinate control over a common load. In the position-position control strategy, the adaptive controllers ensure that the end-effector positions of both arms track desired trajectories in Cartesian space despite unknown time-varying interaction forces exerted through a load. In the position-hybrid control strategy, the adaptive controller of one arm controls end-effector motions in the free directions and applied forces in the constraint directions; while the adaptive controller of the other arm ensures that the end-effector tracks desired position trajectories. In the hybrid-hybrid control strategy, the adaptive controllers ensure that both end-effectors track reference position trajectories while simultaneously applying desired forces on the load. In all three control strategies, the cross-coupling effects between the arms are treated as disturbances which are compensated for by the adaptive controllers while following desired commands in a common frame of reference. The adaptive controllers do not require the complex mathematical model of the arm dynamics or any knowledge of the arm dynamic parameters or the load parameters such as mass and stiffness. Circuits in the adaptive feedback and feedforward controllers are varied by novel adaptation laws.
Zhao, Guoliang; Li, Hongxing
2013-01-01
This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of proposed controllers consists in having a dynamical adaptive control gain to establish a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model. PMID:24453897
NASA Astrophysics Data System (ADS)
Messina, Riccardo; Noto, Antonio; Guizal, Brahim; Antezza, Mauro
2017-03-01
We calculate the radiative heat transfer between two identical metallic one-dimensional lamellar gratings. To this aim we present and exploit a modification to the widely used Fourier modal method, known as adaptive spatial resolution, based on a stretch of the coordinate associated with the periodicity of the grating. We first show that this technique dramatically improves the rate of convergence when calculating the heat flux, allowing us to explore smaller separations. We then present a study of heat flux as a function of the grating height, highlighting a remarkable amplification of the exchanged energy, ascribed to the appearance of spoof-plasmon modes, whose behavior is also spectrally investigated. Unlike previous works, our method allows us to explore a range of grating heights extending over several orders of magnitude. By comparing our results to recent studies, we find a consistent quantitative disagreement, reaching up to 50%, with some previously obtained results. In some cases, this disagreement is explained in terms of an incorrect connection between the reflection operators of the two gratings.
Adaptive and accurate color edge extraction method for one-shot shape acquisition
NASA Astrophysics Data System (ADS)
Yin, Wei; Cheng, Xiaosheng; Cui, Haihua; Li, Dawei; Zhou, Lei
2016-09-01
This paper presents an approach to extract accurate color edge information using encoded patterns in hue, saturation, and intensity (HSI) color space. This method is applied to one-shot shape acquisition. Theoretical analysis shows that the hue transition between primary and secondary colors at a color edge is based on light interference and diffraction. We set up a color transition model to describe the hue transition at an edge and then define the segmenting position of two stripes. By setting up an adaptive HSI color space, the colors of the stripes and subpixel edges are obtained precisely, without a dark laboratory environment, using a low-cost processing algorithm. Since this method does not place any constraints on the colors of neighboring stripes, encoding is an easy procedure. The experimental results show that the edges of dense modulation patterns can be obtained under complicated environment illumination, with precision sufficient to ensure that the three-dimensional shape of the object is obtained reliably from only one image.
NASA Astrophysics Data System (ADS)
Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai
2015-09-01
Integration time and reference intensity are important factors for achieving a high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method for the reference intensity of an OCT setup. The reference intensity is automatically controlled by tilting the beam position using a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensity and variables in color spaces using false-color mapping. The system then increases or decreases the reference intensity following the map data to optimize it with a given algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, enabled the integration time to be changed without manual recalibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. Also, SNR and sensitivity could be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid in the optimization of SNR and sensitivity for optical coherence tomography systems.
Adapting phase-switch Monte Carlo method for flexible organic molecules
NASA Astrophysics Data System (ADS)
Bridgwater, Sally; Quigley, David
2014-03-01
The role of cholesterol in lipid bilayers has been widely studied via molecular simulation; however, there has been relatively little work on crystalline cholesterol in biological environments. Recent work has linked the crystallisation of cholesterol in the body with heart attacks and strokes. Any attempt to model this process will require new models and advanced sampling methods to capture and quantify the subtle polymorphism of solid cholesterol, in which two crystalline phases are separated by a phase transition close to body temperature. To this end, we have adapted phase-switch Monte Carlo for use with flexible molecules, to calculate the free energy between crystal polymorphs to a high degree of accuracy. The method samples an order parameter which divides the displacement space of the N molecules into regions energetically favourable for each polymorph; this space is traversed using biased Monte Carlo. Results for a simple model of butane will be presented, demonstrating that conformational flexibility can be correctly incorporated within a phase-switching scheme. Extension to a coarse-grained model of cholesterol and the resulting free energies will be discussed.
Long-time atomistic dynamics through a new self-adaptive accelerated molecular dynamics method
NASA Astrophysics Data System (ADS)
Gao, N.; Yang, L.; Gao, F.; Kurtz, R. J.; West, D.; Zhang, S.
2017-04-01
A self-adaptive accelerated molecular dynamics method is developed to model infrequent atomic-scale events, especially those that occur on a rugged free-energy surface. Key to the new development is the use of the total displacement of the system at a given temperature to construct a boost-potential, which is slowly increased to accelerate the dynamics. By allowing the system to evolve from one steady-state configuration to another by overcoming the transition state, this self-evolving approach makes it possible to explore the coupled motion of species that migrate on vastly different time scales. The migration of single vacancies (V) and small He-V clusters, and the growth of nano-sized He-V clusters in Fe for times on the order of seconds, are studied with this new method. An interstitial-assisted mechanism is first explored for the migration of a helium-rich He-V cluster, while a new two-component Ostwald ripening mechanism is suggested for He-V cluster growth.
NASA Astrophysics Data System (ADS)
Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.
2017-04-01
In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in relative accuracy of the 'exact' adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is ≈100× more accurate than uniform refinement for the same amount of computational effort for a 67 group deep penetration shielding problem.
Ying Chen; Shao-Jing Dong; Terrence Draper; Ivan Horvath; Keh-Fei Liu; Nilmani Mathur; Sonali Tamhankar; Cidambi Srinivasan; Frank X. Lee; Jianbo Zhang
2004-05-01
We introduce the "Sequential Empirical Bayes Method", an adaptive constrained-curve fitting procedure for extracting reliable priors. These are then used in standard augmented-χ² fits on separate data. This better stabilizes fits to lattice QCD overlap-fermion data at very low quark mass where a priori values are not otherwise known. Lessons learned (including caveats limiting the scope of the method) from studying artificial data are presented. As an illustration, from local-local two-point correlation functions, we obtain masses and spectral weights for ground and first-excited states of the pion, give preliminary fits for the a₀ where ghost states (a quenched artifact) must be dealt with, and elaborate on the details of fits of the Roper resonance and S₁₁(N 1/2⁻) previously presented elsewhere. The data are from overlap fermions on a quenched 16³ × 28 lattice with spatial size La = 3.2 fm and pion mass as low as ≈180 MeV.
Huang, W.; Zheng, Lingyun; Zhan, X.
2002-01-01
Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost, when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley & Sons, Ltd.
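As an illustrative aside, the core idea of concentrating mesh points where a monitor function is large can be sketched in one dimension with a static equidistribution step (the paper solves a moving mesh PDE; this simpler algebraic analogue, with an arc-length monitor and arbitrary grid sizes, is only a minimal sketch):

```python
import numpy as np

def equidistribute(x_old, monitor, n_nodes):
    """Relocate mesh nodes so each cell carries an equal share of the
    integral of the monitor function (static equidistribution)."""
    # Cumulative integral of the monitor via the trapezoidal rule
    cell = 0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x_old)
    c = np.concatenate(([0.0], np.cumsum(cell)))
    # Invert the cumulative map at equally spaced levels
    levels = np.linspace(0.0, c[-1], n_nodes)
    return np.interp(levels, c, x_old)

# A sharp front at x = 0.5: the relocated nodes should cluster around it.
x = np.linspace(0.0, 1.0, 401)
u = np.tanh(50.0 * (x - 0.5))
monitor = np.sqrt(1.0 + np.gradient(u, x) ** 2)   # arc-length monitor
x_new = equidistribute(x, monitor, 41)
```

With this monitor the spacing of `x_new` shrinks near the front and widens where the solution is flat, which is exactly the behaviour that lets sharp fronts be resolved with few grid points.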
Collard, France; Gilbert, Bernard; Eppe, Gauthier; Parmentier, Eric; Das, Krishna
2015-10-01
Microplastic particles (MP) contaminate oceans and affect marine organisms in several ways. Ingestion combined with food intake is generally reported. However, data interpretation is often hampered by the difficulty of separating MP from bulk samples. Visual examination is often used as one step, or the only step, to sort these particles; however, color, size, and shape are insufficient and often unreliable criteria. We present an extraction method based on hypochlorite digestion and isolation of MP from the membrane by sonication. The protocol is especially well adapted to subsequent analysis by Raman spectroscopy. The method avoids fluorescence problems, allowing better identification of anthropogenic particles (AP) from the stomach contents of fish by Raman spectroscopy. It was developed with commercial samples of microplastics and cotton along with stomach contents from three different Clupeiformes fishes: Clupea harengus, Sardina pilchardus, and Engraulis encrasicolus. The optimized digestion and isolation protocol showed no visible impact on microplastic and cotton particles, while the Raman spectra allowed the precise identification of microplastics and textile fibers. Thirty-five particles were isolated from nine fish stomach contents. Raman analysis confirmed 11 microplastics and 13 fibers mainly made of cellulose or lignin. Some particles were not completely identified but contained artificial dyes. The novel approach developed in this manuscript should help to assess the presence, quantity, and composition of AP in planktivorous fish stomachs.
NASA Astrophysics Data System (ADS)
Schaefer, Andreas M.; Daniell, James E.; Wenzel, Friedemann
2017-03-01
Earthquake clustering is an essential part of almost any statistical analysis of spatial and temporal properties of seismic activity. The nature of earthquake clusters and subsequent declustering of earthquake catalogues plays a crucial role in determining the magnitude-dependent earthquake return period and its respective spatial variation for probabilistic seismic hazard assessment. This study introduces the Smart Cluster Method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal cluster identification. It utilises the magnitude-dependent spatio-temporal earthquake density to adjust the search properties, subsequently analyses the identified clusters to determine directional variation and adjusts its search space with respect to directional properties. In the case of rapid subsequent ruptures like the 1992 Landers sequence or the 2010-2011 Darfield-Christchurch sequence, a reclassification procedure is applied to disassemble subsequent ruptures using near-field searches, nearest neighbour classification and temporal splitting. The method is capable of identifying and classifying earthquake clusters in space and time. It has been tested and validated using earthquake data from California and New Zealand. A total of more than 1500 clusters have been found in both regions since 1980 with Mmin = 2.0. Utilising the knowledge of cluster classification, the method has been adjusted to provide an earthquake declustering algorithm, which has been compared to existing methods; its performance is comparable to established methodologies. The analysis of earthquake clustering statistics has led to various new and updated correlation functions, e.g. for ratios between mainshock and strongest aftershock and for general aftershock activity metrics.
NASA Astrophysics Data System (ADS)
Li, Meng; Huang, Zhonghua
2016-10-01
Signal processing for an ultra-wideband radio fuze receiver involves several challenges: it requires high real-time performance; the output signal is mixed with broadband noise; and the signal-to-noise ratio (SNR) decreases with increased detection range. The adaptive line enhancement method is used to filter the output signal of the ultra-wideband radio fuze receiver, suppressing the wideband noise and extracting the target characteristic signal. The filter input correlation matrix estimation algorithm is based on the delay factor of the adaptive line enhancer. The proposed adaptive algorithm was used to filter and reduce noise in the output signal from the fuze receiver. Simulation results showed that adaptive noise reduction improved the SNR of the output signal by 20 dB, compared with around 10 dB for finite impulse response (FIR) filtering.
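For illustration, the delay-and-predict structure of an adaptive line enhancer can be sketched with a plain LMS update (a minimal sketch; the delay, filter length, and step size below are arbitrary choices, not the values used in the paper):

```python
import numpy as np

def adaptive_line_enhancer(x, delay=8, taps=32, mu=0.005):
    """LMS adaptive line enhancer: predicts the narrowband (correlated)
    component of x from a delayed copy, suppressing broadband noise."""
    w = np.zeros(taps)
    y = np.zeros_like(x)
    for n in range(delay + taps, len(x)):
        # Reference input: delayed samples (decorrelates broadband noise)
        u = x[n - delay - taps:n - delay][::-1]
        y[n] = w @ u                 # filter output: enhanced narrowband part
        e = x[n] - y[n]              # prediction error drives the LMS update
        w += 2.0 * mu * e * u
    return y

# A sinusoid buried in broadband noise: the enhancer should recover the tone.
rng = np.random.default_rng(0)
t = np.arange(4000)
clean = np.sin(2.0 * np.pi * 0.05 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)
enhanced = adaptive_line_enhancer(noisy)
```

The delayed reference stays correlated with the narrowband target but not with the broadband noise, so the predictor output keeps the tone and rejects the noise; after convergence, the error between `enhanced` and `clean` is well below that of the raw noisy signal.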
Parallel processing of Eulerian-Lagrangian, cell-based adaptive method for moving boundary problems
NASA Astrophysics Data System (ADS)
Kuan, Chih-Kuang
In this study, issues and techniques related to the parallel processing of the Eulerian-Lagrangian method for multi-scale moving boundary computation are investigated. The scope of the study consists of the Eulerian approach for field equations, explicit interface tracking, Lagrangian interface modification and reconstruction algorithms, and a cell-based unstructured adaptive mesh refinement (AMR) in a distributed-memory computation framework. We decomposed the Eulerian domain spatially along with AMR to balance the computational load of solving the field equations, which is the primary cost of the entire solver. The Lagrangian domain is partitioned based on marker vicinities with respect to the Eulerian partitions to minimize inter-processor communication. Overall, the performance of an Eulerian task peaks at 10,000-20,000 cells per processor, and this sets the upper bound on the performance of the Eulerian-Lagrangian method. Moreover, the load imbalance of the Lagrangian task influences the overall performance less than the communication overhead of the Eulerian-Lagrangian tasks. To assess the parallel processing capabilities, a high Weber number drop collision is simulated. The high convective-to-viscous length scale ratios result in disparate length scale distributions; together with the moving and topologically irregular interfaces, the computational tasks require temporally and spatially resolved treatment adaptively. The techniques presented enable us to perform original studies meeting such computational requirements. Coalescence, stretch, and break-up of satellite droplets due to the interfacial instability are observed in the current study, and the history of interface evolution is in good agreement with the experimental data. The competing mechanisms of the primary and secondary droplet break-up, along with the gas-liquid interfacial dynamics, are systematically investigated. This study shows that Rayleigh-Taylor instability on the edge of an extruding sheet
Adaptive multiresolution semi-Lagrangian discontinuous Galerkin methods for the Vlasov equations
NASA Astrophysics Data System (ADS)
Besse, N.; Deriaz, E.; Madaule, É.
2017-03-01
We develop adaptive numerical schemes for the Vlasov equation by combining discontinuous Galerkin discretisation, multiresolution analysis and semi-Lagrangian time integration. We implement a tree-based structure in order to achieve adaptivity. Both multi-wavelets and discontinuous Galerkin rely on a local polynomial basis. The schemes are tested and validated using the Vlasov-Poisson equations for plasma physics and astrophysics.
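As a sketch of the semi-Lagrangian ingredient, one time step for the 1D advection equation traces characteristics back and interpolates at the departure points (linear interpolation on a periodic grid here, rather than the paper's discontinuous Galerkin basis; the grid and step sizes are arbitrary choices):

```python
import numpy as np

def semi_lagrangian_step(u, c, dt, x):
    """One semi-Lagrangian step for u_t + c u_x = 0 on a periodic grid:
    trace characteristics back and interpolate at the departure points."""
    dx = x[1] - x[0]
    L = x[-1] - x[0] + dx                    # periodic domain length
    x_dep = (x - c * dt - x[0]) % L + x[0]   # departure points
    return np.interp(x_dep, x, u, period=L)

# Advect a Gaussian one full period: it should return near its start,
# slightly smeared by the repeated linear interpolation.
n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)
u = u0.copy()
for _ in range(50):
    u = semi_lagrangian_step(u, c=1.0, dt=1.0 / 50, x=x)
```

Because the characteristic foot is located exactly and only the interpolation is approximate, the scheme is unconditionally stable in the CFL sense, which is the property that makes semi-Lagrangian time integration attractive for Vlasov solvers.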
Guzik, S; McCorquodale, P; Colella, P
2011-12-16
A fourth-order accurate finite-volume method is presented for solving time-dependent hyperbolic systems of conservation laws on mapped grids that are adaptively refined in space and time. Novel considerations for formulating the semi-discrete system of equations in computational space combined with detailed mechanisms for accommodating the adapting grids ensure that conservation is maintained and that the divergence of a constant vector field is always zero (freestream-preservation property). Advancement in time is achieved with a fourth-order Runge-Kutta method.
AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov–Poisson equation
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin
2016-07-01
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
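For illustration, the weighted least-squares fit underlying a generalized finite difference (GFD) estimate can be sketched for first derivatives on a scattered point cloud (the inverse-distance weights, neighbour count, and test function are hypothetical choices, not details from the paper):

```python
import numpy as np

def gfd_gradient(xc, yc, xn, yn, fn, fc):
    """Estimate (fx, fy) at a centre node from scattered neighbours by a
    weighted least-squares fit of a first-order Taylor expansion."""
    dx, dy = xn - xc, yn - yc
    w = 1.0 / np.hypot(dx, dy)           # closer neighbours weigh more
    A = np.column_stack((dx, dy)) * w[:, None]
    b = (fn - fc) * w
    grad, *_ = np.linalg.lstsq(A, b, rcond=None)
    return grad

# Scattered cloud around the origin; f = 3x - 2y has gradient (3, -2).
rng = np.random.default_rng(1)
xn = rng.uniform(-0.1, 0.1, 12)
yn = rng.uniform(-0.1, 0.1, 12)
f = lambda x, y: 3.0 * x - 2.0 * y
grad = gfd_gradient(0.0, 0.0, xn, yn, f(xn, yn), f(0.0, 0.0))
```

Because the stencil is assembled from whatever neighbours happen to be present, such an operator needs no structured mesh, which is what makes GFD discretisation natural for adaptively selected particle clouds.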
NASA Astrophysics Data System (ADS)
Kouris, Charalampos; Dimakopoulos, Yannis; Georgiou, Georgios; Tsamopoulos, John
2002-05-01
A Galerkin/finite element and a pseudo-spectral method, in conjunction with the primitive (velocity-pressure) and streamfunction-vorticity formulations, are tested for solving the two-phase flow in a tube, which has a periodically varying, circular cross section. Two immiscible, incompressible, Newtonian fluids are arranged so that one of them is around the axis of the tube (core fluid) and the other one surrounds it (annular fluid). The physical and flow parameters are such that the interface between the two fluids remains continuous and single-valued. This arrangement is usually referred to as Core-Annular flow. A non-orthogonal mapping is used to transform the uneven tube shape and the unknown, time dependent interface to fixed, cylindrical surfaces. With both methods and formulations, steady states are calculated first using the Newton-Raphson method. The most dangerous eigenvalues of the related linear stability problem are calculated using the Arnoldi method, and dynamic simulations are carried out using the implicit Euler method. It is shown that with a smooth tube shape the pseudo-spectral method exhibits exponential convergence, whereas the finite element method exhibits algebraic convergence, albeit of higher order than expected from the relevant theory. Thus the former method, especially when coupled with the streamfunction-vorticity formulation, is much more efficient. The finite element method becomes more advantageous when the tube shape contains a cusp, in which case the convergence rate of the pseudo-spectral method deteriorates exhibiting algebraic convergence with the number of the axial spectral modes, whereas the convergence rate of the finite element method remains unaffected.
Real-Time Reconfigurable Adaptive Speech Recognition Command and Control Apparatus and Method
NASA Technical Reports Server (NTRS)
Salazar, George A. (Inventor); Haynes, Dena S. (Inventor); Sommers, Marc J. (Inventor)
1998-01-01
An adaptive speech recognition and control system, and a method for controlling various mechanisms and systems in response to spoken instructions, are discussed, in which spoken commands direct the system into appropriate memory nodes and to the appropriate memory templates corresponding to the voiced command. Spoken commands from any of a group of operators for which the system is trained may be identified, and voice templates are updated as required in response to changes in pronunciation and voice characteristics of any of those operators over time. Provisions are made both for near-real-time retraining of the system with respect to individual terms which are determined not to be positively identified, and for an overall system training and updating process in which recognition of each command and vocabulary term is checked, and in which the memory templates are retrained if necessary for respective commands or vocabulary terms with respect to the operator currently using the system. In one embodiment, the system includes input circuitry connected to a microphone and including signal processing and control sections for sensing the level of vocabulary recognition over a given period and, if recognition performance falls below a given level, processing audio-derived signals to enhance the recognition performance of the system.
An adaptive total variation image reconstruction method for speckles through disordered media
NASA Astrophysics Data System (ADS)
Gong, Changmei; Shao, Xiaopeng; Wu, Tengfei
2013-09-01
Multiple scattering of light in a highly disordered medium can break the diffraction limit of a conventional optical system when combined with an image reconstruction method. Once the transmission matrix of the imaging system is obtained, the target image can be reconstructed from its speckle pattern by an image reconstruction algorithm. Nevertheless, the restored image attained by common image reconstruction algorithms such as Tikhonov regularization has a relatively low signal-to-noise ratio (SNR) due to experimental noise and reconstruction noise, greatly reducing the quality of the resulting image. In this paper, the speckle pattern of the test image is simulated by combining light propagation theories and statistical optics theories. Subsequently, an adaptive total variation (ATV) algorithm, TVAL3 (TV minimization by augmented Lagrangian and alternating direction algorithms), is utilized to reconstruct the target image. Numerical simulation results show that the TVAL3 algorithm can effectively suppress the noise of the restored image and preserve more image details, thus greatly boosting the SNR of the restored image. They also indicate that, compared with the image directly formed by the 'clean' system, the reconstructed results can overcome the diffraction limit of that system, and are therefore conducive to the observation of cells, protein molecules, and other micro/nano-scale structures in biological tissues.
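As an aside, the Tikhonov baseline mentioned above amounts to solving a regularised normal system; a minimal sketch on a synthetic random "transmission matrix" (all sizes and the regularisation weight are hypothetical choices for illustration):

```python
import numpy as np

def tikhonov(A, y, lam):
    """Tikhonov-regularised reconstruction: argmin ||Ax - y||^2 + lam ||x||^2,
    solved in closed form via the regularised normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Synthetic speckle-like measurement y = A x + noise through a random medium.
rng = np.random.default_rng(2)
A = rng.standard_normal((80, 40))        # stand-in transmission matrix
x_true = np.zeros(40)
x_true[10:15] = 1.0                      # simple block target
y = A @ x_true + 0.1 * rng.standard_normal(80)
x_rec = tikhonov(A, y, lam=1.0)
```

The quadratic penalty damps noise amplification but also smooths edges uniformly, which is the shortcoming that total-variation methods such as TVAL3 address by penalising gradients instead of amplitudes.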
Experimental validation of a multi-energy x-ray adapted scatter separation method
NASA Astrophysics Data System (ADS)
Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.
2016-12-01
Both in radiography and computed tomography (CT), recently emerged energy-resolved x-ray photon counting detectors enable the identification and quantification of individual materials comprising the inspected object. However, the approaches used for these operations require highly accurate x-ray images. The accuracy of the images is severely compromised by the presence of scattered radiation, which leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in CT. The aim of the present study was to experimentally evaluate a recently introduced partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. For this purpose, a prototype x-ray system was used. Several radiographic acquisitions of an anthropomorphic thorax phantom were performed. Reference primary images were obtained via the beam-stop (BS) approach. The attenuation images acquired from PASSSA-corrected data showed a substantial increase in local contrast and internal structure contour visibility when compared to uncorrected images. A substantial reduction of scatter induced bias was also achieved. Quantitatively, the developed method proved to be in relatively good agreement with the BS data. The application of the proposed scatter correction technique lowered the initial normalized root-mean-square error (NRMSE) of 45% between the uncorrected total and the reference primary spectral images by a factor of 9, thus reducing it to around 5%.
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.; Nixon, David (Technical Monitor)
1998-01-01
This work presents a new on-the-fly domain decomposition technique for mapping grids and solution algorithms to parallel machines; it is applicable to both shared-memory and message-passing architectures. It will be demonstrated on the Cray T3E, HP Exemplar, and SGI Origin 2000, on all of which computing time has been secured. The decomposition technique is an outgrowth of techniques used in computational physics for simulations of N-body problems and the event horizons of black holes, and has not previously been used by the CFD community. Since the technique offers on-the-fly partitioning, it provides a substantial increase in flexibility for computing in heterogeneous environments, where the number of available processors may not be known at the time of job submission. In addition, since it is dynamic, it permits a job to be repartitioned without global communication when additional processors become available after the simulation has begun, or when dynamic mesh adaptation changes the mesh size during the course of a simulation. The platform for this partitioning strategy is a completely new Cartesian Euler solver targeted at parallel machines, which may be used in conjunction with Ames' "Cart3D" arbitrary geometry simulation package.
Fault detection method for railway wheel flat using an adaptive multiscale morphological filter
NASA Astrophysics Data System (ADS)
Li, Yifan; Zuo, Ming J.; Lin, Jianhui; Liu, Jianxin
2017-02-01
This study explores the capacity of morphological analysis for railway wheel flat fault detection. A dynamic model of the vehicle system with 56 degrees of freedom was set up, along with a wheel flat model, to calculate the dynamic responses of the axle box. The vehicle axle box vibration signal is complicated because it contains not only wheel defect information but also track condition information. Thus, how to effectively extract the influential features of the wheels from strong background noise is a key issue in railway wheel fault detection. In this paper, an adaptive multiscale morphological filtering (AMMF) algorithm is proposed, and its effect is evaluated on a simulated signal. The algorithm is then employed to study the axle box vibration caused by wheel flats, as well as the influence of track irregularity and vehicle running speed on the diagnosis results. Finally, the effectiveness of the proposed method is verified by bench testing. The results demonstrate that the AMMF effectively extracts the influential characteristics of axle box vibration signals and can diagnose wheel flat faults in real time.
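The paper's AMMF selects structuring-element scales adaptively; as a minimal, hedged illustration of the multiscale morphological filtering it builds on, the sketch below averages the open-closing and close-opening of a 1-D signal over a few fixed scales with flat structuring elements. All names and parameters are invented for the example.

```python
def erode(x, k):
    """Greyscale erosion: running minimum over a flat window of half-width k."""
    n = len(x)
    return [min(x[max(0, i - k):min(n, i + k + 1)]) for i in range(n)]

def dilate(x, k):
    """Greyscale dilation: running maximum over a flat window of half-width k."""
    n = len(x)
    return [max(x[max(0, i - k):min(n, i + k + 1)]) for i in range(n)]

def opening(x, k):
    return dilate(erode(x, k), k)

def closing(x, k):
    return erode(dilate(x, k), k)

def morph_filter(x, scales=(1, 2, 3)):
    """Average of open-closing and close-opening over several scales,
    a common way to suppress both positive and negative impulses."""
    out = [0.0] * len(x)
    for k in scales:
        oc = closing(opening(x, k), k)
        co = opening(closing(x, k), k)
        for i in range(len(x)):
            out[i] += 0.5 * (oc[i] + co[i]) / len(scales)
    return out

# A lone impulse (toy stand-in for a wheel-flat shock) is removed by the filter
x = [0.0] * 10 + [5.0] + [0.0] * 10
filtered = morph_filter(x)
```

In fault detection, the difference `x - filtered` isolates the impulsive content; the AMMF's contribution is choosing the scales adaptively rather than fixing them as done here.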
Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.
2006-01-01
Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.
An Adaptive Control Method for Ros-Drill Cellular Microinjector with Low-Resolution Encoder
Zhang, Zhenyu; Olgac, Nejat
2013-01-01
A novel control methodology which uses a low-resolution encoder is presented for a cellular microinjection technology called the Ros-Drill (rotationally oscillating drill). It is developed primarily for ICSI (intracytoplasmic sperm injection) operations, with the objective of generating a desired oscillatory motion at the tip of a micro glass pipette. It is an inexpensive setup, which creates high-frequency (higher than 500 Hz) and small-amplitude (around 0.2 deg) rotational oscillations at the tip of an injection pipette. These rotational oscillations enable the pipette to drill into cell membranes with minimum biological damage. Such a motion control procedure presents no particular difficulty when it uses sufficiently precise motion sensors. However, size, costs, and accessibility of technology to the hardware components severely constrain the sensory capabilities. Consequently, the control mission and the trajectory tracking are adversely affected. This paper presents two contributions: (a) a dedicated novel adaptive feedback control method to achieve a satisfactory trajectory tracking capability. We demonstrate via experiments that the tracking of the harmonic rotational motion is achieved with desirable fidelity; (b) some important analytical features and related observations associated with the controlled harmonic motion which is created by the low-resolution feedback control structure. PMID:27006914
System and method for the adaptive mapping of matrix data to sets of polygons
NASA Technical Reports Server (NTRS)
Burdon, David (Inventor)
2003-01-01
A system and method for converting bitmapped data, for example weather data or thermal imaging data, to polygons is disclosed. The conversion of the data into polygons creates smaller data files. The invention is adaptive in that it allows for a variable degree of fidelity in the polygons. Matrix data is obtained. A color value is obtained; the color value is a variable used in the creation of the polygons. A list of cells to check is determined based on the color value. The list of cells to check is examined to determine a boundary list. The boundary list is then examined to determine vertices. The determination of the vertices is based on a prescribed maximum distance. When drawn, the ordered list of vertices creates polygons which depict the cell data. The data files that contain the vertices of the polygons are much smaller than the corresponding cell data files. The fidelity of the polygon representation can be adjusted by repeating the logic with varying fidelity values to achieve a given maximum file size or a maximum number of vertices per polygon.
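The patent does not disclose its exact vertex-selection algorithm; a standard way to realize "vertices based on a prescribed maximum distance" is Ramer-Douglas-Peucker simplification, sketched here purely as an illustrative assumption, not as the patented method.

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def simplify(pts, max_dist):
    """Ramer-Douglas-Peucker: keep a vertex only if dropping it would move
    the boundary by more than max_dist (the prescribed maximum distance)."""
    if len(pts) < 3:
        return list(pts)
    idx, worst = 0, -1.0
    for i in range(1, len(pts) - 1):
        d = perp_dist(pts[i], pts[0], pts[-1])
        if d > worst:
            idx, worst = i, d
    if worst <= max_dist:
        return [pts[0], pts[-1]]
    left = simplify(pts[:idx + 1], max_dist)
    right = simplify(pts[idx:], max_dist)
    return left[:-1] + right

# A nearly straight boundary collapses to its endpoints...
pts = [(i, 2 * i + 0.05 * ((-1) ** i)) for i in range(20)]
s = simplify(pts, 0.2)
# ...while a genuine corner survives
pts2 = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
s2 = simplify(pts2, 0.1)
```

Raising `max_dist` lowers fidelity and shrinks the vertex list, matching the patent's described trade-off between file size and polygon fidelity.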
NASA Astrophysics Data System (ADS)
Liu, Huanlin; Xu, Yifan; Chen, Yong; Zhang, Mingjia
2016-09-01
With the development of point-to-multipoint applications, network resources are becoming scarcer and wavelength channels more crowded in optical networks. To improve bandwidth utilization, multicast routing algorithms based on network coding can greatly increase resource utilization, but it is difficult to maximize network throughput when the differences between the multicast receiving nodes are ignored. To make full use of the destination nodes' receiving ability and maximize the network throughput of optical multicast, a new optical multicast routing algorithm based on teaching-learning-based optimization (MR-iTLBO) is proposed in this paper. To increase the diversity of learning, a self-driven learning method is adopted in the MR-iTLBO algorithm, and the mutation operator of genetic algorithms is introduced to prevent the algorithm from falling into a local optimum. To increase learners' learning efficiency, an adaptive learning factor is designed to adjust the learning process. Moreover, a reconfiguration scheme based on a probability vector is devised to expand the algorithm's global search capability. Simulation results show that performance in terms of network throughput and convergence rate is improved significantly with respect to TLBO and a variant of TLBO.
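MR-iTLBO adds self-driven learning, a mutation operator, and an adaptive learning factor on top of the baseline TLBO loop; that baseline (plain teacher and learner phases, shown on a toy sphere function) can be sketched as follows. All names and parameters here are illustrative, not taken from the paper.

```python
import random

def tlbo(f, dim, bounds, pop=20, iters=100, seed=1):
    """Baseline teaching-learning-based optimization minimizing f over a box."""
    rnd = random.Random(seed)
    lo, hi = bounds
    X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    F = [f(x) for x in X]
    for _ in range(iters):
        # Teacher phase: move learners toward the best solution, away from the mean
        teacher = X[F.index(min(F))]
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        for i in range(pop):
            Tf = rnd.choice([1, 2])  # teaching factor
            cand = [X[i][d] + rnd.random() * (teacher[d] - Tf * mean[d])
                    for d in range(dim)]
            cand = [min(hi, max(lo, c)) for c in cand]
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
        # Learner phase: learn from a random peer, toward it if better, away if worse
        for i in range(pop):
            j = rnd.randrange(pop)
            if j == i:
                continue
            sign = 1 if F[j] < F[i] else -1
            cand = [X[i][d] + sign * rnd.random() * (X[j][d] - X[i][d])
                    for d in range(dim)]
            cand = [min(hi, max(lo, c)) for c in cand]
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
    best = F.index(min(F))
    return X[best], F[best]

best_x, best_f = tlbo(lambda x: sum(v * v for v in x), dim=5, bounds=(-5, 5))
```

TLBO's appeal in routing problems is that, unlike genetic algorithms, it has no crossover/mutation rates to tune, which is why MR-iTLBO reintroduces mutation only as an escape mechanism from local optima.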
New method adaptive to geospatial information acquisition and share based on grid
NASA Astrophysics Data System (ADS)
Fu, Yingchun; Yuan, Xiuxiao
2005-11-01
As is well known, it is difficult and time-consuming to acquire and share multi-source geospatial information in a grid computing environment, especially for data on different geo-reference benchmarks. Although middleware for data format transformation has been applied in many grid applications and GIS software systems, it remains difficult to carry out on-demand spatial data assembly across geo-reference benchmarks because of the computational complexity of rigorous coordinate transformation models. To address this problem, an efficient hierarchical quadtree structure, referred to as multi-level grids, is designed and coded to express multi-scale global geo-space. A geospatial object located in a given cell of the multi-level grids can be expressed as an increment value relative to the grid's central point, a value that is constant across different geo-reference benchmarks. A mediator responsible for geo-reference transformation with multi-level grids has been developed and aligned with the grid service. With the help of the mediator, maps or spatial data sets queried from individual sources with different geo-references can be merged into a uniform composite result. Instead of requiring complex data pre-processing prior to spatial integration, the introduced method is well suited to integration with grid-enabled services.
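A minimal sketch of the multi-level grids idea, assuming a simple equal-angle quadtree over longitude/latitude (the level, coordinate ranges, and function names are invented for illustration): a point is stored as a cell index plus an increment relative to the cell centre, and it is that increment which the paper describes as invariant across geo-reference benchmarks.

```python
def grid_cell(lon, lat, level):
    """Quadtree-style index over [-180,180] x [-90,90]; returns the (ix, iy)
    cell index at the given level together with the cell centre coordinates."""
    n = 2 ** level
    ix = min(n - 1, int((lon + 180.0) / 360.0 * n))
    iy = min(n - 1, int((lat + 90.0) / 180.0 * n))
    cx = -180.0 + (ix + 0.5) * 360.0 / n
    cy = -90.0 + (iy + 0.5) * 180.0 / n
    return (ix, iy), (cx, cy)

def encode_point(lon, lat, level):
    """Store a point as (cell index, increment relative to the cell centre)."""
    cell, (cx, cy) = grid_cell(lon, lat, level)
    return cell, (lon - cx, lat - cy)

# Encode a point at level 10 and recover it from cell centre + increment
cell, (dx, dy) = encode_point(116.39, 39.91, 10)
```

Reconstructing the coordinate as centre + increment is lossless, so only the cell centres need rigorous coordinate transformation between benchmarks.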
Fakhari, Abbas; Lee, Taehun
2014-03-01
An adaptive-mesh-refinement (AMR) algorithm for the finite-difference lattice Boltzmann method (FDLBM) is presented in this study. The idea behind the proposed AMR is to remove the need for a tree-type data structure. Instead, pointer attributes are used to determine the neighbors of a certain block via appropriate adjustment of its children identifications. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with an efficient algorithm that is easier to implement and use on parallel machines. To allow different mesh sizes at separate parts of the computational domain, the Eulerian formulation of the streaming process is invoked. As a result, there is no need for rescaling the distribution functions or using a temporal interpolation at the fine-coarse grid boundaries. The accuracy and efficiency of the proposed FDLBM AMR are extensively assessed by investigating a variety of vorticity-dominated flow fields, including Taylor-Green vortex flow, lid-driven cavity flow, thin shear layer flow, and the flow past a square cylinder.
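The key claim above, replacing tree traversal by directly stored neighbor identifications, can be illustrated with a deliberately simplified 1-D block table. The field names and the refinement bookkeeping are hypothetical; the actual FDLBM AMR data structure is multi-dimensional and more elaborate.

```python
# Hypothetical sketch: blocks live in a flat table, and each block stores its
# neighbors' ids directly, so a neighbor lookup is a single dictionary access
# with no tree traversal.
blocks = {}
next_id = 0

def new_block(level, left=None, right=None):
    global next_id
    bid = next_id
    next_id += 1
    blocks[bid] = {"level": level, "left": left, "right": right, "children": None}
    return bid

def refine(bid):
    """Split a 1-D block in two; the children inherit the parent's outer
    neighbor ids and point at each other, so no tree walk is ever needed."""
    b = blocks[bid]
    c0 = new_block(b["level"] + 1, left=b["left"])
    c1 = new_block(b["level"] + 1, right=b["right"])
    blocks[c0]["right"] = c1
    blocks[c1]["left"] = c0
    b["children"] = (c0, c1)
    return c0, c1

# Three coarse blocks a-b-c; refine the middle one
a = new_block(0)
b = new_block(0, left=a)
blocks[a]["right"] = b
c = new_block(0, left=b)
blocks[b]["right"] = c
c0, c1 = refine(b)
```

After refinement, `blocks[c0]["left"]` is `a` immediately, which is the O(1) neighbor access the abstract credits with eliminating tree-traversal time and memory.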
NASA Technical Reports Server (NTRS)
Olynick, David P.; Hassan, H. A.; Moss, James N.
1988-01-01
A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.
Adaptive Methods within a Sequential Bayesian Approach for Structural Health Monitoring
NASA Astrophysics Data System (ADS)
Huff, Daniel W.
Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to the capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since the fatigue loading of a structural component can change while in service, an interacting multiple model technique is employed to estimate the probabilities of different loading modes and incorporate this information in the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion, and observations are obtained using the predicted optimal observation modes based on these characteristics. Calculation of the predicted mean squared error metric can be computationally intensive, especially if performed in real time, and an approximation method is proposed to reduce the real-time computational burden.
NASA Astrophysics Data System (ADS)
Ma, Xibo; Tian, Jie; Zhang, Bo; Zhang, Xing; Xue, Zhenwen; Dong, Di; Han, Dong
2011-03-01
Among optical molecular imaging modalities, bioluminescence imaging (BLI) is finding increasingly wide application in tumor detection and in the evaluation of pharmacodynamics, toxicity and pharmacokinetics, because of its noninvasive molecular- and cellular-level detection ability, high sensitivity, and low cost in comparison with other imaging technologies. However, BLI cannot present the accurate location and intensity of inner bioluminescence sources, such as those in bone, liver or lung. Bioluminescence tomography (BLT) shows its advantage in determining the bioluminescence source distribution inside a small animal or phantom. Considering the deficiencies of two-dimensional imaging modalities, we developed a three-dimensional tomography to reconstruct the bioluminescence source distribution in transgenic mOC-Luc mouse bone from boundary measurement data. In this paper, to study osteocalcin (OC) accumulation in transgenic mOC-Luc mouse bone, a BLT reconstruction method based on a multilevel adaptive finite element (FEM) algorithm is used for localizing and quantifying multiple bioluminescence sources. Optical and anatomical information of the tissues is incorporated as a priori knowledge, which reduces the ill-posedness of BLT. The data were acquired by the dual-modality BLT and micro-CT prototype system that we developed. Through temperature control and absolute intensity calibration, a relatively accurate intensity can be calculated. The location of the OC accumulation was reconstructed and found to be coherent with the principles of bone differentiation. This result was also verified by an ex vivo experiment in a black 96-well plate using the BLI system and a chemiluminescence apparatus.
NASA Technical Reports Server (NTRS)
Milman, M.; Needels, L.; Redding, D.
1994-01-01
The Keck telescope is planning to utilize adaptive optics technology to improve the resolution of the instrument. Telescopes operating in the atmosphere are limited by the seeing conditions at the observational site.
The adaptive buffered force QM/MM method in the CP2K and AMBER software packages
Mones, Letif; Jones, Andrew; Götz, Andreas W.; Laino, Teodoro; Walker, Ross C.; Leimkuhler, Ben; Csányi, Gábor; Bernstein, Noam
2015-02-03
We present the implementation and validation of the adaptive buffered force (AdBF) quantum-mechanics/molecular-mechanics (QM/MM) method in two popular packages, CP2K and AMBER. The implementations build on the existing QM/MM functionality in each code, extending it to allow for redefinition of the QM and MM regions during the simulation and reducing QM-MM interface errors by discarding forces near the boundary according to the buffered force-mixing approach. New adaptive thermostats, needed by force-mixing methods, are also implemented. Different variants of the method are benchmarked by simulating the structure of bulk water, water autoprotolysis in the presence of zinc and dimethyl-phosphate hydrolysis using various semiempirical Hamiltonians and density functional theory as the QM model. It is shown that with suitable parameters, based on force convergence tests, the AdBF QM/MM scheme can provide an accurate approximation of the structure in the dynamical QM region matching the corresponding fully QM simulations, as well as reproducing the correct energetics in all cases. Adaptive unbuffered force-mixing and adaptive conventional QM/MM methods also provide reasonable results for some systems, but are more likely to suffer from instabilities and inaccuracies.
ERIC Educational Resources Information Center
Jian, Hu
2012-01-01
The purpose of this mixed method study was to investigate how graduates originating from mainland China adapt to the U.S. academic integrity requirements. In the first, quantitative phase of the study, the research questions focused on understanding the state of academic integrity in China. This guiding question was divided into two sub-questions,…
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A. (Inventor); Saltzman, Edwin J. (Inventor); Moes, Timothy R. (Inventor); Iliff, Kenneth W. (Inventor)
2005-01-01
A method for reducing drag upon a blunt-based vehicle by adaptively increasing forebody roughness to increase drag at the roughened area of the forebody, which results in a decrease in drag at the base of this vehicle, and in total vehicle drag.
ERIC Educational Resources Information Center
Kluge, Annette; Sauer, Juergen; Burkolter, Dina; Ritzmann, Sandrina
2010-01-01
Training in process control environments requires operators to be prepared for temporal and adaptive transfer of skill. Three training methods were compared with regard to their effectiveness in supporting transfer: Drill & Practice (D&P), Error Training (ET), and procedure-based and error heuristics training (PHT). Communication…
FEMHD: An adaptive finite element method for MHD and edge modelling
Strauss, H.R.
1995-07-01
This paper describes the code FEMHD, an adaptive finite element MHD code, which is applied in a number of different ways to model MHD behavior and edge plasma phenomena in a diverted tokamak. The code uses an unstructured triangular mesh in 2D and wedge-shaped mesh elements in 3D. The code has been adapted to study neutral and charged particle dynamics in the plasma scrape-off region, and has been extended into a full MHD-particle code.
Kohn, S.; Weare, J.; Ong, E.; Baden, S.
1997-05-01
We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradient with FAC multigrid preconditioning. We have parallelized our solver using an object- oriented adaptive mesh refinement framework.
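The solver above combines conjugate gradient with FAC multigrid preconditioning; a multigrid preconditioner is too long to sketch here, so the following shows the same preconditioned-CG skeleton with a simple Jacobi (diagonal) preconditioner substituted in, purely for illustration.

```python
def pcg(A, b, Minv_diag, tol=1e-10, maxiter=200):
    """Preconditioned conjugate gradient for SPD A with a diagonal
    preconditioner M^{-1} = diag(Minv_diag). Dense lists for clarity."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                                   # r = b - A*0
    z = [Minv_diag[i] * r[i] for i in range(n)]   # z = M^{-1} r
    p = list(z)
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(maxiter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [Minv_diag[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# 1-D Laplacian test problem: A = tridiag(-1, 2, -1)
n = 8
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
     for i in range(n)]
b = [1.0] * n
x = pcg(A, b, [1.0 / A[i][i] for i in range(n)])
res = max(abs(sum(A[i][j] * x[j] for j in range(n)) - b[i]) for i in range(n))
```

A multigrid or FAC preconditioner replaces the diagonal scaling step `z = M^{-1} r` with one or more coarse-grid correction cycles; the CG skeleton is otherwise unchanged.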
Revision of FMM-Yukawa: An adaptive fast multipole method for screened Coulomb interactions
NASA Astrophysics Data System (ADS)
Zhang, Bo; Huang, Jingfang; Pitsianis, Nikos P.; Sun, Xiaobai
2010-12-01
FMM-YUKAWA is a mathematical software package primarily for rapid evaluation of the screened Coulomb interactions of N particles in three-dimensional space. Since its release, we have revised and re-organized the data structure, software architecture, and user interface, for the purpose of enabling more flexible, broader and easier use of the package. The package and its documentation are available at http://www.fastmultipole.org/, along with a few other closely related mathematical software packages.
New version program summary
Program title: FMM-Yukawa
Catalogue identifier: AEEQ_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEQ_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPL 2.0
No. of lines in distributed program, including test data, etc.: 78 704
No. of bytes in distributed program, including test data, etc.: 854 265
Distribution format: tar.gz
Programming language: FORTRAN 77, FORTRAN 90, and C. Requires gcc and gfortran version 4.4.3 or later
Computer: All
Operating system: Any
Classification: 4.8, 4.12
Catalogue identifier of previous version: AEEQ_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2331
Does the new version supersede the previous version?: Yes
Nature of problem: To evaluate the screened Coulomb potential and force field of N charged particles, and to evaluate a convolution-type integral where the Green's function is the fundamental solution of the modified Helmholtz equation.
Solution method: The new version of the fast multipole method (FMM), which diagonalizes the multipole-to-local translation operator, is applied with the tree structure adaptive to sample particle locations.
Reasons for new version: To handle much larger particle ensembles, to enable the iterative use of the subroutines in a solver, and to remove potential contention in assignments for parallelization.
Summary of revisions: The software package FMM-Yukawa has been
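The quantity the package evaluates is the screened Coulomb (Yukawa) potential; the O(N²) direct sum below is the reference computation that the FMM accelerates to roughly O(N). This is a sketch for checking small cases, not the package's algorithm.

```python
import math

def yukawa_direct(charges, positions, kappa):
    """Direct O(N^2) sum of the screened Coulomb potential
    phi_i = sum_{j != i} q_j * exp(-kappa * r_ij) / r_ij."""
    n = len(charges)
    phi = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = math.dist(positions[i], positions[j])
            phi[i] += charges[j] * math.exp(-kappa * r) / r
    return phi

# Two unit charges a distance 2 apart with kappa = 1:
# each sees phi = exp(-2) / 2
phi = yukawa_direct([1.0, 1.0], [(0.0, 0.0, 0.0), (0.0, 0.0, 2.0)], kappa=1.0)
```

As kappa approaches 0 the kernel reduces to the ordinary Coulomb potential 1/r, which is why the FMM machinery for the Laplace kernel carries over with modified translation operators.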
NASA Astrophysics Data System (ADS)
Zhou, Cong; Chase, J. Geoffrey; Rodgers, Geoffrey W.; Xu, Chao
2017-02-01
The model-free hysteresis loop analysis (HLA) method for structural health monitoring (SHM) has significant advantages over traditional model-based SHM methods, which require a suitable baseline model to represent the actual system response. This paper provides a unique validation against both an experimental reinforced concrete (RC) building and a calibrated numerical model to delineate the capability of the model-free HLA method and the adaptive least mean squares (LMS) model-based method in detecting, localizing and quantifying damage that may not be visible or observable in the overall structural response. Results clearly show that the model-free HLA method is capable of adapting to changes in how structures transfer load or demand across structural elements over time and over multiple events of different size. However, the adaptive LMS model-based method presented an image of a greater spread of lesser damage over time and story when the baseline model was not well defined. Finally, the two algorithms are tested on a simpler steel structure with typical hysteretic behaviour to quantify the impact of mismatch between the baseline model used for identification and the actual response. The overall results highlight the need for model-based methods to have an appropriate model that can capture the observed response in order to yield accurate results, even in small events where the structure remains linear.
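The adaptive LMS model-based method compared above identifies model parameters online; the core LMS update it builds on can be sketched for a generic FIR identification problem. The tap count, step size, and signals here are invented test values, not the paper's structural model.

```python
import random

def lms_identify(x, d, taps=4, mu=0.05):
    """Adaptive LMS: update FIR weights w so that w . x[n] tracks d[n].
    Returns the final weight estimate."""
    w = [0.0] * taps
    buf = [0.0] * taps          # most recent input samples, newest first
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))
        e = dn - y              # instantaneous error
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]
    return w

# Identify a known 4-tap system from its input/output data
rnd = random.Random(0)
true_w = [0.6, -0.3, 0.1, 0.05]
x = [rnd.uniform(-1, 1) for _ in range(5000)]
d = []
buf = [0.0] * 4
for xn in x:
    buf = [xn] + buf[:-1]
    d.append(sum(a * b for a, b in zip(true_w, buf)))
w = lms_identify(x, d)
```

In SHM use, drifts in the identified parameters (rather than the filter output itself) are what indicate damage, which is why an inaccurate baseline model smears the damage picture as the paper reports.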
NASA Astrophysics Data System (ADS)
Rosam, J.; Jimack, P. K.; Mullis, A.
2007-08-01
A fully implicit numerical method based upon adaptively refined meshes for the simulation of binary alloy solidification in 2D is presented. In addition we combine a second-order fully implicit time discretisation scheme with variable step size control to obtain an adaptive time and space discretisation method. The superiority of this method, compared to widely used fully explicit methods, with respect to CPU time and accuracy, is shown. Due to the high nonlinearity of the governing equations a robust and fast solver for systems of nonlinear algebraic equations is needed to solve the intermediate approximations per time step. We use a nonlinear multigrid solver which shows almost h-independent convergence behaviour.
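A minimal sketch of the ingredients above, fully implicit time stepping with variable step-size control, reduced to a scalar stiff ODE solved by Newton's method with step-doubling error control. The tolerances and the test problem are invented for illustration; the paper uses a second-order scheme on PDEs with a nonlinear multigrid solver.

```python
import math

def implicit_euler_step(f, dfdy, t, y, h, newton_iters=20, tol=1e-12):
    """One backward-Euler step: solve z = y + h*f(t+h, z) by Newton's method."""
    z = y
    for _ in range(newton_iters):
        g = z - y - h * f(t + h, z)
        dg = 1.0 - h * dfdy(t + h, z)
        dz = g / dg
        z -= dz
        if abs(dz) < tol:
            break
    return z

def adaptive_implicit_euler(f, dfdy, t0, y0, t_end, h=0.1, atol=1e-6):
    """Step-doubling control: compare one step of size h against two of h/2,
    shrink h when the discrepancy exceeds atol, grow it when well below."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y1 = implicit_euler_step(f, dfdy, t, y, h)
        half = implicit_euler_step(f, dfdy, t, y, h / 2)
        y2 = implicit_euler_step(f, dfdy, t + h / 2, half, h / 2)
        err = abs(y2 - y1)
        if err <= atol or h < 1e-12:
            t, y = t + h, y2          # accept the more accurate value
            if err < atol / 4:
                h *= 2.0
        else:
            h *= 0.5                  # reject and retry with a smaller step
    return y

# Stiff test problem y' = -50*(y - cos t), y(0) = 0;
# exact solution tends to (2500*cos t + 50*sin t)/2501
f = lambda t, y: -50.0 * (y - math.cos(t))
dfdy = lambda t, y: -50.0
y_end = adaptive_implicit_euler(f, dfdy, 0.0, 0.0, 5.0)
exact = (2500.0 * math.cos(5.0) + 50.0 * math.sin(5.0)) / 2501.0
```

An explicit method would need h below roughly 2/50 everywhere for stability alone; the implicit scheme lets the controller choose the step from accuracy, which is the CPU-time advantage the abstract reports.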
NASA Astrophysics Data System (ADS)
Regele, Jonathan D.
Multi-dimensional numerical modeling of detonation initiation is the primary goal of this thesis. The particular scenario under examination is initiating a detonation wave through acoustic timescale thermal power deposition. Physically this would correspond to igniting a reactive mixture with a laser pulse as opposed to a typical electric spark. Numerous spatial and temporal scales are involved, which makes these problems computationally challenging to solve. In order to model these problems, a shock capturing scheme is developed that utilizes the computational efficiency of the Adaptive Wavelet-Collocation Method (AWCM) to properly handle the multiple scales involved. With this technique, previous one-dimensional problems with unphysically small activation energies are revisited and simulated with the AWCM. The results demonstrate a qualitative agreement with previous work that used a uniform grid MacCormack scheme. Both sets of data show the basic sequence of events that are needed in order for a DDT process to occur. Instead of starting with a strong shock-coupled reaction zone as many other studies have done, the initial pulse is weak enough to allow the shock and the reaction zone to decouple. Reflected compression waves generated by the inertially confined reaction zone lead to localized reaction centers, which eventually explode and further accelerate the process. A shock-coupled reaction zone forms an initially overdriven detonation, which relaxes to a steady CJ wave. The one-dimensional problems are extended to two dimensions using a circular heat deposition in a channel. Two-dimensional results demonstrate the same sequence of events, which suggests that the concepts developed in the original one-dimensional work are applicable to multiple dimensions.
Lever, Teresa E.; Braun, Sabrina M.; Brooks, Ryan T.; Harris, Rebecca A.; Littrell, Loren L.; Neff, Ryan M.; Hinkel, Cameron J.; Allen, Mitchell J.; Ulsas, Mollie A.
2015-01-01
This study adapted human videofluoroscopic swallowing study (VFSS) methods for use with murine disease models for the purpose of facilitating translational dysphagia research. Successful outcomes are dependent upon three critical components: test chambers that permit self-feeding while standing unrestrained in a confined space, recipes that mask the aversive taste/odor of commercially-available oral contrast agents, and a step-by-step test protocol that permits quantification of swallow physiology. Elimination of one or more of these components will have a detrimental impact on the study results. Moreover, the energy level capability of the fluoroscopy system will determine which swallow parameters can be investigated. Most research centers have high energy fluoroscopes designed for use with people and larger animals, which results in exceptionally poor image quality when testing mice and other small rodents. Despite this limitation, we have identified seven VFSS parameters that are consistently quantifiable in mice when using a high energy fluoroscope in combination with the new murine VFSS protocol. We recently obtained a low energy fluoroscopy system with exceptionally high imaging resolution and magnification capabilities that was designed for use with mice and other small rodents. Preliminary work using this new system, in combination with the new murine VFSS protocol, has identified 13 swallow parameters that are consistently quantifiable in mice, which is nearly double the number obtained using conventional (i.e., high energy) fluoroscopes. Identification of additional swallow parameters is expected as we optimize the capabilities of this new system. Results thus far demonstrate the utility of using a low energy fluoroscopy system to detect and quantify subtle changes in swallow physiology that may otherwise be overlooked when using high energy fluoroscopes to investigate murine disease models. PMID:25866882
Adaptive Management Methods to Protect the California Sacramento-San Joaquin Delta Water Resource
NASA Technical Reports Server (NTRS)
Bubenheim, David
2016-01-01
The California Sacramento-San Joaquin River Delta is the hub for California's water supply, conveying water from Northern to Southern California agriculture and communities while supporting important ecosystem services, agriculture, and communities in the Delta. Changes in climate, long-term drought, water quality changes, and expansion of invasive aquatic plants threaten ecosystems, impede ecosystem restoration, and are economically, environmentally, and sociologically detrimental to the San Francisco Bay/California Delta complex. NASA Ames Research Center and the USDA-ARS partnered with the State of California and local governments to develop science-based, adaptive-management strategies for the Sacramento-San Joaquin Delta. The project combines science, operations, and economics related to integrated management scenarios for aquatic weeds to help land and waterway managers make science-informed decisions regarding management and outcomes. The team provides a comprehensive understanding of agricultural and urban land use in the Delta and the major watersheds (San Joaquin/Sacramento) supplying the Delta, and of their interaction with drought and climate impacts on the environment, water quality, and weed growth. The team recommends conservation and modified land-use practices and aids local Delta stakeholders in developing management strategies. New remote sensing tools have been developed to enhance the ability to assess conditions, inform decision support tools, and monitor management practices. Science gaps in understanding how native and invasive plants respond to altered environmental conditions are being filled and provide critical biological response parameters for Delta-SWAT simulation modeling. Operational agencies such as the California Department of Boating and Waterways provide testing and act as initial adopters of decision support tools. Methods developed by the project can become routine land and water management tools in complex river delta systems.
Quirós, Elia; Felicísimo, Ángel M.; Cuartero, Aurora
2009-01-01
This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test. PMID:22291550
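The AUC comparison underlying the abstract can be illustrated with a minimal sketch. The scores and labels below are invented toy data, and the AUC is computed via the Mann-Whitney statistic rather than any particular remote-sensing toolchain:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive scores higher than a
    randomly chosen negative (ties count half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-pixel class scores from two classifiers on the
# same ground truth; the higher-AUC classifier separates classes better.
truth     = [1, 1, 1, 0, 0, 0]
mars_like = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]   # clean separation
ml_like   = [0.9, 0.4, 0.6, 0.8, 0.3, 0.1]   # one confusion
print(auc(mars_like, truth), auc(ml_like, truth))
```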
NASA Astrophysics Data System (ADS)
Tago, J.; Cruz-Atienza, V. M.; Etienne, V.; Virieux, J.; Benjemaa, M.; Sanchez-Sesma, F. J.
2010-12-01
Simulating any realistic seismic scenario requires incorporating physical basis into the model. Considering both the dynamics of the rupture process and the anelastic attenuation of seismic waves is essential to this purpose and, therefore, we choose to extend the hp-adaptive Discontinuous Galerkin finite-element method to integrate these physical aspects. The 3D elastodynamic equations in an unstructured tetrahedral mesh are solved with a second-order time marching approach in a high-performance computing environment. The first extension incorporates the viscoelastic rheology so that the intrinsic attenuation of the medium is considered in terms of frequency dependent quality factors (Q). On the other hand, the extension related to dynamic rupture is integrated through explicit boundary conditions over the crack surface. For this visco-elastodynamic formulation, we introduce an original discrete scheme that preserves the optimal code performance of the elastodynamic equations. A set of relaxation mechanisms describes the behavior of a generalized Maxwell body. We approximate almost constant Q in a wide frequency range by selecting both suitable relaxation frequencies and anelastic coefficients characterizing these mechanisms. In order to do so, we solve an optimization problem which is critical to minimize the amount of relaxation mechanisms. Two strategies are explored: 1) a least squares method and 2) a genetic algorithm (GA). We found that the improvement provided by the heuristic GA method is negligible. Both optimization strategies yield Q values within the 5% of the target constant Q mechanism. Anelastic functions (i.e. memory variables) are introduced to efficiently evaluate the time convolution terms involved in the constitutive equations and thus to minimize the computational cost. The incorporation of anelastic functions implies new terms with ordinary differential equations in the mathematical formulation. We solve these equations using the same order
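The least squares strategy for approximating constant Q can be sketched with the usual linearized generalized-Maxwell-body model, where Q^-1(w) is a sum of Debye peaks weighted by anelastic coefficients. The band limits, mechanism count, and log-spaced relaxation frequencies below are illustrative assumptions, not the authors' values:

```python
import math

def fit_anelastic_coefficients(Q0, f_min, f_max, n_mech=3, n_freq=40):
    """Least squares fit of anelastic coefficients y_l so that
    Q^-1(w) ~ sum_l y_l (w t_l) / (1 + w^2 t_l^2) matches 1/Q0
    over [f_min, f_max]; solved via the normal equations."""
    taus = [1.0 / (2 * math.pi * f_min * (f_max / f_min) ** (l / (n_mech - 1)))
            for l in range(n_mech)]                  # log-spaced relaxation times
    ws = [2 * math.pi * f_min * (f_max / f_min) ** (k / (n_freq - 1))
          for k in range(n_freq)]                    # log-spaced sample frequencies
    A = [[w * t / (1 + (w * t) ** 2) for t in taus] for w in ws]
    b = [1.0 / Q0] * n_freq
    AtA = [[sum(A[k][i] * A[k][j] for k in range(n_freq))
            for j in range(n_mech)] for i in range(n_mech)]
    Atb = [sum(A[k][i] * b[k] for k in range(n_freq)) for i in range(n_mech)]
    for i in range(n_mech):                          # Gaussian elimination
        for r in range(i + 1, n_mech):
            m = AtA[r][i] / AtA[i][i]
            AtA[r] = [a - m * c for a, c in zip(AtA[r], AtA[i])]
            Atb[r] -= m * Atb[i]
    y = [0.0] * n_mech
    for i in reversed(range(n_mech)):                # back substitution
        y[i] = (Atb[i] - sum(AtA[i][j] * y[j]
                             for j in range(i + 1, n_mech))) / AtA[i][i]
    return taus, y, A

taus, y, A = fit_anelastic_coefficients(Q0=50.0, f_min=0.1, f_max=10.0)
# worst-case relative misfit of Q0 * Q^-1(w) against the target value 1
misfit = max(abs(sum(a * c for a, c in zip(row, y)) * 50.0 - 1.0) for row in A)
```

A genetic algorithm would replace the linear solve with a stochastic search over the same misfit, which is the comparison the abstract reports as yielding negligible improvement.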
NASA Astrophysics Data System (ADS)
Feischl, Michael; Gantner, Gregor; Praetorius, Dirk
2015-06-01
We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first-kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence.
Lesmes, Luis A; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A; Albright, Thomas D
2015-01-01
Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold-the signal intensity corresponding to a pre-defined sensitivity level (d' = 1)-in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks-(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection-the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10-0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods.
A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov-Maxwell system
Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain; Sonnendruecker, Eric; Bertrand, Pierre
2008-08-10
In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and the regularity of the distribution function. The multiscale expansion of the distribution function therefore yields a sparse representation of the data, saving memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. Interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wave-breaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements over Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase in the total number of points of the phase-space grid as the filaments get finer over time. The adaptive method could be more useful in cases where the thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to
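A toy 1-D analogue of the wavelet refinement criterion: a Haar transform (averages/details variant), thresholding of small detail coefficients as the smoothness indicator, and exact reconstruction of a step profile. The signal and threshold are illustrative; the paper's multiresolution analysis is far richer than this sketch:

```python
def haar_forward(x):
    """Multi-level 1-D Haar transform (length must be a power of two)."""
    out, n = list(x), len(x)
    while n > 1:
        half = n // 2
        evens, odds = out[0:n:2], out[1:n:2]
        out[:half] = [(a + b) / 2 for a, b in zip(evens, odds)]   # averages
        out[half:n] = [(a - b) / 2 for a, b in zip(evens, odds)]  # details
        n = half
    return out

def haar_inverse(c):
    c, n = list(c), 1
    while n < len(c):
        avg, det = c[:n], c[n:2 * n]
        merged = []
        for a, d in zip(avg, det):
            merged += [a + d, a - d]
        c[:2 * n] = merged
        n *= 2
    return c

def threshold(c, eps):
    """Drop detail coefficients below eps: small local detail means the
    function is smooth there, so no refinement is needed."""
    return [c[0]] + [v if abs(v) >= eps else 0.0 for v in c[1:]]

signal = [float(i < 8) for i in range(16)]        # a sharp step
coeffs = threshold(haar_forward(signal), 1e-3)
kept = sum(1 for v in coeffs if v != 0.0)         # sparse: few nonzeros
rec = haar_inverse(coeffs)
```

For this piecewise-constant profile only two coefficients survive, which is exactly the sparsity the multiscale expansion exploits.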
Bell, Iris R; Schwartz, Gary E
2015-04-01
Multiple studies have demonstrated that traditional homeopathic manufacturing reagents and processes can generate remedy source and silica nanoparticles (NPs). Homeopathically-made NPs would initiate adaptive changes in an organism as a complex adaptive system (CAS) or network. Adaptive changes would emerge from several different endogenous amplification processes that respond to exogenous danger or threat signals that manufactured nanomaterials convey, including (1) stochastic resonance (SR) in sensory neural systems and (2) time-dependent sensitization (TDS)/oscillation. SR is nonlinear coherent amplification of a weak signal by the superposition of a larger magnitude white noise containing within it the same frequencies of the weak signal. TDS is progressive response magnitude amplification and oscillatory reversal in response direction to a given low dose at physiological limits with the passage of time. Hormesis is an overarching adaptive phenomenon that reflects the observed nonlinear adaptive dose-response relationship. Remedies would act as enhanced micro- and nanoscale forms of their source material via direct local ligand-receptor interactions at very low potencies and/or by triggering systemic adaptive network dynamical effects via their NP-based electromagnetic, optical, and quantum mechanical properties at higher potencies. Manufacturing parameters including dilution modify sizes, shapes, and surface charges of nanoparticles, thereby causing differences in physico-chemical properties and biological effects. Based on surface area, size, shape, and charge, nanoparticles adsorb a complex pattern of serum proteins, forming a protein corona on contact that constitutes a unique biological identity. The protein corona may capture individualized dysfunctional biological mediator information of the organism onto the surfaces of the salient, i.e., resonant, remedy nanostructures. SR would amplify this weak signal from the salient remedy NPs with protein corona
NASA Astrophysics Data System (ADS)
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-02-01
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
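A minimal 1-D sketch of the stochastic collocation idea, assuming a lognormal random input and hard-coded probabilists' Gauss-Hermite rules; the Smolyak sparse-grid and Kronrod-Patterson-Hermite machinery of the paper is not reproduced here:

```python
import math

# Probabilists' Gauss-Hermite rules (nodes, weights) for a standard
# normal input; the 1-point and 3-point rules are nested at x = 0.
GH = {
    1: ([0.0], [1.0]),
    3: ([-math.sqrt(3), 0.0, math.sqrt(3)], [1 / 6, 2 / 3, 1 / 6]),
}

def collocate(model, sigma, level):
    """Stochastic collocation: evaluate the model at quadrature nodes
    of the random input k = exp(sigma * xi), xi ~ N(0, 1), and take
    the weighted sum as the statistical estimate."""
    nodes, weights = GH[level]
    return sum(w * model(math.exp(sigma * x)) for x, w in zip(nodes, weights))

# toy 'pressure' model: u(k) = 1/k, so E[u] = exp(sigma^2 / 2) exactly
sigma = 0.5
exact = math.exp(sigma ** 2 / 2)
coarse = collocate(lambda k: 1 / k, sigma, 1)
fine = collocate(lambda k: 1 / k, sigma, 3)
```

Refining the level only where it reduces the error, dimension by dimension, is the adaptive step that the paper performs on sparse grids.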
Ma, JiaLi; Zhang, TanTan; Dong, MingChui
2015-05-01
This paper presents a novel electrocardiogram (ECG) compression method for e-health applications based on an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: the first-stage AFD executes efficient lossy compression with high fidelity; the second-stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an averaged compression ratio (CR) of 17.6-44.5 and a percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing the compression performance into a previously unexploited region. As such, this paper provides an attractive candidate ECG compression method for pervasive e-health applications.
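The two figures of merit cited, CR and PRD, are straightforward to compute. A hedged sketch on a synthetic signal follows; the quantizer and bit counts are illustrative stand-ins, not the AFD algorithm itself:

```python
import math

def compression_ratio(original_bits, compressed_bits):
    """CR: original size over compressed size."""
    return original_bits / compressed_bits

def prd(x, y):
    """Percentage root-mean-square difference between the original
    signal x and the reconstruction y (no baseline removal here)."""
    num = sum((a - b) ** 2 for a, b in zip(x, y))
    den = sum(a ** 2 for a in x)
    return 100 * math.sqrt(num / den)

# toy example: a coarsely quantized reconstruction of a sampled sine
x = [math.sin(2 * math.pi * i / 64) for i in range(256)]
y = [round(v * 8) / 8 for v in x]          # crude uniform quantizer
print(compression_ratio(11 * 256, 4 * 256), prd(x, y))
```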
NASA Astrophysics Data System (ADS)
Mostaghimi, P.; Percival, J. R.; Pavlidis, D.; Gorman, G.; Jackson, M.; Neethling, S.; Pain, C. C.
2013-12-01
Numerical simulation of multiphase flow in porous media is of importance in a wide range of applications in science and engineering. We present a novel control volume finite element method (CVFEM) to solve for multi-scale flow in heterogeneous geological formations. It employs a node-centred control volume approach to discretize the saturation equation, while a control volume finite element method is applied for the pressure equation. We embed the discrete continuity equation into the pressure equation and ensure that continuity is exactly enforced. Anisotropic mesh adaptivity is used to accurately model the fine-grained features of multiphase flow. The adaptive algorithm uses a metric tensor field based on solution error estimates to locally control the size and shape of elements in the metric. Moreover, it uses metric advection between adaptive meshes to predict the required mesh density ahead of the saturation front, thereby reducing numerical dispersion. The scheme is capable of capturing multi-scale heterogeneity such as that in fractured porous media through the use of several constraints on the element size in different regions of the porous media. We show the application of our method for simulation of flow in some challenging benchmark problems. For flow in fractured reservoirs, the scheme adapts the mesh as the flow penetrates through the fracture and the matrix. The constraints for the element size within the fracture are smaller by several orders of magnitude than the generated mesh within the matrix. We show that the scheme captures the key multi-scale features of flow while preserving the geometry. We demonstrate that mesh adaptation can be used to accurately simulate flow in heterogeneous porous media at low computational cost.
Gibson, Oliver R; Mee, Jessica A; Tuttle, James A; Taylor, Lee; Watt, Peter W; Maxwell, Neil S
2015-01-01
Heat acclimation requires the interaction between hot environments and exercise to elicit thermoregulatory adaptations. The optimal synergism between these parameters is unknown. Common practice involves utilising a fixed-workload model, where exercise prescription is controlled and core temperature is uncontrolled, or an isothermic model, where core temperature is controlled and work rate is manipulated to maintain it. Following a baseline heat stress test, 24 males performed a between-groups experimental design performing short-term heat acclimation (STHA; five 90 min sessions) and long-term heat acclimation (LTHA; STHA plus a further five 90 min sessions) utilising either fixed intensity (50% VO2peak), continuous isothermic (target rectal temperature 38.5 °C for STHA and LTHA), or progressive isothermic heat acclimation (target rectal temperature 38.5 °C for STHA, and 39.0 °C for LTHA). Identical heat stress tests followed STHA and LTHA to determine the magnitude of adaptation. All methods induced equal adaptation from baseline; however, the isothermic methods induced adaptation with reduced exercise durations (STHA = -66% and LTHA = -72%) and lower mean session intensity (STHA = -13% VO2peak and LTHA = -9% VO2peak) in comparison to the fixed method (p < 0.05). STHA decreased exercising heart rate (-10 beats min-1), core temperature (-0.2 °C) and skin temperature (-0.51 °C), with sweat losses increasing (+0.36 L h-1) (p < 0.05). No difference between heat acclimation methods, and no further benefit of LTHA, was observed (p > 0.05). Only thermal sensation improved from baseline to STHA (-0.2), and then between STHA and LTHA (-0.5) (p < 0.05). Both the continuous and progressive isothermic methods elicited exercise durations, mean session intensities, and mean rectal temperatures consistent with more efficient administration for maximising adaptation. Short-term isothermic methods are therefore optimal for individuals aiming to achieve heat adaptation most economically, i.e. when integrating heat acclimation into
Solid rocket booster internal flow analysis by highly accurate adaptive computational methods
NASA Technical Reports Server (NTRS)
Huang, C. Y.; Tworzydlo, W.; Oden, J. T.; Bass, J. M.; Cullen, C.; Vadaketh, S.
1991-01-01
The primary objective of this project was to develop an adaptive finite element flow solver for simulating internal flows in the solid rocket booster. Described here is a unique flow simulator code for analyzing highly complex flow phenomena in the solid rocket booster. New methodologies and features incorporated into this analysis tool are described.
Algebraic grid adaptation method using non-uniform rational B-spline surface modeling
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, B. K.
1992-01-01
An algebraic adaptive grid system based on the equidistribution law, using Non-Uniform Rational B-Spline (NURBS) surface modeling for redistribution, is presented. A weight function utilizing a properly weighted Boolean sum of various flow field characteristics is developed. Computational examples are presented to demonstrate the success of this technique.
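The equidistribution law places nodes so that each cell carries an equal share of the weight-function integral. A minimal 1-D sketch follows; the weight function, grid, and node count are illustrative, and the NURBS surface machinery is omitted:

```python
import math

def equidistribute(x, w, n_new):
    """Algebraic grid adaptation by the equidistribution law: place
    n_new nodes so each cell holds an equal share of the integral of
    the weight function w (piecewise linear, trapezoid rule)."""
    # cumulative 'arc length' in the weighted metric
    s = [0.0]
    for i in range(1, len(x)):
        s.append(s[-1] + 0.5 * (w[i] + w[i - 1]) * (x[i] - x[i - 1]))
    new = []
    for k in range(n_new):
        target = s[-1] * k / (n_new - 1)
        j = max(i for i in range(len(s)) if s[i] <= target)
        if j == len(x) - 1:
            new.append(x[-1])
        else:  # linear inversion of the cumulative function
            frac = (target - s[j]) / (s[j + 1] - s[j])
            new.append(x[j] + frac * (x[j + 1] - x[j]))
    return new

# a weight peaked near x = 0.5 pulls grid points toward the feature
x = [i / 100 for i in range(101)]
w = [1 + 20 * math.exp(-200 * (xi - 0.5) ** 2) for xi in x]
xa = equidistribute(x, w, 21)
```

Cells near the peak come out several times smaller than those at the boundaries, which is the clustering behaviour the weight function is designed to produce.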
Analysis of the Same Subject in Diverse Periodicals: One Method for Teaching Audience Adaptation.
ERIC Educational Resources Information Center
Bradford, Annette N.; Whitburn, Merrill D.
1982-01-01
Examines two technical writing assignments involving analysis of particular audience adaptive techniques used in five published technical articles from diverse sources on the same limited subject. The first is a discussion exercise involving the entire class, and the second is an individual written exercise. (HTH)
A Comparison of Item-Selection Methods for Adaptive Tests with Content Constraints
ERIC Educational Resources Information Center
van der Linden, Wim J.
2005-01-01
In test assembly, a fundamental difference exists between algorithms that select a test sequentially or simultaneously. Sequential assembly allows us to optimize an objective function at the examinee's ability estimate, such as the test information function in computerized adaptive testing. But it leads to the non-trivial problem of how to realize…
ERIC Educational Resources Information Center
Wang, Chun
2013-01-01
Cognitive diagnostic computerized adaptive testing (CD-CAT) purports to combine the strengths of both CAT and cognitive diagnosis. Cognitive diagnosis models aim at classifying examinees into the correct mastery profile group so as to pinpoint the strengths and weakness of each examinee whereas CAT algorithms choose items to determine those…
Comparing Computer-Adaptive and Curriculum-Based Measurement Methods of Assessment
ERIC Educational Resources Information Center
Shapiro, Edward S.; Gebhardt, Sarah N.
2012-01-01
This article reported the concurrent, predictive, and diagnostic accuracy of a computer-adaptive test (CAT) and curriculum-based measurements (CBM; both computation and concepts/application measures) for universal screening in mathematics among students in first through fourth grade. Correlational analyses indicated moderate to strong…
Boutalis, Yiannis; Theodoridis, Dimitris C; Christodoulou, Manolis A
2009-04-01
The indirect adaptive regulation of unknown nonlinear dynamical systems is considered in this paper. The method is based on a new neuro-fuzzy dynamical system (neuro-FDS) definition, which uses the concept of adaptive fuzzy systems (AFSs) operating in conjunction with high-order neural network functions (FHONNFs). Since the plant is considered unknown, we first propose its approximation by a special form of an FDS and then the fuzzy rules are approximated by appropriate HONNFs. Thus, the identification scheme leads to a recurrent high-order neural network (RHONN), which however takes into account the fuzzy output partitions of the initial FDS. The proposed scheme does not require a priori expert information on the number and type of input variable membership functions, making it less vulnerable to initial design assumptions. Once the system is identified around an operating point, it is regulated to zero adaptively. Weight updating laws for the involved HONNFs are provided, which guarantee that both the identification error and the system states reach zero exponentially fast, while keeping all signals in the closed loop bounded. The existence of the control signal is always assured by introducing a novel method of parameter hopping, which is incorporated in the weight updating law. Simulations illustrate the potency of the method, and comparisons with conventional approaches on benchmark systems are given. Also, the applicability of the method is tested on a direct current (dc) motor system, where it is shown that by following the proposed procedure one can obtain asymptotic regulation.
NASA Astrophysics Data System (ADS)
Sun, Zhiyong; Hao, Lina; Chen, Wenlin; Li, Zhi; Liu, Liqun
2013-09-01
Ionic polymer-metal composite (IPMC), also called artificial muscle, is an EAP material which can generate a relatively large deformation with a low driving voltage (generally less than 5 V). Like other EAP materials, IPMC exhibits strong nonlinearities, which can be described as a hybrid of back-relaxation (BR) and hysteresis characteristics and which also vary with water content, environmental temperature and even usage history. Many control approaches have been developed to tune IPMC actuators, among which adaptive methods show particularly striking performance. To deal with the IPMC nonlinearity problem, this paper presents a robust discrete adaptive inverse (AI) control approach, which employs an on-line identification technique based on a hybrid model combining the BR operator and the Prandtl-Ishlinskii (PI) hysteresis operator. The newly formed control approach is called discrete adaptive sliding-mode-like control (DASMLC) due to the similarity of its design method to that of a sliding mode controller. The weighted least mean squares (WLMS) identification method was employed to estimate the hybrid IPMC model because of its insensitivity to environmental noise. Experiments with the DASMLC approach and a conventional PID controller were carried out to compare and demonstrate the proposed controller's better performance.
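On-line identification of this kind can be caricatured with the simpler unweighted cousin of WLMS, normalized LMS, identifying a toy FIR 'plant'. The filter length, step size, and plant coefficients below are invented for illustration; this is not the paper's hybrid BR-PI model:

```python
import random

def lms_identify(inputs, outputs, n_taps=3, mu=0.1):
    """Normalized LMS adaptive identification: update FIR weights
    online so the model output tracks the measured plant output."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    for u, d in zip(inputs, outputs):
        buf = [u] + buf[:-1]                      # shift the regressor
        y = sum(wi * bi for wi, bi in zip(w, buf))
        e = d - y                                 # prediction error
        norm = sum(b * b for b in buf) + 1e-9
        w = [wi + mu * e * bi / norm for wi, bi in zip(w, buf)]
    return w

random.seed(1)
true_w = [0.8, -0.3, 0.1]                         # unknown 'plant'
u = [random.uniform(-1, 1) for _ in range(4000)]
d = [sum(a * u[n - i] for i, a in enumerate(true_w) if n - i >= 0)
     for n in range(len(u))]
w_hat = lms_identify(u, d)
```

A weighted variant would scale each update by a noise-dependent weight, which is what gives WLMS its insensitivity to environmental noise.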
NASA Astrophysics Data System (ADS)
Wróbel, Jacek K.; Goodman, Roy H.
2013-07-01
An efficient and accurate numerical method is presented for computing invariant manifolds of maps which arise in the study of dynamical systems. A quasi-interpolation method due to Hering-Bertram et al. is used to decrease the number of points needed to compute a portion of the manifold. Bézier triangular patches are used in this construction, together with adaptivity conditions based on properties of these patches. Several numerical tests are performed, which show the method to compare favorably with previous approaches.
Fontanesi, John; Martinez, Anthony; Boyo, Toritsesan O; Gish, Robert
2015-01-01
Although demands for greater access to hepatology services that are less costly and achieve better outcomes have led to numerous quality improvement initiatives, traditional quality management methods may be inappropriate for hepatology. We empirically tested a model for conducting quality improvement in an academic hepatology program using methods developed to analyze and improve complex adaptive systems. We achieved a 25% increase in volume using 15% more clinical sessions with no change in staff or faculty FTEs, generating a positive margin of 50%. Wait times for next available appointments were reduced from five months to two weeks; unscheduled appointment slots dropped from 7% to less than 1%; "no-show" rates dropped to less than 10%; Press-Ganey scores increased to the 100th percentile. We conclude that framing hepatology as a complex adaptive system may improve our understanding of the complex, interdependent actions required to improve quality of care, patient satisfaction, and cost-effectiveness.
Schnöller, Johannes; Aschenbrenner, Philipp; Hahn, Manuel; Fellner, Johann; Rechberger, Helmut
2014-11-15
Highlights: • An alternative sample comminution procedure for SRF is tested. • Proof of principle is shown on an SRF model mixture. • The biogenic content of the SRF is analyzed with the adapted balance method. • The novel method combines combustion analysis and a data reconciliation algorithm. • Factors for the variance of the analysis results are statistically quantified. - Abstract: The biogenic fraction of a simple solid recovered fuel (SRF) mixture (80 wt% printer paper/20 wt% high density polyethylene) is analyzed with the in-house developed adapted balance method (aBM). This fairly new approach is a combination of combustion elemental analysis (CHNS) and a data reconciliation algorithm based on successive linearisation for evaluation of the analysis results. This method shows great potential as an alternative way to determine the biomass content in SRF. However, the employed analytical technique (CHNS elemental analysis) restricts the probed sample mass to low amounts in the range of a few hundred milligrams. This requires sample comminution to small grain sizes (<200 μm) to generate representative SRF specimens. This is not easily accomplished for certain material mixtures (e.g. SRF with rubber content) by conventional means of sample size reduction. This paper presents a proof-of-principle investigation of the sample preparation and analysis of an SRF model mixture with the use of cryogenic impact milling (final sample comminution) and the adapted balance method (determination of biomass content). The sample preparation methodology so derived (cutting mills and cryogenic impact milling) shows better performance in accuracy and precision for the determination of the biomass content than one based solely on cutting mills. The results for the determination of the biogenic fraction are within 1–5% of the data obtained by the reference methods, the selective dissolution method (SDM) and the ¹⁴C-method (¹⁴C-M).
A real-time regional adaptive exposure method for saving dose-area product in x-ray fluoroscopy
Burion, Steve; Funk, Tobias; Speidel, Michael A.
2013-05-15
Purpose: Reduction of radiation dose in x-ray imaging has been recognized as a high priority in the medical community. Here the authors show that a regional adaptive exposure method can reduce dose-area product (DAP) in x-ray fluoroscopy. The authors' method is particularly geared toward providing dose savings for the pediatric population. Methods: The scanning beam digital x-ray system uses a large-area x-ray source with 8000 focal spots in combination with a small photon-counting detector. An imaging frame is obtained by acquiring and reconstructing up to 8000 detector images, each viewing only a small portion of the patient. Regional adaptive exposure was implemented by varying the exposure of the detector images depending on the local opacity of the object. A family of phantoms ranging in size from infant to obese adult was imaged in anteroposterior view with and without adaptive exposure. The DAP delivered to each phantom was measured in each case, and noise performance was compared by generating noise arrays to represent regional noise in the images. These noise arrays were generated by dividing the image into regions of about 6 mm², calculating the relative noise in each region, and placing the relative noise value of each region in a one-dimensional array (noise array) sorted from highest to lowest. Dose-area product savings were calculated as the difference between unity and the ratio of DAP with adaptive exposure to DAP without adaptive exposure. The authors modified this value by a correction factor that matches the noise arrays where relative noise is highest to report a final dose-area product saving. Results: The average dose-area product saving across the phantom family was (42 ± 8)%, with the highest saving in the child-sized phantom (50%) and the lowest in the phantom mimicking an obese adult (23%). Conclusions: Phantom measurements indicate that a regional adaptive exposure method can produce large DAP savings without compromising noise performance.
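The noise-array construction described in the Methods can be sketched as follows. This is a minimal illustration: the square region size, image dimensions, and the std/mean definition of relative noise are assumptions for demonstration, not the authors' exact implementation.

```python
import numpy as np

def noise_array(image, region=8):
    """Divide an image into small square regions, compute the relative
    noise (std/mean) in each, and return the values sorted from highest
    to lowest -- a 1-D summary of regional noise performance."""
    h, w = image.shape
    values = []
    for i in range(0, h - region + 1, region):
        for j in range(0, w - region + 1, region):
            patch = image[i:i + region, j:j + region]
            mean = patch.mean()
            if mean > 0:
                values.append(patch.std() / mean)  # relative noise
    return np.sort(values)[::-1]  # highest noise first

# Example: a flat synthetic "phantom image" with additive noise.
# Two acquisitions can then be compared by matching the high-noise
# (leftmost) ends of their noise arrays.
rng = np.random.default_rng(0)
img = 100 + 10 * rng.standard_normal((64, 64))
arr = noise_array(img)
```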
The adaptive EVP method for solving the sea ice momentum equation
NASA Astrophysics Data System (ADS)
Kimmritz, Madlen; Danilov, Sergey; Losch, Martin
2016-05-01
Stability and convergence of the modified EVP implementation of the visco-plastic sea ice rheology by Bouillon et al., Ocean Modell., 2013, are analyzed on B- and C-grids. It is shown that the implementation on a B-grid is less restrictive with respect to stability requirements than on a C-grid. On C-grids, convergence is sensitive to the discretization of the viscosities. We suggest adaptively varying the parameters of the pseudotime subcycling of the modified EVP scheme in time and space to satisfy local stability constraints. This new approach generally improves the convergence of the modified EVP scheme and hence its numerical efficiency. The performance of the new "adaptive EVP" approach is illustrated in a series of experiments with the sea ice component of the MIT general circulation model (MITgcm), which is formulated on a C-grid.
Wang, Han; Du, Wencai; Xu, Lingwei
2016-06-24
The conventional channel estimation methods based on a preamble for filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) systems in mobile-to-mobile sensor networks are inefficient. By utilizing the intrinsic sparsity of wireless channels, channel estimation is formulated as a compressive sensing (CS) problem to improve the estimation performance. In this paper, an Adaptive Regularized Compressive Sampling Matching Pursuit (ARCoSaMP) algorithm is proposed. Unlike earlier greedy algorithms, the new algorithm achieves accurate reconstruction by choosing the support set adaptively and exploiting a regularization process, which performs a second selection of atoms within the support set even though the sparsity of the channel is unknown. Simulation results show that CS-based methods obtain significant channel estimation performance improvement compared to conventional preamble-based methods. The proposed ARCoSaMP algorithm outperforms the conventional sparse adaptive matching pursuit (SAMP) algorithm, and yields even better results than the more advanced greedy compressive sampling matching pursuit (CoSaMP) algorithm without prior knowledge of the channel sparsity.
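A minimal sketch of the greedy recovery idea described in the abstract is shown below: the support set is grown adaptively (as in SAMP), and a regularized second selection keeps only candidate atoms whose correlations are comparable in magnitude. The candidate count, the factor-of-two regularization rule (borrowed from ROMP), and the stopping criterion are illustrative assumptions, not the authors' exact ARCoSaMP algorithm.

```python
import numpy as np

def regularize(candidates, corr):
    """Second selection of atoms: among the candidates (sorted by
    correlation magnitude), pick the subset whose magnitudes lie within
    a factor of 2 of each other and that carries the most energy."""
    idx = candidates[np.argsort(-np.abs(corr[candidates]))]
    best, best_energy = idx[:1], np.abs(corr[idx[0]]) ** 2
    for k in range(len(idx)):
        tail = idx[k:]
        group = tail[np.abs(corr[tail]) >= np.abs(corr[idx[k]]) / 2]
        energy = np.sum(np.abs(corr[group]) ** 2)
        if energy > best_energy:
            best, best_energy = group, energy
    return best

def arcosamp_sketch(Phi, y, max_iter=50, tol=1e-8):
    """Greedy sparse estimation without a known sparsity level: grow the
    support adaptively via regularized atom selection, then solve least
    squares on the support (a simplified CoSaMP/SAMP-family sketch)."""
    n = Phi.shape[1]
    support = np.array([], dtype=int)
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(max_iter):
        corr = Phi.T @ residual                 # correlate with residual
        n_cand = max(1, len(support) + 1)       # adaptive candidate count
        candidates = np.argsort(-np.abs(corr))[:n_cand]
        support = np.union1d(support, regularize(candidates, corr)).astype(int)
        x = np.zeros(n)
        x[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        residual = y - Phi @ x
        if np.linalg.norm(residual) < tol:
            break
    return x
```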
NASA Astrophysics Data System (ADS)
Chang, Yong; Zi, Yanyang; Zhao, Jiyuan; Yang, Zhe; He, Wangpeng; Sun, Hailiang
2017-03-01
In guided wave pipeline inspection, echoes reflected from closely spaced reflectors generally overlap, meaning useful information is lost. To solve the overlapping problem, sparse deconvolution methods have been developed in the past decade. However, conventional sparse deconvolution methods have limitations in handling guided wave signals, because the input signal is directly used as the prototype of the convolution matrix, without considering the waveform change caused by the dispersion properties of the guided wave. In this paper, an adaptive sparse deconvolution (ASD) method is proposed to overcome these limitations. First, the Gaussian echo model is employed to adaptively estimate the column prototype of the convolution matrix instead of directly using the input signal as the prototype. Second, the convolution matrix is constructed from the estimated results. Third, the split augmented Lagrangian shrinkage algorithm (SALSA) is introduced to solve the deconvolution problem with high computational efficiency. To verify the effectiveness of the proposed method, guided wave signals obtained from pipeline inspection are investigated numerically and experimentally. Compared to conventional sparse deconvolution methods, e.g. the ℓ₁-norm deconvolution method, the proposed method shows better performance in handling the echo overlap problem in the guided wave signal.
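The pipeline the abstract describes (echo prototype → convolution matrix → sparse deconvolution) can be sketched as follows. For simplicity the ℓ₁ problem is solved here with plain iterative soft-thresholding (ISTA) rather than SALSA, and the Gaussian echo parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_echo(t, alpha=2500.0, fc=50.0):
    """Gaussian echo model: a Gaussian-windowed tone burst used as the
    column prototype of the convolution matrix (parameters illustrative)."""
    return np.exp(-alpha * t**2) * np.cos(2 * np.pi * fc * t)

def conv_matrix(prototype, n):
    """Toeplitz-structured convolution matrix whose columns are shifted
    copies of the estimated echo prototype."""
    m = len(prototype)
    H = np.zeros((n + m - 1, n))
    for j in range(n):
        H[j:j + m, j] = prototype
    return H

def ista(H, y, lam=0.01, n_iter=3000):
    """Solve min_x 0.5||Hx - y||^2 + lam*||x||_1 by iterative
    soft-thresholding; SALSA solves the same l1 problem, just faster."""
    L = np.linalg.norm(H, 2) ** 2            # Lipschitz constant of gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        g = x - (H.T @ (H @ x - y)) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

# Two partially overlapping echoes, then deconvolution back to spikes.
t = np.arange(-25, 26) / 1000.0
p = gaussian_echo(t)
H = conv_matrix(p, 200)
x_true = np.zeros(200)
x_true[60], x_true[100] = 1.0, 0.8
y = H @ x_true
x_hat = ista(H, y)
```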
Adaptive finite element methods for the solution of inverse problems in optical tomography
NASA Astrophysics Data System (ADS)
Bangerth, Wolfgang; Joshi, Amit
2008-06-01
Optical tomography attempts to determine a spatially variable coefficient in the interior of a body from measurements of light fluxes at the boundary. As in many other applications in biomedical imaging, computing solutions in optical tomography is complicated by the fact that one wants to identify an unknown number of relatively small irregularities in this coefficient at unknown locations, for example corresponding to the presence of tumors. To recover them at the resolution needed in clinical practice, one has to use meshes that, if uniformly fine, would lead to intractably large problems with hundreds of millions of unknowns. Adaptive meshes are therefore an indispensable tool. In this paper, we will describe a framework for the adaptive finite element solution of optical tomography problems. It takes into account all steps, starting from the formulation of the problem including constraints on the coefficient, through outer Newton-type nonlinear and inner linear iterations and regularization, and in particular the interplay of these algorithms with discretizing the problem on a sequence of adaptively refined meshes. We will demonstrate the efficiency and accuracy of these algorithms on a set of numerical examples of clinical relevance related to locating lymph nodes in tumor diagnosis.
Li, Xiaoqiang; Quan, Enzhuo M.; Li, Yupeng; Pan, Xiaoning; Zhou, Yin; Wang, Xiaochun; Du, Weiliang; Kudchadker, Rajat J.; Johnson, Jennifer L.; Kuban, Deborah A.; Lee, Andrew K.; Zhang, Xiaodong
2013-08-01
Purpose: This study was designed to validate a fully automated adaptive planning (AAP) method which integrates automated recontouring and automated replanning to account for interfractional anatomical changes in prostate cancer patients receiving adaptive intensity modulated radiation therapy (IMRT) based on daily repeated computed tomography (CT)-on-rails images. Methods and Materials: Nine prostate cancer patients treated at our institution were randomly selected. For the AAP method, contours on each repeat CT image were automatically generated by mapping the contours from the simulation CT image using deformable image registration. An in-house automated planning tool incorporated into the Pinnacle treatment planning system was used to generate the original and the adapted IMRT plans. The cumulative dose–volume histograms (DVHs) of the target and critical structures were calculated based on the manual contours for all plans and compared with those of plans generated by the conventional method, that is, shifting the isocenters by aligning the images based on the center of the volume (COV) of the prostate (prostate COV-aligned). Results: The target coverage from our AAP method was acceptable for every patient, while 1 of the 9 patients showed target underdosing with the prostate COV-aligned plans. Relative to the prostate COV-aligned plans, the AAP method reduced the normalized volume receiving at least 70 Gy (V₇₀) and the mean dose by 8.9% and 6.4 Gy for the rectum, and by 4.3% and 5.3 Gy for the bladder, respectively. Conclusions: The AAP method, which is fully automated, is effective for online replanning to compensate for target dose deficits and critical organ overdosing caused by interfractional anatomical changes in prostate cancer treatment.
Detection of neuronal spikes using an adaptive threshold based on the max-min spread sorting method.
Chan, Hsiao-Lung; Lin, Ming-An; Wu, Tony; Lee, Shih-Tseng; Tsai, Yu-Tai; Chao, Pei-Kuang
2008-07-15
Neuronal spike information can be used to correlate neuronal activity to various stimuli, to find target neural areas for deep brain stimulation, and to decode intended motor commands for brain-machine interfaces. Typically, spike detection is performed using adaptive thresholds determined by the running root-mean-square (RMS) value of the signal. Yet conventional detection methods are susceptible to threshold fluctuations caused by variations in neuronal spike intensity. In the present study we propose a novel adaptive threshold based on the max-min spread sorting method. On microelectrode recording signals and on simulated signals with Gaussian and colored noise, the novel method had the smallest threshold variations and similar or better spike detection performance than either the RMS-based method or other improved methods. Moreover, the detection method described in this paper uses reduced features of the raw signal to determine the threshold, a simple data manipulation that reduces the computational load when dealing with very large amounts of data (as in multi-electrode recordings).
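The conventional running-RMS adaptive threshold that this paper improves upon can be sketched as follows. The window length and multiplier k are illustrative choices; the authors' max-min spread sorting method itself is not reproduced here.

```python
import numpy as np

def running_rms_threshold(signal, window=1000, k=4.0):
    """Conventional adaptive spike threshold: k times the running
    root-mean-square of the signal over consecutive windows.
    (Window length and multiplier are illustrative, not from the paper.)"""
    n = len(signal)
    thr = np.empty(n)
    for start in range(0, n, window):
        seg = signal[start:start + window]
        thr[start:start + window] = k * np.sqrt(np.mean(seg ** 2))
    return thr

def detect_spikes(signal, thr):
    """Sample indices where the signal magnitude crosses the local
    threshold -- candidate neuronal spikes."""
    return np.flatnonzero(np.abs(signal) > thr)

# Example: unit-variance noise with two large embedded spikes.
rng = np.random.default_rng(2)
sig = rng.standard_normal(2000)
sig[500], sig[1500] = 12.0, -12.0
thr = running_rms_threshold(sig, window=500)
detected = detect_spikes(sig, thr)
```

Note how a large spike inflates the RMS of its own window and hence the local threshold; this is exactly the intensity-driven threshold fluctuation the abstract criticizes.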