Accelerated Adaptive Integration Method
2015-01-01
Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (Bromocyclohexane) and the more complex biomolecule Thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083
Self-Adaptive Filon's Integration Method and Its Application to Computing Synthetic Seismograms
NASA Astrophysics Data System (ADS)
Zhang, Hai-Ming; Chen, Xiao-Fei
2001-03-01
Based on the principle of the self-adaptive Simpson integration method, and by incorporating the `fifth-order' Filon's integration algorithm [Bull. Seism. Soc. Am. 73(1983)913], we have proposed a simple and efficient numerical integration method, i.e., the self-adaptive Filon's integration method (SAFIM), for computing synthetic seismograms at large epicentral distances. With numerical examples, we have demonstrated that the SAFIM is not only accurate but also very efficient. This new integration method is expected to be very useful in seismology, as well as in computing similar oscillatory integrals in other branches of physics.
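The self-adaptive Simpson principle that SAFIM builds on can be sketched directly: recursively bisect any subinterval whose local error estimate exceeds its share of the tolerance. A minimal Python illustration follows (plain Simpson weights only; a Filon-type rule would substitute weights that are exact for polynomial-times-oscillatory integrands, which is what makes SAFIM efficient at large epicentral distances):

```python
import math

def adaptive_simpson(f, a, b, tol=1e-9):
    """Recursive self-adaptive Simpson integration: bisect any
    subinterval whose local error estimate exceeds its share of tol."""
    def simpson(fa, fm, fb, h):
        return h / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, m - a)
        right = simpson(fm, frm, fb, b - m)
        # Standard Richardson-style error estimate for Simpson refinement
        if abs(left + right - whole) <= 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, tol / 2.0)
                + recurse(m, b, fm, frm, fb, right, tol / 2.0))

    fa, fb = f(a), f(b)
    m = 0.5 * (a + b)
    fm = f(m)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, b - a), tol)

# Sanity check on a smooth integrand: the exact value of
# the integral of sin(x) over [0, pi] is 2.
val = adaptive_simpson(math.sin, 0.0, math.pi)
```

The adaptivity logic (local error test, tolerance halving per bisection) is the same whether the underlying rule is Simpson's or Filon's; only the quadrature weights change.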
Compact integration factor methods for complex domains and adaptive mesh refinement.
Liu, Xinfeng; Nie, Qing
2010-08-10
The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of the exponential matrices associated with the diffusion operators in two and three spatial dimensions, for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, because of the compact representation of the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF to other curvilinear coordinates, using polar and spherical coordinates as examples. The new cIIF method in polar and spherical coordinates has computational efficiency and stability properties similar to those of cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability of cIIF. Because the second-order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulate a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, in curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed.
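The integration-factor idea underlying IIF (though not the compact matrix machinery of cIIF) can be shown with a first-order scalar sketch: the stiff linear term is absorbed exactly into an exponential factor, so the step size is not limited by the stiffness. The equation and coefficients below are illustrative, not from the paper:

```python
import math

# Integration factor (IIF1) sketch for u' = a*u + F(t, u) with stiff a < 0:
#   u_{n+1} = exp(a*dt) * (u_n + dt * F(t_n, u_n))
# The stiff linear part is handled exactly, so dt need not resolve 1/|a|.
a = -50.0
F = lambda t, u: 50.0 * math.cos(t)    # nonstiff forcing term

dt, steps = 0.05, 20                   # note |1 + a*dt| = 1.5: explicit Euler is unstable here
u_iif, u_euler, t = 0.0, 0.0, 0.0
for _ in range(steps):
    u_iif = math.exp(a * dt) * (u_iif + dt * F(t, u_iif))       # IIF1: stays bounded
    u_euler = u_euler + dt * (a * u_euler + F(t, u_euler))      # explicit Euler: blows up
    t += dt
```

At this step size the explicit Euler iterate grows geometrically while the integration-factor iterate tracks the slowly varying forced solution, which is the stability advantage the abstract exploits for AMR.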
Zhao, Guoliang; Li, Hongxing
2013-01-01
This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of proposed controllers consists in having a dynamical adaptive control gain to establish a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model. PMID:24453897
A Time-Adaptive Integrator Based on Radau Methods for Advection Diffusion Reaction PDEs
NASA Astrophysics Data System (ADS)
Gonzalez-Pinto, S.; Perez-Rodriguez, S.
2009-09-01
The numerical integration of time-dependent PDEs, especially of advection-diffusion-reaction type, in two and three spatial variables (in short, 2D and 3D problems) within the MoL framework is considered. The spatial discretization uses finite differences, and the time integration is carried out by means of the L-stable, third-order formula known as the two-stage Radau IIA method. The key point in solving the resulting large systems of ODEs is not to iterate the stage values of the Radau method to convergence (convergence is very slow on the stiff components), but to perform only a few iterations and take the last computed stage value as the advancing solution. The iterations are carried out by using Approximate Matrix Factorization (AMF) coupled to a Newton-type iteration (SNI) as indicated in [5], which yields an acceptably cheap iteration, similar to the Alternating Direction Implicit (ADI) methods of Peaceman and Rachford (1955). Some stability results for the whole (AMF)-(SNI) process and a local error estimate for adaptive time integration are also given. Numerical results on two standard PDEs are presented and some conclusions about our method and other well-known solvers are drawn.
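For the scalar test equation the two-stage Radau IIA step reduces to a 2x2 linear solve, which makes the properties the abstract relies on, L-stability and stiff accuracy (the last stage value is the advancing solution), easy to verify. This is a hedged sketch only; the AMF/Newton iteration strategy needed for large PDE systems is not reproduced:

```python
import math

# Two-stage Radau IIA (order 3, L-stable) for the scalar test u' = lam*u.
# Stage values g solve (I - dt*lam*A) g = u_n * [1, 1]^T; the method is
# stiffly accurate, so u_{n+1} is simply the last stage value g[1].
A = [[5.0 / 12.0, -1.0 / 12.0],
     [3.0 / 4.0,   1.0 / 4.0]]

def radau2a_step(u, lam, dt):
    z = dt * lam
    # M = I - z*A, solved directly by Cramer's rule (2x2)
    m11, m12 = 1.0 - z * A[0][0], -z * A[0][1]
    m21, m22 = -z * A[1][0], 1.0 - z * A[1][1]
    det = m11 * m22 - m12 * m21
    g2 = (m11 * u - m21 * u) / det   # second component of M^{-1} [u, u]^T
    return g2

u = radau2a_step(1.0, -10.0, 0.1)    # mildly stiff step; exact answer is exp(-1)
exact = math.exp(-1.0)
```

With dt*lam = -1 a single step lands within about 1% of exp(-1) (third-order accuracy), and for dt*lam = -1e6 the step output is essentially zero, reflecting R(infinity) = 0 for Radau IIA.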
ERIC Educational Resources Information Center
Jian, Hu
2012-01-01
The purpose of this mixed method study was to investigate how graduates originating from mainland China adapt to the U.S. academic integrity requirements. In the first, quantitative phase of the study, the research questions focused on understanding the state of academic integrity in China. This guiding question was divided into two sub-questions,…
NASA Astrophysics Data System (ADS)
Feischl, Michael; Gantner, Gregor; Praetorius, Dirk
2015-06-01
We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first-kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence.
Recursive adaptive frame integration limited
NASA Astrophysics Data System (ADS)
Rafailov, Michael K.
2006-05-01
Recursive Frame Integration Limited was proposed as a way to improve frame-integration performance and mitigate issues related to the high data rate needed for conventional frame integration. The technique applies two thresholds - one tuned for optimum probability of detection, the other to manage the required false-alarm rate - and allows a non-linear integration process that, along with Signal-to-Noise Ratio (SNR) gain, gives system designers more capability where cost, weight, or power considerations limit system data rate, processing, or memory capability. However, Recursive Frame Integration Limited may have performance issues when the single-frame SNR is very low. Recursive Adaptive Frame Integration Limited is proposed as a means to improve limited-integration performance at very low single-frame SNR. It combines the benefits of non-linear recursive limited frame integration and adaptive thresholds with a form of conventional frame integration.
Adaptive Algebraic Multigrid Methods
Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J
2004-04-09
Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the restriction of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy from the given matrix alone. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made about the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need for such assumptions by utilizing an adaptive process. The principles that guide the adaptivity are highlighted, as well as their application to the algebraic multigrid solution of certain symmetric positive-definite linear systems.
NASA Astrophysics Data System (ADS)
Chamundeeswari, V. V.; Singh, D.; Singh, K.
2007-12-01
In single-band, single-polarized synthetic aperture radar (SAR) images, the information is limited to intensity and texture only, and it is very difficult to interpret such SAR images without any a priori information. For unsupervised classification of SAR images, M-band wavelet decomposition is performed on the SAR image, and sub-band selection on the basis of energy levels is applied to improve the classification results, since sparse representation of sub-bands degrades classification performance. Textural features are then obtained from the selected sub-bands and integrated with intensity features. An adaptive neuro-fuzzy algorithm is used to improve computational efficiency by extracting significant features. K-means classification is performed on the extracted features and land features are labeled. This classification algorithm involves user-defined parameters. To remove the user dependency and to obtain the maximum achievable classification accuracy, an algorithm is developed in this paper to optimize classification accuracy with respect to the parameters involved in the segmentation process. This is very helpful for developing an automated land-cover monitoring system with SAR, where the optimized parameters need to be identified only once and can then be applied to SAR imagery of the same scene obtained year after year. A single-band, single-polarized SAR image is classified into water, urban and vegetation areas using this method, and overall classification accuracy in the range of 85.92%-93.70% is obtained by comparison with ground truth data.
Advances in Adaptive Control Methods
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2009-01-01
This poster presentation describes recent advances in adaptive control technology developed by NASA. Optimal Control Modification is a novel adaptive law that can improve performance and robustness of adaptive control systems. A new technique has been developed to provide an analytical method for computing time delay stability margin for adaptive control systems.
Adaptive method with intercessory feedback control for an intelligent agent
Goldsmith, Steven Y.
2004-06-22
An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.
Parallel multilevel adaptive methods
NASA Technical Reports Server (NTRS)
Dowell, B.; Govett, M.; Mccormick, S.; Quinlan, D.
1989-01-01
The progress of a project for the design and analysis of a multilevel adaptive algorithm (AFAC) targeted for the Navier-Stokes Computer is discussed. Initial timing results for AFAC, coupled with multigrid and an efficient load balancer, on a 16-node Intel iPSC/2 hypercube are presented.
Adaptive Urban Dispersion Integrated Model
Wissink, A; Chand, K; Kosovic, B; Chan, S; Berger, M; Chow, F K
2005-11-03
Numerical simulations represent a unique predictive tool for understanding the three-dimensional flow fields and associated concentration distributions from contaminant releases in complex urban settings (Britter and Hanna 2003). Utilization of the most accurate urban models, based on fully three-dimensional computational fluid dynamics (CFD) that solves the Navier-Stokes equations with incorporated turbulence models, presents many challenges. We address two in this work: first, a fast but accurate way to incorporate the complex urban terrain, buildings, and other structures in order to enforce proper boundary conditions in the flow solution; second, ways to achieve a level of computational efficiency that allows the models to be run in an automated fashion such that they may be used for emergency response and event reconstruction applications. We have developed a new integrated urban dispersion modeling capability based on FEM3MP (Gresho and Chan 1998, Chan and Stevens 2000), a CFD model from Lawrence Livermore National Laboratory. The integrated capability incorporates fast embedded-boundary mesh generation for geometrically complex problems and fully three-dimensional Cartesian adaptive mesh refinement (AMR). Parallel AMR and embedded-boundary gridding support are provided through the SAMRAI library (Wissink et al. 2001, Hornung and Kohn 2002). Embedded-boundary mesh generation has been demonstrated to be an automatic, fast, and efficient approach for problem setup. It has been used for a variety of geometrically complex applications, including urban applications (Pullen et al. 2005). The key technology we introduce in this work is the application of AMR, which allows high-resolution modeling of certain important features, such as individual buildings and high-resolution terrain (including important vegetative and land-use features). It also allows the urban-scale model to be readily interfaced with coarser-resolution meso- or regional-scale models.
Adaptive Control of Event Integration
ERIC Educational Resources Information Center
Akyurek, Elkan G.; Toffanin, Paolo; Hommel, Bernhard
2008-01-01
Identifying 2 target stimuli in a rapid stream of visual symbols is much easier if the 2nd target appears immediately after the 1st target (i.e., at Lag 1) than if distractor stimuli intervene. As this phenomenon comes with a strong tendency to confuse the order of the targets, it seems to be due to the integration of both targets into the same…
NASA Astrophysics Data System (ADS)
Salinas, Pablo; Pavlidis, Dimitrios; Percival, James; Adam, Alexander; Xie, Zhihua; Pain, Christopher; Jackson, Matthew
2015-11-01
We present a new, high-order, control-volume-finite-element (CVFE) method with discontinuous representation for pressure and velocity to simulate multiphase flow in heterogeneous porous media. Time is discretized using an adaptive, fully implicit method. Heterogeneous geologic features are represented as volumes bounded by surfaces. Our approach conserves mass and does not require the use of CVs that span domain boundaries. Computational efficiency is increased by use of dynamic mesh optimization. We demonstrate that the approach, amongst other features, accurately preserves sharp saturation changes associated with high aspect ratio geologic domains, allowing efficient simulation of flow in highly heterogeneous models. Moreover, accurate solutions are obtained at lower cost than an equivalent fine, fixed mesh and conventional CVFE methods. The use of implicit time integration allows the method to efficiently converge using highly anisotropic meshes without having to reduce the time-step. The work is significant for two key reasons. First, it resolves a long-standing problem associated with the use of classical CVFE methods. Second, it reduces computational cost/increases solution accuracy through the use of dynamic mesh optimization and time-stepping with large Courant number. Funding for Dr P. Salinas from ExxonMobil is gratefully acknowledged.
Protecting genome integrity during CRISPR immune adaptation.
Wright, Addison V; Doudna, Jennifer A
2016-10-01
Bacterial CRISPR-Cas systems include genomic arrays of short repeats flanking foreign DNA sequences and provide adaptive immunity against viruses. Integration of foreign DNA must occur specifically to avoid damaging the genome or the CRISPR array, but surprisingly promiscuous activity occurs in vitro. Here we reconstituted full-site DNA integration and show that the Streptococcus pyogenes type II-A Cas1-Cas2 integrase maintains specificity in part through limitations on the second integration step. At non-CRISPR sites, integration stalls at the half-site intermediate, thereby enabling reaction reversal. S. pyogenes Cas1-Cas2 is highly specific for the leader-proximal repeat and recognizes the repeat's palindromic ends, thus fitting a model of independent recognition by distal Cas1 active sites. These findings suggest that DNA-insertion sites are less common than suggested by previous work, thereby preventing toxicity during CRISPR immune adaptation and maintaining host genome integrity.
Schwarz, Dominik; Dörrstein, Jörg; Kugler, Sabine; Schieder, Doris; Zollfrank, Cordt; Sieber, Volker
2016-09-01
An integrated refining and pulping process for ensiled biomass from permanent grassland was established on laboratory scale. The liquid phase, containing the majority of water-soluble components, including 24% of the initial dry matter (DM), was first separated by mechanical pressing. The fiber fraction was subjected to high solid load saccharification (25% DM) to enhance the lignin content in the feed for subsequent organosolvation. The saccharification enzymes were pre-selected applying experimental design approaches. Cellulose convertibility was improved by a secondary pressing step during liquefaction. Combined saccharification and organosolvation showed high degree of saccharide solubilization with recovery of 98% of the glucan and 73% of the xylan from the fiber fraction in the hydrolysates, and enabled the recovery of 41% of the grass silage lignin. The effects of the treatment were confirmed by XRD and SEM tracking of cellulose crystallinity and fiber morphology throughout the pulping procedure.
Adaptive Through-Thickness Integration Strategy for Shell Elements
NASA Astrophysics Data System (ADS)
Burchitz, I. A.; Meinders, T.; Huétink, J.
2007-05-01
Reliable numerical prediction of springback in sheet metal forming is essential for the automotive industry. There are numerous factors that influence the accuracy of springback prediction by using the finite element method. One of the reasons is the through-thickness numerical integration of shell elements. It is known that even for simple problems the traditional integration schemes may require up to 50 integration points to achieve a high accuracy of springback analysis. An adaptive through-thickness integration strategy can be a good alternative. The strategy defines abscissas and weights depending on the integrand's properties and, thus, can adapt itself to improve the accuracy of integration. A concept of the adaptive through-thickness integration strategy for shell elements is presented. It is tested using a simple problem of bending of a beam under tension. Results show that for a similar set of material and process parameters the adaptive Simpson's rule with 7 integration points performs better than the traditional trapezoidal rule with 50 points. The adaptive through-thickness integration strategy for shell elements can improve the accuracy of springback prediction at minimal costs.
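To illustrate why adaptivity pays off through the thickness, consider a hypothetical elastic/perfectly-plastic stress profile with a kink at the elastic-plastic interface; such kinks are exactly what defeats fixed-point rules. A plain adaptive Simpson rule stands in here for the paper's strategy, and the material values are illustrative, not from the paper:

```python
def stress(t, E=1.0, y=0.2):
    """Hypothetical elastic/perfectly-plastic stress through thickness t:
    linear up to yield stress E*y, then constant (illustrative values)."""
    return max(-E * y, min(E * y, E * t))

def adaptive_simpson(f, a, b, tol):
    fa, fm, fb = f(a), f(0.5 * (a + b)), f(b)
    whole = (b - a) / 6.0 * (fa + 4.0 * fm + fb)
    return _refine(f, a, b, fa, fm, fb, whole, tol)

def _refine(f, a, b, fa, fm, fb, whole, tol):
    # Bisect until the Richardson error estimate meets the local tolerance;
    # refinement automatically clusters points around the yield kink.
    m = 0.5 * (a + b)
    lm, rm = 0.5 * (a + m), 0.5 * (m + b)
    flm, frm = f(lm), f(rm)
    left = (m - a) / 6.0 * (fa + 4.0 * flm + fm)
    right = (b - m) / 6.0 * (fm + 4.0 * frm + fb)
    if abs(left + right - whole) <= 15.0 * tol:
        return left + right + (left + right - whole) / 15.0
    return (_refine(f, a, m, fa, flm, fm, left, 0.5 * tol)
            + _refine(f, m, b, fm, frm, fb, right, 0.5 * tol))

# Bending moment per unit width: integral of t*stress(t) over the thickness.
M = adaptive_simpson(lambda t: t * stress(t), -0.5, 0.5, 1e-10)
exact = 2.0 * 0.2**3 / 3.0 + 0.2 * (0.5**2 - 0.2**2)   # closed form for this profile
```

A fixed uniform rule spends most of its points where the integrand is already polynomial; the adaptive rule concentrates them at the kinks at t = +/-0.2, which mirrors the 7-versus-50-point comparison quoted in the abstract.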
Milne, Roger Brent
1995-12-01
This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.
Method of adaptive artificial viscosity
NASA Astrophysics Data System (ADS)
Popov, I. V.; Fryazinov, I. V.
2011-09-01
A new finite-difference method for the numerical solution of the gas dynamics equations is proposed. The method is a uniform, monotone finite-difference scheme with second-order approximation in time and space outside the domains of shock and compression waves. It is based on introducing an adaptive artificial viscosity (AAV) into the gas dynamics equations. In this paper, the method is analyzed for 2D geometry. Test computations of the movement of contact discontinuities and shock waves and of the breakup of discontinuities are demonstrated.
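The flavor of adaptive artificial viscosity can be conveyed with a standard local Lax-Friedrichs (Rusanov) flux for the inviscid Burgers equation, where the dissipation coefficient adapts to the local wave speed. This is a generic sketch of the idea, not the specific AAV construction of Popov and Fryazinov:

```python
# Inviscid Burgers u_t + (u^2/2)_x = 0 with a local Lax-Friedrichs
# (Rusanov) flux: the dissipation coefficient adapts to the local wave
# speed, keeping shocks monotone while smooth regions see little smearing.
nx, dx = 100, 0.01
u = [1.0 if (i + 0.5) * dx < 0.5 else 0.0 for i in range(nx)]  # right-moving shock
dt, T = 0.004, 0.4                                             # CFL = dt*max|u|/dx = 0.4

def flux(a, b):
    s = max(abs(a), abs(b))                    # adaptive viscosity coefficient
    return 0.5 * (0.5 * a * a + 0.5 * b * b) - 0.5 * s * (b - a)

for _ in range(int(round(T / dt))):
    f = [flux(u[i], u[i + 1]) for i in range(nx - 1)]
    un = u[:]
    for i in range(1, nx - 1):
        un[i] = u[i] - dt / dx * (f[i] - f[i - 1])
    u = un                                     # u[0], u[-1] held fixed (inflow/outflow)

# Rankine-Hugoniot shock speed is (1+0)/2 = 0.5, so the front sits near x = 0.7.
```

Because the scheme is monotone under this CFL condition, the solution stays within its initial bounds (no spurious oscillations), which is the property artificial-viscosity constructions aim for.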
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain so as to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.
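For context, here is a minimal scalar model-reference adaptive control loop, the baseline onto which the optimal control modification adds its damping term. The modification term itself is omitted (its exact form is given in the paper); the plant, gains, and horizon below are illustrative:

```python
# Minimal scalar MRAC sketch (forward-Euler simulation).
# Plant: x' = x + u (unstable open loop); reference model: xm' = -xm + r.
# Adaptive gains kx, kr are driven by the tracking error e = x - xm.
# The optimal control modification adds an extra damping term to these
# update laws (omitted here) so the gain gamma can be made large without
# exciting high-frequency oscillations.
dt, T, gamma, r = 0.001, 30.0, 2.0, 1.0
x = xm = kx = kr = e = 0.0
for _ in range(int(T / dt)):
    e = x - xm
    u = kx * x + kr * r
    x += dt * (x + u)
    xm += dt * (-xm + r)
    kx += dt * (-gamma * e * x)      # standard MRAC gradient update
    kr += dt * (-gamma * e * r)
```

For this plant the ideal gains are kx = -2, kr = 1; a Lyapunov argument (V = e^2/2 + parameter error over 2*gamma) shows the tracking error is driven to zero even though the gains themselves need not converge.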
Robust rotational-velocity-Verlet integration methods
NASA Astrophysics Data System (ADS)
Rozmanov, Dmitri; Kusalik, Peter G.
2010-05-01
Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method of Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time-consistent positions and momenta. The second method is also formulated in terms of quaternions, but it is not quaternion-specific and can easily be adapted to any other orientational representation. Both methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.
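The velocity-Verlet structure underlying both proposed rotational integrators is the familiar kick-drift-kick scheme. A translational sketch for a harmonic oscillator shows the hallmark bounded energy error; the rotational variants in the paper replace the position drift with a quaternion update:

```python
# Velocity-Verlet for a 1D harmonic oscillator (unit mass and stiffness).
def accel(x):
    return -x

dt, steps = 0.01, 10000
x, v = 1.0, 0.0
a = accel(x)
for _ in range(steps):
    v += 0.5 * dt * a          # half kick
    x += dt * v                # drift
    a = accel(x)
    v += 0.5 * dt * a          # half kick

# Symplecticity keeps the energy error bounded at O(dt^2) over long runs.
energy = 0.5 * v * v + 0.5 * x * x
```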
Automatic numerical integration methods for Feynman integrals through 3-loop
NASA Astrophysics Data System (ADS)
de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Olagbemi, O.
2015-05-01
We give numerical integration results for Feynman loop diagrams through 3-loop, such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The Dqags algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infrared) or UV (ultraviolet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities.
Ying, Wenjun; Henriquez, Craig S
2015-01-01
An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, by an operator-splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reaction part of the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise-linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local change of membrane properties. Numerical examples demonstrating the efficiency and accuracy of the adaptive algorithm are presented.
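A 1D sketch of the operator-splitting idea (illustrative only, not the Purkinje-network discretization itself): a backward-Euler diffusion solve via the tridiagonal Thomas algorithm, followed by a backward-Euler reaction step per node. A linear decay reaction is chosen so the result can be checked against a closed-form solution:

```python
import math

# Lie splitting for u_t = D*u_xx - u on (0,1) with u = 0 at both ends:
# each step is an implicit (backward-Euler) diffusion solve followed by
# an implicit reaction step applied independently at every node.
D, dt, steps = 0.1, 0.01, 100
n, dx = 99, 1.0 / 100.0                       # interior nodes
u = [math.sin(math.pi * (i + 1) * dx) for i in range(n)]
r = dt * D / dx**2

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub/main/super diagonals a, b, c."""
    m = len(d)
    cp, dp = [0.0] * m, [0.0] * m
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, m):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * m
    x[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

sub = [-r] * n; main = [1.0 + 2.0 * r] * n; sup = [-r] * n
for _ in range(steps):
    u = thomas(sub, main, sup, u)             # implicit diffusion
    u = [ui / (1.0 + dt) for ui in u]         # implicit reaction u' = -u

# Exact solution at t = 1: exp(-(D*pi^2 + 1)) * sin(pi*x)
```

Both half-steps are implicit, so the combined step inherits their unconditional stability, which is what permits the large timesteps mentioned in the abstract.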
A new orientation-adaptive interpolation method.
Wang, Qing; Ward, Rabab Kreidieh
2007-04-01
We propose an isophote-oriented, orientation-adaptive interpolation method. The proposed method employs an interpolation kernel that adapts to the local orientation of isophotes, and the pixel values are obtained through an oriented, bilinear interpolation. We show that, by doing so, the curvature of the interpolated isophotes is reduced, and, thus, zigzagging artifacts are largely suppressed. Analysis and experiments show that images interpolated using the proposed method are visually pleasing and almost artifact free.
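For reference, plain axis-aligned bilinear interpolation is sketched below; the proposed method evaluates the same kind of kernel in a locally rotated frame aligned with the isophote direction, which is what suppresses the zigzagging:

```python
def bilinear(img, x, y):
    """Axis-aligned bilinear interpolation at fractional position (x, y);
    img is a list of rows, indexed as img[row][col]."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0][x0]
            + fx * (1 - fy) * img[y0][x0 + 1]
            + (1 - fx) * fy * img[y0 + 1][x0]
            + fx * fy * img[y0 + 1][x0 + 1])

img = [[0.0, 10.0],
       [20.0, 30.0]]
center = bilinear(img, 0.5, 0.5)   # average of the four neighbours
```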
The Method of Adaptive Comparative Judgement
ERIC Educational Resources Information Center
Pollitt, Alastair
2012-01-01
Adaptive Comparative Judgement (ACJ) is a modification of Thurstone's method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better.…
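ACJ scores are typically obtained by fitting a Bradley-Terry (Rasch-type) model to the judges' pairwise decisions; the adaptivity lies in choosing informative pairings, which is not modeled here. A minimal gradient-ascent fit on illustrative data:

```python
import math

# Bradley-Terry model for pairwise judgements: the probability that
# script i beats script j is sigmoid(theta_i - theta_j). Scores theta
# are estimated by gradient ascent on the log-likelihood.
wins = [("A", "B"), ("A", "B"), ("B", "A"),
        ("B", "C"), ("B", "C"), ("C", "B"),
        ("A", "C"), ("A", "C"), ("C", "A")]   # illustrative: each pair judged 3 times

theta = {"A": 0.0, "B": 0.0, "C": 0.0}
lr = 0.1
for _ in range(2000):
    grad = {k: 0.0 for k in theta}
    for w, l in wins:
        p = 1.0 / (1.0 + math.exp(-(theta[w] - theta[l])))
        grad[w] += 1.0 - p                    # winner pulled up by surprise of the win
        grad[l] -= 1.0 - p
    for k in theta:
        theta[k] += lr * grad[k]
    mean = sum(theta.values()) / len(theta)
    theta = {k: v - mean for k, v in theta.items()}   # fix the scale origin
```

With each pair decided 2-1, the fitted scores order the scripts A > B > C, matching the judges' aggregate preferences.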
Wang, Cheng-Hang; Liu, Baw-Jhiune; Wu, Lawrence Shih-Hsin
2012-02-01
Asthma is one of the most common chronic diseases in children. It is caused by complex interactions between various genetic factors and environmental allergens. The study aims to integrate an adaptive neuro-fuzzy inference system (ANFIS) with classification analysis methods to forecast the association of asthma susceptibility genes with three serum IgE groups. The ANFIS model was trained and tested with data sets obtained from 425 asthmatic subjects and 483 non-asthma subjects from the Taiwanese population. We assessed 13 single-nucleotide polymorphisms (SNPs) in seven well-known asthma susceptibility genes. First, the proposed ANFIS model learned to reduce the input features from the 13 SNPs; second, classification was used to assign the serum IgE groups from the simulated SNP results. The performance of the ANFIS model and the classification accuracies confirmed that the integration of ANFIS with classification analysis has potential in association discovery.
Fuzzy Adaptive Cubature Kalman Filter for Integrated Navigation Systems.
Tseng, Chien-Hao; Lin, Sheng-Fuu; Jwo, Dah-Jing
2016-07-26
This paper presents a sensor fusion method based on the combination of the cubature Kalman filter (CKF) and a fuzzy logic adaptive system (FLAS) for integrated navigation systems, such as GPS/INS (Global Positioning System/inertial navigation system) integration. The third-degree spherical-radial cubature rule applied in the CKF has been employed to avoid numerical instability in the system model. In navigation integration processing, the performance of nonlinear-filter-based estimation of the position and velocity states may severely degrade owing to modeling errors caused by dynamics uncertainties of the vehicle. In order to overcome the difficulty of selecting the process noise covariance through personal experience or numerical simulation, a scheme called the fuzzy adaptive cubature Kalman filter (FACKF) is presented, in which the FLAS adjusts the weighting factor of the process noise covariance matrix. The FLAS is incorporated into the CKF framework as a mechanism for timely tuning of the process noise covariance matrix based on a degree-of-divergence (DOD) parameter. The proposed FACKF algorithm shows promising accuracy improvement compared to the extended Kalman filter (EKF), unscented Kalman filter (UKF), and CKF approaches.
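The third-degree spherical-radial cubature rule mentioned here is concrete enough to sketch: 2n equally weighted points placed at +/-sqrt(n) along the principal axes of the covariance reproduce the mean and covariance of a Gaussian exactly. A 2D example with illustrative mu and P (hand-rolled Cholesky; not the full FACKF filter loop):

```python
import math

# Third-degree spherical-radial cubature rule: 2n points xi = +/-sqrt(n)*e_i
# with equal weights 1/(2n). Propagated through x = mu + S*xi, where
# P = S*S^T, the points match the mean and covariance of N(mu, P) exactly.
n = 2
mu = [1.0, -2.0]
P = [[2.0, 0.5],
     [0.5, 1.0]]

# Cholesky factor S (lower triangular) of the 2x2 covariance
s11 = math.sqrt(P[0][0])
s21 = P[1][0] / s11
s22 = math.sqrt(P[1][1] - s21 * s21)
S = [[s11, 0.0], [s21, s22]]

pts, w = [], 1.0 / (2 * n)
for i in range(n):
    for sign in (+1.0, -1.0):
        xi = [0.0] * n
        xi[i] = sign * math.sqrt(n)
        pts.append([mu[row] + sum(S[row][col] * xi[col] for col in range(n))
                    for row in range(n)])

mean = [sum(w * p[row] for p in pts) for row in range(n)]
cov = [[sum(w * (p[row] - mean[row]) * (p[col] - mean[col]) for p in pts)
        for col in range(n)] for row in range(n)]
```

Recovering mu and P from the weighted points is the cubature analogue of the unscented transform's sigma-point consistency check.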
Adaptive Discontinuous Galerkin Methods in Multiwavelets Bases
Archibald, Richard K; Fann, George I; Shelton Jr, William Allison
2011-01-01
We use a multiwavelet basis with the Discontinuous Galerkin (DG) method to produce a multi-scale DG method. We apply this Multiwavelet DG method to convection and convection-diffusion problems in multiple dimensions. Merging the DG method with multiwavelets allows the adaptivity in the DG method to be resolved through manipulation of multiwavelet coefficients rather than grid manipulation. Additionally, the Multiwavelet DG method is tested on non-linear equations in one dimension and on the cubed sphere.
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics
Anderson, R W; Pember, R B; Elliott, N S
2002-10-19
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.
Adjoint Methods for Guiding Adaptive Mesh Refinement in Tsunami Modeling
NASA Astrophysics Data System (ADS)
Davis, B. N.; LeVeque, R. J.
2016-12-01
One difficulty in developing numerical methods for tsunami modeling is the fact that solutions contain time-varying regions where much higher resolution is required than elsewhere in the domain, particularly when tracking a tsunami propagating across the ocean. The open source GeoClaw software deals with this issue by using block-structured adaptive mesh refinement to selectively refine around propagating waves. For problems where only a target area of the total solution is of interest (e.g., one coastal community), a method that allows identifying and refining the grid only in regions that influence this target area would significantly reduce the computational cost of finding a solution. In this work, we show that solving the time-dependent adjoint equation and using a suitable inner product with the forward solution allows more precise refinement of the relevant waves. We present the adjoint methodology first in one space dimension for illustration and in a broad context since it could also be used in other adaptive software, and potentially for other tsunami applications beyond adaptive refinement. We then show how this adjoint method has been integrated into the adaptive mesh refinement strategy of the open source GeoClaw software and present tsunami modeling results showing that the accuracy of the solution is maintained and the computational time required is significantly reduced through the integration of the adjoint method into adaptive mesh refinement.
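The adjoint-guided flagging step can be sketched in one dimension: cells are refined only where the pointwise product of the forward and adjoint solutions is large, i.e., where waves actually influence the target area. This toy snapshot is an assumption-laden illustration, not GeoClaw code.

```python
import numpy as np

def flag_cells(forward, adjoint, tol):
    # Inner product of forward and adjoint solutions, cell by cell:
    # only cells whose waves reach the target functional score highly.
    return np.abs(forward * adjoint) > tol

# Illustrative snapshot: forward waves near x=0.3 and x=0.7; the adjoint
# (propagated back from the target area) overlaps only the first one.
x = np.linspace(0.0, 1.0, 200)
forward = np.exp(-200 * (x - 0.3) ** 2) + np.exp(-200 * (x - 0.7) ** 2)
adjoint = np.exp(-200 * (x - 0.3) ** 2)
flags = flag_cells(forward, adjoint, tol=1e-3)
# The wave that influences the target is flagged for refinement;
# the wave that misses the target is not, saving computation.
hit = flags[np.argmin(np.abs(x - 0.3))]
miss = flags[np.argmin(np.abs(x - 0.7))]
```

In the full method, this inner product is evaluated over the space-time AMR hierarchy rather than a single snapshot.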
Integrating Adaptive Games in Student-Centered Virtual Learning Environments
ERIC Educational Resources Information Center
del Blanco, Angel; Torrente, Javier; Moreno-Ger, Pablo; Fernandez-Manjon, Baltasar
2010-01-01
The increasing adoption of e-Learning technology is facing new challenges, such as how to produce student-centered systems that can be adapted to each student's needs. In this context, educational video games are proposed as an ideal medium to facilitate adaptation and tracking of students' performance for assessment purposes, but integrating the…
Integrated Framework for an Urban Climate Adaptation Tool
NASA Astrophysics Data System (ADS)
Omitaomu, O.; Parish, E. S.; Nugent, P.; Mei, R.; Sylvester, L.; Ernst, K.; Absar, M.
2015-12-01
Cities have an opportunity to become more resilient to future climate change through investments made in urban infrastructure today. However, most cities lack access to the credible high-resolution climate change projections needed to assess and address potential vulnerabilities to future climate variability. We therefore present an integrated framework for developing an urban climate adaptation tool (Urban-CAT), which consists of four modules. First, it provides climate projections at multiple spatial resolutions for characterizing the urban landscape. Second, these projections are combined with socio-economic data, using leading and lagging indicators, to assess landscape vulnerability to climate extremes (e.g., urban flooding). Third, a neighborhood-scale modeling approach identifies candidate areas for adaptation strategies (e.g., green infrastructure as an adaptation strategy for urban flooding). Finally, all of these capabilities are made available as a web-based tool to support decision-making and communication at the neighborhood and city levels. In this paper, we present some of the methods that drive each module and demonstrate some of the capabilities available to date, using the City of Knoxville, Tennessee, as a case study.
Adaptive robust controller based on integral sliding mode concept
NASA Astrophysics Data System (ADS)
Taleb, M.; Plestan, F.
2016-09-01
This paper proposes, for a class of uncertain nonlinear systems, an adaptive controller based on adaptive second-order sliding mode control and integral sliding mode control concepts. The adaptation strategy solves the problem of gain tuning and has the advantage of reducing chattering. Moreover, only limited information about the perturbations and uncertainties is required. The control is composed of two parts: an adaptive part whose objective is to reject the perturbations and system uncertainties, and a second part chosen so that the nominal part of the system is stabilized at zero. To illustrate the effectiveness of the proposed approach, an application to an academic example is shown with simulation results.
Adaptive integral robust control and application to electromechanical servo systems.
Deng, Wenxiang; Yao, Jianyong
2017-03-01
This paper proposes a continuous adaptive integral robust control with robust integral of the sign of the error (RISE) feedback for a class of uncertain nonlinear systems, in which the RISE feedback gain is adapted online to ensure robustness against disturbances without prior knowledge of the bound on the additive disturbances. In addition, an adaptive compensation term integrated with the proposed adaptive RISE feedback is constructed to further reduce design conservatism when the system also contains parametric uncertainties. Lyapunov analysis shows that the proposed controllers guarantee asymptotic convergence of the tracking errors to zero with continuous control effort. To illustrate the high-performance nature of the developed controllers, numerical simulations are provided. Finally, an actual motor-driven electromechanical servo system is studied as an application case, with some specific design considerations, and comparative experimental results verify the effectiveness of the proposed controllers.
High integrity adaptive SMA components for gas turbine applications
NASA Astrophysics Data System (ADS)
Webster, John
2006-03-01
The use of Shape Memory Alloys (SMAs) is growing rapidly. They have been under serious development for aerospace applications for over 15 years, but are still restricted to niche areas and small-scale applications, and very few have found their way into service. Whilst they have been predominantly aimed at airframe applications, they also offer major advantages for adaptive gas turbine components. The harsh environment within a gas turbine, with its high loads, temperatures and vibration excitation, provides considerable challenges which must be met whilst still delivering high-integrity, lightweight, aerodynamic and efficient structures. A novel method has been developed which will deliver high-integrity, stiff mechanical components that can provide massive shape-change capability without the need for conventional moving parts. The lead application is a shape-changing engine nozzle that provides noise reduction at take-off but retracts at cruise to remove any performance penalty. The technology also promises significant advantages for other gas turbine applications such as shape-changing aerofoils, heat exchanger controls, and intake shapes. The same mechanism should be directly applicable to other areas such as airframes, automotive and civil structures, where similar high-integrity requirements exist.
2012-02-29
index location of the corresponding neighboring node. The system of semi-discrete ODEs as in equation (5) is integrated using the four-step Runge-Kutta...distribution of the sensor is also modeled by a spatial Dirac delta function. It is assumed that there is no noise and the measurement device provides exact...conducted on a 5-node Linux cluster running Red Hat 3.4.6. The serial code was implemented on one of the nodes with a Quad Core Intel Xeon processor running
NASA Astrophysics Data System (ADS)
Curran, B. R.; Routhier, M.; Mulukutla, G. K.; Gopalakrishnan, G.
2010-12-01
The Government Accountability Office’s report, Climate Change Adaptation, examines federal, state, local, and international mitigation actions for climate change and sea-level rise. The report specifically addresses the dearth of site-specific information relating to the effects of climate change on a localized scale and the challenges this poses for the development of adaptation strategies. We are developing a model that will begin to regionalize climate change projections for the purpose of projecting the effects of climate change on coastal cultural heritage. As global sea level increases, so too will the number of historically significant landscapes that are threatened by sea-level rise. Because of this, historical preservationists will require greater availability of pertinent information in order to contend with the threats posed by climate change and rising sea levels. These threats will have a far greater impact on Low Elevation Coastal Zone (LECZ) areas. The US ranks third for land mass classified as LECZ and has an estimated population of 22 million people living within these regions. Many of these areas have had high population densities due to the concentration of marine fishery resources, ease of transportation, and agricultural associations with river deltas. These areas have acted as catalysts for the evolution of various societies and cultures, and contain a concentrated stratification of cultural heritage deposits. The development of models for the assessment of spatial/temporal impacts of climate change on coastal cultural heritage will play a significant role in defining long-term preservation needs on a regional scale. We are coordinating groundwater seepage models, tidal estuary models, and regionalized Global Climate Models with localized geophysical assessments and GIS data sets. Through the digitization and rectification of various contemporary and historical maps we have developed a GIS data set that reflects the evolution of the
Domain adaptive boosting method and its applications
NASA Astrophysics Data System (ADS)
Geng, Jie; Miao, Zhenjiang
2015-03-01
Differences of data distributions widely exist among datasets, i.e., domains. For many pattern recognition, natural language processing, and content-based analysis systems, a decrease in performance caused by the domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach with extensions to cover the domain differences between the source and target domains. Two main stages are contained in this approach: source-domain clustering and source-domain sample selection. By iteratively adding the selected training samples from the source domain, the discrimination model is able to achieve better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for domains with large-scale samples and is easy to extend for multisource adaptation. We implement this method on three computer vision systems: the skin detection model in single images, the video concept detection model, and the object classification model. In the experiments, we compare the performances of several commonly used methods and the proposed DAB. Under most situations, the DAB is superior.
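DAB extends AdaBoost, so the reweighting step it builds on can be sketched with threshold stumps on 1-D data; the clustering and source-sample selection stages described in the abstract are omitted from this minimal sketch.

```python
import numpy as np

def stump_predict(x, threshold, sign):
    # Decision stump: predict +1 on one side of the threshold, -1 on the other.
    return sign * np.where(x > threshold, 1, -1)

def adaboost_round(x, y, w):
    # One AdaBoost round: pick the stump with the lowest weighted error
    # under the current weights, then upweight the misclassified samples.
    best = None
    for thr in x:
        for sign in (1, -1):
            err = float(np.sum(w * (stump_predict(x, thr, sign) != y)))
            if best is None or err < best[0]:
                best = (err, thr, sign)
    err, thr, sign = best
    err = max(err, 1e-12)                       # guard against log(0)
    alpha = 0.5 * np.log((1 - err) / err)       # stump weight in the ensemble
    w = w * np.exp(-alpha * y * stump_predict(x, thr, sign))
    return w / w.sum(), (alpha, thr, sign)

x = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 0.9])
y = np.array([-1, -1, -1, 1, 1, 1])
w = np.full(len(x), 1.0 / len(x))
w, (alpha, thr, sign) = adaboost_round(x, y, w)
```

In DAB, the weights resulting from rounds like this one would inform which clustered source-domain samples to add next, judged against a small target-domain validation set.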
NASA Astrophysics Data System (ADS)
Zeff, Harrison B.; Herman, Jonathan D.; Reed, Patrick M.; Characklis, Gregory W.
2016-09-01
A considerable fraction of urban water supply capacity serves primarily as a hedge against drought. Water utilities can reduce their dependence on firm capacity and forestall the development of new supplies using short-term drought management actions, such as conservation and transfers. Nevertheless, new supplies will often be needed, especially as demands rise due to population growth and economic development. Planning decisions regarding when and how to integrate new supply projects are fundamentally shaped by the way in which short-term adaptive drought management strategies are employed. To date, the challenges posed by long-term infrastructure sequencing and adaptive short-term drought management are treated independently, neglecting important feedbacks between planning and management actions. This work contributes a risk-based framework that uses continuously updating risk-of-failure (ROF) triggers to capture the feedbacks between short-term drought management actions (e.g., conservation and water transfers) and the selection and sequencing of a set of regional supply infrastructure options over the long term. Probabilistic regional water supply pathways are discovered for four water utilities in the "Research Triangle" region of North Carolina. Furthermore, this study distinguishes the status-quo planning path of independent action (encompassing utility-specific conservation and new supply infrastructure only) from two cooperative formulations: "weak" cooperation, which combines utility-specific conservation and infrastructure development with regional transfers, and "strong" cooperation, which also includes jointly developed regional infrastructure to support transfers. Results suggest that strong cooperation aids utilities in meeting their individual objectives at substantially lower costs and with less overall development. These benefits demonstrate how an adaptive, rule-based decision framework can coordinate integrated solutions that would not be
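The risk-of-failure trigger idea can be sketched numerically: simulate an ensemble of storage trajectories, estimate the probability of dropping below a failure level, and fire conservation or transfer actions when that estimate crosses thresholds. The thresholds and the toy random-walk storage model below are illustrative assumptions, not the study's model.

```python
import numpy as np

def risk_of_failure(storage_paths, failure_level):
    # ROF: fraction of simulated scenarios in which storage drops below
    # the failure level at any point in the planning horizon.
    failures = (storage_paths < failure_level).any(axis=1)
    return float(failures.mean())

def drought_actions(rof, alpha_conserve=0.02, alpha_transfer=0.10):
    # Rule-based triggers: cheap conservation fires at a low ROF,
    # transfers only when short-term risk is higher (thresholds illustrative).
    return {"conserve": rof >= alpha_conserve, "transfer": rof >= alpha_transfer}

rng = np.random.default_rng(0)
# 1000 scenarios x 52 weeks of storage, random-walk drawdown from full:
paths = 100.0 + np.cumsum(rng.normal(-0.3, 2.0, (1000, 52)), axis=1)
rof = risk_of_failure(paths, failure_level=70.0)
actions = drought_actions(rof)
```

In the full framework, the same continuously updated ROF estimate also triggers long-term decisions, i.e., when to begin constructing the next infrastructure option in the sequence.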
Structured adaptive grid generation using algebraic methods
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.
1993-01-01
The accuracy of the numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively fine, computationally expensive grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach where a function which contains a measure of grid smoothness, orthogonality and volume variation is minimized by using a variational principle. This approach provides a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm where the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring the points in the large-error regions to attract other points and points in the low-error regions to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three-step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial, step is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and last step is to reevaluate the flow property by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration
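The equidistribution law in the first step has a compact 1-D form: choose new grid points so that each interval carries an equal share of the integral of a weight function built from the solution. A minimal sketch:

```python
import numpy as np

def equidistribute(x, w, n_new=None):
    # Redistribute grid points so every interval carries an equal share of
    # the integral of the weight (error-estimate) function w(x).
    n_new = len(x) if n_new is None else n_new
    # Cumulative "mass" W(x) of the weight, via the trapezoidal rule:
    W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    # Invert W at equally spaced mass targets to get the new grid:
    targets = np.linspace(0.0, W[-1], n_new)
    return np.interp(targets, W, x)

# Weight peaked near x = 0.5 (e.g., a steep flow-field gradient):
x = np.linspace(0.0, 1.0, 41)
w = 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)
x_new = equidistribute(x, w)
# Grid points cluster where the weight is large:
spacing_mid = np.diff(x_new)[len(x_new) // 2]
spacing_edge = np.diff(x_new)[0]
```

This corresponds to the paper's first two steps in one dimension; the third step (re-interpolating the flow solution onto `x_new`) is omitted here.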
Adaptive multi-sensor integration for mine detection
Baker, J.E.
1997-05-01
State-of-the-art in multi-sensor integration (MSI) application involves extensive research and development time to understand and characterize the application domain; to determine and define the appropriate sensor suite; to analyze, characterize, and calibrate the individual sensor systems; to recognize and accommodate the various sensor interactions; and to develop and optimize robust merging code. Much of this process can benefit from adaptive learning, i.e., an output-based system can take raw sensor data and desired merged results as input and adaptively develop/determine an effective method of interpretation and merging. This approach significantly reduces the time required to apply MSI to a given application while increasing the quality of the final result, and it provides a quantitative measure for comparing competing MSI techniques and sensor suites. The ability to automatically develop and optimize MSI techniques for new sensor suites and operating environments makes this approach well suited to the detection of mines and mine-like targets. Perhaps more than any other, this application domain is characterized by diverse, innovative, and dynamic sensor suites, whose nature and interactions are not yet well established. This paper presents such an outcome-based multi-image analysis system. An empirical evaluation of its performance and its application, sensor and domain robustness is presented.
Adaptive Method for Nonsmooth Nonnegative Matrix Factorization.
Yang, Zuyuan; Xiang, Yong; Xie, Kan; Lai, Yue
2017-04-01
Nonnegative matrix factorization (NMF) is an emerging tool for meaningful low-rank matrix representation. In NMF, explicit constraints are usually required, such that NMF generates desired products (or factorizations), especially when the products have significant sparseness features. It is known that the ability of NMF in learning sparse representation can be improved by embedding a smoothness factor between the products. Motivated by this result, we propose an adaptive nonsmooth NMF (Ans-NMF) method in this paper. In our method, the embedded factor is obtained by using a data-related approach, so it matches well with the underlying products, implying a superior faithfulness of the representations. Besides, due to the usage of an adaptive selection scheme to this factor, the sparseness of the products can be separately constrained, leading to wider applicability and interpretability. Furthermore, since the adaptive selection scheme is processed through solving a series of typical linear programming problems, it can be easily implemented. Simulations using computer-generated data and real-world data show the advantages of the proposed Ans-NMF method over the state-of-the-art methods.
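The abstract's adaptive selection scheme involves linear programming and is not reproduced here, but the underlying nonsmooth-NMF factorization V ≈ WSH, with a smoothing matrix S interpolating between the identity and uniform averaging, can be sketched with standard multiplicative updates (a fixed θ stands in for the adaptive choice):

```python
import numpy as np

def nsnmf(V, r, theta=0.3, iters=200, seed=0):
    # Nonsmooth NMF sketch: V ~ W S H, where the smoothing matrix
    # S = (1-theta)*I + (theta/r)*ones mixes each latent component toward
    # the mean; larger theta forces sparser W and H.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    S = (1 - theta) * np.eye(r) + (theta / r) * np.ones((r, r))
    eps = 1e-9
    for _ in range(iters):
        SH = S @ H
        W *= (V @ SH.T) / (W @ SH @ SH.T + eps)   # multiplicative update for W
        WS = W @ S
        H *= (WS.T @ V) / (WS.T @ WS @ H + eps)   # multiplicative update for H
    return W, S, H

V = np.random.default_rng(1).random((20, 15))
W, S, H = nsnmf(V, r=4)
rel_err = np.linalg.norm(V - W @ S @ H) / np.linalg.norm(V)
```

The updates are the classical Frobenius-norm multiplicative rules with S folded into the fixed factor, so nonnegativity is preserved automatically.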
Parallel adaptive wavelet collocation method for PDEs
Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.
2015-10-01
A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
System integration of pattern recognition, adaptive aided, upper limb prostheses
NASA Technical Reports Server (NTRS)
Lyman, J.; Freedy, A.; Solomonow, M.
1975-01-01
The requirements for successful integration of a computer aided control system for multi degree of freedom artificial arms are discussed. Specifications are established for a system which shares control between a human amputee and an automatic control subsystem. The approach integrates the following subsystems: (1) myoelectric pattern recognition, (2) adaptive computer aiding; (3) local reflex control; (4) prosthetic sensory feedback; and (5) externally energized arm with the functions of prehension, wrist rotation, elbow extension and flexion and humeral rotation.
Higher order time integration methods for two-phase flow
NASA Astrophysics Data System (ADS)
Kees, Christopher E.; Miller, Cass T.
Time integration methods that adapt in both the order of approximation and time step have been shown to provide efficient solutions to Richards' equation. In this work, we extend the same method of lines approach to solve a set of two-phase flow formulations and address some mass conservation issues from the previous work. We analyze these formulations and the nonlinear systems that result from applying the integration methods, placing particular emphasis on their index, range of applicability, and mass conservation characteristics. We conduct numerical experiments to study the behavior of the numerical models for three test problems. We demonstrate that higher order integration in time is more efficient than standard low-order methods for a variety of practical grids and integration tolerances, that the adaptive scheme successfully varies the step size in response to changing conditions, and that mass balance can be maintained efficiently using variable-order integration and an appropriately chosen numerical model formulation.
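The benefit of controlling the time step with a local error estimate can be illustrated with the simplest possible integrator: explicit Euler with step doubling (the paper itself uses higher-order, variable-order methods; this sketch shows only the control idea):

```python
import math

def adaptive_euler(f, y0, t0, t1, tol=1e-7, h0=1e-2):
    # Step-doubling error control for explicit Euler (order p = 1):
    # compare one step of size h against two steps of size h/2, accept
    # when their difference is below tol, then rescale h by the standard
    # factor (tol/err)^(1/(p+1)) with a 0.9 safety margin.
    t, y, h = t0, y0, h0
    accepted = 0
    while t < t1:
        h = min(h, t1 - t)
        one = y + h * f(t, y)                       # one full step
        half = y + 0.5 * h * f(t, y)                # two half steps
        two = half + 0.5 * h * f(t + 0.5 * h, half)
        err = abs(two - one)                        # local error estimate
        if err <= tol:
            t, y = t + h, two
            accepted += 1
        h = 0.9 * h * (tol / max(err, 1e-15)) ** 0.5
    return y, accepted

# Test problem y' = -y, y(0) = 1, whose exact solution is e^{-t}:
y, n_steps = adaptive_euler(lambda t, y: -y, 1.0, 0.0, 1.0)
error = abs(y - math.exp(-1.0))
```

A variable-order method would additionally choose p each step, trading more work per step for much larger steps, which is the efficiency argument the abstract makes.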
Integrating Learning Styles into Adaptive E-Learning System
ERIC Educational Resources Information Center
Truong, Huong May
2015-01-01
This paper provides an overview and update on my PhD research project which focuses on integrating learning styles into adaptive e-learning system. The project, firstly, aims to develop a system to classify students' learning styles through their online learning behaviour. This will be followed by a study on the complex relationship between…
Adaptation disrupts motion integration in the primate dorsal stream
Patterson, Carlyn A.; Wissig, Stephanie C.; Kohn, Adam
2014-01-01
Sensory systems adjust continuously to the environment. The effects of recent sensory experience—or adaptation—are typically assayed by recording in a relevant subcortical or cortical network. However, adaptation effects cannot be localized to a single, local network. Adjustments in one circuit or area will alter the input provided to others, with unclear consequences for computations implemented in the downstream circuit. Here we show that prolonged adaptation with drifting gratings, which alters responses in the early visual system, impedes the ability of area MT neurons to integrate motion signals in plaid stimuli. Perceptual experiments reveal a corresponding loss of plaid coherence. A simple computational model shows how the altered representation of motion signals in early cortex can derail integration in MT. Our results suggest that the effects of adaptation cascade through the visual system, derailing the downstream representation of distinct stimulus attributes. PMID:24507198
Adaptive envelope protection methods for aircraft
NASA Astrophysics Data System (ADS)
Unnikrishnan, Suraj
Carefree handling refers to the ability of a pilot to operate an aircraft without the need to continuously monitor aircraft operating limits. At the heart of all carefree handling or maneuvering systems, also referred to as envelope protection systems, are algorithms and methods for predicting future limit violations. Recently, envelope protection methods that have gained wider acceptance translate limit proximity information into its equivalent in the control channel. Existing envelope protection algorithms either use a very small prediction horizon or are static methods with no capability to adapt to changes in system configuration. Adaptive approaches that maximize the prediction horizon, such as dynamic trim, are only applicable to steady-state-response critical limit parameters. In this thesis, a new adaptive envelope protection method is developed that is applicable to both steady-state and transient response critical limit parameters. The approach is based upon devising the most aggressive optimal control profile to the limit boundary and using it to compute control limits. Pilot-in-the-loop evaluations of the proposed approach are conducted at the Georgia Tech Carefree Maneuver lab for transient longitudinal hub moment limit protection. Carefree maneuvering is the dual of carefree handling in the realm of autonomous Uninhabited Aerial Vehicles (UAVs). Designing a flight control system to fully and effectively utilize the operational flight envelope is very difficult. With the increasing role of and demands for extreme maneuverability, there is a need to develop envelope protection methods for autonomous UAVs. In this thesis, a full-authority automatic envelope protection method is proposed for limit protection in UAVs. The approach uses an adaptive estimate of limit parameter dynamics and finite-time-horizon predictions to detect impending limit boundary violations. Limit violations are prevented by treating the limit boundary as an obstacle and by correcting nominal control
Perturbative Methods in Path Integration
NASA Astrophysics Data System (ADS)
Johnson-Freyd, Theodore Paul
This dissertation addresses a number of related questions concerning perturbative "path" integrals. Perturbative methods are one of the few successful ways physicists have worked with (or even defined) these infinite-dimensional integrals, and it is important as mathematicians to check that they are correct. Chapter 0 provides a detailed introduction. We take a classical approach to path integrals in Chapter 1. Following standard arguments, we posit a Feynman-diagrammatic description of the asymptotics of the time-evolution operator for the quantum mechanics of a charged particle moving nonrelativistically through a curved manifold under the influence of an external electromagnetic field. We check that our sum of Feynman diagrams has all desired properties: it is coordinate-independent and well-defined without ultraviolet divergences, it satisfies the correct composition law, and it satisfies Schrödinger's equation thought of as a boundary-value problem in PDE. Path integrals in quantum mechanics and elsewhere in quantum field theory are almost always of the shape ∫ f e^s for some functions f (the "observable") and s (the "action"). In Chapter 2 we step back to analyze integrals of this type more generally. Integration by parts provides algebraic relations between the values of ∫ (-) e^s for different inputs, which can be packaged into a Batalin--Vilkovisky-type chain complex. Using some simple homological perturbation theory, we study the version of this complex that arises when f and s are taken to be polynomial functions, and power series are banished. We find that in such cases, the entire scheme-theoretic critical locus (complex points included) of s plays an important role, and that one can uniformly (but noncanonically) integrate out in a purely algebraic way the contributions to the integral from all "higher modes," reducing ∫ f e^s to an integral over the critical locus. This may help explain the presence of analytic continuation in questions like the
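In finite dimensions the integration-by-parts relations referred to above are concrete: for a polynomial action $s$ on $\mathbb{R}^n$ with $e^{s}$ integrable and vanishing boundary terms, each coordinate direction gives

```latex
\int_{\mathbb{R}^n} \partial_i\!\left(f\,e^{s}\right)\,\mathrm{d}x = 0
\quad\Longrightarrow\quad
\int_{\mathbb{R}^n} (\partial_i f)\,e^{s}\,\mathrm{d}x
  = -\int_{\mathbb{R}^n} f\,(\partial_i s)\,e^{s}\,\mathrm{d}x .
```

Relations of this kind, taken over all directions, are what assemble into the Batalin--Vilkovisky-type chain complex mentioned in the abstract; in particular, where $\partial_i s$ vanishes (the critical locus of $s$) they impose no constraint, which is why the critical locus carries the integral.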
Adaptive Kernel Based Machine Learning Methods
2012-10-15
multiscale collocation method with a matrix compression strategy to discretize the system of integral equations and then use the multilevel...augmentation method to solve the resulting discrete system. A priori and a posteriori parameter choice strategies are developed for these methods. The...performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed
hp-Adaptive time integration based on the BDF for viscous flows
NASA Astrophysics Data System (ADS)
Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.
2015-06-01
This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control the solution accuracy and the computational efficiency of the time integration process, respectively. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators, while accurate solutions require high-order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the method of integration operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
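The stepsize-selection idea (h-adaptivity) described above can be sketched as an elementary error controller. The controller form, safety factor, and clamping bounds below are generic textbook defaults, not the paper's exact scheme:

```python
# Generic step-size controller of the h-adaptive kind described above.
# All names and constants (safety factor, growth/shrink clamps) are
# illustrative assumptions, not the paper's algorithm.

def new_stepsize(h, err_estimate, tol, order, safety=0.9, grow=2.0, shrink=0.1):
    """Return the next step size so the local error estimate meets `tol`.

    Uses the classic rule h_new = h * safety * (tol / err)**(1 / (order + 1)),
    clamped so the step never grows or shrinks too abruptly.
    """
    if err_estimate == 0.0:
        return h * grow  # error negligible: grow by the maximum factor
    factor = safety * (tol / err_estimate) ** (1.0 / (order + 1))
    return h * min(grow, max(shrink, factor))

# A step would be accepted when err_estimate <= tol; otherwise it is rejected
# and retried with the smaller step returned by the same formula.
```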
Ensemble transform sensitivity method for adaptive observations
NASA Astrophysics Data System (ADS)
Zhang, Yu; Xie, Yuanfu; Wang, Hongli; Chen, Dehui; Toth, Zoltan
2016-01-01
The Ensemble Transform (ET) method has been shown to be useful in providing guidance for adaptive observation deployment. It predicts forecast error variance reduction for each possible deployment using its corresponding transformation matrix in an ensemble subspace. In this paper, a new ET-based sensitivity (ETS) method, which calculates the gradient of forecast error variance reduction in terms of analysis error variance reduction, is proposed to specify regions for possible adaptive observations. ETS is a first order approximation of the ET; it requires just one calculation of a transformation matrix, increasing computational efficiency (60%-80% reduction in computational cost). An explicit mathematical formulation of the ETS gradient is derived and described. Both the ET and ETS methods are applied to the Hurricane Irene (2011) case and a heavy rainfall case for comparison. The numerical results imply that the sensitive areas estimated by the ETS and ET are similar. However, ETS is much more efficient, particularly when the resolution is higher and the number of ensemble members is larger.
Bergeron, Bryan; Cline, Andrew; Shipley, Jaime
2012-01-01
We have developed a distributed, standards-based architecture that enables simulation and simulator designers to leverage adaptive learning systems. Our approach, which incorporates an electronic competency record, open source LMS, and open source microcontroller hardware, is a low-cost, pragmatic option to integrating simulators with traditional courseware.
Adaptive Accommodation Control Method for Complex Assembly
NASA Astrophysics Data System (ADS)
Kang, Sungchul; Kim, Munsang; Park, Shinsuk
Robotic systems have been used to automate assembly tasks in manufacturing and in teleoperation. Conventional robotic systems, however, have been ineffective in controlling contact force in the multiple contact states of complex assembly that involves interactions between complex-shaped parts. Unlike robots, humans excel at complex assembly tasks by utilizing their intrinsic impedance, force and torque sensation, and tactile contact cues. By examining human behavior in assembling complex parts, this study proposes a novel geometry-independent control method for robotic assembly using an adaptive accommodation (or damping) algorithm. Two important conditions for complex assembly, target approachability and bounded contact force, can be met by the proposed control scheme. It generates target-approachable motion that leads the object to move closer to a desired target position, while contact force is kept under a predetermined value. Experimental results from complex assembly tests have confirmed the feasibility and applicability of the proposed method.
A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation.
Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao
2016-12-19
The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian-distributed noise. Moreover, adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address interference in the stochastic model by minimizing the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter to form a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data from Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation were conducted. The experimental results show that the proposed algorithm has multiple advantages compared to the other filtering algorithms.
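As context for the abstract above, a minimal scalar Kalman predict/update cycle is sketched below. All coefficients are illustrative; the paper's contribution (adaptive factors plus an H-infinity worst-case criterion) modifies the gain and covariance handling of steps like these and is not reproduced here:

```python
# Minimal scalar Kalman filter cycle (pure Python, illustrative sketch).

def kalman_step(x, p, z, q, r, f=1.0, h=1.0):
    """One predict/update cycle for a scalar state.

    x, p : prior state estimate and its variance
    z    : new measurement
    q, r : process and measurement noise variances
    f, h : state-transition and observation coefficients
    """
    # Predict the state forward and inflate uncertainty by process noise
    x_pred = f * x
    p_pred = f * p * f + q
    # Update: the gain weighs the measurement against the prediction
    k = p_pred * h / (h * p_pred * h + r)
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1.0 - k * h) * p_pred
    return x_new, p_new
```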
Space-time adaptive numerical methods for geophysical applications.
Castro, C E; Käser, M; Toro, E F
2009-11-28
In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems, with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher-order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem, and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen in a locally adaptive manner such that the solution is evolved explicitly in time by an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves, comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed, and a new mesh partition approach is proposed and tested to further reduce computational cost.
Dissociating conflict adaptation from feature integration: a multiple regression approach.
Notebaert, Wim; Verguts, Tom
2007-10-01
Congruency effects are typically smaller after incongruent than after congruent trials. One explanation is in terms of higher levels of cognitive control after detection of conflict (conflict adaptation; e.g., M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, & J. D. Cohen, 2001). An alternative explanation for these results is based on feature repetition and/or integration effects (e.g., B. Hommel, R. W. Proctor, & K.-P. Vu, 2004; U. Mayr, E. Awh, & P. Laurey, 2003). Previous attempts to dissociate feature integration from conflict adaptation focused on a particular subset of the data in which feature transitions were held constant (J. G. Kerns et al., 2004) or in which congruency transitions were held constant (C. Akcay & E. Hazeltine, in press), but this has a number of disadvantages. In this article, the authors present a multiple regression solution for this problem and discuss its possibilities and pitfalls.
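The regression idea above can be sketched as follows: rather than discarding trials, current congruency, previous-trial congruency, their interaction (the conflict-adaptation term), and a feature-repetition code enter as simultaneous predictors of reaction time. The tiny synthetic data set and the pure-Python least-squares solver below are illustrative assumptions, not the authors' materials:

```python
# Hedged sketch of the multiple-regression approach: fit reaction times on
# congruency, previous congruency, their interaction, and feature repetition
# simultaneously, so conflict adaptation and feature integration get separate
# coefficients. Data and variable names are synthetic.

def ols(X, y):
    """Ordinary least squares via normal equations with Gaussian elimination."""
    k = len(X[0])
    a = [[sum(X[i][p] * X[i][q] for i in range(len(X))) for q in range(k)]
         for p in range(k)]                       # X'X
    b = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(k)]  # X'y
    for col in range(k):                          # forward elimination w/ pivot
        piv = max(range(col, k), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            m = a[r][col] / a[col][col]
            for c in range(col, k):
                a[r][c] -= m * a[col][c]
            b[r] -= m * b[col]
    beta = [0.0] * k                              # back substitution
    for r in reversed(range(k)):
        s = b[r] - sum(a[r][c] * beta[c] for c in range(r + 1, k))
        beta[r] = s / a[r][r]
    return beta

# Each row: [intercept, congruent?, prev_congruent?, interaction, feat_repeat]
X = [[1, c, p, c * p, f] for (c, p, f) in
     [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1), (0, 0, 0), (1, 1, 0)]]
y = [520, 540, 480, 470, 530, 475]  # reaction times in ms (synthetic)
beta = ols(X, y)  # beta[3] estimates conflict adaptation net of beta[4]
```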
Adapting implicit methods to parallel processors
Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D.
1994-12-31
When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice (e.g., larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed-memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems, where it is common to distribute the grid points of the computational domain over the processors so as to maintain a relatively even workload per processor. This creates a problem at the locations in the domain where adjacent points are not on the same processor. In order for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this will result in idle processors during part of the computation, and as the number of idle processors increases, the effective speedup from using a parallel processor decreases.
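The communication problem described above can be illustrated with a toy one-dimensional iterative sweep split across two hypothetical "processors": each iteration must first exchange ghost (edge) values, which on a real distributed-memory machine would be a message pass. The Jacobi iteration and the two-list decomposition below are illustrative assumptions, not the authors' implementation:

```python
# Toy illustration of neighbor exchange in a decomposed domain: a 1-D grid
# with fixed endpoints is split between two "processors" (two lists), and
# every Jacobi iteration starts with a ghost-value exchange across the cut.

def jacobi_two_domains(left, right, n_iters):
    """Iterate u_i = (u_{i-1} + u_{i+1}) / 2 with fixed outer endpoints."""
    for _ in range(n_iters):
        # "Message exchange": each side sends its edge value to the other
        ghost_for_left = right[0]
        ghost_for_right = left[-1]
        new_left = left[:1] + [
            (left[i - 1] + (left[i + 1] if i + 1 < len(left) else ghost_for_left)) / 2
            for i in range(1, len(left))
        ]
        new_right = [
            ((right[i - 1] if i - 1 >= 0 else ghost_for_right) + right[i + 1]) / 2
            for i in range(len(right) - 1)
        ] + right[-1:]
        left, right = new_left, new_right
    return left, right
```

Without the two ghost assignments, the points adjacent to the cut could not be updated at all, which is exactly the dependency the abstract points to.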
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo
2014-04-15
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-11-18
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Adaptive filtering for the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Marié, Simon; Gloerfelt, Xavier
2017-03-01
In this study, a new selective filtering technique is proposed for the Lattice Boltzmann Method. This technique is based on an adaptive implementation of the selective filter coefficient σ. The proposed model makes this coefficient dependent on the shear stress in order to restrict the use of the spatial filtering technique to high-shear regions where numerical instabilities may occur. Different parameters are tested on 2D test cases sensitive to numerical stability and on a 3D decaying Taylor-Green vortex. The results are compared to the classical static filtering technique and to the use of a standard subgrid-scale model, and give significant improvements, in particular for low-order filters consistent with the LBM stencil.
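A one-dimensional sketch of the idea: the filter strength σ follows a local shear indicator, so smoothing acts mainly where instabilities can grow. The 3-point high-pass stencil and the linear σ mapping are assumptions for illustration, not the paper's LBM stencil:

```python
# Illustrative adaptive selective filter: sigma is scaled per point by a
# local shear indicator instead of being a global constant.

def adaptive_filter(u, shear, sigma_max=0.2):
    """Return u with its oscillatory part damped where `shear` is large."""
    s_max = max(shear) or 1.0              # avoid division by zero if no shear
    out = list(u)
    for i in range(1, len(u) - 1):
        sigma = sigma_max * shear[i] / s_max     # strength follows local shear
        high_pass = (-u[i - 1] + 2.0 * u[i] - u[i + 1]) / 4.0
        out[i] = u[i] - sigma * high_pass        # damp only the oscillatory part
    return out
```

In quiescent regions sigma is near zero and the field passes through untouched, which is the point of making the coefficient adaptive.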
Analysis of adaptive algorithms for an integrated communication network
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Barr, Matthew; Chong-Kwon, Kim
1985-01-01
Techniques were examined that trade communication bandwidth for decreased transmission delays. When the network is lightly used, these schemes attempt to use additional network resources to decrease communication delays. As the network utilization rises, the schemes degrade gracefully, still providing service but with minimal use of the network. Because the schemes use a combination of circuit and packet switching, they should respond to variations in the types and amounts of network traffic. Also, a combination of circuit and packet switching to support the widely varying traffic demands imposed on an integrated network was investigated. The packet-switched component is best suited to bursty traffic where some delays in delivery are acceptable. The circuit-switched component is reserved for traffic that must meet real-time constraints. Selected packet routing algorithms that might be used in an integrated network were simulated. Integrated traffic places widely varying workload demands on a network. Adaptive algorithms were identified that respond to both the transient and evolutionary changes that arise in integrated networks. A new algorithm was developed, hybrid weighted routing, that adapts to workload changes.
A novel adaptive force control method for IPMC manipulation
NASA Astrophysics Data System (ADS)
Hao, Lina; Sun, Zhiyong; Li, Zhi; Su, Yunquan; Gao, Jianchao
2012-07-01
IPMC is a type of electro-active polymer material, also called artificial muscle, which can generate a relatively large deformation under a relatively low input voltage (generally speaking, less than 5 V) and can be operated in a water environment. Due to these advantages, IPMC can be used in many fields such as biomimetics, service robots, and bio-manipulation. Until now, most existing methods for IPMC manipulation have used displacement control rather than direct force control; however, under most conditions, the success rate of manipulating tiny fragile objects is limited by the contact force, for example when using an IPMC gripper to hold cells. Like most EAPs, IPMC exhibits creep: the generated force changes with time, and the creep model is influenced by changes in water content and other environmental factors, so a proper force control method is urgently needed. This paper presents a novel adaptive force control method (AIPOF: adaptive integral periodic output feedback control), based on a creep model whose parameters are obtained with the FRLS on-line identification method. The AIPOF control method can achieve an arbitrary pole configuration as long as the plant is controllable and observable. This paper also designs POF and IPOF controllers to compare their test results. Simulation and experiments of micro-force-tracking tests are carried out, with results confirming that the proposed control method is viable.
Integrated modeling of the GMT laser tomography adaptive optics system
NASA Astrophysics Data System (ADS)
Piatrou, Piotr
2014-08-01
Laser Tomography Adaptive Optics (LTAO) is one of the adaptive optics systems planned for the Giant Magellan Telescope (GMT). End-to-end simulation tools that are able to cope with the complexity and computational burden of the AO systems to be installed on extremely large telescopes such as the GMT prove to be an integral part of the GMT LTAO system development effort. SL95, the Fortran 95 Simulation Library, is one of the software tools successfully used for the LTAO system end-to-end simulations. The goal of the SL95 project is to provide a complete set of generic, richly parameterized mathematical models for key elements of segmented-telescope wavefront control systems, including both active and adaptive optics, as well as models for atmospheric turbulence, extended light sources like Laser Guide Stars (LGS), light propagation engines, and closed-loop controllers. The library is implemented as a hierarchical collection of classes capable of mutual interaction, which allows one to assemble complex wavefront control system configurations with multiple interacting control channels. In this paper we demonstrate the SL95 capabilities by building an integrated end-to-end model of the GMT LTAO system with 7 control channels: LGS tomography with Adaptive Secondary and on-instrument deformable mirrors, tip-tilt and vibration control, LGS stabilization, LGS focus control, truth sensor-based dynamic non-common-path aberration rejection, pupil position control, and a SLODAR-like embedded turbulence profiler. The rich parameterization of the SL95 classes allows one to build detailed error budgets, propagating through the system multiple errors and perturbations such as aberrations induced by turbulence, the telescope, telescope misalignment, segment phasing errors, and non-common paths, as well as sensor noises, deformable mirror-to-sensor mis-registration, vibration, temporal errors, etc. We will present a short description of the SL95 architecture, as well as the sample GMT LTAO system simulation
Integrated Decision Support for Global Environmental Change Adaptation
NASA Astrophysics Data System (ADS)
Kumar, S.; Cantrell, S.; Higgins, G. J.; Marshall, J.; VanWijngaarden, F.
2011-12-01
Environmental changes happening now have caused concern in many parts of the world; particularly vulnerable are countries and communities with limited resources and with natural environments that are more susceptible to climate change impacts. Global leaders are concerned about observed phenomena and events such as Amazon deforestation, shifting monsoon patterns affecting agriculture on the mountain slopes of Peru, floods in Pakistan, water shortages in the Middle East, droughts impacting water supplies and wildlife migration in Africa, and sea level rise impacts on low-lying coastal communities in Bangladesh. These environmental changes are likely to be exacerbated as temperatures rise, weather and climate patterns change, and sea level rise continues. Large populations and billions of dollars of infrastructure could be affected. At Northrop Grumman, we have developed an integrated decision support framework for providing necessary information to stakeholders and planners to adapt to the impacts of climate variability and change at the regional and local levels. This integrated approach takes into account assimilation and exploitation of large and disparate weather and climate data sets, regional downscaling (dynamic and statistical), uncertainty quantification and reduction, and a synthesis of scientific data with demographic and economic data to generate actionable information for stakeholders and decision makers. Utilizing a flexible service-oriented architecture and state-of-the-art visualization techniques, this information can be delivered via tailored GIS portals to meet a diverse set of user needs and expectations. This integrated approach can be applied to regional and local risk assessments, predictions and decadal projections, and proactive adaptation planning for vulnerable communities. In this paper we will describe this comprehensive decision support approach with selected applications and case studies to illustrate how this
Optimizing aircraft performance with adaptive, integrated flight/propulsion control
NASA Technical Reports Server (NTRS)
Smith, R. H.; Chisholm, J. D.; Stewart, J. F.
1991-01-01
The Performance-Seeking Control (PSC) integrated flight/propulsion adaptive control algorithm presented was developed in order to optimize total aircraft performance during steady-state engine operation. The PSC multimode algorithm minimizes fuel consumption at cruise conditions, while maximizing excess thrust during aircraft accelerations, climbs, and dashes, and simultaneously extending engine service life through reduction of fan-driving turbine inlet temperature upon engagement of the extended-life mode. The engine models incorporated by the PSC are continually upgraded, using a Kalman filter to detect anomalous operations. The PSC algorithm will be flight-demonstrated by an F-15 at NASA-Dryden.
Replicated evolution of integrated plastic responses during early adaptive divergence.
Parsons, Kevin J; Robinson, Beren W
2006-04-01
Colonization of a novel environment is expected to result in adaptive divergence from the ancestral population when selection favors a new phenotypic optimum. Local adaptation in the new environment occurs through the accumulation and integration of character states that positively affect fitness. The role played by plastic traits in adaptation to a novel environment has generally been ignored, except for variable environments. We propose that if conditions in a relatively stable but novel environment induce phenotypically plastic responses in many traits, and if genetic variation exists in the form of those responses, then selection may initially favor the accumulation and integration of functionally useful plastic responses. Early divergence between ancestral and colonist forms will then occur with respect to their plastic responses across the gradient bounded by ancestral and novel environmental conditions. To test this, we compared the magnitude, integration, and pattern of plastic character responses in external body form induced by shallow versus open water conditions between two sunfish ecomorphs that coexist in four postglacial lakes. The novel sunfish ecomorph is present in the deeper open water habitat, whereas the ancestral ecomorph inhabits the shallow waters along the lake margin. Plastic responses by open water ecomorphs were more correlated than those of their local shallow water ecomorph in two of the populations, whereas equal levels of correlated plastic character responses occurred between ecomorphs in the other two populations. Small but persistent differences occurred between ecomorph pairs in the pattern of their character responses, suggesting a recent divergence. Open water ecomorphs shared some similarities in the covariance among plastic responses to rearing environment. Replication in the form of correlated plastic responses among populations of open water ecomorphs suggests that plastic character states may evolve under selection
Sarhadi, Pouria; Noei, Abolfazl Ranjbar; Khosravi, Alireza
2016-11-01
Input saturations and uncertain dynamics are among the practical challenges in the control of autonomous vehicles. Adaptive control is known as a proper method to deal with the uncertain dynamics of these systems; therefore, incorporating the ability to confront input saturation into adaptive controllers can be valuable. In this paper, an adaptive autopilot is presented for the pitch and yaw channels of an autonomous underwater vehicle (AUV) in the presence of input saturations. This is achieved by combining model reference adaptive control (MRAC) with integral state feedback with a modern anti-windup (AW) compensator. MRAC with integral state feedback is commonly used in autonomous vehicles; however, some modifications need to be made in order to cope with the saturation problem. To this end, a Riccati-based AW compensator is employed. The presented technique is applied to the non-linear six degrees of freedom (DOF) model of an AUV and the obtained results are compared with those of the baseline method. Several simulation scenarios are executed in the pitch and yaw channels to evaluate the controller performance. Moreover, the effectiveness of the proposed adaptive controller is comprehensively investigated by implementing Monte Carlo simulations. The obtained results verify the performance of the proposed method.
Adaptive numerical methods for partial differential equations
Cololla, P.
1995-07-01
This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong
2015-02-01
Integration of heterogeneous systems is the key to hospital information construction due to the complexity of the healthcare environment. Currently, during the process of healthcare information system integration, people participating in an integration project usually communicate via free-format documents, which impairs the efficiency and adaptability of integration. A method utilizing Business Process Model and Notation (BPMN) to model integration requirements and automatically transform them into an executable integration configuration was proposed in this paper. Based on the method, a tool was developed to model integration requirements and transform them into an integration configuration. In addition, an integration case in a radiology scenario was used to verify the method.
Immune tolerance induction by integrating innate and adaptive immune regulators
Suzuki, Jun; Ricordi, Camillo; Chen, Zhibin
2009-01-01
A diversity of immune tolerance mechanisms have evolved to protect normal tissues from immune damage. Immune regulatory cells are critical contributors to peripheral tolerance. These regulatory cells, exemplified by the CD4+Foxp3+ regulatory T (Treg) cells and a recently identified population named myeloid-derived suppressor cells (MDSCs), regulate immune responses and limit immune-mediated pathology. In a chronic inflammatory setting, such as allograft-directed immunity, there may be a dynamic “crosstalk” between the innate and adaptive immunomodulatory mechanisms for an integrated control of immune damage. CTLA4-B7-based interaction between the two branches may function as a molecular “bridge” to facilitate such “crosstalk”. Understanding the interplay among Treg cells, innate suppressors and pathogenic effector T (Teff) cells will be critical in the future to assist in the development of therapeutic strategies to enhance and synergize physiological immunosuppressive elements in the innate and adaptive immune system. Successful development of localized strategies of regulatory cell therapies could circumvent the requirement for very high numbers of cells and decrease the risks associated with systemic immunosuppression. To realize the potential of innate and adaptive immune regulators for the still-elusive goal of immune tolerance induction, adoptive cell therapies may also need to be coupled with agents enhancing endogenous tolerance mechanisms. PMID:19919733
The adaptive significance of adult neurogenesis: an integrative approach
Konefal, Sarah; Elliot, Mick; Crespi, Bernard
2013-01-01
Adult neurogenesis in mammals is predominantly restricted to two brain regions, the dentate gyrus (DG) of the hippocampus and the olfactory bulb (OB), suggesting that these two brain regions uniquely share functions that mediate its adaptive significance. Benefits of adult neurogenesis across these two regions appear to converge on increased neuronal and structural plasticity that subserves coding of novel, complex, and fine-grained information, usually with contextual components that include spatial positioning. By contrast, costs of adult neurogenesis appear to center on its potential for dysregulation, resulting in higher risk of brain cancer or psychological dysfunctions, but such costs have yet to be quantified directly. The three main hypotheses for the proximate functions and adaptive significance of adult neurogenesis, pattern separation, memory consolidation, and olfactory spatial, are not mutually exclusive and can be reconciled into a simple general model amenable to targeted experimental and comparative tests. Comparative analysis of brain region sizes across two major social-ecological groups of primates, gregarious (mainly diurnal haplorhines, visually oriented, and in large social groups) and solitary (mainly nocturnal, territorial, and highly reliant on olfaction, as in most rodents), suggests that solitary species, but not gregarious species, show positive associations of population densities and home range sizes with sizes of both the hippocampus and OB, implicating their functions in social-territorial systems mediated by olfactory cues. Integrated analyses of the adaptive significance of adult neurogenesis will benefit from experimental studies motivated and structured by ecologically and socially relevant selective contexts. PMID:23882188
Three-Dimensional Integration of Graphene via Swelling, Shrinking, and Adaptation.
Choi, Jonghyun; Kim, Hoe Joon; Wang, Michael Cai; Leem, Juyoung; King, William P; Nam, SungWoo
2015-07-08
The transfer of graphene from its growth substrate to a target substrate has been widely investigated for its decisive role in subsequent device integration and performance. Thus far, various reported methods of graphene transfer have been mostly limited to planar or curvilinear surfaces due to the challenges associated with fractures from local stress during transfer onto three-dimensional (3D) microstructured surfaces. Here, we report a robust approach to integrate graphene onto 3D microstructured surfaces while maintaining the structural integrity of graphene, where the out-of-plane dimensions of the 3D features vary from 3.5 to 50 μm. We utilized three sequential steps: (1) substrate swelling, (2) shrinking, and (3) adaptation, in order to achieve damage-free, large area integration of graphene on 3D microstructures. Detailed scanning electron microscopy, atomic force microscopy, Raman spectroscopy, and electrical resistance measurement studies show that the amount of substrate swelling as well as the flexural rigidities of the transfer film affect the integration yield and quality of the integrated graphene. We also demonstrate the versatility of our approach by extension to a variety of 3D microstructured geometries. Lastly, we show the integration of hybrid structures of graphene decorated with gold nanoparticles onto 3D microstructure substrates, demonstrating the compatibility of our integration method with other hybrid nanomaterials. We believe that the versatile, damage-free integration method based on swelling, shrinking, and adaptation will pave the way for 3D integration of two-dimensional (2D) materials and expand potential applications of graphene and 2D materials in the future.
Method and apparatus for adaptive force and position control of manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1989-01-01
The present invention discloses systematic methods and apparatus for the design of real-time controllers. Real-time control employs adaptive force/position control by use of feedforward and feedback controllers, with the feedforward controller being the inverse of the linearized model of robot dynamics and containing only proportional-double-derivative terms. The feedback controller, of the proportional-integral-derivative type, ensures that manipulator joints follow reference trajectories and achieves robust tracking of step-plus-exponential trajectories, all in real time. The adaptive controller includes adaptive force and position control within a hybrid control architecture. The adaptive force controller achieves tracking of desired force setpoints, and the adaptive position controller accomplishes tracking of desired position trajectories. Circuits in the adaptive feedback and feedforward controllers are varied by adaptation laws.
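To make the feedback half of such an architecture concrete, here is a minimal proportional-integral-derivative loop driving a one-axis "joint" (modeled as a damped unit mass) toward a reference. The plant and all gains are invented for illustration; the patent's controller adds the adaptive feedforward term and adaptation laws on top of a loop of this shape:

```python
# Illustrative PID position loop on a toy plant (damped unit mass).
# Gains, damping, and time step are assumptions, not the patent's values.

def simulate_pid(ref=1.0, kp=40.0, ki=10.0, kd=12.0, dt=0.001, steps=20000):
    """Drive the mass position to `ref`; return the final position."""
    x = v = integ = 0.0
    prev_err = ref - x
    for _ in range(steps):
        err = ref - x
        integ += err * dt                      # integral of the error
        deriv = (err - prev_err) / dt          # finite-difference derivative
        force = kp * err + ki * integ + kd * deriv   # PID control effort
        prev_err = err
        a = force - 2.0 * v                    # unit mass, viscous damping c = 2
        v += a * dt                            # explicit Euler plant update
        x += v * dt
    return x
```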
Turbulence profiling methods applied to ESO's adaptive optics facility
NASA Astrophysics Data System (ADS)
Valenzuela, Javier; Béchet, Clémentine; Garcia-Rissmann, Aurea; Gonté, Frédéric; Kolb, Johann; Le Louarn, Miska; Neichel, Benoît; Madec, Pierre-Yves; Guesalaga, Andrés
2014-07-01
Two algorithms were recently studied for C2n profiling from wide-field Adaptive Optics (AO) measurements on GeMS (Gemini Multi-Conjugate AO system). Both rely on the Slope Detection and Ranging (SLODAR) approach, using spatial covariances of the measurements from the various wavefront sensors. The first algorithm estimates the C2n profile by applying the truncated least-squares inverse of a matrix modeling the response of the slope covariances to turbulent layers at various heights. The second estimates the profile by deconvolution of these spatial cross-covariances of slopes. We compare the two methods in the new configuration of ESO's Adaptive Optics Facility (AOF), a high-order multiple-laser system under integration, using measurements simulated by the AO cluster of ESO. The impact of measurement noise and of the outer scale of the atmospheric turbulence is analyzed. The strong influence of the outer scale on the results led to the development of a new outer-scale fitting step included in each algorithm, which increases the reliability and robustness of the turbulence strength and profile estimations.
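The first algorithm's core step, a truncated least-squares inverse, can be sketched generically. The forward matrix and truncation rank below are illustrative toys, not GeMS/AOF calibration data.

```python
import numpy as np

# Hedged sketch: recover layer strengths from slope-covariance measurements
# b = A @ profile via a truncated-SVD least-squares inverse of A. The
# truncation discards small singular values that would amplify noise.
def truncated_lstsq(A, b, rank):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < rank, 1.0 / s, 0.0)  # keep `rank` modes
    return Vt.T @ (s_inv * (U.T @ b))

# toy forward model: 4 covariance measurements respond to 3 layer heights
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
true_profile = np.array([1.0, 0.3, 0.1])        # relative layer strengths
b = A @ true_profile                            # noiseless "measurements"
profile = truncated_lstsq(A, b, rank=3)
```

With noisy data one would choose the rank (or a regularization parameter) from the singular-value spectrum rather than using the full rank as here.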
A Method for Severely Constrained Item Selection in Adaptive Testing.
ERIC Educational Resources Information Center
Stocking, Martha L.; Swanson, Len
1993-01-01
A method is presented for incorporating a large number of constraints on adaptive item selection in the construction of computerized adaptive tests. The method, which emulates practices of expert test specialists, is illustrated for verbal and quantitative measures. Its foundation is application of a weighted deviations model and algorithm. (SLD)
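A weighted-deviations selection rule of the kind described can be sketched as follows. The content areas, targets, and weights are invented for illustration; the actual operational model includes many more constraint types.

```python
# Hedged sketch of weighted-deviations item selection: choose the item
# whose inclusion minimizes the weighted sum of absolute deviations from
# the remaining content-constraint targets.
def pick_item(items, counts, targets, weights):
    """items: list of (item_id, content_area); counts: area -> items given so far;
    targets: area -> desired count; weights: area -> constraint importance."""
    def total_dev(area):
        new = dict(counts)
        new[area] = new.get(area, 0) + 1
        return sum(weights[a] * abs(new.get(a, 0) - targets[a]) for a in targets)
    return min(items, key=lambda it: total_dev(it[1]))

items = [("i1", "verbal"), ("i2", "quant")]
chosen = pick_item(items,
                   counts={"verbal": 2, "quant": 0},
                   targets={"verbal": 2, "quant": 2},
                   weights={"verbal": 1.0, "quant": 1.0})
```

In an adaptive test this deviation score would be combined with the item's information at the current ability estimate; only the constraint part is shown here.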
Solution-adaptive finite element method in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1993-01-01
Some recent results obtained using a solution-adaptive finite element method for linear elastic, two-dimensional fracture mechanics problems are presented. The focus is on validating the adaptive finite element methodology for fracture mechanics applications by computing demonstration problems and comparing the resulting stress intensity factors with analytical results.
Adaptivity demonstration of inflatable rigidized integrated structures (IRIS)
NASA Astrophysics Data System (ADS)
Natori, M. C.; Higuchi, Ken; Sekine, Koji; Okazaki, Kakuma
1995-10-01
An inflatable rigidized integrated structure (IRIS) is composed of membrane elements and cable networks, with its structural accuracy determined mainly by the cable networks. As a high-performance deployable structure for future space applications, it offers considerable design adaptivity. To retain stiffness after deployment, the membrane materials are assumed to rigidize in space, and in some designs the cable network is rigidized as well. The concept can cover a variety of structural elements and structural systems. An accuracy analysis of a reflector surface constrained by inside hard points and the manufacture of a simple reflector model are presented. Test results for rigidized cable columns, showing that many variations of IRIS are feasible, are also reported.
Adaptive integral dynamic surface control of a hypersonic flight vehicle
NASA Astrophysics Data System (ADS)
Aslam Butt, Waseem; Yan, Lin; Amezquita S., Kendrick
2015-07-01
In this article, non-linear adaptive dynamic surface control designs for air speed and flight path angle are presented for the longitudinal dynamics of a flexible hypersonic flight vehicle. The tracking performance of the control design is enhanced by introducing a novel integral term that avoids a large initial control signal. To ensure feasibility, the design scheme incorporates magnitude and rate constraints on the actuator commands. The uncertain non-linear functions are approximated by efficient use of neural networks to reduce the computational load. A detailed stability analysis shows that all closed-loop signals are uniformly ultimately bounded and the ? tracking performance is guaranteed. The robustness of the design scheme is verified through numerical simulations of the flexible flight vehicle model.
Adaptive method for electron bunch profile prediction
Scheinker, Alexander; Gessner, Spencer
2015-10-01
We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates despite operating on analytically unknown cost functions, was utilized to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was implemented digitally using MATLAB and the Experimental Physics and Industrial Control System (EPICS). The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters whose precise control is important for the plasma wakefield acceleration experiments being explored at FACET. © 2015 authors. Published by the American Physical Society.
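A bounded-update extremum-seeking tuner of the general kind described can be sketched on a toy problem. This is not the FACET implementation: the cost function, dither frequencies, and gains below are all illustrative assumptions.

```python
import math

# Hedged sketch: extremum seeking with a known, bounded update rate. Each
# parameter moves by at most dt*sqrt(alpha*omega) per step; the step
# direction is set by an oscillation whose phase is modulated by the
# measured cost, so no gradient of the unknown cost is ever needed.
def es_step(theta, cost, t, dt, omegas, k, alpha):
    return [th + dt * math.sqrt(alpha * w) * math.cos(w * t + k * cost)
            for th, w in zip(theta, omegas)]

# toy stand-in for "match simulated to measured spectrum": drive two
# phase-like settings toward an unknown optimum by minimizing a distance
target = [0.3, -0.7]
cost_fn = lambda th: sum((a - b) ** 2 for a, b in zip(th, target))

theta, t, dt = [0.0, 0.0], 0.0, 0.01
omegas, k, alpha = [60.0, 75.0], 4.0, 1.5   # distinct dither frequencies
for _ in range(20000):
    theta = es_step(theta, cost_fn(theta), t, dt, omegas, k, alpha)
    t += dt
```

On average this scheme descends the cost at rate proportional to k*alpha/2, while the per-step change stays bounded regardless of how large the cost becomes.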
Adaptive finite element methods in electrochemistry.
Gavaghan, David J; Gillow, Kathryn; Süli, Endre
2006-12-05
In this article, we review some of our previous work that considers the general problem of numerical simulation of the currents at microelectrodes using an adaptive finite element approach. Microelectrodes typically consist of an electrode embedded (or recessed) in an insulating material. For all such electrodes, numerical simulation is made difficult by the presence of a boundary singularity at the electrode edge (where the electrode meets the insulator), manifested by the large increase in the current density at this point, often referred to as the edge effect. Our approach to overcoming this problem has involved the derivation of an a posteriori bound on the error in the numerical approximation for the current that can be used to drive an adaptive mesh-generation algorithm, allowing calculation of the quantity of interest (the current) to within a prescribed tolerance. We illustrate the generic applicability of the approach by considering a broad range of steady-state applications of the technique.
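The error-driven refinement loop the authors describe has the same shape as classical adaptive quadrature; a generic, hedged sketch (not the paper's finite element estimator) is:

```python
# Hedged sketch of error-driven adaptive refinement: estimate the local
# error on each interval from a coarse/fine comparison and refine any
# interval whose estimate exceeds its share of the tolerance.
def adaptive_refine(f, a, b, tol):
    mid = 0.5 * (a + b)
    coarse = 0.5 * (b - a) * (f(a) + f(b))             # one trapezoid
    fine = 0.25 * (b - a) * (f(a) + 2 * f(mid) + f(b)) # two trapezoids
    if abs(fine - coarse) < tol:                       # a posteriori estimate
        return fine
    return (adaptive_refine(f, a, mid, tol / 2) +
            adaptive_refine(f, mid, b, tol / 2))

approx = adaptive_refine(lambda x: x * x, 0.0, 1.0, 1e-6)  # integral of x^2 on [0,1]
```

In the electrochemistry setting the "interval" is a mesh element near the electrode edge and the estimator is an a posteriori bound on the current error, but the refine-until-below-tolerance loop is the same idea.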
NASA Astrophysics Data System (ADS)
Zeff, H. B.; Characklis, G. W.; Reed, P. M.; Herman, J. D.
2015-12-01
Water supply policies that integrate portfolios of short-term management decisions with long-term infrastructure development enable utilities to adapt to a range of future scenarios. An effective mix of short-term management actions can augment existing infrastructure, potentially forestalling new development. Likewise, coordinated expansion of infrastructure such as regional interconnections and shared treatment capacity can increase the effectiveness of some management actions like water transfers. Highly adaptable decision pathways that mix long-term infrastructure options and short-term management actions require decision triggers capable of incorporating the impact of these time-evolving decisions on growing water supply needs. Here, we adapt risk-based triggers to sequence a set of potential infrastructure options in combination with utility-specific conservation actions and inter-utility water transfers. Individual infrastructure pathways can be augmented with conservation or water transfers to reduce the cost of meeting utility objectives, but they can also include cooperatively developed, shared infrastructure that expands regional capacity to transfer water. This analysis explores the role of cooperation among four water utilities in the 'Research Triangle' region of North Carolina by formulating three distinct categories of adaptive policy pathways: independent action (utility-specific conservation and supply infrastructure only), weak cooperation (utility-specific conservation and infrastructure development with regional transfers), and strong cooperation (utility-specific conservation and jointly developed regional infrastructure that supports transfers). Results suggest that strong cooperation aids the utilities in meeting their individual objectives at substantially lower costs and with fewer irreversible infrastructure options.
Adaptive weld control for high-integrity welding applications
NASA Astrophysics Data System (ADS)
Powell, Bradley W.
Adaptive, closed-loop weld control is necessary to maintain high-integrity, zero-defect welds. Conventional weld control techniques using weld parameter feedback control loops are sufficient to maintain set points, but fall short when confronted with unexpected variations in part/tooling temperature and mechanical structure, weldment material, arc skew angle, or calibration errors in weld parameter feedback measurement. Modern technology allows closed-loop control utilizing input from real-time weld monitoring sensors and inspection devices. Weld puddle parameters, bead profile parameters, and weld seam position are fed back into the weld control loop, which adapts to the weld condition variations and drives them back to a desired state, thereby preventing weld defects or perturbations. Parameters such as arc position relative to the weld seam, puddle symmetry, arc length, weld width, and bead shape can be extracted from sensor imagery and used in closed-loop active weld control. All weld bead and puddle measurements are available for real-time display and statistical process control analysis, after which the data is archived to permanent storage for later retrieval and analysis.
Integrated Power Adapter: Isolated Converter with Integrated Passives and Low Material Stress
2010-09-01
ADEPT Project: CPES at Virginia Tech is developing an extremely efficient power converter that could be used in power adapters for small, lightweight laptops and other types of mobile electronic devices. Power adapters convert electrical energy into useable power for an electronic device, and they currently waste a lot of energy when they are plugged into an outlet to power up. CPES at Virginia Tech is integrating high-density capacitors, new magnetic materials, high-frequency integrated circuits, and a constant-flux transformer to create its efficient power converter. The high-density capacitors enable the power adapter to store more energy. The new magnetic materials also increase energy storage, and they can be precisely dispensed using a low-cost ink-jet printer which keeps costs down. The high-frequency integrated circuits can handle more power, and they can handle it more efficiently. And, the constant-flux transformer processes a consistent flow of electrical current, which makes the converter more efficient.
ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve
Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk
2014-01-01
In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725
Adaptive methods, rolling contact, and nonclassical friction laws
NASA Technical Reports Server (NTRS)
Oden, J. T.
1989-01-01
Results and methods on three different areas of contemporary research are outlined. These include adaptive methods, the rolling contact problem for finite deformation of a hyperelastic or viscoelastic cylinder, and non-classical friction laws for modeling dynamic friction phenomena.
Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N; Zawadzki, Robert J; Sarunic, Marinko V
2015-08-24
Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an image-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images.
Laser housing having integral mounts and method of manufacturing same
Herron, Michael Alan; Brickeen, Brian Keith
2004-10-19
A housing adapted to position, support, and facilitate aligning various components, including an optical path assembly, of a laser. In a preferred embodiment, the housing is constructed from a single piece of material and broadly comprises one or more through-holes; one or more cavities; and one or more integral mounts, wherein the through-holes and the cavities cooperate to define the integral mounts. Securement holes machined into the integral mounts facilitate securing components within the integral mounts using set screws, adhesive, or a combination thereof. In a preferred method of making the housing, the through-holes and cavities are first machined into the single piece of material, with at least some of the remaining material forming the integral mounts.
Comparison of photopeak integration methods
NASA Astrophysics Data System (ADS)
Kennedy, G.
1990-12-01
Several methods for the calculation of gamma-ray photopeak areas have been compared for the case of a small peak on a high Compton background. A total of 980 similar spectra were accumulated with a germanium detector, using a weak 137Cs source to produce a peak at 662 keV on a Compton background generated by a 60Co source. A computer program was written to calculate the area of the 662 keV peak using the total- and partial-peak-area methods, a modification of Sterlinski's method, Loska's method, and least-squares fitting of Gaussian peak shapes with linear and quadratic background. The precision attained depended strongly on the number of channels used to estimate the background; the best precision, about 9.5%, was obtained with the partial-peak-area method, the modified Sterlinski method, and least-squares fitting with variable peak position, fixed peak width, and linear background. The methods were also evaluated for their sensitivity to uncertainty in the peak centroid position. Considering precision, ease of use, reliability, and universal applicability, the total-peak-area method using several channels for background estimation and the least-squares-fitting method are recommended.
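The total-peak-area method with multi-channel background estimation can be sketched directly. The spectrum below is synthetic; real spectra would come from the MCA, and the background here is assumed flat for simplicity.

```python
# Hedged sketch of the total-peak-area method: sum counts across the peak
# window and subtract a background estimated from n_bg channels on each
# side of the peak. Channel numbers and counts are synthetic.
def total_peak_area(counts, lo, hi, n_bg):
    left = counts[lo - n_bg:lo]                   # background channels below peak
    right = counts[hi + 1:hi + 1 + n_bg]          # background channels above peak
    bg_per_channel = (sum(left) + sum(right)) / (2 * n_bg)
    gross = sum(counts[lo:hi + 1])
    return gross - bg_per_channel * (hi - lo + 1)

# flat Compton background of 100 counts/channel with a 500-count peak on top
spectrum = [100] * 40
for ch, extra in zip(range(18, 23), [50, 150, 100, 150, 50]):
    spectrum[ch] += extra
area = total_peak_area(spectrum, lo=18, hi=22, n_bg=5)
```

The paper's observation that precision depends on the number of background channels corresponds to the choice of `n_bg`: more channels reduce the variance of `bg_per_channel` as long as the background really is linear there.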
Managing Climate Risk. Integrating Adaptation into World Bank Group Operations
Van Aalst, M.
2006-08-15
Climate change is already taking place, and further changes are inevitable. Developing countries, and particularly the poorest people in these countries, are most at risk. The impacts result not only from gradual changes in temperature and sea level but also, in particular, from increased climate variability and extremes, including more intense floods, droughts, and storms. These changes are already having major impacts on the economic performance of developing countries and on the lives and livelihoods of millions of poor people around the world. Climate change thus directly affects the World Bank Group's mission of eradicating poverty. It also puts at risk many projects in a wide range of sectors, including infrastructure, agriculture, human health, water resources, and environment. The risks include physical threats to the investments, potential underperformance, and the possibility that projects will indirectly contribute to rising vulnerability by, for example, triggering investment and settlement in high-risk areas. The way to address these concerns is not to separate climate change adaptation from other priorities but to integrate comprehensive climate risk management into development planning, programs, and projects. While there is a great need to heighten awareness of climate risk in Bank work, a large body of experience on climate risk management is already available, in analytical work, in country dialogues, and in a growing number of investment projects. This operational experience highlights the general ingredients for successful integration of climate risk management into the mainstream development agenda: getting the right sectoral departments and senior policy makers involved; incorporating risk management into economic planning; engaging a wide range of nongovernmental actors (businesses, nongovernmental organizations, communities, and so on); giving attention to regulatory issues; and choosing strategies that will pay off immediately under current
An Adaptive Discontinuous Galerkin Method for Modeling Atmospheric Convection (Preprint)
2011-04-13
Giraldo and Volkmar Wirth. 5 SENSITIVITY STUDIES: One important question for each adaptive numerical model is: how accurate is the adaptive method? [...] this criterion that is used later for some sensitivity studies. These studies include a comparison between a simulation on an adaptive mesh with a simulation on a uniform mesh, and a sensitivity study concerning the size of the refinement region. 5.1 Comparison Criterion: For comparing different [...]
Integrated control system and method
Wang, Paul Sai Keat; Baldwin, Darryl; Kim, Myoungjin
2013-10-29
An integrated control system for use with an engine connected to a generator providing electrical power to a switchgear is disclosed. The engine receives gas produced by a gasifier. The control system includes an electronic controller associated with the gasifier, engine, generator, and switchgear. A gas flow sensor monitors a gas flow from the gasifier to the engine through an engine gas control valve and provides a gas flow signal to the electronic controller. A gas oversupply sensor monitors a gas oversupply from the gasifier and provides an oversupply signal indicative of gas not provided to the engine. A power output sensor monitors a power output of the switchgear and provides a power output signal. The electronic controller changes the gas production of the gasifier and the power output rating of the switchgear based on the gas flow signal, the oversupply signal, and the power output signal.
Fast integral methods for integrated optical systems simulations: a review
NASA Astrophysics Data System (ADS)
Kleemann, Bernd H.
2015-09-01
Boundary integral equation methods (BIM), or simply integral methods (IM), in the context of optical design and simulation are rigorous electromagnetic methods solving the Helmholtz or Maxwell equations on the boundary (the surface or interface between two materials) for scattering and/or diffraction purposes. This work is mainly restricted to integral methods for diffracting structures such as gratings, kinoforms, diffractive optical elements (DOEs), micro Fresnel lenses, computer generated holograms (CGHs), holographic or digital phase holograms, periodic lithographic structures, and the like. In most cases the mentioned structures have dimensions of thousands of wavelengths in diameter. Therefore, the basic methods necessary for the numerical treatment are locally applied electromagnetic grating diffraction algorithms. Interestingly, integral methods were among the first electromagnetic methods investigated for grating diffraction. The development started in the mid-1960s for gratings with infinite conductivity, mainly owing to the good convergence of the integral methods, especially for TM polarization. The first integral equation methods (IEM) for finite conductivity were the methods by D. Maystre at the Fresnel Institute in Marseille: in 1972/74 for dielectric and metallic gratings, and later for multiprofile and other types of gratings and for photonic crystals. Other methods, such as differential and modal methods, suffered from unstable behaviour and slow convergence compared to BIMs for metallic gratings in TM polarization from the beginning until the mid-1990s. The first BIM for gratings using a parametrization of the profile was developed at the Karl Weierstrass Institute in Berlin under a contract with the Carl Zeiss Jena works in 1984-1986 by A. Pomp, J. Creutziger, and the author. Due to the parametrization, this method was able to deal with any kind of surface grating from the beginning: whether profiles with edges, overhanging non
On the Computation of Integral Curves in Adaptive Mesh Refinement Vector Fields
Deines, Eduard; Weber, Gunther H.; Garth, Christoph; Van Straalen, Brian; Borovikov, Sergey; Martin, Daniel F.; Joy, Kenneth I.
2011-06-27
Integral curves, such as streamlines, streaklines, pathlines, and timelines, are an essential tool in the analysis of vector field structures, offering straightforward and intuitive interpretation of visualization results. While such curves have a long-standing tradition in vector field visualization, their application to Adaptive Mesh Refinement (AMR) simulation results poses unique problems. AMR is a highly effective discretization method for a variety of physical simulation problems and has recently been applied to the study of vector fields in flow and magnetohydrodynamic applications. The cell-centered nature of AMR data and discontinuities in the vector field representation arising from AMR level boundaries complicate the application of numerical integration methods to compute integral curves. In this paper, we propose a novel approach to alleviate these problems and show its application to streamline visualization in an AMR model of the magnetic field of the solar system as well as to a simulation of two incompressible viscous vortex rings merging.
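The numerical core of streamline computation is a step-by-step integration of positions through the vector field. The sketch below uses classical RK4 with an analytic field; in real AMR data the `velocity` callback would resolve the finest patch covering the query point, which is exactly where the level-boundary discontinuities the authors discuss cause trouble.

```python
# Hedged sketch of streamline integration: one RK4 step through a 2-D
# vector field supplied as a callback. The solid-body rotation field is
# an illustrative stand-in for an AMR patch lookup.
def rk4_step(pos, h, velocity):
    x, y = pos
    k1 = velocity(x, y)
    k2 = velocity(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = velocity(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = velocity(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def streamline(seed, h, steps, velocity):
    pts = [seed]
    for _ in range(steps):
        pts.append(rk4_step(pts[-1], h, velocity))
    return pts

rotation = lambda x, y: (-y, x)                 # circular streamlines
path = streamline((1.0, 0.0), h=0.01, steps=628, velocity=rotation)
```

For the rotation field the streamline should stay on the unit circle; with cell-centered AMR data the velocity evaluations inside a single RK4 step may straddle a refinement boundary, which is the discontinuity problem the paper addresses.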
Adaptable radiation monitoring system and method
Archer, Daniel E.; Beauchamp, Brock R.; Mauger, G. Joseph; Nelson, Karl E.; Mercer, Michael B.; Pletcher, David C.; Riot, Vincent J.; Schek, James L.; Knapp, David A.
2006-06-20
A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.
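The triggering decision on short time bins can be sketched with a simple Poisson significance test. The bin counts, background level, and 5-sigma threshold below are illustrative assumptions, not the system's actual criterion.

```python
# Hedged sketch of trigger detection: flag any time bin whose gross counts
# exceed the background mean by k Poisson standard deviations.
def find_triggers(bin_counts, background_mean, k=5.0):
    sigma = background_mean ** 0.5          # Poisson: std = sqrt(mean)
    threshold = background_mean + k * sigma
    return [i for i, c in enumerate(bin_counts) if c > threshold]

bins = [48, 52, 50, 49, 180, 51, 47]        # a source passes during bin 4
triggers = find_triggers(bins, background_mean=50.0)
```

Short bins (under ~150 ms, as in the abstract) keep a fast-moving source's counts concentrated in one or two bins, which is what makes this kind of per-bin test effective at highway speeds.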
Adaptive computational methods for aerothermal heating analysis
NASA Technical Reports Server (NTRS)
Price, John M.; Oden, J. Tinsley
1988-01-01
The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations with concentration on shock and inviscid flow field capturing. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme are presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.
An adaptive pseudospectral method for discontinuous problems
NASA Technical Reports Server (NTRS)
Augenbaum, Jeffrey M.
1988-01-01
The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep gradient solutions of hyperbolic pde's by approximation in suitably chosen coordinate systems. Continuous, high gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.
Moving and adaptive grid methods for compressible flows
NASA Technical Reports Server (NTRS)
Trepanier, Jean-Yves; Camarero, Ricardo
1995-01-01
This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.
Anderson, R W; Pember, R B; Elliott, N S
2001-10-22
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the limits of what traditional ALE methods can solve, by focusing computational resources where they are required through dynamic adaptation. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.
NASA Astrophysics Data System (ADS)
Rosen, A. L.; Krumholz, M. R.; Oishi, J. S.; Lee, A. T.; Klein, R. I.
2017-02-01
We present a highly parallel multi-frequency hybrid radiation hydrodynamics algorithm that combines a spatially adaptive long characteristics method for the radiation field from point sources with a moment method that handles the diffuse radiation field produced by a volume-filling fluid. Our Hybrid Adaptive Ray-Moment Method (HARM2) operates on patch-based adaptive grids, is compatible with asynchronous time stepping, and works with any moment method. In comparison to previous long characteristics methods, we have greatly improved the parallel performance of the adaptive long-characteristics method by developing a new completely asynchronous and non-blocking communication algorithm. As a result of this improvement, our implementation achieves near-perfect scaling up to O(10^3) processors on distributed memory machines. We present a series of tests to demonstrate the accuracy and performance of the method.
Adaptive mesh strategies for the spectral element method
NASA Technical Reports Server (NTRS)
Mavriplis, Catherine
1992-01-01
An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation
Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao
2016-01-01
The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter to form a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms. PMID:27999361
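The covariance-matching idea behind such adaptive filters can be sketched in a few lines. Below is a minimal scalar example; the random-walk state model, sliding innovation window, and all parameter names are our illustrative assumptions, not the algorithm of the paper:

```python
import numpy as np

def adaptive_kalman_1d(zs, q=1e-3, r0=1.0, window=10):
    """Scalar Kalman filter that rescales the measurement-noise
    variance R from a sliding window of innovations (covariance
    matching).  Illustrative sketch only."""
    x, p, r = 0.0, 1.0, r0
    innovations, estimates = [], []
    for z in zs:
        p += q                      # predict (random-walk state)
        nu = z - x                  # innovation
        innovations.append(nu)
        if len(innovations) >= window:
            # covariance matching: E[nu^2] = P + R, so estimate
            # R as the sample innovation variance minus P
            c = np.mean(np.square(innovations[-window:]))
            r = max(c - p, 1e-6)
        k = p / (p + r)             # Kalman gain
        x += k * nu                 # measurement update
        p *= (1.0 - k)
        estimates.append(x)
    return np.array(estimates)
```

Running this on noisy samples of a constant signal shows the estimate settling near the true value while R adapts toward the actual measurement-noise variance.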
Shaping the Cities of Tomorrow: Integrating Local Urban Adaptation within an Environmental Framework
NASA Astrophysics Data System (ADS)
Georgescu, M.
2014-12-01
Contemporary methods focused on increasing urban sustainability are largely based on the reduction of greenhouse gas emissions. While these efforts are essential steps forward, continued characterization of urban sustainability solely within a biogeochemical framework, with neglect of the biophysical impact of the built environment, omits regional hydroclimatic forcing of the same order of magnitude as greenhouse gas emissions. Using a suite of continuous, multi-year and multi-member continental scale numerical simulations with the WRF model for the U.S., we examine hydroclimatic impacts for a variety of U.S. urban expansion scenarios (for the year 2100) and urban adaptation futures (cool roofs, green roofs, and a hypothetical hybrid approach integrating biophysical properties of both cool and green roofs), and compare those to experiments utilizing a contemporary urban extent. Widespread adoption of adaptation strategies exhibits regionally and seasonally dependent hydroclimatic impacts. For some regions and seasons, urban-induced warming in excess of 3°C can be completely offset by all adaptation approaches examined. For other regions, widespread adoption of some adaptation approaches leads to significant rainfall decline. Sustainable urban expansion therefore requires an integrated assessment that also incorporates biophysically induced urban impacts, and demands tradeoff assessment of various strategies aimed to ameliorate deleterious consequences of growth (e.g., urban heat island reduction).
An adaptive precision gradient method for optimal control.
NASA Technical Reports Server (NTRS)
Klessig, R.; Polak, E.
1973-01-01
This paper presents a gradient algorithm for unconstrained optimal control problems. The algorithm is stated in terms of numerical integration formulas, the precision of which is controlled adaptively by a test that ensures convergence. Empirical results show that this algorithm is considerably faster than its fixed precision counterpart.
Adaptive methods: when and how should they be used in clinical trials?
Porcher, Raphaël; Lecocq, Brigitte; Vray, Muriel
2011-01-01
Adaptive clinical trial designs are defined as designs that use data accumulated during the trial to possibly modify certain aspects without compromising the validity and integrity of the said trial. Compared to more traditional trials, adaptive designs in theory allow the same information to be generated but in a more efficient manner. The advantages and limits of this type of design, together with the weight of the constraints, in particular of a logistic nature, that their use implies, differ depending on whether the trial is exploratory or confirmatory with a view to registration. One of the key elements ensuring trial integrity is the involvement of an independent committee to determine adaptations in terms of experimental design during the study. Adaptive methods for clinical trials are appealing and may be accepted by the relevant authorities. However, the constraints that they impose must be determined well in advance.
NASA Astrophysics Data System (ADS)
Zhang, Yangming; Yan, Peng
2016-12-01
This paper investigates a systematic modeling and control methodology for a multi-axis PZT (piezoelectric transducer) actuated servo stage supporting nano-manipulations. A sliding mode disturbance observer-based adaptive integral backstepping control method with an estimated inverse model compensation scheme is proposed to achieve ultra high precision tracking in the presence of the hysteresis nonlinearities, model uncertainties, and external disturbances. By introducing a time rate of the input signal, an enhanced rate-dependent Prandtl-Ishlinskii model is developed to describe the hysteresis behaviors, and its inverse is also constructed to mitigate their adverse effects. In particular, the corresponding inverse compensation error is analyzed and its boundedness is proven. Subsequently, the sliding mode disturbance observer-based adaptive integral backstepping controller is designed to guarantee the convergence of the tracking error, where the sliding mode disturbance observer can track the total disturbances in a finite time, while the integral action is incorporated into the adaptive backstepping design to improve the steady-state control accuracy. Finally, real time implementations of the proposed algorithm are applied on the PZT actuated servo system, where excellent tracking performance with tracking precision error around 6‰ for circular contour tracking is achieved in the experimental results.
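The classical (rate-independent) Prandtl-Ishlinskii model underlying the abstract above is a weighted superposition of play operators; the rate-dependent variant in the paper additionally makes the operator depend on the input rate. A minimal sketch, with radii and weights chosen purely for illustration:

```python
import numpy as np

def prandtl_ishlinskii(u, radii, weights):
    """Classical Prandtl-Ishlinskii hysteresis model: a weighted
    sum of play (backlash) operators with thresholds `radii`.
    Rate-independent sketch; parameters are illustrative."""
    u = np.asarray(u, dtype=float)
    states = np.zeros(len(radii))        # play-operator memory
    out = np.empty_like(u)
    for k, uk in enumerate(u):
        for i, r in enumerate(radii):
            # play operator: follow u once it escapes the dead band
            states[i] = max(uk - r, min(uk + r, states[i]))
        out[k] = np.dot(weights, states)
    return out
```

Sweeping the input up and back down produces different outputs at the same input value, i.e. a hysteresis loop, which is what the inverse compensator in the paper is built to cancel.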
Methods and systems for integrating fluid dispensing technology with stereolithography
Medina, Francisco; Wicker, Ryan; Palmer, Jeremy A.; Davis, Don W.; Chavez, Bart D.; Gallegos, Phillip L.
2010-02-09
An integrated system and method of integrating fluid dispensing technologies (e.g., direct-write (DW)) with rapid prototyping (RP) technologies (e.g., stereolithography (SL)) without part registration comprising: an SL apparatus and a fluid dispensing apparatus further comprising a translation mechanism adapted to translate the fluid dispensing apparatus along the X-, Y- and Z-axes. The fluid dispensing apparatus comprises: a pressurized fluid container; a valve mechanism adapted to control the flow of fluid from the pressurized fluid container; and a dispensing nozzle adapted to deposit the fluid in a desired location. To aid in calibration, the integrated system includes a laser sensor and a mechanical switch. The method further comprises building a second part layer on top of the fluid deposits and optionally accommodating multi-layered circuitry by incorporating a connector trace. Thus, the present invention is capable of efficiently building single and multi-material SL fabricated parts embedded with complex three-dimensional circuitry using DW.
Nonlinear adaptive control using the Fourier integral and its application to CSTR systems.
Zhang, Huaguang; Cai, Lilong
2002-01-01
This paper presents a new nonlinear adaptive tracking controller for a class of general time-variant nonlinear systems. The control system consists of an inner loop and an outer loop. The inner loop is a fuzzy sliding mode control that is used as the feedback controller to overcome random instant disturbances. The stability of the inner loop is designed by the sliding mode control method. The outer loop is a Fourier integral-based control that is used as the feedforward controller to overcome the deterministic type of uncertain disturbance. The asymptotic convergence condition of the nonlinear adaptive control system is guaranteed by the Lyapunov direct method. The effectiveness of the proposed controller is illustrated by its application to composition control in a continuously stirred tank reactor system.
EMERGY METHODS: VALUABLE INTEGRATED ASSESSMENT TOOLS
NHEERL's Atlantic Ecology Division is investigating emergy methods as tools for integrated assessment in several projects evaluating environmental impacts, policies, and alternatives for remediation and intervention. Emergy accounting is a methodology that provides a quantitative...
Adaptive upscaling with the dual mesh method
Guerillot, D.; Verdiere, S.
1997-08-01
The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance the a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous medium and to an actual field case in South America.
Adaptive multi-stage integrators for optimal energy conservation in molecular simulations
NASA Astrophysics Data System (ADS)
Fernández-Pendás, Mario; Akhmatskaya, Elena; Sanz-Serna, J. M.
2016-12-01
We introduce a new Adaptive Integration Approach (AIA) to be used in a wide range of molecular simulations. Given a simulation problem and a step size, the method automatically chooses the optimal scheme out of an available family of numerical integrators. Although we focus on two-stage splitting integrators, the idea may be used with more general families. In each instance, the system-specific integrating scheme identified by our approach is optimal in the sense that it provides the best conservation of energy for harmonic forces. The AIA method has been implemented in the BCAM-modified GROMACS software package. Numerical tests in molecular dynamics and hybrid Monte Carlo simulations of constrained and unconstrained physical systems show that the method successfully realizes the fail-safe strategy. In all experiments, and for each of the criteria employed, the AIA is at least as good as, and often significantly outperforms, the standard Verlet scheme as well as fixed-parameter optimized two-stage integrators. In particular, for the systems where harmonic forces play an important role, the sampling efficiency found in simulations using the AIA is up to 5 times better than the one achieved with other tested schemes.
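The idea of picking, for a given step size, the best member of a family of two-stage splitting integrators can be illustrated on a single harmonic oscillator. The brute-force grid search below is our stand-in for AIA's analytic selection criterion, not the paper's actual rule:

```python
import numpy as np

def two_stage_step(q, p, h, b):
    """One step of the symmetric two-stage splitting
    A(bh) B(h/2) A((1-2b)h) B(h/2) A(bh) for the harmonic
    oscillator H = (p^2 + q^2)/2, where A drifts the position
    and B kicks the momentum."""
    q += b * h * p
    p -= 0.5 * h * q
    q += (1.0 - 2.0 * b) * h * p
    p -= 0.5 * h * q
    q += b * h * p
    return q, p

def energy_drift(h, b, n_steps=1000):
    """Worst-case deviation of the energy from its initial value
    along a trajectory started at (q, p) = (1, 0)."""
    q, p = 1.0, 0.0
    e0 = 0.5 * (p * p + q * q)
    worst = 0.0
    for _ in range(n_steps):
        q, p = two_stage_step(q, p, h, b)
        worst = max(worst, abs(0.5 * (p * p + q * q) - e0))
    return worst

def best_b(h, bs):
    """Select the family parameter b with the smallest energy
    drift for the given step size (grid search for illustration)."""
    return min(bs, key=lambda b: energy_drift(h, b))
```

For harmonic forces the energy error of such a trajectory stays bounded, and the selected b beats the endpoints of the scanned interval.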
Adaptive Voltage Management Enabling Energy Efficiency in Nanoscale Integrated Circuits
NASA Astrophysics Data System (ADS)
Shapiro, Alexander E.
Battery powered devices emphasize energy efficiency in modern sub-22 nm CMOS microprocessors, rendering classic power reduction solutions insufficient. Classical solutions that reduce power consumption in high performance integrated circuits are superseded with novel and enhanced power reduction techniques to enable the greater energy efficiency desired in modern microprocessors and emerging mobile platforms. Dynamic power consumption is reduced by operating over a wide range of supply voltages. This region of operation is enabled by a high speed and power efficient level shifter which translates low voltage digital signals to higher voltages (and vice versa), a key component that enables communication among circuits operating at different voltage levels. Additionally, optimizing the wide supply voltage range of signals propagating across long interconnect enables greater energy savings. A closed-form delay model supporting wide voltage range is developed to enable this capability. The model supports an ultra-wide voltage range from nominal voltages to subthreshold voltages, and a wide range of repeater sizes. To mitigate the drawback of lower operating speed at reduced supply voltages, the high performance exhibited by MOS current mode logic technology is exploited. High performance and energy efficient circuits are enabled by combining this logic style with power efficient near threshold circuits. Many-core systems that operate at high frequencies and process highly parallel workloads benefit from this combination of MCML with NTC. Due to aggressive scaling, static power consumption can in some cases overshadow dynamic power. Techniques to lower leakage power have therefore become an important objective in modern microprocessors. To address this issue, an adaptive power gating technique is proposed. This technique utilizes high levels of granularity to save additional leakage power when a circuit is active as opposed to standard power gating that saves static
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.
Integration of AdaptiSPECT, a small-animal adaptive SPECT imaging system
Chaix, Cécile; Kovalsky, Stephen; Kosmider, Matthew; Barrett, Harrison H.; Furenlid, Lars R.
2015-01-01
AdaptiSPECT is a pre-clinical adaptive SPECT imaging system under final development at the Center for Gamma-ray Imaging. The system incorporates multiple adaptive features: an adaptive aperture, 16 detectors mounted on translational stages, and the ability to switch between a non-multiplexed and a multiplexed imaging configuration. In this paper, we review the design of AdaptiSPECT and its adaptive features. We then describe the on-going integration of the imaging system. PMID:26347197
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
Xu, Yuan; Chen, Xiyuan; Li, Qinghua
2014-01-01
As the core of the integrated navigation system, the data fusion algorithm must be designed carefully. In order to improve the accuracy of data fusion, this work proposed an adaptive iterated extended Kalman filter (AIEKF), which uses a noise statistics estimator in the iterated extended Kalman filter (IEKF); the AIEKF is then used to deal with the nonlinear problem in the inertial navigation system (INS)/wireless sensor networks (WSNs) integrated navigation system. Practical tests have been conducted to evaluate the performance of the proposed method. The results show that the proposed method reduces the mean root-mean-square error (RMSE) of position by about 92.53%, 67.93%, 55.97%, and 30.09% compared with the INS only, WSN, EKF, and IEKF. PMID:24693225
NASA Astrophysics Data System (ADS)
Meng, Yang; Gao, Shesheng; Zhong, Yongmin; Hu, Gaoge; Subic, Aleksandar
2016-03-01
The use of the direct filtering approach for INS/GNSS integrated navigation introduces nonlinearity into the system state equation. As the unscented Kalman filter (UKF) is a promising method for nonlinear problems, an obvious solution is to incorporate the UKF concept in the direct filtering approach to address the nonlinearity involved in INS/GNSS integrated navigation. However, the performance of the standard UKF is dependent on the accurate statistical characterizations of system noise. If the noise distributions of inertial instruments and GNSS receivers are not appropriately described, the standard UKF will produce deteriorated or even divergent navigation solutions. This paper presents an adaptive UKF with noise statistic estimator to overcome the limitation of the standard UKF. According to the covariance matching technique, the innovation and residual sequences are used to determine the covariance matrices of the process and measurement noises. The proposed algorithm can estimate and adjust the system noise statistics online, and thus enhance the adaptive capability of the standard UKF. Simulation and experimental results demonstrate that the performance of the proposed algorithm is significantly superior to that of the standard UKF and adaptive-robust UKF when accurate knowledge of the system noise statistics is unavailable, leading to improved navigation precision.
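At the core of any UKF variant is the unscented transform, which propagates a Gaussian through a nonlinearity via a small set of sigma points. A generic sketch using the standard (alpha, beta, kappa) scaling convention; this is the textbook transform, not the adaptive noise estimator of the paper:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-1, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f
    using the standard UKF sigma-point construction."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    # 2n + 1 sigma points: the mean plus symmetric offsets
    sigmas = [mean] + [mean + s for s in sqrt_cov.T] \
                    + [mean - s for s in sqrt_cov.T]
    wm = np.full(2 * n + 1, 0.5 / (n + lam))   # mean weights
    wc = wm.copy()                             # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    ys = np.array([f(s) for s in sigmas])
    y_mean = wm @ ys
    diff = ys - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov
```

A useful sanity check is that the transform is exact for linear maps: the propagated mean and covariance match A m and A P A^T.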
Integration of the immune system: a complex adaptive supersystem
NASA Astrophysics Data System (ADS)
Crisman, Mark V.
2001-10-01
Immunity to pathogenic organisms is a complex process involving interacting factors within the immune system including circulating cells, tissues and soluble chemical mediators. Both the efficiency and adaptive responses of the immune system in a dynamic, often hostile, environment are essential for maintaining our health and homeostasis. This paper will present a brief review of one of nature's most elegant, complex adaptive systems.
Adaptive Discontinuous Evolution Galerkin Method for Dry Atmospheric Flow
2013-04-02
Instead of a standard one-dimensional approximate Riemann solver, the flux integration within the discontinuous Galerkin method is now realized by an evolution operator. Comparisons with the standard one-dimensional approximate Riemann solver used for the flux integration demonstrate better stability, accuracy, and reliability of the adaptive discontinuous evolution Galerkin method for dry atmospheric convection.
Blood viscosity measurement: an integral method using Doppler ultrasonic profiles
NASA Astrophysics Data System (ADS)
Flaud, P.; Bensalah, A.
2005-12-01
The aim of this work is to present a new indirect and noninvasive method for the measurement of Newtonian blood viscosity. Based on an integral form of the axial Navier-Stokes equation, this method is particularly suited for in vivo investigations using ultrasonic arterial blood velocity profiles. Its main advantage is that it is applicable to periodic as well as non-periodic flows. Moreover, it does not require the classical filtering methods used to enhance the signal-to-noise ratio of physiological signals. The method only requires knowledge of the velocimetric data measured inside a spatially and temporally optimized zone of the Doppler velocity profiles. The results obtained using numerical simulation as well as in vitro or in vivo experiments prove the effectiveness of the method. It is therefore well adapted to the clinical environment as a systematic, quasi-on-line method for the measurement of blood viscosity.
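As a much-simplified steady-flow analogue of such profile-based viscosity estimation, Newtonian viscosity can be recovered from a measured Poiseuille profile when the axial pressure gradient is known. This is a stand-in for the paper's unsteady integral method, and all variable names and values are illustrative assumptions:

```python
import numpy as np

def viscosity_from_profile(r, u, dpdx, radius):
    """Estimate Newtonian viscosity from a steady axial velocity
    profile u(r) by a least-squares parabolic (Poiseuille) fit,
    given the axial pressure gradient dpdx (Pa/m)."""
    # fit u(r) = a * (radius^2 - r^2) in the least-squares sense
    basis = radius**2 - r**2
    a = np.dot(basis, u) / np.dot(basis, basis)
    # Poiseuille: u(r) = (-dpdx / (4 mu)) (R^2 - r^2)  =>  mu = -dpdx / (4 a)
    return -dpdx / (4.0 * a)
```

Feeding the function a synthetic profile generated with a known viscosity recovers that viscosity, which is the basic consistency check behind any such inverse estimate.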
Methods of geometrical integration in accelerator physics
NASA Astrophysics Data System (ADS)
Andrianov, S. N.
2016-12-01
In the paper we consider a method of geometric integration for the long-time evolution of particle beams in cyclic accelerators, based on the matrix representation of the particle-evolution operator. This method allows us to calculate the corresponding beam evolution in terms of two-dimensional matrices, including nonlinear effects. The ideology of geometric integration introduces into the computational algorithms the amendments necessary for preserving the qualitative properties of maps represented as truncated series generated by the evolution operator. The formalism extends to both polarized and intense beams. Examples of practical applications are described.
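The qualitative property at stake can be seen already for a linear one-turn map of a harmonic oscillator: a symplectic (geometric) one-step matrix has unit determinant, so phase-space area is preserved over arbitrarily many turns, while a naive explicit Euler map inflates it. A small sketch (not the paper's matrix formalism, just the preservation principle):

```python
import numpy as np

def euler_map(h):
    """Explicit-Euler one-step matrix for x'' = -x: det = 1 + h^2,
    so phase-space area grows every step (non-symplectic)."""
    return np.array([[1.0, h], [-h, 1.0]])

def symplectic_euler_map(h):
    """Symplectic-Euler one-step matrix: determinant exactly 1,
    so the area-preserving structure survives long tracking."""
    return np.array([[1.0, h], [-h, 1.0 - h * h]])

def track(M, turns):
    """Power up the one-turn matrix, mimicking long-term tracking."""
    return np.linalg.matrix_power(M, turns)
```

After 10,000 turns with h = 0.01 the Euler map's determinant has grown by a factor of roughly e, while the symplectic map's determinant remains 1 to rounding error, which is exactly the kind of qualitative property geometric integration is designed to preserve.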
Studies of an Adaptive Kaczmarz Method for Electrical Impedance Imaging
NASA Astrophysics Data System (ADS)
Li, Taoran; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.
2013-04-01
We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term JTJ which could be expensive in terms of memory storage in large scale problems, we propose to solve the inverse problem by adaptively updating both the optimal current pattern with improved distinguishability and the conductivity estimate at each iteration. With a novel subset scheme, the memory-efficient reconstruction algorithm which appropriately combines the optimal current pattern generation and the Kaczmarz method can produce accurate and stable solutions adaptively compared to traditional Kaczmarz and Gauss-Newton type methods. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results.
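The classical cyclic Kaczmarz iteration underlying the adaptive method projects the current iterate onto one measurement hyperplane at a time. A minimal sketch, without the optimal-current-pattern updates or subset scheme of the paper:

```python
import numpy as np

def kaczmarz(A, b, sweeps=100, x0=None):
    """Cyclic Kaczmarz iteration for A x = b: successively project
    the iterate onto the hyperplane a_i^T x = b_i of each row a_i."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(sweeps):
        for i in range(m):
            ai = A[i]
            # orthogonal projection onto the i-th hyperplane
            x += (b[i] - ai @ x) / (ai @ ai) * ai
    return x
```

For a consistent system the iteration converges to a solution; each sweep costs only O(mn), which is the memory-frugal property the abstract contrasts with forming and inverting J^T J.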
Differential temperature integrating diagnostic method and apparatus
Doss, James D.; McCabe, Charles W.
1976-01-01
A method and device for detecting the presence of breast cancer in women by integrating the temperature difference between the temperature of a normal breast and that of a breast having a malignant tumor. The breast-receiving cups of a brassiere are each provided with thermally conductive material next to the skin, with a thermistor attached to the thermally conductive material in each cup. The thermistors are connected to adjacent arms of a Wheatstone bridge. Unbalance currents in the bridge are integrated with respect to time by means of an electrochemical integrator. In the absence of a tumor, both breasts maintain substantially the same temperature, and the bridge remains balanced. If the tumor is present in one breast, a higher temperature in that breast unbalances the bridge and the electrochemical cells integrate the temperature difference with respect to time.
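The measurement principle reduces to a Wheatstone-bridge unbalance voltage accumulated over time. A toy numerical sketch with illustrative component values (the actual device integrates electrochemically, not digitally):

```python
def bridge_unbalance(r_left, r_right, r_fixed=10e3, v_supply=5.0):
    """Output voltage of a Wheatstone bridge with the two thermistors
    in adjacent arms; zero when both resistances (and hence breast
    temperatures) match.  Component values are assumptions."""
    v_a = v_supply * r_left / (r_left + r_fixed)
    v_b = v_supply * r_right / (r_right + r_fixed)
    return v_a - v_b

def integrate_unbalance(r_left_series, r_right_series, dt):
    """Discrete stand-in for the electrochemical integrator:
    accumulate the unbalance signal over time."""
    return sum(bridge_unbalance(rl, rr) * dt
               for rl, rr in zip(r_left_series, r_right_series))
```

With equal thermistor resistances the integral stays at zero; a sustained resistance difference (a warmer breast, for an NTC thermistor) accumulates a nonzero signal, which is the detection criterion described in the abstract.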
NASA Astrophysics Data System (ADS)
Grothmann, T.; Grecksch, K.; Winges, M.; Siebenhüner, B.
2013-03-01
Several case studies show that "soft social factors" (e.g. institutions, perceptions, social capital) strongly affect social capacities to adapt to climate change. Many soft social factors can probably be changed faster than "hard social factors" (e.g. economic and technological development) and are therefore particularly important for building social capacities. However, there are almost no methodologies for the systematic assessment of soft social factors. Gupta et al. (2010) have developed the Adaptive Capacity Wheel (ACW) for assessing the adaptive capacity of institutions. The ACW differentiates 22 criteria to assess six dimensions: variety, learning capacity, room for autonomous change, leadership, availability of resources, fair governance. To include important psychological factors we extended the ACW by two dimensions: "adaptation motivation" refers to actors' motivation to realise, support and/or promote adaptation to climate change. "Adaptation belief" refers to actors' perceptions of realisability and effectiveness of adaptation measures. We applied the extended ACW to assess adaptive capacities of four sectors - water management, flood/coastal protection, civil protection and regional planning - in North Western Germany. The assessments of adaptation motivation and belief provided a clear added value. The results also revealed some methodological problems in applying the ACW (e.g. overlap of dimensions), for which we propose methodological solutions.
NASA Astrophysics Data System (ADS)
Grothmann, T.; Grecksch, K.; Winges, M.; Siebenhüner, B.
2013-12-01
Several case studies show that social factors like institutions, perceptions and social capital strongly affect social capacities to adapt to climate change. Together with economic and technological development they are important for building social capacities. However, there are almost no methodologies for the systematic assessment of social factors. After reviewing existing methodologies we identify the Adaptive Capacity Wheel (ACW) by Gupta et al. (2010), developed for assessing the adaptive capacity of institutions, as the most comprehensive and operationalised framework to assess social factors. The ACW differentiates 22 criteria to assess 6 dimensions: variety, learning capacity, room for autonomous change, leadership, availability of resources, fair governance. To include important psychological factors we extended the ACW by two dimensions: "adaptation motivation" refers to actors' motivation to realise, support and/or promote adaptation to climate change; "adaptation belief" refers to actors' perceptions of realisability and effectiveness of adaptation measures. We applied the extended ACW to assess adaptive capacities of four sectors - water management, flood/coastal protection, civil protection and regional planning - in northwestern Germany. The assessments of adaptation motivation and belief provided a clear added value. The results also revealed some methodological problems in applying the ACW (e.g. overlap of dimensions), for which we propose methodological solutions.
Hongda Wang; Chiu-Sing Choy
2016-08-01
The ability of the correlation integral to support automatic seizure detection using scalp EEG data has been re-examined in this paper. To improve detection performance and overcome the shortcomings of the correlation integral, nonlinear adaptive denoising and Kalman filtering have been adopted for pre-processing and post-processing. The three-stage algorithm has achieved 84.6% sensitivity and a 0.087/h false detection rate, which are comparable to many machine learning based methods, but at much lower computational cost. Since this algorithm is tested with long-term scalp EEG, it has the potential to achieve higher performance with intracranial EEG. The clinical value of this algorithm includes providing a pre-judgement to assist the doctor's diagnosis procedure and acting as a reliable warning system in a wearable device for epilepsy patients.
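The correlation integral referred to above is the Grassberger-Procaccia pair-counting statistic computed on a delay-embedded signal. A minimal sketch; the embedding dimension and delay are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def correlation_integral(x, radius, dim=3, delay=1):
    """Grassberger-Procaccia correlation integral of a scalar series:
    delay-embed into R^dim, then report the fraction of point pairs
    closer than `radius` (Euclidean distance)."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)            # each pair counted once
    return np.mean(d[iu] < radius)
```

The statistic grows monotonically with the radius and saturates at 1 once the radius exceeds the diameter of the embedded attractor; seizure detectors track changes in this quantity over sliding EEG windows.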
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.
1998-12-10
Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
Adaptive remeshing method in 2D based on refinement and coarsening techniques
NASA Astrophysics Data System (ADS)
Giraud-Moreau, L.; Borouchaki, H.; Cherouat, A.
2007-04-01
The analysis of mechanical structures using the Finite Element Method, in the framework of large elastoplastic strains, needs frequent remeshing of the deformed domain during computation. Remeshing is necessary for two main reasons, the large geometric distortion of finite elements and the adaptation of the mesh size to the physical behavior of the solution. This paper presents an adaptive remeshing method to remesh a mechanical structure in two dimensions subjected to large elastoplastic deformations with damage. The proposed remeshing technique includes adaptive refinement and coarsening procedures, based on geometrical and physical criteria. The proposed method has been integrated in a computational environment using the ABAQUS solver. Numerical examples show the efficiency of the proposed approach.
Integrated force method versus displacement method for finite element analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Berke, Laszlo; Gallagher, Richard H.
1990-01-01
A novel formulation termed the integrated force method (IFM) has been developed in recent years for analyzing structures. In this method all the internal forces are taken as independent variables, and the system equilibrium equations (EE's) are integrated with the global compatibility conditions (CC's) to form the governing set of equations. In IFM the CC's are obtained from the strain formulation of St. Venant, and no choices of redundant load systems have to be made, in contrast to the standard force method (SFM). This property of IFM allows the generation of the governing equations to be automated straightforwardly, as it is in the popular stiffness method (SM). In this report IFM and SM are compared with respect to the structure of their respective equations, their conditioning, required solution methods, overall computational requirements, and convergence properties as these factors influence the accuracy of the results. Overall, this new version of the force method produces more accurate results than the stiffness method for comparable computational cost.
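The contrast between the two formulations fits in a tiny worked example. Below is a hypothetical sketch (not from the report) for a bar fixed at both ends with an axial load at an interior node: IFM solves directly for the two member forces from one equilibrium equation plus one compatibility condition, and the stiffness-method solution is used as a cross-check.

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(b[0]*A[1][1] - A[0][1]*b[1]) / det,
            (A[0][0]*b[1] - b[0]*A[1][0]) / det]

# Bar fixed at both ends, axial load P at an interior node.
k1, k2, P = 2.0, 3.0, 10.0      # segment stiffnesses and load (illustrative)

# IFM: unknowns are the member forces (N1, N2), tension positive.
# Row 1: nodal equilibrium, N1 - N2 = P.
# Row 2: compatibility (total elongation zero), N1/k1 + N2/k2 = 0.
N1, N2 = solve2([[1.0, -1.0], [1.0/k1, 1.0/k2]], [P, 0.0])

# Stiffness method: single displacement unknown u, (k1 + k2) u = P;
# member forces are recovered from u for cross-checking.
u = P / (k1 + k2)
assert abs(N1 - k1*u) < 1e-12 and abs(N2 - (-k2*u)) < 1e-12
```

In IFM the forces come out of the solve directly, while the stiffness method obtains them by differentiating the displacement solution; for this determinate-sized example both agree exactly.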
An improved adaptive IHS method for image fusion
NASA Astrophysics Data System (ADS)
Wang, Ting
2015-12-01
An improved adaptive intensity-hue-saturation (IHS) method is proposed for image fusion in this paper, building on the adaptive IHS (AIHS) method and its improved variant (IAIHS). In the improved method, the weighting matrix, which determines how much spatial detail from the panchromatic (Pan) image should be injected into the multispectral (MS) image, is defined on the basis of the linear relationship between the edges of the Pan and MS images. At the same time, a modulation parameter t is used to balance the spatial and spectral resolution of the fused image. Experiments showed that the improved method can improve spectral quality while maintaining spatial resolution compared with the AIHS and IAIHS methods.
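The detail injection that the weighting matrix controls can be illustrated per pixel. This is a minimal, hypothetical sketch of plain IHS-style injection with a scalar weight w; the AIHS/IAIHS methods discussed above replace w with a per-pixel, edge-driven weighting matrix.

```python
def ihs_fuse(ms_pixel, pan, w=1.0):
    """Minimal IHS-style injection for one pixel: add the difference
    between the Pan value and the MS intensity back into each band,
    scaled by the weight w (spatial vs. spectral trade-off)."""
    r, g, b = ms_pixel
    intensity = (r + g + b) / 3.0
    detail = pan - intensity        # spatial detail carried by Pan
    return (r + w * detail, g + w * detail, b + w * detail)

# With w = 1 the fused pixel's intensity matches the Pan value exactly.
fused = ihs_fuse((90.0, 120.0, 150.0), 130.0)
```

Choosing w below 1 preserves more of the original MS spectral content at the cost of sharpness, which is exactly the balance the modulation parameter t tunes in the paper.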
Monte Carlo methods for multidimensional integration for European option pricing
NASA Astrophysics Data System (ADS)
Todorov, V.; Dimov, I. T.
2016-10-01
In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; then the average of independent samples of this random variable is used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk-neutral valuation formula. Then, with an appropriate change of variables, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule with one of the best low-discrepancy sequences, that of Sobol, for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is of interest to know which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
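The crude Monte Carlo estimator described here is easy to sketch for a European call. The code below (with illustrative parameter values) samples the terminal price under the risk-neutral measure and averages discounted payoffs; the Black-Scholes closed form serves as the reference value the estimate should approach.

```python
import math
import random

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes closed form, used as a reference value."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, n, seed=0):
    """Crude Monte Carlo: the option value is the discounted expectation
    of the payoff under the risk-neutral measure, estimated by averaging
    independent samples of the terminal price S_T."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n):
        ST = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n

exact = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
est = mc_call(100.0, 100.0, 0.05, 0.2, 1.0, n=200_000)
```

Replacing `rng.gauss` draws with inverse-normal-mapped points of a low-discrepancy sequence turns this into the quasi-Monte Carlo variant the paper compares against.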
Implicit integration methods for dislocation dynamics
NASA Astrophysics Data System (ADS)
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; Hommes, G.; Aubry, S.; Arsenlis, A.
2015-03-01
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high-order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
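The role of the nonlinear solver inside an implicit step can be seen on a scalar stiff problem. This hypothetical sketch takes implicit trapezoidal steps (the paper's second-order baseline) with a scalar Newton iteration, at a step size far beyond the stability limit of plain fixed-point iteration.

```python
def trapezoidal_step_newton(f, dfdy, y, h, tol=1e-12, iters=50):
    """One implicit trapezoidal step, y1 = y + h/2 (f(y) + f(y1)),
    with the nonlinear equation for y1 solved by scalar Newton."""
    g = lambda z: z - y - 0.5 * h * (f(y) + f(z))   # residual
    dg = lambda z: 1.0 - 0.5 * h * dfdy(z)          # residual derivative
    z = y + h * f(y)                                # explicit Euler predictor
    for _ in range(iters):
        dz = g(z) / dg(z)
        z -= dz
        if abs(dz) < tol:
            break
    return z

# Stiff scalar test problem y' = -1000 y with h = 0.01. Plain fixed-point
# iteration needs |h/2 * df/dy| < 1, i.e. h < 0.002 here, so it would
# diverge; Newton handles the large step without difficulty.
lam, h = -1000.0, 0.01
y = 1.0
for _ in range(10):
    y = trapezoidal_step_newton(lambda v: lam * v, lambda v: lam, y, h)
```

For this linear problem each step reproduces the trapezoidal amplification factor (1 + h*lam/2)/(1 - h*lam/2) = -2/3 exactly, which is why the accepted step size, not solver stability, limits accuracy.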
Collaborative Teaching of an Integrated Methods Course
ERIC Educational Resources Information Center
Zhou, George; Kim, Jinyoung; Kerekes, Judit
2011-01-01
With an increasing diversity in American schools, teachers need to be able to collaborate in teaching. University courses are widely considered a stage on which to demonstrate or model ways of collaborating. To respond to this call, the three authors team-taught an integrated methods course at an urban public university in the city of New York.…
Bioluminescent bioreporter integrated circuit detection methods
Simpson, Michael L.; Paulus, Michael J.; Sayler, Gary S.; Applegate, Bruce M.; Ripp, Steven A.
2005-06-14
Disclosed are monolithic bioelectronic devices comprising a bioreporter and an OASIC. These bioluminescent bioreporter integrated circuit are useful in detecting substances such as pollutants, explosives, and heavy-metals residing in inhospitable areas such as groundwater, industrial process vessels, and battlefields. Also disclosed are methods and apparatus for detection of particular analytes, including ammonia and estrogen compounds.
Wavelet methods in multi-conjugate adaptive optics
NASA Astrophysics Data System (ADS)
Helin, T.; Yudytskiy, M.
2013-08-01
Next-generation ground-based telescopes rely heavily on adaptive optics to overcome the limitation of atmospheric turbulence. In future adaptive optics modalities such as multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on using locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on OCTOPUS, the official end-to-end simulation tool of the European Southern Observatory.
Adaptive computational methods for SSME internal flow analysis
NASA Technical Reports Server (NTRS)
Oden, J. T.
1986-01-01
Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving mesh methods) in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given and some pertinent conclusions of the study are listed.
Fidelity of the Integrated Force Method Solution
NASA Technical Reports Server (NTRS)
Hopkins, Dale; Halford, Gary; Coroneos, Rula; Patnaik, Surya
2002-01-01
The theory of strain compatibility of the solid mechanics discipline had remained incomplete since St. Venant's 'strain formulation' in 1876. We have addressed the compatibility condition both in the continuum and in discrete systems. This has led to the formulation of the Integrated Force Method (IFM). A dual Integrated Force Method (IFMD) with displacement as the primal variable has also been formulated. A modest finite element code (IFM/Analyzers) based on the IFM theory has been developed. For a set of standard test problems, the IFM results were compared with the stiffness method solutions and the MSC/Nastran code. For these problems IFM outperformed the existing methods. Superior IFM performance is attributed to the simultaneous enforcement of the equilibrium equations and the compatibility conditions. The MSC/Nastran organization expressed reluctance to accept the high-fidelity IFM solutions. This report discusses the solutions to the examples. No inaccuracy was detected in the IFM solutions. A stiffness method code can, with a small programming effort, be improved to reap the many IFM benefits when implemented with the IFMD elements. Dr. Halford conducted a peer review of the Integrated Force Method; the reviewers' responses are included.
Adaptive clustering and adaptive weighting methods to detect disease associated rare variants.
Sha, Qiuying; Wang, Shuaicheng; Zhang, Shuanglin
2013-03-01
Current statistical methods to test association between rare variants and phenotypes are essentially group-wise methods that collapse or aggregate all variants in a predefined group into a single variant. Compared with variant-by-variant methods, the group-wise methods have their advantages. However, two factors may affect the power of these methods. One is that some of the causal variants may be protective. When both risk and protective variants are present, collapsing or aggregating all variants loses power because the effects of risk and protective variants counteract each other. The other is that not all variants in the group are causal; rather, a large proportion are believed to be neutral. When a large proportion of variants are neutral, collapsing or aggregating all variants may not be an optimal solution. We propose two alternative methods, the adaptive clustering (AC) method and the adaptive weighting (AW) method, aiming to test rare variant association in the presence of neutral and/or protective variants. Both AC and AW are applicable to quantitative as well as qualitative traits. Results of extensive simulation studies show that AC and AW have similar power, and both have clear advantages, from power to computational efficiency, over existing group-wise methods and existing data-driven methods that allow for neutral and protective variants. We recommend the AW method because it is computationally more efficient than the AC method.
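The cancellation effect and the adaptive-weighting remedy can be shown on toy data. In this hypothetical sketch, a risk variant and a protective variant exactly cancel in a plain collapsing score, while weighting each variant by the sign of its marginal case-control frequency difference (a crude stand-in for the AW weights) recovers the signal.

```python
def burden_stat(genos, status):
    """Plain collapsing: difference in mean per-subject allele count
    between cases (status 1) and controls (status 0)."""
    case = [sum(g) for g, s in zip(genos, status) if s == 1]
    ctrl = [sum(g) for g, s in zip(genos, status) if s == 0]
    return sum(case) / len(case) - sum(ctrl) / len(ctrl)

def aw_stat(genos, status):
    """Sign-adaptive collapsing: weight each variant by the sign of its
    marginal case-control count difference before summing, so risk and
    protective variants reinforce instead of cancelling."""
    m = len(genos[0])
    signs = []
    for j in range(m):
        in_cases = sum(g[j] for g, s in zip(genos, status) if s == 1)
        in_ctrls = sum(g[j] for g, s in zip(genos, status) if s == 0)
        signs.append(1.0 if in_cases >= in_ctrls else -1.0)
    weighted = [[sj * gj for sj, gj in zip(signs, g)] for g in genos]
    return burden_stat(weighted, status)

# Variant 0 is a risk variant (carried only by cases); variant 1 is
# protective (carried only by controls). Plain collapsing cancels.
genos = [[1, 0], [1, 0], [0, 0], [0, 0],    # cases
         [0, 1], [0, 1], [0, 0], [0, 0]]    # controls
status = [1, 1, 1, 1, 0, 0, 0, 0]
```

The actual AW method estimates data-driven weights and assesses the statistic by permutation; the sign trick above only illustrates why direction-aware weighting restores power.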
Adaptive windowed range-constrained Otsu method using local information
NASA Astrophysics Data System (ADS)
Zheng, Jia; Zhang, Dinghua; Huang, Kuidong; Sun, Yuanxi; Tang, Shaojie
2016-01-01
An adaptive windowed range-constrained Otsu method using local information is proposed for improving the performance of image segmentation. First, the reason why traditional thresholding methods do not perform well in the segmentation of complicated images is analyzed, and the influences of global and local thresholding on image segmentation are compared. Second, we propose two methods that adaptively change the size of the local window according to local information, and analyze their characteristics. Specifically, the number of edge pixels in the local window of the binarized variance image is used to adaptively change the local window size. Finally, the superiority of the proposed method over other methods, such as the range-constrained Otsu method, the active contour model, the double Otsu method, Bradley's method, and distance-regularized level set evolution, is demonstrated. Experiments validate that the proposed method preserves more detail and achieves a much more satisfactory area overlap measure than the other conventional methods.
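The core criterion being windowed and range-constrained here is Otsu's between-class variance maximization. The following sketch implements only plain global Otsu on a toy bimodal image, as the baseline building block for the local, adaptive-window variants the abstract describes.

```python
def otsu_threshold(pixels, levels=256):
    """Global Otsu: return the gray level t maximizing the between-class
    variance w0*w1*(mu0 - mu1)^2 of the split {<= t} vs. {> t}."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]                 # pixels in the low class
        if w0 == 0:
            continue
        w1 = total - w0               # pixels in the high class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark cluster around 40-45, bright cluster around 200-205.
img = [40] * 50 + [45] * 50 + [200] * 50 + [205] * 50
thr = otsu_threshold(img)
```

The paper's method would run this criterion inside local windows whose size adapts to edge content, which is what rescues segmentation when illumination varies across the image.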
New developments in adaptive methods for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Oden, J. T.; Bass, Jon M.
1990-01-01
New developments in a posteriori error estimates, smart algorithms, and h- and h-p adaptive finite element methods are discussed in the context of two- and three-dimensional compressible and incompressible flow simulations. Applications to rotor-stator interaction, rotorcraft aerodynamics, shock and viscous boundary layer interaction and fluid-structure interaction problems are discussed.
Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.
ERIC Educational Resources Information Center
Butler, Ronald W.
The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
A Conditional Exposure Control Method for Multidimensional Adaptive Testing
ERIC Educational Resources Information Center
Finkelman, Matthew; Nering, Michael L.; Roussos, Louis A.
2009-01-01
In computerized adaptive testing (CAT), ensuring the security of test items is a crucial practical consideration. A common approach to reducing item theft is to define maximum item exposure rates, i.e., to limit the proportion of examinees to whom a given item can be administered. Numerous methods for controlling exposure rates have been proposed…
An adaptive multiresolution gradient-augmented level set method for advection problems
NASA Astrophysics Data System (ADS)
Schneider, Kai; Kolomenskiy, Dmitry; Nave, Jean-Christophe
2014-11-01
Advection problems are encountered in many applications, such as transport of passive scalars modeling pollution or mixing in chemical engineering. In some problems, the solution develops small-scale features localized in a part of the computational domain. If the location of these features changes in time, the efficiency of the numerical method can be significantly improved by adapting the partition dynamically to the solution. We present a space-time adaptive scheme for solving advection equations in two space dimensions. The third order accurate gradient-augmented level set method using a semi-Lagrangian formulation with backward time integration is coupled with a point value multiresolution analysis using Hermite interpolation. Thus locally refined dyadic spatial grids are introduced which are efficiently implemented with dynamic quad-tree data structures. For adaptive time integration, an embedded Runge-Kutta method is employed. The precision of the new fully adaptive method is analysed and speed up of CPU time and memory compression with respect to the uniform grid discretization are reported.
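The embedded Runge-Kutta idea used here for adaptive time integration can be sketched with the simplest pair. This hypothetical example uses Heun's method with its embedded Euler stage: the difference between the first- and second-order updates is a free local error estimate that drives the step size.

```python
def heun_euler_adaptive(f, t0, y0, t_end, tol=1e-6, h=0.1):
    """Integrate y' = f(t, y) from t0 to t_end with an embedded
    Heun/Euler pair and simple proportional step-size control."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_euler = y + h * k1                  # 1st-order solution
        y_heun = y + 0.5 * h * (k1 + k2)      # 2nd-order solution
        err = abs(y_heun - y_euler)           # free local error estimate
        if err <= tol or h < 1e-12:
            t, y = t + h, y_heun              # accept the step
        # grow/shrink h with a safety factor, clamped to [0.2x, 2x]
        h *= 0.9 * min(2.0, max(0.2, (tol / max(err, 1e-16)) ** 0.5))
    return y

# y' = y, y(0) = 1 integrated to t = 1; the exact answer is e.
y1 = heun_euler_adaptive(lambda t, y: y, 0.0, 1.0, 1.0)
```

The paper's scheme couples this temporal adaptivity with spatial multiresolution, so both the grid and the time step track the solution's small-scale features.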
Package for integrated optic circuit and method
Kravitz, S.H.; Hadley, G.R.; Warren, M.E.; Carson, R.F.; Armendariz, M.G.
1998-08-04
A structure and method are disclosed for packaging an integrated optic circuit. The package comprises a first wall having a plurality of microlenses formed therein to establish channels of optical communication with an integrated optic circuit within the package. A first registration pattern is provided on an inside surface of one of the walls of the package for alignment and attachment of the integrated optic circuit. The package in one embodiment may further comprise a fiber holder for aligning and attaching a plurality of optical fibers to the package and extending the channels of optical communication to the fibers outside the package. In another embodiment, a fiber holder may be used to hold the fibers and align the fibers to the package. The fiber holder may be detachably connected to the package. 6 figs.
A Method for Obtaining Integrable Couplings
NASA Astrophysics Data System (ADS)
Zhang, Yu-Sen; Chen, Wei; Liao, Bo; Gong, Xin-Bo
2006-06-01
By making use of the vector product in R3, a commuting operation is introduced so that R3 becomes a Lie algebra. The resulting loop algebra tilde R3 is presented, from which the well-known AKNS hierarchy is produced. By again applying the superposition of the commuting operations of the Lie algebra, a commuting operation in R6 is constructed so that R6 becomes a Lie algebra. Thanks to the corresponding loop algebra tilde R6 of the Lie algebra R6, the integrable coupling of the AKNS system is obtained. The method presented in this paper is rather simple and can be used to work out integrable coupling systems of other known integrable hierarchies of soliton equations.
Recursive integral method for transmission eigenvalues
NASA Astrophysics Data System (ADS)
Huang, Ruihao; Struthers, Allan A.; Sun, Jiguang; Zhang, Ruming
2016-12-01
Transmission eigenvalue problems arise from inverse scattering theory for inhomogeneous media. These non-selfadjoint problems are numerically challenging because of a complicated spectrum. In this paper, we propose a novel recursive contour integral method for matrix eigenvalue problems from finite element discretizations of transmission eigenvalue problems. The technique tests (using an approximate spectral projection) if a region contains eigenvalues. Regions that contain eigenvalues are subdivided and tested recursively until eigenvalues are isolated with a specified precision. The method is fully parallel and requires no a priori spectral information. Numerical examples show the method is effective and robust.
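For self-adjoint problems, the recursive test-and-subdivide strategy can be illustrated without contour integrals: a Sturm-sequence count plays the role of the approximate spectral projection. The sketch below is illustrative only, for a symmetric tridiagonal matrix (the transmission problem itself is non-selfadjoint and needs the paper's contour-integral test): an interval is kept and bisected only while it contains eigenvalues.

```python
def count_below(d, e, x):
    """Sturm-sequence count: number of eigenvalues of the symmetric
    tridiagonal matrix (diagonal d, off-diagonal e) that are < x,
    via the signs of the pivots of T - x*I."""
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q == 0.0:
            q = 1e-300          # nudge off an exact zero pivot
        if q < 0.0:
            count += 1
    return count

def isolate(d, e, lo, hi, tol=1e-10):
    """Recursively bisect [lo, hi): discard subintervals containing no
    eigenvalues, subdivide the rest until each cluster is isolated to
    width tol. Returns (midpoint, multiplicity) pairs."""
    n = count_below(d, e, hi) - count_below(d, e, lo)
    if n == 0:
        return []                               # region passes the test: empty
    if hi - lo <= tol:
        return [(0.5 * (lo + hi), n)]           # isolated
    mid = 0.5 * (lo + hi)
    return isolate(d, e, lo, mid, tol) + isolate(d, e, mid, hi, tol)

# 3x3 tridiagonal with diagonal [2, 2, 2] and off-diagonal [1, 1]:
# eigenvalues are 2 - sqrt(2), 2, and 2 + sqrt(2).
eigs = isolate([2.0, 2.0, 2.0], [1.0, 1.0], 0.0, 4.0)
```

As in the paper's method, no a priori spectral information is needed beyond the initial search region, and the subdivisions are independent of one another, which is what makes the approach fully parallel.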
Integrating Adaptability into Special Operations Forces Intermediate Level Education
2010-10-01
components of adaptability, as described in this report. In addition, we found that while some of the material covered by the ILE curriculum relates… [The report's appendices are: A – Advanced Materials; B – Interview Materials; C – Interview Data.]
Adaptive reconnection-based arbitrary Lagrangian Eulerian method
Bo, Wurigen; Shashkov, Mikhail
2015-07-21
We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.
Kampfner, Roberto R
2006-07-01
The structure of a system influences its adaptability. An important result of adaptability theory is that subsystem independence increases adaptability [Conrad, M., 1983. Adaptability. Plenum Press, New York]. Adaptability is essential in systems that face an uncertain environment, such as biological systems and organizations. Modern organizations are the product of human design, and so are their structure and the effect that it has on their adaptability. In this paper we explore the potential effects of computer-based information processing on the adaptability of organizations. The integration of computer-based processes into the dynamics of the functions they support, and the effect this has on subsystem independence, are especially relevant to our analysis.
An Adaptive INS-Aided PLL Tracking Method for GNSS Receivers in Harsh Environments.
Cong, Li; Li, Xin; Jin, Tian; Yue, Song; Xue, Rui
2016-01-23
As the weak link in global navigation satellite system (GNSS) signal processing, the phase-locked loop (PLL) easily suffers frequent cycle slips and loss of lock as a result of higher vehicle dynamics and lower signal-to-noise ratios. With inertial navigation system (INS) aid, a PLL's tracking performance can be improved. However, in harsh environments with high dynamics and signal attenuation, the traditional INS-aided PLL with fixed loop parameters offers limited tracking adaptability. In this paper, an adaptive INS-aided PLL capable of adjusting its noise bandwidth and coherent integration time is proposed. Through theoretical analysis, the relation between the INS-aided PLL phase tracking error and the carrier-to-noise density ratio (C/N₀), vehicle dynamics, aiding information update time, noise bandwidth, and coherent integration time is established. These relations are used to choose the optimal integration time and bandwidth for a given application under a minimum-tracking-error criterion. Software and hardware simulation results verify the correctness of the theoretical analysis and demonstrate that the adaptive tracking method can effectively improve the PLL tracking ability and the integrated GNSS/INS navigation performance. In harsh environments, tracking sensitivity is increased by 3 to 5 dB, velocity errors are decreased by 36% to 50%, and position errors are decreased by 6% to 24% compared with other INS-aided PLL methods.
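The bandwidth-versus-dynamics trade that the adaptive loop exploits can be sketched with the classic textbook error budget (thermal jitter plus one third of the dynamic stress error) rather than the paper's exact formulae; the third-order loop constant 0.7845, the jerk values, and the bandwidth grid below are illustrative assumptions.

```python
import math

def pll_phase_error(bn, t_coh, cn0_dbhz, jerk):
    """1-sigma phase error (deg) from a rule-of-thumb budget: thermal
    noise jitter plus 1/3 of the third-order-loop dynamic stress error.
    Illustrative only, not the paper's exact model."""
    cn0 = 10.0 ** (cn0_dbhz / 10.0)                        # ratio-Hz
    thermal = math.sqrt((bn / cn0) * (1.0 + 1.0 / (2.0 * t_coh * cn0)))
    w0 = bn / 0.7845                                       # 3rd-order natural freq
    dyn = jerk / w0 ** 3                                   # jerk in rad/s^3
    return math.degrees(thermal + dyn / 3.0)

def best_bandwidth(t_coh, cn0_dbhz, jerk, grid):
    """Pick the grid bandwidth minimizing the error budget, the same
    trade an adaptive loop makes on-line as dynamics change."""
    return min(grid, key=lambda bn: pll_phase_error(bn, t_coh, cn0_dbhz, jerk))

grid = [2.0 * k for k in range(1, 26)]                     # 2..50 Hz
bn_low_dyn = best_bandwidth(0.02, 30.0, 10.0, grid)        # benign dynamics
bn_high_dyn = best_bandwidth(0.02, 30.0, 5000.0, grid)     # harsh dynamics
```

Higher dynamics push the optimum toward wider bandwidths (accepting more thermal noise to cut dynamic stress), while weaker signals push the other way; adjusting the coherent integration time plays the analogous role on the thermal term.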
Method and system for environmentally adaptive fault tolerant computing
NASA Technical Reports Server (NTRS)
Copenhaver, Jason L. (Inventor); Ramos, Jeremy (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)
2010-01-01
A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
Workshop on adaptive grid methods for fusion plasmas
Wiley, J.C.
1995-07-01
The author describes a general `hp` finite element method with adaptive grids. The code was based on the work of Oden et al. The term `hp` refers to the method of spatial refinement (h) in conjunction with the order of the polynomials used in the finite element discretization (p). This finite element code seems to handle well the different mesh grid sizes occurring between abutted grids with different resolutions.
ICASE/LaRC Workshop on Adaptive Grid Methods
NASA Technical Reports Server (NTRS)
South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)
1995-01-01
Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.
An h-adaptive local discontinuous Galerkin method for the Navier-Stokes-Korteweg equations
NASA Astrophysics Data System (ADS)
Tian, Lulu; Xu, Yan; Kuerten, J. G. M.; van der Vegt, J. J. W.
2016-08-01
In this article, we develop a mesh adaptation algorithm for a local discontinuous Galerkin (LDG) discretization of the (non)-isothermal Navier-Stokes-Korteweg (NSK) equations modeling liquid-vapor flows with phase change. This work is a continuation of our previous research, where we proposed LDG discretizations for the (non)-isothermal NSK equations with a time-implicit Runge-Kutta method. To save computing time and to capture the thin interfaces more accurately, we extend the LDG discretization with a mesh adaptation method. Given the current adapted mesh, a criterion for selecting candidate elements for refinement and coarsening is adopted based on the locally largest value of the density gradient. A strategy to refine and coarsen the candidate elements is then provided. We emphasize that the adaptive LDG discretization is relatively simple and does not require additional stabilization. The use of a locally refined mesh in combination with an implicit Runge-Kutta time method is, however, non-trivial, but results in an efficient time integration method for the NSK equations. Computations, including cases with solid wall boundaries, are provided to demonstrate the accuracy, efficiency and capabilities of the adaptive LDG discretizations.
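The element-selection criterion described above (refine where the density gradient is locally largest, coarsen where it is smallest) can be sketched as follows; the fixed refine/coarsen fractions and the flat per-element gradient array are illustrative assumptions, not the paper's LDG data structures:

```python
import numpy as np

def flag_elements(rho_grad, refine_frac=0.1, coarsen_frac=0.1):
    """Flag candidate elements for refinement/coarsening from the
    magnitude of the density gradient on each element (sketch)."""
    n = len(rho_grad)
    order = np.argsort(rho_grad)
    refine = set(order[-max(1, int(refine_frac * n)):])   # largest gradients
    coarsen = set(order[:max(1, int(coarsen_frac * n))])  # smallest gradients
    return refine, coarsen

# Ten elements with a sharp interface (large gradients) near elements 4-5
grads = np.array([0.1, 0.1, 0.2, 1.5, 8.0, 9.0, 1.2, 0.2, 0.1, 0.1])
refine, coarsen = flag_elements(grads)
```

In a real adaptive cycle, flags like these would drive local h-refinement and coarsening of the marked elements before the next implicit Runge-Kutta step.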
Free energy calculations: an efficient adaptive biasing potential method.
Dickson, Bradley M; Legoll, Frédéric; Lelièvre, Tony; Stoltz, Gabriel; Fleurat-Lessard, Paul
2010-05-06
We develop an efficient sampling and free energy calculation technique within the adaptive biasing potential (ABP) framework. By mollifying the density of states we obtain an approximate free energy and an adaptive bias potential that is computed directly from the population along the coordinates of the free energy. Because of the mollifier, the bias potential is "nonlocal", and its gradient admits a simple analytic expression. A single observation of the reaction coordinate can thus be used to update the approximate free energy at every point within a neighborhood of the observation. This greatly reduces the equilibration time of the adaptive bias potential. This approximation introduces two parameters: strength of mollification and the zero of energy of the bias potential. While we observe that the approximate free energy is a very good estimate of the actual free energy for a large range of mollification strength, we demonstrate that the errors associated with the mollification may be removed via deconvolution. The zero of energy of the bias potential, which is easy to choose, influences the speed of convergence but not the limiting accuracy. This method is simple to apply to free energy or mean force computation in multiple dimensions and does not involve second derivatives of the reaction coordinates, matrix manipulations nor on-the-fly adaptation of parameters. For the alanine dipeptide test case, the new method is found to gain as much as a factor of 10 in efficiency as compared to two basic implementations of the adaptive biasing force methods, and it is shown to be as efficient as well-tempered metadynamics with the postprocess deconvolution giving a clear advantage to the mollified density of states method.
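The nonlocal update can be sketched in one dimension: because of the mollifier, a single observation of the reaction coordinate raises the bias over a whole neighborhood through a smooth kernel with an analytic gradient. The Gaussian kernel and its height and width below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def update_bias(grid, bias, xi_obs, height=0.1, sigma=0.2):
    """Deposit a smooth (Gaussian) kernel centered at the observed reaction
    coordinate; every grid point near the observation is updated at once."""
    bias += height * np.exp(-(grid - xi_obs) ** 2 / (2.0 * sigma ** 2))
    return bias

grid = np.linspace(-1.0, 1.0, 201)
bias = np.zeros_like(grid)
for xi in (0.0, 0.05, -0.02):          # three observations near the origin
    bias = update_bias(grid, bias, xi)
```

Each observation updates an entire neighborhood of the coordinate, which is what shortens the equilibration time relative to point-wise histogram updates.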
Data rate management and real time operation: recursive adaptive frame integration of limited data
NASA Astrophysics Data System (ADS)
Rafailov, Michael K.
2006-08-01
Recursive Limited Frame Integration was proposed as a way to improve frame-integration performance and to mitigate the high data rates needed to support conventional frame integration. The technique uses two thresholds, one tuned for optimum probability of detection and the other tuned to manage the required false-alarm rate, and places the integration process between those thresholds. This configuration allows a non-linear integration process that, along with Signal-to-Noise Ratio (SNR) gain, gives system designers more capability where cost, weight, or power considerations limit the system's data rate, processing, or memory. However, Recursive Limited Frame Integration may have performance issues when the single-frame SNR is very low. Recursive Adaptive Limited Frame Integration was proposed to improve performance in that regime: it combines the benefits of non-linear recursive limited frame integration and adaptive thresholds with a form of conventional frame integration, and adding a third threshold may help in managing real-time operation. In this paper, recursive frame integration is presented in the form of multiple parallel recursive integrations. Such an approach can help not only with data-rate management but also with mitigating the low single-frame SNR issue and with real-time operation under frame integration.
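The two-threshold arrangement can be sketched as follows; the threshold values and the simple accumulate-and-compare loop are illustrative, not the paper's processing chain:

```python
def integrate_frames(frames, t_low, t_high):
    """Sketch of limited (two-threshold) frame integration: values below
    t_low are excluded from the sum, a value above t_high is an immediate
    detection, and in-between values are integrated until the accumulated
    sum crosses t_high."""
    acc = 0.0
    for x in frames:
        if x >= t_high:            # immediate single-frame detection
            return True, acc + x
        if x >= t_low:             # only plausible signal is integrated
            acc += x
            if acc >= t_high:      # integrated detection
                return True, acc
    return False, acc

# Integrated detection across three frames that are each sub-threshold alone
detected, total = integrate_frames([0.5, 0.6, 0.7], t_low=0.4, t_high=1.5)
```

The lower threshold limits the data rate entering the integrator; the upper threshold controls the false-alarm rate.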
INCORPORATING CATASTROPHES INTO INTEGRATED ASSESSMENT: SCIENCE, IMPACTS, AND ADAPTATION
Incorporating potential catastrophic consequences into integrated assessment models of climate change has been a top priority of policymakers and modelers alike. We review the current state of scientific understanding regarding three frequently mentioned geophysical catastrophes,...
An Adaptive Cross-Architecture Combination Method for Graph Traversal
You, Yang; Song, Shuaiwen; Kerbyson, Darren J.
2014-06-18
Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
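A minimal sketch of the combination idea follows; the frontier-edge heuristic used here to pick the switching point is an illustrative stand-in for the paper's regression-based predictor:

```python
def hybrid_bfs(adj, src, alpha=4.0):
    """Hybrid top-down/bottom-up BFS sketch: expand the frontier top-down
    while it is small, and switch to scanning unvisited vertices bottom-up
    once the frontier's outgoing edge count grows large."""
    n = len(adj)
    dist = [-1] * n
    dist[src] = 0
    frontier, level = [src], 0
    while frontier:
        frontier_edges = sum(len(adj[v]) for v in frontier)
        if frontier_edges > n / alpha:            # bottom-up: scan unvisited
            nxt = [u for u in range(n) if dist[u] == -1
                   and any(dist[w] == level for w in adj[u])]
        else:                                     # top-down: expand frontier
            nxt, seen = [], set()
            for v in frontier:
                for w in adj[v]:
                    if dist[w] == -1 and w not in seen:
                        seen.add(w)
                        nxt.append(w)
        for u in nxt:
            dist[u] = level + 1
        frontier, level = nxt, level + 1
    return dist

# 4-cycle: 0-1, 0-2, 1-3, 2-3
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
dist = hybrid_bfs(adj, 0)
```

The adaptive method in the paper predicts this switching point at runtime instead of relying on a fixed heuristic or exhaustive search.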
Multistep Methods for Integrating the Solar System
1988-07-01
Technical Report 1055: Multistep Methods for Integrating the Solar System, by Panayotis A. Skordos, MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139. The report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology, supported by the Advanced Research Projects Agency.
Adaptive Kaczmarz Method for Image Reconstruction in Electrical Impedance Tomography
Li, Taoran; Kao, Tzu-Jen; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.
2013-01-01
We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term JTJ which could be expensive in terms of computation cost and memory in large scale problems, we propose solving the inverse problem by applying the optimal current patterns for distinguishing the actual conductivity from the conductivity estimate between each iteration of the block Kaczmarz algorithm. With a novel subset scheme, the memory-efficient reconstruction algorithm which appropriately combines the optimal current pattern generation with the Kaczmarz method can produce more accurate and stable solutions adaptively as compared to traditional Kaczmarz and Gauss-Newton type methods. Choices of initial current pattern estimates are discussed in the paper. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results. PMID:23718952
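The underlying row-action iteration can be sketched as follows; this is the classic Kaczmarz sweep on a toy linear system, not the EIT inverse solver with optimal current patterns:

```python
import numpy as np

def kaczmarz(A, b, sweeps=50, x0=None):
    """Classic Kaczmarz iteration: project the current iterate onto the
    hyperplane of each row equation in turn."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            x = x + (b[i] - a @ x) / (a @ a) * a
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])
x = kaczmarz(A, b)          # converges to the solution [1, 1]
```

The adaptive variant in the paper interleaves this kind of block sweep with regeneration of optimal current patterns between iterations, avoiding the expensive JTJ inversion of Gauss-Newton type methods.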
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron
1998-12-08
Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
Designing Adaptive Low-Dissipative High Order Schemes for Long-Time Integrations. Chapter 1
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Sjoegreen, B.; Mansour, Nagi N. (Technical Monitor)
2001-01-01
A general framework for the design of adaptive low-dissipative high order schemes is presented. It encompasses a rather complete treatment of the numerical approach based on four integrated design criteria: (1) For stability considerations, condition the governing equations before the application of the appropriate numerical scheme whenever it is possible; (2) For consistency, compatible schemes that possess stability properties, including physical and numerical boundary condition treatments, similar to those of the discrete analogue of the continuum are preferred; (3) For the minimization of numerical dissipation contamination, efficient and adaptive numerical dissipation control to further improve nonlinear stability and accuracy should be used; and (4) For practical considerations, the numerical approach should be efficient and applicable to general geometries, and an efficient and reliable dynamic grid adaptation should be used if necessary. These design criteria are, in general, very useful to a wide spectrum of flow simulations. However, the demand on the overall numerical approach for nonlinear stability and accuracy is much more stringent for long-time integration of complex multiscale viscous shock/shear/turbulence/acoustics interactions and numerical combustion. Robust classical numerical methods for less complex flow physics are not suitable or practical for such applications. The present approach is designed expressly to address such flow problems, especially unsteady flows. The minimization of employing very fine grids to overcome the production of spurious numerical solutions and/or instability due to under-resolved grids is also sought. The incremental studies to illustrate the performance of the approach are summarized. Extensive testing and full implementation of the approach is forthcoming. The results shown so far are very encouraging.
Adaptive Set-Based Methods for Association Testing.
Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo
2016-02-01
With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test.
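The ARTP construction can be sketched with simulated p-values; the truncation points, permutation count, and uniform null draws below are illustrative assumptions (a real analysis would permute phenotype labels):

```python
import numpy as np

rng = np.random.default_rng(0)

def rtp(pvals, k):
    """Rank truncated product statistic: sum of logs of the k smallest p-values."""
    return float(np.sum(np.log(np.sort(pvals)[:k])))

def artp_pvalue(pvals, ks=(1, 2, 5), n_perm=2000):
    """ARTP sketch: each truncation point k is calibrated against null draws,
    the statistic adapts by taking the best per-k p-value, and that minimum
    is itself calibrated on the same null draws."""
    m = len(pvals)
    null = rng.uniform(size=(n_perm, m))            # p-values under the null
    obs_k = np.array([rtp(pvals, k) for k in ks])
    null_k = np.array([[rtp(row, k) for k in ks] for row in null])
    # per-k p-values for the observed and null statistics
    p_obs = np.array([(null_k[:, j] <= obs_k[j]).mean() for j in range(len(ks))])
    p_null = np.array([[(null_k[:, j] <= null_k[i, j]).mean()
                        for j in range(len(ks))] for i in range(n_perm)])
    # adaptive step: minimum over k, calibrated against the null minima
    return float((p_null.min(axis=1) <= p_obs.min()).mean())

p_global = artp_pvalue([1e-4, 0.03, 0.2, 0.7, 0.9])
```

Calibrating the adaptive minimum on the same permutations is what keeps the test valid despite searching over truncation points.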
A Comparative Study of Acousto-Optic Time-Integrating Correlators for Adaptive Jamming Cancellation
1997-10-01
This final report presents a comparative study of the space-integrating and time-integrating configurations of an acousto-optic correlator. The goal was to systematically evaluate all existing acousto-optic correlator architectures and to determine which would be most suitable for adaptive jamming cancellation.
ERIC Educational Resources Information Center
Rule, Audrey C.; Barrera, Manuel T., III
2008-01-01
Integration of subject areas with technology and thinking skills is a way to help teachers cope with today's overloaded curriculum and to help students see the connectedness of different curriculum areas. This study compares three authentic approaches to teaching a science unit on bird adaptations for habitat that integrate thinking skills and…
ERIC Educational Resources Information Center
Yu, Baohua; Downing, Kevin
2012-01-01
This study examined the influence of integrative motivation, instrumental motivation and second language (L2) proficiency on socio-cultural/academic adaptation in a sample of two groups of international students studying Chinese in China. Results revealed that the non-Asian student group reported higher levels of integrative motivation,…
Advanced numerical methods in mesh generation and mesh adaptation
Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A
2010-01-01
Numerical solution of partial differential equations requires appropriate meshes, efficient solvers, and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task, made more difficult when the mesh has to be adapted to a problem solution. This article focuses on a synergistic approach to mesh generation and mesh adaptation, in which the best properties of various mesh generation methods are combined to efficiently build simplicial meshes. First, the advancing front technique (AFT) is combined with incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an inaccessible CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology and significantly improves the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize the computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is the construction of a tensor metric from hierarchical edge
Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes
NASA Technical Reports Server (NTRS)
Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak
2004-01-01
High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation is demonstrated by the reduction of the number of elements used and CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel
Adaptive-Anisotropic Wavelet Collocation Method on general curvilinear coordinate systems
NASA Astrophysics Data System (ADS)
Brown-Dymkoski, Eric; Vasilyev, Oleg V.
2017-03-01
A new general framework for an Adaptive-Anisotropic Wavelet Collocation Method (A-AWCM) for the solution of partial differential equations is developed. This proposed framework addresses two major shortcomings of existing wavelet-based adaptive numerical methodologies, namely the reliance on a rectangular domain and the "curse of anisotropy", i.e. drastic over-resolution of sheet- and filament-like features arising from the inability of the wavelet refinement mechanism to distinguish highly correlated directional information in the solution. The A-AWCM addresses both of these challenges by incorporating coordinate transforms into the Adaptive Wavelet Collocation Method for the solution of PDEs. The resulting integrated framework leverages the advantages of both the curvilinear anisotropic meshes and wavelet-based adaptive refinement in a complementary fashion, resulting in greatly reduced cost of resolution for anisotropic features. The proposed Adaptive-Anisotropic Wavelet Collocation Method retains the a priori error control of the solution and fully automated mesh refinement, while offering new abilities through the flexible mesh geometry, including body-fitting. The new A-AWCM is demonstrated for a variety of cases, including parabolic diffusion, acoustic scattering, and unsteady external flow.
Mixed Methods in Intervention Research: Theory to Adaptation
ERIC Educational Resources Information Center
Nastasi, Bonnie K.; Hitchcock, John; Sarkar, Sreeroopa; Burkholder, Gary; Varjas, Kristen; Jayasena, Asoka
2007-01-01
The purpose of this article is to demonstrate the application of mixed methods research designs to multiyear programmatic research and development projects whose goals include integration of cultural specificity when generating or translating evidence-based practices. The authors propose a set of five mixed methods designs related to different…
Methods for prismatic/tetrahedral grid generation and adaptation
NASA Technical Reports Server (NTRS)
Kallinderis, Y.
1995-01-01
The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.
Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.
2008-01-01
This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.
Developing new online calibration methods for multidimensional computerized adaptive testing.
Chen, Ping; Wang, Chun; Xin, Tao; Chang, Hua-Hua
2017-02-01
Multidimensional computerized adaptive testing (MCAT) has received increasing attention over the past few years in educational measurement. Like all other formats of CAT, item replenishment is an essential part of MCAT for its item bank maintenance and management, which governs retiring overexposed or obsolete items over time and replacing them with new ones. Moreover, calibration precision of the new items will directly affect the estimation accuracy of examinees' ability vectors. In unidimensional CAT (UCAT) and cognitive diagnostic CAT, online calibration techniques have been developed to effectively calibrate new items. However, there has been very little discussion of online calibration in MCAT in the literature. Thus, this paper proposes new online calibration methods for MCAT based upon some popular methods used in UCAT. Three representative methods, Method A, the 'one EM cycle' method and the 'multiple EM cycles' method, are generalized to MCAT. Three simulation studies were conducted to compare the three new methods by manipulating three factors (test length, item bank design, and level of correlation between coordinate dimensions). The results showed that all the new methods were able to recover the item parameters accurately, and the adaptive online calibration designs showed some improvements compared to the random design under most conditions.
Müller, Achim; Merca, Alice; Al-Karawi, Ahmed Jasim M; Garai, Somenath; Bögge, Hartmut; Hou, Guangfeng; Wu, Lixin; Haupt, Erhard T K; Rehder, Dieter; Haso, Fadi; Liu, Tianbo
2012-12-14
Unique properties of the two giant wheel-shaped molybdenum-oxides of the type {Mo(154)}≡[{Mo(2)}{Mo(8)}{Mo(1)}](14) (1) and {Mo(176)}≡[{Mo(2)}{Mo(8)}{Mo(1)}](16) (2) that have the same building blocks either 14 or 16 times, respectively, are considered and show a "chemical adaptability" as a new phenomenon regarding the integration of a large number of appropriate cations and anions, for example, in form of the large "salt-like" {M(SO(4))}(16) rings (M = K(+), NH(4)(+)), while the two resulting {Mo(146)(K(SO(4)))(16)} (3) and {Mo(146)(NH(4)(SO(4)))(16)} (4) type hybrid compounds have the same shape as the parent ring structures. The chemical adaptability, which also allows the integration of anions and cations even at the same positions in the {Mo(4)O(6)}-type units of 1 and 2, is caused by easy changes in constitution by reorganisation and simultaneous release of (some) building blocks (one example: two opposite orientations of the same functional groups, that is, of H(2)O{Mo=O} (I) and O={Mo(H(2)O)} (II) are possible). Whereas Cu(2+) in [(H(4)Cu(II)(5))Mo(V)(28)Mo(VI)(114)O(432)(H(2)O)(58)](26-) (5 a) is simply coordinated to two parent O(2-) ions of {Mo(4)O(6)} and to two fragments of type II, the SO(4)(2-) integration in 3 and 4 occurs through the substitution of two oxo ligands of {Mo(4)O(6)} as well as two H(2)O ligands of fragment I. Complexes 3 and now 4 were characterised by different physical methods, for example, solutions of 4 in DMSO with sophisticated NMR spectroscopy (EXSY, DOSY and HSQC). The NH(4)(+) ions integrated in the cluster anion of 4 "communicate" with those in solution in the sense that the related H(+) ion exchange is in equilibrium. The important message: the reported "chemical adaptability" has its formal counterpart in solutions of "molybdates", which can form unique dynamic libraries containing constituents/building blocks that may form and break reversibly and can lead to the isolation of a variety of giant clusters with
Parallel implementation of an adaptive and parameter-free N-body integrator
NASA Astrophysics Data System (ADS)
Pruett, C. David; Ingham, William H.; Herman, Ralph D.
2011-05-01
Previously, Pruett et al. (2003) [3] described an N-body integrator of arbitrarily high order M with an asymptotic operation count of O(MN). The algorithm's structure lends itself readily to data parallelization, which we document and demonstrate here in the integration of point-mass systems subject to Newtonian gravitation. High order is shown to benefit parallel efficiency. The resulting N-body integrator is robust, parameter-free, highly accurate, and adaptive in both time-step and order. Moreover, it exhibits linear speedup on distributed parallel processors, provided that each processor is assigned at least a handful of bodies. Program summary: Program title: PNB.f90. Catalogue identifier: AEIK_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIK_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 3052. No. of bytes in distributed program, including test data, etc.: 68 600. Distribution format: tar.gz. Programming language: Fortran 90 and OpenMPI. Computer: all shared- or distributed-memory parallel processors. Operating system: Unix/Linux. Has the code been vectorized or parallelized?: The code has been parallelized but has not been explicitly vectorized. RAM: dependent upon N. Classification: 4.3, 4.12, 6.5. Nature of problem: high-accuracy numerical evaluation of trajectories of N point masses, each subject to Newtonian gravitation. Solution method: parallel and adaptive extrapolation in time via power series of arbitrary degree. Running time: 5.1 s for the demo program supplied with the package.
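The "power series of arbitrary degree" solution method can be illustrated on a toy equation. Below is a single adaptive-order Taylor step for the oscillator x'' = -x; the tolerance, term cap, and the test equation itself are illustrative choices, not the package's N-body algorithm:

```python
import math

def taylor_step(x, v, h, tol=1e-16, kmax=40):
    """One adaptive-order Taylor (power-series) step for x'' = -x.
    Derivatives of x cycle with period 4: x, v, -x, -v. Terms are added
    until they fall below tol, so the series order adapts to h."""
    cycle = [x, v, -x, -v]
    sx = sv = 0.0
    hk = 1.0                               # h^k / k!
    for k in range(kmax):
        sx += cycle[k % 4] * hk            # x-series term
        sv += cycle[(k + 1) % 4] * hk      # v = x', shifted derivative cycle
        hk *= h / (k + 1)
        if abs(hk) * max(abs(x), abs(v), 1.0) < tol:
            break
    return sx, sv

x1, v1 = taylor_step(1.0, 0.0, 0.1)        # exact: cos(0.1), -sin(0.1)
```

Monitoring the size of the last retained term gives both the adaptive order and a natural basis for adaptive time-step control, with no user-tuned parameters beyond the tolerance.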
A simplified self-adaptive grid method, SAGE
NASA Technical Reports Server (NTRS)
Davies, C.; Venkatapathy, E.
1989-01-01
The formulation of the Self-Adaptive Grid Evolution (SAGE) code, based on the work of Nakahashi and Deiwert, is described in the first section of this document. The second section is presented in the form of a user guide which explains the input and execution of the code, and provides many examples. Application of the SAGE code, by Ames Research Center and by others, in the solution of various flow problems has been an indication of the code's general utility and success. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for single, zonal, and multiple grids. Modifications to the methodology and the simplified input options make this current version a flexible and user-friendly code.
Optimal and adaptive methods of processing hydroacoustic signals (review)
NASA Astrophysics Data System (ADS)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed; it estimates the background using median filtering or the method of bilateral spatial contrast.
Integrated Force Method for Indeterminate Structures
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Halford, Gary R.; Patnaik, Surya N.
2008-01-01
Two methods of solving indeterminate structural-mechanics problems have been developed as products of research on the theory of strain compatibility. In these methods, stresses are considered to be the primary unknowns (in contrast to strains and displacements being considered as the primary unknowns in some prior methods). One of these methods, denoted the integrated force method (IFM), makes it possible to compute stresses, strains, and displacements with high fidelity by use of modest finite-element models that entail relatively small amounts of computation. The other method, denoted the completed Beltrami-Michell formulation (CBMF), enables direct determination of stresses in an elastic continuum with general boundary conditions, without the need to first calculate displacements as in traditional methods. The equilibrium equation, the compatibility condition, and the material law are the three fundamental concepts of the theory of structures. For almost 150 years, it has been commonly supposed that the theory is complete. However, until now, the understanding of the compatibility condition remained incomplete, and the compatibility condition was confused with the continuity condition. Furthermore, the compatibility condition as applied to structures in its previous incomplete form was inconsistent with the strain formulation in elasticity.
NASA Astrophysics Data System (ADS)
Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen
2016-11-01
To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method in this paper. First, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). Then, we develop our proposed iterative solutions for AO image restoration, addressing the joint deconvolution problem. Image restoration experiments are performed to verify the restoration effect of our proposed algorithm. The experimental results show that, compared with the RL-IBD algorithm and the Wiener-IBD algorithm, GMG measures (for a real AO image) from our algorithm are increased by 36.92% and 27.44%, respectively, the computation times are decreased by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.
Methods of Genomic Competency Integration in Practice
Jenkins, Jean; Calzone, Kathleen A.; Caskey, Sarah; Culp, Stacey; Weiner, Marsha; Badzek, Laurie
2015-01-01
Purpose Genomics is increasingly relevant to health care, necessitating support for nurses to incorporate genomic competencies into practice. The primary aim of this project was to develop, implement, and evaluate a year-long genomic education intervention that trained, supported, and supervised institutional administrator and educator champion dyads to increase nursing capacity to integrate genomics through assessments of program satisfaction and institutional achieved outcomes. Design Longitudinal study of 23 Magnet Recognition Program® Hospitals (21 intervention, 2 controls) participating in a 1-year new competency integration effort aimed at increasing genomic nursing competency and overcoming barriers to genomics integration in practice. Methods Champion dyads underwent genomic training consisting of one in-person kick-off training meeting followed by monthly education webinars. Champion dyads designed institution-specific action plans detailing objectives, methods or strategies used to engage and educate nursing staff, timeline for implementation, and outcomes achieved. Action plans focused on a minimum of seven genomic priority areas: champion dyad personal development; practice assessment; policy content assessment; staff knowledge needs assessment; staff development; plans for integration; and anticipated obstacles and challenges. Action plans were updated quarterly, outlining progress made as well as inclusion of new methods or strategies. Progress was validated through virtual site visits with the champion dyads and chief nursing officers. Descriptive data were collected on all strategies or methods utilized, and timeline for achievement. Descriptive data were analyzed using content analysis. Findings The complexity of the competency content and the uniqueness of social systems and infrastructure resulted in a significant variation of champion dyad interventions. Conclusions Nursing champions can facilitate change in genomic nursing capacity through
Grid adaptation and remapping for arbitrary lagrangian eulerian (ALE) methods
Lapenta, G. M.
2002-01-01
Methods to include automatic grid adaptation tools within the Arbitrary Lagrangian Eulerian (ALE) method are described. Two main developments will be described. First, a new grid adaptation approach is described, based on an automatic and accurate estimate of the local truncation error. Second, a new method to remap the information between two grids is presented, based on the MPDATA approach. The Arbitrary Lagrangian Eulerian (ALE) method solves hyperbolic equations by splitting the operators into two phases. First, in the Lagrangian phase, the equations under consideration are written in a Lagrangian frame and are discretized. In this phase, the grid moves with the solution, the velocity of each node being the local fluid velocity. Second, in the Eulerian phase, a new grid is generated and the information is transferred to the new grid. The advantage of this second step is the possibility of avoiding the mesh distortion and tangling typical of pure Lagrangian methods. The second phase of the ALE method is the primary topic of the present communication. In the Eulerian phase, two tasks need to be completed. First, a new grid needs to be created (we will refer to this task as rezoning). Second, the information is transferred from the grid available at the end of the Lagrangian phase to the new grid (we will refer to this task as remapping). New techniques are presented for the two tasks of the Eulerian phase: rezoning and remapping.
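The remapping task can be illustrated with a first-order conservative remap in one dimension, which transfers cell averages between two grids by exact overlap integration. This is a simplified stand-in, not the MPDATA-based remap the paper develops; its key property, exact conservation of the integrated quantity, carries over.

```python
def remap_1d(old_edges, old_avgs, new_edges):
    """Conservatively remap piecewise-constant cell averages from one 1D
    grid to another: each new cell receives the exact integral of the old
    piecewise-constant field over its extent, divided by its width."""
    new_avgs = []
    for k in range(len(new_edges) - 1):
        lo, hi = new_edges[k], new_edges[k + 1]
        total = 0.0
        for i, q in enumerate(old_avgs):
            a, b = old_edges[i], old_edges[i + 1]
            total += q * max(0.0, min(hi, b) - max(lo, a))  # overlap length
        new_avgs.append(total / (hi - lo))
    return new_avgs
```

Because every unit of the old field lands in exactly one new cell, the total integral is preserved regardless of how the new grid is laid out.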
Investigation of the Multiple Method Adaptive Control (MMAC) method for flight control systems
NASA Technical Reports Server (NTRS)
Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.
1979-01-01
The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.
A two-dimensional adaptive mesh generation method
NASA Astrophysics Data System (ADS)
Altas, Irfan; Stephenson, John W.
1991-05-01
The present two-dimensional adaptive mesh-generation method allows selective modification of a small portion of the mesh without affecting large areas of adjacent mesh points, and is applicable with or without boundary-fitted coordinate-generation procedures. Discretization of differential equations, both with classical difference formulas designed for uniform meshes and with the present difference formulas, is illustrated by applying the method to the Hiemenz flow, for which the exact solution of the Navier-Stokes equations is known, as well as to a two-dimensional viscous internal flow problem.
An adaptive penalty method for DIRECT algorithm in engineering optimization
NASA Astrophysics Data System (ADS)
Vilaça, Rita; Rocha, Ana Maria A. C.
2012-09-01
The most common approach for solving constrained optimization problems is based on penalty functions, where the constrained problem is transformed into a sequence of unconstrained problems by penalizing the objective function when constraints are violated. In this paper, we analyze the implementation of an adaptive penalty method, within the DIRECT algorithm, in which the constraints that are more difficult to satisfy will have relatively higher penalty values. In order to assess the applicability and performance of the proposed method, some benchmark problems from engineering design optimization are considered.
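A minimal sketch of the adaptive penalty idea, outside of DIRECT and with an invented weight-update rule (the paper's actual update scheme may differ): constraints that remain violated accumulate relatively higher penalty weights.

```python
def penalized(f, constraints, weights):
    """Penalized objective for inequality constraints g_i(x) <= 0:
    each violation max(0, g_i(x)) is charged at its own weight."""
    return lambda x: f(x) + sum(w * max(0.0, g(x))
                                for w, g in zip(weights, constraints))

def update_weights(weights, violations, growth=2.0):
    """Illustrative adaptive rule: constraints still violated after the
    current sweep get their penalty weight increased."""
    return [w * growth if v > 0.0 else w for w, v in zip(weights, violations)]
```

In a full implementation the penalized objective would be handed to DIRECT's sampling loop, with the weights updated between iterations from the observed violations.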
Solution methods for very highly integrated circuits.
Nong, Ryan; Thornquist, Heidi K.; Chen, Yao; Mei, Ting; Santarelli, Keith R.; Tuminaro, Raymond Stephen
2010-12-01
While advances in manufacturing enable the fabrication of integrated circuits containing tens-to-hundreds of millions of devices, the time-sensitive modeling and simulation necessary to design these circuits poses a significant computational challenge. This is especially true for mixed-signal integrated circuits where detailed performance analyses are necessary for the individual analog/digital circuit components as well as the full system. When the integrated circuit has millions of devices, performing a full system simulation is practically infeasible using currently available Electrical Design Automation (EDA) tools. The principal reason for this is the time required for the nonlinear solver to compute the solutions of large linearized systems during the simulation of these circuits. The research presented in this report aims to address the computational difficulties introduced by these large linearized systems by using Model Order Reduction (MOR) to (i) generate specialized preconditioners that accelerate the computation of the linear system solution and (ii) reduce the overall dynamical system size. MOR techniques attempt to produce macromodels that capture the desired input-output behavior of larger dynamical systems and enable substantial speedups in simulation time. Several MOR techniques that have been developed under the LDRD on 'Solution Methods for Very Highly Integrated Circuits' will be presented in this report. Among those presented are techniques for linear time-invariant dynamical systems that either extend current approaches or improve the time-domain performance of the reduced model using novel error bounds, and a new approach for linear time-varying dynamical systems that guarantees dimension reduction, which has not been proven before. Progress on preconditioning power grid systems using multi-grid techniques will be presented, as well as a framework for delivering MOR techniques to the user community using Trilinos and the Xyce circuit simulator.
Adaptive Current Control Method for Hybrid Active Power Filter
NASA Astrophysics Data System (ADS)
Chau, Minh Thuyen
2016-09-01
This paper proposes an adaptive current control method for a Hybrid Active Power Filter (HAPF). It consists of a fuzzy-neural controller, an identification and prediction model, and a cost function. The fuzzy-neural controller parameters are adjusted according to the cost-function minimum criterion. For this reason, the proposed control method is capable of online control that tracks variations of the load harmonic currents. Compared to a single fuzzy logic control method, the proposed control method offers better dynamic response, smaller steady-state compensation error, better online control capability, and more effective harmonic cancellation. Simulation and experimental results have demonstrated the effectiveness of the proposed control method.
Parallel, adaptive finite element methods for conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.
1994-01-01
We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.
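The time-stepping structure can be illustrated with a far simpler spatial discretization: a first-order upwind finite-volume scheme for linear advection, advanced with a two-stage Runge-Kutta method. This is a stand-in for the paper's discontinuous Galerkin method with piecewise Legendre bases, not a reimplementation of it.

```python
import numpy as np

def advect_rk2(u, c, dx, dt, steps):
    """Solve u_t + c u_x = 0 (c > 0) on a periodic grid with a first-order
    upwind flux and a two-stage (midpoint) Runge-Kutta time integrator."""
    def rhs(v):
        return -c * (v - np.roll(v, 1)) / dx   # upwind difference, periodic
    for _ in range(steps):
        mid = u + 0.5 * dt * rhs(u)            # RK2 midpoint stage
        u = u + dt * rhs(mid)
    return u
```

With CFL number c*dt/dx = 0.5 the scheme is stable; the telescoping upwind fluxes conserve the cell-average sum exactly, the discrete analogue of the conservation property the paper's method is built around.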
NASA Astrophysics Data System (ADS)
Reidsma, Pytrik; Wolf, Joost; Kanellopoulos, Argyris; Schaap, Ben F.; Mandryk, Maryia; Verhagen, Jan; van Ittersum, Martin K.
2015-04-01
Rather than on crop modelling only, climate change impact assessments in agriculture need to be based on integrated assessment and farming systems analysis, and account for adaptation at different levels. With a case study for Flevoland, the Netherlands, we illustrate that (1) crop models cannot account for all relevant climate change impacts and adaptation options, and (2) changes in technology, policy and prices have had, and are likely to have, larger impacts on farms than climate change. While crop modelling indicates positive impacts of climate change on yields of major crops in 2050, a semi-quantitative and participatory method assessing impacts of extreme events shows that there are nevertheless several climate risks. A range of adaptation measures is, however, available to reduce possible negative effects at crop level. In addition, at farm level farmers can change cropping patterns and adjust inputs and outputs. Farm structural change will also influence impacts and adaptation. While the 5th IPCC report is more negative than the previous report regarding impacts of climate change on agriculture, also for temperate regions, our results show that when climate change is put in the context of other drivers, and when adaptation at crop and farm level is explicitly accounted for, impacts may be less negative in some regions and opportunities are revealed. These results refer to a temperate region, but an integrated assessment may also change perspectives on climate change for other parts of the world.
de Bremond, Ariane; Preston, Benjamin; Rice, Jennie S.
2014-10-01
Energy systems comprise a key sector of the U.S. economy, and one that has been identified as potentially vulnerable to the effects of climate variability and change. However, understanding of adaptation processes in energy companies and private entities more broadly is limited. It is unclear, for example, the extent to which energy companies are well-served by existing knowledge and tools emerging from the impacts, adaptation and vulnerability (IAV) and integrated assessment modeling (IAM) communities and/or what experiments, analyses, and model results have practical utility for informing adaptation in the energy sector. As part of a regional IAM development project, we investigated available evidence of adaptation processes in the energy sector, with a particular emphasis on the U.S. Southeast and Gulf Coast region. A mixed methods approach of literature review and semi-structured interviews with key informants from energy utilities was used to compare existing knowledge from the IAV community with that of regional stakeholders. That comparison revealed that much of the IAV literature on the energy sector is climate-centric and therefore disconnected from the more integrated decision-making processes and institutional perspectives of energy utilities. Increasing the relevance of research and assessment for the energy sector will necessitate a greater investment in integrated assessment and modeling efforts that respond to practical decision-making needs as well as greater collaboration between energy utilities and researchers in the design, execution, and communication of those efforts.
A novel adaptive noise filtering method for SAR images
NASA Astrophysics Data System (ADS)
Li, Weibin; He, Mingyi
2009-08-01
In most applications, signals and images are corrupted by additive noise. As a result, there are many methods to remove additive noise, while few approaches work well for multiplicative noise. This paper presents an improved MAP-based filter for multiplicative noise using an adaptive-window denoising technique. A Gamma noise model is discussed, and a preprocessing technique to differentiate mature and immature pixels is applied to obtain an accurate estimate of the Equivalent Number of Looks. Adaptive local window growth and three different denoising strategies are applied to smooth noise while keeping subtle information, according to the local statistical features. Simulation results show that the performance is better than existing filters. Several image experiments demonstrate its theoretical performance.
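For illustration, a classical Lee-type filter with a fixed window is sketched below. It is a standard baseline for multiplicative (speckle) noise, not the paper's MAP filter with adaptive window growth; `enl` stands in for the estimated Equivalent Number of Looks.

```python
import numpy as np

def lee_filter(img, win=3, enl=4.0):
    """Lee-type speckle filter: shrink each pixel toward its local window
    mean, with a gain k that approaches 0 in homogeneous regions (pure
    speckle) and 1 near strong structure. For unit-mean speckle the noise
    variance is 1/enl."""
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    var_n = 1.0 / enl
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            mean, var = w.mean(), w.var()
            cx2 = var / (mean ** 2 + 1e-12)   # local coefficient of variation^2
            k = max(0.0, 1.0 - var_n / (cx2 + 1e-12))
            out[i, j] = mean + k * (img[i, j] - mean)
    return out
```

On a homogeneous region the local variance matches the speckle model and the filter returns the window mean; the paper's adaptive-window variant additionally grows the window where the scene is flat.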
Varying Timescales of Stimulus Integration Unite Neural Adaptation and Prototype Formation.
Mattar, Marcelo G; Kahn, David A; Thompson-Schill, Sharon L; Aguirre, Geoffrey K
2016-07-11
Human visual perception is both stable and adaptive. Perception of complex objects, such as faces, is shaped by the long-term average of experience as well as immediate, comparative context. Measurements of brain activity have demonstrated corresponding neural mechanisms, including norm-based responses reflective of stored prototype representations, and adaptation induced by the immediately preceding stimulus. Here, we consider the possibility that these apparently separate phenomena can arise from a single mechanism of sensory integration operating over varying timescales. We used fMRI to measure neural responses from the fusiform gyrus while subjects observed a rapid stream of face stimuli. Neural activity at this cortical site was best explained by the integration of sensory experience over multiple sequential stimuli, following a decaying-exponential weighting function. Although this neural activity could be mistaken for immediate neural adaptation or long-term, norm-based responses, it in fact reflected a timescale of integration intermediate to both. We then examined the timescale of sensory integration across the cortex. We found a gradient that ranged from rapid sensory integration in early visual areas, to long-term, stable representations in higher-level, ventral-temporal cortex. These findings were replicated with a new set of face stimuli and subjects. Our results suggest that a cascade of visual areas integrate sensory experience, transforming highly adaptable responses at early stages to stable representations at higher levels.
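The decaying-exponential weighting described above can be sketched as a running, normalized exponential average of the stimulus stream; `tau` (in units of stimuli) is an assumed timescale parameter, small for immediate-adaptation-like behavior and large for prototype-like stability.

```python
import math

def integrated_response(stimuli, tau):
    """Predicted response after each stimulus: an exponentially decaying,
    normalized weighted average of all preceding stimuli. The decay factor
    per stimulus is exp(-1/tau)."""
    decay = math.exp(-1.0 / tau)
    acc, norm, out = 0.0, 0.0, []
    for s in stimuli:
        acc = decay * acc + s       # weighted sum of the stream
        norm = decay * norm + 1.0   # running sum of the weights
        out.append(acc / norm)
    return out
```

With a very short tau the output tracks the most recent stimulus (adaptation-like); with a long tau it converges toward the long-term average (norm-like), matching the gradient of timescales the study reports across visual areas.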
NASA Astrophysics Data System (ADS)
Jeong, Jong Seob; Cannata, Jonathan Matthew; Shung, K. Kirk
2010-04-01
It was previously demonstrated that it is feasible to simultaneously perform ultrasound therapy and imaging of a coagulated lesion during treatment with an integrated transducer that is capable of high intensity focused ultrasound (HIFU) and B-mode ultrasound imaging. It was found that coded excitation and fixed notch filtering upon reception could significantly reduce interference caused by the therapeutic transducer. During HIFU sonication, the imaging signal generated with coded excitation and fixed notch filtering had a range side-lobe level of less than -40 dB, while traditional short-pulse excitation and fixed notch filtering produced a range side-lobe level of -20 dB. The shortcoming, however, is that relatively complicated electronics may be needed to utilize coded excitation in an array imaging system. It is for this reason that in this paper an adaptive noise canceling technique is proposed to improve image quality by minimizing not only the therapeutic interference, but also the remnant side-lobe 'ripples' when using traditional short-pulse excitation. The performance of this technique was verified through simulation and experiments using a prototype integrated HIFU/imaging transducer. Although it is known that the remnant ripples are related to the notch attenuation value of the fixed notch filter, in practice it is difficult to find the optimal notch attenuation value due to changes in the target or the medium resulting from motion or different acoustic properties, even during one sonication pulse. In contrast, the proposed adaptive noise canceling technique is capable of optimally minimizing both the therapeutic interference and the residual ripples without such constraints. The prototype integrated HIFU/imaging transducer is composed of three rectangular elements. The 6 MHz center element is used for imaging and the outer two identical 4 MHz elements work together to transmit the HIFU beam. Two HIFU elements of 14.4 mm × 20.0 mm dimensions could
Magnitude Estimation with Noisy Integrators Linked by an Adaptive Reference.
Thurley, Kay
2016-01-01
Judgments of physical stimuli show characteristic biases; relatively small stimuli are overestimated whereas relatively large stimuli are underestimated (regression effect). Such biases likely result from a strategy that seeks to minimize errors given noisy estimates about stimuli that itself are drawn from a distribution, i.e., the statistics of the environment. While being conceptually well described, it is unclear how such a strategy could be implemented neurally. The present paper aims toward answering this question. A theoretical approach is introduced that describes magnitude estimation as two successive stages of noisy (neural) integration. Both stages are linked by a reference memory that is updated with every new stimulus. The model reproduces the behavioral characteristics of magnitude estimation and makes several experimentally testable predictions. Moreover, the model identifies the regression effect as a means of minimizing estimation errors and explains how this optimality strategy depends on the subject's discrimination abilities and on the stimulus statistics. The latter influence predicts another property of magnitude estimation, the so-called range effect. Beyond being successful in describing decision-making, the present work suggests that noisy integration may also be important in processing magnitudes.
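A minimal sketch of the two-stage idea under simplifying assumptions (Gaussian measurement noise, a fixed mixing weight `w`, and a leaky-integrator reference; all parameter names and values are illustrative, not taken from the paper):

```python
import random

def estimate(stimulus, reference, noise_sd, w=0.7, rng=random):
    """Second-stage readout: a noisy first-stage measurement of the
    stimulus is pulled toward the adaptive reference, producing the
    regression effect (small magnitudes overestimated, large ones
    underestimated relative to the reference)."""
    measurement = stimulus + rng.gauss(0.0, noise_sd)
    return w * measurement + (1.0 - w) * reference

def update_reference(reference, stimulus, rate=0.1):
    """Reference memory updated with every new stimulus (leaky running
    mean, standing in for the model's adaptive reference)."""
    return reference + rate * (stimulus - reference)
```

With zero noise the pull toward the reference is explicit: a stimulus above the running reference is reported low, one below it is reported high, which is exactly the regression effect the model identifies as error-minimizing.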
Automatic multirate methods for ordinary differential equations. [Adaptive time steps
Gear, C.W.
1980-01-01
A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if a system is sparse; however, the estimation and control of errors is very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising. 2 figures.
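A minimal multirate sketch, much simpler than Gear's adaptive, error-controlled schemes: within one macro-step the slow component advances once with step H while the fast component takes m forward-Euler sub-steps of size H/m, with the slow value frozen over the macro-step. The splitting and step ratio are illustrative assumptions.

```python
def multirate_euler(f_slow, f_fast, y_slow, y_fast, t, H, m):
    """One macro-step of a frozen-coefficient multirate forward-Euler
    scheme: slow variable steps once with H, fast variable takes m
    sub-steps of size H/m using the old slow value."""
    h = H / m
    y_slow_new = y_slow + H * f_slow(t, y_slow, y_fast)
    for k in range(m):
        y_fast = y_fast + h * f_fast(t + k * h, y_slow, y_fast)
    return y_slow_new, y_fast
```

The savings the abstract describes come from evaluating the expensive slow right-hand side only once per macro-step; the hard part, which this sketch omits entirely, is estimating and controlling the error introduced by the frozen coupling.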
Planetary gearbox fault diagnosis using an adaptive stochastic resonance method
NASA Astrophysics Data System (ADS)
Lei, Yaguo; Han, Dong; Lin, Jing; He, Zhengjia
2013-07-01
Planetary gearboxes are widely used in aerospace, automotive and heavy industry applications due to their large transmission ratio, strong load-bearing capacity and high transmission efficiency. The tough operating conditions of heavy duty and intensive impact loads may cause gear tooth damage such as fatigue cracks and missing teeth. The challenging issues in fault diagnosis of planetary gearboxes include selection of sensitive measurement locations, investigation of vibration transmission paths, and weak feature extraction. One of them is how to effectively extract the weak characteristics of faulty components from noisy signals of planetary gearboxes. To address this issue, an adaptive stochastic resonance (ASR) method is proposed in this paper. The ASR method utilizes the optimization ability of ant colony algorithms and adaptively realizes the optimal stochastic resonance system matching the input signals. Using the ASR method, the noise may be weakened and weak characteristics highlighted, and therefore the faults can be diagnosed accurately. A planetary gearbox test rig is established, and experiments with sun gear faults, including a chipped tooth and a missing tooth, are conducted; the vibration signals are collected under loaded conditions and various motor speeds. The proposed method is used to process the collected signals, and the results of feature extraction and fault diagnosis demonstrate its effectiveness.
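The core of stochastic resonance methods is the bistable system dx/dt = a·x − b·x³ + s(t): with well-tuned (a, b), noise helps weak periodic components cross the potential barrier and be amplified. The sketch below integrates this system with forward Euler and replaces the paper's ant-colony optimization with a crude grid search; the template-correlation fitness is an assumption for illustration, not the paper's matching criterion.

```python
import math

def bistable_sr(signal, a, b, dt):
    """Forward-Euler integration of the bistable stochastic-resonance
    system dx/dt = a*x - b*x**3 + s(t), starting from x = 0."""
    x, out = 0.0, []
    for s in signal:
        x += dt * (a * x - b * x ** 3 + s)
        out.append(x)
    return out

def tune_sr(signal, template, grid, dt):
    """Stand-in for adaptive parameter selection: pick the (a, b) pair
    whose SR output correlates best with a reference template."""
    def corr(u, v):
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        num = sum((x - mu) * (y - mv) for x, y in zip(u, v))
        den = math.sqrt(sum((x - mu) ** 2 for x in u) *
                        sum((y - mv) ** 2 for y in v)) or 1.0
        return num / den
    return max(grid, key=lambda ab: corr(bistable_sr(signal, *ab, dt), template))
```

In the actual method the search is driven by an ant colony algorithm and the fitness is computed from the measured vibration signal rather than a known template.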
Adaptation of fast marching methods to intracellular signaling
NASA Astrophysics Data System (ADS)
Chikando, Aristide C.; Kinser, Jason M.
2006-02-01
Imaging of signaling phenomena within the intracellular domain is a well studied field. Signaling is the process by which all living cells communicate with their environment and with each other. In the case of signaling calcium waves, numerous computational models based on solving homogeneous reaction diffusion equations have been developed. Typically, the reaction diffusion approach consists of solving systems of partial differential equations at each update step. The traditional methods used to solve these reaction diffusion equations are very computationally expensive since they must employ small time steps in order to reduce the computational error. The presented research suggests the application of fast marching methods to imaging signaling calcium waves, more specifically fertilization calcium waves, in Xenopus laevis eggs. The fast marching approach provides a fast and efficient means of tracking the evolution of monotonically advancing fronts. A model that employs biophysical properties of intracellular calcium signaling, and adapts fast marching methods to track the propagation of signaling calcium waves, is presented. The developed model is used to reproduce simulation results obtained with a reaction-diffusion-based model. Results obtained with our model agree with both the results of reaction-diffusion-based models and confocal microscopy observations during in vivo experiments. The adaptation of fast marching methods to intracellular protein or macromolecule trafficking is also briefly explored.
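A compact fast-marching solver for the eikonal equation |∇T| = 1/F on a uniform unit-spaced grid is sketched below. This is the generic method (first-order upwind update, min-heap front), without the paper's biophysical speed model for calcium waves.

```python
import heapq
import math

def fast_marching(speed, src):
    """First-arrival times T solving |grad T| = 1/speed on a unit grid.
    Nodes are frozen in increasing T order, Dijkstra-style; each neighbour
    update solves the first-order upwind quadratic."""
    ny, nx = len(speed), len(speed[0])
    INF = math.inf
    T = [[INF] * nx for _ in range(ny)]
    T[src[0]][src[1]] = 0.0
    frozen = [[False] * nx for _ in range(ny)]
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if frozen[i][j]:
            continue
        frozen[i][j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx and not frozen[a][b]:
                # Upwind (smallest) neighbour value along each axis.
                tx = min(T[a][b2] for b2 in (b - 1, b + 1) if 0 <= b2 < nx)
                ty = min(T[a2][b] for a2 in (a - 1, a + 1) if 0 <= a2 < ny)
                f = 1.0 / speed[a][b]
                lo, hi = sorted((tx, ty))
                if hi == INF or hi - lo >= f:
                    t_new = lo + f   # one-sided update
                else:
                    t_new = 0.5 * (lo + hi + math.sqrt(2.0 * f * f - (hi - lo) ** 2))
                if t_new < T[a][b]:
                    T[a][b] = t_new
                    heapq.heappush(heap, (t_new, (a, b)))
    return T
```

The speed-up over reaction-diffusion time stepping mentioned in the abstract comes from visiting each grid node only once, giving O(N log N) total cost for N nodes.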
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
NASA Astrophysics Data System (ADS)
Hay-McCutcheon, Marcia J.; Brown, Carolyn J.; Abbas, Paul J.
2005-10-01
The objective of this study was to determine the impact that auditory-nerve adaptation has on behavioral measures of temporal integration in Nucleus 24 cochlear implant recipients. It was expected that, because the auditory nerve serves as the input to central temporal integrator, a large degree of auditory-nerve adaptation would reduce the amount of temporal integration. Neural adaptation was measured by tracking amplitude changes of the electrically evoked compound action potential (ECAP) in response to 1000-pps biphasic pulse trains of varying durations. Temporal integration was measured at both suprathreshold and threshold levels by an adaptive procedure. Although varying degrees of neural adaptation and temporal integration were observed across individuals, results of this investigation revealed no correlation between the degree of neural adaptation and psychophysical measures of temporal integration.
INTEGRATING EVOLUTIONARY AND FUNCTIONAL APPROACHES TO INFER ADAPTATION AT SPECIFIC LOCI
Storz, Jay F.; Wheat, Christopher W.
2010-01-01
Inferences about adaptation at specific loci are often exclusively based on the static analysis of DNA sequence variation. Ideally, population-genetic evidence for positive selection serves as a stepping-off point for experimental studies to elucidate the functional significance of the putatively adaptive variation. We argue that inferences about adaptation at specific loci are best achieved by integrating the indirect, retrospective insights provided by population-genetic analyses with the more direct, mechanistic insights provided by functional experiments. Integrative studies of adaptive genetic variation may sometimes be motivated by experimental insights into molecular function, which then provide the impetus to perform population genetic tests to evaluate whether the functional variation is of adaptive significance. In other cases, studies may be initiated by genome scans of DNA variation to identify candidate loci for recent adaptation. Results of such analyses can then motivate experimental efforts to test whether the identified candidate loci do in fact contribute to functional variation in some fitness-related phenotype. Functional studies can provide corroborative evidence for positive selection at particular loci, and can potentially reveal specific molecular mechanisms of adaptation. PMID:20500215
Baraúna, Rafael A.; Freitas, Dhara Y.; Pinheiro, Juliana C.; Folador, Adriana R. C.; Silva, Artur
2017-01-01
Since the publication of one of the first studies using 2D gel electrophoresis by Patrick H. O’Farrell in 1975, several other studies have used that method to evaluate cellular responses to different physicochemical variations. In environmental microbiology, bacterial adaptation to cold environments is a “hot topic” because of its application in biotechnological processes. As in other fields, gel-based and gel-free proteomic methods have been used to determine the molecular mechanisms of adaptation to cold of several psychrotrophic and psychrophilic bacterial species. In this review, we aim to describe and discuss these main molecular mechanisms of cold adaptation, referencing proteomic studies that have made significant contributions to our current knowledge in the area. Furthermore, we use Exiguobacterium antarcticum B7 as a model organism to present the importance of integrating genomic, transcriptomic, and proteomic data. This species has been isolated in Antarctica and previously studied at all three omic levels. The integration of these data permitted more robust conclusions about the mechanisms of bacterial adaptation to cold. PMID:28248259
The SMART CLUSTER METHOD - adaptive earthquake cluster analysis and declustering
NASA Astrophysics Data System (ADS)
Schaefer, Andreas; Daniell, James; Wenzel, Friedemann
2016-04-01
Earthquake declustering is an essential part of almost any statistical analysis of the spatial and temporal properties of seismic activity, with typical applications comprising probabilistic seismic hazard assessments (PSHAs) and earthquake prediction methods. The nature of earthquake clusters and the subsequent declustering of earthquake catalogues play a crucial role in determining the magnitude-dependent earthquake return period and its spatial variation. Various methods have been developed by other researchers to address this issue, ranging in complexity from rather simple statistical window methods to complex epidemic models. This study introduces the smart cluster method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal identification. An adaptive search algorithm for data point clusters is adopted, which uses the earthquake density in the spatio-temporal neighbourhood of each event to adjust the search properties. The identified clusters are subsequently analysed to determine directional anisotropy, focusing on the strong correlation along the rupture plane, and the search space is adjusted with respect to these directional properties. In the case of rapid subsequent ruptures, such as the 1992 Landers sequence or the 2010/2011 Darfield-Christchurch events, an adaptive classification procedure based on near-field searches, support vector machines and temporal splitting is applied to disassemble subsequent ruptures that may have been grouped into a single cluster. The steering parameters of the search behaviour are linked to local earthquake properties such as magnitude of completeness, earthquake density and Gutenberg-Richter parameters. The method is capable of identifying and classifying earthquake clusters in space and time. It is tested and validated using earthquake data from California and New Zealand. As a result of the cluster identification process, each event in
NASA Astrophysics Data System (ADS)
Coleman, S.; Hurley, S.; Koliba, C.; Zia, A.; Exler, S.
2014-12-01
Eutrophication and nutrient pollution of surface waters occur within complex governance, social, hydrologic and biophysical basin contexts. The pervasive and perennial nutrient pollution in Lake Champlain Basin, despite decades of efforts, exemplifies problems found across the world's surface waters. Stakeholders with diverse values, interests, and forms of explicit and tacit knowledge determine water quality impacts through land use, agricultural and water resource decisions. Uncertainty, ambiguity and dynamic feedback further complicate the ability to promote the continual provision of water quality and ecosystem services. Adaptive management of water resources and land use requires mechanisms to allow for learning and integration of new information over time. The transdisciplinary Research on Adaptation to Climate Change (RACC) team is working to build regional adaptive capacity in Lake Champlain Basin while studying and integrating governance, land use, hydrological, and biophysical systems to evaluate implications for adaptive management. The RACC team has engaged stakeholders through mediated modeling workshops, online forums, surveys, focus groups and interviews. In March 2014, CSS2CC.org, an interactive online forum to source and identify adaptive interventions from a group of stakeholders across sectors was launched. The forum, based on the Delphi Method, brings forward the collective wisdom of stakeholders and experts to identify potential interventions and governance designs in response to scientific uncertainty and ambiguity surrounding the effectiveness of any strategy, climate change impacts, and the social and natural systems governing water quality and eutrophication. A Mediated Modeling Workshop followed the forum in May 2014, where participants refined and identified plausible interventions under different governance, policy and resource scenarios. Results from the online forum and workshop can identify emerging consensus across scales and sectors
A decentralized adaptive robust method for chaos control.
Kobravi, Hamid-Reza; Erfanian, Abbas
2009-09-01
This paper presents a control strategy based on sliding mode control, adaptive control, and fuzzy logic systems for controlling chaotic dynamics. We consider this control paradigm in chaotic systems where the equations of motion are not known. The proposed control strategy is robust against external noise disturbances and system parameter variations, and can be used to convert chaotic orbits not only to desired periodic orbits but also to any desired chaotic motion. Simulation results of controlling some typical higher-order chaotic systems demonstrate the effectiveness of the proposed control method.
Sea Extremes: Integrated impact assessment in coastal climate adaptation
NASA Astrophysics Data System (ADS)
Sorensen, Carlo; Knudsen, Per; Broge, Niels; Molgaard, Mads; Andersen, Ole
2016-04-01
We investigate the effects of sea level rise and changing precipitation patterns on coastal flooding hazards. Historic and present in situ and satellite data on water and groundwater levels, precipitation, vertical ground motion, geology, and geotechnical soil properties are combined with flood protection measures, topography, and infrastructure to provide a more complete picture of the water-related impact of climate change at an exposed coastal location. Results show that future sea extremes evaluated from extreme value statistics may indeed have a large impact. However, the integrated effects of future storm surges and other geo- and hydro-parameters need to be considered in order to provide the best protection and mitigation efforts. Based on the results, we present and discuss a simple conceptual model setup that can, for example, be used to translate regional sea level rise evidence and projections into concrete impact measures. This may be used by potentially affected stakeholders, often working in different sectors and across levels of governance, in a common appraisal of the challenges ahead. The model may also enter dynamic tools to evaluate local impact as sea level research advances and projections for the future are updated.
A CCD Monolithic LMS Adaptive Analog Signal Processor Integrated Circuit.
1980-03-01
correlated. This is an excellent method in the case when no external reference input is available, such as speech or music playback in the presence of...messages. Speech has the vocal timbre and conversational idiosyncrasies of the speaker and the emotion behind his words. It is normally constructed
Path Integral Monte Carlo Methods for Fermions
NASA Astrophysics Data System (ADS)
Ethan, Ethan; Dubois, Jonathan; Ceperley, David
2014-03-01
In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First, we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with a discussion of extensions to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10- ERD-058, and the Lawrence Scholar program.
Adaptive grid methods for RLV environment assessment and nozzle analysis
NASA Technical Reports Server (NTRS)
Thornburg, Hugh J.
1996-01-01
Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. In order to efficiently meet these requirements a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surfaces, temporally varying geometries, and fluid-structure interaction, the need is unavoidable. In other cases the need is to rapidly generate and modify high quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect. Thus excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaptation. The most widely used involve grid point redistribution, local grid point enrichment/derefinement or local modification of the actual flow solver. However, the success of any one of these methods ultimately depends on the feature detection algorithm used to determine solution domain regions which require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different types as well as differing intensities, and adequately address scaling and normalization across blocks. These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation
An adaptive stepsize method for the chemical Langevin equation.
Ilie, Silvana; Teslya, Alexandra
2012-05-14
Mathematical and computational modeling are key tools in analyzing important biological processes in cells and living organisms. In particular, stochastic models are essential to accurately describe the cellular dynamics, when the assumption of the thermodynamic limit can no longer be applied. However, stochastic models are computationally much more challenging than the traditional deterministic models. Moreover, many biochemical systems arising in applications have multiple time-scales, which lead to mathematical stiffness. In this paper we investigate the numerical solution of a stochastic continuous model of well-stirred biochemical systems, the chemical Langevin equation. The chemical Langevin equation is a stochastic differential equation with multiplicative, non-commutative noise. We propose an adaptive stepsize algorithm for approximating the solution of models of biochemical systems in the Langevin regime, with small noise, based on estimates of the local error. The underlying numerical method is the Milstein scheme. The proposed adaptive method is tested on several examples arising in applications and it is shown to have improved efficiency and accuracy compared to the existing fixed stepsize schemes.
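As a rough illustration of the kind of scheme the abstract describes (not the authors' algorithm; the test equation, step-control rule, and tolerances below are illustrative assumptions), the sketch applies a Milstein step to geometric Brownian motion dX = μX dt + σX dW and adapts the stepsize by comparing one full step against two half steps that share the same Brownian increment:

```python
import math
import random

def milstein_step(x, h, dW, mu, sigma):
    # One Milstein step for dX = mu*X dt + sigma*X dW
    # (here b(x) = sigma*x, so b'(x) = sigma).
    return x + mu * x * h + sigma * x * dW + 0.5 * sigma**2 * x * (dW * dW - h)

def adaptive_milstein(x0, t_end, mu, sigma, tol=1e-6, h0=0.1, seed=0):
    rng = random.Random(seed)
    t, x, h = 0.0, x0, h0
    while t < t_end:
        h = min(h, t_end - t)
        # Split the Brownian increment so coarse and fine paths see the same noise.
        dW1 = rng.gauss(0.0, math.sqrt(h / 2))
        dW2 = rng.gauss(0.0, math.sqrt(h / 2))
        coarse = milstein_step(x, h, dW1 + dW2, mu, sigma)
        fine = milstein_step(milstein_step(x, h / 2, dW1, mu, sigma),
                             h / 2, dW2, mu, sigma)
        err = abs(fine - coarse)        # local error estimate by step doubling
        if err <= tol:                  # accept: keep the fine solution
            t += h
            x = fine
            if err < tol / 4:           # error comfortably small: grow the step
                h *= 2.0
        else:                           # reject: retry with a smaller step
            h /= 2.0
    return x

# With sigma = 0 the SDE reduces to dX = mu*X dt, so the result should
# track exp(mu*t) closely; this gives a deterministic sanity check.
x = adaptive_milstein(1.0, 1.0, mu=1.0, sigma=0.0, tol=1e-8)
```

The step-doubling estimate is the simplest possible local error control; the paper's commutative-noise estimates are more refined, but the accept/reject skeleton is the same.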
ERIC Educational Resources Information Center
International Migration, 1979
1979-01-01
This document contains working papers prepared for a seminar on Adaptation and Integration of Permanent Immigrants, along with general and specific recommendations formulated by seminar participants. Conclusions and recommendations from each paper are presented in English, French, and Spanish; the conference papers themselves are presented only in…
Career Adaptability: An Integrative Construct for Life-Span, Life-Space Theory.
ERIC Educational Resources Information Center
Savickas, Mark L.
1997-01-01
Examines the origin and current status of lifespan, life-space theory and proposes one way in which to integrate its three segments. Discusses a functionalist strategy for theory construction and the outcomes and consequences of this strategy. Discusses future directions for theory development, such as career adaptability and planful attitudes.…
Simulation Based Evaluation of Integrated Adaptive Control and Flight Planning Technologies
NASA Technical Reports Server (NTRS)
Campbell, Stefan Forrest; Kaneshige, John T.
2008-01-01
The objective of this work is to leverage NASA resources to enable effective evaluation of resilient aircraft technologies through simulation. This includes examining strengths and weaknesses of adaptive controllers, emergency flight planning algorithms, and flight envelope determination algorithms both individually and as an integrated package.
Integration of hp-Adaptivity and a Two Grid Solver. II. Electromagnetic Problems
2005-01-01
for lower order FE spaces. More precisely, let T be a grid, M the associated lowest order Nedelec subspaces of HD(curl; Ω) of the first kind [24], and W... Nedelec, Mixed finite elements in IR3, Numer. Math., 35 (1980), pp. 315–341. [25] D. Pardo and L. Demkowicz, Integration of hp-adaptivity with a two
Integrated and adaptive management of water resources: Tensions, legacies, and the next best thing
Engle, Nathan L.; Johns, Owen R.; Lemos, Maria Carmen; Nelson, Donald
2011-02-01
Integrated water resources management (IWRM) and adaptive management (AM) are two institutional and management paradigms designed to address shortcomings within water systems governance – the limits of hierarchical water institutional arrangements in the case of IWRM and the challenge of making water management decisions under uncertainty in the case of AM. Recently, there has been a trend to merge these paradigms to address the growing complexity of stressors shaping water management, such as globalization and climate change. However, because many of these joint approaches have received little empirical attention, questions remain about how they might work (or not) in practice. Here, we explore a few of these issues using empirical research carried out in Brazil. We focus on highlighting the potentially negative interactions, tensions, and tradeoffs between different institutions/mechanisms perceived as desirable as research and practice attempt to make water systems management simultaneously integrated and adaptive. Our examples pertain mainly to the use of techno-scientific knowledge in water management and governance in Brazil’s IWRM model and how it relates to participation, democracy, deliberation, diversity, and adaptability. We show that a legacy of technical and hierarchical management has shaped the integration of management, and subsequently, the degree to which management might also be adaptive. While integrated systems may be more legitimate and accountable than top-down command and control ones, the mechanisms of IWRM may be at odds with the flexible, experimental, and self-organizing nature of AM.
An adaptive multifluid interface-capturing method for compressible flow in complex geometries
Greenough, J.A.; Beckner, V.; Pember, R.B.; Crutchfield, W.Y.; Bell, J.B.; Colella, P.
1995-04-01
We present a numerical method for solving the multifluid equations of gas dynamics using an operator-split second-order Godunov method for flow in complex geometries in two and three dimensions. The multifluid system treats the fluid components as thermodynamically distinct entities and correctly models fluids with different compressibilities. This treatment allows a general equation-of-state (EOS) specification and the method is implemented so that the EOS references are minimized. The current method is complementary to volume-of-fluid (VOF) methods in the sense that a VOF representation is used, but no interface reconstruction is performed. The Godunov integrator captures the interface during the solution process. The basic multifluid integrator is coupled to a Cartesian grid algorithm that also uses a VOF representation of the fluid-body interface. This representation of the fluid-body interface allows the algorithm to easily accommodate arbitrarily complex geometries. The resulting single grid multifluid-Cartesian grid integration scheme is coupled to a local adaptive mesh refinement algorithm that dynamically refines selected regions of the computational grid to achieve a desired level of accuracy. The overall method is fully conservative with respect to the total mixture. The method will be used for a simple nozzle problem in two-dimensional axisymmetric coordinates.
A method for online verification of adapted fields using an independent dose monitor
Chang, Jina; Norrlinger, Bernhard D.; Heaton, Robert K.; Jaffray, David A.; Cho, Young-Bin; Islam, Mohammad K.; Mahon, Robert
2013-07-15
Purpose: Clinical implementation of online adaptive radiotherapy requires generation of modified fields and a method of dosimetric verification in a short time. We present a method of treatment field modification to account for patient setup error, and an online method of verification using an independent monitoring system. Methods: The fields are modified by translating each multileaf collimator (MLC) defined aperture in the direction of the patient setup error, and magnifying to account for distance variation to the marked isocentre. A modified version of a previously reported online beam monitoring system, the integral quality monitoring (IQM) system, was investigated for validation of adapted fields. The system consists of a large area ion-chamber with a spatial gradient in electrode separation to provide a spatially sensitive signal for each beam segment, mounted below the MLC, and a calculation algorithm to predict the signal. IMRT plans of ten prostate patients have been modified in response to six randomly chosen setup errors in three orthogonal directions. Results: A total of approximately 49 beams for the modified fields were verified by the IQM system, of which 97% of measured IQM signals agree with the predicted value to within 2%. Conclusions: The modified IQM system was found to be suitable for online verification of adapted treatment fields.
Climate change adaptation and Integrated Water Resource Management in the water sector
NASA Astrophysics Data System (ADS)
Ludwig, Fulco; van Slobbe, Erik; Cofino, Wim
2014-10-01
Integrated Water Resources Management (IWRM) was introduced in the 1980s to better optimise water use between different water-demanding sectors. However, since its introduction water systems have become more complicated due to changes in the global water cycle as a result of climate change. The realization that climate change will have a significant impact on water availability and flood risks has driven research and policy making on adaptation. This paper discusses the main similarities and differences between climate change adaptation and IWRM. The main difference between the two is the focus of IWRM on current and historic issues, compared to the (long-term) future focus of adaptation. One of the main problems in implementing climate change adaptation is the large uncertainty in future projections. Two completely different approaches to adaptation have been developed in response to these large uncertainties. A top-down approach based on large-scale biophysical impact analyses focuses on quantifying and minimizing uncertainty by using a large range of scenarios and different climate and impact models. The main problem with this approach is the propagation of uncertainties within the modelling chain. The opposite is the bottom-up approach, which basically ignores uncertainty and focuses on reducing vulnerabilities, often at the local scale, by developing resilient water systems. Both approaches, however, are difficult to integrate into water management: the bottom-up approach focuses too much on socio-economic vulnerability and too little on developing (technical) solutions, while the top-down approach often results in an "explosion" of uncertainty and therefore complicates decision making. A more promising direction for adaptation would be a risk-based approach. Future research should further develop and test an approach which starts with developing adaptation strategies based on current and future risks. These strategies should then be evaluated using a range
NASA Technical Reports Server (NTRS)
Kantor, A. V.; Timonin, V. G.; Azarova, Y. S.
1974-01-01
The method of adaptive discretization is the most promising for elimination of redundancy from telemetry messages characterized by signal shape. Adaptive discretization with associative sorting was considered as a way to avoid the shortcomings of adaptive discretization with buffer smoothing and adaptive discretization with logical switching in on-board information compression devices (OICD) in spacecraft. Mathematical investigations of OICD are presented.
Robust image registration using adaptive coherent point drift method
NASA Astrophysics Data System (ADS)
Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong
2016-04-01
The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, only the global spatial structure of the point sets is considered, without other forms of additional attribute information. The equivalent simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed to automatically determine the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown parameter and automatically determined in the expectation-maximization algorithm. In image registration applications, the block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. The experimental results on optical images and remote sensing images show that the proposed method can significantly improve the matching performance.
Spiking neural network simulation: numerical integration with the Parker-Sochacki method.
Stewart, Robert D; Bair, Wyeth
2009-08-01
Mathematical neuronal models are normally expressed using differential equations. The Parker-Sochacki method is a new technique for the numerical integration of differential equations applicable to many neuronal models. Using this method, the solution order can be adapted according to the local conditions at each time step, enabling adaptive error control without changing the integration timestep. The method has been limited to polynomial equations, but we present division and power operations that expand its scope. We apply the Parker-Sochacki method to the Izhikevich 'simple' model and a Hodgkin-Huxley type neuron, comparing the results with those obtained using the Runge-Kutta and Bulirsch-Stoer methods. Benchmark simulations demonstrate an improved speed/accuracy trade-off for the method relative to these established techniques.
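The adaptive-order idea can be made concrete with a minimal sketch (illustrative only, not the paper's implementation) for the polynomial ODE y' = y², whose Maclaurin coefficients follow from a Cauchy product; terms are appended until the last contribution falls below tolerance, so the order adapts to local conditions without changing the step:

```python
def parker_sochacki_y_squared(y0, t, tol=1e-12, max_order=200):
    """Power-series (Parker-Sochacki style) solution of y' = y**2, y(0) = y0.

    The Maclaurin coefficients satisfy the recurrence
        (n+1) * c[n+1] = sum_{k=0}^{n} c[k] * c[n-k]
    (a Cauchy product of the series with itself). The series order is
    adaptive: terms are added until the latest one is below `tol`.
    """
    c = [y0]
    total, power = y0, 1.0
    for n in range(max_order):
        cauchy = sum(c[k] * c[n - k] for k in range(n + 1))
        c.append(cauchy / (n + 1))
        power *= t                 # power = t**(n+1)
        term = c[-1] * power
        total += term
        if abs(term) < tol:        # adaptive truncation of the order
            break
    return total

# The exact solution is y(t) = y0 / (1 - y0*t); at y0 = 1, t = 0.5 that is 2.
approx = parker_sochacki_y_squared(1.0, 0.5)
```

For this equation every coefficient is 1, so the truncation error is a geometric tail and the adaptive stop is easy to verify by hand.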
Research on PGNAA adaptive analysis method with BP neural network
NASA Astrophysics Data System (ADS)
Peng, Ke-Xin; Yang, Jian-Bo; Tuo, Xian-Guo; Du, Hua; Zhang, Rui-Xue
2016-11-01
A new method for dealing with the puzzle of spectral analysis in prompt gamma neutron activation analysis (PGNAA) is developed and demonstrated. It applies a BP neural network to PGNAA energy spectrum analysis based on Monte Carlo (MC) simulation. The main tasks are as follows: (1) completing the MC simulation of the PGNAA spectrum library, in which the mass fractions of the elements Si, Ca, and Fe are set from 0.00 to 0.45 in steps of 0.05 and each sample is simulated using MCNP; (2) establishing the BP model for adaptive quantitative analysis of the PGNAA energy spectrum, in which the peak areas of eight characteristic gamma rays corresponding to eight elements are calculated for each of 1000 samples and for the standard sample; (3) verifying the viability of the adaptive quantitative analysis algorithm using 68 successive samples. Results show that the precision of the neural network in calculating the content of each element is significantly higher than that of the MCLLS method.
Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method
NASA Astrophysics Data System (ADS)
Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony
Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.
A locally adaptive kernel regression method for facies delineation
NASA Astrophysics Data System (ADS)
Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.
2015-12-01
Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data, to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method is demonstrated to significantly improve when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
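The locally adaptive kernel idea can be caricatured in a few lines: give each hard-data point a bandwidth set by the distance to its k-th nearest neighbour, then classify a query location by a kernel-weighted vote. This is an isotropic toy version only; the paper's steering kernels additionally orient their principal axes along local correlation directions, which the sketch omits, and the point sets, labels, and bandwidth rule here are invented for illustration:

```python
import math

def knn_bandwidth(points, k=3):
    """Per-sample bandwidth: distance to the k-th nearest neighbour,
    so kernels widen where data are sparse (an isotropic simplification
    of locally adaptive kernels)."""
    hs = []
    for i, p in enumerate(points):
        d = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        hs.append(d[k - 1])
    return hs

def classify(x, points, labels, bandwidths):
    """Facies label at x by a Gaussian-kernel weighted vote, with a
    locally adaptive width per sample."""
    scores = {}
    for p, lab, h in zip(points, labels, bandwidths):
        w = math.exp(-math.dist(x, p) ** 2 / (2.0 * h * h))
        scores[lab] = scores.get(lab, 0.0) + w
    return max(scores, key=scores.get)

# Two synthetic facies: a "fine" cluster near (0, 0), a "coarse" one near (5, 5).
pts = [(0, 0), (0.5, 0.2), (0.1, 0.6), (5, 5), (5.3, 4.8), (4.7, 5.2)]
labs = ["fine", "fine", "fine", "coarse", "coarse", "coarse"]
hs = knn_bandwidth(pts, k=2)
```

With exhaustive sampling this kind of kernel vote converges to nearest-data behaviour, which loosely mirrors the convergence property noted in the abstract.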
NASA Astrophysics Data System (ADS)
Hematiyan, M. R.
2007-03-01
A robust method is presented to evaluate 2D and 3D domain integrals without domain discretization. Each domain integral is transformed into a double integral, composed of a boundary integral and a 1D integral. Both integrals are evaluated by the adaptive Simpson quadrature method. The method can be used to evaluate domain integrals over simply or multiply connected regions with any arbitrary form of integrand. As an application of the method, domain integrals produced in the boundary element formulation of potential and elastostatic problems are analyzed. Several examples are provided to show the validity and accuracy of the method.
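The adaptive Simpson quadrature the abstract relies on is a standard building block; a generic textbook version (not the author's code) halves each subinterval until a Richardson-style error estimate |S_fine − S_coarse| / 15 meets the tolerance:

```python
import math

def adaptive_simpson(f, a, b, tol=1e-10):
    """Recursive adaptive Simpson quadrature on [a, b]."""
    def simpson(fa, fm, fb, width):
        # Simpson's rule on an interval of the given width.
        return width / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, m - a)
        right = simpson(fm, frm, fb, b - m)
        if abs(left + right - whole) <= 15.0 * tol:
            # Accept, with one step of Richardson extrapolation.
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, tol / 2)
                + recurse(m, b, fm, frm, fb, right, tol / 2))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, b - a), tol)

# The integral of sin over [0, pi] is exactly 2.
area = adaptive_simpson(math.sin, 0.0, math.pi)
```

Because function values at interval endpoints are reused by the children, each refinement level costs only two new evaluations per subinterval.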
Sparse diffraction imaging method using an adaptive reweighting homotopy algorithm
NASA Astrophysics Data System (ADS)
Yu, Caixia; Zhao, Jingtao; Wang, Yanfei; Qiu, Zhen
2017-02-01
Seismic diffractions carry valuable information from subsurface small-scale geologic discontinuities, such as faults, cavities and other features associated with hydrocarbon reservoirs. However, seismic imaging methods mainly use reflection theory for constructing imaging models, which imposes a smoothness constraint on imaging conditions. In fact, diffractors occupy only a small portion of an imaging model and possess discontinuous characteristics. In mathematics, this kind of phenomenon can be described by sparse optimization theory. Therefore, we propose a diffraction imaging method based on a sparsity-constrained model for studying diffractors. A reweighted L2-norm and L1-norm minimization model is investigated, where the L2 term requires a least-squares fit between modeled diffractions and observed diffractions and the L1 term imposes sparsity on the solution. In order to solve this model efficiently, we use an adaptive reweighting homotopy algorithm that updates the solutions by tracking a path along inexpensive homotopy steps. Numerical examples and a field data application demonstrate the feasibility of the proposed method and show its significance for detecting small-scale discontinuities in a seismic section. The proposed method has an advantage in improving the focusing ability of diffractions and reducing migration artifacts.
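As a stand-in for the sparsity-constrained model (the paper's adaptive reweighting homotopy solver is considerably more elaborate), here is a minimal iterative shrinkage-thresholding (ISTA) sketch for min 0.5·||Ax − b||² + λ·||x||₁; the matrix, data, regularization weight, and step size below are illustrative assumptions:

```python
import math

def soft(v, t):
    """Soft-thresholding: the proximal operator of the L1 norm."""
    return [math.copysign(max(abs(u) - t, 0.0), u) for u in v]

def ista(A, b, lam, step, iters=500):
    """ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1.
    `step` should be at most 1 / (largest eigenvalue of A^T A)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - b, gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step on the L2 term, then shrinkage for the L1 term
        x = soft([x[j] - step * g[j] for j in range(n)], step * lam)
    return x

# Recover a sparse model: true x = [1, 0], observations b = A @ x_true.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 0.0, 1.0]
x = ista(A, b, lam=0.01, step=1.0 / 3.0)
```

The L1 proximal step is what drives entries that do not help fit the data to exactly zero, mirroring how the sparsity term suppresses the smooth reflection background in the imaging model.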
An adaptive Cartesian grid generation method for Dirty geometry
NASA Astrophysics Data System (ADS)
Wang, Z. J.; Srinivasan, Kumar
2002-07-01
Traditional structured and unstructured grid generation methods need a water-tight boundary surface grid to start, and are therefore named boundary-to-interior (B2I) approaches. Although these methods have achieved great success in fluid flow simulations, the grid generation process can still be very time consuming if non-water-tight geometries are given. Significant user time can be taken to repair or clean a dirty geometry with cracks, overlaps or invalid manifolds before grid generation can take place. In this paper, we advocate a different approach to grid generation, namely the interior-to-boundary (I2B) approach. With an I2B approach, the computational grid is first generated inside the computational domain. This grid is then intelligently connected to the boundary, and the boundary grid is a result of this connection. A significant advantage of the I2B approach is that dirty geometries can be handled without cleaning or repairing, dramatically reducing grid generation time. An I2B adaptive Cartesian grid generation method is developed in this paper to handle dirty geometries without geometry repair. Compared with a B2I approach, the grid generation time with the I2B approach for a complex automotive engine can be reduced by three orders of magnitude.
Anderson, R W; Pember, R B; Elliot, N S
2000-09-26
A new method for the solution of the unsteady Euler equations has been developed. The method combines staggered grid Lagrangian techniques with structured local adaptive mesh refinement (AMR). This method is a precursor to a more general adaptive arbitrary Lagrangian Eulerian (ALE-AMR) algorithm under development, which will facilitate the solution of problems currently at and beyond the boundary of what is soluble by traditional ALE methods, by focusing computational resources where they are required. Many of the core issues involved in the development of the ALE-AMR method hinge upon the integration of AMR with a Lagrange step, which is the focus of the work described here. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. These new algorithmic components are first developed in one dimension and are then generalized to two dimensions. Solutions of several model problems involving shock hydrodynamics are presented and discussed.
A method of camera calibration with adaptive thresholding
NASA Astrophysics Data System (ADS)
Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei
2009-07-01
In order to calculate the parameters of the camera correctly, we must determine the accurate coordinates of certain points in the image plane. Corners are important features in 2D images. Generally speaking, they are points that have high curvature and lie at the junction of image regions of different brightness. Corner detection is therefore already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When using the SUSAN corner detection algorithm, we propose an approach to retrieve the gray-difference threshold adaptively. That makes it possible to pick up the correct chessboard inner corners under all kinds of gray contrast. Experimental results show the method to be feasible.
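A toy illustration of the two ingredients: a contrast-scaled gray-difference threshold (a simplified stand-in for the paper's adaptive rule) and a minimal 3x3 SUSAN-style USAN count, where corners show a small count of "similar" neighbours. Both the threshold rule and the neighbourhood size are assumptions for illustration only.

```python
import numpy as np

def adaptive_gray_threshold(img, k=0.1):
    """Hypothetical rule: scale the gray-difference threshold with the
    image's robust contrast, so low-contrast chessboards still work."""
    lo, hi = np.percentile(img, [5, 95])
    return max(k * (hi - lo), 1.0)

def susan_usan_area(img, t):
    """Toy SUSAN response on a 3x3 neighbourhood: count pixels whose gray
    difference from the centre (the nucleus) is below t. Flat regions give
    the full count; corners give a markedly smaller USAN area."""
    area = np.zeros(img.shape)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            area += np.abs(shifted - img) < t
    return area

# Synthetic chessboard corner: two bright quadrants meeting at (4..5, 4..5)
img = np.zeros((10, 10))
img[:5, :5] = 100.0
img[5:, 5:] = 100.0
t = adaptive_gray_threshold(img)
area = susan_usan_area(img, t)
```

On this pattern the flat interior scores the full 9, while the pixel at the chessboard inner corner scores only 5, so thresholding the USAN area picks out the corner regardless of the absolute contrast.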
A forward method for optimal stochastic nonlinear and adaptive control
NASA Technical Reports Server (NTRS)
Bayard, David S.
1988-01-01
A computational approach is taken to solve the optimal nonlinear stochastic control problem. The approach is to systematically solve the stochastic dynamic programming equations forward in time, using a nested stochastic approximation technique. Although computationally intensive, this provides a straightforward numerical solution for this class of problems and provides an alternative to the usual dimensionality problem associated with solving the dynamic programming equations backward in time. It is shown that the cost degrades monotonically as the complexity of the algorithm is reduced. This provides a strategy for suboptimal control with clear performance/computation tradeoffs. A numerical study focusing on a generic optimal stochastic adaptive control example is included to demonstrate the feasibility of the method.
Rivera, Claudia
2014-01-01
This paper analyses the perceptions of disaster risk reduction (DRR) practitioners concerning the on-going integration of climate change adaptation (CCA) into their practices in urban contexts in Nicaragua. Understanding their perceptions is important as this will provide information on how this integration can be improved. Exploring the perceptions of practitioners in Nicaragua is important as the country has a long history of disasters, and practitioners have been developing the current DRR planning framework for more than a decade. The analysis is based on semi-structured interviews designed to collect information about practitioners’ understanding of: (a) CCA, (b) the current level of integration of CCA into DRR and urban planning, (c) the opportunities and constraints of this integration, and (d) the potential to adapt cities to climate change. The results revealed that practitioners’ perception is that the integration of CCA into their practice is at an early stage, and that they need to improve their understanding of CCA as a development issue. Three main constraints on improved integration were identified: (a) a recognized lack of understanding of CCA, (b) insufficient guidance on how to integrate it, and (c) the limited opportunities to integrate it into urban planning due to a lack of instruments and capacity in this field. Three opportunities were also identified: (a) practitioners’ awareness of the need to integrate CCA into their practices, (b) the robust structure of the DRR planning framework in the country, which provides a suitable channel for facilitating integration, and (c) the fact that CCA is receiving more attention and financial and technical support from the international community. PMID:24475365
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria for adaptation are most suitable. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
NASA Astrophysics Data System (ADS)
Vanderlinden, J. P.; Baztan, J.
2014-12-01
The purpose of this paper is to present the "Adaptation Research: a Transdisciplinary Community and Policy Centred Approach" (ARTisticc) project. ARTisticc's goal is to apply innovative, standardized, transdisciplinary art-and-science integrative approaches to foster community-centred adaptation to climate change that is robust socially, culturally and scientifically. The approach used in the project is based on the strong understanding that adaptation is: (a) still "a concept of uncertain form"; (b) a concept dealing with uncertainty; (c) a concept that calls for an analysis that goes beyond the traditional disciplinary organization of science, and; (d) an unconventional process in the realm of science and policy integration. The project is centred on case studies in France, Greenland, Russia, India, Canada, Alaska, and Senegal. In every site we jointly develop artwork while analysing how natural science, essentially the geosciences, can be used in order to adapt better in the future, how society adapts to current changes, and how memories of past adaptations frame current and future processes. Artforms are mobilized in order to share scientific results with local communities and policy makers, in a way that respects cultural specificities while empowering stakeholders. ARTisticc translates these "real life experiments" into stories and artwork that are meaningful to those affected by climate change. The scientific results and the culturally mediated productions will thereafter be used to co-construct, with NGOs and policy makers, policy briefs, i.e. robust and scientifically legitimate policy recommendations regarding coastal adaptation. This co-construction process will itself be analysed with the goal of increasing art's and science's performative functions in the universe of evidence-based policy making. The project involves scientists from the natural sciences, the social sciences and the humanities, as well as artists from the performing arts (playwrights
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least squares based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique because the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity, as well as collinearity among a large number of variables; the redundant parameters are set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit while the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
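The adaptive elastic net idea can be sketched in the linear least-squares special case of Zou and Zhang (2009), not the GMM setting of the paper: a first-stage elastic net fit supplies data-dependent weights, and a second weighted fit zeroes the redundant parameters while barely penalizing the large ones. The coordinate-descent solver and all values below are illustrative assumptions.

```python
import numpy as np

def elastic_net(X, y, lam1, lam2, w=None, n_iter=200):
    """Coordinate descent for
    0.5*||y - X b||^2 + lam1 * sum_j w_j |b_j| + 0.5 * lam2 * ||b||^2.
    The per-coefficient weights w_j implement the *adaptive* L1 penalty."""
    n, p = X.shape
    w = np.ones(p) if w is None else w
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # partial residual excluding j
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam1 * w[j], 0.0) \
                   / (col_sq[j] + lam2)
    return b

rng = np.random.default_rng(1)
X = rng.standard_normal((80, 20))
beta = np.zeros(20)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.1 * rng.standard_normal(80)

b1 = elastic_net(X, y, lam1=1.0, lam2=1.0)        # stage 1: plain elastic net
w = 1.0 / (np.abs(b1) + 1e-3)                     # adaptive weights
b2 = elastic_net(X, y, lam1=1.0, lam2=1.0, w=w)   # stage 2: adaptive elastic net
```

Coefficients that stage 1 leaves near zero receive huge weights and are forced exactly to zero in stage 2, while the three true signals keep nearly unpenalized estimates, mimicking the oracle property in this toy setting.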
Evaluation of Adaptive Subdivision Method on Mobile Device
NASA Astrophysics Data System (ADS)
Rahim, Mohd Shafry Mohd; Isa, Siti Aida Mohd; Rehman, Amjad; Saba, Tanzila
2013-06-01
Recently there have been significant improvements in the capabilities of mobile devices, but rendering a large 3D object is still tedious because of the resource constraints of mobile devices. To reduce storage requirements, the 3D object is simplified, but certain areas of curvature are compromised and the surface will not be smooth. Therefore a method to smooth selected areas of curvature is implemented. One popular method is the adaptive subdivision method. Experiments are performed using two data sets, with results based on processing time, rendering speed and the appearance of the object on the devices. The results show a drop in frame rate due to the increase in the number of triangles with each level of iteration, while the processing time for generating the new mesh also increases significantly. Since there is a difference in screen size between the devices, the surface on the iPhone appears to have more triangles and to be more compact than the surface displayed on the iPad.
Does Integration Help Adapt to Climate Change? Case of Increased US Corn Yield Volatility
NASA Astrophysics Data System (ADS)
Verma, M.; Diffenbaugh, N. S.; Hertel, T. W.
2012-12-01
In the absence of new crop varieties or significant shifts in the geography of corn production, variation in US national corn yields could double by the year 2040 as a result of climate change, and without adaptation this could cause the variability of US corn prices to quadruple (Diffenbaugh et al. 2012). In addition to climate-induced price changes, analysis of recent commodity price spikes suggests that interventionist trade policies are partly to blame. Assuming we cannot much influence the future climate outcome, what policies can we undertake to adapt better? Can we use markets to blunt this edge? Diffenbaugh et al. find that the sale of corn ethanol for use in liquid fuel, when governed by quotas such as the US Renewable Fuel Standard (RFS), could make US corn prices even more variable; in contrast, the same food-fuel market link (we refer to it as an intersectoral link) may well dampen price volatility when the sale of corn to the ethanol industry is driven by higher future oil prices. The latter, however, comes at the cost of exposing corn prices to the greater volatility in oil markets. Similarly, intervention in corn trade can make US corn prices less or more volatile by distorting international corn price transmission. A negative US corn yield shock shows that domestic corn supply falls and domestic prices go up irrespective of whether or not markets are integrated. How much the prices go up depends on how much demand adjusts to accommodate the supply shock. Based on the foregoing analysis, one should expect that demand would adjust more readily when markets are integrated and therefore reduce the resulting price fluctuation. Simulation results confirm this response of corn markets. In terms of relative comparisons, however, a policy-driven intersectoral integration is least effective and prices rise much more. Similarly, a positive world oil price shock makes US oil imports expensive and, with oil being used to produce gasoline blends, it increases the price of gasoline
Calculation of transonic flows using an extended integral equation method
NASA Technical Reports Server (NTRS)
Nixon, D.
1976-01-01
An extended integral equation method for transonic flows is developed. In the extended integral equation method velocities in the flow field are calculated in addition to values on the aerofoil surface, in contrast with the less accurate 'standard' integral equation method in which only surface velocities are calculated. The results obtained for aerofoils in subcritical flow and in supercritical flow when shock waves are present compare satisfactorily with the results of recent finite difference methods.
Ferguson, R Daniel; Zhong, Zhangyi; Hammer, Daniel X; Mujat, Mircea; Patel, Ankit H; Deng, Cong; Zou, Weiyao; Burns, Stephen A
2010-11-01
We have developed a new, unified implementation of the adaptive optics scanning laser ophthalmoscope (AOSLO) incorporating a wide-field line-scanning ophthalmoscope (LSO) and a closed-loop optical retinal tracker. AOSLO raster scans are deflected by the integrated tracking mirrors so that direct AOSLO stabilization is automatic during tracking. The wide-field imager and large-spherical-mirror optical interface design, as well as a large-stroke deformable mirror (DM), enable the AOSLO image field to be corrected at any retinal coordinates of interest in a field of >25 deg. AO performance was assessed by imaging individuals with a range of refractive errors. In most subjects, image contrast was measurable at spatial frequencies close to the diffraction limit. Closed-loop optical (hardware) tracking performance was assessed by comparing sequential image series with and without stabilization. Though usually better than 10 μm rms, or 0.03 deg, tracking does not yet stabilize to single cone precision but significantly improves average image quality and increases the number of frames that can be successfully aligned by software-based post-processing methods. The new optical interface allows the high-resolution imaging field to be placed anywhere within the wide field without requiring the subject to re-fixate, enabling easier retinal navigation and faster, more efficient AOSLO montage capture and stitching.
Classical FEM-BEM coupling methods: nonlinearities, well-posedness, and adaptivity
NASA Astrophysics Data System (ADS)
Aurada, Markus; Feischl, Michael; Führer, Thomas; Karkulik, Michael; Melenk, Jens Markus; Praetorius, Dirk
2013-04-01
We consider a (possibly) nonlinear interface problem in 2D and 3D, which is solved by use of various adaptive FEM-BEM coupling strategies, namely the Johnson-Nédélec coupling, the Bielak-MacCamy coupling, and Costabel's symmetric coupling. We provide a framework to prove that the continuous as well as the discrete Galerkin solutions of these coupling methods additionally solve an appropriate operator equation with a Lipschitz continuous and strongly monotone operator. Therefore, the original coupling formulations are well-defined, and the Galerkin solutions are quasi-optimal in the sense of a Céa-type lemma. For the respective Galerkin discretizations with lowest-order polynomials, we provide reliable residual-based error estimators. Together with an estimator reduction property, we prove convergence of the adaptive FEM-BEM coupling methods. A key point for the proof of the estimator reduction are novel inverse-type estimates for the involved boundary integral operators which are advertised. Numerical experiments conclude the work and compare performance and effectivity of the three adaptive coupling procedures in the presence of generic singularities.
Assessment of Disaster Risk Reduction and Climate Change Adaptation policy integration in Zambia
NASA Astrophysics Data System (ADS)
Pilli-Sihvola, K.; Väätäinen-Chimpuku, S.
2015-12-01
Integration of Disaster Risk Management (DRM) and Climate Change Adaptation (CCA) policies, their implementation measures and the contribution of these to development has been gaining attention recently. Due to the shared objectives of CCA and particularly Disaster Risk Reduction (DRR), a component of DRM, their integration provides many benefits. At the implementation level, DRR and CCA are usually integrated. Policy integration, however, is often lacking. This study presents a novel analysis of the policy integration of DRR and CCA by 1) suggesting a definition for their integration at a general and further at horizontal and vertical levels, 2) using an analysis framework for the policy integration cycle, which separates the policy formulation and implementation processes, and 3) applying these to a case study in Zambia. Moreover, the study identifies the key gaps in the integration process, obtains an understanding of identified key factors for creating an enabling environment for the integration, and provides recommendations for further progress. The study is based on a document analysis of the relevant DRM, climate change (CC), agriculture, forestry, water management and meteorology policy documents and Acts, and 21 semi-structured interviews with key stakeholders. Horizontal integration has occurred both ways, as the revised DRM policy draft has incorporated CCA, and the new CC policy draft has incorporated DRR. This is not necessarily an optimal strategy and unless carefully implemented, it may create pressure on institutional structures and duplication of efforts in the implementation. Much less vertical integration takes place, and where it does, there is no guidance on how potential goal conflicts with sectoral and development objectives ought to be handled. The objectives of the instruments show convergence. At the programme stage, the measures are fully integrated as they can be classified as robust CCA measures, providing benefits in the current and future
Method for removing tilt control in adaptive optics systems
Salmon, Joseph Thaddeus
1998-01-01
A new adaptive optics system and method of operation, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^{-1} X^T) G (I - A)
Method for removing tilt control in adaptive optics systems
Salmon, J.T.
1998-04-28
A new adaptive optics system and method of operation are disclosed, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^{-1} X^T) G (I - A). 3 figs.
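The modified gain matrix can be checked numerically: its left factor is the orthogonal projector onto the complement of the modes spanned by the columns of X, so every actuator command the filtered gain produces is free of those (tip/tilt) components. G, A, and the mode shapes below are arbitrary stand-ins, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
G = rng.standard_normal((n, n))            # hypothetical reconstructor gain
A = 0.1 * rng.standard_normal((n, n))      # hypothetical coupling matrix
x = np.linspace(-1.0, 1.0, n)
X = np.column_stack([np.ones(n), x])       # placeholder piston/tilt mode shapes

# G' = (I - X (X^T X)^{-1} X^T) G (I - A): project the span of X out of G
P = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T
G_prime = P @ G @ (np.eye(n) - A)

# Since X^T P = 0, the modified gain satisfies X^T G' = 0 identically:
print(np.linalg.norm(X.T @ G_prime))
```

Because P is symmetric and annihilates X, X^T G' vanishes to machine precision for any G and A, which is exactly the "tilt removed" property the equation encodes.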
Adapted G-mode Clustering Method applied to Asteroid Taxonomy
NASA Astrophysics Data System (ADS)
Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.
2013-11-01
The original G-mode was a clustering method developed by A. I. Gavrishin in the late 60's for the geochemical classification of rocks, but it has also been applied to asteroid photometry, cosmic rays, lunar samples and planetary science spectroscopy data. In this work, we used an adapted version to classify the asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, previously written in FORTRAN 77, reimplemented in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy played an important role in speeding up G-mode: scipy.spatial.distance.mahalanobis was chosen as the distance estimator and numpy.histogramdd was applied to find the initial seeds from which the clusters evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.
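The seed-then-grow step can be sketched with the two library routines the abstract names: `numpy.histogramdd` locates a dense seed, and `scipy.spatial.distance.mahalanobis` grows a cluster around it. The two-cluster data, bin count, and radii are invented for illustration; this is not the adapted G-mode itself.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(3)
# Two synthetic "taxonomic" clusters in a 2-D colour space
data = np.vstack([rng.normal([0.0, 0.0], 0.1, size=(200, 2)),
                  rng.normal([1.5, 1.5], 0.1, size=(200, 2))])

# Initial seed: the centre of the densest bin found by numpy.histogramdd
hist, edges = np.histogramdd(data, bins=12)
seed_bin = np.unravel_index(np.argmax(hist), hist.shape)
seed = np.array([0.5 * (edges[d][i] + edges[d][i + 1])
                 for d, i in enumerate(seed_bin)])

# Estimate a local covariance around the seed, then grow the cluster by
# Mahalanobis distance within a 3-sigma radius
local = data[np.linalg.norm(data - seed, axis=1) < 0.3]
VI = np.linalg.inv(np.cov(local, rowvar=False))
dist = np.array([mahalanobis(p, seed, VI) for p in data])
members = data[dist < 3.0]
```

The seed lands in one of the two clusters and the 3-sigma Mahalanobis test collects essentially all of that cluster while excluding the other, which is the behaviour the iterative G-mode identification relies on.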
NASA Technical Reports Server (NTRS)
Baer-Riedhart, Jennifer L.; Landy, Robert J.
1987-01-01
The highly integrated digital electronic control (HIDEC) program at NASA Ames Research Center, Dryden Flight Research Facility is a multiphase flight research program to quantify the benefits of promising integrated control systems. McDonnell Aircraft Company is the prime contractor, with United Technologies Pratt and Whitney Aircraft, and Lear Siegler Incorporated as major subcontractors. The NASA F-15A testbed aircraft was modified by the HIDEC program by installing a digital electronic flight control system (DEFCS) and replacing the standard F100 (Arab 3) engines with F100 engine model derivative (EMD) engines equipped with digital electronic engine controls (DEEC), and integrating the DEEC's and DEFCS. The modified aircraft provides the capability for testing many integrated control modes involving the flight controls, engine controls, and inlet controls. This paper focuses on the first two phases of the HIDEC program, which are the digital flight control system/aircraft model identification (DEFCS/AMI) phase and the adaptive engine control system (ADECS) phase.
A Self-Adaptive Projection and Contraction Method for Linear Complementarity Problems
Liao, Lizhi; Wang, Shengli
2003-10-15
In this paper we develop a self-adaptive projection and contraction method for the linear complementarity problem (LCP). This method improves the practical performance of the modified projection and contraction method by adopting a self-adaptive technique. The global convergence of our new method is proved under mild assumptions. Our numerical tests clearly demonstrate the necessity and effectiveness of our proposed method.
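A basic projection iteration for the LCP can be sketched as follows, with a crude step-halving rule standing in for the paper's self-adaptive technique. Any z >= 0 that is a fixed point of the projection step solves the LCP; for a positive definite M the iteration contracts once the step size is small enough.

```python
import numpy as np

def solve_lcp(M, q, beta=1.0, tol=1e-10, max_iter=10000):
    """Projection iteration for LCP(M, q): find z >= 0 with Mz + q >= 0 and
    z . (Mz + q) = 0.  Fixed points of z -> max(0, z - beta*(Mz + q)) solve
    the LCP; beta is halved whenever the step stops contracting (a sketch
    of a self-adaptive rule, not the paper's exact strategy)."""
    z = np.zeros_like(q, dtype=float)
    res = np.inf
    for _ in range(max_iter):
        z_new = np.maximum(0.0, z - beta * (M @ z + q))
        res_new = np.linalg.norm(z_new - z)
        if res_new >= res:            # step not contracting: shrink it
            beta *= 0.5
        z, res = z_new, res_new
        if res < tol:
            break
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
q = np.array([-1.0, -1.0])
z = solve_lcp(M, q)                      # exact solution is z = (1/3, 1/3)
```

Starting from beta = 1 the iteration initially oscillates, the rule halves beta, and the method then converges linearly; this is the practical benefit a self-adaptive step selection provides over a fixed, hand-tuned beta.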
An integrated modeling method for wind turbines
NASA Astrophysics Data System (ADS)
Fadaeinedjad, Roohollah
To study the interaction of the electrical, mechanical, and aerodynamic aspects of a wind turbine, a detailed model that considers all these aspects must be used. A drawback of many studies in the area of wind turbine simulation is that either a very simple mechanical model is used with a detailed electrical model, or vice versa. Hence the interactions between electrical and mechanical aspects of wind turbine operation are not accurately taken into account. In this research, it will be shown that a combination of different simulation packages, namely TurbSim, FAST, and Simulink, can be used to model the aerodynamic, mechanical, and electrical aspects of a wind turbine in detail. In this thesis, after a review of some wind turbine concepts and software tools, a simulation structure is proposed for studying wind turbines that integrates the mechanical and electrical components of a wind energy conversion device. Based on the simulation structure, a comprehensive model for a three-bladed variable speed wind turbine with doubly-fed induction generator is developed. Using the model, the impact of a voltage sag on the wind turbine tower vibration is investigated under various operating conditions such as power system short circuit level, mechanical parameters, and wind turbine operating conditions. It is shown how an electrical disturbance can cause more sustained tower vibrations under high speed and turbulent wind conditions, which may disrupt the operation of the pitch control system. A similar simulation structure is used to model a two-bladed fixed speed wind turbine with an induction generator. An extension of the concept is introduced by adding a diesel generator system. The model is utilized to study the impact of the aeroelastic aspects of the wind turbine (i.e. tower shadow, wind shear, yaw error, turbulence, and mechanical vibrations) on the power quality of a stand-alone wind-diesel system. Furthermore, an IEEE standard flickermeter model is implemented in a
Adaptable Metadata Rich IO Methods for Portable High Performance IO
Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten
2009-01-01
Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as that provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small
Principles and Methods of Adapted Physical Education and Recreation.
ERIC Educational Resources Information Center
Arnheim, Daniel D.; And Others
This text is designed for the elementary and secondary school physical educator and the recreation specialist in adapted physical education and, more specifically, as a text for college courses in adapted and corrective physical education and therapeutic recreation. The text is divided into four major divisions: scope, key teaching and therapy…
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Adaptive multiresolution semi-Lagrangian discontinuous Galerkin methods for the Vlasov equations
NASA Astrophysics Data System (ADS)
Besse, N.; Deriaz, E.; Madaule, É.
2017-03-01
We develop adaptive numerical schemes for the Vlasov equation by combining discontinuous Galerkin discretisation, multiresolution analysis and semi-Lagrangian time integration. We implement a tree based structure in order to achieve adaptivity. Both multi-wavelets and discontinuous Galerkin rely on a local polynomial basis. The schemes are tested and validated using Vlasov-Poisson equations for plasma physics and astrophysics.
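The semi-Lagrangian ingredient can be illustrated on 1-D linear advection, far simpler than the adaptive multiresolution DG scheme of the paper: each grid value is updated by tracing its characteristic back a distance a*dt and interpolating the old solution there, so time steps beyond the CFL limit are unproblematic. The grid, profile, and linear interpolation are illustrative choices.

```python
import numpy as np

def semi_lagrangian_step(u, a, dt, dx):
    """One semi-Lagrangian step for u_t + a u_x = 0 on a periodic grid:
    trace each node back along its characteristic by a*dt and interpolate
    the old solution linearly at the departure point."""
    n = len(u)
    x = np.arange(n) * dx
    x_dep = (x - a * dt) % (n * dx)        # departure points
    s = x_dep / dx
    j = np.floor(s).astype(int) % n
    theta = s - np.floor(s)
    return (1.0 - theta) * u[j] + theta * u[(j + 1) % n]

n = 64
dx = 1.0 / n
x = np.arange(n) * dx
u0 = np.exp(-100.0 * (x - 0.5) ** 2)

# When a*dt is an exact multiple of dx the step is an exact shift ...
u1 = semi_lagrangian_step(u0, 1.0, 4 * dx, dx)
# ... and a Courant number of 2.5 per step is perfectly stable:
u = u0.copy()
for _ in range(40):
    u = semi_lagrangian_step(u, 1.0, 2.5 * dx, dx)
```

Linear interpolation is a convex combination of neighbouring values, so the scheme conserves mass, produces no overshoots, and remains stable at any Courant number; its diffusiveness is what higher-order bases such as the discontinuous Galerkin polynomials reduce.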
Solution of elastoplastic torsion problem by boundary integral method
NASA Technical Reports Server (NTRS)
Mendelson, A.
1975-01-01
The boundary integral method was applied to the elastoplastic analysis of the torsion of prismatic bars, and the results are compared with those obtained by the finite difference method. Although fewer unknowns were used, very good accuracy was obtained with the boundary integral method. Both simply and multiply connected bodies can be handled with equal ease.
Tsunami modelling with adaptively refined finite volume methods
LeVeque, R.J.; George, D.L.; Berger, M.J.
2011-01-01
Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'wellbalanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. ?? 2011 Cambridge University Press.
A hybrid method for optimization of the adaptive Goldstein filter
NASA Astrophysics Data System (ADS)
Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue
2014-12-01
The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is set as a power of the filtering function; depending on its value, areas are filtered strongly or weakly. Several variants have been developed to determine alpha adaptively using indicators such as coherence and phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in the real world, and the optimal model for accurately determining the functional relationship between the indicators and alpha is also not clear. As a result, the filter always under- or over-filters and is rarely correct. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iterative step is merged into the filtering procedure to suppress the high noise over incoherent areas. Experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance than existing approaches.
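As background, the spectral weighting that the parameter alpha controls can be sketched in a few lines of Python. This is an illustrative patch filter only: the function name and defaults are ours, and the patch overlap, tapering, and adaptive alpha selection studied in the paper are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def goldstein_patch(ifg, alpha=0.5, smooth=3):
    """Filter one complex interferogram patch, Goldstein-style.

    The patch spectrum is weighted by its own smoothed magnitude
    raised to the power alpha: alpha = 0 leaves the patch unchanged,
    and larger alpha filters more strongly.
    """
    Z = np.fft.fft2(ifg)
    S = uniform_filter(np.abs(Z), size=smooth)  # smoothed spectral magnitude
    H = (S / S.max()) ** alpha                  # filter response, power alpha
    return np.fft.ifft2(Z * H)
```

In an adaptive variant of the kind discussed above, alpha would be chosen per patch from an indicator such as the local coherence rather than fixed in advance.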
Essays on agricultural adaptation to climate change and ethanol market integration in the U.S
NASA Astrophysics Data System (ADS)
Aisabokhae, Ruth Ada
Climate factors such as precipitation and temperature are closely intertwined with agriculture, making a changing climate a major concern for the entire human race and its basic survival. Adaptation to climate is a long-running characteristic of agriculture, evidenced by the varying types and forms of agricultural enterprises associated with differing climatic conditions. Nevertheless, climate change poses a substantial additional adaptation challenge for agriculture. Mitigation encompasses efforts to reduce the current and future extent of climate change; biofuels production, for instance, expands agriculture's role in climate change mitigation. This dissertation addresses adaptation and mitigation strategies as responses to climate change in the U.S. by comprehensively examining scientific findings on agricultural adaptation to climate change; developing information on the costs and benefits of selected adaptations to identify which adaptations are most desirable and merit society's further resources; and studying how ethanol prices are interrelated across, and transmitted within, the U.S., and which markets play an important role in these dynamics. Quantitative analysis using the Forestry and Agricultural Sector Optimization Model (FASOM) shows adaptation to be highly beneficial to agriculture. The contributions of on-farm varietal and other adaptations significantly outweigh a northward shift in crop mix, implying progressive technical change; adaptation research and investment focused on farm management and varietal adaptations could therefore be quite beneficial over time. The observed northward shift of corn-acre-weighted centroids indicates that substantial production potential may shift across regions, with possibly less production in the South and more in the North, and thereby a potential redistribution of income. Time series techniques employed to study ethanol price dynamics show that the markets studied are co-integrated and strongly
Multiple methods integration for structural mechanics analysis and design
NASA Technical Reports Server (NTRS)
Housner, J. M.; Aminpour, M. A.
1991-01-01
A new research area of multiple methods integration is proposed for joining diverse methods of structural mechanics analysis which interact with one another. Three categories of multiple methods are defined: those in which a physical interface is well defined; those in which a physical interface is not well defined but must be selected; and those in which the interface is a mathematical transformation. Two fundamental integration procedures are presented that can be extended to integrate various methods (e.g., finite elements, Rayleigh-Ritz, Galerkin, and integral methods) with one another. Since the finite element method will likely be the major method to be integrated, its enhanced robustness under element distortion is also examined and a new robust shell element is demonstrated.
LDRD Final Report: Adaptive Methods for Laser Plasma Simulation
Dorr, M R; Garaizar, F X; Hittinger, J A
2003-01-29
The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are ''hydrodynamically large'', i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an
An adaptive 6-DOF tracking method by hybrid sensing for ultrasonic endoscopes.
Du, Chengyang; Chen, Xiaodong; Wang, Yi; Li, Junwei; Yu, Daoyin
2014-06-06
In this paper, a novel hybrid sensing method for tracking an ultrasonic endoscope within the gastrointestinal (GI) tract is presented, and a prototype of the tracking system is developed. We implement 6-DOF localization by sensing integration and information fusion. On the hardware level, a tri-axis gyroscope and accelerometer and a magnetic, angular rate and gravity (MARG) sensor array are attached at the end of the endoscope, and three symmetric cylindrical coils are placed around the patient's abdomen. On the algorithm level, an adaptive fast quaternion convergence (AFQC) algorithm is introduced to determine orientation by fusing inertial/magnetic measurements, in which the effects of magnetic disturbance and acceleration are estimated to obtain an adaptive convergence output. A simplified electro-magnetic tracking (SEMT) algorithm for three-dimensional position is also implemented, which can easily integrate the AFQC's results and magnetic measurements. With reasonable settings, the average position error is under 0.3 cm, and the average orientation error is 1° without noise. If magnetic disturbance or acceleration exists, the average orientation error can be kept below 3.5°.
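As background for the orientation half of such a fusion, the bare gyroscope propagation step that algorithms like AFQC build on can be sketched as a minimal Euler integration of the quaternion kinematics. The magnetic and acceleration corrections described above are omitted, and all names here are ours:

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """One Euler step of q' = 0.5 * q (x) (0, omega), renormalized."""
    dq = 0.5 * quat_mult(q, np.array([0.0, *omega]))
    q = q + dt * dq
    return q / np.linalg.norm(q)
```

A fusion filter would follow each such prediction step with a correction derived from the accelerometer and magnetometer readings, which is where the adaptive convergence of AFQC comes in.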
Integrated navigation method based on inertial navigation system and Lidar
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyue; Shi, Haitao; Pan, Jianye; Zhang, Chunxi
2016-04-01
An integrated navigation method based on the inertial navigation system (INS) and Lidar was proposed for land navigation. Compared with the traditional integrated navigation method and the dead reckoning (DR) method, the influence of the inertial measurement unit (IMU) scale factor and misalignment is considered in the new method. First, the influence of the IMU scale factor and misalignment on navigation accuracy was analyzed. Based on this analysis, the integrated system error model of INS and Lidar was established, in which the IMU scale factor and misalignment error states were included. Then the observability of the IMU error states was analyzed, and the integrated system was optimized according to the results. Finally, numerical simulation and a vehicle test were carried out to validate the availability and utility of the proposed INS/Lidar integrated navigation method. Compared with the results of the traditional integrated navigation method and the DR method, the proposed method achieved higher navigation precision. Consequently, the IMU scale factor and misalignment errors are effectively compensated by the proposed method, and the new integrated navigation method is valid.
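Error-state fusions of this kind are typically implemented with a Kalman filter. The sketch below shows only the generic measurement update, with an illustrative three-component error state (position, velocity, IMU scale factor) in which a Lidar-derived observation corrects the position error; it is not the authors' exact model, and all dimensions and noise values are assumptions.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update: state x, covariance P,
    observation z with model z = H x + noise of covariance R."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # corrected state estimate
    P = (np.eye(len(x)) - K @ H) @ P       # corrected covariance
    return x, P

# Illustrative error state: [position error, velocity error, scale-factor error]
x = np.zeros(3)
P = np.eye(3)
H = np.array([[1.0, 0.0, 0.0]])            # Lidar observes position error only
R = np.array([[0.01]])                     # assumed Lidar noise variance
x, P = kf_update(x, P, np.array([0.5]), H, R)
```

Because the scale-factor term is carried in the state vector, repeated updates of this form let the filter estimate and compensate it, which is the mechanism the observability analysis above is probing.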
Nishimaru, Eiji; Ichikawa, Katsuhiro; Hara, Takanori; Terakawa, Shoichi; Yokomachi, Kazushi; Fujioka, Chikako; Kiguchi, Masao; Ishifuro, Minoru
2012-01-01
Adaptive iterative reconstruction techniques (IRs) can decrease image noise in computed tomography (CT) and are expected to contribute to radiation dose reduction. To evaluate the performance of IRs, the conventional two-dimensional (2D) noise power spectrum (NPS) is widely used. However, when an IR produces an NPS drop at all spatial frequencies (similar to the NPS change produced by a dose increase), the conventional method cannot evaluate the noise property correctly, because it does not account for the volumetric nature of CT image data. The purpose of our study was to develop a new method for NPS measurement that can be adapted to IRs. Our method utilizes thick multi-planar reconstruction (MPR) images. Thick images are generally made by averaging CT volume data in the direction perpendicular to the MPR plane (e.g., the z-direction for an axial MPR plane). By using this averaging technique as a cut through the 3D NPS, we can obtain an adequate 2D-extracted NPS (eNPS) from the 3D NPS. We applied this method to IR images generated with adaptive iterative dose reduction 3D (AIDR 3D, Toshiba) to investigate its validity. A water phantom with a 24-cm diameter was scanned at 120 kV and 200 mAs with a 320-row CT scanner (Aquilion ONE, Toshiba). The results of the study showed that the adequate thickness of MPR images for eNPS was more than 25.0 mm. Our new NPS measurement method utilizing thick MPR images was accurate and effective for evaluating the noise reduction effects of IRs.
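For reference, the conventional ensemble-averaged 2D NPS that this work extends can be sketched as follows; applying it to thick MPR slices (volume data averaged along z) is the modification the paper proposes. The function name, pixel-spacing handling, and ROI handling are simplified assumptions.

```python
import numpy as np

def nps_2d(rois, px=1.0):
    """Ensemble-averaged 2-D noise power spectrum.

    rois: iterable of equally sized 2-D noise regions of interest.
    px:   pixel spacing (same units assumed in x and y).
    """
    rois = [np.asarray(r, dtype=float) for r in rois]
    ny, nx = rois[0].shape
    spectra = []
    for r in rois:
        g = r - r.mean()                        # remove the DC component
        spectra.append(np.abs(np.fft.fft2(g)) ** 2)
    return (px * px / (nx * ny)) * np.mean(spectra, axis=0)
```

A convenient sanity check: for zero-mean noise with unit pixel spacing, Parseval's theorem makes the mean NPS value equal the ROI variance.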
NASA Astrophysics Data System (ADS)
Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai
2015-09-01
Integration time and reference intensity are important factors for achieving a high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method of the reference intensity for an OCT setup. The reference intensity is automatically controlled by tilting the beam position using a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensity and variables in color spaces using false-color mapping. The system then increases or decreases the reference intensity following the map data, optimizing it with a given algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, made it possible to change the integration time without manual calibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. SNR and sensitivity could also be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that these findings can significantly aid in the optimization of SNR and sensitivity for optical coherence tomography systems.
REVIEW: Internal models in sensorimotor integration: perspectives from adaptive control theory
NASA Astrophysics Data System (ADS)
Tin, Chung; Poon, Chi-Sang
2005-09-01
Internal models and adaptive controls are empirical and mathematical paradigms that have evolved separately to describe learning control processes in brain systems and engineering systems, respectively. This paper presents a comprehensive appraisal of the correlation between these paradigms with a view to forging a unified theoretical framework that may benefit both disciplines. It is suggested that the classic equilibrium-point theory of impedance control of arm movement is analogous to continuous gain-scheduling or high-gain adaptive control within or across movement trials, respectively, and that the recently proposed inverse internal model is akin to adaptive sliding control originally developed for robotic manipulator applications. The modular internal-models architecture for multiple motor tasks is a form of multi-model adaptive control. Stochastic methods, such as generalized predictive control, reinforcement learning, Bayesian learning and Hebbian feedback covariance learning, are reviewed and their possible relevance to motor control is discussed. The possible applicability of a Luenberger observer and an extended Kalman filter to state estimation problems, such as sensorimotor prediction or the resolution of vestibular sensory ambiguity, is also discussed. The important role played by vestibular system identification in postural control suggests an indirect adaptive control scheme whereby system states or parameters are explicitly estimated prior to the implementation of control. This interdisciplinary framework should facilitate the experimental elucidation of the mechanisms of internal models in sensorimotor systems and the reverse engineering of such neural mechanisms into novel brain-inspired adaptive control paradigms in the future.
An Integrated Systems Approach to Designing Climate Change Adaptation Policy in Water Resources
NASA Astrophysics Data System (ADS)
Ryu, D.; Malano, H. M.; Davidson, B.; George, B.
2014-12-01
Climate change projections are characterised by large uncertainties, with rainfall variability being the key challenge in designing adaptation policies. Climate change adaptation in water resources shows all the typical characteristics of 'wicked' problems, typified by cognitive uncertainty as new scientific knowledge becomes available, problem instability, knowledge imperfection and strategic uncertainty due to institutional changes that inevitably occur over time. Planning characterised by uncertainties and instability requires an approach that can accommodate flexibility and adaptive capacity for decision-making: an ability to take corrective measures in the event that the scenarios and responses envisaged initially deviate from actual conditions at some future stage. We present an integrated, multidisciplinary and comprehensive framework designed to interface and inform science and decision making in the formulation of water resource management strategies to deal with climate change in the Musi Catchment of Andhra Pradesh, India. At the core of this framework is a dialogue between stakeholders, decision makers and scientists to define a set of plausible responses to an ensemble of climate change scenarios derived from global climate modelling. The modelling framework used to evaluate the resulting combinations of climate scenarios and adaptation responses includes surface and groundwater assessment models (SWAT and MODFLOW) and a water allocation model (REALM) to determine the water security of each adaptation strategy. Three climate scenarios extracted from downscaled climate models were selected for evaluation, together with four agreed responses: changing cropping patterns, increasing watershed development, changing the volume of groundwater extraction and improving irrigation efficiency. Water security in this context is represented by the combination of the level of water availability and its associated security of supply for three economic activities (agriculture
NASA Astrophysics Data System (ADS)
Choudhury, A. Ghose; Guha, Partha; Khanra, Barun
2009-10-01
The Darboux integrability method is particularly useful to determine first integrals of nonplanar autonomous systems of ordinary differential equations, whose associated vector fields are polynomials. In particular, we obtain first integrals for a variant of the generalized Raychaudhuri equation, which has appeared in string inspired modern cosmology.
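For readers unfamiliar with the technique, the standard Darboux construction (stated here in its generic form, not this paper's specific equations) combines polynomials that the vector field merely rescales into a conserved quantity:

```latex
% Polynomial vector field and Darboux polynomials f_j with cofactors K_j:
X = \sum_{i=1}^{n} P_i(x_1,\dots,x_n)\,\frac{\partial}{\partial x_i},
\qquad X(f_j) = K_j\, f_j .
% If constants \lambda_j (not all zero) satisfy \sum_j \lambda_j K_j = 0, then
I = \prod_j f_j^{\lambda_j}, \qquad
X(I) = I \sum_j \lambda_j K_j = 0 ,
% so I is a first integral of the system.
```

Finding the Darboux polynomials and cofactors reduces to matching polynomial coefficients, which is what makes the method well suited to polynomial vector fields such as the Raychaudhuri variant above.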
NASA Technical Reports Server (NTRS)
Wang, Ray (Inventor)
2009-01-01
A method and system for spatial data manipulation input and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver for automatically and adaptively controlling wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both short and long distances. The wireless transceiver is automatically adaptive, and wireless devices can send and receive wireless digital and analog data from various sources rapidly in real time via available networks and network services.
Towards an integrated agenda for adaptation research: theory, practice, and policy: Strategy paper
Wilbanks, Thomas J; Patwardhan, Anand; Downing, Tom; Leary, Neil
2009-01-01
Adaptation to the adverse impacts of climate change has been recognized as a priority area for national and international policy. The findings of the Fourth Assessment Report of the IPCC have reemphasized the urgency of action and the scale of response needed to cope with climate change outcomes. The scientific community has an important role to play in advancing the information and knowledge base that would help in identifying, developing and implementing effective responses to enhance adaptive capacity and reduce vulnerability. This paper examines the way in which science and research could advance the adaptation agenda. To do so, we pose a number of questions aimed at identifying the knowledge gaps and research needs. We argue that in order to address these science and research needs, an integrated approach is necessary, one that combines new knowledge with new approaches for knowledge generation, and where research and practice co-evolve; and that such a learning-by-doing approach is essential to rapidly scale up and implement concrete adaptation actions.
NASA Astrophysics Data System (ADS)
Ni, Haibin; Wang, Ming; Hao, Hui; Zhou, Jing
2016-06-01
By uniform infiltration of a different material into monolayered polystyrene colloidal crystals and by flexibly combining the two materials as etching masks, we demonstrate an improved nanosphere lithography method that possesses the ability to produce a diverse range of tunable nano-patterns in a small area with good reproducibility. The factors that affect the infiltration height and uniformity are characterized and discussed. Annular gap arrays, close-packed ring arrays, and bowl arrays are demonstrated by this method. The geometry size of these nano-patterns can be tuned over the range 10 nm to ∼500 nm with steps of ∼5 nm during the fabrication progress. Formation mechanisms of the close-packed ring arrays are experimentally investigated. Because all the fabrication processes involved in this method are adaptable to sophisticated integrated circuit fabrication techniques, most of the nano-patterns produced by this method could be integrated on thin films, which is desirable for optics integration and array sensing.
Adaptive Filtering Methods for Identifying Cross-Frequency Couplings in Human EEG
Van Zaen, Jérôme; Murray, Micah M.; Meuli, Reto A.; Vesin, Jean-Marc
2013-01-01
Oscillations have been increasingly recognized as a core property of neural responses that contribute to spontaneous, induced, and evoked activities within and between individual neurons and neural ensembles. They are considered as a prominent mechanism for information processing within and communication between brain areas. More recently, it has been proposed that interactions between periodic components at different frequencies, known as cross-frequency couplings, may support the integration of neuronal oscillations at different temporal and spatial scales. The present study details methods based on an adaptive frequency tracking approach that improve the quantification and statistical analysis of oscillatory components and cross-frequency couplings. This approach allows for time-varying instantaneous frequency, which is particularly important when measuring phase interactions between components. We compared this adaptive approach to traditional band-pass filters in their measurement of phase-amplitude and phase-phase cross-frequency couplings. Evaluations were performed with synthetic signals and EEG data recorded from healthy humans performing an illusory contour discrimination task. First, the synthetic signals in conjunction with Monte Carlo simulations highlighted two desirable features of the proposed algorithm vs. classical filter-bank approaches: resilience to broad-band noise and oscillatory interference. Second, the analyses with real EEG signals revealed statistically more robust effects (i.e. improved sensitivity) when using an adaptive frequency tracking framework, particularly when identifying phase-amplitude couplings. This was further confirmed after generating surrogate signals from the real EEG data. Adaptive frequency tracking appears to improve the measurements of cross-frequency couplings through precise extraction of neuronal oscillations. PMID:23560098
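The fixed band-pass baseline against which the adaptive frequency tracking is compared can be sketched as follows. The bands, sampling rate, and the mean-vector-length coupling index are illustrative choices on our part, not the paper's exact pipeline:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def pac_mvl(sig, fs, f_phase=(4.0, 8.0), f_amp=(30.0, 60.0), order=4):
    """Phase-amplitude coupling via the mean-vector-length index,
    using fixed band-pass filters (the classical approach that
    adaptive frequency tracking aims to improve on)."""
    def bandpass(x, lo, hi):
        sos = butter(order, [lo / (fs / 2), hi / (fs / 2)],
                     btype="band", output="sos")
        return sosfiltfilt(sos, x)
    phase = np.angle(hilbert(bandpass(sig, *f_phase)))   # low-band phase
    amp = np.abs(hilbert(bandpass(sig, *f_amp)))         # high-band envelope
    return np.abs(np.mean(amp * np.exp(1j * phase)))     # mean vector length
```

In the adaptive-tracking framework described above, the fixed `bandpass` stage would be replaced by a filter whose centre frequency follows the instantaneous frequency of each oscillatory component.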
Adaptive L₁/₂ shooting regularization method for survival analysis using gene expression data.
Liu, Xiao-Ying; Liang, Yong; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak
2013-01-01
A new adaptive L₁/₂ shooting regularization method for variable selection, based on Cox's proportional hazards model, is proposed. This adaptive L₁/₂ shooting algorithm can be easily obtained by optimizing a reweighted iterative series of L₁ penalties with a shooting strategy for the L₁/₂ penalty. Simulation results based on high-dimensional artificial data show that the adaptive L₁/₂ shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. Results from a real gene expression dataset (DLBCL) also indicate that the L₁/₂ regularization method performs competitively.
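To make the idea concrete, here is a heavily simplified sketch of reweighted shooting (coordinate descent) for an L₁/₂-type penalty, using an ordinary least-squares loss in place of Cox's partial likelihood. All names, loop counts, and the exact reweighting form are our illustrative assumptions:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator used by the shooting update."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l12_shooting(X, y, lam, n_outer=5, n_inner=100, eps=1e-6):
    """Reweighted shooting approximating an L1/2 penalty: each outer
    pass solves a weighted-L1 problem whose weights come from the
    previous estimate (weights grow as coefficients shrink)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    w = np.ones(p)
    for _ in range(n_outer):
        for _ in range(n_inner):
            for j in range(p):
                # partial residual with coordinate j removed
                r = y - X @ beta + X[:, j] * beta[j]
                beta[j] = soft(X[:, j] @ r, lam * w[j]) / col_sq[j]
        w = 1.0 / (np.sqrt(np.abs(beta)) + eps)   # L1/2-style reweighting
    return beta
```

The reweighting step is what pushes small coefficients to exactly zero while leaving large ones nearly unbiased, which is the behaviour the simulation comparison against Lasso exploits.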
Stucki, Virpi; Smith, Mark
2011-06-01
The relationship of forests to water quantity and quality has been debated in recent years. At the same time, the focus on climate change has increased interest in ecosystem restoration as a means of adaptation. Climate change might become one of the key drivers pushing integrated approaches to natural resources management into practice. The National Adaptation Programme of Action (NAPA) is an initiative agreed under the UN Framework Convention on Climate Change. An analysis was conducted to find out how widely ecosystem restoration and integrated approaches have been incorporated into NAPA priority adaptation projects. The data show that the NAPAs can be seen as a potentially important channel for operationalizing various integrated concepts. The key challenge is to implement the NAPA projects. The amount needed to implement the NAPA projects aiming at ecosystem restoration using integrated approaches represents only 0.7% of the money pledged in Copenhagen for climate change adaptation.
Thermally integrated staged methanol reformer and method
Skala, Glenn William; Hart-Predmore, David James; Pettit, William Henry; Borup, Rodney Lynn
2001-01-01
A thermally integrated two-stage methanol reformer including a heat exchanger and first and second reactors colocated in a common housing in which a gaseous heat transfer medium circulates to carry heat from the heat exchanger into the reactors. The heat transfer medium comprises principally hydrogen, carbon dioxide, methanol vapor and water vapor formed in a first stage reforming reaction. A small portion of the circulating heat transfer medium is drawn off and reacted in a second stage reforming reaction which substantially completes the reaction of the methanol and water remaining in the drawn-off portion. Preferably, a PrOx reactor will be included in the housing upstream of the heat exchanger to supplement the heat provided by the heat exchanger.
Makedonska, Jana; Wright, Barth W.; Strait, David S.
2012-01-01
A fundamental challenge of morphology is to identify the underlying evolutionary and developmental mechanisms leading to correlated phenotypic characters. Patterns and magnitudes of morphological integration and their association with environmental variables are essential for understanding the evolution of complex phenotypes, yet the nature of the relevant selective pressures remains poorly understood. In this study, the adaptive significance of morphological integration was evaluated through the association between feeding mechanics, ingestive behavior and craniofacial variation. Five capuchin species were examined: Cebus apella sensu stricto, Cebus libidinosus, Cebus nigritus, Cebus olivaceus and Cebus albifrons. Twenty three-dimensional landmarks were chosen to sample facial regions experiencing high strains during feeding, characteristics affecting muscular mechanical advantage, and basicranial regions. Integration structure and magnitude between and within the oral and zygomatic subunits, between and within blocks maximizing modularity, and within the face, the basicranium and the cranium were examined using partial least squares, eigenvalue variance, integration indices compared interspecifically at a common level of sampled population variance, and cluster analyses. Results are consistent with previous findings reporting a relative constancy of facial and cranial correlation patterns across mammals, while covariance magnitudes vary. Results further suggest that food material properties structure integration among functionally linked facial elements and possibly integration between the face and the basicranium. Hard-object-feeding capuchins, especially C. apella s.s., whose faces experience particularly high biomechanical loads, are characterized by higher facial and cranial integration, especially compared to C. albifrons, likely because morphotypes compromising feeding performance are selected against in species relying on obdurate fallback foods. This is the
Maxwell, Sean L; Venter, Oscar; Jones, Kendall R; Watson, James E M
2015-10-01
The impact of climate change on biodiversity is now evident, with the direct impacts of changing temperature and rainfall patterns and increases in the magnitude and frequency of extreme events on species distribution, populations, and overall ecosystem function being increasingly publicized. Changes in the climate system are also affecting human communities, and a range of human responses across terrestrial and marine realms have been witnessed, including altered agricultural activities, shifting fishing efforts, and human migration. Failing to account for the human responses to climate change is likely to compromise climate-smart conservation efforts. Here, we use a well-established conservation planning framework to show how integrating human responses to climate change into both species- and site-based vulnerability assessments and adaptation plans is possible. By explicitly taking into account human responses, conservation practitioners will improve their evaluation of species and ecosystem vulnerability, and will be better able to deliver win-wins for human- and biodiversity-focused climate adaptation.
NASA Technical Reports Server (NTRS)
Wissler, Steven S.; Maldague, Pierre; Rocca, Jennifer; Seybold, Calina
2006-01-01
The Deep Impact mission was ambitious and challenging. JPL's well proven, easily adaptable multi-mission sequence planning tools combined with integrated spacecraft subsystem models enabled a small operations team to develop, validate, and execute extremely complex sequence-based activities within very short development times. This paper focuses on the core planning tool used in the mission, APGEN. It shows how the multi-mission design and adaptability of APGEN made it possible to model spacecraft subsystems as well as ground assets throughout the lifecycle of the Deep Impact project, starting with models of initial, high-level mission objectives, and culminating in detailed predictions of spacecraft behavior during mission-critical activities.
Integrating Systems Health Management with Adaptive Controls for a Utility-Scale Wind Turbine
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Goebel, Kai; Trinh, Khanh V.; Balas, Mark J.; Frost, Alan M.
2011-01-01
Increasing turbine up-time and reducing maintenance costs are key technology drivers for wind turbine operators. Components within wind turbines are subject to considerable stresses due to unpredictable environmental conditions resulting from rapidly changing local dynamics. Systems health management aims to assess the state-of-health of components within a wind turbine, to estimate remaining life, and to aid in autonomous decision-making to minimize damage. Advanced adaptive controls can enable optimized operations and provide the enabling technology for systems health management goals. The work reported herein explores the integration of condition monitoring of wind turbine blades with contingency management and adaptive controls. Results are demonstrated using a high-fidelity simulator of a utility-scale wind turbine.
NASA Astrophysics Data System (ADS)
Zhou, Yanlai; Guo, Shenglian; Xu, Chong-Yu; Liu, Dedi; Chen, Lu; Wang, Dong
2015-12-01
Climate change, rapid economic development and growth of the human population are considered the major triggers of increasing challenges for water resources management. The proposed integrated optimal allocation model (IOAM) for a complex adaptive system of water resources management is applied in the Dongjiang River basin located in Guangdong Province, China. The IOAM is calibrated and validated for the baseline period (2010) and the future period (2011-2030), respectively. The simulation results indicate that the proposed model can make a trade-off between demand and supply for sustainable development of society, economy, ecology and environment, and can achieve adaptive management of water resources allocation. The optimal scheme derived by multi-objective evaluation is recommended to decision-makers in order to maximize the comprehensive benefits of water resources management.
Challenges in Incorporating Climate Change Adaptation into Integrated Water Resources Management
NASA Astrophysics Data System (ADS)
Kirshen, P. H.; Cardwell, H.; Kartez, J.; Merrill, S.
2011-12-01
Over the last few decades, integrated water resources management (IWRM), under various names, has become the accepted philosophy for water management in the USA. While much is still to be learned about how to actually carry it out, implementation is slowly moving forward - spurred by both legislation and the demands of stakeholders. New challenges to IWRM have arisen because of climate change. Climate change has placed increased demands on the creativity of planners and engineers because they now must design systems that will function over decades of hydrologic uncertainties that dwarf any previous hydrologic or other uncertainties. Climate and socio-economic monitoring systems must also now be established to determine when the future climate has changed sufficiently to warrant undertaking adaptation. The requirements for taking some actions now and preserving options for future actions, as well as the increased risk of social inequities in climate change impacts and adaptation, are challenging experts in stakeholder participation. To meet these challenges, an integrated methodology is essential that builds upon scenario analysis, risk assessment, statistical decision theory, participatory planning, and consensus building. This integration will require these disciplines to work across their traditional boundaries.
Adaptation of a-Stratified Method in Variable Length Computerized Adaptive Testing.
ERIC Educational Resources Information Center
Wen, Jian-Bing; Chang, Hua-Hua; Hau, Kit-Tai
Test security has often been a problem in computerized adaptive testing (CAT) because the traditional wisdom of item selection overly exposes high discrimination items. The a-stratified (STR) design advocated by H. Chang and his collaborators, which uses items of less discrimination in earlier stages of testing, has been shown to be very…
Systems and Methods for Derivative-Free Adaptive Control
NASA Technical Reports Server (NTRS)
Yucelen, Tansel (Inventor); Kim, Kilsoo (Inventor); Calise, Anthony J. (Inventor)
2015-01-01
An adaptive control system is disclosed. The control system can control uncertain dynamic systems. The control system can employ one or more derivative-free adaptive control architectures. The control system can further employ one or more derivative-free weight update laws. The derivative-free weight update laws can comprise a time-varying estimate of an ideal vector of weights. The control system of the present invention can therefore quickly stabilize systems that undergo sudden changes in dynamics, caused by, for example, sudden changes in weight. Embodiments of the present invention can also provide a less complex control system than existing adaptive control systems. The control system can control aircraft and other dynamic systems, such as, for example, those with non-minimum phase dynamics.
Study of adaptive methods for data compression of scanner data
NASA Technical Reports Server (NTRS)
1977-01-01
The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.
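The DPCM principle behind the techniques named in that study can be sketched in a few lines. This is a minimal previous-pixel DPCM coder along a scan line, encoding a quantized prediction error instead of the pixel itself; it is illustrative only, and the report's adaptive 2D DPCM additionally predicts from neighboring lines and adapts the quantizer (the `step` parameter and seed value here are arbitrary assumptions):

```python
import numpy as np

# Previous-pixel DPCM: transmit q = round((pixel - prediction)/step) and
# reconstruct closed-loop, so quantization errors do not accumulate.
def dpcm_encode_decode(row, step=4):
    recon = np.empty_like(row, dtype=np.int32)
    prev = 128                                     # fixed predictor seed
    for i, pix in enumerate(row):
        q = int(round((int(pix) - prev) / step))   # quantized prediction error
        prev = int(np.clip(prev + q * step, 0, 255))
        recon[i] = prev
    return recon

# Smooth test scan line: reconstruction error stays within half a step.
row = np.clip(128 + 40 * np.sin(np.arange(64) / 5), 0, 255).astype(np.uint8)
recon = dpcm_encode_decode(row)
print(np.max(np.abs(recon - row.astype(np.int32))))
```

Because the encoder predicts from its own reconstruction, the per-pixel error is bounded by half the quantizer step for smooth data, which is what allows coarse error symbols (hence fewer bits per pixel) without visible drift.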
NASA Astrophysics Data System (ADS)
Zhao, X. S.; Wang, J. J.; Yuan, Z. Y.; Gao, Y.
2013-10-01
The traditional geometry-based approach can maintain the characteristics of vector data. However, complex interpolation calculations limit its applications in high-resolution, multi-source spatial data integration at spherical scale in digital earth systems. To overcome this deficiency, an adaptive integration model of vector polylines and spherical DEM is presented. Firstly, the Degenerate Quadtree Grid (DQG), one of the partition models for global discrete grids, is selected as the basic framework for the adaptive integration model. Secondly, a novel shift algorithm based on DQG proximity search is put forward. The main idea of the shift algorithm is that a vector node in a DQG cell moves to the cell corner-point when the displayed area of the cell is smaller than or equal to one screen pixel, in order to find a new vector polyline approximating the original one; this avoids large numbers of interpolation calculations and achieves seamless integration. Detailed operation steps are elaborated and the complexity of the algorithm is analyzed. Thirdly, a prototype system has been developed using the VC++ language and the OpenGL 3D API. ASTER GDEM data and DCW road data sets of Jiangxi Province, China are selected to evaluate the performance. The results show that the time consumption of the shift algorithm is about 76% lower than that of the geometry-based approach. The mean shift error is analyzed along different dimensions. In the end, conclusions and future work on the integration of vector data and DEM based on discrete global grids are given.
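The core decision rule of the shift algorithm can be sketched as follows. Everything here is an illustrative assumption (the function name, the flat screen model, and the metric cell size stand in for the paper's DQG geometry and proximity search):

```python
import math

# Toy version of the shift test: if a DQG cell's on-screen footprint is at
# most one pixel, snap the vector node to the nearest cell corner instead of
# interpolating it against the DEM.
def place_node(node, cell_corners, cell_size_m, meters_per_pixel):
    if cell_size_m <= meters_per_pixel:            # cell displays as <= 1 pixel
        return min(cell_corners,
                   key=lambda c: math.dist(c, node))   # shift to nearest corner
    return node                                    # keep the original vertex

corners = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
node = (2.0, 3.0)
print(place_node(node, corners, 10.0, 50.0))   # zoomed out: node snapped
print(place_node(node, corners, 10.0, 1.0))    # zoomed in: node kept
```

The design trade-off is resolution-aware: at display scales where the cell is sub-pixel, the snapped polyline is visually indistinguishable from the interpolated one, so the expensive interpolation can be skipped entirely.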
Fan, Quan-Yong; Yang, Guang-Hong
2016-01-01
This paper is concerned with the problem of integral sliding-mode control for a class of nonlinear systems with input disturbances and unknown nonlinear terms through the adaptive actor-critic (AC) control method. The main objective is to design a sliding-mode control methodology based on the adaptive dynamic programming (ADP) method, so that the closed-loop system with time-varying disturbances is stable and the nearly optimal performance of the sliding-mode dynamics can be guaranteed. In the first step, a neural network (NN)-based observer and a disturbance observer are designed to approximate the unknown nonlinear terms and estimate the input disturbances, respectively. Based on the NN approximations and disturbance estimations, the discontinuous part of the sliding-mode control is constructed to eliminate the effect of the disturbances and attain the expected equivalent sliding-mode dynamics. Then, the ADP method with AC structure is presented to learn the optimal control for the sliding-mode dynamics online. Reconstructed tuning laws are developed to guarantee the stability of the sliding-mode dynamics and the convergence of the weights of critic and actor NNs. Finally, the simulation results are presented to illustrate the effectiveness of the proposed method.
An Adaptive Fast Multipole Boundary Element Method for Poisson−Boltzmann Electrostatics
2009-01-01
The numerical solution of the Poisson−Boltzmann (PB) equation is a useful but a computationally demanding tool for studying electrostatic solvation effects in chemical and biomolecular systems. Recently, we have described a boundary integral equation-based PB solver accelerated by a new version of the fast multipole method (FMM). The overall algorithm shows an order N complexity in both the computational cost and memory usage. Here, we present an updated version of the solver by using an adaptive FMM for accelerating the convolution type matrix-vector multiplications. The adaptive algorithm, when compared to our previous nonadaptive one, not only significantly improves the performance of the overall memory usage but also remarkably speeds the calculation because of an improved load balancing between the local- and far-field calculations. We have also implemented a node-patch discretization scheme that leads to a reduction of unknowns by a factor of 2 relative to the constant element method without sacrificing accuracy. As a result of these improvements, the new solver makes the PB calculation truly feasible for large-scale biomolecular systems such as a 30S ribosome molecule even on a typical 2008 desktop computer. PMID:19517026
An Adaptive Fast Multipole Boundary Element Method for Poisson-Boltzmann Electrostatics
Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, Jonathan
2009-01-01
The numerical solution of the Poisson Boltzmann (PB) equation is a useful but a computationally demanding tool for studying electrostatic solvation effects in chemical and biomolecular systems. Recently, we have described a boundary integral equation-based PB solver accelerated by a new version of the fast multipole method (FMM). The overall algorithm shows an order N complexity in both the computational cost and memory usage. Here, we present an updated version of the solver by using an adaptive FMM for accelerating the convolution type matrix-vector multiplications. The adaptive algorithm, when compared to our previous nonadaptive one, not only significantly improves the performance of the overall memory usage but also remarkably speeds the calculation because of an improved load balancing between the local- and far-field calculations. We have also implemented a node-patch discretization scheme that leads to a reduction of unknowns by a factor of 2 relative to the constant element method without sacrificing accuracy. As a result of these improvements, the new solver makes the PB calculation truly feasible for large-scale biomolecular systems such as a 30S ribosome molecule even on a typical 2008 desktop computer.
Physical Constraints on Biological Integral Control Design for Homeostasis and Sensory Adaptation
Ang, Jordan; McMillen, David R.
2013-01-01
Synthetic biology includes an effort to use design-based approaches to create novel controllers, biological systems aimed at regulating the output of other biological processes. The design of such controllers can be guided by results from control theory, including the strategy of integral feedback control, which is central to regulation, sensory adaptation, and long-term robustness. Realization of integral control in a synthetic network is an attractive prospect, but the nature of biochemical networks can make the implementation of even basic control structures challenging. Here we present a study of the general challenges and important constraints that will arise in efforts to engineer biological integral feedback controllers or to analyze existing natural systems. Constraints arise from the need to identify target output values that the combined process-plus-controller system can reach, and to ensure that the controller implements a good approximation of integral feedback control. These constraints depend on mild assumptions about the shape of input-output relationships in the biological components, and thus will apply to a variety of biochemical systems. We summarize our results as a set of variable constraints intended to provide guidance for the design or analysis of a working biological integral feedback controller. PMID:23442873
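The defining property of integral feedback discussed above, that steady-state output returns to the target regardless of constant disturbances, can be shown with a minimal simulation. The process model, gains, and set point below are arbitrary illustrative choices, not taken from the paper:

```python
# Hypothetical first-order process y' = -a*y + b*(u + d), regulated by an
# integral controller u' = k*(y_set - y): the time-integral of the error
# drives the control input u.
a, b, k = 1.0, 2.0, 0.5
y_set = 1.0

def simulate(d, t_end=200.0, dt=0.01):
    """Integrate the closed loop with constant disturbance d (Euler steps)."""
    y, u = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        y += dt * (-a * y + b * (u + d))
        u += dt * k * (y_set - y)
    return y

# The steady state is the set point for any constant disturbance: this is
# the perfect adaptation property of integral feedback.
print(round(simulate(d=0.0), 3))   # ~1.0
print(round(simulate(d=0.5), 3))   # ~1.0
```

At steady state the controller equation forces y_set - y = 0 exactly, which is why integral control, unlike pure proportional feedback, rejects constant disturbances with zero residual error.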
Physical constraints on biological integral control design for homeostasis and sensory adaptation.
Ang, Jordan; McMillen, David R
2013-01-22
Synthetic biology includes an effort to use design-based approaches to create novel controllers, biological systems aimed at regulating the output of other biological processes. The design of such controllers can be guided by results from control theory, including the strategy of integral feedback control, which is central to regulation, sensory adaptation, and long-term robustness. Realization of integral control in a synthetic network is an attractive prospect, but the nature of biochemical networks can make the implementation of even basic control structures challenging. Here we present a study of the general challenges and important constraints that will arise in efforts to engineer biological integral feedback controllers or to analyze existing natural systems. Constraints arise from the need to identify target output values that the combined process-plus-controller system can reach, and to ensure that the controller implements a good approximation of integral feedback control. These constraints depend on mild assumptions about the shape of input-output relationships in the biological components, and thus will apply to a variety of biochemical systems. We summarize our results as a set of variable constraints intended to provide guidance for the design or analysis of a working biological integral feedback controller.
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
Treatment of domain integrals in boundary element methods
Nintcheu Fata, Sylvain
2012-01-01
A systematic and rigorous technique to calculate domain integrals without a volume-fitted mesh has been developed and validated in the context of a boundary element approximation. In the proposed approach, a domain integral involving a continuous or weakly-singular integrand is first converted into a surface integral by means of straight-path integrals that intersect the underlying domain. Then, the resulting surface integral is carried out either via analytic integration over boundary elements or by use of standard quadrature rules. This domain-to-boundary integral transformation is derived from an extension of the fundamental theorem of calculus to higher dimension, and the divergence theorem. In establishing the method, it is shown that the higher-dimensional version of the first fundamental theorem of calculus corresponds to the well-known Poincaré lemma. The proposed technique can be employed to evaluate integrals defined over simply- or multiply-connected domains with Lipschitz boundaries which are embedded in a Euclidean space of arbitrary but finite dimension. Combined with the singular treatment of surface integrals that is widely available in the literature, this approach can also be utilized to effectively deal with boundary-value problems involving non-homogeneous source terms by way of a collocation or a Galerkin boundary integral equation method using only the prescribed surface discretization. Sample problems associated with the three-dimensional Poisson equation and featuring the Newton potential are successfully solved by a constant element collocation method to validate this study.
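The domain-to-boundary conversion described above can be illustrated schematically with a standard divergence-theorem identity; this is a textbook construction consistent with the abstract's description, not the paper's exact derivation:

```latex
% Pick a fixed unit direction \hat{e} and build F from straight-path
% integrals of f (with f extended by zero outside \Omega), so that
% \nabla \cdot F = f; the divergence theorem then moves the integral
% from the domain \Omega to its boundary \Gamma.
\mathbf{F}(\mathbf{x})
  = \hat{\mathbf{e}} \int_{0}^{\infty} f(\mathbf{x} - t\,\hat{\mathbf{e}})\, dt,
\qquad
\int_{\Omega} f \, d\Omega
  = \int_{\Omega} \nabla \cdot \mathbf{F} \, d\Omega
  = \int_{\Gamma} \mathbf{F} \cdot \mathbf{n} \, d\Gamma .
```

Differentiating the path integral along the direction of ê reproduces f at the evaluation point, so only a surface integral over the prescribed boundary discretization remains, which is the property the method exploits to avoid a volume-fitted mesh.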
AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov–Poisson equation
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin
2016-07-01
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
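The generalized finite difference (GFD) idea the abstract attributes to AP-Cloud, approximating differential operators on scattered nodes through a weighted least-squares fit, can be sketched as follows. The basis, weighting, and function names are illustrative choices, not the paper's:

```python
import numpy as np

# GFD estimate of the Laplacian at a point from scattered neighbors: fit a
# local quadratic Taylor expansion by weighted least squares and read off
# the second-derivative coefficients.
def gfd_laplacian(center, neighbors, values, value_center):
    d = neighbors - center                         # offsets, shape (m, 2)
    # unknowns: [fx, fy, fxx, fyy, fxy] of the Taylor expansion about center
    A = np.column_stack([d[:, 0], d[:, 1],
                         0.5 * d[:, 0] ** 2, 0.5 * d[:, 1] ** 2,
                         d[:, 0] * d[:, 1]])
    b = values - value_center
    w = 1.0 / np.linalg.norm(d, axis=1)            # closer nodes weigh more
    coef, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return coef[2] + coef[3]                       # fxx + fyy

# A quadratic test function is reproduced exactly: f = x^2 + 3y^2, Lap f = 8.
rng = np.random.default_rng(1)
pts = rng.uniform(-0.1, 0.1, size=(12, 2))
f = lambda p: p[..., 0] ** 2 + 3 * p[..., 1] ** 2
lap = gfd_laplacian(np.zeros(2), pts, f(pts), 0.0)
print(round(lap, 6))   # → 8.0
```

Because the stencil is assembled from whatever neighbors exist, no mesh or geometric regularity is required, which is what makes the operator independent of the domain shape, as the abstract emphasizes.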
AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov-Poisson equation
NASA Astrophysics Data System (ADS)
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin
2016-07-01
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov-Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
Inner string cementing adapter and method of use
Helms, L.C.
1991-08-20
This patent describes an inner string cementing adapter for use on a work string in a well casing having floating equipment therein. It comprises mandrel means for connecting to a lower end of the work string; and sealing means adjacent to the mandrel means for substantially flatly sealing against a surface of the floating equipment without engaging a central opening in the floating equipment.
Integrating Formal Methods and Testing 2002
NASA Technical Reports Server (NTRS)
Cukic, Bojan
2002-01-01
Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or better). The coming years shall address methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The objectives are to: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level; and D) quantify and justify the reliability estimate for systems developed using various methods.
Integrated method for chaotic time series analysis
Hively, L.M.; Ng, E.G.
1998-09-29
Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process, by monitoring nonlinear data, are disclosed. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time-serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated. 8 figs.
Integrated method for chaotic time series analysis
Hively, Lee M.; Ng, Esmond G.
1998-01-01
Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process by monitoring nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time-serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated.
A New Method to Cancel RFI---The Adaptive Filter
NASA Astrophysics Data System (ADS)
Bradley, R.; Barnbaum, C.
1996-12-01
An increasing amount of precious radio frequency spectrum in the VHF, UHF, and microwave bands is being utilized each year to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Some radio spectral lines of astronomical interest occur outside the protected radio astronomy bands and are unobservable due to heavy interference. Conventional approaches to deal with RFI include legislation, notch filters, RF shielding, and post-processing techniques. Although these techniques are somewhat successful, each suffers from insufficient interference cancellation. One concept of interference excision that has not been used before in radio astronomy is adaptive interference cancellation. The concept of adaptive interference canceling was first introduced in the mid-1970s as a way to reduce unwanted noise in low frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartment of automobiles. Only recently have high-speed digital filter chips made adaptive filtering possible in a bandwidth as large as a few megahertz, finally opening the door to astronomical uses. The system consists of two receivers: the main beam of the radio telescope receives the desired signal corrupted by RFI coming in the sidelobes, and the reference antenna receives only the RFI. The reference antenna is processed using a digital adaptive filter and then subtracted from the signal in the main beam, thus producing the system output. The weights of the digital filter are adjusted by way of an algorithm that minimizes, in a least-squares sense, the power output of the system. Through an adaptive-iterative process, the interference canceler will lock onto the RFI and the filter will adjust itself to minimize the effect of the RFI at the system output. We are building a prototype 100 MHz receiver and will measure the cancellation
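The two-receiver scheme described above is the classic adaptive noise canceller, and its weight-update loop can be sketched with a least-mean-squares (LMS) filter. All signals, tap counts, and step sizes below are toy assumptions for illustration, not the prototype's parameters:

```python
import numpy as np

# Main channel: desired signal plus filtered interference (RFI through the
# sidelobes). Reference channel: the interference alone. The LMS filter
# learns the sidelobe response and subtracts the estimated RFI.
rng = np.random.default_rng(0)
n = 20000
t = np.arange(n)
signal = np.sin(2 * np.pi * 0.01 * t)                      # desired signal
rfi = rng.standard_normal(n)                               # broadband RFI
main = signal + np.convolve(rfi, [0.8, 0.4], mode="same")  # main beam input
ref = rfi                                                  # reference antenna

taps, mu = 4, 0.01
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps, n):
    x = ref[i - taps + 1:i + 1][::-1]      # most recent reference samples
    out[i] = main[i] - w @ x               # subtract estimated RFI
    w += mu * out[i] * x                   # LMS update: minimize output power

# Minimizing output power cancels only the RFI, because the desired signal
# is uncorrelated with the reference channel.
err_before = np.mean((main[-5000:] - signal[-5000:]) ** 2)
err_after = np.mean((out[-5000:] - signal[-5000:]) ** 2)
print(err_after < 0.1 * err_before)
```

This is why the reference antenna must see the RFI but not the astronomical signal: any signal leakage into the reference would be partially cancelled along with the interference.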
Adaptation of an ethnographic method for investigation of the task domain in diagnostic radiology
NASA Astrophysics Data System (ADS)
Ramey, Judith A.; Rowberg, Alan H.; Robinson, Carol
1992-07-01
A number of user-centered methods for designing radiology workstations have been described by researchers at Carleton University (Ottawa), Georgetown University, George Washington University, and University of Arizona, among others. The approach described here differs in that it enriches standard human-factors practices with methods adapted from ethnography to study users (in this case, diagnostic radiologists) as members of a distinct culture. The overall approach combines several methods; the core method, based on ethnographic "stream of behavior chronicles" and their analysis, has four phases: (1) we gather the stream of behavior by videotaping a radiologist as he or she works; (2) we view the tape ourselves and formulate questions and hypotheses about the work; (3) in a second videotaped session, we show the radiologist the original tape and ask for a running commentary on the work, into which, at the appropriate points, we interject our questions for clarification; and (4) we categorize/index the behavior on the "raw data" tapes for various kinds of follow-on analysis. We describe and illustrate this method in detail, describe how we analyze the "raw data" videotapes and the commentary tapes, and explain how the method can be integrated into an overall user-centered design process based on standard human-factors techniques.
The use of the spectral method within the fast adaptive composite grid method
McKay, S.M.
1994-12-31
The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on grids with different discretizations, using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers that construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the resulting accuracy of this hybrid method outside the subdomain will be investigated.
Kedalion: NASA's Adaptable and Agile Hardware/Software Integration and Test Lab
NASA Technical Reports Server (NTRS)
Mangieri, Mark L.; Vice, Jason
2011-01-01
NASA's Kedalion engineering analysis lab at Johnson Space Center is on the forefront of validating and using many contemporary avionics hardware/software development and integration techniques, which represent new paradigms to heritage NASA culture. Kedalion has validated many of the Orion hardware/software engineering techniques borrowed from the adjacent commercial aircraft avionics solution space, with the intention to build upon such techniques to better align with today's aerospace market. Using agile techniques, commercial products, early rapid prototyping, in-house expertise and tools, and customer collaboration, Kedalion has demonstrated that cost-effective contemporary paradigms hold the promise to serve future NASA endeavors within a diverse range of system domains. Kedalion provides a readily adaptable solution for medium/large scale integration projects. The Kedalion lab is currently serving as an in-line resource for the project and the Multipurpose Crew Vehicle (MPCV) program.
Integrated management of thesis using clustering method
NASA Astrophysics Data System (ADS)
Astuti, Indah Fitri; Cahyadi, Dedy
2017-02-01
The thesis is one of the major requirements for students pursuing a bachelor degree. In fact, finishing the thesis involves a long process including consultation, writing the manuscript, conducting the chosen method, seminar scheduling, searching for references, and the appraisal process by the board of mentors and examiners. Unfortunately, many students find it hard to match all the lecturers' free time to sit together in a seminar room in order to examine the thesis. Therefore, the seminar scheduling process should be a top priority to be solved. A manual mechanism for this task no longer fulfills the need. People on campus, including students, staff, and lecturers, demand a system in which all the stakeholders can interact with each other and manage the thesis process without conflicting timetables. A branch of computer science named Management Information Systems (MIS) could be a breakthrough in dealing with thesis management. This research applies a method called clustering to distinguish certain categories using mathematical formulas. A system was then developed along with the method to create a well-managed tool providing some main facilities such as seminar scheduling, consultation and review process, thesis approval, assessment process, and also a reliable database of theses. The database plays an important role for present and future purposes.
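The abstract names clustering but not a specific algorithm. A plain k-means routine is one common choice; the sketch below groups hypothetical thesis records by two made-up features (e.g., weeks elapsed and chapters approved) purely to illustrate the idea.

```python
import random

def kmeans(points, k, iters=50, seed=1):
    """Minimal k-means: assign each point to its nearest centroid, then re-average."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            groups[i].append(p)
        # Recompute each centroid as the mean of its group (keep it if the group is empty).
        centroids = [
            tuple(sum(c) / len(g) for c in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids, groups

# Hypothetical thesis records: (weeks elapsed, chapters approved) — two obvious groups.
points = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centroids, groups = kmeans(points, 2)
print(centroids)
```

In a real thesis-management system the features, the number of clusters, and the distance measure would all come from the administrative data being categorized.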
Damping identification in frequency domain using integral method
NASA Astrophysics Data System (ADS)
Guo, Zhiwei; Sheng, Meiping; Ma, Jiangang; Zhang, Wulin
2015-03-01
A new method for damping identification of linear systems in the frequency domain is presented, using the frequency response function (FRF) with an integral method. The FRF curve is first transformed to another type of frequency-related curve by changing the representations of the horizontal and vertical axes. For the newly constructed frequency-related curve, integration is conducted and the area formed by the new curve is used to determine the damping. Three different integral-based methods are proposed in this paper, called the FDI-1, FDI-2 and FDI-3 methods, respectively. For a single degree of freedom (Sdof) system, the relation between integrated area and loss factor is derived theoretically for each method. Numerical simulation and experimental results show that the proposed integral methods have high precision, strong noise resistance, and are very stable in repeated measurements. Among the three integral methods, the FDI-3 method is the most recommended because of its higher accuracy and simpler algorithm. The new methods are limited to linear systems in which modes are well separated; for systems with closely spaced modes, a mode decomposition process should be conducted first.
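The core idea — that the area under an FRF-derived curve determines the damping — can be illustrated numerically. The sketch below is an analogue of this approach, not the paper's FDI-1/2/3 formulas: it uses the classical closed form for a viscously damped Sdof receptance, ∫|H(ω)|²dω over (−∞, ∞) = π/(kc), to recover the damping from a numerically integrated FRF; all parameter values are invented.

```python
import numpy as np

# Sdof receptance H(w) = 1 / (k - m w^2 + i c w); classical closed form:
# integral over (-inf, inf) of |H(w)|^2 dw = pi / (k c).
m = 1.0
f0 = 50.0                        # natural frequency, Hz
w0 = 2.0 * np.pi * f0
k = m * w0**2
zeta_true = 0.02                 # viscous damping ratio
c = 2.0 * zeta_true * m * w0

# Squared FRF magnitude on a dense frequency grid (rad/s).
w = np.linspace(0.0, 20.0 * w0, 400001)
H2 = 1.0 / ((k - m * w**2) ** 2 + (c * w) ** 2)

# Trapezoid rule; |H|^2 is even in w, so double the half-line integral.
dw = w[1] - w[0]
area = 2.0 * dw * (H2.sum() - 0.5 * (H2[0] + H2[-1]))

c_est = np.pi / (k * area)       # invert the closed form for the damping
zeta_est = c_est / (2.0 * m * w0)
print(zeta_true, zeta_est)
```

Because the integral aggregates the whole curve rather than reading off a peak height, area-based estimates of this kind tend to be robust to measurement noise, which matches the noise resistance claimed in the abstract.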
An Adaptive Intelligent Integrated Lighting Control Approach for High-Performance Office Buildings
NASA Astrophysics Data System (ADS)
Karizi, Nasim
An acute and crucial societal problem is the energy consumed in existing commercial buildings. There are 1.5 million commercial buildings in the U.S., with only about 3% being built each year. Hence, existing buildings need to be properly operated and maintained for several decades. Application of integrated centralized control systems in buildings could lead to more than 50% energy savings. This research work demonstrates an innovative adaptive integrated lighting control approach which could achieve significant energy savings and increase indoor comfort in high-performance office buildings. In the first phase of the study, a predictive algorithm was developed and validated through experiments in an actual test room. The objective was to regulate daylight on a specified work plane by controlling the blind slat angles. Furthermore, a sensor-based integrated adaptive lighting controller was designed in Simulink, which included an innovative sensor optimization approach based on a genetic algorithm to minimize the number of sensors and efficiently place them in the office. The controller was designed based on simple integral controllers. The objective of the developed control algorithm was to improve the illuminance situation in the office by controlling the daylight and electrical lighting. To evaluate the performance of the system, the controller was applied to the experimental office model from Lee et al.'s 1998 research study. The results of the developed control approach indicate a significant improvement in the lighting situation, with monthly electrical energy savings of 1-23% and 50-78% in the office model compared to two static strategies in which the blinds were left open or closed for the whole year, respectively.
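The "simple integral controllers" mentioned above accumulate the illuminance error until the measured level matches the setpoint. A minimal discrete-time sketch, with an invented plant gain and daylight contribution standing in for the office model:

```python
# Discrete-time integral controller driving measured illuminance to a setpoint.
# Plant gain and daylight level are hypothetical placeholders for the office model.
def run(setpoint=500.0, ki=0.05, steps=300):
    gain = 2.0          # lux produced per unit of electric-light command (assumed)
    daylight = 150.0    # daylight contribution on the work plane, lux (assumed)
    u = 0.0             # controller output (dimming command)
    lux = daylight      # measured illuminance
    for _ in range(steps):
        error = setpoint - lux
        u += ki * error               # integral action accumulates the error
        lux = daylight + gain * u     # plant response
    return lux

print(run())
```

Integral action guarantees zero steady-state error for a constant setpoint: if daylight changes, the accumulated term re-adjusts the electric lighting until the error vanishes, provided the loop gain (here `gain * ki = 0.1`) keeps the update stable.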
Exponential Methods for the Time Integration of Schroedinger Equation
Cano, B.; Gonzalez-Pachon, A.
2010-09-30
We consider exponential methods of second order in time in order to integrate the cubic nonlinear Schroedinger equation. We are interested in taking advantage of the special structure of this equation, so we examine the symmetry, symplecticity, and invariant-approximation properties of the proposed methods, which allow integration to long times with reasonable accuracy. Computational efficiency is also our aim. We therefore perform numerical computations to compare the methods considered, and conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool for integrating this equation.
Adaptive finite element methods for two-dimensional problems in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1994-01-01
Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.
A Rationale for Mixed Methods (Integrative) Research Programmes in Education
ERIC Educational Resources Information Center
Niaz, Mansoor
2008-01-01
Recent research shows that research programmes (quantitative, qualitative and mixed) in education are not displaced (as suggested by Kuhn) but rather lead to integration. The objective of this study is to present a rationale for mixed methods (integrative) research programs based on contemporary philosophy of science (Lakatos, Giere, Cartwright,…
Integration of Online Parameter Identification and Neural Network for In-Flight Adaptive Control
NASA Technical Reports Server (NTRS)
Hageman, Jacob J.; Smith, Mark S.; Stachowiak, Susan
2003-01-01
An indirect adaptive system has been constructed for robust control of an aircraft with uncertain aerodynamic characteristics. This system consists of a multilayer perceptron pre-trained neural network, online stability and control derivative identification, a dynamic cell structure online learning neural network, and a model following control system based on the stochastic optimal feedforward and feedback technique. The pre-trained neural network and model following control system have been flight-tested, but the online parameter identification and online learning neural network are new additions used for in-flight adaptation of the control system model. A description of the modification and integration of these two stand-alone software packages into the complete system in preparation for initial flight tests is presented. Open-loop results using both simulation and flight data, as well as closed-loop performance of the complete system in a nonlinear, six-degree-of-freedom, flight validated simulation, are analyzed. Results show that this online learning system, in contrast to the nonlearning system, has the ability to adapt to changes in aerodynamic characteristics in a real-time, closed-loop, piloted simulation, resulting in improved flying qualities.
NASA Astrophysics Data System (ADS)
Huggel, Christian
2010-05-01
Over centuries, Andean communities have developed strategies to cope with climate variability and extremes, such as cold waves or droughts, which can have severe impacts on their welfare. Nevertheless, the rural population, living at altitudes of 3000 to 4000 m asl or even higher, remains highly vulnerable to external stresses, partly because of the extreme living conditions, partly as a consequence of high poverty. Moreover, recent studies indicate that climatic extreme events have increased in frequency in recent years. A Peruvian-Swiss Climate Change Adaptation Programme in Peru (PACC) is currently undertaking strong efforts to understand the links between climatic conditions and local livelihood assets. The goal is to propose viable strategies for adaptation in collaboration with the local population and governments. The program considers three main areas of action: (i) water resource management; (ii) disaster risk reduction; and (iii) food security. The scientific studies carried out within the programme follow a highly transdisciplinary approach, spanning the whole range from the natural to the social sciences. Moreover, the scientific Peruvian-Swiss collaboration is closely connected to people and institutions operating at the implementation and political level. In this contribution we report on first results of thematic studies, address critical questions, and outline the potential of integrative research for climate change adaptation in mountain regions in the context of a developing country.
Schiffer, Anne-Marike; Siletti, Kayla; Waszak, Florian; Yeung, Nick
2017-02-01
In any non-deterministic environment, unexpected events can indicate true changes in the world (and require behavioural adaptation) or reflect chance occurrence (and must be discounted). Adaptive behaviour requires distinguishing these possibilities. We investigated how humans achieve this by integrating high-level information from instruction and experience. In a series of EEG experiments, instructions modulated the perceived informativeness of feedback: Participants performed a novel probabilistic reinforcement learning task, receiving instructions about reliability of feedback or volatility of the environment. Importantly, our designs de-confound informativeness from surprise, which typically co-vary. Behavioural results indicate that participants used instructions to adapt their behaviour faster to changes in the environment when instructions indicated that negative feedback was more informative, even if it was simultaneously less surprising. This study is the first to show that neural markers of feedback anticipation (stimulus-preceding negativity) and of feedback processing (feedback-related negativity; FRN) reflect informativeness of unexpected feedback. Meanwhile, changes in P3 amplitude indicated imminent adjustments in behaviour. Collectively, our findings provide new evidence that high-level information interacts with experience-driven learning in a flexible manner, enabling human learners to make informed decisions about whether to persevere or explore new options, a pivotal ability in our complex environment.
Integration of proteomics and metabolomics to elucidate metabolic adaptation in Leishmania.
Akpunarlieva, Snezhana; Weidt, Stefan; Lamasudin, Dhilia; Naula, Christina; Henderson, David; Barrett, Michael; Burgess, Karl; Burchmore, Richard
2017-02-23
Leishmania parasites multiply and develop in the gut of a sand fly vector in order to be transmitted to a vertebrate host. During this process they encounter and exploit various nutrients, including sugars, and amino and fatty acids. We have previously generated a mutant Leishmania line that is deficient in glucose transport and which displays some biologically important phenotypic changes such as reduced growth in axenic culture, reduced biosynthesis of hexose-containing virulence factors, increased sensitivity to oxidative stress, and dramatically reduced parasite burden in both insect vector and macrophage host cells. Here we report the generation and integration of proteomic and metabolomic approaches to identify molecular changes that may explain these phenotypes. Our data suggest changes in pathways of glycoconjugate production and redox homeostasis, which likely represent adaptations to the loss of sugar uptake capacity and explain the reduced virulence of this mutant in sand flies and mammals. Our data contribute to understanding the mechanisms of metabolic adaptation in Leishmania and illustrate the power of integrated proteomic and metabolomic approaches to relate biochemistry to phenotype.
NASA Astrophysics Data System (ADS)
Zhang, Lin-Lin; Yuan, Shi-Jin; Mu, Bin; Zhou, Fei-Fan
2017-02-01
In this paper, conditional nonlinear optimal perturbation (CNOP) was investigated to identify sensitive areas for tropical cyclone adaptive observations with a principal component analysis based genetic algorithm (PCAGA) method, and two tropical cyclones, Fitow (2013) and Matmo (2014), were studied at a 120 km resolution using the fifth-generation Mesoscale Model (MM5). To verify the effectiveness of the PCAGA method, CNOPs were also calculated by an adjoint-based method as a benchmark for comparison of patterns, energies, and vertical distributions of temperature. Compared with the benchmark, the CNOPs obtained from PCAGA had similar patterns for Fitow and slightly different patterns for Matmo; the vertically integrated energies were located closer to the verification areas and the initial tropical cyclones. Experimental results also showed that the CNOPs of PCAGA had a more positive impact on forecast improvement, gained from the reductions of the CNOPs in the whole domain containing sensitive areas. Furthermore, the PCAGA program was executed 40 times for each case, and all the averages of benefits were larger than the benchmark, which also demonstrated the validity and stability of the PCAGA method. All results showed that the PCAGA method can approximately solve the CNOP of complicated models without computing adjoint models, and obtain greater benefits from reducing the CNOPs in the whole domain.
Integrated analysis considered mitigation cost, damage cost and adaptation cost in Northeast Asia
NASA Astrophysics Data System (ADS)
Park, J. H.; Lee, D. K.; Kim, H. G.; Sung, S.; Jung, T. Y.
2015-12-01
Various studies show that climate change causes rising temperatures as well as storms, cold snaps, heavy rain, and drought, and that a variety of disasters have damaged human society. The World Risk Report (2012, The Nature Conservancy) and UNU-EHS (the United Nations University Institute for Environment and Human Security) reported that more and more people are exposed to abnormal weather such as floods, drought, earthquakes, typhoons and hurricanes around the world. In particular, Korea is influenced by various pollutants originating in the Northeast Asian countries of China and Japan, owing to its geographical and meteorological characteristics. These contaminants, together with the pollutants generated in Korea, have had a significant impact on air quality. Recently, countries around the world have continued their efforts to reduce greenhouse gases and to improve air quality in conjunction with national or regional development priorities. China is likewise making various efforts, in line with these international trends, to cope with climate change and air pollution. In the future, the effects of climate change and air quality in Korea and Northeast Asia will change greatly according to China's growth and mitigation policies. The purpose of this study is to minimize the damage caused by climate change on the Korean peninsula through an integrated approach taking into account both mitigation and adaptation plans. This study will suggest a climate change strategy at the national level by means of a comprehensive economic analysis of the impacts and mitigation of climate change. In order to quantify the impacts and damage costs caused by climate change scenarios at a regional scale, priority variables should be selected in accordance with climate change impact assessment. A sectoral impact assessment was carried out on the basis of the selected variables, and through this a methodology for estimating damage cost and adaptation cost was derived. The methodology was then applied in Korea
A new adaptive time step method for unsteady flow simulations in a human lung.
Fenández-Tena, Ana; Marcos, Alfonso C; Martínez, Cristina; Keith Walters, D
2017-04-07
The innovation presented is a method for adaptive time-stepping that allows clustering of time steps in portions of the cycle for which flow variables are rapidly changing, based on the concept of using a uniform step in a relevant dependent variable rather than a uniform step in the independent variable time. A user-defined function was developed to adapt the magnitude of the time step (adaptive time step) to a defined rate of change in inlet velocity. Quantitative comparison indicates that the new adaptive time stepping method significantly improves accuracy for simulations using an equivalent number of time steps per cycle.
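The idea of a uniform step in a dependent variable can be sketched directly: pick the time step so that each step produces roughly the same change in the monitored variable, which automatically clusters steps where that variable changes fast. The waveform, target increment, and step bounds below are invented for illustration; they are not the paper's lung-flow settings.

```python
import math

# Inlet velocity over one breathing-like cycle (hypothetical waveform).
T = 4.0
def v(t):
    return math.sin(2.0 * math.pi * t / T)

def dvdt(t):
    return (2.0 * math.pi / T) * math.cos(2.0 * math.pi * t / T)

def adaptive_steps(dv_target=0.02, dt_min=1e-4, dt_max=0.1):
    """Choose dt so each step changes v by about dv_target: uniform step in v, not in t."""
    t, times = 0.0, [0.0]
    while t < T:
        dt = dv_target / max(abs(dvdt(t)), 1e-12)  # time to change v by dv_target
        dt = min(max(dt, dt_min), dt_max)          # clamp to sane bounds
        t += dt
        times.append(t)
    return times

steps = adaptive_steps()
print(len(steps))
```

Near the zero crossings, where the velocity changes fastest, the steps shrink toward `dv_target / max|dv/dt|`; near the flow peaks they relax to `dt_max`, exactly the clustering behavior the abstract describes.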
NASA Astrophysics Data System (ADS)
Bussetta, Philippe; Marceau, Daniel; Ponthot, Jean-Philippe
2012-02-01
The aim of this work is to propose a new numerical method for solving the mechanical frictional contact problem in the general case of multiple bodies in three-dimensional space. This method is called the adapted augmented Lagrangian method (AALM) and can be used in a multi-physical context (such as thermo-electro-mechanical field problems). This paper presents this new method and its advantages over other classical methods such as the penalty method (PM), the adapted penalty method (APM) and the augmented Lagrangian method (ALM). In addition, the efficiency and the reliability of the AALM are proved with some academic problems and an industrial thermo-electro-mechanical problem.
NASA Astrophysics Data System (ADS)
Ushaq, Muhammad; Fang, Jiancheng
2013-10-01
Integrated navigation systems for various applications generally employ the centralized Kalman filter (CKF), wherein all measured sensor data are communicated to a single central Kalman filter. The advantage of the CKF is minimal loss of information and high precision under benign conditions, but it may suffer from computational overload and poor fault tolerance. The alternative is the federated Kalman filter (FKF), wherein local estimates deliver an optimal or suboptimal state estimate according to a certain information fusion criterion. The FKF has enhanced throughput and multiple-level fault detection capability. The standard CKF and FKF require that the system noise and the measurement noise be zero-mean and Gaussian, and assume that the covariances of the system and measurement noises remain constant. If the theoretical and actual statistical features employed in the Kalman filter are not compatible, the Kalman filter does not render satisfactory solutions and divergence problems can occur. To resolve such problems, in this paper, an adaptive Kalman filter scheme strengthened with a fuzzy inference system (FIS) is employed to adapt the statistical features of contributing sensors, online, in the light of real system dynamics and varying measurement noises. Excessive faults are detected and isolated by employing the chi-square test method. As a case study, the presented scheme has been implemented on a Strapdown Inertial Navigation System (SINS) integrated with the Celestial Navigation System (CNS), GPS and Doppler radar using the FKF; collectively, the overall system can be termed a SINS/CNS/GPS/Doppler integrated navigation system. The simulation results have validated the effectiveness of the presented scheme with significantly enhanced precision, reliability and fault tolerance. The effectiveness of the scheme has been tested against simulated abnormal errors/noises during different time segments of flight. It is believed that the presented scheme can be
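The chi-square fault test mentioned above is conventionally applied to the filter innovation: the normalized squared innovation ν²/S follows a chi-square distribution when the measurement is healthy, so an outlier flags a fault and the update can be skipped. The scalar filter below is a toy stand-in, not the SINS/CNS/GPS/Doppler system; noise levels, fault size, and threshold are illustrative (6.63 is the 99% point of chi-square with 1 degree of freedom).

```python
import numpy as np

# Scalar constant-value Kalman filter with an innovation chi-square fault test.
q, r = 1e-4, 0.04              # process / measurement noise variances (assumed)
x, p = 0.0, 1.0                # state estimate and its variance
truth = 1.0
rng = np.random.default_rng(2)
flags = []
for k in range(200):
    z = truth + rng.normal(0.0, np.sqrt(r))
    if 100 <= k < 110:
        z += 2.0               # injected fault: a sensor bias burst
    p = p + q                  # predict (state is constant)
    nu = z - x                 # innovation
    s = p + r                  # innovation variance
    faulty = nu * nu / s > 6.63   # chi-square test, 1 dof, ~99% threshold
    flags.append(bool(faulty))
    if not faulty:             # isolate the faulty measurement: skip the update
        g = p / s
        x = x + g * nu
        p = (1.0 - g) * p
print(x, sum(flags))
```

Because faulty measurements are excluded from the update, the state estimate stays near the truth through the fault window, which is the isolation behavior the abstract relies on.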
Neuronal Spike Timing Adaptation Described with a Fractional Leaky Integrate-and-Fire Model
Teka, Wondimu; Marinov, Toma M.; Santamaria, Fidel
2014-01-01
The voltage trace of neuronal activities can follow multiple timescale dynamics that arise from correlated membrane conductances. Such processes can result in power-law behavior in which the membrane voltage cannot be characterized with a single time constant. The emergent effect of these membrane correlations is a non-Markovian process that can be modeled with a fractional derivative. A fractional derivative is a non-local process in which the value of the variable is determined by integrating a temporal weighted voltage trace, also called the memory trace. Here we developed and analyzed a fractional leaky integrate-and-fire model in which the exponent of the fractional derivative can vary from 0 to 1, with 1 representing the normal derivative. As the exponent of the fractional derivative decreases, the weights of the voltage trace increase. Thus, the value of the voltage is increasingly correlated with the trajectory of the voltage in the past. By varying only the fractional exponent, our model can reproduce upward and downward spike adaptations found experimentally in neocortical pyramidal cells and tectal neurons in vitro. The model also produces spikes with longer first-spike latency and high inter-spike variability with power-law distribution. We further analyze spike adaptation and the responses to noisy and oscillatory input. The fractional model generates reliable spike patterns in response to noisy input. Overall, the spiking activity of the fractional leaky integrate-and-fire model deviates from the spiking activity of the Markovian model and reflects the temporal accumulated intrinsic membrane dynamics that affect the response of the neuron to external stimulation. PMID:24675903
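The fractional derivative described above is commonly discretized with Grünwald-Letnikov binomial weights, which turn the update into a weighted sum over the whole voltage history (the "memory trace"). The sketch below is an illustrative implementation of that idea, not the authors' exact code; all neuron parameters are placeholders, and the voltage is measured relative to rest so that the initial condition is handled simply.

```python
import numpy as np

def fractional_lif(alpha=0.8, T=200.0, dt=0.1, I=3.0):
    """Fractional leaky integrate-and-fire via a Grunwald-Letnikov discretization.
    alpha = 1 recovers the ordinary (Markovian) LIF; alpha < 1 makes each new
    voltage depend on the weighted past trace. Parameter values are illustrative."""
    tau, R = 10.0, 10.0              # membrane time constant (ms), resistance (assumed)
    u_th, u_reset = 20.0, 0.0        # voltage relative to rest (mV)
    n = int(T / dt)
    # GL weights: c_0 = 1, c_k = (1 - (1 + alpha)/k) * c_{k-1}; for alpha = 1
    # these reduce to (1, -1, 0, 0, ...), i.e. the ordinary forward difference.
    c = np.ones(n + 1)
    for k in range(1, n + 1):
        c[k] = (1.0 - (1.0 + alpha) / k) * c[k - 1]
    u = np.zeros(n + 1)
    spikes = []
    for j in range(n):
        f = (-u[j] + R * I) / tau
        # memory trace: c_1 u_j + c_2 u_{j-1} + ... + c_{j+1} u_0
        memory = np.dot(c[1:j + 2], u[j::-1])
        u[j + 1] = dt**alpha * f - memory
        if u[j + 1] >= u_th:
            spikes.append((j + 1) * dt)
            u[j + 1] = u_reset
    return spikes

print(len(fractional_lif(alpha=1.0)), len(fractional_lif(alpha=0.8)))
```

With `alpha = 1` the scheme collapses to the standard Euler LIF and fires regularly; lowering `alpha` weights the past trajectory into every update, producing the history-dependent spike timing the abstract describes.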
Surface estimation methods with phased-arrays for adaptive ultrasonic imaging in complex components
NASA Astrophysics Data System (ADS)
Robert, S.; Calmon, P.; Calvo, M.; Le Jeune, L.; Iakovleva, E.
2015-03-01
Immersion ultrasonic testing of structures with complex geometries may be significantly improved by using phased arrays and specific adaptive algorithms that allow flaws to be imaged under a complex and unknown interface. In this context, this paper presents a comparative study of the different Surface Estimation Methods (SEM) available in the CIVA software and used for adaptive imaging. These methods are based either on time-of-flight measurements or on image processing. We also introduce a generalized adaptive method in which flaws may be fully imaged with half-skip modes. In this method, both the surface and the back wall of a complex structure are estimated before imaging flaws.
Lingel, Christian; Haist, Tobias; Osten, Wolfgang
2016-12-20
We propose an adaptive optical setup using a spatial light modulator (SLM), which is suitable for performing different phase retrieval methods with varying optical features and without mechanical movement. By this approach, it is possible to test many different phase retrieval methods and their parameters (optical and algorithmic) using one stable setup and without hardware adaptation. We show exemplary results for the well-known transport of intensity equation (TIE) method and a new iterative adaptive phase retrieval method, in which the object phase is canceled by an inverse phase written into part of the SLM. The measurement results are compared to white light interferometric measurements.
NASA Astrophysics Data System (ADS)
Xie, Guizhong; Zhang, Dehai; Zhang, Jianming; Meng, Fannian; Du, Wenliao; Wen, Xiaoyu
2016-12-01
As a widely used numerical method, the boundary element method (BEM) is efficient for computer aided engineering (CAE). However, boundary integrals with near singularity need to be calculated accurately and efficiently to implement BEM successfully for CAE analysis of thin bodies. In this paper, the distance in the denominator of the fundamental solution is first rewritten in an equivalent form using an approximate expansion, and the original sinh method is revised into a new form considering the minimum distance and the approximate expansion. Second, the acquisition of the projection point by the Newton-Raphson method is introduced. We obtain the nearest point between the source point and the element edge by solving a cubic equation if the projection point lies outside the element, where boundary integrals with near singularity appear. Finally, the subtriangles of the local coordinate space are mapped into the integration space and the sinh method is applied in the integration space. The revised sinh method can be performed directly in the integration element. A verification test of our method is presented. Results demonstrate that our method is effective for regularizing boundary integrals with near singularity.
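The underlying sinh transformation can be shown on a model nearly singular integral. This is the textbook one-dimensional version of the technique, not the paper's revised method: the substitution x = a + b·sinh(u) absorbs the near-singular denominator (x − a)² + b², leaving a smooth integrand that ordinary Gauss-Legendre quadrature handles, while plain quadrature on the original variable misses the sharp peak when b is small.

```python
import numpy as np

# Nearly singular model integral: I = ∫_{-1}^{1} dx / ((x - a)^2 + b^2), b << 1.
a, b = 0.3, 1e-3
exact = (np.arctan((1 - a) / b) + np.arctan((1 + a) / b)) / b

nodes, weights = np.polynomial.legendre.leggauss(32)

# Plain 32-point Gauss-Legendre in x: the peak of width ~b near x = a is missed.
plain = np.sum(weights / ((nodes - a) ** 2 + b ** 2))

# sinh substitution x = a + b*sinh(u): dx = b*cosh(u) du and the integrand
# becomes 1/(b*cosh(u)), smooth on the transformed interval.
u1 = np.arcsinh((-1 - a) / b)
u2 = np.arcsinh((1 - a) / b)
u = 0.5 * (u2 - u1) * nodes + 0.5 * (u2 + u1)
sinh_val = 0.5 * (u2 - u1) * np.sum(weights / (b * np.cosh(u)))

print(exact, plain, sinh_val)
```

The transformed rule recovers the integral to a small fraction of a percent with the same node count, which is why sinh-type substitutions are the standard remedy for near-singular boundary integrals on thin bodies.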
Assembly and method for testing the integrity of stuffing tubes
Morrison, E.F.
1997-08-26
A stuffing tube integrity checking assembly includes first and second annular seals, with each seal adapted to be positioned about a stuffing tube penetration component. An annular inflation bladder is provided, the bladder having a slot extending longitudinally therealong and including a separator for sealing the slot. A first valve is in fluid communication with the bladder for introducing pressurized fluid to the space defined by the bladder when mounted about the tube. First and second releasable clamps are provided. Each clamp assembly is positioned about the bladder for securing the bladder to one of the seals, thereby establishing a fluid-tight chamber about the tube. 5 figs.
Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique
2011-05-01
In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
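Voltage stepping, as described above, replaces a uniform time step with a uniform step in the membrane voltage, so the time step adapts as dt = ΔV / f(V). A minimal sketch for a single quadratic integrate-and-fire neuron (a toy version, not the paper's DEVS network framework; the model dV/dt = V² + I with blow-up treated as a spike has a closed-form spike time to check against):

```python
import math

# Quadratic integrate-and-fire toy model: dV/dt = V^2 + I, spike when V reaches v_peak.
I, v_reset, v_peak = 1.0, -5.0, 30.0

def spike_time_voltage_stepping(dv=0.01):
    """March in uniform voltage increments; the time step adapts as dt = dv / f(V)."""
    t, v = 0.0, v_reset
    while v < v_peak:
        f = v * v + I        # current slope of the voltage
        t += dv / f          # time needed to climb by dv at that slope
        v += dv
    return t

t_vs = spike_time_voltage_stepping()
# Closed form for this model: t = [arctan(V/sqrt(I)) / sqrt(I)] between the bounds.
t_exact = (math.atan(v_peak / math.sqrt(I)) - math.atan(v_reset / math.sqrt(I))) / math.sqrt(I)
print(t_vs, t_exact)
```

Near the spike, where f(V) diverges, the time steps shrink automatically, so the scheme tracks the fast upstroke without wasting small steps during the slow subthreshold phase; this is the property that makes voltage stepping attractive for nonlinear spiking neurons.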
Agarwal, Animesh; Delle Site, Luigi
2015-09-07
Quantum effects due to the spatial delocalization of light atoms are treated in molecular simulation via the path integral technique. Among several methods, Path Integral (PI) Molecular Dynamics (MD) is nowadays a powerful tool to investigate properties induced by spatial delocalization of atoms; however, computationally this technique is very demanding. The above mentioned limitation implies the restriction of PIMD applications to relatively small systems and short time scales. One of the possible solutions to overcome size and time limitation is to introduce PIMD algorithms into the Adaptive Resolution Simulation Scheme (AdResS). AdResS requires a relatively small region treated at path integral level and embeds it into a large molecular reservoir consisting of generic spherical coarse grained molecules. It was previously shown that the realization of the idea above, at a simple level, produced reasonable results for toy systems or simple/test systems like liquid parahydrogen. Encouraged by previous results, in this paper, we show the simulation of liquid water at room conditions where AdResS, in its latest and more accurate Grand-Canonical-like version (GC-AdResS), is merged with two of the most relevant PIMD techniques available in the literature. The comparison of our results with those reported in the literature and/or with those obtained from full PIMD simulations shows a highly satisfactory agreement.
NASA Astrophysics Data System (ADS)
Agarwal, Animesh; Delle Site, Luigi
2015-09-01
Quantum effects due to the spatial delocalization of light atoms are treated in molecular simulation via the path integral technique. Among several methods, Path Integral (PI) Molecular Dynamics (MD) is nowadays a powerful tool to investigate properties induced by spatial delocalization of atoms; however, computationally this technique is very demanding. The above mentioned limitation implies the restriction of PIMD applications to relatively small systems and short time scales. One of the possible solutions to overcome size and time limitation is to introduce PIMD algorithms into the Adaptive Resolution Simulation Scheme (AdResS). AdResS requires a relatively small region treated at path integral level and embeds it into a large molecular reservoir consisting of generic spherical coarse grained molecules. It was previously shown that the realization of the idea above, at a simple level, produced reasonable results for toy systems or simple/test systems like liquid parahydrogen. Encouraged by previous results, in this paper, we show the simulation of liquid water at room conditions where AdResS, in its latest and more accurate Grand-Canonical-like version (GC-AdResS), is merged with two of the most relevant PIMD techniques available in the literature. The comparison of our results with those reported in the literature and/or with those obtained from full PIMD simulations shows a highly satisfactory agreement.
An integrated lean-methods approach to hospital facilities redesign.
Nicholas, John
2012-01-01
Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.
Nonlinear mode decomposition: A noise-robust, adaptive decomposition method
NASA Astrophysics Data System (ADS)
Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta
2015-09-01
The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.
Nonlinear mode decomposition: a noise-robust, adaptive decomposition method.
Iatsenko, Dmytro; McClintock, Peter V E; Stefanovska, Aneta
2015-09-01
The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool-nonlinear mode decomposition (NMD)-which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques-which, together with the adaptive choice of their parameters, make it extremely noise robust-and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.
Adaptive spacetime method using Riemann jump conditions for coupled atomistic-continuum dynamics
Kraczek, B.; Miller, S.T.; Haber, R.B.; Johnson, D.D.
2010-03-20
We combine the Spacetime Discontinuous Galerkin (SDG) method for elastodynamics with the mathematically consistent Atomistic Discontinuous Galerkin (ADG) method in a new scheme that concurrently couples continuum and atomistic models of dynamic response in solids. The formulation couples non-overlapping continuum and atomistic models across sharp interfaces by weakly enforcing jump conditions, for both momentum balance and kinematic compatibility, using Riemann values to preserve the characteristic structure of the underlying hyperbolic system. Momentum balances to within machine-precision accuracy over every element, on each atom, and over the coupled system, with small, controllable energy dissipation in the continuum region that ensures numerical stability. When implemented on suitable unstructured spacetime grids, the continuum SDG model offers linear computational complexity in the number of elements and powerful adaptive analysis capabilities that readily bridge between atomic and continuum scales in both space and time. A special trace operator for the atomic velocities and an associated atomistic traction field enter the jump conditions at the coupling interface. The trace operator depends on parameters that specify, at the scale of the atomic spacing, the position of the coupling interface relative to the atoms. In a key finding, we demonstrate that optimizing these parameters suppresses spurious reflections at the coupling interface without the use of non-physical damping or special boundary conditions. We formulate the implicit SDG-ADG coupling scheme in up to three spatial dimensions, and describe an efficient iterative solution scheme that outperforms common explicit schemes, such as the Velocity Verlet integrator. Numerical examples, in 1d×time and employing both linear and nonlinear potentials, demonstrate the performance of the SDG-ADG method and show how adaptive spacetime meshing reconciles disparate time steps and resolves atomic-scale signals in
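The Riemann values used to couple the two models can be illustrated for 1D linear elastodynamics (v_t = σ_x/ρ, σ_t = E·v_x, impedance z = ρc): solving the interface Riemann problem from the characteristic invariants σ ∓ z·v gives the starred traction and velocity. This is a generic textbook construction, not the paper's SDG-ADG implementation:

```python
def riemann_interface(sigma_L, v_L, z_L, sigma_R, v_R, z_R):
    """Interface (starred) traction and velocity from the Riemann problem of
    1D linear elastodynamics; z = rho*c is the acoustic impedance.
    Derived from the invariants sigma - z*v (carried along the right-going
    characteristic) and sigma + z*v (carried along the left-going one)."""
    v_star = (z_L * v_L + z_R * v_R + sigma_R - sigma_L) / (z_L + z_R)
    s_star = (z_R * sigma_L + z_L * sigma_R
              + z_L * z_R * (v_R - v_L)) / (z_L + z_R)
    return s_star, v_star
```

Two sanity checks: identical left/right states reproduce themselves, and a right-going wave (σ = -z·v) hitting a matched impedance passes through with no reflection, which is the continuum analogue of the reflection suppression discussed above.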
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.
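The two ingredients of such a mechanism, a Bayes update over failure modes and selection of the most informative next test, can be sketched on a hypothetical two-test, three-mode transformer model (all names and probabilities below are invented for illustration, not the paper's network):

```python
import math

# Hypothetical toy model: prior over failure modes and P(test positive | mode)
PRIOR = {"winding_fault": 0.2, "insulation_aging": 0.3, "normal": 0.5}
LIKELIHOOD = {
    "gas_analysis":      {"winding_fault": 0.9, "insulation_aging": 0.6, "normal": 0.05},
    "partial_discharge": {"winding_fault": 0.7, "insulation_aging": 0.8, "normal": 0.10},
}

def posterior(prior, evidence):
    """Bayes update with conditionally independent test results {test: 0/1}."""
    post = dict(prior)
    for test, result in evidence.items():
        for mode in post:
            p = LIKELIHOOD[test][mode]
            post[mode] *= p if result else (1.0 - p)
    z = sum(post.values())
    return {m: v / z for m, v in post.items()}

def entropy(dist):
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def best_next_test(prior, done):
    """Pick the unused test with the largest expected entropy reduction,
    i.e. the most effective diagnostic test to perform next."""
    best, best_gain = None, -1.0
    for test in LIKELIHOOD:
        if test in done:
            continue
        gain = entropy(prior)
        for result in (0, 1):
            p_r = sum(prior[m] * (LIKELIHOOD[test][m] if result
                                  else 1.0 - LIKELIHOOD[test][m])
                      for m in prior)
            gain -= p_r * entropy(posterior(prior, {test: result}))
        if gain > best_gain:
            best, best_gain = test, gain
    return best
```

A positive gas-analysis result sharply lowers the probability of the "normal" mode, and `best_next_test` then proposes the remaining test only if its expected information gain justifies it.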
A Dynamic Integrated Fault Diagnosis Method for Power Transformers
Gao, Wensheng; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified. PMID:25685841
Investigating Item Exposure Control Methods in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Ozturk, Nagihan Boztunc; Dogan, Nuri
2015-01-01
This study aims to investigate the effects of item exposure control methods on measurement precision and on test security under various item selection methods and item pool characteristics. In this study, the Randomesque (with item group sizes of 5 and 10), Sympson-Hetter, and Fade-Away methods were used as item exposure control methods. Moreover,…
Achieving integration in mixed methods designs-principles and practices.
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-12-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods.
Achieving Integration in Mixed Methods Designs—Principles and Practices
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-01-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. PMID:24279835
Comparison of time integration methods for the evolution of galaxies
NASA Astrophysics Data System (ADS)
Degraaf, W.
In the simulation of the evolution of elliptical galaxies, Leap-Frog is currently the most frequently used time integration method. The question is whether other methods perform better than this classical method. Improvements may also be expected from the use of variable step-lengths. We compare Leap-Frog with several other methods, namely: a fourth-order Nystrom method, a symplectic method, and DOPRI-five and eight. DOPRI uses variable steps of its own accord. For the other methods we construct a variable step procedure ourselves. The comparison of the methods is carried out in three Hamiltonian test problems.
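The contrast between Leap-Frog and a non-symplectic method is easy to demonstrate on a Hamiltonian test problem: on the harmonic oscillator, Leap-Frog's energy error stays bounded while forward Euler's grows exponentially. A minimal sketch (a generic test problem, not one of the paper's galaxy models):

```python
def leapfrog(q, p, dt, steps, grad):
    """Kick-drift-kick Leap-Frog for H = p^2/2 + V(q), unit mass;
    grad(q) returns dV/dq."""
    for _ in range(steps):
        p -= 0.5 * dt * grad(q)   # half kick
        q += dt * p               # drift
        p -= 0.5 * dt * grad(q)   # half kick
    return q, p

def euler(q, p, dt, steps, grad):
    """Forward Euler for comparison: not symplectic, so energy drifts."""
    for _ in range(steps):
        q, p = q + dt * p, p - dt * grad(q)
    return q, p
```

For V(q) = q²/2 starting from (q, p) = (1, 0), the true energy is 0.5 forever; after 2000 steps of dt = 0.05, Leap-Frog stays within a fraction of a percent of it while Euler's energy has grown by orders of magnitude. This bounded-error behaviour is why Leap-Frog (and symplectic methods generally) dominate long-time orbit integrations.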
NASA Astrophysics Data System (ADS)
Lee, Seungwon; Park, Ilkwon; Kim, Manbae; Byun, Hyeran
2006-10-01
As digital broadcasting technologies have progressed rapidly, users' expectations for realistic and interactive broadcasting services have also increased. As one such service, 3D multi-view broadcasting has received much attention recently. In general, all the view sequences acquired at the server are transmitted to the client. Then, the user can select some or all of the views according to display capabilities. However, this kind of system requires high processing power at both the server and the client, posing a difficulty for practical applications. To overcome this problem, a relatively simple method is to transmit only the two view-sequences requested by the client in order to deliver a stereoscopic video. In such a system, effective communication between the server and the client is one of the important aspects. In this paper, we propose an efficient multi-view system that transmits two view-sequences and their depth maps according to the user's request. The view selection process is integrated into MPEG-21 DIA (Digital Item Adaptation) so that our system is compatible with the MPEG-21 multimedia framework. DIA is generally composed of resource adaptation and descriptor adaptation. It is one of the merits of our approach that SVA (stereoscopic video adaptation) descriptors defined in the DIA standard are used to deliver users' preferences and device capabilities. Furthermore, multi-view descriptions related to the multi-view camera and system are newly introduced. The syntax of the descriptions and their elements is represented in an XML (eXtensible Markup Language) schema. If the client requests an adapted descriptor (e.g., view numbers) from the server, then the server sends the associated view sequences. Finally, we present a method which can reduce the user's visual discomfort that might occur while viewing stereoscopic video. This phenomenon happens when the view changes, as well as when a stereoscopic image produces excessive disparity caused by a large baseline between two cameras. To
Explicit Integration of Extremely Stiff Reaction Networks: Partial Equilibrium Methods
Guidry, Mike W; Billings, J. J.; Hix, William Raphael
2013-01-01
In two preceding papers [1,2] we have shown that, when reaction networks are well removed from equilibrium, explicit asymptotic and quasi-steady-state approximations can give algebraically stabilized integration schemes that rival standard implicit methods in accuracy and speed for extremely stiff systems. However, we also showed that these explicit methods remain accurate but are no longer competitive in speed as the network approaches equilibrium. In this paper we analyze this failure and show that it is associated with the presence of fast equilibration timescales that neither asymptotic nor quasi-steady-state approximations are able to remove efficiently from the numerical integration. Based on this understanding, we develop a partial equilibrium method to deal effectively with the approach to equilibrium. The asymptotic and quasi-steady-state approximations, combined with the new partial equilibrium methods, give an integration scheme that plausibly can deal with the stiffest networks, even in the approach to equilibrium, with accuracy and speed competitive with that of implicit methods. Thus we demonstrate that algebraically stabilized explicit methods may offer alternatives to implicit integration of even extremely stiff systems, and that these methods may permit integration of much larger networks than have been feasible previously in a variety of fields.
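The explicit asymptotic approximation referred to above can be sketched for a single stiff equation du/dt = F⁺ - k·u: an algebraic rearrangement of the backward-Euler form gives an explicit update that is stable for any dt, even when k·dt ≫ 1, where ordinary explicit Euler would require dt < 2/k. The numbers below are illustrative, not one of the paper's reaction networks:

```python
def asymptotic_step(u, f_plus, k, dt):
    """Explicit asymptotic update for du/dt = F+ - k*u (stiff when k*dt >> 1).

    Rearranged from the linearized backward-Euler relation:
        u_{n+1} = (u_n + dt * F+) / (1 + dt * k)
    Unconditionally stable, yet evaluated explicitly (no matrix solves)."""
    return (u + dt * f_plus) / (1.0 + dt * k)
```

With k = 1000 and equilibrium u = F⁺/k = 1, a time step of dt = 0.1 (fifty times the explicit-Euler stability limit of 2/k = 0.002) still relaxes smoothly onto the equilibrium within a few steps.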
Methods for biological data integration: perspectives and challenges
Gligorijević, Vladimir; Pržulj, Nataša
2015-01-01
Rapid technological advances have led to the production of different types of biological data and enabled construction of complex networks with various types of interactions between diverse biological entities. Standard network data analysis methods were shown to be limited in dealing with such heterogeneous networked data and consequently, new methods for integrative data analyses have been proposed. The integrative methods can collectively mine multiple types of biological data and produce more holistic, systems-level biological insights. We survey recent methods for collective mining (integration) of various types of networked biological data. We compare different state-of-the-art methods for data integration and highlight their advantages and disadvantages in addressing important biological problems. We identify the important computational challenges of these methods and provide a general guideline for which methods are suited for specific biological problems, or specific data types. Moreover, we propose that recent non-negative matrix factorization-based approaches may become the integration methodology of choice, as they are well suited and accurate in dealing with heterogeneous data and have many opportunities for further development. PMID:26490630
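A non-negative matrix factorization of the kind the authors highlight can be sketched with the classic Lee-Seung multiplicative updates. This is a pure-Python illustration without the sparsity constraints or graph regularization that real network-integration methods would add:

```python
import random

def nmf(V, r, iters=200, eps=1e-12):
    """Rank-r non-negative factorization V ~= W H via multiplicative updates.
    V is a list-of-lists with non-negative entries; eps guards divisions."""
    n, m = len(V), len(V[0])
    rng = random.Random(0)
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(r)]
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H)
        WtV = [[sum(W[i][k] * V[i][j] for i in range(n)) for j in range(m)] for k in range(r)]
        WtW = [[sum(W[i][a] * W[i][b] for i in range(n)) for b in range(r)] for a in range(r)]
        WtWH = [[sum(WtW[k][b] * H[b][j] for b in range(r)) for j in range(m)] for k in range(r)]
        H = [[H[k][j] * WtV[k][j] / (WtWH[k][j] + eps) for j in range(m)] for k in range(r)]
        # W <- W * (V H^T) / (W H H^T)
        VHt = [[sum(V[i][j] * H[k][j] for j in range(m)) for k in range(r)] for i in range(n)]
        HHt = [[sum(H[a][j] * H[b][j] for j in range(m)) for b in range(r)] for a in range(r)]
        WHHt = [[sum(W[i][b] * HHt[b][k] for b in range(r)) for k in range(r)] for i in range(n)]
        W = [[W[i][k] * VHt[i][k] / (WHHt[i][k] + eps) for k in range(r)] for i in range(n)]
    return W, H
```

Because the updates only multiply by non-negative ratios, W and H stay non-negative throughout, which is what makes the factors interpretable (e.g., as soft cluster memberships) in biological data integration.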
New method adaptive to geospatial information acquisition and share based on grid
NASA Astrophysics Data System (ADS)
Fu, Yingchun; Yuan, Xiuxiao
2005-11-01
As is well known, acquiring and sharing multi-source geospatial information in a grid computing environment is difficult and time-consuming, especially for data on different geo-reference benchmarks. Although middleware for data format transformation has been applied in many grid applications and GIS software systems, it remains difficult to perform on-demand spatial data assembly across geo-reference benchmarks because of the complex computation required by rigorous coordinate transformation models. To address this problem, an efficient hierarchical quadtree structure, referred to as multi-level grids, is designed and coded to express multi-scale global geo-space. A geospatial object located in a certain grid of the multi-level grids can be expressed as an increment value relative to the grid's central point, which remains constant across different geo-reference benchmarks. A mediator responsible for geo-reference transformation over the multi-level grids has been developed and aligned with grid services. With the help of the mediator, maps or query result sets from individual sources with different geo-references can be merged into a uniform composite result. Instead of requiring complex data pre-processing prior to spatial integration, the introduced method is well suited to integration with grid-enabled services.
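The multi-level-grid idea, encoding a point as a quadtree cell code plus a small increment relative to the cell centre, can be sketched as follows. Geographic lon/lat bounds and a 2-bit-per-level code are assumed for illustration; the paper's actual coding scheme may differ:

```python
def grid_code(lon, lat, level):
    """Quadtree code for a lon/lat point: at each level split the current
    cell into four quadrants and record which one contains the point.
    Returns (code_string, cell_center)."""
    lo_x, hi_x, lo_y, hi_y = -180.0, 180.0, -90.0, 90.0
    code = []
    for _ in range(level):
        mx, my = 0.5 * (lo_x + hi_x), 0.5 * (lo_y + hi_y)
        q = (2 if lat >= my else 0) | (1 if lon >= mx else 0)
        code.append(str(q))
        lo_x, hi_x = (mx, hi_x) if lon >= mx else (lo_x, mx)
        lo_y, hi_y = (my, hi_y) if lat >= my else (lo_y, my)
    return "".join(code), (0.5 * (lo_x + hi_x), 0.5 * (lo_y + hi_y))

def increment(lon, lat, center):
    """Offsets relative to the cell centre -- small, bounded values that can
    be exchanged instead of full coordinates."""
    return lon - center[0], lat - center[1]
```

At level n the increments are bounded by half the cell size (360/2ⁿ by 180/2ⁿ degrees), which is what keeps them small and benchmark-independent in spirit.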
An examination of an adapter method for measuring the vibration transmitted to the human arms.
Xu, Xueyan S; Dong, Ren G; Welcome, Daniel E; Warren, Christopher; McDowell, Thomas W
2015-09-01
The objective of this study is to evaluate an adapter method for measuring the vibration on the human arms. Four instrumented adapters with different weights were used to measure the vibration transmitted to the wrist, forearm, and upper arm of each subject. Each adapter was attached at each location on the subjects using an elastic cloth wrap. Two laser vibrometers were also used to measure the transmitted vibration at each location to evaluate the validity of the adapter method. The apparent mass at the palm of the hand along the forearm direction was also measured to enhance the evaluation. This study found that the adapter and laser-measured transmissibility spectra were comparable with some systematic differences. While increasing the adapter mass reduced the resonant frequency at the measurement location, increasing the tightness of the adapter attachment increased the resonant frequency. However, the use of lightweight (≤15 g) adapters under medium attachment tightness did not change the basic trends of the transmissibility spectrum. The resonant features observed in the transmissibility spectra were also correlated with those observed in the apparent mass spectra. Because the local coordinate systems of the adapters may be significantly misaligned relative to the global coordinates of the vibration test systems, large errors were observed for the adapter-measured transmissibility in some individual orthogonal directions. This study, however, also demonstrated that the misalignment issue can be resolved by either using the total vibration transmissibility or by measuring the misalignment angles to correct the errors. Therefore, the adapter method is acceptable for understanding the basic characteristics of the vibration transmission in the human arms, and the adapter-measured data are acceptable for approximately modeling the system.
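Transmissibility at a given frequency is the ratio of output to input vibration amplitude at that frequency. A single-bin DFT sketch on synthetic signals (not the study's measured data):

```python
import math

def amplitude_at(signal, freq, fs):
    """Single-bin DFT amplitude of a sampled signal at `freq` Hz.
    Assumes the record spans an integer number of cycles of `freq`."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * k / fs) for k, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * k / fs) for k, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

def transmissibility(inp, out, freq, fs):
    """Amplitude ratio |out| / |in| at the drive frequency."""
    return amplitude_at(out, freq, fs) / amplitude_at(inp, freq, fs)
```

A phase shift between input and output does not affect the magnitude ratio, so an attenuated, delayed copy of the input yields exactly its attenuation factor.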
An examination of an adapter method for measuring the vibration transmitted to the human arms
Xu, Xueyan S.; Dong, Ren G.; Welcome, Daniel E.; Warren, Christopher; McDowell, Thomas W.
2016-01-01
The objective of this study is to evaluate an adapter method for measuring the vibration on the human arms. Four instrumented adapters with different weights were used to measure the vibration transmitted to the wrist, forearm, and upper arm of each subject. Each adapter was attached at each location on the subjects using an elastic cloth wrap. Two laser vibrometers were also used to measure the transmitted vibration at each location to evaluate the validity of the adapter method. The apparent mass at the palm of the hand along the forearm direction was also measured to enhance the evaluation. This study found that the adapter and laser-measured transmissibility spectra were comparable with some systematic differences. While increasing the adapter mass reduced the resonant frequency at the measurement location, increasing the tightness of the adapter attachment increased the resonant frequency. However, the use of lightweight (≤15 g) adapters under medium attachment tightness did not change the basic trends of the transmissibility spectrum. The resonant features observed in the transmissibility spectra were also correlated with those observed in the apparent mass spectra. Because the local coordinate systems of the adapters may be significantly misaligned relative to the global coordinates of the vibration test systems, large errors were observed for the adapter-measured transmissibility in some individual orthogonal directions. This study, however, also demonstrated that the misalignment issue can be resolved by either using the total vibration transmissibility or by measuring the misalignment angles to correct the errors. Therefore, the adapter method is acceptable for understanding the basic characteristics of the vibration transmission in the human arms, and the adapter-measured data are acceptable for approximately modeling the system. PMID:26834309
Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel
2012-01-01
For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f–I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron’s response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f–I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating (“in vivo-like”) input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model’s generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a “high-throughput” model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available. PMID:22973220
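The closed-form f-I fitting strategy can be illustrated with the simpler leaky integrate-and-fire neuron, whose constant-input firing rate has a textbook closed form; a grid search then fits a parameter without ever integrating the ODE. (The paper uses an approximation to the AdEx model; the LIF stands in here for brevity, and all parameter values are illustrative.)

```python
import math

def lif_rate(I, R, tau=0.02, v_th=1.0):
    """Closed-form firing rate of a leaky integrate-and-fire neuron with
    V_rest = V_reset = 0 and no refractory period:
        f(I) = 1 / (tau * ln(R*I / (R*I - v_th)))   for R*I > v_th."""
    drive = R * I
    if drive <= v_th:
        return 0.0
    return 1.0 / (tau * math.log(drive / (drive - v_th)))

def fit_R(currents, rates, grid):
    """Least-squares grid search for the input resistance R over f-I data."""
    return min(grid, key=lambda R: sum((lif_rate(I, R) - f) ** 2
                                       for I, f in zip(currents, rates)))
```

Each candidate parameter costs only a handful of logarithms per data point, which is the source of the speedup over fitting by repeated numerical integration of the membrane equation.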
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task, when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that is recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of their accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improves predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
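Thermodynamic integration evaluates log Z = ∫₀¹ E_{p_t}[log L] dt, where p_t ∝ prior × Lᵗ is the power posterior at inverse-temperature-like coefficient t. A sketch on a conjugate Gaussian toy model, replacing the MCMC path sampling with brute-force quadrature over the parameter so the result can be checked against the analytic marginal likelihood:

```python
import math

def log_prior(th):
    """theta ~ N(0, 1)."""
    return -0.5 * math.log(2 * math.pi) - 0.5 * th * th

def log_lik(th, y=1.0):
    """Single datum y with likelihood N(theta, 1)."""
    return -0.5 * math.log(2 * math.pi) - 0.5 * (th - y) ** 2

def ti_log_evidence(n_t=51, n_th=2001, lo=-10.0, hi=10.0):
    """log Z via thermodynamic integration: trapezoid rule over t in [0, 1]
    of the power-posterior expectation of log L, itself computed here by a
    dense quadrature over theta (a stand-in for MCMC path sampling)."""
    dth = (hi - lo) / (n_th - 1)
    thetas = [lo + i * dth for i in range(n_th)]

    def mean_loglik(t):
        w = [math.exp(log_prior(th) + t * log_lik(th)) for th in thetas]
        z = sum(w)
        return sum(wi * log_lik(th) for wi, th in zip(w, thetas)) / z

    ts = [i / (n_t - 1) for i in range(n_t)]
    vals = [mean_loglik(t) for t in ts]
    dt = 1.0 / (n_t - 1)
    return dt * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
```

For this model the marginal likelihood is available in closed form (y marginally ~ N(0, 2)), so the TI estimate can be validated directly, which mirrors how the paper benchmarks the method on analytical functions before the groundwater application.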
NASA Astrophysics Data System (ADS)
Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.
2012-04-01
Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of RUL. The ANFIS is trained and integrated in a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated via the real-world data from a seeded fault test for a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and the particle-filter-based predictor where the fault growth model is a first-order model that is trained via the ANFIS.
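The p-step-ahead RUL prediction can be sketched by propagating fault-state particles forward until they cross a failure threshold; the crossing times form the empirical RUL distribution. Here a fixed multiplicative growth rate stands in for the trained ANFIS fault-growth model, and all numbers are illustrative:

```python
import random

def rul_distribution(particles, growth, noise_sd, threshold,
                     max_steps=500, seed=0):
    """Propagate each fault-state particle with x <- x*(1 + growth) + noise
    until it reaches `threshold`; the per-particle crossing steps form an
    empirical RUL pdf (capped at max_steps)."""
    rng = random.Random(seed)
    ruls = []
    for x in particles:
        for step in range(1, max_steps + 1):
            x = x * (1.0 + growth) + rng.gauss(0.0, noise_sd)
            if x >= threshold:
                ruls.append(step)
                break
        else:
            ruls.append(max_steps)
    return ruls
```

With zero process noise the prediction collapses to the deterministic crossing time (1.05ⁿ ≥ 2 first at n = 15 for the values below); with noise, the spread of crossing times is the RUL uncertainty that the pdf quantifies.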
The B-cell antigen receptor integrates adaptive and innate immune signals
Otipoby, Kevin L.; Waisman, Ari; Derudder, Emmanuel; Srinivasan, Lakshmi; Franklin, Andrew; Rajewsky, Klaus
2015-01-01
B cells respond to antigens by engagement of their B-cell antigen receptor (BCR) and of coreceptors through which signals from helper T cells or pathogen-associated molecular patterns are delivered. We show that the proliferative response of B cells to the latter stimuli is controlled by BCR-dependent activation of phosphoinositidyl 3-kinase (PI-3K) signaling. Glycogen synthase kinase 3β and Foxo1 are two PI-3K-regulated targets that play important roles, but to different extents, depending on the specific mitogen. These results suggest a model for integrating signals from the innate and the adaptive immune systems in the control of the B-cell immune response. PMID:26371314
Application of integrated fluid-thermal-structural analysis methods
NASA Technical Reports Server (NTRS)
Wieting, Allan R.; Dechaumphai, Pramote; Bey, Kim S.; Thornton, Earl A.; Morgan, Ken
1988-01-01
Hypersonic vehicles operate in a hostile aerothermal environment which has a significant impact on their aerothermostructural performance. Significant coupling occurs between the aerodynamic flow field, structural heat transfer, and structural response creating a multidisciplinary interaction. Interfacing state-of-the-art disciplinary analysis methods is not efficient, hence interdisciplinary analysis methods integrated into a single aerothermostructural analyzer are needed. The NASA Langley Research Center is developing such methods in an analyzer called LIFTS (Langley Integrated Fluid-Thermal-Structural) analyzer. The evolution and status of LIFTS is reviewed and illustrated through applications.
[Study on plastic film thickness measurement by integral spectrum method].
Qiu, Chao; Sun, Xiao-Gang
2013-01-01
Band integral transmission was defined and a plastic film thickness measurement model was built by analyzing the intensity variation as light passes through the plastic film, after the concept of the band Lambert law was proposed. Polypropylene film samples of different thicknesses were taken as the research object, and their spectral transmission was measured with a spectrometer. The relationship between thickness and band integral transmission was fitted using the model described above. The feasibility of developing a new broadband plastic film thickness on-line measurement system based on this method was analyzed using an ideal blackbody at a temperature of 500 K. The experimental results indicate that plastic film thickness can be measured accurately by the integral spectrum method. An on-line thickness measurement system based on this method should help overcome the shortcomings of systems based on the dual monochromatic light contrast method, such as low accuracy and poor universality.
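The band Lambert law model can be sketched directly: the band-integrated transmission is the source-spectrum-weighted average of per-wavelength Lambert attenuation, and since it decreases monotonically with thickness, it can be inverted for thickness by bisection. The spectrum and absorption coefficients below are hypothetical, not the measured polypropylene data:

```python
import math

def band_transmission(d, wavelengths, intensity, alpha):
    """Band-integrated transmission for film thickness d:
    T_band(d) = sum_l I(l) exp(-alpha(l) d) / sum_l I(l)."""
    num = sum(intensity(l) * math.exp(-alpha(l) * d) for l in wavelengths)
    den = sum(intensity(l) for l in wavelengths)
    return num / den

def thickness_from_transmission(T, wavelengths, intensity, alpha,
                                d_lo=0.0, d_hi=10.0, tol=1e-9):
    """Invert the monotone map d -> T_band by bisection."""
    for _ in range(200):
        mid = 0.5 * (d_lo + d_hi)
        if band_transmission(mid, wavelengths, intensity, alpha) > T:
            d_lo = mid   # still transmits too much: film is thicker than mid
        else:
            d_hi = mid
        if d_hi - d_lo < tol:
            break
    return 0.5 * (d_lo + d_hi)
```

The round trip (thickness to band transmission and back) recovers the thickness to the bisection tolerance, which is the core of an on-line measurement loop.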
Adaptive Numerical Integration for Item Response Theory. Research Report. ETS RR-07-06
ERIC Educational Resources Information Center
Antal, Tamás; Oranje, Andreas
2007-01-01
Well-known numerical integration methods are applied to item response theory (IRT) with special emphasis on the estimation of the latent regression model of NAEP [National Assessment of Educational Progress]. An argument is made that the Gauss-Hermite rule enhanced with Cholesky decomposition and normal approximation of the response likelihood is…
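Although the abstract is truncated, the Gauss-Hermite rule it names is standard: after the substitution x = mean + sqrt(2)*sd*t, a normal-weighted expectation becomes the e^(-t^2)-weighted integral the rule handles. A minimal sketch (not the ETS estimation code):

```python
import numpy as np

def gauss_hermite_expectation(g, mean=0.0, sd=1.0, n_points=20):
    """E[g(X)] for X ~ Normal(mean, sd^2) via Gauss-Hermite quadrature.
    The change of variables x = mean + sqrt(2)*sd*t maps the normal
    density onto the exp(-t^2) weight of the Hermite rule."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_points)
    x = mean + np.sqrt(2.0) * sd * nodes
    return np.sum(weights * g(x)) / np.sqrt(np.pi)
```

In IRT applications, g would be the response likelihood evaluated at quadrature points over the latent ability scale.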
When Curriculum and Technology Meet: Technology Integration in Methods Courses
ERIC Educational Resources Information Center
Keeler, Christy G.
2008-01-01
Reporting on the results of an action research study, this manuscript provides examples of strategies used to integrate technology into a content methods course. The study used reflective teaching of a social studies methods course at a major Southwestern university in 10 course sections over a four-semester period. In alignment with the research…
Method and system of integrating information from multiple sources
Alford, Francine A.; Brinkerhoff, David L.
2006-08-15
A system and method of integrating information from multiple sources in a document centric application system. A plurality of application systems are connected through an object request broker to a central repository. The information may then be posted on a webpage. An example of an implementation of the method and system is an online procurement system.
A Comparison of Treatment Integrity Assessment Methods for Behavioral Intervention
ERIC Educational Resources Information Center
Koh, Seong A.
2010-01-01
The purpose of this study was to examine the similarity of outcomes from three different treatment integrity (TI) methods, and to identify the method which best corresponded to the assessment of a child's behavior. Six raters were recruited through individual contact via snowball sampling. A modified intervention component list and 19 video clips…
Schreurs, K M; de Ridder, D T
1997-01-01
In this article, empirical studies dealing with the relationship between coping and social support are discussed in order to identify promising themes for research on adaptation to chronic diseases. Although only few studies deal with this issue explicitly, the review reveals that four ways to study the relationship between coping and social support can be distinguished: (a) seeking social support as a coping strategy; (b) social support as a coping resource; (c) social support as dependent on the way individual patients cope; and (d) coping by a social system. It is argued that all four ways of integrating coping and social support contribute to a better understanding of adaptation to chronic diseases. However, exploring the interrelatedness of both concepts by studying social support as a coping resource and social support as dependent on the patient's own coping behavior appear to be especially fruitful in the short term, as they: (a) provide a better insight in the social determinants of coping, and (b) may help to clarify the way social support affects health and well-being.
NASA Astrophysics Data System (ADS)
Kuliwaba, J. S.; Truong, L.; Codrington, J. D.; Fazzalari, N. L.
2010-06-01
The human skeleton has the ability to modify its material composition and structure to accommodate loads through adaptive modelling and remodelling. The osteocyte cell network is now considered to be central to the regulation of skeletal homeostasis; however, very little is known of the integrity of the osteocyte cell network in osteoporotic fragility fracture. This study was designed to characterise osteocyte morphology, the extent of osteocyte cell apoptosis and expression of sclerostin protein (a negative regulator of bone formation) in trabecular bone from the intertrochanteric region of the proximal femur, for postmenopausal women with fragility hip fracture compared to age-matched women who had not sustained fragility fracture. Osteocyte morphology (osteocyte, empty lacunar, and total lacunar densities) and the degree of osteocyte apoptosis (percent caspase-3 positive osteocyte lacunae) were similar between the fracture patients and non-fracture women. The fragility hip fracture patients had a lower proportion of sclerostin-positive osteocyte lacunae in comparison to sclerostin-negative osteocyte lacunae, in contrast to similar percent sclerostin-positive/sclerostin-negative lacunae for non-fracture women. The unexpected finding of decreased sclerostin expression in trabecular bone osteocytes from fracture cases may be indicative of elevated bone turnover and under-mineralisation, characteristic of postmenopausal osteoporosis. Further, altered osteocytic expression of sclerostin may be involved in the mechano-responsiveness of bone. Optimal function of the osteocyte cell network is likely to be a critical determinant of bone strength, acting via mechanical load adaptation, and thus contributing to osteoporotic fracture risk.
Zheng, Zewei; Zou, Yao
2016-11-01
This paper investigates the path following control problem for an unmanned airship in the presence of unknown wind and uncertainties. The backstepping technique augmented by a robust adaptive radial basis function neural network (RBFNN) is employed as the main control framework. Based on the horizontal dynamic model of the airship, an improved adaptive integral line-of-sight (LOS) guidance law is first proposed, which suits any parametric paths. The guidance law calculates the desired yaw angle and estimates the wind. Then the controller is extended to cope with the airship yaw tracking and velocity control by resorting to the augmented backstepping technique. The uncertainties of the dynamics are compensated by using the robust RBFNNs. Each robust RBFNN utilizes an nth-order smooth switching function to combine a conventional RBFNN with a robust control. The conventional RBFNN dominates in the neural active region, while the robust control retrieves the transient outside the active region, so that the stability range can be widened. Stability analysis shows that the controlled closed-loop system is globally uniformly ultimately bounded. Simulations are provided to validate the effectiveness of the proposed control approach.
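The integral line-of-sight idea can be sketched for the simplest case of a straight-line path: the integral state accumulates a steady heading offset that counters a constant crosswind. This is a generic ILOS law, not the paper's improved adaptive version; the lookahead distance and integral gain are illustrative assumptions.

```python
import math

def ilos_desired_yaw(cross_track_err, integral_state, path_angle, dt,
                     lookahead=10.0, ki=0.05):
    """Integral line-of-sight (ILOS) guidance for a straight path.
    Returns the desired yaw angle and the updated integral state.
    Gains and the lookahead distance are illustrative, not the paper's."""
    integral_state += ki * dt * lookahead * cross_track_err / (
        lookahead**2 + (cross_track_err + integral_state)**2)
    desired = path_angle - math.atan2(cross_track_err + integral_state, lookahead)
    return desired, integral_state
```

With zero cross-track error and zero integral state, the desired yaw is simply the path angle; a positive cross-track error steers the vehicle back toward the path.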
Fuzzy adaptive interacting multiple model nonlinear filter for integrated navigation sensor fusion.
Tseng, Chien-Hao; Chang, Chih-Wen; Jwo, Dah-Jing
2011-01-01
In this paper, the application of the fuzzy interacting multiple model unscented Kalman filter (FUZZY-IMMUKF) approach to integrated navigation processing for the maneuvering vehicle is presented. The unscented Kalman filter (UKF) employs a set of sigma points through deterministic sampling, such that a linearization process is not necessary, and therefore the errors caused by linearization as in the traditional extended Kalman filter (EKF) can be avoided. The nonlinear filters naturally suffer, to some extent, the same problem as the EKF for which the uncertainty of the process noise and measurement noise will degrade the performance. As a structural adaptation (model switching) mechanism, the interacting multiple model (IMM), which describes a set of switching models, can be utilized for determining the adequate value of process noise covariance. The fuzzy logic adaptive system (FLAS) is employed to determine the lower and upper bounds of the system noise through the fuzzy inference system (FIS). The resulting sensor fusion strategy can efficiently deal with the nonlinear problem for the vehicle navigation. The proposed FUZZY-IMMUKF algorithm shows remarkable improvement in the navigation estimation accuracy as compared to the relatively conventional approaches such as the UKF and IMMUKF.
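The sigma-point mechanism the UKF relies on can be sketched minimally: 2n+1 deterministic points reproduce the mean and covariance exactly, and propagating them through a nonlinearity sidesteps the EKF's linearization. This is the basic symmetric sigma-point set only, not the authors' FUZZY-IMMUKF; `kappa` is the usual spread parameter.

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Basic symmetric sigma-point set: 2n+1 deterministic points whose
    weighted sample mean and covariance match (mean, cov) exactly."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)  # columns are scaled axes
    pts = [mean]
    for i in range(n):
        pts.append(mean + S[:, i])
        pts.append(mean - S[:, i])
    weights = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
    return np.array(pts), weights

def unscented_transform(f, mean, cov, kappa=1.0):
    """Propagate (mean, cov) through a nonlinear f without linearization."""
    pts, w = sigma_points(mean, cov, kappa)
    y = np.array([f(p) for p in pts])
    y_mean = w @ y
    diff = y - y_mean
    y_cov = (w[:, None] * diff).T @ diff
    return y_mean, y_cov
```

For a linear map the transform is exact, which makes a convenient sanity check.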
Controlled Aeroelastic Response and Airfoil Shaping Using Adaptive Materials and Integrated Systems
NASA Technical Reports Server (NTRS)
Pinkerton, Jennifer L.; McGowan, Anna-Maria R.; Moses, Robert W.; Scott, Robert C.; Heeg, Jennifer
1996-01-01
This paper presents an overview of several activities of the Aeroelasticity Branch at the NASA Langley Research Center in the area of applying adaptive materials and integrated systems for controlling both aircraft aeroelastic response and airfoil shape. The experimental results of four programs are discussed: the Piezoelectric Aeroelastic Response Tailoring Investigation (PARTI); the Adaptive Neural Control of Aeroelastic Response (ANCAR) program; the Actively Controlled Response of Buffet Affected Tails (ACROBAT) program; and the Airfoil THUNDER Testing to Ascertain Characteristics (ATTACH) project. The PARTI program demonstrated active flutter control and significant reductions in aeroelastic response at dynamic pressures below flutter using piezoelectric actuators. The ANCAR program seeks to demonstrate the effectiveness of using neural networks to schedule flutter suppression control laws. The ACROBAT program studied the effectiveness of a number of candidate actuators, including a rudder and piezoelectric actuators, to alleviate vertical tail buffeting. In the ATTACH project, the feasibility of using Thin-Layer Composite-Unimorph Piezoelectric Driver and Sensor (THUNDER) wafers to control airfoil aerodynamic characteristics was investigated. Plans for future applications are also discussed.
Testing and integrating the laser system of ARGOS: the ground layer adaptive optics for LBT
NASA Astrophysics Data System (ADS)
Loose, C.; Rabien, S.; Barl, L.; Borelli, J.; Deysenroth, M.; Gaessler, W.; Gemperlein, H.; Honsberg, M.; Kulas, M.; Lederer, R.; Raab, W.; Rahmer, G.; Ziegleder, J.
2012-07-01
The Laser Guide Star facility ARGOS will provide Ground Layer Adaptive Optics to the Large Binocular Telescope (LBT). The system operates three pulsed laser beacons above each of the two primary mirrors, which are Rayleigh scattered at 12 km altitude. This enables correction over a wide field of view, using the adaptive secondary mirror of the LBT. The ARGOS laser system is designed around commercially available, pulsed Nd:YAG lasers working at 532 nm. In preparation for a successful commissioning, it is important to ascertain that the specifications are met for every component of the laser system. The testing of assembled optical subsystems is likewise necessary. In particular, it is required to confirm the high output power, beam quality, and pulse stability of the beacons. In a second step, the integrated laser system, along with its electronics cabinets, is installed on a telescope simulator. This unit is capable of carrying the whole assembly and can be tilted to imitate working conditions at the LBT. It allows alignment and functionality testing of the entire system, ensuring that flexure compensation and system diagnosis work properly in different orientations.
The adaptive optics beam steering mirror for the GMT Integral-Field Spectrograph, GMTIFS
NASA Astrophysics Data System (ADS)
Sharp, R.; Boz, R.; Hart, J.; Bloxham, G.; Bundy, D.; Davis, J.; McGregor, P. J.; Nielson, J.; Vest, C.; Young, P. J.
2014-07-01
To achieve the high adaptive optics sky coverage necessary to allow the GMT Integral-Field Spectrograph to access key scientific targets, the on-instrument adaptive-optics wavefront-sensing system must patrol the full 180 arcsecond diameter guide field passed to the instrument. Starlight must be held stationary on the wavefront sensor (accounting for flexure, differential refraction and non-sidereal tracking rates) to ~ 1 milliarcsecond to provide the stable position reference signal for deep AO observations and avoid introducing image blur. Hence a tight tolerance of 1/180,000 is placed on the positioning and encoding accuracy for the cryogenic On-Instrument Wave-Front Sensor feed. GMTIFS will achieve this requirement using a beam-steering mirror system as an optical relay for starlight from across the accessible guide field. The system avoids hysteresis and backlash by eliminating friction and avoiding gearing while maintaining high setting speed and accuracy with a precision feedback loop. Here we present the design of the relay system and the technical solution deployed to meet the challenging specifications for drive rate, accuracy and positional encoding of the beam-steering system.
Efficient integration of spectral features for vehicle tracking utilizing an adaptive sensor
NASA Astrophysics Data System (ADS)
Uzkent, Burak; Hoffman, Matthew J.; Vodacek, Anthony
2015-03-01
Object tracking in urban environments is an important and challenging problem that is traditionally tackled using visible and near-infrared wavelengths. By incorporating extended data such as the spectral features of the objects, one can improve the reliability of the identification process. However, the huge increase in data created by hyperspectral imaging is usually prohibitive. To overcome the complexity problem, we propose a persistent air-to-ground target tracking system inspired by a state-of-the-art, adaptive, multi-modal sensor. The adaptive sensor is capable of providing panchromatic images as well as the spectra of desired pixels. This addresses the data challenge of hyperspectral tracking by recording spectral data only as needed. Spectral likelihoods are integrated into a data association algorithm in a Bayesian fashion to minimize the likelihood of misidentification. A framework for controlling spectral data collection is developed by incorporating motion segmentation information and prior information from Gaussian Sum filter (GSF) movement predictions from a multi-model forecasting set. An intersection mask of the surveillance area is extracted from the OpenStreetMap source and incorporated into the tracking algorithm to perform online refinement of the multiple-model set. The proposed system is tested using challenging and realistic scenarios generated in an adverse environment.
Alertness Modulates Conflict Adaptation and Feature Integration in an Opposite Way
Chen, Jia; Huang, Xiting; Chen, Antao
2013-01-01
Previous studies show that the congruency sequence effect can result from both the conflict adaptation effect (CAE) and the feature integration effect, which can be observed as the repetition priming effect (RPE) and the feature overlap effect (FOE), depending on experimental conditions. Evidence from neuroimaging studies suggests that a close correlation exists between the neural mechanisms of alertness-related modulations and the congruency sequence effect. However, little is known about whether and how alertness mediates the congruency sequence effect. In Experiment 1, the Attentional Networks Test (ANT) and a modified flanker task were used to evaluate whether the alertness of the attentional functions correlated with the CAE and RPE. In Experiment 2, the ANT and another modified flanker task were used to investigate whether the alertness of the attentional functions correlated with the CAE and FOE. In Experiment 1, through correlation analysis, we found a significant positive correlation between alertness and the CAE, and a negative correlation between alertness and the RPE. Moreover, a significant negative correlation existed between the CAE and the RPE. In Experiment 2, we found a marginally significant negative correlation between the CAE and the RPE, but the correlations between alertness and the FOE, and between the CAE and the FOE, were not significant. These results suggest that alertness can modulate conflict adaptation and feature integration in opposite ways. Participants at the high alerting level may tend to use a top-down cognitive processing strategy, whereas participants at the low alerting level tend to use a bottom-up processing strategy. PMID:24250824
Medical case-based retrieval: integrating query MeSH terms for query-adaptive multi-modal fusion
NASA Astrophysics Data System (ADS)
Seco de Herrera, Alba G.; Foncubierta-Rodríguez, Antonio; Müller, Henning
2015-03-01
Advances in medical knowledge give clinicians more objective information for a diagnosis. Therefore, there is an increasing need for bibliographic search engines that can provide services helping to facilitate faster information search. The ImageCLEFmed benchmark proposes a medical case-based retrieval task. This task aims at retrieving articles from the biomedical literature that are relevant for the differential diagnosis of query cases including a textual description and several images. In the context of this campaign, many approaches have been investigated showing that the fusion of visual and text information can improve the precision of the retrieval. However, fusion does not always lead to better results. In this paper, a new query-adaptive fusion criterion to decide when to use multi-modal (text and visual) or text-only approaches is presented. The proposed method integrates text information contained in extracted MeSH (Medical Subject Headings) terms and visual features of the images to find synonym relations between them. Given a text query, the query-adaptive fusion criterion decides when it is suitable to also use visual information for the retrieval. Results show that this approach can decide whether a text or multi-modal approach should be used with 77.15% accuracy.
A new and efficient method to obtain benzalkonium chloride adapted cells of Listeria monocytogenes.
Saá Ibusquiza, Paula; Herrera, Juan J R; Vázquez-Sánchez, Daniel; Parada, Adelaida; Cabo, Marta L
2012-10-01
A new method to obtain benzalkonium chloride (BAC) adapted L. monocytogenes cells was developed. A factorial design was used to assess the effects of inoculum size and BAC concentration on the adaptation (measured in terms of the lethal dose 50, LD50) of 6 strains of Listeria monocytogenes after only one exposure. The proposed method could be applied successfully to the L. monocytogenes strains with higher adaptive capacity to BAC. In those cases, a significant empirical equation was obtained showing a positive effect of inoculum size and a positive interaction between the effects of BAC and inoculum size on the level of adaptation achieved. However, a slight negative direct effect of the biocide BAC was also significant. The proposed method improves on the classical method based on successive stationary-phase cultures in sublethal BAC concentrations because it is less time-consuming and more effective. For the laboratory strain L. monocytogenes 5873, applying the new procedure increased BAC adaptation 3.69-fold in only 33 h, whereas the classical procedure achieved a 2.61-fold increase after 5 days. Moreover, with the new method, the maximum level of adaptation was determined for all the strains, reaching, surprisingly, almost the same concentration of BAC (mg/l) for 5 out of 6 strains. Thus, a good reference for establishing the effective concentrations of biocides to ensure the maximum level of adaptation was also determined.
A flexible importance sampling method for integrating subgrid processes
Raut, E. K.; Larson, V. E.
2016-01-29
Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). The resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
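The category-weighted sampling idea can be sketched generically: draw a modeler-prescribed number of samples from each category, then weight each category's sample mean by its true area fraction so the estimator stays unbiased. A minimal sketch of the importance-sampling concept, not SILHS itself; the function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def stratified_importance_mean(values, category, n_samples_per_cat):
    """Estimate the grid-box mean of `values` by drawing a prescribed
    number of samples from each category and weighting each category's
    sample mean by its true area fraction. A minimal sketch of the
    category-based importance-sampling idea, not SILHS itself."""
    total = len(values)
    est = 0.0
    for c in np.unique(category):
        idx = np.where(category == c)[0]
        frac = len(idx) / total            # category's true area fraction
        n = n_samples_per_cat[int(c)]      # modeler-prescribed sample count
        draws = rng.choice(idx, size=n, replace=True)
        est += frac * values[draws].mean() # unbiased per-category mean
    return est
```

Oversampling a high-variance category (e.g. the rain-evaporation region) reduces sampling error without biasing the grid-box average.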
Development of Improved Surface Integral Methods for Jet Aeroacoustic Predictions
NASA Technical Reports Server (NTRS)
Pilon, Anthony R.; Lyrintzis, Anastasios S.
1997-01-01
The accurate prediction of aerodynamically generated noise has become an important goal over the past decade. Aeroacoustics must now be an integral part of the aircraft design process. The direct calculation of aerodynamically generated noise with CFD-like algorithms is plausible. However, large computer time and memory requirements often make these predictions impractical. It is therefore necessary to separate the aeroacoustics problem into two parts, one in which aerodynamic sound sources are determined, and another in which the propagating sound is calculated. This idea is applied in acoustic analogy methods. However, in the acoustic analogy, the determination of far-field sound requires the solution of a volume integral. This volume integration again leads to impractical computer requirements. An alternative to the volume integrations can be found in the Kirchhoff method. In this method, Green's theorem for the linear wave equation is used to determine sound propagation based on quantities on a surface surrounding the source region. The change from volume to surface integrals represents a tremendous savings in the computer resources required for an accurate prediction. This work is concerned with the development of enhancements of the Kirchhoff method for use in a wide variety of aeroacoustics problems. This enhanced method, the modified Kirchhoff method, is shown to be a Green's function solution of Lighthill's equation. It is also shown rigorously to be identical to the methods of Ffowcs Williams and Hawkings. This allows for development of versatile computer codes which can easily alternate between the different Kirchhoff and Ffowcs Williams-Hawkings formulations, using the most appropriate method for the problem at hand. The modified Kirchhoff method is developed primarily for use in jet aeroacoustics predictions. Applications of the method are shown for two dimensional and three dimensional jet flows. Additionally, the enhancements are generalized so that
Dinh, Vinh Quang; Nguyen, Vinh Dinh; Jeon, Jae Wook
2015-12-01
Real-world stereo images are inevitably affected by radiometric differences, including variations in exposure, vignetting, lighting, and noise. Stereo images with severe radiometric distortion can have large radiometric differences and include locally nonlinear changes. In this paper, we first introduce an adaptive orthogonal integral image, which is an improved version of an orthogonal integral image. After that, based on matching by tone mapping and the adaptive orthogonal integral image, we propose a robust and accurate matching cost function that can tolerate locally nonlinear intensity distortion. By using the adaptive orthogonal integral image, the proposed matching cost function can adaptively construct different support regions of arbitrary shapes and sizes for different pixels in the reference image, so it can operate robustly within object boundaries. Furthermore, we develop techniques to automatically estimate the values of the parameters of our proposed function. We conduct experiments using the proposed matching cost function and compare it with functions employing the census transform, supporting local binary pattern, and adaptive normalized cross correlation, as well as a mutual information-based matching cost function using different stereo data sets. By using the adaptive orthogonal integral image, the proposed matching cost function reduces the error from 21.51% to 15.73% in the Middlebury data set, and from 15.9% to 10.85% in the Kitti data set, as compared with using the orthogonal integral image. The experimental results indicate that the proposed matching cost function is superior to the state-of-the-art matching cost functions under radiometric variation.
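The plain (orthogonal) integral image underlying the adaptive variant can be sketched quickly: a summed-area table turns any axis-aligned box sum into four lookups. The paper's adaptive version additionally varies the support region per pixel, which this sketch does not implement.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border row/column:
    ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

The O(1) box sum is what makes aggregating matching costs over large support regions affordable.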
Analysis of modified SMI method for adaptive array weight control
NASA Technical Reports Server (NTRS)
Dilsavor, R. L.; Moses, R. L.
1989-01-01
An adaptive array is applied to the problem of receiving a desired signal in the presence of weak interference signals which need to be suppressed. A modification, suggested by Gupta, of the sample matrix inversion (SMI) algorithm controls the array weights. In the modified SMI algorithm, interference suppression is increased by subtracting a fraction F of the noise power from the diagonal elements of the estimated covariance matrix. Given the true covariance matrix and the desired signal direction, the modified algorithm is shown to maximize a well-defined, intuitive output power ratio criterion. Expressions are derived for the expected value and variance of the array weights and output powers as a function of the fraction F and the number of snapshots used in the covariance matrix estimate. These expressions are compared with computer simulation and good agreement is found. A trade-off is found to exist between the desired level of interference suppression and the number of snapshots required in order to achieve that level with some certainty. The removal of noise eigenvectors from the covariance matrix inverse is also discussed with respect to this application. Finally, the type and severity of errors which occur in the covariance matrix estimate are characterized through simulation.
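The modification described above can be sketched directly: estimate the covariance from snapshots, subtract a fraction F of the noise power from its diagonal, and solve for the weights. Variable names and the test scenario are illustrative, not from the paper.

```python
import numpy as np

def modified_smi_weights(snapshots, steering, noise_power, F):
    """Modified SMI weight computation: form the sample covariance from
    the snapshot matrix (elements x snapshots), subtract a fraction F of
    the noise power from its diagonal, then solve R_mod w = s.
    F = 0 recovers ordinary SMI; larger F increases suppression of weak
    interferers, as described in the abstract."""
    R_hat = (snapshots @ snapshots.conj().T) / snapshots.shape[1]
    R_mod = R_hat - F * noise_power * np.eye(R_hat.shape[0])
    return np.linalg.solve(R_mod, steering)
```

As the abstract notes, larger F trades deeper interference suppression against sensitivity to covariance estimation error, so more snapshots are needed for a given certainty.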
Parallel architectures for iterative methods on adaptive, block structured grids
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1983-01-01
A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.
A bin integral method for solving the kinetic collection equation
NASA Astrophysics Data System (ADS)
Wang, Lian-Ping; Xue, Yan; Grabowski, Wojciech W.
2007-09-01
A new numerical method for solving the kinetic collection equation (KCE) is proposed, and its accuracy and convergence are investigated. The method, herein referred to as the bin integral method with Gauss quadrature (BIMGQ), makes use of two binwise moments, namely, the number and mass concentration in each bin. These two degrees of freedom define an extended linear representation of the number density distribution for each bin following Enukashvily (1980). Unlike previous moment-based methods in which the gain and loss integrals are evaluated for a target bin, the concept of source-bin pair interactions is used to transfer bin moments from source bins to target bins. Collection kernels are treated by bilinear interpolations. All binwise interaction integrals are then handled exactly by Gauss quadrature of various orders. In essence the method combines favorable features in previous spectral moment-based and bin-based pair-interaction (or flux) methods to greatly enhance the logic, consistency, and simplicity in the numerical method and its implementation. Quantitative measures are developed to rigorously examine the accuracy and convergence properties of BIMGQ for both the Golovin kernel and hydrodynamic kernels. It is shown that BIMGQ has a superior accuracy for the Golovin kernel and a monotonic convergence behavior for hydrodynamic kernels. Direct comparisons are also made with the method of Berry and Reinhardt (1974), the linear flux method of Bott (1998), and the linear discrete method of Simmel et al. (2002).
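The binwise Gauss quadrature at the heart of BIMGQ can be illustrated generically: map the Gauss-Legendre nodes from the reference interval [-1, 1] onto a bin [a, b] and scale the weights, giving a rule exact for polynomials up to degree 2*order - 1. This is a generic quadrature sketch, not the BIMGQ source-bin pair-interaction machinery.

```python
import numpy as np

def gauss_bin_integral(f, a, b, order=4):
    """Integrate f over the bin [a, b] by Gauss-Legendre quadrature.
    Nodes on [-1, 1] are affinely mapped to [a, b]; the Jacobian
    (b - a)/2 rescales the weights. Exact for polynomials of degree
    up to 2*order - 1."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    x = 0.5 * (b - a) * nodes + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(weights * f(x))
```

In BIMGQ this kind of rule evaluates the binwise interaction integrals exactly once the kernel is bilinearly interpolated.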
Explicit Integration of Extremely Stiff Reaction Networks: Asymptotic Methods
Guidry, Mike W; Budiardja, R.; Feger, E.; Billings, J. J.; Hix, William Raphael; Messer, O.E.B.; Roche, K. J.; McMahon, E.; He, M.
2013-01-01
We show that, even for extremely stiff systems, explicit integration may compete in both accuracy and speed with implicit methods if algebraic methods are used to stabilize the numerical integration. The stabilizing algebra differs for systems well removed from equilibrium and those near equilibrium. This paper introduces a quantitative distinction between these two regimes and addresses the former case in depth, presenting explicit asymptotic methods appropriate when the system is extremely stiff but only weakly equilibrated. A second paper [1] examines quasi-steady-state methods as an alternative to asymptotic methods in systems well away from equilibrium and a third paper [2] extends these methods to equilibrium conditions in extremely stiff systems using partial equilibrium methods. All three papers present systematic evidence for timesteps competitive with implicit methods. Because explicit methods can execute a timestep faster than an implicit method, our results imply that algebraically stabilized explicit algorithms may offer a means to integration of larger networks than have been feasible previously in various disciplines.
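The asymptotic idea can be illustrated on a single linear equation dy/dt = F - k*y with creation flux F and destruction rate k: moving the stiff destruction term into the denominator yields an explicit update that remains stable for timesteps far exceeding 1/k. A minimal one-equation sketch, not the reaction-network code the paper describes.

```python
def asymptotic_step(y, flux_in, k, dt):
    """One explicit asymptotic step for dy/dt = flux_in - k*y.
    Treating the destruction term semi-implicitly but solving in closed
    form gives an explicit update that is stable even when dt >> 1/k."""
    return (y + dt * flux_in) / (1.0 + dt * k)

def integrate_asymptotic(y0, flux_in, k, dt, n_steps):
    """Advance the asymptotic update n_steps times."""
    y = y0
    for _ in range(n_steps):
        y = asymptotic_step(y, flux_in, k, dt)
    return y
```

A forward Euler step with dt*k = 1000 would blow up immediately; the asymptotic update instead relaxes monotonically to the equilibrium flux_in/k.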
Speckle reduction in optical coherence tomography by adaptive total variation method
NASA Astrophysics Data System (ADS)
Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun
2015-12-01
An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
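A minimal 1D sketch of total variation restoration can make the idea concrete: gradient descent on a smoothed ROF objective flattens noise within homogeneous regions while the saturating TV gradient preserves edges. The paper's method additionally adapts the fidelity term to measured speckle statistics, which this sketch omits; the weight, step size, and smoothing constant are illustrative assumptions.

```python
import numpy as np

def tv_denoise_1d(signal, weight=0.2, n_iters=300, step=0.05, eps=1e-2):
    """Gradient descent on a smoothed 1D ROF objective:
        0.5 * ||u - signal||^2 + weight * sum_i sqrt((u[i+1]-u[i])^2 + eps).
    The sqrt smoothing makes the TV term differentiable. A minimal
    sketch of TV restoration, not the paper's adaptive variant."""
    u = signal.astype(float).copy()
    for _ in range(n_iters):
        d = np.diff(u)
        phi = d / np.sqrt(d * d + eps)   # derivative of the smoothed |d|
        grad_tv = np.zeros_like(u)
        grad_tv[1:] += phi               # d[i-1] contributes +phi at u[i]
        grad_tv[:-1] -= phi              # d[i] contributes -phi at u[i]
        u -= step * ((u - signal) + weight * grad_tv)
    return u
```

For small differences the TV gradient acts like linear diffusion (smoothing noise); at a large jump it saturates near +/-1, so the edge survives.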
NASA Astrophysics Data System (ADS)
Keener, V. W.; Finucane, M.; Brewington, L.
2014-12-01
For the last century, the island of Maui, Hawaii, has been the center of environmental, agricultural, and legal conflict with respect to surface and groundwater allocation. Planning for adequate future freshwater resources requires flexible and adaptive policies that emphasize partnerships and knowledge transfer between scientists and non-scientists. In 2012 the Hawai'i state legislature passed the Climate Change Adaptation Priority Guidelines (Act 286) law requiring county and state policy makers to include island-wide climate change scenarios in their planning processes. This research details the ongoing work by researchers in the NOAA-funded Pacific RISA to support the development of Hawaii's first island-wide water use plan under the new climate adaptation directive. This integrated project combines several models with participatory future scenario planning. The dynamically downscaled, triply nested Hawaii Regional Climate Model (HRCM) was modified from the WRF community model and calibrated to simulate the many microclimates of the Hawaiian archipelago. For the island of Maui, the HRCM was validated using 20 years of hindcast data, and daily projections were created at a 1 km scale to capture the steep topography and diverse rainfall regimes. Downscaled climate data are input into a USGS hydrological model to quantify groundwater recharge. This model was previously used for groundwater management, and is being expanded using future climate projections, current land use maps, and future scenario maps informed by stakeholder input. Participatory scenario planning began in 2012 to bring together a diverse group of over 50 decision-makers in government, conservation, and agriculture to 1) determine the type of information they would find helpful in planning for climate change, and 2) develop a set of scenarios that represent alternative climate/management futures. This is an iterative process, resulting in flexible and transparent narratives at multiple scales.
An adaptation of Krylov subspace methods to path following
Walker, H.F.
1996-12-31
Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
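The predictor-corrector structure with an orthogonality-constrained corrector can be illustrated on a toy one-equation problem, where a dense 2x2 solve stands in for the Krylov iteration the abstract discusses (problem and step sizes are illustrative):

```python
import numpy as np

def F(u):                 # u = (x, lam); the solution curve is the unit circle
    return u[0]**2 + u[1]**2 - 1.0

def J(u):                 # 1x2 Jacobian [dF/dx, dF/dlam]
    return np.array([2.0 * u[0], 2.0 * u[1]])

u = np.array([1.0, 0.0])  # a point on the curve
h = 0.1                   # predictor step length
for _ in range(20):
    j = J(u)
    t = np.array([-j[1], j[0]])
    t /= np.linalg.norm(t)            # unit tangent to the curve
    u = u + h * t                     # predictor: step along the tangent
    for _ in range(5):                # correctors: Newton steps subject to
        A = np.vstack([J(u), t])      # the orthogonality condition t.delta = 0
        delta = np.linalg.solve(A, np.array([-F(u), 0.0]))
        u = u + delta
print(abs(F(u)) < 1e-10)  # the corrector returns us to the curve
```

Stacking the tangent row under the Jacobian is the "straightforward augmentation" the abstract mentions; the recent work it describes instead imposes the constraint inside the (Krylov) solver to preserve conditioning.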
Integrative methods for analyzing big data in precision medicine.
Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša
2016-03-01
We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advance in technologies capturing molecular and medical data, we have entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face.
An adaptable XML based approach for scientific data management and integration
NASA Astrophysics Data System (ADS)
Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo
2008-03-01
Increased complexity of scientific research poses new challenges to scientific data management. Meanwhile, scientific collaboration is becoming increasingly important, relying on integrating and sharing data from distributed institutions. We develop SciPort, a Web-based platform for scientific data management and integration built on a central-server-based distributed architecture, where researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to modeling complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data, but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make quick browsing convenient. To provide generality, schemas and hierarchies are customizable with XML-based definitions, so the system can be quickly adapted to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access and customization with XML, SciPort offers a flexible and powerful platform for sharing scientific data among scientific research communities, and has been successfully used in both biomedical research and clinical trials.
USEPA ambient air monitoring methods for volatile organic compounds (VOCs) using specially-prepared canisters and solid adsorbents are directly adaptable to monitoring for vapors in the indoor environment. The draft Method TO-15 Supplement, an extension of the USEPA Method TO-15,...
Adapting Western research methods to indigenous ways of knowing.
Simonds, Vanessa W; Christopher, Suzanne
2013-12-01
Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid.
Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control
NASA Technical Reports Server (NTRS)
Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)
2015-01-01
Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.
Adaptive error covariances estimation methods for ensemble Kalman filters
Zhen, Yicun; Harlim, John
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes at different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to the recently proposed method of Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and the Berry–Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates compared to the Berry–Sauer method on the L-96 example.
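The core identity such innovation-based estimators exploit, E[nu^2] = (predicted error variance) + R at lag zero, can be sketched for a scalar random-walk model; this is a simplified illustration, not Belanger's full multi-lag recursion, and the gain here uses the true R for clarity:

```python
import numpy as np

rng = np.random.default_rng(1)
Q, R_true = 0.01, 0.25
x, xhat, P = 0.0, 0.0, 1.0
r_samples = []
for _ in range(5000):
    x += rng.normal(0.0, np.sqrt(Q))          # true random-walk state
    y = x + rng.normal(0.0, np.sqrt(R_true))  # noisy measurement
    P_pred = P + Q                            # predicted error variance
    nu = y - xhat                             # innovation
    r_samples.append(nu**2 - P_pred)          # unbiased sample of R
    K = P_pred / (P_pred + R_true)            # Kalman gain (true R, for clarity)
    xhat += K * nu
    P = (1.0 - K) * P_pred

R_hat = float(np.mean(r_samples[100:]))       # discard the filter transient
print(abs(R_hat - R_true) < 0.05)             # recovered R is close to truth
```

An adaptive scheme would feed `R_hat` back into the gain as it is refined; the methods compared in the abstract additionally use products of innovations at multiple lags to estimate Q.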
Adaptive Methods within a Sequential Bayesian Approach for Structural Health Monitoring
NASA Astrophysics Data System (ADS)
Huff, Daniel W.
Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to possessing capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since fatigue loading of a structural component can change while in service, an interacting multiple model technique is employed to estimate probabilities of different loading modes and incorporate this information in the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion and observations are obtained using the predicted, optimal observation modes based on these characteristics. Calculation of the predicted mean square error metric can be computationally intensive, especially if performed in real time, and an approximation method is proposed. With this approach, the real time
An advanced Gibbs-Duhem integration method: theory and applications.
van 't Hof, A; Peters, C J; de Leeuw, S W
2006-02-07
The conventional Gibbs-Duhem integration method is very convenient for the prediction of phase equilibria of both pure components and mixtures. However, it turns out to be inefficient. The method requires a number of lengthy simulations to predict the state conditions at which phase coexistence occurs. This number is not known from the outset of the numerical integration process. Furthermore, the molecular configurations generated during the simulations are merely used to predict the coexistence condition and not the liquid- and vapor-phase densities and mole fractions at coexistence. In this publication, an advanced Gibbs-Duhem integration method is presented that overcomes the above-mentioned disadvantage and inefficiency. The advanced method is a combination of Gibbs-Duhem integration and multiple-histogram reweighting. Application of multiple-histogram reweighting enables the substitution of the unknown number of simulations by a fixed and predetermined number. The advanced method has a retroactive nature; a current simulation improves the predictions of previously computed coexistence points as well. The advanced Gibbs-Duhem integration method has been applied to the prediction of vapor-liquid equilibria of a number of binary mixtures. The method turned out to be very convenient, much faster than the conventional method, and provided smooth simulation results. As the employed force fields perfectly predict pure-component vapor-liquid equilibria, the binary simulations were well suited to testing the performance of different sets of combining rules. Employing Lorentz-Hudson-McCoubrey combining rules for interactions between unlike molecules, as opposed to Lorentz-Berthelot combining rules for all interactions, considerably improved the agreement between experimental and simulated data.
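Gibbs-Duhem integration traces a coexistence line by numerically integrating a Clapeyron-type ODE. A sketch with an analytically known right-hand side (constant latent heat, ideal vapor, values roughly those of water) shows the stepping; in the real method the slope at each point would come from simulation averages rather than a formula:

```python
import numpy as np

R_GAS = 8.314               # J/(mol K)
L_VAP = 40.7e3              # J/mol, roughly water's heat of vaporization

def dlnp_dT(T):
    # Clapeyron slope under the ideal-vapor, constant-L approximation
    return L_VAP / (R_GAS * T**2)

T, lnp = 373.15, np.log(101325.0)    # start from a known coexistence point
dT = 1.0
for _ in range(27):                  # trace the line up to 400.15 K
    lnp += 0.5 * dT * (dlnp_dT(T) + dlnp_dT(T + dT))   # trapezoid step
    T += dT

exact = np.log(101325.0) + (L_VAP / R_GAS) * (1.0 / 373.15 - 1.0 / 400.15)
print(abs(lnp - exact) < 1e-4)       # numeric trace matches the closed form
```

The histogram-reweighting refinement in the abstract lets every new simulation retroactively sharpen the slope estimates at previously visited points on this line.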
Digital methods of photopeak integration in activation analysis.
NASA Technical Reports Server (NTRS)
Baedecker, P. A.
1971-01-01
A study of the precision attainable by several methods of gamma-ray photopeak integration has been carried out. The 'total peak area' method, the methods proposed by Covell, Sterlinski, and Quittner, and some modifications of these methods have been considered. A modification by Wasson of the total peak area method is considered to be the most advantageous due to its simplicity and the relatively high precision obtainable with this technique. A computer routine for the analysis of spectral data from nondestructive activation analysis experiments employing a Ge(Li) detector-spectrometer system is described.
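The "total peak area" approach the abstract evaluates can be sketched as a gross sum over the peak region minus a linear baseline estimated from the edge channels; the spectrum, channel limits, and edge-channel count below are illustrative, not taken from the paper:

```python
import numpy as np

def total_peak_area(counts, lo, hi, n_edge=3):
    region = counts[lo:hi + 1]
    gross = region.sum()                           # gross counts under the peak
    left = counts[lo - n_edge:lo].mean()           # baseline left of the peak
    right = counts[hi + 1:hi + 1 + n_edge].mean()  # baseline right of the peak
    baseline = 0.5 * (left + right) * len(region)  # trapezoidal baseline
    return gross - baseline

# Synthetic spectrum: flat 50-count background plus a Gaussian photopeak.
chan = np.arange(100)
spectrum = 50.0 + 1000.0 * np.exp(-0.5 * ((chan - 50) / 3.0) ** 2)
area = total_peak_area(spectrum, 40, 60)
true_area = 1000.0 * 3.0 * np.sqrt(2.0 * np.pi)    # analytic Gaussian area
print(abs(area - true_area) / true_area < 0.01)    # within 1% of truth
```

The Covell, Sterlinski, and Quittner variants differ mainly in how channels are weighted and how the baseline is estimated, trading simplicity against sensitivity to the background shape.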
Adaptive entropy-constrained discontinuous Galerkin method for simulation of turbulent flows
NASA Astrophysics Data System (ADS)
Lv, Yu; Ihme, Matthias
2015-11-01
A robust and adaptive computational framework will be presented for high-fidelity simulations of turbulent flows based on the discontinuous Galerkin (DG) scheme. For this, an entropy-residual based adaptation indicator is proposed to enable adaptation in polynomial and physical space. The performance and generality of this entropy-residual indicator are evaluated through direct comparisons with classical indicators. In addition, a dynamic load balancing procedure is developed to improve computational efficiency. The adaptive framework is tested on a series of turbulent test cases, which include homogeneous isotropic turbulence, channel flow and flow over a cylinder. The accuracy, performance and scalability are assessed, and the benefit of this adaptive high-order method is discussed. Funding from an NSF CAREER award is gratefully acknowledged.
Accelerometer Method and Apparatus for Integral Display and Control Functions
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1998-01-01
Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto is discussed. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.
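The integration stage of the signal path, acceleration integrated to velocity and then compared against a user-set trip point, can be sketched numerically; the apparatus does this in hardware, and the signal and threshold values here are illustrative:

```python
import numpy as np

def accel_to_velocity(accel, dt):
    """Trapezoidal integration of acceleration samples to velocity."""
    v = np.zeros_like(accel)
    v[1:] = np.cumsum(0.5 * dt * (accel[1:] + accel[:-1]))
    return v

fs = 1000.0                                   # sample rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
omega = 2.0 * np.pi * 5.0                     # 5 Hz vibration
accel = omega * np.cos(omega * t)             # a = dv/dt for v = sin(omega*t)
vel = accel_to_velocity(accel, 1.0 / fs)      # recovers ~sin(omega*t)

trip_point = 0.8                              # user-set velocity threshold
alarm = bool(np.max(np.abs(vel)) > trip_point)
print(alarm)                                  # peak velocity ~1.0 trips it
```

The patent's second integration and calibration stage would map this velocity signal onto the bar-graph display elements, with one trip switch per element.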
NASA Astrophysics Data System (ADS)
Kindermans, Pieter-Jan; Tangermann, Michael; Müller, Klaus-Robert; Schrauwen, Benjamin
2014-06-01
Objective. Most BCIs have to undergo a calibration session in which data is recorded to train decoders with machine learning. Only recently have zero-training methods become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) language model and (d) dynamic stopping. Approach. A simulation study compares the proposed probabilistic zero-training framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influence of the involved components (a)-(d) is investigated. Main results. Without any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance, competitive with a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation. Significance. A high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP-applications of BCI.
A high-throughput multiplex method adapted for GMO detection.
Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique
2008-12-24
A high-throughput multiplex assay for the detection of genetically modified organisms (GMO) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") from taxon-specific endogenous reference genes, GMO constructs, screening targets, construct-specific and event-specific targets, and finally from donor organisms. This assay avoids certain shortcomings of multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods
Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis, where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared error (RMSE) reveals improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
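The diagnostic underlying residual tuning, namely that mismodeled noise covariances make the measurement residual (innovation) sequence non-white, can be demonstrated with a scalar filter whose assumed measurement noise is deliberately wrong; this is a simplified illustration, not the full sequential tuning equations:

```python
import numpy as np

def lag1_corr(res):
    """Lag-1 autocorrelation of a residual sequence (0 for white noise)."""
    r = res - res.mean()
    return float(np.dot(r[:-1], r[1:]) / np.dot(r, r))

def innovations(R_assumed, rng):
    Q, R_true = 0.05, 1.0
    x, xhat, P = 0.0, 0.0, 1.0
    nus = []
    for _ in range(20000):
        x += rng.normal(0.0, np.sqrt(Q))            # true state
        y = x + rng.normal(0.0, np.sqrt(R_true))    # measurement
        P += Q                                       # predict
        nu = y - xhat                                # innovation (residual)
        nus.append(nu)
        K = P / (P + R_assumed)                      # gain uses the *assumed* R
        xhat += K * nu
        P *= (1.0 - K)
    return np.array(nus)

rng = np.random.default_rng(7)
white = abs(lag1_corr(innovations(1.0, rng)))     # correct R: white residuals
colored = abs(lag1_corr(innovations(0.05, rng)))  # mismodeled R: correlated
print(white < 0.05 and colored > 0.1)
```

A residual-tuning scheme runs statistics like this alongside the filter and nudges the assumed Q and R until the residuals whiten.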
The Pilates method and cardiorespiratory adaptation to training.
Tinoco-Fernández, Maria; Jiménez-Martín, Miguel; Sánchez-Caravaca, M Angeles; Fernández-Pérez, Antonio M; Ramírez-Rodrigo, Jesús; Villaverde-Gutiérrez, Carmen
2016-01-01
Although all authors report beneficial health changes following training based on the Pilates method, no explicit analysis has been performed of its cardiorespiratory effects. The objective of this study was to evaluate possible changes in cardiorespiratory parameters with the Pilates method. A total of 45 university students aged 18-35 years (77.8% female and 22.2% male), who did not routinely practice physical exercise or sports, volunteered for the study and signed informed consent. The Pilates training was conducted over 10 weeks, with three 1-hour sessions per week. Physiological cardiorespiratory responses were assessed using a MasterScreen CPX apparatus. After the 10-week training, statistically significant improvements were observed in mean heart rate (135.4-124.2 beats/min), respiratory exchange ratio (1.1-0.9) and oxygen equivalent (30.7-27.6) values, among other spirometric parameters, in submaximal aerobic testing. These findings indicate that practice of the Pilates method has a positive influence on cardiorespiratory parameters in healthy adults who do not routinely practice physical exercise activities.
Restrictive Stochastic Item Selection Methods in Cognitive Diagnostic Computerized Adaptive Testing
ERIC Educational Resources Information Center
Wang, Chun; Chang, Hua-Hua; Huebner, Alan
2011-01-01
This paper proposes two new item selection methods for cognitive diagnostic computerized adaptive testing: the restrictive progressive method and the restrictive threshold method. They are built upon the posterior weighted Kullback-Leibler (KL) information index but include additional stochastic components either in the item selection index or in…
Leshin, Jonathan A; Rakauskaitė, Rasa; Dinman, Jonathan D; Meskauskas, Arturas
2010-01-01
One of the major challenges facing researchers working with eukaryotic ribosomes lies in their lability relative to their eubacterial and archaeal counterparts. In particular, lysis of cells and purification of eukaryotic ribosomes by conventional differential ultracentrifugation methods exposes them for long periods of time to a wide range of co-purifying proteases and nucleases, negatively impacting their structural integrity and functionality. A chromatographic method using a cysteine-charged Sulfolink resin was adapted to address these problems. This fast and simple method significantly reduces co-purifying proteolytic and nucleolytic activities, producing good yields of highly biochemically active yeast ribosomes with fewer nicks in their rRNAs. In particular, the chromatographic purification protocol significantly improved the quality of ribosomes isolated from mutant cells. This method is likely applicable to mammalian ribosomes as well. The simplicity of the method, and the enhanced purity and activity of chromatographically purified ribosomes, represent a significant technical advancement for the study of eukaryotic ribosomes.
Finite element methods for integrated aerodynamic heating analysis
NASA Technical Reports Server (NTRS)
Morgan, K.; Peraire, J.
1991-01-01
This report gives a description of the work which has been undertaken during the second year of a three-year research program. The objectives of the program are to produce finite element based procedures for the solution of the large-scale practical problems which are of interest to the Aerothermal Loads Branch (ALB) at NASA Langley Research Center. The problems of interest range from Euler simulations of full three-dimensional vehicle configurations to local analyses of three-dimensional viscous laminar flow. Adaptive meshes produced for both steady-state and transient problems are to be considered. An important feature of the work is the provision of specialized techniques which can be used at ALB for the development of an integrated fluid/thermal/structural modeling capability.
Approximation method to compute domain related integrals in structural studies
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2015-11-01
Various engineering calculi use integral calculus in theoretical models, i.e. analytical and numerical models. For usual problems, integrals have exact mathematical solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which there are direct calculus relations, i.e. in strength of materials the bending moment may be computed at some discrete points using the graphical integration of the shear force diagram, which usually has a simple shape. Another example is in mathematics, where the surface of a subgraph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of the work is to introduce our studies about the calculus of the integrals in the transverse section domains, computer-aided solutions and a generalizing method. The aim of our research is to create general computer-based methods to execute the calculi in structural studies. Thus, we define a Boolean algebra which operates with ‘simple’ shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of every ‘simple’ shape (-1 for the shapes to be subtracted). By ‘simple’ shape or ‘basic’ shape we define either shapes for which there are direct calculus relations, or domains whose frontiers are approximated by known functions and the corresponding calculus is carried out using an algorithm. The ‘basic’ shapes are linked to the calculus of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, in the libraries of ‘basic’ shapes, we included rectangles, ellipses and domains whose frontiers are approximated by spline functions. The domain triangularization methods suggested that another ‘basic’ shape to be considered is the triangle. The subsequent phase was to deduce the exact relations for the
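The signed "simple shape" algebra described above can be sketched directly: a composite section is a list of basic shapes carrying sign +1 for added material and -1 for holes, and a section property such as area is the signed sum; the shape set and dimensions below are illustrative:

```python
import math

def rect(b, h):
    """Area of a rectangular 'simple' shape."""
    return b * h

def circle(r):
    """Area of a circular 'simple' shape."""
    return math.pi * r * r

# Rectangular section 40 x 60 with a circular hole of radius 10:
section = [(+1, rect(40.0, 60.0)),   # solid rectangle, added
           (-1, circle(10.0))]       # hole, subtracted
area = sum(sign * a for sign, a in section)
print(abs(area - (2400.0 - 100.0 * math.pi)) < 1e-9)
```

The same signed summation extends to first and second moments of area, which is what links the shape library to the stress calculations the abstract mentions.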
NASA Technical Reports Server (NTRS)
Hegemann, S.; Shelhamer, M.; Kramer, P. D.; Zee, D. S.
2000-01-01
The phase of the translational linear VOR (LVOR) can be adaptively modified by exposure to a visual-vestibular mismatch. We extend here our earlier work on LVOR phase adaptation, and discuss the role of the oculomotor neural integrator. Ten subjects were oscillated laterally at 0.5 Hz, 0.3 g peak acceleration, while sitting upright on a linear sled. LVOR was assessed before and after adaptation with subjects tracking the remembered location of a target at 1 m in the dark. Phase and gain were measured by fitting sine waves to the desaccaded eye movements, and comparing sled and eye position. To adapt LVOR phase, the subject viewed a computer-generated stereoscopic visual display, at a virtual distance of 1 m, that moved so as to require either a phase lead or a phase lag of 53 deg. Adaptation lasted 20 min, during which subjects were oscillated at 0.5 Hz/0.3 g. Four of five subjects produced an adaptive change in the lag condition (range 4-45 deg), and all five produced a change in the lead condition (range 19-56 deg), as requested. Changes in drift on eccentric gaze suggest that the oculomotor velocity-to-position integrator may be involved in the phase changes.
An Integrated Approach to Research Methods and Capstone
ERIC Educational Resources Information Center
Postic, Robert; McCandless, Ray; Stewart, Beth
2014-01-01
In 1991, the AACU issued a report on improving undergraduate education suggesting, in part, that a curriculum should be both comprehensive and cohesive. Since 2008, we have systematically integrated our research methods course with our capstone course in an attempt to accomplish the twin goals of comprehensiveness and cohesion. By taking this…
Integrating Multiple Teaching Methods into a General Chemistry Classroom.
ERIC Educational Resources Information Center
Francisco, Joseph S.; Nicoll, Gayle; Trautmann, Marcella
1998-01-01
Four different methods of teaching--cooperative learning, class discussions, concept maps, and lectures--were integrated into a freshman-level general chemistry course to compare students' levels of participation. Findings support the idea that multiple modes of learning foster the metacognitive skills necessary for mastering general chemistry.…
Integrating Methods and Materials: Developing Trainees' Reading Skills.
ERIC Educational Resources Information Center
Jarvis, Jennifer
1987-01-01
Explores issues arising from a research project which studied ways of meeting the reading needs of trainee primary school teachers (from Malawi and Tanzania) of English as a foreign language. Topics discussed include: the classroom teaching situation; teaching "quality"; and integration of materials and methods. (CB)
Integrability: mathematical methods for studying solitary waves theory
NASA Astrophysics Data System (ADS)
Wazwaz, Abdul-Majid
2014-03-01
In recent decades, substantial experimental research efforts have been devoted to linear and nonlinear physical phenomena. In particular, studies of integrable nonlinear equations in solitary waves theory have attracted intensive interest from mathematicians, with the principal goal of fostering the development of new methods, and from physicists, who seek solutions that represent physical phenomena and aim to form a bridge between mathematical results and scientific structures. The aim for both groups is to build up our current understanding and facilitate future developments, develop more creative results and create new trends in the rapidly developing field of solitary waves. The notion of the integrability of certain partial differential equations occupies an important role in current and future trends, but a unified rigorous definition of the integrability of differential equations still does not exist. For example, an integrable model in the Painlevé sense may not be integrable in the Lax sense. The Painlevé sense indicates that the solution can be represented as a Laurent series in powers of some function that vanishes on an arbitrary surface, with the possibility of truncating the Laurent series at finite powers of this function. The concept of Lax pairs introduces another meaning of the notion of integrability. The Lax pair formulates the integrability of a nonlinear equation as the compatibility condition of two linear equations. However, many researchers have shown that necessary conditions for integrability are the existence of an infinite series of generalized symmetries or conservation laws for the given equation. The existence of multiple soliton solutions often indicates the integrability of the equation, but other tests, such as the Painlevé test or the Lax pair, are necessary to confirm integrability for a given equation. In the context of completely integrable equations, studies are flourishing because these equations are able to describe the
Wang, Qiuying; Cui, Xufei; Li, Yibing; Ye, Fang
2017-02-03
To improve the ability of autonomous navigation for Unmanned Surface Vehicles (USVs), multi-sensor integrated navigation based on the Inertial Navigation System (INS), Celestial Navigation System (CNS) and Doppler Velocity Log (DVL) is proposed. The CNS position and the DVL velocity are introduced as reference information to correct the INS divergence error. The autonomy of the integrated system based on INS/CNS/DVL is much better than that of integration based on INS/GNSS alone. However, the accuracy of the DVL velocity and the CNS position is degraded by DVL measurement noise and bad weather, respectively. Hence, the INS divergence error cannot be estimated and corrected by the reference information. To resolve this problem, the Adaptive Information Sharing Factor Federated Filter (AISFF) is introduced to fuse the data. The information sharing factor of the federated filter is adaptively adjusted to maintain multiple component solutions usable as back-ups, which improves the reliability of the overall system. The effectiveness of this approach is demonstrated by simulation and experiment; the results show that when the DVL velocity accuracy is degraded and the CNS cannot work under bad weather conditions, the INS/CNS/DVL integrated system can operate stably based on the AISFF method.
Integrated soil fertility management in sub-Saharan Africa: unravelling local adaptation
NASA Astrophysics Data System (ADS)
Vanlauwe, B.; Descheemaeker, K.; Giller, K. E.; Huising, J.; Merckx, R.; Nziguheba, G.; Wendt, J.; Zingore, S.
2014-12-01
Intensification of smallholder agriculture in sub-Saharan Africa is necessary to address rural poverty and natural resource degradation. Integrated Soil Fertility Management (ISFM) is a means to enhance crop productivity while maximizing the agronomic efficiency (AE) of applied inputs, and can thus contribute to sustainable intensification. ISFM consists of a set of best practices, preferably used in combination, including the use of appropriate germplasm, the appropriate use of fertilizer and of organic resources, and good agronomic practices. The large variability in soil fertility conditions within smallholder farms is also recognised within ISFM, including soils with constraints beyond those addressed by fertilizer and organic inputs. The variable biophysical environments that characterize smallholder farming systems have profound effects on crop productivity and AE and targeted application of limited agro-inputs and management practices is necessary to enhance AE. Further, management decisions depend on the farmer's resource endowments and production objectives. In this paper we discuss the "local adaptation" component of ISFM and how this can be conceptualized within an ISFM framework, backstopped by analysis of AE at plot and farm level. At plot level, a set of four constraints to maximum AE is discussed in relation to "local adaptation": soil acidity, secondary nutrient and micro-nutrient (SMN) deficiencies, physical constraints, and drought stress. In each of these cases, examples are presented whereby amendments and/or practices addressing these have a significantly positive impact on fertilizer AE, including mechanistic principles underlying these effects. While the impact of such amendments and/or practices is easily understood for some practices (e.g., the application of SMNs where these are limiting), for others, more complex interactions with fertilizer AE can be identified (e.g., water harvesting under varying rainfall conditions). At farm scale
Integrated soil fertility management in sub-Saharan Africa: unravelling local adaptation
NASA Astrophysics Data System (ADS)
Vanlauwe, B.; Descheemaeker, K.; Giller, K. E.; Huising, J.; Merckx, R.; Nziguheba, G.; Wendt, J.; Zingore, S.
2015-06-01
Intensification of smallholder agriculture in sub-Saharan Africa is necessary to address rural poverty and natural resource degradation. Integrated soil fertility management (ISFM) is a means to enhance crop productivity while maximizing the agronomic efficiency (AE) of applied inputs, and can thus contribute to sustainable intensification. ISFM consists of a set of best practices, preferably used in combination, including the use of appropriate germplasm, the appropriate use of fertilizer and of organic resources, and good agronomic practices. The large variability in soil fertility conditions within smallholder farms is also recognized within ISFM, including soils with constraints beyond those addressed by fertilizer and organic inputs. The variable biophysical environments that characterize smallholder farming systems have profound effects on crop productivity and AE, and targeted application of agro-inputs and management practices is necessary to enhance AE. Further, management decisions depend on the farmer's resource endowments and production objectives. In this paper we discuss the "local adaptation" component of ISFM and how this can be conceptualized within an ISFM framework, backstopped by analysis of AE at plot and farm level. At plot level, a set of four constraints to maximum AE is discussed in relation to "local adaptation": soil acidity, secondary nutrient and micronutrient (SMN) deficiencies, physical constraints, and drought stress. In each of these cases, examples are presented whereby amendments and/or practices addressing these have a significantly positive impact on fertilizer AE, including mechanistic principles underlying these effects. While the impact of such amendments and/or practices is easily understood for some practices (e.g. the application of SMNs where these are limiting), for others, more complex processes influence AE (e.g. water harvesting under varying rainfall conditions). At farm scale, adjusting fertilizer applications to
ERIC Educational Resources Information Center
Wittich, Walter; Watanabe, Donald H.; Scully, Lizabeth; Bergevin, Martin
2013-01-01
Introduction: In the Province of Quebec, Canada, it is estimated that only about one-third of working-age adults with visual impairments are part of the workforce, despite ongoing efforts of rehabilitation and government agencies to integrate these individuals. The present article describes the development and adaptation of a pre-employment…
Singularity Preserving Numerical Methods for Boundary Integral Equations
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki (Principal Investigator)
1996-01-01
In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with the Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of the degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and a singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.
Adaptive method for quantifying uncertainty in discharge measurements using velocity-area method.
NASA Astrophysics Data System (ADS)
Despax, Aurélien; Favre, Anne-Catherine; Belleville, Arnaud
2015-04-01
Streamflow information provided by hydrometric services such as EDF-DTG allows real-time monitoring of rivers, streamflow forecasting, major hydrological studies and engineering design. In open channels, the traditional approach to measuring flow uses a rating curve, an indirect method for estimating river discharge from water level and point discharge measurements. A large proportion of these discharge measurements are performed using the velocity-area method, which consists in integrating flow velocities and depths through the cross-section [1]. The velocity field is estimated by choosing a number m of verticals, distributed across the river, on which the vertical velocity profile is sampled by a current-meter at ni different depths. Uncertainties from several sources are related to the measurement process. To date, the framework for assessing uncertainty in velocity-area discharge measurements is the method presented in the ISO 748 standard [2], which follows the GUM [3] approach. The combined uncertainty in measured discharge u(Q), at the 68% level of confidence, proposed by the ISO 748 standard is expressed as: u^2(Q) = u_s^2 + u_m^2 + [ Σ_i q_i^2 ( u^2(B_i) + u^2(D_i) + u_p^2(V_i) + (1/n_i)(u_c^2(V_i) + u_exp^2(V_i)) ) ] / (Σ_i q_i)^2
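The velocity-area integration the abstract refers to can be sketched with the classical mid-section rule: the discharge is the sum of q_i = v_i * d_i * b_i over the m verticals, where v_i is the mean velocity on vertical i, d_i its depth, and b_i the width assigned to it. This is an illustrative sketch with hypothetical data, not EDF-DTG's processing chain.

```python
# Mid-section velocity-area discharge: each vertical is assigned half the
# distance to each of its neighbours as its panel width.

def midsection_discharge(positions, depths, velocities):
    """positions: distances of the verticals across the section (monotonic);
    depths, velocities: depth and mean velocity at each vertical."""
    Q = 0.0
    for i in range(len(positions)):
        left = positions[max(i - 1, 0)]
        right = positions[min(i + 1, len(positions) - 1)]
        width = (right - left) / 2.0  # half-distance to each neighbour
        Q += velocities[i] * depths[i] * width
    return Q

# Five verticals across a 10 m wide stream (hypothetical survey data),
# with zero depth and velocity at the banks.
Q_ex = midsection_discharge([0, 2.5, 5, 7.5, 10],
                            [0.0, 1.2, 1.8, 1.1, 0.0],
                            [0.0, 0.6, 0.9, 0.5, 0.0])
print(Q_ex)  # discharge in m^3/s
```

Each term v_i * d_i * width is one of the q_i whose squared values weight the per-vertical uncertainty components in the ISO 748 combined-uncertainty formula.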
Cardiac power integral: a new method for monitoring cardiovascular performance.
Rimehaug, Audun E; Lyng, Oddveig; Nordhaug, Dag O; Løvstakken, Lasse; Aadahl, Petter; Kirkeby-Garstad, Idar
2013-11-01
Cardiac power (PWR) is the continuous product of flow and pressure in the proximal aorta. Our aim was to validate the PWR integral as a marker of left ventricular energy transfer to the aorta, by comparing it to stroke work (SW) under multiple loading and contractility conditions in subjects without obstructions in the left ventricular outflow tract. Six pigs under general anesthesia were equipped with transit-time flow probes on their proximal aortas and Millar micromanometer catheters in their descending aortas to measure PWR, and Leycom conductance catheters in their left ventricles to measure SW. The PWR integral was calculated as the time integral of PWR per cardiac cycle. SW was calculated as the area encompassed by the pressure-volume loop (PV loop). The relationship between the PWR integral and SW was tested during extensive mechanical and pharmacological interventions that affected the loading conditions and myocardial contractility. The PWR integral displayed a strong correlation with SW in all pigs (R² > 0.95, P < 0.05) under all conditions, using a linear model. Regression analysis and Bland-Altman plots also demonstrated a stable relationship. A mixed linear analysis indicated that the slope of the SW-to-PWR-integral relationship was similar among all six animals, whereas loading and contractility conditions tended to affect the slope. The PWR integral followed SW and appeared to be a promising parameter for monitoring the energy transferred from the left ventricle to the aorta. This conclusion motivates further studies to determine whether the PWR integral can be evaluated using less invasive methods, such as echocardiography combined with a radial artery catheter.
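The PWR integral defined above (the time integral of pressure times flow over one cardiac cycle) can be sketched numerically with the trapezoidal rule. The waveforms below are crude synthetic stand-ins, not physiological recordings.

```python
import math

def pwr_integral(t, pressure, flow):
    """Trapezoidal time integral of instantaneous power p(t)*q(t) over one beat."""
    pwr = [p * q for p, q in zip(pressure, flow)]
    return sum((pwr[i] + pwr[i + 1]) * (t[i + 1] - t[i]) / 2.0
               for i in range(len(t) - 1))

# One 0.8 s beat: half-sine ejection flow and a pulse-shaped pressure
# (arbitrary units; synthetic waveforms for illustration only).
t = [i * 0.8 / 200 for i in range(201)]
flow = [400 * max(math.sin(2 * math.pi * ti / 0.8), 0.0) for ti in t]          # mL/s
pressure = [80 + 40 * max(math.sin(2 * math.pi * ti / 0.8), 0.0) for ti in t]  # mmHg
energy = pwr_integral(t, pressure, flow)
print(energy)  # energy transferred per beat, in mmHg*mL
```

Stroke work from the PV loop has the same units (pressure times volume), which is why a linear relationship between the two quantities is plausible a priori.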
Hasse, J U; Weingaertner, D E
2016-01-01
As the central product of the BMBF-KLIMZUG-funded Joint Network and Research Project (JNRP) 'dynaklim - Dynamic adaptation of regional planning and development processes to the effects of climate change in the Emscher-Lippe region (North Rhine-Westphalia, Germany)', the Roadmap 2020 'Regional Climate Adaptation' has been developed by the various regional stakeholders and institutions, containing specific regional scenarios, strategies and adaptation measures applicable throughout the region. This paper presents the method, elements and main results of this regional roadmap process, using the example of the thematic sub-roadmap 'Water Sensitive Urban Design 2020'. With a focus on the process support tool 'KlimaFLEX', one of the main adaptation measures of the WSUD 2020 roadmap, typical challenges for integrated climate change adaptation, such as scattered knowledge, knowledge gaps and divided responsibilities, are discussed alongside potential solutions and promising opportunities for urban development and urban water management. With the roadmap and the related tool, the relevant stakeholders of the Emscher-Lippe region have jointly developed important prerequisites to integrate their knowledge, to clarify vulnerabilities, adaptation goals, responsibilities and interests, and to coordinate, with foresight, measures, resources, priorities and schedules for efficient joint urban planning, well-grounded decision-making in times of continued uncertainty and step-by-step implementation of adaptation measures from now on.
NASA Astrophysics Data System (ADS)
Tanizawa, Ken; Hirose, Akira
Adaptive polarization mode dispersion (PMD) compensation is required for the speed-up and advancement of present optical communications. The combination of a tunable PMD compensator and an adaptive control method achieves adaptive PMD compensation. In this paper, we report an effective search-control algorithm for the feedback control of the PMD compensator. The algorithm is based on the hill-climbing method; however, unlike the conventional hill-climbing method, the step size changes randomly to prevent the convergence from being trapped at a local maximum or on a flat region. The randomness follows Gaussian probability density functions. We conducted transmission simulations at 160 Gb/s, and the results show that the proposed method provides better compensator control than the conventional hill-climbing method.
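The randomized hill-climbing idea in this abstract can be illustrated on a toy objective: a fixed-step hill climb stalls on a local maximum, whereas steps drawn from a Gaussian occasionally jump far enough to escape. The objective and all parameter values below are hypothetical stand-ins for the compensator's feedback signal, not the authors' simulation.

```python
import math
import random

def gaussian_hill_climb(f, x, sigma=2.0, iters=2000, seed=1):
    """Hill climbing with Gaussian-distributed step sizes: probe both
    directions each iteration and accept only improvements."""
    random.seed(seed)
    best = f(x)
    for _ in range(iters):
        step = random.gauss(0.0, sigma)      # random step size, not fixed
        for cand in (x + step, x - step):    # probe both directions
            if f(cand) > best:
                x, best = cand, f(cand)
    return x, best

# Toy objective with a local peak near x = 0 (height ~ 1) and the global
# peak at x = 4 (height ~ 3). Starting on the local peak, only an
# occasional large Gaussian step can jump across the valley.
f = lambda x: 3 * math.exp(-(x - 4) ** 2) + math.exp(-x ** 2)
x_opt, val = gaussian_hill_climb(f, x=0.0)
print(round(x_opt, 1))  # close to 4.0
```

Because only improvements are accepted, the method still behaves like hill climbing between jumps; the Gaussian tail is what supplies the occasional long probe that a fixed step size cannot.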
Zhou, Qifan; Zhang, Hai; Li, You; Li, Zheng
2015-01-01
The main aim of this paper is to develop a low-cost GNSS/MEMS-IMU tightly-coupled integration system with aiding information that can provide reliable position solutions when the GNSS signal is challenged such that fewer than four satellites are visible in a harsh environment. To achieve this goal, we introduce an adaptive tightly-coupled integration system with height and heading aiding (ATCA). This approach adopts a novel redundant-measurement noise-estimation method for an adaptive Kalman filter application, augments external measurements in the filter to aid the position solutions, and uses different filters to deal with various situations. On the one hand, the adaptive Kalman filter makes use of the redundant measurement system’s difference sequence to estimate and tune the noise variance, instead of employing a traditional innovation sequence, to avoid coupling with the state vector error. On the other hand, the method uses the external height and heading angle as auxiliary references and establishes a model for the measurement equation in the filter. It also changes the effective filter online based on the number of tracked satellites. These measures enhance the position constraints and the system observability, improve the computational efficiency and lead to good results. Both simulated and practical experiments have been carried out, and the results demonstrate that the proposed method is effective at limiting the system errors when fewer than four satellites are visible, providing a satisfactory navigation solution. PMID:26393605
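The redundant-measurement idea above can be sketched in scalar form: when two sensors observe the same quantity, the variance of their difference sequence equals the sum of their noise variances, so the measurement noise can be estimated without touching the state innovations. This is a minimal sketch under assumed noise levels, not the paper's GNSS/IMU filter.

```python
import random

# Two redundant sensors observing the same constant quantity
# (hypothetical values: truth = 5.0, noise sigma = 0.5 each).
random.seed(0)
truth = 5.0
z1 = [truth + random.gauss(0, 0.5) for _ in range(500)]
z2 = [truth + random.gauss(0, 0.5) for _ in range(500)]

# Estimate R from the redundant-measurement difference sequence:
# Var(z1 - z2) = R1 + R2 = 2R for equal sensors.
d = [a - b for a, b in zip(z1, z2)]
mean_d = sum(d) / len(d)
R_hat = sum((x - mean_d) ** 2 for x in d) / (len(d) - 1) / 2.0

# Scalar Kalman filter for a constant state, using the adapted R.
x, P, Q = 0.0, 100.0, 1e-6
for z in z1:
    P += Q                    # predict
    K = P / (P + R_hat)       # gain with the estimated noise variance
    x += K * (z - x)          # update
    P *= (1 - K)
print(round(R_hat, 2), round(x, 2))  # R_hat ~ 0.25, x ~ 5
```

The point of using the difference sequence rather than the innovations is visible here: `d` depends only on the sensor noises, never on the state estimate, so a diverging filter cannot corrupt the noise estimate.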
A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures
Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George
2012-01-01
We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.
Adaptive bit truncation and compensation method for EZW image coding
NASA Astrophysics Data System (ADS)
Dai, Sheng-Kui; Zhu, Guangxi; Wang, Yao
2003-09-01
The embedded zerotree wavelet algorithm (EZW) is widely adopted to compress the wavelet coefficients of images, with the property that the bit stream can be truncated at any point. The lower bit planes of the wavelet coefficients are verified to be less important than the higher bit planes, and can therefore be truncated and left unencoded. Based on experiments, a generalized function is deduced in this paper that provides an approximate guide for the EZW encoder to decide intelligently how many low bit planes to truncate. In the EZW decoder, a simple method is presented to compensate for the truncated wavelet coefficients; this markedly enhances the quality of the reconstructed image at almost no additional cost.
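The truncation-and-compensation idea can be sketched on integer coefficients: the encoder drops the lowest k bit planes, and the decoder adds back the midpoint of the lost range, halving the worst-case truncation error. This is an illustrative sketch of the general principle, not the paper's EZW-specific scheme.

```python
# Low-bit-plane truncation with decoder-side midpoint compensation.

def truncate(coeffs, k):
    """Drop the k lowest bit planes of each coefficient (sign preserved)."""
    return [(1 if c >= 0 else -1) * (abs(c) >> k) for c in coeffs]

def compensate(truncated, k):
    """Shift magnitudes back and add the midpoint of the truncated interval
    (nonzero coefficients only, so zeros stay exactly zero)."""
    out = []
    for c in truncated:
        mag = (abs(c) << k) + ((1 << (k - 1)) if k > 0 and c != 0 else 0)
        out.append(mag if c >= 0 else -mag)
    return out

coeffs = [37, -25, 6, 0, 100]
rec = compensate(truncate(coeffs, 3), 3)
print(rec)  # e.g. 37 -> (37 >> 3) = 4 -> (4 << 3) + 4 = 36
```

Without compensation, a truncated coefficient is reconstructed at the bottom of its quantization interval (error up to 2^k - 1); adding the midpoint bounds the error by 2^(k-1), which is the source of the quality gain the abstract reports.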
Naikar, Neelam; Elix, Ben
2016-01-01
This paper proposes an approach for integrated system design, which has the intent of facilitating high levels of effectiveness in sociotechnical systems by promoting their capacity for adaptation. Building on earlier ideas and empirical observations, this approach recognizes that to create adaptive systems it is necessary to integrate the design of all of the system elements, including the interfaces, teams, training, and automation, such that workers are supported in adapting their behavior as well as their structure, or organization, in a coherent manner. Current approaches for work analysis and design are limited in regard to this fundamental objective, especially in cases when workers are confronted with unforeseen events. A suitable starting point is offered by cognitive work analysis (CWA), but while this framework can support actors in adapting their behavior, it does not necessarily accommodate adaptations in their structure. Moreover, associated design approaches generally focus on individual system elements, and those that consider multiple elements appear limited in their ability to facilitate integration, especially in the manner intended here. The proposed approach puts forward the set of possibilities for work organization in a system as the central mechanism for binding the design of its various elements, so that actors can adapt their structure as well as their behavior, in a unified fashion, to handle both familiar and novel conditions. Accordingly, this paper demonstrates how the set of possibilities for work organization in a system may be demarcated independently of the situation, through extensions of CWA, and how it may be utilized in design. This lynchpin, conceptualized in the form of a diagram of work organization possibilities (WOP), is important for preserving a system's inherent capacity for adaptation. Future research should focus on validating these concepts and establishing the feasibility of implementing them in industrial contexts.
An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
1999-01-01
An unstructured grid adaptation technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution-adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady-state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
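The thermodynamic integration identity behind the method above is ln Z = ∫₀¹ E_β[ln L(θ)] dβ, where the expectation is taken under the "power posterior" proportional to p(θ)L(θ)^β. The sketch below verifies the identity on a 1-D conjugate toy model, using grid quadrature in place of the MCMC path sampling the study actually uses.

```python
import math

# Toy model: prior theta ~ N(0,1); one datum y = 1 with likelihood N(y; theta, 1).
thetas = [-8 + 16 * i / 4000 for i in range(4001)]
dt = 16 / 4000
prior = [math.exp(-t * t / 2) / math.sqrt(2 * math.pi) for t in thetas]
loglik = [-0.5 * math.log(2 * math.pi) - (1 - t) ** 2 / 2 for t in thetas]

def expected_loglik(beta):
    """E[ln L] under the power posterior p(theta) * L(theta)^beta (normalized)."""
    w = [p * math.exp(beta * l) for p, l in zip(prior, loglik)]
    z = sum(w) * dt
    return sum(wi * l for wi, l in zip(w, loglik)) * dt / z

# Path integral over the inverse-temperature ladder beta in [0, 1] (trapezoid).
betas = [i / 100 for i in range(101)]
E = [expected_loglik(b) for b in betas]
ti = sum((E[i] + E[i + 1]) / 2 * 0.01 for i in range(100))

# Direct evidence for comparison: Z = integral of prior * likelihood,
# analytically the density of N(0, 2) at y = 1.
direct = math.log(sum(p * math.exp(l) for p, l in zip(prior, loglik)) * dt)
print(round(ti, 3), round(direct, 3))  # both approximately -1.52
```

In the study's setting the inner expectation is estimated by MCMC at each β rather than by quadrature, but the outer integral over the β ladder is the same construction.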
Principles and methods of integrative genomic analyses in cancer.
Kristensen, Vessela N; Lingjærde, Ole Christian; Russnes, Hege G; Vollan, Hans Kristian M; Frigessi, Arnoldo; Børresen-Dale, Anne-Lise
2014-05-01
Combined analyses of molecular data, such as DNA copy-number alteration, mRNA and protein expression, point to biological functions and molecular pathways being deregulated in multiple cancers. Genomic, metabolomic and clinical data from various solid cancers and model systems are emerging and can be used to identify novel patient subgroups for tailored therapy and monitoring. The integrative genomics methodologies that are used to interpret these data require expertise in different disciplines, such as biology, medicine, mathematics, statistics and bioinformatics, and they can seem daunting. The objectives, methods and computational tools of integrative genomics that are available to date are reviewed here, as is their implementation in cancer research.
Gibson, Oliver R; Mee, Jessica A; Tuttle, James A; Taylor, Lee; Watt, Peter W; Maxwell, Neil S
2015-01-01
Heat acclimation requires the interaction between hot environments and exercise to elicit thermoregulatory adaptations. Optimal synergism between these parameters is unknown. Common practice involves either a fixed-workload model, where exercise prescription is controlled and core temperature is uncontrolled, or an isothermic model, where core temperature is controlled and work rate is manipulated to control core temperature. Following a baseline heat stress test, 24 males were assigned to a between-groups experimental design comprising short-term heat acclimation (STHA; five 90 min sessions) and long-term heat acclimation (LTHA; STHA plus a further five 90 min sessions), utilising either fixed intensity (50% VO2peak), continuous isothermic (target rectal temperature 38.5 °C for STHA and LTHA), or progressive isothermic heat acclimation (target rectal temperature 38.5 °C for STHA, and 39.0 °C for LTHA). Identical heat stress tests followed STHA and LTHA to determine the magnitude of adaptation. All methods induced equal adaptation from baseline; however, the isothermic methods achieved this with reduced exercise durations (STHA = -66% and LTHA = -72%) and lower mean session intensity (STHA = -13% VO2peak and LTHA = -9% VO2peak) in comparison to the fixed model (p < 0.05). STHA decreased exercising heart rate (-10 b min(-1)), core (-0.2 °C) and skin temperature (-0.51 °C), with sweat losses increasing (+0.36 L h(-1)) (p < 0.05). No difference between heat acclimation methods, and no further benefit of LTHA, was observed (p > 0.05). Only thermal sensation improved from baseline to STHA (-0.2), and then between STHA and LTHA (-0.5) (p < 0.05). Both the continuous and progressive isothermic methods elicited exercise durations, mean session intensities, and mean rectal temperatures consistent with more efficient administration for maximising adaptation. Short-term isothermic methods are therefore optimal for individuals aiming to achieve heat adaptation most economically, i.e. when integrating heat acclimation into
Impedance adaptation methods of the piezoelectric energy harvesting
NASA Astrophysics Data System (ADS)
Kim, Hyeoungwoo
In this study, the important issues of energy recovery were addressed and a comprehensive investigation was performed on harvesting electrical power from an ambient mechanical vibration source. Also discussed are the impedance matching methods used to increase the efficiency of energy transfer from the environment to the application. Initially, the mechanical impedance matching method was investigated to increase the mechanical energy transferred to the transducer from the environment. This was done by reducing mechanical impedance factors such as the damping factor and the energy reflection ratio. The vibration source and the transducer were modeled as a two-degree-of-freedom dynamic system with mass, spring constant, and damper. The transmissibility, employed to show how much mechanical energy was transferred in this system, was affected by the damping ratio and the stiffness of the elastic materials. The mechanical impedance of the system was described by an electrical analog, using the analogy between the two systems, in order to simplify the total mechanical impedance. Secondly, the transduction rate from mechanical to electrical energy was improved by using a PZT material with a high figure of merit and a high electromechanical coupling factor for electrical power generation, and a piezoelectric transducer with a high transduction rate was designed and fabricated. A high-g material (g33 = 40 × 10⁻³ Vm/N) was developed to improve the figure of merit of the PZT ceramics. The cymbal composite transducer was found to be a promising structure for piezoelectric energy harvesting under high force at cyclic conditions (10-200 Hz), because it has an effective strain coefficient almost 40 times higher than that of PZT ceramics. The endcap of the cymbal also enhances the endurance of the ceramic, sustaining AC loads along with stress amplification. In addition, a macro fiber composite (MFC) was employed as a strain component because of its flexibility and the high electromechanical coupling
Coleman, Andre M.
2009-07-17
The advanced geospatial information extraction and analysis capabilities of Geographic Information Systems (GISs) and Artificial Neural Networks (ANNs), particularly Self-Organizing Maps (SOMs), provide a topology-preserving means for reducing and understanding complex data relationships in the landscape. The Adaptive Landscape Classification Procedure (ALCP) is presented as an adaptive and evolutionary capability in which varying types of data can be assimilated to address different management needs, such as hydrologic response, erosion potential, habitat structure, instrumentation placement, and various forecast or what-if scenarios. This paper defines how the evaluation and analysis of spatial and/or temporal patterns in the landscape can provide insight into complex ecological, hydrological, climatic, and other natural and anthropogenic-influenced processes. Establishing relationships among high-dimensional datasets through neurocomputing-based pattern recognition methods can help (1) resolve large volumes of data into a structured and meaningful form; (2) provide an approach for inferring landscape processes in areas that have limited data available but exhibit similar landscape characteristics; and (3) discover the value of individual variables or groups of variables that contribute to specific processes in the landscape. Classification of hydrologic patterns in the landscape is demonstrated.
Method to integrate full particle orbit in toroidal plasmas
NASA Astrophysics Data System (ADS)
Wei, X. S.; Xiao, Y.; Kuley, A.; Lin, Z.
2015-09-01
It is important to integrate the full particle orbit accurately when studying charged particle dynamics in electromagnetic waves with frequencies higher than the cyclotron frequency. We have derived a form of the Boris scheme using magnetic coordinates, which can be used to integrate the cyclotron orbit effectively in toroidal geometry over a long period of time. The new method has been verified by a full particle orbit simulation in toroidal geometry without high frequency waves. The full particle orbit calculation recovers the guiding center banana orbit. This method has better numerical properties than the conventional Runge-Kutta method for conserving particle energy and magnetic moment. The toroidal precession frequency is found to match that from the guiding center simulation. Many other important phenomena in the presence of an electric field, such as the E × B drift, the Ware pinch effect and the neoclassical polarization drift, are also verified by the full orbit simulation.
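The Boris scheme itself is compact enough to sketch. The magnetic-coordinate form derived in the paper is more involved; the following is a minimal Cartesian Boris push in Python (field values, charge-to-mass ratio, and step size are illustrative), whose exact-norm magnetic rotation is what gives the scheme its good energy behaviour relative to Runge-Kutta:

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """Advance one particle by one step of the Boris scheme:
    half electric kick, norm-preserving magnetic rotation, half kick."""
    v_minus = v + 0.5 * q_m * E * dt
    t = 0.5 * q_m * B * dt                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)   # rotated velocity, |v_plus| = |v_minus|
    v_new = v_plus + 0.5 * q_m * E * dt
    return x + v_new * dt, v_new

# Gyration in a uniform B field with no E field: the speed (hence the
# kinetic energy) is conserved to machine precision over many periods,
# which Runge-Kutta integrators only approximate.
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(10000):
    x, v = boris_push(x, v, E, B, q_m=1.0, dt=0.05)
print(np.linalg.norm(v))   # stays 1.0 up to round-off
```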
NASA Astrophysics Data System (ADS)
Wong, Kin-Yiu; Gao, Jiali
2007-12-01
Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to any realistic system beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good agreement with experiments. We hope that our method could be used by non-path-integral experts or experimentalists as a "black box" for any given system.
Wu, Guo-Qiang; Wu, Shu-Nan; Bai, Yu-Guang; Liu, Lei
2013-01-01
In this paper, an adaptive law with an integral action is designed and implemented on a DC motor employing a rotary encoder and tachometer sensors. Stability is proved by using a Lyapunov function. The tracking errors asymptotically converge to zero according to Barbalat's lemma. The tracking performance is specified by a reference model, the convergence rate of the Lyapunov function is specified by the matrix Q, and the control action and the state weighting are restricted by the matrix Γ. The experimental results demonstrate the effectiveness of the proposed control. The maximum errors of the position and velocity with the integral action are reduced from 0.4 V and 1.5 V to 0.2 V and 0.4 V, respectively. The adaptive control with the integral action gives satisfactory performance, even when it suffers from input disturbance. PMID:23575034
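The paper's Lyapunov-designed law is specific to its DC-motor hardware and is not reproduced here; as a hedged illustration of the same mechanism, below is a textbook model-reference adaptive controller for a first-order plant with unknown gains (the plant coefficients, reference model, and adaptation gain are made up for the sketch). The Lyapunov function V = e²/2 + b(θ̃₁² + θ̃₂²)/(2γ) gives V̇ = -2e² ≤ 0 under the update laws shown, so the tracking error e = x - x_m is driven to zero, as in the Barbalat-lemma argument of the abstract:

```python
import numpy as np

def simulate_mrac(T=50.0, dt=0.001, gamma=2.0):
    """Model-reference adaptive control with Lyapunov update laws.
    Plant:      x'  = a*x + b*u     (a, b unknown to the controller)
    Reference:  xm' = -2*xm + 2*r   (desired closed-loop behaviour)
    Control:    u = th1*r + th2*x
    Updates:    th1' = -gamma*e*r,  th2' = -gamma*e*x,  e = x - xm."""
    a, b = -1.0, 1.0                          # true (hidden) plant parameters
    x = xm = th1 = th2 = 0.0
    r = 1.0                                   # step reference
    for _ in range(int(T / dt)):
        u = th1 * r + th2 * x
        e = x - xm                            # tracking error
        th1 += -gamma * e * r * dt            # adaptive gain updates
        th2 += -gamma * e * x * dt
        x += (a * x + b * u) * dt             # forward-Euler plant step
        xm += (-2.0 * xm + 2.0 * r) * dt      # and reference model step
    return e, th1, th2

e, th1, th2 = simulate_mrac()
print(abs(e))   # tracking error, driven toward zero
```

The ideal gains here would be θ₁ = 2 and θ₂ = -1; with a constant reference the error converges even if the gains themselves do not reach those values.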
RADIO GALAXY 3C 230 OBSERVED WITH GEMINI LASER ADAPTIVE-OPTICS INTEGRAL-FIELD SPECTROSCOPY
Steinbring, Eric
2011-11-15
The Altair laser-guide-star adaptive optics facility combined with the near-infrared integral-field spectrometer on Gemini North have been employed to study the morphology and kinematics of 3C 230 at z = 1.5, the first such observations of a high-redshift radio galaxy. These suggest a bi-polar outflow spanning 0.9″ (~16 kpc projected distance for a standard ΛCDM cosmology) reaching a mean relative velocity of 235 km s⁻¹ in redshifted Hα + [N II] and [S II] emission. Structure is resolved to 0.1″ (0.8 kpc), which is well correlated with optical images from the Hubble Space Telescope and Very Large Array radio maps obtained at similar spatial resolution. Line diagnostics suggest that over the 10⁷ yr to 10⁸ yr duration of its active galactic nucleus activity, gas has been ejected into bright turbulent lobes at rates comparable to star formation, although constituting perhaps only 1% of the baryonic mass in the galaxy.
Adaptive neuro-fuzzy inference system for real-time monitoring of integrated-constructed wetlands.
Dzakpasu, Mawuli; Scholz, Miklas; McCarthy, Valerie; Jordan, Siobhán; Sani, Abdulkadir
2015-01-01
Monitoring large-scale treatment wetlands is costly and time-consuming, but required by regulators. Some analytical results are available only after 5 days or even longer. Thus, adaptive neuro-fuzzy inference system (ANFIS) models were developed to predict the effluent concentrations of 5-day biochemical oxygen demand (BOD5) and NH4-N from a full-scale integrated constructed wetland (ICW) treating domestic wastewater. The ANFIS models were developed and validated with a 4-year data set from the ICW system. Cost-effective variables that are quicker and easier to measure were selected as the possible predictors based on their goodness of correlation with the outputs. A self-organizing neural network was applied to extract the most relevant input variables from all the possible input variables. Fuzzy subtractive clustering was used to identify the architecture of the ANFIS models and to optimize the fuzzy rules, overall improving the network performance. According to the findings, ANFIS could predict the effluent quality variation quite accurately. Effluent BOD5 and NH4-N concentrations were predicted relatively accurately from other effluent water quality parameters, which can be measured within a few hours. The simulated effluent BOD5 and NH4-N concentrations fitted the measured concentrations well, which was also supported by relatively low mean squared errors. Thus, ANFIS can be useful for real-time monitoring and control of ICW systems.
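The inference core of ANFIS is a Takagi-Sugeno fuzzy system. A zeroth-order Python sketch (the membership parameters and rule outputs below are illustrative, not the trained ICW model, which also uses first-order consequents and hybrid least-squares/backpropagation learning) shows how rule firing strengths combine into a prediction:

```python
import numpy as np

def sugeno_predict(x, centers, sigmas, consequents):
    """Zeroth-order Takagi-Sugeno inference: Gaussian membership
    functions give each rule a firing strength; the output is the
    firing-strength-weighted average of the rule consequents."""
    w = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)   # rule firing strengths
    return float((w * consequents).sum() / w.sum())    # normalize and combine

# Two toy rules: "x near 0 -> output 0" and "x near 1 -> output 1";
# the system interpolates smoothly between them.
centers = np.array([0.0, 1.0])
sigmas = np.array([0.3, 0.3])
consequents = np.array([0.0, 1.0])
print(sugeno_predict(0.5, centers, sigmas, consequents))   # 0.5 by symmetry
```

ANFIS fits `centers`, `sigmas`, and `consequents` to data, which is what lets it act as a cheap surrogate for slow laboratory measurements.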
Kronstad, Jim; Saikia, Sanjay; Nielson, Erik David; Kretschmer, Matthias; Jung, Wonhee; Hu, Guanggan; Geddes, Jennifer M H; Griffiths, Emma J; Choi, Jaehyuk; Cadieux, Brigitte; Caza, Mélissa; Attarian, Rodgoun
2012-02-01
The basidiomycete fungus Cryptococcus neoformans infects humans via inhalation of desiccated yeast cells or spores from the environment. In the absence of effective immune containment, the initial pulmonary infection often spreads to the central nervous system to result in meningoencephalitis. The fungus must therefore make the transition from the environment to different mammalian niches that include the intracellular locale of phagocytic cells and extracellular sites in the lung, bloodstream, and central nervous system. Recent studies provide insights into mechanisms of adaptation during this transition that include the expression of antiphagocytic functions, the remodeling of central carbon metabolism, the expression of specific nutrient acquisition systems, and the response to hypoxia. Specific transcription factors regulate these functions as well as the expression of one or more of the major known virulence factors of C. neoformans. Therefore, virulence factor expression is to a large extent embedded in the regulation of a variety of functions needed for growth in mammalian hosts. In this regard, the complex integration of these processes is reminiscent of the master regulators of virulence in bacterial pathogens.
van Wijk, Klaas J; Kessler, Felix
2017-01-25
Plastoglobuli (PGs) are plastid lipoprotein particles surrounded by a membrane lipid monolayer. PGs contain small specialized proteomes and metabolomes. They are present in different plastid types (e.g., chloroplasts, chromoplasts, and elaioplasts) and are dynamic in size and shape in response to abiotic stress or developmental transitions. PGs in chromoplasts are highly enriched in carotenoid esters and enzymes involved in carotenoid metabolism. PGs in chloroplasts are associated with thylakoids and contain ∼30 core proteins (including six ABC1 kinases) as well as additional proteins recruited under specific conditions. Systems analysis has suggested that chloroplast PGs function in metabolism of prenyl lipids (e.g., tocopherols, plastoquinone, and phylloquinone); redox and photosynthetic regulation; plastid biogenesis; and senescence, including recycling of phytol, remobilization of thylakoid lipids, and metabolism of jasmonate. These functionalities contribute to chloroplast PGs' role in responses to stresses such as high light and nitrogen starvation. PGs are thus lipid microcompartments with multiple functions integrated into plastid metabolism, developmental transitions, and environmental adaptation. This review provides an in-depth overview of PG experimental observations, summarizes the present understanding of PG features and functions, and provides a conceptual framework for PG research and the realization of opportunities for crop improvement. Expected final online publication date for the Annual Review of Plant Biology Volume 68 is April 29, 2017. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
A self-adaptive-grid method with application to airfoil flow
NASA Technical Reports Server (NTRS)
Nakahashi, K.; Deiwert, G. S.
1985-01-01
A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
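The spring analogy can be sketched in one dimension: treat each grid interval as a spring whose stiffness is the local adaptation weight, and relax each interior node toward the equilibrium of its two springs, so spacing ends up roughly inversely proportional to the weight. The weight function below is an arbitrary stand-in for the solution-error measure the method actually uses:

```python
import numpy as np

def adapt_grid_1d(x, weight, n_iter=1000, relax=0.5):
    """Spring-analogy redistribution on a 1-D grid: interval i acts as a
    spring of stiffness weight(midpoint_i); a damped Jacobi sweep moves
    each interior node toward the balance point of its two springs.
    Endpoints stay fixed, mimicking user-imposed constraints."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(n_iter):
        k = weight(0.5 * (x[:-1] + x[1:]))              # stiffness per interval
        eq = (k[:-1] * x[:-2] + k[1:] * x[2:]) / (k[:-1] + k[1:])
        x[1:-1] = (1.0 - relax) * x[1:-1] + relax * eq  # damped update
    return x

# Cluster points of a uniform grid around a steep feature at x = 0.5.
x0 = np.linspace(0.0, 1.0, 21)
weight = lambda s: 1.0 + 20.0 * np.exp(-200.0 * (s - 0.5) ** 2)
xa = adapt_grid_1d(x0, weight)
print(np.diff(xa).min(), np.diff(xa).max())  # fine near 0.5, coarse at the ends
```

The under-relaxation factor of 0.5 keeps the node ordering monotone during the sweeps, a 1-D analogue of the smoothness controls mentioned above.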
NASA Astrophysics Data System (ADS)
Susanti, D.; Hartini, E.; Permana, A.
2017-01-01
Growing sales competition among companies in Indonesia means that every company needs proper planning in order to win against its competitors. One way to support such planning is to forecast car sales for the next few periods, so that the inventory of cars stocked is proportional to the number of cars needed. One method that can be used to obtain such forecasts is Adaptive Spline Threshold Autoregression (ASTAR). This discussion therefore focuses on the use of the ASTAR method to forecast the volume of car sales at PT. Srikandi Diamond Motors using time series data. In this research, the forecasts produced by the ASTAR method were approximately correct.
Integration and Evaluation of Microscope Adapter for the Ultra-Compact Imaging Spectrometer
NASA Astrophysics Data System (ADS)
Smith-Dryden, S. D.; Blaney, D. L.; Van Gorp, B.; Mouroulis, P.; Green, R. O.; Sellar, R. G.; Rodriguez, J.; Wilson, D.
2012-12-01
Petrologic, diagenetic, impact and weathering processes often happen at scales that are not observable from orbit. On Earth, one of the most common things a scientist does when trying to understand a detailed geologic history is to create a thin section of the rock and study the mineralogy and texture. Unfortunately, sample preparation and manipulation with advanced instrumentation may be a resource-intensive proposition (e.g. time, power, complexity) in-situ. Getting detailed mineralogy and textural information without sample preparation is highly desirable. Visible to short wavelength microimaging spectroscopy has the potential to provide this information without sample preparation. Wavelengths between 500-2600 nm are sensitive to a wide range of minerals including mafic minerals, carbonates, clays, and sulfates. The Ultra-Compact Imaging Spectrometer (UCIS) has been developed as a low mass (<2.0 kg), low power (~5.2 W) Offner spectrometer, ideal for use on a Mars rover or other in-situ platforms. The UCIS instrument with its HgCdTe detector provides a spectral resolution of 10 nm over a range of 500-2600 nm, in addition to a 30 degree field of view and a 1.35 mrad instantaneous field of view (Van Gorp et al. 2011). To explore applications of this technology for microscale investigations, an f/10 microimaging adapter has been designed and integrated to allow imaging of samples. The spatial coverage of the instrument is 2.56 cm with sampling of 67.5 microns (380 spatial pixels). Because the adapter is slow relative to the UCIS detector, strong sample illumination is required. Light from the lamp box was carried through optical fiber bundles and directed onto the sample at a high angle of incidence to provide dark-field imaging. For data collection, a mineral sample is mounted on the microscope adapter and scanned by the detector as it is moved horizontally via an actuator. Data from the instrument is stored as an xyz cube end product with one spectral and two spatial
Smeared star spot location estimation using directional integral method.
Hou, Wang; Liu, Haibo; Lei, Zhihui; Yu, Qifeng; Liu, Xiaochun; Dong, Jing
2014-04-01
Image smearing significantly affects the accuracy of attitude determination of most star sensors. To ensure the accuracy and reliability of a star sensor under image smearing conditions, a novel directional integral method is presented for high-precision star spot location estimation to improve the accuracy of attitude determination. Simulations based on the orbit data of the challenging mini-satellite payload satellite were performed. Simulation results demonstrated that the proposed method exhibits high performance and good robustness, which indicates that the method can be applied effectively.
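The paper's exact algorithm is not given in the abstract; as a hedged sketch of the underlying idea, the Python routine below estimates a spot centre from directional integrals of the image, i.e. the 1-D profiles obtained by summing along each axis, which concentrates a smeared spot's signal before centroiding (the synthetic spot parameters are made up):

```python
import numpy as np

def directional_centroid(img):
    """Spot-centre estimate from directional integrals: sum the image
    along each axis and centroid the resulting 1-D profiles."""
    img = np.asarray(img, dtype=float)
    img = img - img.min()                  # crude background subtraction
    prof_x = img.sum(axis=0)               # integral over rows  -> x profile
    prof_y = img.sum(axis=1)               # integral over cols  -> y profile
    xs = np.arange(img.shape[1])
    ys = np.arange(img.shape[0])
    return (prof_y @ ys) / prof_y.sum(), (prof_x @ xs) / prof_x.sum()

# Synthetic smeared spot: a Gaussian elongated along x (motion blur).
yy, xx = np.mgrid[0:32, 0:32]
spot = np.exp(-((xx - 7.6) / 4.0) ** 2 - ((yy - 12.3) / 1.5) ** 2)
cy, cx = directional_centroid(spot)
print(cy, cx)   # close to (12.3, 7.6)
```

Integrating along the smear direction averages down pixel noise, which is one reason direction-aware estimators outperform a plain 2-D centroid under smearing.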
Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D
2012-10-01
Accurate predictive simulations of complex real-world applications require numerical approximations that, first, oppose the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus, optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.
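The hierarchical-surplus mechanism that drives this kind of adaptivity can be shown in one dimension (the paper's method is multi-dimensional and wavelet-based; this Python sketch uses plain piecewise-linear hat functions and an illustrative test function): candidate nodes are kept only where the surplus, the mismatch between the function and the current interpolant, exceeds a tolerance, so points concentrate at sharp transitions:

```python
import numpy as np

def adaptive_refine(f, levels=8, tol=1e-3):
    """1-D adaptive hierarchical interpolation on [0, 1]: at each level,
    evaluate the 'hierarchical surplus' f(xm) - (linear prediction) at
    every interval midpoint, and add the node only where it exceeds tol."""
    xs, ys = [0.0, 1.0], [f(0.0), f(1.0)]
    for _ in range(levels):
        order = np.argsort(xs)
        xg, yg = np.array(xs)[order], np.array(ys)[order]
        mids = 0.5 * (xg[:-1] + xg[1:])
        preds = 0.5 * (yg[:-1] + yg[1:])      # current interpolant at midpoints
        for xm, pm in zip(mids, preds):
            if abs(f(xm) - pm) > tol:         # surplus-driven refinement
                xs.append(xm)
                ys.append(f(xm))
    order = np.argsort(xs)
    return np.array(xs)[order], np.array(ys)[order]

# A steep transition at x = 0.3 attracts nearly all of the refinement.
f = lambda x: np.tanh(20.0 * (x - 0.3))
xg, yg = adaptive_refine(f)
print(len(xg))   # far fewer nodes than the 257 of a uniform level-8 grid
```

In the stochastic-collocation setting `f` would be an expensive model evaluated at parameter values, so skipping small-surplus nodes directly saves model runs.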
Integration of isothermal amplification methods in microfluidic devices: Recent advances.
Giuffrida, Maria Chiara; Spoto, Giuseppe
2017-04-15
The integration of nucleic acid detection assays in microfluidic devices represents a highly promising approach for the development of convenient, cheap and efficient diagnostic tools for clinical, food safety and environmental monitoring applications. Such tools are expected to operate at the point-of-care and in resource-limited settings. The amplification of the target nucleic acid sequence represents a key step in the development of sensitive detection protocols. The integration in microfluidic devices of the most popular technology for nucleic acid amplification, the polymerase chain reaction (PCR), is significantly limited by the thermal cycling needed to obtain the target sequence amplification. This review provides an overview of recent advances in the integration of isothermal amplification methods in microfluidic devices. Isothermal methods, which operate at a constant temperature, have emerged as a promising alternative to PCR and greatly simplify the implementation of amplification methods in point-of-care diagnostic devices and devices to be used in resource-limited settings. Possibilities offered by isothermal methods for digital droplet amplification are discussed.
Li, Xiaoqiang; Quan, Enzhuo M.; Li, Yupeng; Pan, Xiaoning; Zhou, Yin; Wang, Xiaochun; Du, Weiliang; Kudchadker, Rajat J.; Johnson, Jennifer L.; Kuban, Deborah A.; Lee, Andrew K.; Zhang, Xiaodong
2013-08-01
Purpose: This study was designed to validate a fully automated adaptive planning (AAP) method which integrates automated recontouring and automated replanning to account for interfractional anatomical changes in prostate cancer patients receiving adaptive intensity modulated radiation therapy (IMRT) based on daily repeated computed tomography (CT)-on-rails images. Methods and Materials: Nine prostate cancer patients treated at our institution were randomly selected. For the AAP method, contours on each repeat CT image were automatically generated by mapping the contours from the simulation CT image using deformable image registration. An in-house automated planning tool incorporated into the Pinnacle treatment planning system was used to generate the original and the adapted IMRT plans. The cumulative dose–volume histograms (DVHs) of the target and critical structures were calculated based on the manual contours for all plans and compared with those of plans generated by the conventional method, that is, shifting the isocenters by aligning the images based on the center of the volume (COV) of prostate (prostate COV-aligned). Results: The target coverage from our AAP method for every patient was acceptable, while 1 of the 9 patients showed target underdosing from prostate COV-aligned plans. The normalized volume receiving at least 70 Gy (V70), and the mean dose of the rectum and bladder were reduced by 8.9%, 6.4 Gy and 4.3%, 5.3 Gy, respectively, for the AAP method compared with the values obtained from prostate COV-aligned plans. Conclusions: The AAP method, which is fully automated, is effective for online replanning to compensate for target dose deficits and critical organ overdosing caused by interfractional anatomical changes in prostate cancer.
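The plan comparison rests on cumulative DVH quantities such as V70, the fraction of a structure receiving at least 70 Gy. A minimal Python helper (the voxel dose values below are made up, not from the study) shows how such metrics are computed from a dose array:

```python
import numpy as np

def cumulative_dvh(dose, levels):
    """Cumulative dose-volume histogram: for each dose level d, the
    fraction of structure voxels receiving at least d Gy (so V70 is
    the entry for d = 70)."""
    dose = np.asarray(dose, dtype=float).ravel()
    return np.array([(dose >= d).mean() for d in levels])

# Illustrative per-voxel doses (Gy) for a small structure.
dose = np.array([12.0, 35.0, 52.0, 68.0, 71.0, 74.0, 80.0, 22.0])
v70 = cumulative_dvh(dose, [70.0])[0]
print(v70)   # 0.375 -> three of the eight voxels receive >= 70 Gy
```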
ERIC Educational Resources Information Center
Wang, Ze; Rohrer, David; Chuang, Chi-ching; Fujiki, Mayo; Herman, Keith; Reinke, Wendy
2015-01-01
This study compared 5 scoring methods in terms of their statistical assumptions. They were then used to score the Teacher Observation of Classroom Adaptation Checklist, a measure consisting of 3 subscales and 21 Likert-type items. The 5 methods used were (a) sum/average scores of items, (b) latent factor scores with continuous indicators, (c)…
ERIC Educational Resources Information Center
Yu, Baohua
2013-01-01
This study examined the interrelationships of integrative motivation, competence in second language (L2) communication, sociocultural adaptation, academic adaptation and persistence of international students at an Australian university. Structural equation modelling demonstrated that the integrative motivation of international students has a…
Borja, Angel; Bricker, Suzanne B; Dauer, Daniel M; Demetriades, Nicolette T; Ferreira, João G; Forbes, Anthony T; Hutchings, Pat; Jia, Xiaoping; Kenchington, Richard; Carlos Marques, João; Zhu, Changbo
2008-09-01
In recent years, several sets of legislation worldwide (Oceans Act in USA, Australia or Canada; Water Framework Directive or Marine Strategy in Europe, National Water Act in South Africa, etc.) have been developed in order to address ecological quality or integrity, within estuarine and coastal systems. Most such legislation seeks to define quality in an integrative way, by using several biological elements, together with physico-chemical and pollution elements. Such an approach allows assessment of ecological status at the ecosystem level ('ecosystem approach' or 'holistic approach' methodologies), rather than at species level (e.g. mussel biomonitoring or Mussel Watch) or just at chemical level (i.e. quality objectives) alone. Increasing attention has been paid to the development of tools for different physico-chemical or biological (phytoplankton, zooplankton, benthos, algae, phanerogams, fishes) elements of the ecosystems. However, few methodologies integrate all the elements into a single evaluation of a water body. The need for such integrative tools to assess ecosystem quality is very important, both from a scientific and stakeholder point of view. Politicians and managers need information from simple and pragmatic, but scientifically sound methodologies, in order to show to society the evolution of a zone (estuary, coastal area, etc.), taking into account human pressures or recovery processes. These approaches include: (i) multidisciplinarity, inherent in the teams involved in their implementation; (ii) integration of biotic and abiotic factors; (iii) accurate and validated methods in determining ecological integrity; and (iv) adequate indicators to follow the evolution of the monitored ecosystems. While some countries increasingly use the establishment of marine parks to conserve marine biodiversity and ecological integrity, there is awareness (e.g. in Australia) that conservation and management of marine ecosystems cannot be restricted to Marine Protected
An adaptive, formally second order accurate version of the immersed boundary method
NASA Astrophysics Data System (ADS)
Griffith, Boyce E.; Hornung, Richard D.; McQueen, David M.; Peskin, Charles S.
2007-04-01
Like many problems in biofluid mechanics, cardiac mechanics can be modeled as the dynamic interaction of a viscous incompressible fluid (the blood) and a (visco-)elastic structure (the muscular walls and the valves of the heart). The immersed boundary method is a mathematical formulation and numerical approach to such problems that was originally introduced to study blood flow through heart valves, and extensions of this work have yielded a three-dimensional model of the heart and great vessels. In the present work, we introduce a new adaptive version of the immersed boundary method. This adaptive scheme employs the same hierarchical structured grid approach (but a different numerical scheme) as the two-dimensional adaptive immersed boundary method of Roma et al. [A multilevel self adaptive version of the immersed boundary method, Ph.D. Thesis, Courant Institute of Mathematical Sciences, New York University, 1996; An adaptive version of the immersed boundary method, J. Comput. Phys. 153 (2) (1999) 509-534] and is based on a formally second order accurate (i.e., second order accurate for problems with sufficiently smooth solutions) version of the immersed boundary method that we have recently described [B.E. Griffith, C.S. Peskin, On the order of accuracy of the immersed boundary method: higher order convergence rates for sufficiently smooth problems, J. Comput. Phys. 208 (1) (2005) 75-105]. Actual second order convergence rates are obtained for both the uniform and adaptive methods by considering the interaction of a viscous incompressible flow and an anisotropic incompressible viscoelastic shell. We also present initial results from the application of this methodology to the three-dimensional simulation of blood flow in the heart and great vessels. The results obtained by the adaptive method show good qualitative agreement with simulation results obtained by earlier non-adaptive versions of the method, but the flow in the vicinity of the model heart valves
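Central to any immersed boundary code, adaptive or not, is the regularized delta function that couples Lagrangian structure points to the Eulerian grid. Below is a one-dimensional Python sketch of Peskin's standard 4-point kernel and the force-spreading operator (the grid size, force, and position are illustrative; a real solver applies this per dimension in 2-D or 3-D):

```python
import numpy as np

def peskin_delta(r):
    """Peskin's 4-point regularized delta function (1-D factor).
    Its weights sum to exactly 1 for any point position, so spreading
    conserves total force."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + np.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - np.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def spread_force(F, X, h, n):
    """Spread a Lagrangian point force F at position X onto a periodic
    1-D Eulerian grid (n cells, spacing h): f_j += F * delta_h(X - x_j)."""
    f = np.zeros(n)
    j0 = int(np.floor(X / h))
    for j in range(j0 - 2, j0 + 4):            # 4-point support around X
        f[j % n] += F * peskin_delta((X - j * h) / h) / h
    return f

f = spread_force(F=2.5, X=0.372, h=1.0 / 32, n=32)
print(f.sum() * (1.0 / 32))   # 2.5: the spread force integrates to F
```

The same kernel, transposed, interpolates the fluid velocity back to the structure points, which is what keeps the scheme's coupling self-adjoint.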
Integration of sample analysis method (SAM) for polychlorinated biphenyls
Monagle, M.; Johnson, R.C.
1996-05-01
A completely integrated Sample Analysis Method (SAM) has been tested as part of the Contaminant Analysis Automation program. The SAM system was tested for polychlorinated biphenyl samples using five Standard Laboratory Modules™: two Soxtec™ modules, a high volume concentrator module, a generic materials handling module, and the gas chromatographic module. With over 300 samples completed within the first phase of the validation, recovery and precision data were comparable to manual methods. Based on experience derived from the first evaluation of the automated system, efforts are underway to improve sample recoveries and integrate a sample cleanup procedure. In addition, initial work in automating the extraction of semivolatile samples using this system will also be discussed.
Methods for Developing Emissions Scenarios for Integrated Assessment Models
Prinn, Ronald; Webster, Mort
2007-08-20
The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.
GMTIFS: the adaptive optics beam steering mirror for the GMT integral-field spectrograph
NASA Astrophysics Data System (ADS)
Davies, J.; Bloxham, G.; Boz, R.; Bundy, D.; Espeland, B.; Fordham, B.; Hart, J.; Herrald, N.; Nielsen, J.; Sharp, R.; Vaccarella, A.; Vest, C.; Young, P. J.
2016-07-01
To achieve the high adaptive optics sky coverage necessary to allow the GMT Integral-Field Spectrograph (GMTIFS) to access key scientific targets, the on-instrument adaptive-optics wavefront-sensing (OIWFS) system must patrol the full 180 arcsecond diameter guide field passed to the instrument. The OIWFS uses a diffraction-limited guide star as the fundamental pointing reference for the instrument. During an observation, the offset between the science target and the guide star will change due to effects such as flexure, differential refraction and non-sidereal tracking rates. GMTIFS uses a beam steering mirror to set the initial offset between science target and guide star and also to correct for changes in offset. In order to reduce image motion from beam steering errors to a level comparable to the AO system in the most stringent case, the beam steering mirror is given a requirement of less than 1 milliarcsecond RMS. This corresponds to a dynamic range for both actuators and sensors of better than 1/180,000. The GMTIFS beam steering mirror uses piezo-walk actuators and a combination of eddy current sensors and interferometric sensors to achieve this dynamic range and control. While the sensors are rated for cryogenic operation, the actuators are not. We report on the results of prototype testing of single actuators, with the sensors, on the bench and in a cryogenic environment. Specific failures of the system are explained, along with their suspected causes. A modified test jig is used to investigate the option of heating the actuator, and we report the improved results. In addition to individual component testing, we built and tested a complete beam steering mirror assembly. Testing was conducted with a point source microscope; however, controlling environmental conditions to better than 1 micron was challenging. The assembly testing investigated acquisition accuracy and whether there was any unsensed hysteresis in the system. Finally, we present the revised beam steering mirror
NASA Astrophysics Data System (ADS)
Moore, F.; Burke, M.
2015-12-01
A wide range of studies using a variety of methods strongly suggest that climate change will have a negative impact on agricultural production in many areas. Farmers, though, should be able to learn about a changing climate and adjust what they grow and how they grow it in order to reduce these negative impacts. However, it remains unclear how effective these private (autonomous) adaptations will be, or how quickly they will be adopted. Constraining the uncertainty in this adaptation is important for understanding the impacts of climate change on agriculture. Here we review a number of empirical methods that have been proposed for understanding the rate and effectiveness of private adaptation to climate change. We compare these methods using data on agricultural yields in the United States and western Europe.
Fast multipole and space adaptive multiresolution methods for the solution of the Poisson equation
NASA Astrophysics Data System (ADS)
Bilek, Petr; Duarte, Max; Nečas, David; Bourdon, Anne; Bonaventura, Zdeněk
2016-09-01
This work focuses on the conjunction of the fast multipole method (FMM) with the space adaptive multiresolution (MR) technique for grid adaptation. Since both methods, MR and FMM, provide a priori error estimates, both achieve O(N) computational complexity, and both operate on the same hierarchical space division, their conjunction represents a natural choice when designing a numerically efficient and robust strategy for time dependent problems. Special attention is given to the use of these methods in the simulation of streamer discharges in air. We have designed an FMM Poisson solver on a multiresolution adapted grid in 2D. The accuracy and the computational complexity of the solver have been verified for a set of manufactured solutions. We confirmed that the developed solver attains the desired accuracy, which is controlled only by the number of terms in the multipole expansion in combination with the multiresolution accuracy tolerance. The implementation has a linear computational complexity O(N).
NASA Astrophysics Data System (ADS)
Ran, Qiwen; Yang, Zhonghua; Ma, Jing; Tan, Liying; Liao, Huixi; Liu, Qingfeng
2013-02-01
In this paper, a weighted adaptive threshold estimating method is proposed to deal with long and deep channel fades in satellite-to-ground optical communications. Within the channel correlation interval, where adjacent signal samples are sufficiently correlated, the correlations in the signal's change rates are described by weighted equations in the form of a Toeplitz matrix. As vital inputs to the proposed adaptive threshold estimator, the optimal values of the change rates are obtained by solving the weighted equation systems. The effect of channel fades and aberrant samples is mitigated by the joint use of the weighted equation systems and Kalman estimation. Based on channel information data from star observation trails, simulations show that the proposed method has better anti-fade performance than the D-value adaptive threshold estimating method in both weak and strong turbulence conditions.
The adaptive problems of female teenage refugees and their behavioral adjustment methods for coping
Mhaidat, Fatin
2016-01-01
This study aimed at identifying the levels of adaptive problems among teenage female refugees in government schools and explored the behavioral methods they used to cope with those problems. The sample was composed of 220 Syrian female students (seventh grade to first secondary grade) enrolled at government schools within the Zarqa Directorate who had come to Jordan due to the war conditions in their home country. The study used a scale of adaptive problems consisting of four dimensions (depression, anger and hostility, low self-esteem, and feeling insecure) and a questionnaire on the behavioral adjustment methods used for dealing with the problems of asylum. The results indicated that the Syrian teenage female refugees suffer a moderate degree of adaptive problems and that they used positive adjustment methods more than negative ones. PMID:27175098
Inclusion of Separation in Integral Boundary Layer Methods
NASA Astrophysics Data System (ADS)
Wallace, Brodie; O'Neill, Charles
2016-11-01
An integral boundary layer (IBL) method coupled with a potential flow solver allows aerodynamic flows to be simulated quickly, so that aircraft geometries can be rapidly designed and optimized. However, most current IBL methods lack the ability to accurately model three-dimensional separated flows. Various IBL equations and closure relations were investigated in an effort to develop an IBL method capable of modeling separation. Solution techniques, including Newton's method and the iterative linear solver GMRES, as well as methods for coupling an IBL method with a potential flow solver, were also investigated. Results for two-dimensional attached flow, as well as methods for extending an IBL method to model three-dimensional separation, are presented. Funding from NSF REU site Grant EEC 1358991 is greatly appreciated.
Linear Multistep Methods for Integrating Reversible Differential Equations
NASA Astrophysics Data System (ADS)
Evans, N. Wyn; Tremaine, Scott
1999-10-01
This paper studies multistep methods for the integration of reversible dynamical systems, with particular emphasis on the planar Kepler problem. It has previously been shown by Cano & Sanz-Serna that reversible linear multistep methods for first-order differential equations are generally unstable. Here we report on a subset of these methods, the zero-growth methods, that evade these instabilities. We provide an algorithm for identifying these rare methods. We find and study all zero-growth, reversible multistep methods with six or fewer steps. This select group includes two well-known second-order multistep methods (the trapezoidal and explicit midpoint methods), as well as three new fourth-order multistep methods, one of which is explicit. Variable time steps can be readily implemented without spoiling the reversibility. Tests on Keplerian orbits show that these new reversible multistep methods work well on orbits with low or moderate eccentricity, although at least 100 steps per radian are required for stability.
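As a minimal sketch of one of the zero-growth methods named above, the two-step explicit midpoint rule, s_{n+1} = s_{n-1} + 2h f(s_n), can be applied to a circular planar Kepler orbit. The bootstrap Euler step, step size, and initial orbit below are illustrative choices, not the authors' setup:

```python
import math

def kepler_rhs(s):
    # s = (x, y, vx, vy); gravitational parameter GM = 1
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    return (vx, vy, -x / r3, -y / r3)

def explicit_midpoint_orbit(s0, h, n_steps):
    """Reversible two-step explicit midpoint rule s_{n+1} = s_{n-1} + 2h f(s_n).
    A single forward Euler step bootstraps the second starting value."""
    prev = s0
    f0 = kepler_rhs(prev)
    curr = tuple(p + h * f for p, f in zip(prev, f0))
    for _ in range(n_steps - 1):
        f = kepler_rhs(curr)
        nxt = tuple(p + 2.0 * h * fi for p, fi in zip(prev, f))
        prev, curr = curr, nxt
    return curr

# Circular orbit of radius 1 (period 2*pi), about 1000 steps per radian
h = 0.001
steps = int(2 * math.pi / h)
s = explicit_midpoint_orbit((1.0, 0.0, 0.0, 1.0), h, steps)
radius = math.hypot(s[0], s[1])
```

On this circular orbit the radius stays very close to 1 over a full period, consistent with the bounded (zero-growth) parasitic modes described in the abstract.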
V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S
2016-12-01
The need for image fusion in current image processing systems is increasing mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that will be more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme that does not employ the projected Landweber (PL) recovery and with other existing CS-based fusion approaches, the proposed method performs better even with fewer samples.
Method for integrating microelectromechanical devices with electronic circuitry
Montague, S.; Smith, J.H.; Sniegowski, J.J.; McWhorter, P.J.
1998-08-25
A method is disclosed for integrating one or more microelectromechanical (MEM) devices with electronic circuitry. The method comprises the steps of forming each MEM device within a cavity below a device surface of the substrate; encapsulating the MEM device prior to forming electronic circuitry on the substrate; and releasing the MEM device for operation after fabrication of the electronic circuitry. Planarization of the encapsulated MEM device prior to formation of the electronic circuitry allows the use of standard processing steps for fabrication of the electronic circuitry. 13 figs.
Method for integrating microelectromechanical devices with electronic circuitry
Montague, Stephen; Smith, James H.; Sniegowski, Jeffry J.; McWhorter, Paul J.
1998-01-01
A method for integrating one or more microelectromechanical (MEM) devices with electronic circuitry. The method comprises the steps of forming each MEM device within a cavity below a device surface of the substrate; encapsulating the MEM device prior to forming electronic circuitry on the substrate; and releasing the MEM device for operation after fabrication of the electronic circuitry. Planarization of the encapsulated MEM device prior to formation of the electronic circuitry allows the use of standard processing steps for fabrication of the electronic circuitry.
Synthesis of aircraft structures using integrated design and analysis methods
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Goetz, R. C.
1978-01-01
Systematic research to develop and validate methods for structural sizing of an airframe designed with the use of composite materials and active controls is reported. This research program includes procedures for computing aeroelastic loads, static and dynamic aeroelasticity, analysis and synthesis of active controls, and optimization techniques. Development of the methods is concerned with the most effective ways of integrating and sequencing the procedures in order to generate a structural sizing and associated active control system that is optimal with respect to a given merit function constrained by strength and aeroelasticity requirements.
Lei, Xusheng; Li, Jingjing
2012-01-01
This paper presents an adaptive information fusion method to improve the accuracy and reliability of altitude measurements for a small unmanned aerial rotorcraft during the landing process. To address the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate high-frequency noise in the sensor output. Furthermore, to improve the altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate the measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is demonstrated by static tests, hovering flight and autonomous landing flight tests. PMID:23201993
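A minimal one-dimensional sketch of the idea: a Kalman filter whose measurement-noise variance R is re-estimated online from the innovation sequence. The innovation-based update below is a simple stand-in for the paper's maximum a posteriori estimator, and the random-walk state model, forgetting factor, and noise values are illustrative assumptions:

```python
import random

def adaptive_kf(measurements, q=1e-4, r0=1.0, alpha=0.05):
    """Scalar random-walk Kalman filter that adapts the measurement-noise
    variance R from the innovations: R is approximately E[d^2] - P_pred,
    smoothed here with an exponential forgetting factor alpha."""
    x, p, r = measurements[0], 1.0, r0
    for z in measurements[1:]:
        p_pred = p + q                 # predict (random-walk state)
        d = z - x                      # innovation
        r = max(1e-6, (1 - alpha) * r + alpha * (d * d - p_pred))
        k = p_pred / (p_pred + r)      # Kalman gain
        x = x + k * d
        p = (1 - k) * p_pred
    return x, r

random.seed(0)
true_alt = 10.0                        # hypothetical hover altitude (m)
zs = [true_alt + random.gauss(0.0, 0.5) for _ in range(500)]
alt, r_est = adaptive_kf(zs)
```

With 500 noisy samples (true variance 0.25), the filter settles near the true altitude while r converges toward the actual measurement-noise level.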
A comparison of locally adaptive multigrid methods: LDC, FAC and FIC
NASA Technical Reports Server (NTRS)
Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul
1993-01-01
This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction), which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.
Software for the parallel adaptive solution of conservation laws by discontinuous Galerkin methods.
Flaherty, J. E.; Loy, R. M.; Shephard, M. S.; Teresco, J. D.
1999-08-17
The authors develop software tools for the solution of conservation laws using parallel adaptive discontinuous Galerkin methods. In particular, the Rensselaer Partition Model (RPM) provides parallel mesh structures within an adaptive framework to solve the Euler equations of compressible flow by a discontinuous Galerkin method (LOCO). Results are presented for a Rayleigh-Taylor flow instability for computations performed on 128 processors of an IBM SP computer. In addition to managing the distributed data and maintaining a load balance, RPM provides information about the parallel environment that can be used to tailor partitions to a specific computational environment.
The block adaptive multigrid method applied to the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Pantelelis, Nikos
1993-01-01
In the present study, a scheme capable of solving complex nonlinear systems of equations quickly and robustly is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on a prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (an 18-fold acceleration of the solution) using one-fourth of the volumes of a global grid, with the same solution accuracy, for two test cases.
Thermal limits and adaptation in marine Antarctic ectotherms: an integrative view.
Pörtner, Hans O; Peck, Lloyd; Somero, George
2007-12-29
A cause and effect understanding of thermal limitation and adaptation at various levels of biological organization is crucial in the elaboration of how the Antarctic climate has shaped the functional properties of extant Antarctic fauna. At the same time, this understanding requires an integrative view of how the various levels of biological organization may be intertwined. At all levels analysed, the functional specialization to permanently low temperatures implies reduced tolerance of high temperatures, as a trade-off. Maintenance of membrane fluidity, enzyme kinetic properties (Km and k(cat)) and protein structural flexibility in the cold supports metabolic flux and regulation as well as cellular functioning overall. Gene expression patterns and, even more so, loss of genetic information, especially for myoglobin (Mb) and haemoglobin (Hb) in notothenioid fishes, reflect the specialization of Antarctic organisms to a narrow range of low temperatures. The loss of Mb and Hb in icefish, together with enhanced lipid membrane densities (e.g. higher concentrations of mitochondria), becomes explicable by the exploitation of high oxygen solubility at low metabolic rates in the cold, where an enhanced fraction of oxygen supply occurs through diffusive oxygen flux. Conversely, limited oxygen supply to tissues upon warming is an early cause of functional limitation. Low standard metabolic rates may be linked to extreme stenothermy. The evolutionary forces causing low metabolic rates as a uniform character of life in Antarctic ectothermal animals may be linked to the requirement for high energetic efficiency as required to support higher organismic functioning in the cold. This requirement may result from partial compensation for the thermal limitation of growth, while other functions like hatching, development, reproduction and ageing are largely delayed. As a perspective, the integrative approach suggests that the patterns of oxygen- and capacity-limited thermal tolerance
Method and apparatus for determining material structural integrity
Pechersky, Martin
1996-01-01
A non-destructive method and apparatus for determining the structural integrity of materials by combining laser vibrometry with damping analysis techniques to determine the damping loss factor of a material. The method comprises the steps of vibrating the area being tested over a known frequency range and measuring vibrational force and velocity as a function of time over the known frequency range. Vibrational velocity is preferably measured by a laser vibrometer. Measurement of the vibrational force depends on the vibration method. If an electromagnetic coil is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by the amount of coil current used in vibrating the magnet. If a reciprocating transducer is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by a force gauge in the reciprocating transducer. Using known vibrational analysis methods, a plot of the drive point mobility of the material over the preselected frequency range is generated from the vibrational force and velocity measurements. The damping loss factor is derived from a plot of the drive point mobility over the preselected frequency range using the resonance dwell method and compared with a reference damping loss factor for structural integrity evaluation.
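The loss-factor extraction step can be sketched numerically. The example below estimates the damping loss factor from a synthetic single-degree-of-freedom drive-point mobility curve using the standard half-power-bandwidth relation, eta ≈ (f2 - f1) / f_peak; this is a common textbook estimate standing in for the resonance dwell procedure referenced in the abstract, and the structure parameters are illustrative assumptions:

```python
def loss_factor_from_mobility(freqs, mobility_mag):
    """Estimate the damping loss factor from a mobility magnitude curve via
    the half-power bandwidth: eta ~ (f2 - f1) / f_peak, where |Y| falls to
    peak / sqrt(2) at f1 and f2."""
    peak_i = max(range(len(freqs)), key=lambda i: mobility_mag[i])
    half = mobility_mag[peak_i] / 2 ** 0.5
    lo = hi = peak_i
    while lo > 0 and mobility_mag[lo] > half:
        lo -= 1
    while hi < len(freqs) - 1 and mobility_mag[hi] > half:
        hi += 1
    return (freqs[hi] - freqs[lo]) / freqs[peak_i]

# Synthetic SDOF drive-point mobility with structural damping eta = 0.05:
# Y(w) = i*w / (k*(1 + i*eta) - m*w^2), with m = k = 1 so w_n = 1 rad/s
m, k, eta = 1.0, 1.0, 0.05
ws = [0.8 + 0.4 * i / 20000 for i in range(20001)]
mag = [abs(complex(0, w) / (k * complex(1, eta) - m * w * w)) for w in ws]
eta_est = loss_factor_from_mobility(ws, mag)
```

For light damping the recovered value agrees with the prescribed eta = 0.05 to a fraction of a percent, which is what makes the mobility plot a practical route to the loss factor.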
A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES
Druckmueller, M.
2013-08-15
A new image enhancement tool, ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona, is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or the Fourier transform, which are often used for that purpose.
FLIP: A method for adaptively zoned, particle-in-cell calculations of fluid in two dimensions
Brackbill, J.U.; Ruppel, H.M.
1986-08-01
A method is presented for calculating fluid flow in two dimensions using a full particle-in-cell representation on an adaptively zoned grid. The method has many interesting properties, among them an almost total absence of numerical dissipation and the ability to represent large variations in the data. The method is described using a standard formalism and its properties are illustrated by supersonic flow over a step and the interaction of a shock with a thin foil.
Efficient Fully Implicit Time Integration Methods for Modeling Cardiac Dynamics
Rose, Donald J.; Henriquez, Craig S.
2013-01-01
Implicit methods are well known to have greater stability than explicit methods for stiff systems, but they often are not used in practice due to perceived computational complexity. This paper applies the Backward Euler method and a second-order one-step two-stage composite backward differentiation formula (C-BDF2) to the monodomain equations arising from mathematical modeling of the electrical activity of the heart. The C-BDF2 scheme is an L-stable implicit time integration method that is easily implementable. It uses the simplest Forward Euler and Backward Euler methods as fundamental building blocks. The nonlinear system resulting from application of the Backward Euler method to the monodomain equations is solved for the first time by a nonlinear elimination method, which eliminates local and non-symmetric components by using a Jacobian-free Newton solver (a Newton-Krylov solver). Unlike other fully implicit methods proposed for the monodomain equations in the literature, the Jacobian of the global system after the nonlinear elimination is much smaller, is symmetric and possibly positive definite, and can be solved efficiently by standard optimal solvers. Numerical results demonstrate that the C-BDF2 scheme can yield accurate results with less CPU time than explicit methods for both a single patch and spatially extended domains. PMID:19126449
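As a minimal illustration of the Backward Euler building block (not the authors' monodomain solver), the implicit update y_{n+1} = y_n + h f(y_{n+1}) can be solved at each step with Newton's method. The stiff scalar test equation y' = -y^3 below is an illustrative stand-in with the known solution y(t) = (1 + 2t)^(-1/2):

```python
def backward_euler(f, dfdy, y0, h, n_steps, newton_tol=1e-12):
    """Backward Euler: at each step solve g(z) = z - y_n - h*f(z) = 0
    for z = y_{n+1} by Newton iteration, starting from the previous value."""
    y = y0
    for _ in range(n_steps):
        z = y
        for _ in range(50):
            g = z - y - h * f(z)
            if abs(g) < newton_tol:
                break
            z -= g / (1.0 - h * dfdy(z))  # Newton step with g'(z) = 1 - h f'(z)
        y = z
    return y

# y' = -y^3 with y(0) = 1; exact solution y(t) = (1 + 2t) ** -0.5
y1 = backward_euler(lambda y: -y ** 3, lambda y: -3.0 * y ** 2, 1.0, 0.01, 100)
```

With h = 0.01 the first-order Backward Euler result at t = 1 lands within a few thousandths of the exact value 3^(-1/2), and the scheme remains stable for step sizes where Forward Euler would not.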
A survey of motif discovery methods in an integrated framework
Sandve, Geir Kjetil; Drabløs, Finn
2006-01-01
Background There has been a growing interest in computational discovery of regulatory elements, and a multitude of motif discovery methods have been proposed. Computational motif discovery has been used with some success in simple organisms like yeast. However, as we move to higher organisms with more complex genomes, more sensitive methods are needed. Several recent methods try to integrate additional sources of information, including microarray experiments (gene expression and ChIP-chip). There is also a growing awareness that regulatory elements work in combination, and that this combinatorial behavior must be modeled for successful motif discovery. However, the multitude of methods and approaches makes it difficult to get a good understanding of the current status of the field. Results This paper presents a survey of methods for motif discovery in DNA, based on a structured and well-defined framework that integrates all relevant elements. Existing methods are discussed according to this framework. Conclusion The survey shows that although no single method takes all relevant elements into consideration, a very large number of different models treating the various elements separately have been tried. Very often the choices that have been made are not explicitly stated, making it difficult to compare different implementations. Also, the tests that have been used are often not comparable. Therefore, a stringent framework and improved test methods are needed to evaluate the different approaches in order to conclude which ones are most promising. Reviewers: This article was reviewed by Eugene V. Koonin, Philipp Bucher (nominated by Mikhail Gelfand) and Frank Eisenhaber. PMID:16600018
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
Automatic off-body overset adaptive Cartesian mesh method based on an octree approach
NASA Astrophysics Data System (ADS)
Péron, Stéphanie; Benoit, Christophe
2013-01-01
This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for the flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, assuming each octree leaf node defines a structured Cartesian block. This enables one to take into account the large scale discrepancies in terms of resolution between the different bodies involved in the simulation, with minimum memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first one generates Adaptive Mesh Refinement (AMR) type grid systems, and the second one generates abutting or minimally overlapping Cartesian grid set. We also introduce an algorithm to control the number of points at each adaptation, that automatically determines relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing tip vortex computation assesses the capability of the method to capture accurately the flow features.
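A minimal quadtree sketch of the off-body idea described above: leaf cells are recursively refined where they straddle a body boundary, and each remaining leaf then defines one Cartesian block. The circle standing in for a body and the depth limit are illustrative assumptions, and real off-body meshing involves far more (overlap handling, AMR block conversion, refinement indicators):

```python
def refine(cell, depth, max_depth, intersects, leaves):
    """cell = (x, y, size). Split cells that straddle the boundary until
    max_depth; every undivided cell becomes one Cartesian block."""
    x, y, s = cell
    if depth < max_depth and intersects(cell):
        h = s / 2.0
        for cx, cy in ((x, y), (x + h, y), (x, y + h), (x + h, y + h)):
            refine((cx, cy, h), depth + 1, max_depth, intersects, leaves)
    else:
        leaves.append(cell)

def circle_boundary(cell, cx=0.5, cy=0.5, r=0.3):
    # Cell straddles the circle iff r lies between the nearest and farthest
    # distances from the cell to the circle's center.
    x, y, s = cell
    nx = min(max(cx, x), x + s)          # nearest point of cell to center
    ny = min(max(cy, y), y + s)
    near = ((nx - cx) ** 2 + (ny - cy) ** 2) ** 0.5
    far = max(((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
              for px in (x, x + s) for py in (y, y + s))
    return near <= r <= far

blocks = []
refine((0.0, 0.0, 1.0), 0, 5, circle_boundary, blocks)
covered = sum(s * s for _, _, s in blocks)
```

The leaves tile the unit square exactly (their areas sum to 1), while resolution concentrates along the boundary, which is the property that keeps memory requirements low for widely separated bodies.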
A GPU-accelerated adaptive discontinuous Galerkin method for level set equation
NASA Astrophysics Data System (ADS)
Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.
2016-01-01
This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of two- and three-dimensional level set (LS) equation on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. Small global time step size resulting from the local adaptivity is avoided by local time-stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass conservative numerical scheme that preserves the simplicity of LS formulation is obtained. Efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.
Method study on fuzzy-PID adaptive control of electric-hydraulic hitch system
NASA Astrophysics Data System (ADS)
Li, Mingsheng; Wang, Liubu; Liu, Jian; Ye, Jin
2017-03-01
In this paper, a fuzzy-PID adaptive control method is applied to the control of a tractor electric-hydraulic hitch system. According to the characteristics of the system, a fuzzy-PID adaptive controller is designed and the electric-hydraulic hitch system model is established. Traction control and position control performance are simulated and compared with the conventional PID control method. A field test rig was set up to test the electric-hydraulic hitch system. The test results showed that, after the fuzzy-PID adaptive control is adopted, when the tillage depth steps from 0.1 m to 0.3 m, the system transition time is 4 s without overshoot, and when the tractive force steps from 3000 N to 7000 N, the system transition time is 5 s with an overshoot of 25%.
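The core idea of fuzzy-PID adaptation, adjusting controller gains from fuzzy rules on the error, can be sketched in a toy form. Everything below is an illustrative assumption (a two-set rule base, a PI loop, and a first-order plant standing in for the hitch hydraulics), not the authors' controller:

```python
def fuzzy_gain(e, kp_small=1.0, kp_large=4.0, e_ref=0.1):
    """Toy fuzzy rule base with two sets on |error|: 'small' -> low gain,
    'large' -> high gain, blended by a ramp membership function."""
    mu_large = min(1.0, abs(e) / e_ref)
    return (1.0 - mu_large) * kp_small + mu_large * kp_large

def simulate(setpoint=0.3, dt=0.01, t_end=10.0, ki=2.0):
    # First-order plant dy/dt = (u - y) / tau as a stand-in for the hitch
    y, integ, tau = 0.0, 0.0, 0.5
    for _ in range(int(t_end / dt)):
        e = setpoint - y
        integ += e * dt
        u = fuzzy_gain(e) * e + ki * integ   # PI with fuzzy-scheduled Kp
        y += dt * (u - y) / tau
    return y

y_final = simulate()   # step to 0.3 (cf. the 0.1 m -> 0.3 m depth step)
```

The scheduled gain gives an aggressive response far from the setpoint and a gentle one near it, which is the qualitative behavior a fuzzy-PID tuner aims for.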
Three-dimensional self-adaptive grid method for complex flows
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Deiwert, George S.
1988-01-01
A self-adaptive grid procedure for efficient computation of three-dimensional complex flow fields is described. The method is based on variational principles to minimize the energy of a spring system analogy which redistributes the grid points. Grid control parameters are determined by specifying maximum and minimum grid spacing. Multidirectional adaptation is achieved by splitting the procedure into a sequence of successive applications of a unidirectional adaptation. One-sided, two-directional constraints for orthogonality and smoothness are used to enhance the efficiency of the method. Feasibility of the scheme is demonstrated by application to a multinozzle, afterbody, plume flow field. Application of the algorithm for initial grid generation is illustrated by constructing a three-dimensional grid about a bump-like geometry.
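A one-dimensional sketch of the spring analogy: grid intervals behave like springs whose stiffness grows with a weight function (e.g., a solution gradient), so at equilibrium every spring carries the same force and the spacing comes out inversely proportional to the weight. The weight profile below is an illustrative assumption:

```python
def spring_redistribute(weights, length=1.0):
    """Place len(weights)+1 grid points on [0, length] so each interval's
    spacing is inversely proportional to its spring stiffness:
    k_i * dx_i = C  =>  dx_i = C / k_i, with C fixed by the total length."""
    inv = [1.0 / w for w in weights]
    c = length / sum(inv)
    xs = [0.0]
    for v in inv:
        xs.append(xs[-1] + c * v)
    return xs

# Stiff springs (weight 10) in the middle pull grid points toward that region
w = [1.0] * 4 + [10.0] * 4 + [1.0] * 4
x = spring_redistribute(w)
mid_span = x[8] - x[4]      # extent covered by the 4 high-weight intervals
outer_span = x[4] - x[0]    # extent covered by 4 low-weight intervals
```

The points cluster where the weight is large while the total length is preserved, which is the one-directional building block that the multidirectional procedure applies in sequence.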
Poma, A B; Delle Site, L
2010-06-25
Simulations that couple different molecular models in an adaptive way by changing resolution on the fly allow us to identify the relevant degrees of freedom of a system. This, in turn, leads to a detailed understanding of the essential physics which characterizes a system. While the delicate process of transition from one model to another is well understood for adaptivity between classical molecular models, the same cannot be said for quantum-classical adaptivity. The main reason for this is the difficulty in describing a continuous transition between two different kinds of physical principles: probabilistic for the quantum and deterministic for the classical. Here we report the basic principles of an algorithm that allows for a continuous and smooth transition by employing the path integral description of atoms.
Method and system for training dynamic nonlinear adaptive filters which have embedded memory
NASA Technical Reports Server (NTRS)
Rabinowitz, Matthew (Inventor)
2002-01-01
Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
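The Gauss-Newton direction at the heart of such training can be shown on a small nonlinear least-squares problem. The example fits y = a*exp(b*x) to noiseless data by solving the 2x2 normal equations each iteration; the model, starting point, and fixed full step (standing in for the patent's adaptive learning rate) are all illustrative assumptions:

```python
import math

def gauss_newton_fit(xs, ys, a, b, iters=20):
    """Fit y = a * exp(b * x) by Gauss-Newton: each iteration solves
    (J^T J) dp = J^T r, where J holds model sensitivities and r = y - model."""
    for _ in range(iters):
        jtj = [[0.0, 0.0], [0.0, 0.0]]
        jtr = [0.0, 0.0]
        for x, y in zip(xs, ys):
            e = math.exp(b * x)
            r = y - a * e                 # residual
            ja, jb = e, a * x * e         # d(model)/da, d(model)/db
            jtj[0][0] += ja * ja; jtj[0][1] += ja * jb
            jtj[1][0] += jb * ja; jtj[1][1] += jb * jb
            jtr[0] += ja * r; jtr[1] += jb * r
        det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
        da = (jtr[0] * jtj[1][1] - jtr[1] * jtj[0][1]) / det
        db = (jtr[1] * jtj[0][0] - jtr[0] * jtj[1][0]) / det
        a, b = a + da, b + db
    return a, b

xs = [i / 10.0 for i in range(11)]
ys = [2.0 * math.exp(0.5 * x) for x in xs]     # true parameters a=2, b=0.5
a_fit, b_fit = gauss_newton_fit(xs, ys, 1.5, 0.4)
```

From a starting guess near the optimum, the Gauss-Newton steps converge rapidly on this zero-residual problem, illustrating why Newton-like directions can beat steepest descent; a production method would add the adaptive step-size safeguards the abstract describes.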
Impact of adhesive and photoactivation method on sealant integrity and polymer network formation.
Borges, Boniek Castillo Dutra; Pereira, Fabrício Lopes da Rocha; Alonso, Roberta Caroline Bruschi; Braz, Rodivan; Montes, Marcos Antônio Japiassú Resende; Pinheiro, Isauremi Vieira de Assunção; Santos, Alex José Souza dos
2012-01-01
We evaluated the influence of photoactivation method and hydrophobic resin (HR) application on the marginal and internal adaptation, hardness (KHN), and crosslink density (CLD) of a resin-based fissure sealant. Model fissures were created in bovine enamel fragments (n = 10) and sealed using one of the following protocols: no adhesive system + photoactivation of the sealant using continuous light (CL), no adhesive system + photoactivation of the sealant using the soft-start method (SS), HR + CL, or HR + SS. Marginal and internal gaps and KHN were assessed after storage in water for 24 h. The CLD was indirectly assessed by repeating the KHN measurement after 24 h of immersion in 100% ethanol. There was no difference among the samples with regard to marginal or internal adaptation. The KHN and CLD were similar for samples cured using either photoactivation method. Use of a hydrophobic resin prior to placement of fissure sealants and curing the sealant using the soft-start method may not provide any positive influence on integrity or crosslink density.
Method and apparatus for determining material structural integrity
Pechersky, M.J.
1994-01-01
Disclosed are a nondestructive method and apparatus for determining the structural integrity of materials by combining laser vibrometry with damping analysis to determine the damping loss factor. The method comprises the steps of vibrating the area being tested over a known frequency range and measuring vibrational force and velocity vs time over the known frequency range. Vibrational velocity is preferably measured by a laser vibrometer. Measurement of the vibrational force depends on the vibration method: if an electromagnetic coil is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by the coil current. If a reciprocating transducer is used, the vibrational force is determined by a force gauge in the transducer. Using vibrational analysis, a plot of the drive point mobility of the material over the preselected frequency range is generated from the vibrational force and velocity data. Damping loss factor is derived from a plot of the drive point mobility over the preselected frequency range using the resonance dwell method and compared with a reference damping loss factor for structural integrity evaluation.
Integrated Force Method Solution to Indeterminate Structural Mechanics Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Halford, Gary R.
2004-01-01
Strength of materials problems have been classified into determinate and indeterminate problems. Determinate analysis, based primarily on the equilibrium concept, is well understood. Solving indeterminate problems requires additional compatibility conditions, whose role has historically been less well understood. Traditionally, a solution to an indeterminate problem is generated by manipulating the equilibrium concept, either by rewriting it in displacement variables or through the cutting and gap-closing technique of the redundant force method. Improvised treatment of compatibility has made such analysis cumbersome. The authors have researched and clarified the compatibility theory, so that solutions can be generated with equal emphasis on the equilibrium and compatibility concepts. This technique is called the Integrated Force Method (IFM). Forces are the primary unknowns of IFM; displacements are back-calculated from forces. The IFM equations can be manipulated to obtain the Dual Integrated Force Method (IFMD), in which displacement is the primary variable and force is back-calculated. The subject is introduced through the response variables (force, deformation, displacement) and the underlying concepts (equilibrium equations, force-deformation relations, deformation-displacement relations, and compatibility conditions). Mechanical load, temperature variation, and support settlement are equally emphasized. The basic theory is discussed, and a set of examples illustrates the new concepts. IFM- and IFMD-based finite element methods are introduced for simple problems.
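A minimal numerical illustration of the IFM idea, using a hypothetical problem of three parallel bars carrying a load between rigid plates: the single equilibrium equation and two compatibility conditions are assembled into one square system whose unknowns are the bar forces, and the displacement is back-calculated from the forces afterward. The stiffnesses and load are invented for the example.

```python
import numpy as np

# Three parallel bars (stiffnesses k1..k3) carrying load P between rigid plates:
# one equilibrium equation plus two compatibility equations (equal elongations)
# give a square system in the bar forces, in the spirit of IFM.
k1, k2, k3, P = 2.0, 3.0, 5.0, 100.0
S = np.array([
    [1.0,    1.0,    1.0  ],   # equilibrium: F1 + F2 + F3 = P
    [1/k1,  -1/k2,   0.0  ],   # compatibility: F1/k1 = F2/k2
    [0.0,    1/k2,  -1/k3 ],   # compatibility: F2/k2 = F3/k3
])
F = np.linalg.solve(S, np.array([P, 0.0, 0.0]))
delta = F[0] / k1  # displacement back-calculated from the forces
print(F, delta)    # forces split in proportion to stiffness
```

Here the forces come out proportional to the stiffnesses (20, 30, 50) with a common elongation of 10, confirming that equilibrium and compatibility are satisfied simultaneously.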
Integral structural-functional method for characterizing microbial populations
NASA Astrophysics Data System (ADS)
Yakushev, A. V.
2015-04-01
An original integral structural-functional method has been proposed for characterizing microbial communities. The novelty of the approach is the in situ study of microorganisms based on the growth kinetics of microbial associations in liquid nutrient broth media under selective conditions, rather than at the level of taxa or large functional groups. The method involves analysis of the integral growth model of a batch culture. The kinetic parameters of such associations reflect their capacity to grow on different media, i.e., their physiological diversity, and the metabolic capacity of the microorganisms for growth on a nutrient medium. The obtained parameters are therefore determined by the features of the microbial ecological strategies. Plating on a solid medium from the original inoculum characterizes the taxonomic composition of the dominants in the soil community, while plating from the associations grown on selective media characterizes the composition of the syntrophic groups that fulfill a specific function in nature. This method is more informative than the classical methods of plating on selective media.
A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection
Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.
2015-06-24
This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.
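The reduction to one-dimensional discontinuity detection can be sketched as follows: for a quantity of interest that jumps across a hypersurface enclosing the origin, each direction yields a 1D problem for the jump radius along that ray, solvable by bisection. The sparse-grid approximation and hierarchical acceleration of the paper are omitted; the test function and its jump radius are assumptions for illustration.

```python
import numpy as np

def f(x):
    """Discontinuous quantity of interest: jumps across the sphere |x| = 0.6."""
    return 1.0 if np.linalg.norm(x) < 0.6 else 0.0

def jump_radius(direction, r_max=1.0, tol=1e-10):
    """1D bisection along a unit-direction ray from the origin for the
    discontinuity radius; in hyperspherical coordinates the hypersurface
    becomes a function r = g(angles)."""
    lo, hi = 0.0, r_max
    f_lo = f(lo * direction)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid * direction) == f_lo:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# sample the hypersurface in a few random directions in 3D
rng = np.random.default_rng(0)
for _ in range(3):
    d = rng.standard_normal(3)
    d /= np.linalg.norm(d)
    print(jump_radius(d))  # ~0.6 in every direction
```

Because the jump radius varies smoothly with the angles, a sparse grid in the angular variables can interpolate it cheaply, which is the source of the method's cost reduction.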
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Solution of the Euler Equations
Anderson, R W; Elliott, N S; Pember, R B
2003-02-14
A new method that combines staggered-grid arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for the solution of the Euler equations. The novel components of the method are driven by the need to reconcile traditional AMR techniques with the staggered variables and the moving, deforming meshes associated with Lagrange-based ALE schemes. We develop interlevel solution transfer operators and interlevel boundary conditions, first in the case of purely Lagrangian hydrodynamics, and then extend these ideas to an ALE method by developing adaptive extensions of elliptic mesh relaxation techniques. Conservation properties of the method are analyzed, and a series of test calculations is presented that demonstrates the utility and efficiency of the method.
Adaptive iteration method for star centroid extraction under highly dynamic conditions
NASA Astrophysics Data System (ADS)
Gao, Yushan; Qin, Shiqiao; Wang, Xingshu
2016-10-01
Star centroiding accuracy decreases significantly when a star sensor works under highly dynamic conditions or when star images are corrupted by severe noise, reducing the precision of the output attitude. Herein, an adaptive iteration method is proposed to solve this problem. First, initial star centroids are predicted by a traditional method; then, based on the initial centroids and the angular velocities of the star sensor, adaptive centroiding windows are generated to cover the star area, and an iterative procedure that optimizes the location of the centroiding window is used to obtain the final star spot extraction results. Simulation results show that, compared with the traditional star image restoration method and the Iteratively Weighted Center of Gravity method, the proposed algorithm maintains higher extraction accuracy as rotation velocity or noise level increases.
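The window-re-centering iteration can be sketched as below: a center-of-gravity centroid is computed in a small window, and the window is re-centered on the latest estimate at each pass. This is a simplified sketch, not the authors' full pipeline (no motion prediction or adaptive window sizing); the synthetic star position, window half-width, and iteration count are assumptions.

```python
import numpy as np

def iterative_centroid(img, x0, y0, half=5, iters=10):
    """Refine a star centroid by the center-of-gravity method, re-centering
    the extraction window on the latest estimate at each iteration."""
    x, y = float(x0), float(y0)
    for _ in range(iters):
        xi, yi = int(round(x)), int(round(y))
        win = img[yi - half:yi + half + 1, xi - half:xi + half + 1]
        ys, xs = np.mgrid[yi - half:yi + half + 1, xi - half:xi + half + 1]
        total = win.sum()
        if total == 0:
            break
        x, y = (xs * win).sum() / total, (ys * win).sum() / total
    return x, y

# synthetic Gaussian star centered at (20.3, 14.7), initial guess offset by >2 px
ys, xs = np.mgrid[0:40, 0:40]
img = np.exp(-((xs - 20.3)**2 + (ys - 14.7)**2) / (2 * 1.5**2))
print(iterative_centroid(img, 18, 12))  # converges near (20.3, 14.7)
```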
A numerical study of 2D detonation waves with adaptive finite volume methods on unstructured grids
NASA Astrophysics Data System (ADS)
Hu, Guanghui
2017-02-01
In this paper, a framework for adaptive finite volume solution of the reactive Euler equations on unstructured grids is proposed. The main ingredients of the algorithm are a second-order total variation diminishing Runge-Kutta method for temporal discretization and a finite volume method with piecewise linear reconstruction of the conservative variables for spatial discretization, in which the least squares method is employed for the reconstruction and a weighted essentially nonoscillatory strategy is used to restrain potential numerical oscillations. To meet the high demand on computational resources caused by the stiffness of the reaction term and the shock structure in the solutions, an h-adaptive method is introduced. OpenMP parallelization of the algorithm is also adopted to further improve the efficiency of the implementation. Several one- and two-dimensional benchmark tests of the ZND model are studied in detail, and the numerical results demonstrate the effectiveness of the proposed method.
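The second-order TVD (strong-stability-preserving) Runge-Kutta step named above can be sketched on a toy semi-discretization, here first-order upwind advection on a periodic grid rather than the reactive Euler system; grid size, CFL number, and initial profile are assumptions.

```python
import numpy as np

def ssp_rk2(u, rhs, dt):
    """Second-order strong-stability-preserving (TVD) Runge-Kutta step:
    u1 = u + dt*L(u);  u_next = (u + u1 + dt*L(u1)) / 2."""
    u1 = u + dt * rhs(u)
    return 0.5 * (u + u1 + dt * rhs(u1))

# first-order upwind semi-discretization of u_t + u_x = 0 on a periodic grid
n = 200
dx = 1.0 / n
x = np.arange(n) * dx
rhs = lambda u: -(u - np.roll(u, 1)) / dx
u = np.exp(-200 * (x - 0.3)**2)
dt = 0.4 * dx  # CFL = 0.4
for _ in range(int(0.2 / dt)):
    u = ssp_rk2(u, rhs, dt)
print(u.max(), u.min())  # bounded by the initial extrema: no new oscillations
```

Because each stage is a forward Euler step of a monotone scheme, the convex combination keeps the update TVD, which is why this integrator pairs well with WENO-limited reconstructions.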
Adaptive Management Methods to Protect the California Sacramento-San Joaquin Delta Water Resource
NASA Technical Reports Server (NTRS)
Bubenheim, David
2016-01-01
The California Sacramento-San Joaquin River Delta is the hub of California's water supply, conveying water from Northern to Southern California agriculture and communities while supporting important ecosystem services, agriculture, and communities in the Delta. Changes in climate, long-term drought, water quality changes, and the expansion of invasive aquatic plants threaten ecosystems, impede ecosystem restoration, and are economically, environmentally, and sociologically detrimental to the San Francisco Bay/California Delta complex. NASA Ames Research Center and the USDA-ARS partnered with the State of California and local governments to develop science-based, adaptive-management strategies for the Sacramento-San Joaquin Delta. The project combines science, operations, and economics related to integrated management scenarios for aquatic weeds to help land and waterway managers make science-informed decisions regarding management and outcomes. The team provides a comprehensive understanding of agricultural and urban land use in the Delta and the major watersheds (San Joaquin/Sacramento) supplying the Delta, and of their interaction with drought and climate impacts on the environment, water quality, and weed growth. The team recommends conservation and modified land-use practices and aids local Delta stakeholders in developing management strategies. New remote sensing tools have been developed to enhance the ability to assess conditions, inform decision support tools, and monitor management practices. Science gaps in understanding how native and invasive plants respond to altered environmental conditions are being filled, providing critical biological response parameters for Delta-SWAT simulation modeling. Operational agencies such as the California Department of Boating and Waterways provide testing and act as initial adopters of decision support tools. Methods developed by the project can become routine land and water management tools in complex river delta systems.
Development and evaluation of a method of calibrating medical displays based on fixed adaptation
Sund, Patrik; Månsson, Lars Gunnar; Båth, Magnus
2015-04-15
Purpose: The purpose of this work was to develop and evaluate a new method for calibrating medical displays that accounts for the effect of fixed adaptation, using equipment and luminance levels typical of a modern radiology department. Methods: Low-contrast sinusoidal test patterns were derived at nine luminance levels from 2 to 600 cd/m² and used in a two-alternative forced-choice observer study in which the adaptation level was fixed at the logarithmic average of 35 cd/m². The contrast sensitivity at each luminance level was derived by establishing a linear relationship between the ten pattern contrast levels used at every luminance level and a detectability index (d′) calculated from the fraction of correct responses. A Gaussian function was fitted to the data and normalized to the adaptation level. The corresponding equation was used in a display calibration method that included the grayscale standard display function (GSDF) but compensated for fixed adaptation. In the evaluation study, the contrast of circular objects with a fixed pixel contrast was displayed using both calibration methods and rated on a five-grade scale. Results were calculated using a visual grading characteristics method. Error estimates in both observer studies were derived using a bootstrap method. Results: The contrast sensitivities for the darkest and brightest patterns, relative to the contrast sensitivity at the adaptation luminance, were 37% and 56%, respectively. The obtained Gaussian fit corresponded well with similar studies. The evaluation study showed a more evenly distributed contrast throughout the luminance range with the adaptation-compensated calibration method than with the GSDF. The two lowest scores for the GSDF were obtained for the darkest and brightest patterns; these scores were significantly lower than the lowest score obtained for the compensated GSDF. For the GSDF, the scores for all luminance levels were statistically
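The link between the fraction of correct responses in a two-alternative forced-choice experiment and the detectability index can be sketched with the standard signal-detection relation d′ = √2 · Φ⁻¹(Pc); this is textbook psychophysics, not code from the paper.

```python
from statistics import NormalDist

def dprime_2afc(fraction_correct):
    """Detectability index from the fraction of correct responses in a
    two-alternative forced-choice task: d' = sqrt(2) * Phi^-1(Pc)."""
    return 2**0.5 * NormalDist().inv_cdf(fraction_correct)

# chance performance (Pc = 0.5) maps to d' = 0; higher Pc gives larger d'
for pc in (0.55, 0.75, 0.95):
    print(round(dprime_2afc(pc), 3))
```

Inverting this relation at a threshold d′ yields the pattern contrast needed at each luminance level, which is how a contrast-sensitivity curve like the Gaussian fit above can be constructed.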
NASA Astrophysics Data System (ADS)
Liu, Qinming; Dong, Ming; Lv, Wenyuan; Geng, Xiuli; Li, Yupeng
2015-12-01
Health prognosis for equipment is considered a key process of the condition-based maintenance strategy. This paper presents an integrated framework for multi-sensor equipment diagnosis and prognosis based on an adaptive hidden semi-Markov model (AHSMM). Unlike a standard hidden semi-Markov model (HSMM), the basic algorithms in an AHSMM are first modified to reduce computation and space complexity. Then, the maximum likelihood linear regression transformation method is used to train the output and duration distributions and re-estimate all unknown parameters. The AHSMM is used to identify the hidden degradation state and to obtain the transition probabilities among health states and their durations. Finally, through the proposed hazard rate equations, one can predict the remaining useful life of equipment from multi-sensor information. The main results are verified in a real-world application: monitoring hydraulic pumps from Caterpillar Inc. The results show that the proposed methods are more effective for equipment health prognosis with multi-sensor monitoring.
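As a much-simplified illustration of predicting remaining life from transition probabilities among health states, the sketch below uses a plain absorbing Markov chain (not the paper's adaptive hidden semi-Markov model, which also models state durations): the expected number of steps to the failure state solves (I - Q)t = 1 over the transient states. The transition matrix is invented for the example.

```python
import numpy as np

# Q holds per-step transition probabilities among the transient health states;
# the remaining probability mass in each row flows toward the failure state.
Q = np.array([
    [0.90, 0.08, 0.02],   # healthy
    [0.00, 0.85, 0.15],   # degraded
    [0.00, 0.00, 0.70],   # critical (0.30 per step to failure)
])
# expected steps to absorption (failure) from each state: (I - Q) t = 1
expected_life = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(expected_life)  # longest from 'healthy', shortest from 'critical'
```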
Three-dimensional electro-floating display system using an integral imaging method.
Min, Sung-Wook; Hahn, Minsoo; Kim, Joohwan; Lee, Byoungho
2005-06-13
A new type of three-dimensional (3D) display system based on two different techniques, image floating and integral imaging, is proposed. Image floating is a classical 3D display technique in which a large convex lens or a concave mirror is used to present the image of a real object to the observer. An electro-floating system, which does not use a real object, requires a volumetric display part in order to present 3D moving pictures. Integral imaging is an autostereoscopic technique based on a lens array and a two-dimensional display device. The integral imaging method can be adapted for use in an electro-floating display system because the integrated image has volumetric characteristics within the viewing angle. The proposed system combines the merits of the two techniques, such as an impressive sense of depth and ease of assembly. In this paper, the viewing characteristics of the two techniques are defined and analyzed for the optimal design of the proposed system. Basic experiments in assembling the proposed system were performed, and the results are presented. The proposed system can be successfully applied to many 3D applications, such as 3D television.
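The floating-image geometry can be sketched with the Gaussian thin-lens equation: the floating lens relays an object plane (here standing in for the integrated image) to an image in front of the observer. The focal length and object distance below are illustrative values, not figures from the paper.

```python
# Thin-lens sketch of image floating: a convex lens of focal length f relays
# an object plane to a floating image. Distances in mm; values are assumed.
def image_distance(f, u):
    """Gaussian lens equation 1/v = 1/f - 1/u, for object distance u > f."""
    return 1.0 / (1.0 / f - 1.0 / u)

f, u = 200.0, 300.0          # focal length and object distance (illustrative)
v = image_distance(f, u)     # distance of the floating image from the lens
M = v / u                    # lateral magnification
print(v, M)                  # ~600 mm floating distance, magnification ~2
```

Placing the object between f and 2f throws a magnified real image beyond 2f, which is the "floating" effect the large lens or concave mirror produces.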
NASA Astrophysics Data System (ADS)
Tago, J.; Cruz-Atienza, V. M.; Etienne, V.; Virieux, J.; Benjemaa, M.; Sanchez-Sesma, F. J.
2010-12-01
Simulating any realistic seismic scenario requires incorporating the relevant physics into the model. Considering both the dynamics of the rupture process and the anelastic attenuation of seismic waves is essential for this purpose, and we therefore extend the hp-adaptive Discontinuous Galerkin finite-element method to integrate these physical aspects. The 3D elastodynamic equations on an unstructured tetrahedral mesh are solved with a second-order time-marching approach in a high-performance computing environment. The first extension incorporates a viscoelastic rheology, so that the intrinsic attenuation of the medium is described in terms of frequency-dependent quality factors (Q). The second extension, related to dynamic rupture, is integrated through explicit boundary conditions over the crack surface. For this visco-elastodynamic formulation, we introduce an original discrete scheme that preserves the optimal code performance of the elastodynamic equations. A set of relaxation mechanisms describes the behavior of a generalized Maxwell body. We approximate a nearly constant Q over a wide frequency range by selecting both suitable relaxation frequencies and the anelastic coefficients characterizing these mechanisms. To do so, we solve an optimization problem, which is critical for minimizing the number of relaxation mechanisms. Two strategies are explored: (1) a least-squares method and (2) a genetic algorithm (GA). We found that the improvement provided by the heuristic GA method is negligible; both optimization strategies yield Q values within 5% of the target constant-Q mechanism. Anelastic functions (i.e., memory variables) are introduced to efficiently evaluate the time-convolution terms involved in the constitutive equations and thus minimize the computational cost. The incorporation of anelastic functions implies new terms with ordinary differential equations in the mathematical formulation. We solve these equations using the same order
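The least-squares strategy for a nearly constant Q can be sketched as below, using a common linearized approximation of Q⁻¹ for a generalized Maxwell body that is linear in the anelastic coefficients. The frequency band, target Q, and number of relaxation mechanisms are assumptions for illustration, not the paper's settings.

```python
import numpy as np

# Least-squares fit of anelastic coefficients y_l of a generalized Maxwell
# body so that Q(w) is nearly constant over a target band (here Q = 50).
Q_target = 50.0
w = 2 * np.pi * np.logspace(-1, 1, 100)   # fit band: 0.1-10 Hz
w_l = 2 * np.pi * np.logspace(-1, 1, 3)   # 3 log-spaced relaxation frequencies
# linearized approximation: 1/Q(w) = sum_l y_l * w*w_l / (w_l**2 + w**2)
A = (w[:, None] * w_l[None, :]) / (w_l[None, :]**2 + w[:, None]**2)
y, *_ = np.linalg.lstsq(A, np.full(w.size, 1.0 / Q_target), rcond=None)
Q_fit = 1.0 / (A @ y)
print(np.max(np.abs(Q_fit - Q_target) / Q_target))  # max relative Q misfit
```

With a few mechanisms per decade, the ripple of the fitted Q around the target stays at the few-percent level, consistent with the tolerance quoted in the abstract.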
NASA Astrophysics Data System (ADS)
Chen, Chen; Zhong, Wen-De; Wu, Dehao
2016-12-01
In this paper, we investigate an integrated optical wireless communication (OWC) and orthogonal frequency division multiplexing based passive optical network (OFDM-PON) system for hybrid wired and wireless optical access, based on an adaptive envelope modulation technique. Both the outdoor and indoor wireless communications are considered in the integrated system. The data for wired access is carried by a conventional OFDM signal, while the data for wireless access is carried by an M-ary pulse amplitude modulation (M-PAM) signal which is modulated onto the envelope of a phase-modulated OFDM signal. By adaptively modulating the wireless M-PAM signal onto the envelope of the wired phase-modulated constant envelope OFDM (CE-OFDM) signal, hybrid wired and wireless optical access can be seamlessly integrated and variable-rate optical wireless transmission can also be achieved. Analytical bit-error-rate (BER) expressions are derived for both the CE-OFDM signal with M-PAM overlay and the overlaid unipolar M-PAM signal, which are verified by Monte Carlo simulations. The BER performances of wired access, indoor OWC wireless access and outdoor OWC wireless access are evaluated. Moreover, variable-rate indoor and outdoor optical wireless access based on the adaptive envelope modulation technique is also discussed.
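The envelope-overlay idea can be sketched in complex baseband: an M-PAM sequence scales the magnitude of a constant-envelope phase-modulated OFDM signal, so an envelope detector recovers the wireless data while the phase retains the OFDM payload. The modulation index, PAM levels, and subcarrier count below are assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                    # OFDM subcarriers (assumed)
# wired data: phase-modulated (constant-envelope) OFDM
qpsk = ((rng.integers(0, 2, N) * 2 - 1) +
        1j * (rng.integers(0, 2, N) * 2 - 1)) / np.sqrt(2)
m = np.fft.ifft(qpsk).real                # real message waveform
m /= np.max(np.abs(m))                    # normalize to |m| <= 1
h = 0.3                                   # modulation index (assumed)
ce_ofdm = np.exp(1j * 2 * np.pi * h * m)  # |ce_ofdm| = 1 everywhere

# wireless data: unipolar 4-PAM symbols modulated onto the envelope
pam = rng.choice([1.0, 1.5, 2.0, 2.5], size=N)
s = pam * ce_ofdm                         # envelope carries the PAM stream

# the envelope and the phase separate the two data streams exactly
print(np.allclose(np.abs(s), pam), np.allclose(np.angle(s), 2 * np.pi * h * m))
```

Keeping the PAM levels strictly positive and the phase excursion below π is what lets the magnitude and the angle be demodulated independently.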