A multilevel adaptive projection method for unsteady incompressible flow
NASA Technical Reports Server (NTRS)
Howell, Louis H.
1993-01-01
There are two main requirements for practical simulation of unsteady flow at high Reynolds number: the algorithm must accurately propagate discontinuous flow fields without excessive artificial viscosity, and it must have some adaptive capability to concentrate computational effort where it is most needed. We satisfy the first of these requirements with a second-order Godunov method similar to those used for high-speed flows with shocks, and the second with a grid-based refinement scheme that avoids some of the drawbacks associated with unstructured meshes. These two features of our algorithm place certain constraints on the projection method used to enforce incompressibility. Velocities are cell-based, leading to a Laplacian stencil for the projection which decouples adjacent grid points. We discuss features of the multigrid and multilevel iteration schemes required for solution of the resulting decoupled problem. Variable-density flows require a modified projection operator; we have found a multigrid method for this modified projection that successfully handles density jumps of thousands to one. Numerical results are shown for the 2D adaptive and 3D variable-density algorithms.
Adaptive Projection Subspace Dimension for the Thick-Restart Lanczos Method
Yamazaki, Ichitaro; Bai, Zhaojun; Simon, Horst; Wang, Lin-Wang; Wu, K.
2008-10-01
The Thick-Restart Lanczos (TRLan) method is an effective method for solving large-scale Hermitian eigenvalue problems, but its performance strongly depends on the dimension of the projection subspace. In this paper, we propose an objective function to quantify the effectiveness of a chosen subspace dimension, and then introduce an adaptive scheme to dynamically adjust the dimension at each restart. An open-source software package, nu-TRLan, which implements the TRLan method with this adaptive projection subspace dimension, is available in the public domain. Numerical results on synthetic eigenvalue problems demonstrate that nu-TRLan achieves speedups between 0.9 and 5.1 over the static method using a default subspace dimension. To demonstrate the effectiveness of nu-TRLan in a real application, we apply it to electronic structure calculations of quantum dots. We show that nu-TRLan can achieve speedups of greater than 1.69 over the state-of-the-art eigensolver for this application, which is based on the Conjugate Gradient method with a powerful preconditioner.
Evaluation of Load Analysis Methods for NASA's GIII Adaptive Compliant Trailing Edge Project
NASA Technical Reports Server (NTRS)
Cruz, Josue; Miller, Eric J.
2016-01-01
The Air Force Research Laboratory (AFRL), NASA Armstrong Flight Research Center (AFRC), and FlexSys Inc. (Ann Arbor, Michigan) have collaborated to flight test the Adaptive Compliant Trailing Edge (ACTE) flaps. These flaps were installed on a Gulfstream Aerospace Corporation (GAC) GIII aircraft and tested at AFRC at various deflection angles over a range of flight conditions. External aerodynamic and inertial load analyses were conducted to ensure that the change in wing loads due to the deployed ACTE flap did not overload the existing baseline GIII wing box structure. The objective of this paper is to substantiate the analysis tools used for predicting wing loads at AFRC. Computational fluid dynamics (CFD) models and distributed-mass inertial models were developed for predicting the loads on the wing; the analysis tools included TRANAIR (full potential) and CMARC (panel) models. Aerodynamic pressure data from the analysis codes were validated against static pressure port data collected in flight. Combined results from the CFD predictions and the inertial load analysis were used to predict the normal force, bending moment, and torque loads on the wing. Wing loads obtained from calibrated strain gages installed on the wing were used to substantiate the load prediction tools, and the predictions exhibited good agreement with these flight measurements.
Adaptive Algebraic Multigrid Methods
Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J
2004-04-09
Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.
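As a point of reference for the multigrid hierarchy the paper generalizes, the sketch below implements a plain geometric two-grid cycle for the 1D Poisson equation; AMG constructs the analogous coarse levels from the matrix alone, and the adaptive variant further drops the near-null-space assumptions. All parameter choices (63 points, weighted Jacobi with ω = 2/3, three smoothing sweeps) are illustrative assumptions:

```python
import numpy as np

def apply_A(u, h):
    """Matrix-free -u'' with 2nd-order differences, zero Dirichlet BCs."""
    up = np.concatenate(([0.0], u, [0.0]))
    return (2.0*up[1:-1] - up[:-2] - up[2:]) / h**2

def two_grid(f, n=63, cycles=10, nu=3, omega=2.0/3.0):
    """Two-grid cycle: weighted-Jacobi smoothing, full-weighting restriction,
    linear-interpolation prolongation, direct coarse-grid solve (n odd)."""
    h = 1.0/(n + 1)
    nc = (n - 1)//2                    # coarse interior points
    hc = 2.0*h
    Ac = (2.0*np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / hc**2
    u = np.zeros(n)
    for _ in range(cycles):
        for _ in range(nu):            # pre-smoothing (damped Jacobi)
            u += omega*(h**2/2.0)*(f - apply_A(u, h))
        r = f - apply_A(u, h)
        rc = (r[0:-2:2] + 2.0*r[1:-1:2] + r[2::2])/4.0   # full weighting
        ec = np.linalg.solve(Ac, rc)                     # coarse correction
        e = np.zeros(n)
        e[1::2] = ec                                     # coarse points
        e[0] = ec[0]/2.0
        e[2:-1:2] = (ec[:-1] + ec[1:])/2.0               # linear interp
        e[-1] = ec[-1]/2.0
        u += e
        for _ in range(nu):            # post-smoothing
            u += omega*(h**2/2.0)*(f - apply_A(u, h))
    return u
```

On a smooth model problem a handful of cycles drives the algebraic error well below the discretization error.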
Accelerated adaptive integration method.
Kaus, Joseph W; Arrar, Mehrnoosh; McCammon, J Andrew
2014-05-15
Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (bromocyclohexane) and the more complex biomolecule thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083
Adaptive optics projects at ESO
NASA Astrophysics Data System (ADS)
Hubin, Norbert N.; Arsenault, Robin; Bonnet, Henri; Conan, Rodolphe; Delabre, Bernard; Donaldson, Robert; Dupuy, Christophe; Fedrigo, Enrico; Ivanescu, L.; Kasper, Markus E.; Kissler-Patig, Markus; Lizon, Jean-Luis; Le Louarn, Miska; Marchetti, Enrico; Paufique, J.; Stroebele, Stefan; Tordo, Sebastien
2003-02-01
Over the past two years ESO has reinforced its efforts in the field of Adaptive Optics. The AO team currently has the challenging objective of providing 8 Adaptive Optics systems for the VLT in the coming years, and now plays a world-leading role in the field. This paper reviews all AO projects and plans. We present an overview of the Nasmyth Adaptive Optics System (NAOS) with its infrared imager CONICA, installed successfully at the VLT last year. Sodium Laser Guide Star plans are introduced. The status of the 4 curvature AO systems (MACAO) developed for the VLT interferometer is discussed, as is the status of the SINFONI AO module developed to feed the infrared integral field spectrograph (SPIFFI). A short description of the Multi-conjugate Adaptive optics Demonstrator (MAD) and its instrumentation is given. Finally, we present the plans for the VLT second-generation AO systems and the research performed in the framework of OWL.
Martin, D.F.; Colella, P.; Graves, D.T.
2007-09-25
We present a method for computing incompressible viscous flows in three dimensions using block-structured local refinement in both space and time. This method uses a projection formulation based on a cell-centered approximate projection, combined with the systematic use of multilevel elliptic solvers to compute increments in the solution generated at boundaries between refinement levels due to refinement in time. We use an L_0-stable second-order semi-implicit scheme to evaluate the viscous terms. Results are presented to demonstrate the accuracy and effectiveness of this approach.
Adaptive optical interconnects: the ADDAPT project
NASA Astrophysics Data System (ADS)
Henker, Ronny; Pliva, Jan; Khafaji, Mahdi; Ellinger, Frank; Toifl, Thomas; Offrein, Bert; Cevrero, Alessandro; Oezkaya, Ilter; Seifried, Marc; Ledentsov, Nikolay; Kropp, Joerg-R.; Shchukin, Vitaly; Zoldak, Martin; Halmo, Leos; Turkiewicz, Jaroslaw; Meredith, Wyn; Eddie, Iain; Georgiades, Michael; Charalambides, Savvas; Duis, Jeroen; van Leeuwen, Pieter
2015-09-01
Existing optical networks are driven by dynamic user and application demands but operate statically at their maximum performance. Thus, optical links do not offer much adaptability and are not very energy-efficient. In this paper a novel approach is proposed that implements performance and power adaptivity from the system level down to the optical device, electrical circuit, and transistor level. Depending on the actual data load, the number of activated link paths and individual device parameters such as bandwidth, clock rate, modulation format, and gain are adapted to enable lowering the components' supply power. This enables flexible, energy-efficient optical transmission links which pave the way for massive reductions of CO2 emission and operating costs in data center and high-performance computing applications. Within the FP7 research project Adaptive Data and Power Aware Transceivers for Optical Communications (ADDAPT), dynamic high-speed energy-efficient transceiver subsystems are developed for short-range optical interconnects, drawing on new adaptive technologies and methods. The research of eight partners from industry, research, and education spanning seven European countries includes the investigation of several adaptive control types and algorithms, the development of a full transceiver system, the design and fabrication of optical components and integrated circuits, as well as the development of high-speed, low-loss packaging solutions. This paper describes and discusses the idea of ADDAPT and provides an overview of the latest research results in this field.
HIFI-C: a robust and fast method for determining NMR couplings from adaptive 3D to 2D projections.
Cornilescu, Gabriel; Bahrami, Arash; Tonelli, Marco; Markley, John L; Eghbalnia, Hamid R
2007-08-01
We describe a novel method for the robust, rapid, and reliable determination of J couplings in multi-dimensional NMR coupling data, including small couplings from larger proteins. The method, "High-resolution Iterative Frequency Identification of Couplings" (HIFI-C), is an extension of the adaptive and intelligent data collection approach introduced earlier in HIFI-NMR. HIFI-C collects one or more optimally tilted two-dimensional (2D) planes of a 3D experiment, identifies peaks, and determines couplings with high resolution and precision. The HIFI-C approach, demonstrated here for the 3D quantitative J method, offers vital features that advance the goal of rapid and robust collection of NMR coupling data. (1) Tilted-plane residual dipolar coupling (RDC) data are collected adaptively in order to offer an intelligent trade-off between data collection time and accuracy. (2) Data from independent planes can provide a statistical measure of reliability for each measured coupling. (3) Fast data collection enables measurements in cases where sample stability is a limiting factor (for example, in the presence of an orienting medium required for residual dipolar coupling measurements). (4) For samples that are stable, or in experiments involving relatively stronger couplings, robust data collection enables more reliable determinations of couplings in shorter time, particularly for larger biomolecules. As a proof of principle, we have applied the HIFI-C approach to the 3D quantitative J experiment to determine N-C' RDC values for three proteins ranging from 56 to 159 residues (including a homodimer with 111 residues in each subunit). A number of factors influence the robustness and speed of data collection, including the size of the protein, the experimental setup, and the coupling being measured. To exhibit a lower bound on robustness and the potential for time saving, the measurement of dipolar couplings for the N-C' vector represents a realistic
Logarithmic Adaptive Quantization Projection for Audio Watermarking
NASA Astrophysics Data System (ADS)
Zhao, Xuemin; Guo, Yuhong; Liu, Jian; Yan, Yonghong; Fu, Qiang
In this paper, a logarithmic adaptive quantization projection (LAQP) algorithm for digital watermarking is proposed. Conventional quantization index modulation uses a fixed quantization step in the watermark embedding procedure, which leads to poor fidelity. Moreover, the conventional methods are sensitive to value-metric scaling attacks. The LAQP method combines the quantization projection scheme with a perceptual model. In comparison to some conventional quantization methods with a perceptual model, LAQP only needs to calculate the perceptual model in the embedding procedure, avoiding the decoding errors introduced by differences between the perceptual models used in the embedding and decoding procedures. Experimental results show that the proposed watermarking scheme maintains better fidelity and is robust against common signal processing attacks. More importantly, the proposed scheme is invariant to value-metric scaling attacks.
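The quantization-index-modulation idea underlying such schemes can be sketched in a simplified, hedged form: embed one bit per coefficient by snapping the log-magnitude onto one of two dithered lattices. This is not the LAQP algorithm itself (the perceptual model and the projection step are omitted), and the step size `delta` is an arbitrary assumption:

```python
import math

def qim_embed(x, bit, delta=0.1):
    """Embed one bit in a nonzero coefficient x by quantizing log|x|
    on the lattice {k*delta} (bit 0) or {k*delta + delta/2} (bit 1)."""
    s = 1.0 if x >= 0 else -1.0
    L = math.log(abs(x))
    dither = 0.0 if bit == 0 else delta/2
    Lq = delta*round((L - dither)/delta) + dither   # nearest lattice point
    return s*math.exp(Lq)

def qim_decode(y, delta=0.1):
    """Decode by asking which dithered lattice log|y| is closer to."""
    L = math.log(abs(y))
    d0 = abs(L - delta*round(L/delta))
    d1 = abs(L - (delta*round((L - delta/2)/delta) + delta/2))
    return 0 if d0 <= d1 else 1
```

Working in the log domain means a multiplicative perturbation of the coefficient becomes an additive shift of the quantized variable, which is the motivation the abstract gives for the logarithmic formulation; small perturbations (well under delta/4 in the log domain) still decode correctly.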
Focus on climate projections for adaptation strategies
NASA Astrophysics Data System (ADS)
Feijt, Arnout; Appenzeller, Christof; Siegmund, Peter; von Storch, Hans
2016-01-01
Most papers in this focus issue on ‘climate and climate impact projections for adaptation strategies’ were solicited by the guest editorial team and originate from a cluster of projects initiated 5 years ago. These projects aimed to provide climate change and climate change adaptation information for a wide range of societal areas in the lower parts of the deltas of the Rhine and Meuse rivers, and particularly for the Netherlands. The papers give an overview of our experiences, methods, approaches, results and surprises in the process of developing scientifically underpinned climate products and services for various clients. Although the literature on interactions between society and climate science has grown over the past decade, with respect to policy-science framing in post-normal science (Storch et al 2011 J. Environ. Law Policy 1 1-15; van der Sluijs 2012 Nature and Culture 7 174-195), user-science framing (Berkhout et al 2014 Regional Environ. Change 14 879-93) and joint knowledge production (Hegger et al 2014 Regional Environ. Change 14 1049-62), there is still a lot to gain. With this focus issue we want to contribute to best practices in this quickly moving field between science and society.
Method For Model-Reference Adaptive Control
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1990-01-01
A relatively simple method of model-reference adaptive control (MRAC) was developed from two prior classes of MRAC techniques: the signal-synthesis method and the parameter-adaptation method. The two are incorporated into a unified theory, which yields a more general adaptation scheme.
The Computerized Adaptive Testing System Development Project.
ERIC Educational Resources Information Center
McBride, James R.; Sympson, J. B.
The Computerized Adaptive Testing (CAT) project is a joint Armed Services coordinated effort to develop and evaluate a system for automated, adaptive administration of the Armed Services Vocational Aptitude Battery (ASVAB). The CAT is a system for administering personnel tests that differs from conventional test administration in two major…
Scaled norm-based Euclidean projection for sparse speaker adaptation
NASA Astrophysics Data System (ADS)
Kim, Younggwan; Kim, Myung Jong; Kim, Hoirin
2015-12-01
To reduce data storage for speaker adaptive (SA) models, in our previous work, we proposed a sparse speaker adaptation method which can efficiently reduce the number of adapted parameters by using Euclidean projection onto the L1-ball (EPL1) while maintaining recognition performance comparable to maximum a posteriori (MAP) adaptation. In the EPL1-based sparse speaker adaptation framework, however, the adapted Gaussian mean vectors are mostly concentrated on dimensions having large variances because of assuming unit variance for all dimensions. To make EPL1 more flexible, in this paper, we propose scaled norm-based Euclidean projection (SNEP) which can consider dimension-specific variances. By using SNEP, we also propose a new sparse speaker adaptation method which can consider the variances of a speaker-independent model. Our experiments show that the adapted components of mean vectors are evenly distributed in all dimensions, and we can obtain sparsely adapted models with no loss of phone recognition performance from the proposed method compared with MAP adaptation.
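The EPL1 building block itself is the classical sort-based Euclidean projection onto the L1 ball (Duchi et al., 2008), which is what induces the sparsity; a minimal sketch:

```python
import numpy as np

def project_l1_ball(v, z=1.0):
    """Euclidean projection of v onto {w : ||w||_1 <= z}.
    O(n log n): sort |v|, find the largest active set, soft-threshold."""
    if np.abs(v).sum() <= z:
        return v.copy()                        # already feasible
    u = np.sort(np.abs(v))[::-1]               # magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - z)/ks > 0)[0][-1]
    theta = (css[rho] - z)/(rho + 1.0)         # shrinkage amount
    return np.sign(v)*np.maximum(np.abs(v) - theta, 0.0)
```

Components below the data-dependent threshold theta are zeroed exactly, which is why the adapted parameter vector ends up sparse.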
Milne, R.B.
1995-12-01
This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature-dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations seen with standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.
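For context, here is a minimal sketch of the standard model-reference adaptive control loop that such modifications start from: a scalar plant with an unknown pole, a stable reference model, and Lyapunov-based adaptive laws. The optimal control modification term itself is not implemented, and all plant/model parameters below are illustrative assumptions:

```python
import numpy as np

def mrac_sim(a=2.0, b=1.0, am=4.0, bm=4.0, gamma=10.0, dt=1e-3, T=50.0):
    """MRAC for the scalar plant x' = a*x + b*u (a unknown to the controller)
    tracking the reference model xm' = -am*xm + bm*r, with control
    u = kx*x + kr*r and Lyapunov adaptive laws on kx, kr."""
    n = int(T/dt)
    x = xm = 0.0
    kx = kr = 0.0
    sgn_b = np.sign(b)                 # only the sign of b is assumed known
    errs = np.empty(n)
    for i in range(n):
        t = i*dt
        r = np.sin(t)                  # persistently exciting reference
        u = kx*x + kr*r
        e = x - xm                     # tracking error
        # forward-Euler update of plant, model, and adaptive laws
        x += dt*(a*x + b*u)
        xm += dt*(-am*xm + bm*r)
        kx += dt*(-gamma*e*x*sgn_b)
        kr += dt*(-gamma*e*r*sgn_b)
        errs[i] = e
    return errs, kx, kr
```

Raising gamma speeds up the error decay but, as the abstract notes, eventually excites high-frequency oscillations in the gains; that trade-off is exactly what the proposed modification targets.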
ERIC Educational Resources Information Center
Dolan, Thomas G.
2003-01-01
Describes project delivery methods that are replacing the traditional Design/Bid/Build linear approach to the management, design, and construction of new facilities. These variations can enhance construction management and teamwork. (SLD)
Mathematics Case Methods Project.
ERIC Educational Resources Information Center
Barnett, Carne S.
1998-01-01
Presents an overview and analysis of the Mathematics Case Methods Project, which uses cases in order to examine and reflect upon teaching. Focuses on a special kind of teacher knowledge, coined pedagogical-content knowledge. (ASK)
Assessing Adaptive Instructional Design Tools and Methods in ADAPT[IT].
ERIC Educational Resources Information Center
Eseryel, Deniz; Spector, J. Michael
ADAPT[IT] (Advanced Design Approach for Personalized Training - Interactive Tools) is a European project within the Information Society Technologies program that is providing design methods and tools to guide a training designer according to the latest cognitive science and standardization principles. ADAPT[IT] addresses users in two significantly…
An adaptive selective frequency damping method
NASA Astrophysics Data System (ADS)
Jordi, Bastien; Cotter, Colin; Sherwin, Spencer
2015-03-01
The selective frequency damping (SFD) method is used to obtain unstable steady-state solutions of dynamical systems. The stability of this method is governed by two parameters: the control coefficient and the filter width. Convergence is not guaranteed for an arbitrary choice of these parameters, and even when the method does converge, the time necessary to reach a steady-state solution may be very long. We present an adaptive SFD method. We show that by modifying the control coefficient and the filter width throughout the solver execution, we can reach an optimum convergence rate. This method is based on successive approximations of the dominant eigenvalue of the flow studied. We design a one-dimensional model to select SFD parameters that enable us to control the evolution of the least stable eigenvalue of the system. These parameters are then used for the application of the SFD method to the multi-dimensional flow problem. We apply this adaptive method to a set of classical test cases of computational fluid dynamics and show that the steady-state solutions obtained are similar to what can be found in the literature. We then apply it to a specific vortex-dominated flow (of interest to the automotive industry) whose stability had never been studied before. Seventh Framework Programme of the European Commission - ANADE project under Grant Contract PITN-GA-289428.
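The basic (non-adaptive) SFD iteration is easy to sketch: the governing system is augmented with a low-pass-filtered copy of the state, and a proportional feedback chi*(u - u_bar) damps the unstable oscillation while leaving the steady state unchanged. The toy system and the parameter values below are assumptions, chosen so that the origin (an unstable focus of the original dynamics, with a surrounding limit cycle) is stabilized in the augmented system:

```python
import numpy as np

def sfd_steady_state(f, u0, chi=0.5, Delta=2.0, dt=0.01, steps=40000):
    """Selective frequency damping: integrate the augmented system
        du/dt  = f(u) - chi*(u - ub)      (controlled dynamics)
        dub/dt = (u - ub)/Delta           (first-order low-pass filter)
    until u settles on a steady state of du/dt = f(u)."""
    u = np.array(u0, dtype=float)
    ub = u.copy()
    for _ in range(steps):
        du = f(u) - chi*(u - ub)
        dub = (u - ub)/Delta
        u = u + dt*du
        ub = ub + dt*dub
    return u

def f(u):
    """Toy dynamics: unstable focus at the origin (growth rate 0.2,
    frequency 1) with a stable limit cycle at radius sqrt(0.2)."""
    x, y = u
    r2 = x*x + y*y
    return np.array([0.2*x - y - x*r2, x + 0.2*y - y*r2])
```

Note that SFD relies on the instability being oscillatory: the filter cutoff 1/Delta must sit below the unstable frequency, and chi must exceed the growth rate, which is why the parameter choice (the subject of the paper's adaptive scheme) matters.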
Simple method for model reference adaptive control
NASA Technical Reports Server (NTRS)
Seraji, H.
1989-01-01
A simple method is presented for combined signal synthesis and parameter adaptation within the framework of model reference adaptive control theory. The results are obtained using a simple derivation based on an improved Liapunov function.
A new orientation-adaptive interpolation method.
Wang, Qing; Ward, Rabab Kreidieh
2007-04-01
We propose an isophote-oriented, orientation-adaptive interpolation method. The proposed method employs an interpolation kernel that adapts to the local orientation of isophotes, and the pixel values are obtained through an oriented, bilinear interpolation. We show that, by doing so, the curvature of the interpolated isophotes is reduced, and, thus, zigzagging artifacts are largely suppressed. Analysis and experiments show that images interpolated using the proposed method are visually pleasing and almost artifact free. PMID:17405424
The Method of Adaptive Comparative Judgement
ERIC Educational Resources Information Center
Pollitt, Alastair
2012-01-01
Adaptive Comparative Judgement (ACJ) is a modification of Thurstone's method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better.…
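The scoring side of comparative judgement is typically a Bradley-Terry-style fit of latent quality parameters to the pairwise outcomes; a hedged sketch of that core (the adaptive selection of which pairs to judge, the heart of ACJ, is omitted, and the learning rate and ridge constant are arbitrary assumptions):

```python
import numpy as np

def bradley_terry(n_items, comparisons, iters=500, lr=0.1):
    """Fit Bradley-Terry quality scores theta from pairwise judgements.
    comparisons: list of (winner, loser) index pairs.
    P(i beats j) = 1/(1 + exp(theta[j] - theta[i]))."""
    theta = np.zeros(n_items)
    for _ in range(iters):
        grad = np.zeros(n_items)
        for w, l in comparisons:
            p = 1.0/(1.0 + np.exp(theta[l] - theta[w]))  # P(observed outcome)
            grad[w] += 1.0 - p        # winner pulled up by surprise
            grad[l] -= 1.0 - p        # loser pushed down
        grad -= 0.01*theta            # small ridge term keeps scores finite
        theta += lr*grad              # gradient ascent on log-likelihood
        theta -= theta.mean()         # scores are relative; fix the location
    return theta
```

With judgements generated by a consistent ordering, the fitted scores recover that ordering; ACJ's contribution is choosing informative pairs adaptively so far fewer judgements are needed.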
Variational method for adaptive grid generation
Brackbill, J.U.
1983-01-01
A variational method for generating adaptive meshes is described. Functionals measuring smoothness, skewness, orientation, and the Jacobian are minimized to generate a mapping from a rectilinear domain in natural coordinates to an arbitrary domain in physical coordinates. From the mapping, a mesh is easily constructed. In using the method to adaptively zone computational problems, as few as one-third the number of mesh points is required in each coordinate direction compared with a uniformly zoned mesh.
Building Knowledge in the Workplace and Beyond. Curriculum Adaptation Project.
ERIC Educational Resources Information Center
Ballinger, Ronda
A project was conducted to adapt and modify the four-part workplace literacy curriculum previously created by the College of Lake County (Illinois) and six industries in the county in order to improve the usefulness and application of the information in the original curriculum. Information for the adaptation project was generated by instructors…
Combining Adaptive Hypermedia with Project and Case-Based Learning
ERIC Educational Resources Information Center
Papanikolaou, Kyparisia; Grigoriadou, Maria
2009-01-01
In this article we investigate the design of educational hypermedia based on constructivist learning theories. According to the principles of project and case-based learning we present the design rational of an Adaptive Educational Hypermedia system prototype named MyProject; learners working with MyProject undertake a project and the system…
ERIC Educational Resources Information Center
Hernandez, Alberto; Melnick, Susan L.
The purpose of this unit of work is to provide the teacher participant with some useful guidelines for evaluating and adapting written materials for specific English as a second language (ESL) classes. There is pre- and post-assessment of specific learning tasks relevant to evaluating and adapting materials as well as learning activities, which…
Selecting downscaled climate projections for water resource impacts and adaptation
NASA Astrophysics Data System (ADS)
Vidal, Jean-Philippe; Hingray, Benoît
2015-04-01
Increasingly large ensembles of global and regional climate projections are being produced and delivered to the climate impact community. However, such an enormous amount of information can hardly be dealt with by some impact models due to computational constraints. Strategies for transparently selecting climate projections are therefore urgently needed to inform small-scale impact and adaptation studies and to prevent potential pitfalls in interpreting ensemble results from impact models. This work presents results from a selection approach implemented for an integrated water resource impact and adaptation study in the Durance river basin (Southern French Alps). A large ensemble of 3000 daily transient gridded climate projections was made available for this study. It was built from different runs of 4 ENSEMBLES Stream2 GCMs, statistically downscaled by 3 probabilistic methods based on the K-nearest-neighbours resampling approach (Lafaysse et al., 2014). The selection approach considered here exemplifies one of the multiple possible approaches described in a framework for identifying tailored subsets of climate projections for impact and adaptation studies proposed by Vidal & Hingray (2014). It was chosen based on the specificities of both the study objectives and the characteristics of the projection dataset. This selection approach aims at propagating as far as possible the relative contributions of the four sources of uncertainty considered, namely GCM structure, large-scale natural variability, structure of the downscaling method, and catchment-scale natural variability. Moreover, it takes the form of a hierarchical structure to deal with the specific constraints of several types of impact models (hydrological models, irrigation demand models and reservoir management models). The implemented 3-layer selection approach is therefore mainly based on conditioned Latin Hypercube sampling (Christierson et al., 2012). The choice of conditioning
Configuration management plan for the Objective Supply Capability Adaptive Redesign (OSCAR) project
Rasch, K.A.; Reid, R.W.
1997-02-01
The Configuration Management Plan for the Objective Supply Capability Adaptive Redesign (OSCAR) project documents the methods used for the OSCAR project to implement configuration management and control. Specific areas addressed include the establishment of baselines and change control procedures.
Adaptive Finite Element Methods in Geodynamics
NASA Astrophysics Data System (ADS)
Davies, R.; Davies, H.; Hassan, O.; Morgan, K.; Nithiarasu, P.
2006-12-01
Adaptive finite element methods are presented for improving the quality of solutions to two-dimensional (2D) and three-dimensional (3D) convection-dominated problems in geodynamics. The methods demonstrate the application of existing technology in the engineering community to problems within the 'solid' Earth sciences. Two-Dimensional 'Adaptive Remeshing': The 'remeshing' strategy introduced in 2D adapts the mesh automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. The approach requires the coupling of an automatic mesh generator, a finite element flow solver and an error estimator. In this study, the procedure is implemented in conjunction with the well-known geodynamical finite element code 'ConMan'. An unstructured quadrilateral mesh generator is utilised, with mesh adaptation accomplished through regeneration. This regeneration employs information provided by an interpolation-based local error estimator, obtained from the computed solution on an existing mesh. The technique is validated by solving thermal and thermo-chemical problems with known benchmark solutions. In a purely thermal context, results illustrate that the method is highly successful, improving solution accuracy whilst increasing computational efficiency. For thermo-chemical simulations the same conclusions can be drawn. However, results also demonstrate that the grid-based methods employed for simulating the compositional field are not competitive with the other methods (tracer particle and marker chain) currently employed in this field, even at the higher spatial resolutions allowed by the adaptive grid strategies. Three-Dimensional Adaptive Multigrid: We extend the ideas from our 2D work into the 3D realm in the context of a pre-existing 3D-spherical mantle dynamics code, 'TERRA'. In its original format, 'TERRA' is computationally highly efficient since it employs a multigrid solver that depends upon a grid utilizing a clever
A New Adaptive Image Denoising Method
NASA Astrophysics Data System (ADS)
Biswas, Mantosh; Om, Hari
2016-03-01
In this paper, a new adaptive image denoising method is proposed that follows the soft-thresholding technique. In our method, a new threshold function is also proposed, which is determined by taking various combinations of the noise level, noise-free signal variance, subband size, and decomposition level. It is simple and adaptive as it depends on data-driven parameter estimation in each subband. The state-of-the-art denoising methods, viz. VisuShrink, SureShrink, BayesShrink, WIDNTF and IDTVWT, are not able to modify the coefficients in an efficient manner to provide good image quality. Our method removes the noise from the noisy image significantly and provides better visual quality of the image.
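The soft-thresholding operator such methods build on, together with VisuShrink's universal threshold as one classical baseline, can be sketched as follows (the paper's own threshold function is not reproduced here):

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding (shrinkage): coefficients with |x| <= t are zeroed,
    the rest are shrunk toward zero by t, preserving sign."""
    return np.sign(x)*np.maximum(np.abs(x) - t, 0.0)

def universal_threshold(coeffs):
    """VisuShrink's universal threshold sigma*sqrt(2 ln n), with the noise
    level sigma estimated from the median absolute coefficient
    (robust MAD estimate, assuming the finest-scale wavelet subband)."""
    sigma = np.median(np.abs(coeffs))/0.6745
    return sigma*np.sqrt(2.0*np.log(len(coeffs)))
```

Adaptive schemes like the one above differ mainly in how t is chosen per subband; the shrinkage operator itself is unchanged.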
Domain adaptive boosting method and its applications
NASA Astrophysics Data System (ADS)
Geng, Jie; Miao, Zhenjiang
2015-03-01
Differences of data distributions widely exist among datasets, i.e., domains. For many pattern recognition, natural language processing, and content-based analysis systems, a decrease in performance caused by the domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach, with extensions to cover the domain differences between the source and target domains. Two main stages are contained in this approach: source-domain clustering and source-domain sample selection. By iteratively adding the selected training samples from the source domain, the discrimination model is able to achieve better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for domains with large-scale samples and is easy to extend to multisource adaptation. We implement this method on three computer vision systems: the skin detection model in single images, the video concept detection model, and the object classification model. In the experiments, we compare the performances of several commonly used methods and the proposed DAB. In most situations, DAB is superior.
Structured adaptive grid generation using algebraic methods
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.
1993-01-01
The accuracy of the numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively fine, computationally expensive grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach in which a function containing measures of grid smoothness, orthogonality, and volume variation is minimized via a variational principle. This approach provides a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm in which the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring points in high-error regions to attract other points and points in low-error regions to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three-step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial, step is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and last step is to reevaluate the flow property by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration
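The first step described above, building a weighting mesh from the equidistribution law, can be illustrated with a minimal 1-D sketch; the weight function and grid here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def equidistribute(x, w, n_new=None):
    """Redistribute 1-D grid points so each interval holds equal weight.

    x : monotone grid coordinates; w : positive weight at each point
    (e.g. 1 + |solution gradient|).  Returns a new grid whose points
    equidistribute the weight, clustering where w is large.
    """
    n_new = len(x) if n_new is None else n_new
    # Cumulative weight via the trapezoidal rule -> monotone "arc length" W(x).
    W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    targets = np.linspace(0.0, W[-1], n_new)
    # Invert W(x) by interpolation: equal steps in W give the new grid.
    return np.interp(targets, W, x)

x = np.linspace(0.0, 1.0, 41)
w = 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)   # sharp feature at x = 0.5
x_new = equidistribute(x, w)                        # points cluster near 0.5
```

The search/interpolate third step would then transfer the flow solution from the old grid to `x_new`.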
Adaptively Addressing Uncertainty in Estuarine and Near Coastal Restoration Projects
Thom, Ronald M.; Williams, Greg D.; Borde, Amy B.; Southard, John A.; Sargeant, Susan L.; Woodruff, Dana L.; Laufle, Jeffrey C.; Glasoe, Stuart
2005-03-01
Restoration projects have an uncertain outcome because of a lack of information about current site conditions, historical disturbance levels, effects of landscape alterations on site development, unpredictable trajectories or patterns of ecosystem structural development, and many other factors. A poor understanding of the factors that control the development and dynamics of a system, such as hydrology, salinity, and wave energy, can also lead to an unintended outcome. Finally, lack of experience in restoring certain types of systems (e.g., rare or very fragile habitats) or systems in highly modified situations (e.g., highly urbanized estuaries) makes project outcomes uncertain. Because of these uncertainties, project costs can rise dramatically in an attempt to come closer to project goals. All of these potential sources of error can be addressed to a certain degree through adaptive management. The first step is admitting that these uncertainties exist, and addressing as many of them as possible through planning and directed research prior to implementing the project. The second step is to evaluate uncertainties through hypothesis-driven experiments during project implementation. The third step is to use the monitoring program to evaluate and adjust the project as needed to improve the probability that the project reaches its goal. The fourth and final step is to use the information gained in the project to improve future projects. A framework that includes a clear goal statement, a conceptual model, and an evaluation framework can help in this adaptive restoration process. Projects and programs vary in their application of adaptive management in restoration, and it is very difficult to be highly prescriptive in applying adaptive management to projects that necessarily vary widely in scope, goal, ecosystem characteristics, and uncertainties. Very large ecosystem restoration programs in the Mississippi River delta (Coastal Wetlands Planning, Protection, and Restoration
Optimal and adaptive methods of processing hydroacoustic signals (review)
NASA Astrophysics Data System (ADS)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector is analyzed, which is based on classical or fast projection algorithms and estimates the background using median filtering or the method of bilateral spatial contrast.
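A minimal sketch of the classical Capon spatial spectrum mentioned above, for a linear equidistant (half-wavelength) array; the array size, source angle, and noise level are illustrative assumptions, not values from the review:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 200                  # sensors, snapshots

def steering(theta):
    """Plane-wave steering vector for a half-wavelength-spaced line array."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

# One strong source at 20 degrees plus weak sensor noise.
theta_src = np.deg2rad(20.0)
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.outer(steering(theta_src), s) + 0.1 * (
    rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N                        # sample covariance
Rinv = np.linalg.inv(R + 1e-6 * np.eye(M))    # light diagonal loading

# Capon spectrum: P(theta) = 1 / (a^H R^{-1} a), scanned over a bearing grid.
grid = np.deg2rad(np.linspace(-90.0, 90.0, 361))
A = np.stack([steering(t) for t in grid], axis=1)
p_capon = 1.0 / np.real(np.einsum('ij,ij->j', A.conj(), Rinv @ A))
theta_hat = np.rad2deg(grid[np.argmax(p_capon)])   # peak near the true bearing
```

The review's point about strong signals is visible in this formulation: the inverse covariance suppresses strong arrivals, which is why controlled normalization is needed before weak signals can be detected automatically.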
Project management plan for the Objective Supply Capability Adaptive Redesign (OSCAR) project
Rasch, K.A.; Reid, R.W.
1997-02-01
This document establishes the project management plan for design and development of the Objective Supply Capability Adaptive Redesign (OSCAR) Project. The purpose of the project management plan is to document the plans, goals, directions, commitments, approaches, and decisions that relate to guiding a project throughout its life cycle. Special attention is given to project goals, deliverables, sponsor and project standards, project resources, schedule, and cost estimates.
Projecting the Scientific Method.
ERIC Educational Resources Information Center
Uthe, R. E.
2000-01-01
Describes how the gas laws are an excellent vehicle for introducing the steps of the scientific method. Students can use balloons and a simple apparatus to observe changes in various gas parameters, develop ideas about the changes they see, collect numerical data, test their ideas, derive simple equations for the relationships, and use the…
Climate services within a regional climate adaptation project
NASA Astrophysics Data System (ADS)
Hänsel, Stephanie; Heidenreich, Majana; Franke, Johannes; Riedel, Kathrin; Matschullat, Jörg; Bernhofer, Christian
2013-04-01
In recent years the demand for adapting to climate variability and change has become increasingly obvious. Thus a multitude of projects dealing with climate adaptation strategies and concrete measures has been launched. Commonly, developing adaptation options is based on downscaled climate model outputs. These outputs have to be provided within the projects, but just providing the data is far from sufficient. Obstacles connected with using climate projections for climate adaptation include the uncertainties and bandwidths of climate projections and the inability of models to describe parameters such as extreme weather events, which are particularly relevant for many climate adaptation decisions. Climate scientists know that model outputs are not observational climate data and cannot be treated as observational data were treated in the past. Still, many practitioners demand precise values for future climate to replace past CLINO-values and to run their applications. Thus, climate adaptation involves adapting the instruments and processes used in deriving climate-related decisions. Communicating the challenges arising from this need to rethink common procedures is of outstanding significance for any successful adaptation practice. Dealing with the uncertainties of climate projections is a constant necessity, since they are always based on several simplifications, parameterisations and assumptions, e.g., on future socioeconomic development or on climate sensitivity. Future climate should thus be communicated in bandwidths. Working with just one scenario, one climate model, or even working with ensemble means is risky, as it evokes a higher than appropriate perceived confidence in the results. It encourages using familiar tools to process climate information rather than proceeding with caution. The consequences are suboptimal adaptation and misallocation of finances. We encourage working with bandwidths and testing climate adaptation options against a broad range of possible future climates. Climate
Parallel adaptive wavelet collocation method for PDEs
Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.
2015-10-01
A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
Adapting Project Management Practices to Research-Based Projects
NASA Technical Reports Server (NTRS)
Bahr, P.; Baker, T.; Corbin, B.; Keith, L.; Loerch, L.; Mullenax, C.; Myers, R.; Rhodes, B.; Skytland, N.
2007-01-01
From dealing with the inherent uncertainties in outcomes of scientific research to the lack of applicability of current NASA Procedural Requirements guidance documentation, research-based projects present challenges that require unique application of classical project management techniques. If additionally challenged by the creation of a new program transitioning from basic to applied research in a technical environment often unfamiliar with the cost and schedule constraints addressed by project management practices, such projects can find themselves struggling throughout their life cycles. Finally, supplying deliverables to a prime vehicle customer, also in the formative stage, adds further complexity to the development and management of research-based projects. The Biomedical Research and Countermeasures Projects Branch at NASA Johnson Space Center encompasses several diverse applied research-based or research-enabling projects within the newly-formed Human Research Program. This presentation will provide a brief overview of the organizational structure and environment in which these projects operate and how the projects coordinate to address and manage technical requirements. We will identify several of the challenges (cost, technical, schedule, and personnel) encountered by projects across the Branch, present case reports of actions taken and techniques implemented to deal with these challenges, and then close the session with an open forum discussion of remaining challenges and potential mitigations.
Adaptive computational methods for SSME internal flow analysis
NASA Technical Reports Server (NTRS)
Oden, J. T.
1986-01-01
Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving mesh methods), in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given, and some pertinent conclusions of the study are listed.
Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes
NASA Technical Reports Server (NTRS)
Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak
2004-01-01
High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation is demonstrated by the reduction in the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel
Finite element error estimation and adaptivity based on projected stresses
Jung, J.
1990-08-01
This report investigates the behavior of a family of finite element error estimators based on projected stresses, i.e., continuous stresses that are a least-squares fit to the conventional Gauss point stresses. An error estimate based on element force equilibrium appears to be quite effective. Examples of adaptive mesh refinement for a one-dimensional problem are presented. Plans for two-dimensional adaptivity are discussed. 12 refs., 82 figs.
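The idea of projected stresses, fitting a continuous nodal field to discontinuous element stresses in the least-squares sense, can be sketched in 1-D with linear elements; this is a generic illustration, not the report's implementation:

```python
import numpy as np

def projected_nodal_stress(x, sigma_elem):
    """L2 projection of piecewise-constant element stresses onto a
    continuous piecewise-linear (nodal) field on the grid x.

    Solves M a = b with the linear-element mass matrix M, so the nodal
    field is the least-squares-best continuous fit to the element stresses.
    """
    n = len(x)
    h = np.diff(x)
    M = np.zeros((n, n))
    b = np.zeros(n)
    for e in range(n - 1):
        # element mass matrix and load for linear shape functions
        Me = h[e] / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
        be = sigma_elem[e] * h[e] / 2.0 * np.ones(2)
        M[e:e + 2, e:e + 2] += Me
        b[e:e + 2] += be
    return np.linalg.solve(M, b)

x = np.linspace(0.0, 1.0, 6)
sigma = 2.0 * (x[:-1] + x[1:]) / 2.0        # element stresses sampling sigma(x) = 2x
sig_star = projected_nodal_stress(x, sigma)
# An estimator of this family flags elements where the projected
# (smoothed) stress differs most from the raw element stress.
err_elem = np.abs(sig_star[:-1] - sigma) + np.abs(sig_star[1:] - sigma)
```

Elements with large `err_elem` would be candidates for refinement in an adaptive loop.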
An Adaptive VOF Method on Unstructured Grid
NASA Astrophysics Data System (ADS)
Wu, L. L.; Huang, M.; Chen, B.
2011-09-01
In order to improve the accuracy of interface capturing while maintaining computational efficiency, an adaptive VOF method on unstructured grids is proposed in this paper. The volume fraction in each cell is regarded as the criterion to locally refine the interface cells. With the movement of the interface, new interface cells (0 < f < 1) are subdivided into child cells, while those child cells that no longer contain the interface are merged back into the original parent cell. In order to avoid the complicated redistribution of volume fraction during the subdivision and amalgamation procedure, a predictor-corrector algorithm is proposed to implement subdivision and amalgamation only in empty or full cells (f = 0 or 1). Thus the volume fraction in a new cell can take the value from the original cell directly, and interpolation of the interface is avoided. The advantage of this method is that regeneration of the whole grid system is not necessary, so its implementation is very efficient. Moreover, an advection flow test of a hollow square was performed, and the relative shape error of the result obtained with the adaptive mesh is smaller than that on the non-refined grid, which verifies the validity of our method.
Ensemble transform sensitivity method for adaptive observations
NASA Astrophysics Data System (ADS)
Zhang, Yu; Xie, Yuanfu; Wang, Hongli; Chen, Dehui; Toth, Zoltan
2016-01-01
The Ensemble Transform (ET) method has been shown to be useful in providing guidance for adaptive observation deployment. It predicts forecast error variance reduction for each possible deployment using its corresponding transformation matrix in an ensemble subspace. In this paper, a new ET-based sensitivity (ETS) method, which calculates the gradient of forecast error variance reduction in terms of analysis error variance reduction, is proposed to specify regions for possible adaptive observations. ETS is a first order approximation of the ET; it requires just one calculation of a transformation matrix, increasing computational efficiency (60%-80% reduction in computational cost). An explicit mathematical formulation of the ETS gradient is derived and described. Both the ET and ETS methods are applied to the Hurricane Irene (2011) case and a heavy rainfall case for comparison. The numerical results imply that the sensitive areas estimated by the ETS and ET are similar. However, ETS is much more efficient, particularly when the resolution is higher and the number of ensemble members is larger.
Adaptive characterization method for desktop color printers
NASA Astrophysics Data System (ADS)
Shen, Hui-Liang; Zheng, Zhi-Huan; Jin, Chong-Chao; Du, Xin; Shao, Si-Jie; Xin, John H.
2013-04-01
With the rapid development of multispectral imaging techniques, it is desired that spectral color can be accurately reproduced using desktop color printers. However, due to the specific spectral gamuts determined by printer inks, it is almost impossible to exactly replicate the reflectance spectra in other media. In addition, as ink densities cannot be individually controlled, desktop printers can only be regarded as red-green-blue devices, making physical models infeasible. We propose a locally adaptive method, which consists of both forward and inverse models, for desktop printer characterization. In the forward model, we establish the adaptive transform between control values and reflectance spectrum on individual cellular subsets by using weighted polynomial regression. In the inverse model, we first determine the candidate space of the control values based on global inverse regression and then compute the optimal control values by minimizing the color difference between the actual spectrum and the spectrum predicted via the forward transform. Experimental results show that the proposed method can reproduce colors accurately for different media under multiple illuminants.
Projection Operator: A Step Towards Certification of Adaptive Controllers
NASA Technical Reports Server (NTRS)
Larchev, Gregory V.; Campbell, Stefan F.; Kaneshige, John T.
2010-01-01
One of the major barriers to wider use of adaptive controllers in commercial aviation is the lack of appropriate certification procedures. In order to be certified by the Federal Aviation Administration (FAA), an aircraft controller is expected to meet a set of guidelines on functionality and reliability while not negatively impacting other systems or the safety of aircraft operations. Due to their inherent time-variant and non-linear behavior, adaptive controllers cannot be certified via the metrics used for linear conventional controllers, such as gain and phase margin. The Projection Operator is a robustness augmentation technique that bounds the output of a non-linear adaptive controller while conforming to the Lyapunov stability rules. It can also be used to limit the control authority of the adaptive component so that the said control authority can be arbitrarily close to that of a linear controller. In this paper we will present the results of applying the Projection Operator to a Model-Reference Adaptive Controller (MRAC), varying the amount of control authority, and comparing the controller's performance and stability characteristics with those of a linear controller. We will also show how adjusting Projection Operator parameters can make it easier for the controller to satisfy the certification guidelines by enabling a tradeoff between the controller's performance and robustness.
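A minimal sketch of one common form of the projection operator used in adaptive laws, which removes the outward component of a parameter update as the parameter vector approaches a prescribed bound; the bound, margin, and test values are illustrative assumptions, not the paper's flight-control settings:

```python
import numpy as np

def proj(theta, y, theta_max=1.0, eps=0.1):
    """Projection operator for adaptive parameter updates.

    Passes the raw update y through unchanged inside the bound; near the
    boundary ||theta|| = theta_max it attenuates the outward component so
    the parameters stay within theta_max * sqrt(1 + eps), preserving the
    Lyapunov stability argument.
    """
    f = (theta @ theta - theta_max**2) / (eps * theta_max**2)
    grad = 2.0 * theta / (eps * theta_max**2)
    if f > 0 and grad @ y > 0:
        # remove (a fraction f of) the component of y along the outward normal
        return y - (np.outer(grad, grad) @ y) * f / (grad @ grad)
    return y

theta_in = np.array([0.5, 0.0])          # well inside the bound
update = proj(theta_in, np.array([1.0, 0.0]))   # update passes through unchanged
theta_edge = np.array([1.02, 0.0])       # just outside theta_max
limited = proj(theta_edge, np.array([1.0, 0.0]))  # outward component attenuated
```

Shrinking `theta_max` toward zero is the knob described in the abstract: it drives the adaptive component's control authority arbitrarily close to that of the underlying linear controller.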
Adaptive method with intercessory feedback control for an intelligent agent
Goldsmith, Steven Y.
2004-06-22
An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.
Adaptive Accommodation Control Method for Complex Assembly
NASA Astrophysics Data System (ADS)
Kang, Sungchul; Kim, Munsang; Park, Shinsuk
Robotic systems have been used to automate assembly tasks in manufacturing and in teleoperation. Conventional robotic systems, however, have been ineffective in controlling contact force in the multiple contact states of complex assembly that involves interactions between complex-shaped parts. Unlike robots, humans excel at complex assembly tasks by utilizing their intrinsic impedance, force and torque sensation, and tactile contact cues. By examining human behavior in assembling complex parts, this study proposes a novel geometry-independent control method for robotic assembly using an adaptive accommodation (or damping) algorithm. Two important conditions for complex assembly, target approachability and bounded contact force, can be met by the proposed control scheme. It generates target-approachable motion that leads the object to move closer to a desired target position, while contact force is kept under a predetermined value. Experimental results from complex assembly tests have confirmed the feasibility and applicability of the proposed method.
Adaptive Knowledge Management of Project-Based Learning
ERIC Educational Resources Information Center
Tilchin, Oleg; Kittany, Mohamed
2016-01-01
The goal of an approach to Adaptive Knowledge Management (AKM) of project-based learning (PBL) is to intensify subject study through guiding, inducing, and facilitating development knowledge, accountability skills, and collaborative skills of students. Knowledge development is attained by knowledge acquisition, knowledge sharing, and knowledge…
Adapting implicit methods to parallel processors
Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D.
1994-12-31
When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice (e.g. larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems, where it is common to take the computational domain and distribute the grid points over the processors so as to maintain a relatively even workload per processor. This creates a problem at the locations in the domain where adjacent points are not on the same processor. In order for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this results in idle processors during part of the computation, and as the number of idle processors increases, the effective speedup gained by using a parallel processor decreases.
Breakthrough Propulsion Physics Project: Project Management Methods
NASA Technical Reports Server (NTRS)
Millis, Marc G.
2004-01-01
To leap past the limitations of existing propulsion, the NASA Breakthrough Propulsion Physics (BPP) Project seeks further advancements in physics from which new propulsion methods can eventually be derived. Three visionary breakthroughs are sought: (1) propulsion that requires no propellant, (2) propulsion that circumvents existing speed limits, and (3) breakthrough methods of energy production to power such devices. Because these propulsion goals are presumably far from fruition, a special emphasis is to identify credible research that will make measurable progress toward these goals in the near-term. The management techniques to address this challenge are presented, with a special emphasis on the process used to review, prioritize, and select research tasks. This selection process includes these key features: (a) research tasks are constrained to only address the immediate unknowns, curious effects or critical issues, (b) reliability of assertions is more important than the implications of the assertions, which includes the practice where the reviewers judge credibility rather than feasibility, and (c) total scores are obtained by multiplying the criteria scores rather than by adding. Lessons learned and revisions planned are discussed.
Linearly-Constrained Adaptive Signal Processing Methods
NASA Astrophysics Data System (ADS)
Griffiths, Lloyd J.
1988-01-01
In adaptive least-squares estimation problems, a desired signal d(n) is estimated using a linear combination of L observation samples x1(n), x2(n), ..., xL(n), denoted by the vector X(n). The estimate is formed as the inner product of this vector with a corresponding L-dimensional weight vector W. One particular weight vector of interest is Wopt, which minimizes the mean-square difference between d(n) and the estimate. In this context, the term `mean-square difference' is a quadratic measure such as statistical expectation or time average. The specific value of W which achieves the minimum is given by the product of the inverse data covariance matrix and the cross-correlation between the data vector and the desired signal. The latter is often referred to as the P-vector. For those cases in which time samples of both the desired and data vector signals are available, a variety of adaptive methods have been proposed which will guarantee that an iterative weight vector Wa(n) converges (in some sense) to the optimal solution. Two which have been extensively studied are the recursive least-squares (RLS) method and the LMS gradient approximation approach. There are several problems of interest in the communication and radar environment in which the optimal least-squares weight set is of interest and in which time samples of the desired signal are not available. Examples can be found in array processing, in which only the direction of arrival of the desired signal is known, and in single-channel filtering, where the spectrum of the desired response is known a priori. One approach to these problems which has been suggested is the P-vector algorithm, an LMS-like approximate gradient method. Although it is easy to derive the mean and variance of the weights which result with this algorithm, there has never been an identification of the corresponding underlying error surface which the procedure searches. The purpose of this paper is to suggest an alternative
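The LMS gradient-approximation update mentioned above can be sketched for a toy system-identification problem; the filter length, step size, and unknown weights below are illustrative, and the desired signal is noiseless so the weights converge to the true values:

```python
import numpy as np

rng = np.random.default_rng(1)
L, n_samp, mu = 4, 5000, 0.01
w_true = np.array([0.8, -0.4, 0.2, 0.1])   # unknown system to identify

w = np.zeros(L)                            # adaptive weight vector W
x = rng.standard_normal(n_samp + L)        # white observation sequence
for n in range(n_samp):
    X = x[n:n + L]        # data vector X(n)
    d = w_true @ X        # desired signal d(n)
    e = d - w @ X         # estimation error
    w += mu * e * X       # LMS stochastic-gradient step toward Wopt
# w now approximates w_true
```

RLS would replace the scalar step `mu` with a recursively updated inverse-covariance gain, converging faster at higher per-sample cost.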
Parallel, adaptive finite element methods for conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.
1994-01-01
We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.
An adaptive SPH method for strong shocks
NASA Astrophysics Data System (ADS)
Sigalotti, Leonardo Di G.; López, Hender; Trujillo, Leonardo
2009-09-01
We propose an alternative SPH scheme to usual SPH Godunov-type methods for simulating supersonic compressible flows with sharp discontinuities. The method relies on an adaptive density kernel estimation (ADKE) algorithm, which allows the width of the kernel interpolant to vary locally in space and time so that the minimum necessary smoothing is applied in regions of low density. We have performed a von Neumann stability analysis of the SPH equations for an ideal gas and derived the corresponding dispersion relation in terms of the local width of the kernel. Solution of the dispersion relation in the short wavelength limit shows that stability is achieved for a wide range of the ADKE parameters. Application of the method to high Mach number shocks confirms the predictions of the linear analysis. Examples of the resolving power of the method are given for a set of difficult problems, involving the collision of two strong shocks, the strong shock-tube test, and the interaction of two blast waves.
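The core of the ADKE idea, scaling the local kernel width inversely with a pilot density estimate so that low-density regions receive the widest smoothing, can be sketched as follows; the scaling exponent and geometric-mean normalization follow common ADKE descriptions, and the constants are illustrative:

```python
import numpy as np

def adke_widths(rho_pilot, h0, eps=0.5):
    """ADKE-style local kernel widths.

    rho_pilot : pilot density estimates at the particles
    h0        : reference smoothing length
    eps       : sensitivity exponent (0 gives a fixed-width kernel)
    Kernels widen where the pilot density is low and narrow where it is high.
    """
    g = np.exp(np.mean(np.log(rho_pilot)))   # geometric mean of pilot densities
    return h0 * (rho_pilot / g) ** (-eps)

rho = np.array([0.1, 1.0, 10.0])
h = adke_widths(rho, h0=0.2)   # widest kernel in the low-density region
```

In the full scheme these per-particle widths feed back into the SPH kernel interpolant, so the minimum necessary smoothing is applied near shocks and rarefactions.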
Adaptive wavelet methods - Matrix-vector multiplication
NASA Astrophysics Data System (ADS)
Černá, Dana; Finěk, Václav
2012-12-01
The design of most adaptive wavelet methods for elliptic partial differential equations follows a general concept proposed by A. Cohen, W. Dahmen and R. DeVore in [3, 4]. The essential steps are: transformation of the variational formulation into a well-conditioned infinite-dimensional l2 problem, derivation of a convergent iteration process for the l2 problem, and finally derivation of its finite-dimensional version, which works with an inexact right-hand side and approximate matrix-vector multiplications. In our contribution, we briefly review all these parts and mainly pay attention to approximate matrix-vector multiplications. Effective approximation of matrix-vector multiplications is enabled by an off-diagonal decay of the entries of the wavelet stiffness matrix. We propose a new approach that better utilizes the actual decay of the matrix entries.
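The effect of the off-diagonal decay can be illustrated with a toy banded-truncation matvec; the decay exponent and the simple band cutoff below are illustrative assumptions, not the compression scheme of the paper:

```python
import numpy as np

def decaying_matrix(n, s=2.0):
    """Model stiffness matrix with off-diagonal decay |A_ij| ~ (1 + |i-j|)^(-s)."""
    idx = np.arange(n)
    return 1.0 / (1.0 + np.abs(idx[:, None] - idx[None, :])) ** s

def approx_matvec(A, x, bandwidth):
    """Approximate A @ x keeping only entries within `bandwidth` of the diagonal;
    the decay of the dropped entries bounds the error."""
    idx = np.arange(A.shape[0])
    mask = np.abs(idx[:, None] - idx[None, :]) <= bandwidth
    return (A * mask) @ x

n = 200
A = decaying_matrix(n)
x = np.random.default_rng(0).standard_normal(n)
exact = A @ x
errors = {b: np.linalg.norm(exact - approx_matvec(A, x, b)) / np.linalg.norm(exact)
          for b in (5, 20, 80)}
```

Widening the band drives the error down at a rate governed by the decay exponent, which is why the adaptive scheme can trade accuracy against cost per multiplication.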
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo
2014-04-15
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-11-18
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Online Adaptive Replanning Method for Prostate Radiotherapy
Ahunbay, Ergun E.; Peng Cheng; Holmes, Shannon; Godley, Andrew; Lawton, Colleen; Li, X. Allen
2010-08-01
Purpose: To report the application of an adaptive replanning technique for prostate cancer radiotherapy (RT), consisting of two steps: (1) segment aperture morphing (SAM), and (2) segment weight optimization (SWO), to account for interfraction variations. Methods and Materials: The new 'SAM+SWO' scheme was retrospectively applied to the daily CT images acquired for 10 prostate cancer patients on a linear accelerator and CT-on-Rails combination during the course of RT. Doses generated by the SAM+SWO scheme based on the daily CT images were compared with doses generated after patient repositioning using the current planning target volume (PTV) margin (5 mm, 3 mm toward rectum) and a reduced margin (2 mm), along with full reoptimization based on the daily CT images, to evaluate dosimetric benefits. Results: For all cases studied, the online replanning method provided significantly better target coverage when compared with repositioning with the reduced PTV margin (13% increase in minimum prostate dose) and improved organ sparing when compared with repositioning with the regular PTV margin (13% decrease in the generalized equivalent uniform dose of rectum). The time required to complete the online replanning process was 6 ± 2 minutes. Conclusion: The proposed online replanning method can be used to account for interfraction variations for prostate RT within a practically acceptable time frame (5-10 min) and with significant dosimetric benefits. On the basis of this study, the developed online replanning scheme is being implemented in the clinic for prostate RT.
CT Image Reconstruction from Sparse Projections Using Adaptive TpV Regularization
Chen, Zijia; Zhou, Linghong
2015-01-01
Radiation dose reduction without losing CT image quality has been an increasing concern. Reducing the number of X-ray projections to reconstruct CT images, which is also called sparse-projection reconstruction, can potentially avoid excessive dose delivered to patients in CT examination. To overcome the disadvantages of the total variation (TV) minimization method, in this work we introduce a novel adaptive TpV regularization into sparse-projection image reconstruction and use the FISTA technique to accelerate iterative convergence. The numerical experiments demonstrate that the proposed method suppresses noise and artifacts more efficiently, and preserves structure information better than other existing reconstruction methods. PMID:26089962
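The FISTA acceleration mentioned above can be sketched for a generic composite objective; here a soft-thresholding (l1) proximal step stands in for the paper's adaptive TpV prox, so this is a sketch of the acceleration mechanism only, under those stand-in assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (stand-in for the adaptive TpV prox)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=300):
    """FISTA for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

# Toy sparse-recovery problem standing in for sparse-projection reconstruction:
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[:5] = 3.0
b = A @ x_true
x_hat = fista(A, b, lam=0.1)
```

The extrapolation step is what lifts the plain proximal-gradient O(1/k) rate to O(1/k^2); swapping the prox for a TpV-specific one would recover the paper's setting.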
Adaptive quantum computation in changing environments using projective simulation
Tiersch, M.; Ganahl, E. J.; Briegel, H. J.
2015-01-01
Quantum information processing devices need to be robust and stable against external noise and internal imperfections to ensure correct operation. In a setting of measurement-based quantum computation, we explore how an intelligent agent endowed with a projective simulator can act as controller to adapt measurement directions to an external stray field of unknown magnitude in a fixed direction. We assess the agent’s learning behavior in static and time-varying fields and explore composition strategies in the projective simulator to improve the agent’s performance. We demonstrate the applicability by correcting for stray fields in a measurement-based algorithm for Grover’s search. Thereby, we lay out a path for adaptive controllers based on intelligent agents for quantum information tasks. PMID:26260263
Framework for Adaptable Operating and Runtime Systems: Final Project Report
Patrick G. Bridges
2012-02-01
In this grant, we examined a wide range of techniques for constructing high-performance configurable system software for HPC systems and its application to DOE-relevant problems. Overall, research and development on this project focused on three specific areas: (1) software frameworks for constructing and deploying configurable system software, (2) application of these frameworks to HPC-oriented adaptable networking software, and (3) performance analysis of HPC system software to understand opportunities for performance optimization.
The VIADUC project: innovation in climate adaptation through service design
NASA Astrophysics Data System (ADS)
Corre, L.; Dandin, P.; L'Hôte, D.; Besson, F.
2015-07-01
From the French National Adaptation to Climate Change Plan, the "Drias, les futurs du climat" service has been developed to provide easy access to French regional climate projections. This is a major step in the implementation of French climate services. The usefulness of this service for the end-users and decision makers involved in adaptation planning at a local scale is investigated. The aims of the VIADUC project are thus to evaluate and enhance Drias, and to anticipate future developments in support of adaptation. Climate scientists work together with end-users and a service designer. The designer's role is to propose an innovative approach based on the interaction between scientists and citizens. The chosen end-users are three Natural Regional Parks located in the South West of France. These parks are administrative entities which gather municipalities sharing a common natural and cultural heritage. They are also rural areas in which specific economic activities take place, and are therefore concerned with and involved in both protecting their environment and setting up sustainable economic development. The first year of the project has been dedicated to investigation, including interviews with relevant representatives. Three key local economic sectors have been selected: forestry, pastoral farming and building activities. Working groups were composed of technicians, administrative and maintenance staff, policy makers and climate researchers. The sectors' needs for climate information have been assessed. The lessons learned led to actions which are presented hereinafter.
An adaptive filtered back-projection for photoacoustic image reconstruction
Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong
2015-01-01
Purpose: The purpose of this study is to develop an improved filtered-back-projection (FBP) algorithm for photoacoustic tomography (PAT), which allows image reconstruction with higher quality compared to images reconstructed through traditional algorithms. Methods: A rigorous expression of a weighting function has been derived directly from a photoacoustic wave equation and used as a ramp filter in the Fourier domain. The authors’ new algorithm utilizes this weighting function to precisely calculate each photoacoustic signal’s contribution and then reconstructs the image based on the retarded potential generated from the photoacoustic sources. In addition, an adaptive criterion has been derived for selecting the cutoff frequency of a low pass filter. Two computational phantoms were created to test the algorithm. The first phantom contained five spheres, with each sphere having a different absorbance. The phantom was used to test the capability for correctly representing both the geometry and the relative absorbed energy in a planar measurement system. The authors also used another phantom containing absorbers of different sizes with overlapping geometry to evaluate the performance of the new method for complicated geometry. In addition, random noise background was added to the simulated data, which were obtained by using an arc-shaped array of 50 evenly distributed transducers that spanned 160° over a circle with a radius of 65 mm. A normalized factor between the neighboring transducers was applied for correcting measurement signals in PAT simulations. The authors assumed that the scanned object was mounted on a holder that rotated over the full 360° and the scans were set to a sampling rate of 20.48 MHz. Results: The authors have obtained reconstructed images of the computerized phantoms by utilizing the new FBP algorithm. From the reconstructed image of the first phantom, one can see that this new approach allows not only obtaining a sharp image but also showing
How Useful Are Climate Projections for Adaptation Decision Making?
NASA Astrophysics Data System (ADS)
Smith, J. B.; Vogel, J. M.
2011-12-01
Decision making is often portrayed as a linear process that assumes scientific knowledge is a necessary precursor to effective policy and is used directly in policy making. Yet, in practice, the use of scientific information in decision making is more complex than the linear model implies. The use of climate projections in adaptation decision making is a case in point. This paper briefly reviews efforts by some decision makers to understand climate change risks and to apply this knowledge when making decisions on management of climate sensitive resources and infrastructure. In general, and in spite of extensive efforts to study climate change at the regional and local scale to support decision making, few decisions outside of adapting to sea level rise appear to draw directly on climate change projections. A number of U.S. municipalities and states, including Seattle, New York City, Phoenix, and the States of California and Washington, have used climate change projections to assess their vulnerability to various climate change impacts. Some adaptation decisions have been made based on projections of sea level rise, such as a change in the location of infrastructure. This may be because a future rise in sea level is virtually certain. In contrast, decision making on precipitation has been more limited, even where there is consensus on the likely change in sign of the variable. Nonetheless, decision makers are adopting strategies that can be justified based on current climate and climate variability and that also reduce risks from climate change. A key question for the scientific community is whether improved projections will add value to decision making. For example, it remains unclear how higher-resolution projections can change decision making as long as the sign and magnitude of projections across climate models and downscaling techniques retain a wide range of uncertainty. It is also unclear whether even better information on the sign and magnitude of change would
Robust projective lag synchronization in drive-response dynamical networks via adaptive control
NASA Astrophysics Data System (ADS)
Al-mahbashi, G.; Noorani, M. S. Md; Bakar, S. A.; Al-sawalha, M. M.
2016-02-01
This paper investigates the problem of projective lag synchronization behavior in drive-response dynamical networks (DRDNs) with identical and non-identical nodes. An adaptive control method is designed to achieve projective lag synchronization with fully unknown parameters and unknown bounded disturbances. These parameters are estimated by adaptive laws derived from Lyapunov stability theory. Furthermore, sufficient conditions for synchronization are derived analytically using the Lyapunov stability theory and adaptive control. In addition, the unknown bounded disturbances are also overcome by the proposed control. Finally, analytical results show that the states of the dynamical network with non-delayed coupling can be asymptotically synchronized onto a desired scaling factor under the designed controller. Simulation results show the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Klein, R.; Gordon, E.
2010-12-01
Scholars and policy analysts often contend that an effective climate adaptation strategy must entail "mainstreaming," or incorporating responses to possible climate impacts into existing planning and management decision frameworks. Such an approach, however, makes it difficult to assess the degree to which decisionmaking entities are engaging in adaptive activities that may or may not be explicitly framed around a changing climate. For example, a drought management plan may not explicitly address climate change, but the activities and strategies outlined in it may reduce vulnerabilities posed by a variable and changing climate. Consequently, to generate a strategic climate adaptation plan requires identifying the entire suite of activities that are implicitly linked to climate and may affect adaptive capacity within the system. Here we outline a novel, two-pronged approach, leveraging social science methods, to understanding adaptation throughout state government in Colorado. First, we conducted a series of interviews with key actors in state and federal government agencies, non-governmental organizations, universities, and other entities engaged in state issues. The purpose of these interviews was to elicit information about current activities that may affect the state’s adaptive capacity and to identify future climate-related needs across the state. Second, we have developed an interactive database cataloging organizations, products, projects, and people actively engaged in adaptive planning and policymaking that are relevant to the state of Colorado. The database includes a wiki interface, helping create a dynamic component that will enable frequent updating as climate-relevant information emerges. The results of this project are intended to paint a clear picture of sectors and agencies with higher and lower levels of adaptation awareness and to provide a roadmap for the next gubernatorial administration to pursue a more sophisticated climate adaptation agenda
Adaptive numerical methods for partial differential equations
Colella, P.
1995-07-01
This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
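The flag-and-refine cycle described above can be sketched in one dimension; the undivided-difference indicator and cell bisection below are a simplified stand-in for Berger-style block-structured AMR:

```python
import numpy as np

def flag_cells(u, tol):
    """Flag cells whose undivided difference (a simple stand-in for a
    generalized error criterion) exceeds tol."""
    return np.abs(np.diff(u)) > tol

def refine(x, flags):
    """Bisect every flagged cell; unflagged cells keep the coarse resolution."""
    pts = []
    for i, flagged in enumerate(flags):
        pts.append(x[i])
        if flagged:
            pts.append(0.5 * (x[i] + x[i + 1]))
    pts.append(x[-1])
    return np.array(pts)

x = np.linspace(0.0, 1.0, 33)
u = np.tanh((x - 0.5) / 0.05)        # smooth field with a steep front at x = 0.5
flags = flag_cells(u, tol=0.1)
x_fine = refine(x, flags)
```

Only the cells straddling the front get refined, so computational effort concentrates where the error indicator says it is needed; real AMR recurses this over grid hierarchies.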
NASA Technical Reports Server (NTRS)
Taylor, Patrick C.; Baker, Noel C.
2015-01-01
Earth's climate is changing and will continue to change into the foreseeable future. Expected changes in the climatological distribution of precipitation, surface temperature, and surface solar radiation will significantly impact agriculture. Adaptation strategies are, therefore, required to reduce the agricultural impacts of climate change. Climate change projections of precipitation, surface temperature, and surface solar radiation distributions are necessary input for adaptation planning studies. These projections are conventionally constructed from an ensemble of climate model simulations (e.g., the Coupled Model Intercomparison Project 5 (CMIP5)) as an equal-weighted average: one model, one vote. Each climate model, however, represents the array of climate-relevant physical processes with varying degrees of fidelity, influencing the projection of individual climate variables differently. Presented here is a new approach, termed the "Intelligent Ensemble," that constructs climate variable projections by weighting each model according to its ability to represent key physical processes, e.g., the precipitation probability distribution. This approach provides added value over the equal-weighted average method. Physical process metrics applied in the "Intelligent Ensemble" method are created using a combination of NASA and NOAA satellite and surface-based cloud, radiation, temperature, and precipitation data sets. The "Intelligent Ensemble" method is applied to the RCP4.5 and RCP8.5 anthropogenic climate forcing simulations within the CMIP5 archive to develop a set of climate change scenarios for precipitation, temperature, and surface solar radiation in each USDA Farm Resource Region for use in climate change adaptation studies.
An Adaptive Multi-agent System for Project Schedule Management
NASA Astrophysics Data System (ADS)
Shou, Yongyi; Lai, Changtao
A multi-agent system is established for project schedule management, considering the need for adaptive and dynamic scheduling under uncertainty. The system is realized using Java. In the proposed system, three types of agents, namely activity agents, resource agents, and a monitoring agent, are designed. Duration and resource requirement self-learning operators are developed for activity agents in order to model the self-learning and adaptive capacities of an agent in its local environment; moreover, a monitoring operator is also presented for the monitoring agent. The system allows the user to set up simulation parameters or scheduling rules according to their own preferences. Simulation results from an example showed that the system is effective in supporting users' decision-making process.
Principles and Methods of Adapted Physical Education.
ERIC Educational Resources Information Center
Arnheim, Daniel D.; And Others
Programs in adapted physical education are presented preceded by a background of services for the handicapped, by the psychosocial implications of disability, and by the growth and development of the handicapped. Elements of conducting programs discussed are organization and administration, class organization, facilities, exercise programs…
QUEST - A Bayesian adaptive psychometric method
NASA Technical Reports Server (NTRS)
Watson, A. B.; Pelli, D. G.
1983-01-01
An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
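A sketch of a QUEST-style procedure on a discrete grid of candidate thresholds; the logistic psychometric function and its parameters below are illustrative stand-ins for the Weibull form typically used, and the simulated observer shares the same function family:

```python
import numpy as np

rng = np.random.default_rng(42)

log_I = np.linspace(-3.0, 1.0, 401)               # candidate thresholds / test levels
posterior = np.exp(-0.5 * (log_I / 1.0) ** 2)     # Gaussian prior on the threshold
posterior /= posterior.sum()

def p_correct(x, threshold, slope=3.5, gamma=0.5, lapse=0.02):
    """Psychometric function of log intensity x (logistic stand-in for a Weibull);
    gamma is the 2AFC guessing rate, lapse the lapse rate."""
    return gamma + (1.0 - gamma - lapse) / (1.0 + np.exp(-slope * (x - threshold)))

true_threshold = -1.2
for _ in range(64):
    test_level = log_I[np.argmax(posterior)]      # place the trial at the posterior mode
    correct = rng.random() < p_correct(test_level, true_threshold)
    likelihood = p_correct(test_level, log_I)     # likelihood over candidate thresholds
    posterior *= likelihood if correct else (1.0 - likelihood)
    posterior /= posterior.sum()

estimate = log_I[np.argmax(posterior)]
```

Because the psychometric function is assumed invariant in form on the log-intensity axis, each Bayesian update only shifts and sharpens the posterior, which is what makes the procedure simple and efficient.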
Adaptive method of realizing natural gradient learning for multilayer perceptrons.
Amari, S; Park, H; Fukumizu, K
2000-06-01
The natural gradient learning method is known to have ideal performance for on-line training of multilayer perceptrons. It avoids plateaus, which give rise to slow convergence of the backpropagation method. It is Fisher efficient, whereas the conventional method is not. However, implementing the method requires calculating the Fisher information matrix and its inverse, which is practically very difficult. This article proposes an adaptive method of directly obtaining the inverse of the Fisher information matrix. It generalizes the adaptive Gauss-Newton algorithms and provides a solid theoretical justification for them. Simulations show that the proposed adaptive method works very well for realizing natural gradient learning. PMID:10935719
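The direct adaptive estimation of the inverse Fisher matrix can be sketched as a stochastic update driven by per-sample gradients; the synthetic Gaussian gradients and the step-size schedule below are assumptions for illustration, not the article's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
G = np.diag([4.0, 2.0, 1.0, 0.5])        # "true" Fisher information (synthetic)
chol = np.linalg.cholesky(G)

G_inv_hat = np.eye(d)                     # running estimate of G^{-1}
for t in range(1, 20001):
    g = chol @ rng.standard_normal(d)     # per-sample gradient with E[g g^T] = G
    eps = 1.0 / (100.0 + t)               # decaying adaptation rate (illustrative)
    v = G_inv_hat @ g
    # Adaptive rule: G_inv <- (1 + eps) * G_inv - eps * (G_inv g)(G_inv g)^T,
    # whose fixed point satisfies E[g g^T] = G, i.e. the true inverse Fisher.
    G_inv_hat = (1.0 + eps) * G_inv_hat - eps * np.outer(v, v)

inv_G = np.linalg.inv(G)
rel_err = np.linalg.norm(G_inv_hat - inv_G) / np.linalg.norm(inv_G)
```

The update never inverts a matrix explicitly: it tracks the inverse directly from rank-one gradient information, which is the practical point of the adaptive method.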
Lin, Hui; Gao, Jian; Mei, Qing; He, Yunbo; Liu, Junxiu; Wang, Xingjin
2016-04-01
It is a challenge for any optical method to measure objects with a large range of reflectivity variation across the surface. Image saturation results in incorrect intensities in captured fringe pattern images, leading to phase and measurement errors. This paper presents a new adaptive digital fringe projection technique which avoids image saturation and has a high signal-to-noise ratio (SNR) in the three-dimensional (3-D) shape measurement of objects that have a large range of reflectivity variation across the surface. Compared to previous high dynamic range 3-D scan methods using many exposures and fringe pattern projections, which consume substantial time, the proposed technique uses only two preliminary steps of fringe pattern projection and image capture to generate the adapted fringe patterns, by adaptively adjusting the pixel-wise intensity of the projected fringe patterns based on the saturated pixels in the captured images of the surface being measured. For the bright regions due to high surface reflectivity and high illumination by the ambient light and surface interreflections, the projected intensity is reduced just enough to avoid image saturation. Simultaneously, the maximum intensity of 255 is used for those dark regions with low surface reflectivity to maintain high SNR. Our experiments demonstrate that the proposed technique can achieve higher 3-D measurement accuracy across a surface with a large range of reflectivity variation. PMID:27137056
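The pixel-wise intensity adaptation can be sketched as follows; using a single low-intensity preliminary capture and a linear camera response are simplifying assumptions made here for illustration (the paper uses two preliminary projection/capture steps):

```python
import numpy as np

def adapted_pattern(low_capture, low_level, target=230.0, max_level=255.0):
    """Estimate per-pixel response gain from one unsaturated low-intensity capture,
    then choose the projector level mapping each pixel near `target` counts,
    capped at `max_level`. A simplified, hypothetical version of the idea."""
    gain = low_capture / low_level                  # camera counts per projector unit
    level = np.where(gain > 0, target / np.maximum(gain, 1e-12), max_level)
    return np.clip(level, 0.0, max_level)

# Synthetic surface whose reflectivity spans three orders of magnitude:
rng = np.random.default_rng(5)
reflectivity = 10.0 ** rng.uniform(-2.0, 1.0, size=(64, 64))  # counts per unit level
low = np.clip(reflectivity * 20.0, 0.0, 255.0)     # preliminary capture at level 20
pattern = adapted_pattern(low, 20.0)
predicted = np.clip(reflectivity * pattern, 0.0, 255.0)       # adapted capture
naive = np.clip(reflectivity * 255.0, 0.0, 255.0)             # full-intensity capture
```

Bright pixels are driven just below saturation while dark pixels still receive the full projector intensity, which is the SNR trade-off the abstract describes.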
Solution-adaptive finite element method in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1993-01-01
Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.
Adaptive method for electron bunch profile prediction
NASA Astrophysics Data System (ADS)
Scheinker, Alexander; Gessner, Spencer
2015-10-01
We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using MATLAB and the Experimental Physics and Industrial Control System (EPICS). The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which is important for the plasma wakefield acceleration experiments being explored at FACET.
A massively parallel adaptive finite element method with dynamic load balancing
Devine, K.D.; Flaherty, J.E.; Wheat, S.R.; Maccabe, A.B.
1993-05-01
We construct massively parallel, adaptive finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. We also present results using adaptive p-refinement to reduce the computational cost of the method. We describe tiling, a dynamic, element-based data migration system. Tiling dynamically maintains global load balance in the adaptive method by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. We demonstrate the effectiveness of the dynamic load balancing with adaptive p-refinement examples.
A New Adaptive Image Denoising Method Based on Neighboring Coefficients
NASA Astrophysics Data System (ADS)
Biswas, Mantosh; Om, Hari
2016-03-01
Many techniques have been discussed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. However, these methods do not achieve good image quality, since their thresholds prevent them from modifying and removing many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
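The neighboring-coefficient shrinkage rule that NeighShrink-type methods apply can be sketched in one dimension; applying it directly to synthetic coefficients, rather than to an actual wavelet transform of an image, keeps the example self-contained:

```python
import numpy as np

def neigh_shrink(coeffs, sigma, win=3):
    """Shrink each coefficient by beta = max(0, 1 - T^2 / S^2), where S^2 sums
    squared coefficients over a centered window and T is the universal threshold
    (the NeighShrink rule applied to a 1-D coefficient array)."""
    n = coeffs.size
    T2 = 2.0 * sigma ** 2 * np.log(n)          # universal threshold, squared
    pad = win // 2
    padded = np.pad(coeffs, pad, mode="edge")
    out = np.empty_like(coeffs)
    for i in range(n):
        S2 = np.sum(padded[i:i + win] ** 2)
        beta = max(0.0, 1.0 - T2 / S2) if S2 > 0 else 0.0
        out[i] = beta * coeffs[i]
    return out

# Sparse "detail" coefficients buried in unit-variance noise:
rng = np.random.default_rng(3)
clean = np.zeros(512)
clean[100:110] = 8.0
sigma = 1.0
noisy = clean + sigma * rng.standard_normal(512)
denoised = neigh_shrink(noisy, sigma)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

Because the decision uses the energy of a whole neighborhood rather than each coefficient alone, isolated noise is suppressed while clustered signal coefficients survive with little shrinkage.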
Online Sequential Projection Vector Machine with Adaptive Data Mean Update
Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei
2016-01-01
We propose a simple online learning algorithm especially suited for high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes the algorithm easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD performed before OSELM). The results obtained demonstrate the superior generalization performance and efficiency of OSPVM. PMID:27143958
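The adaptive data mean update needed for chunk-by-chunk centering reduces to a weighted combination of the old mean and each incoming chunk's mean; a minimal sketch (the class and method names are illustrative, not from the paper):

```python
import numpy as np

class RunningMean:
    """Chunk-by-chunk adaptive mean update for online data centering."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)

    def update(self, chunk):
        chunk = np.atleast_2d(chunk)
        m = chunk.shape[0]
        total = self.n + m
        # Weighted combination of the previous mean and the new chunk's mean:
        self.mean = (self.n * self.mean + m * chunk.mean(axis=0)) / total
        self.n = total
        return self.mean

rng = np.random.default_rng(7)
X = rng.standard_normal((1000, 5)) + 2.0
rm = RunningMean(5)
for start in range(0, 1000, 100):          # feed the data in ten chunks
    rm.update(X[start:start + 100])
```

After all chunks are processed, the running mean equals the batch mean exactly (up to floating-point rounding), so centering never requires revisiting old data.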
NASA Technical Reports Server (NTRS)
Shelhamer, Mark; Goldberg, Jefim; Minor, Lloyd B.; Paloski, William H.; Young, Laurence R.; Zee, David S.
1999-01-01
Impairment of gaze and head stabilization reflexes can lead to disorientation and reduced performance in sensorimotor tasks such as piloting of spacecraft. Transitions between different gravitoinertial force (gif) environments - as during different phases of space flight - provide an extreme test of the adaptive capabilities of these mechanisms. We wish to determine to what extent the sensorimotor skills acquired in one gravity environment will transfer to others, and to what extent gravity serves as a context cue for inhibiting such transfer. We use the general approach of adapting a response (saccades, vestibuloocular reflex: VOR, or vestibulocollic reflex: VCR) to a particular change in gain or phase in one gif condition, adapting to a different gain or phase in a second gif condition, and then seeing if gif itself - the context cue - can recall the previously-learned adapted responses. Previous evidence indicates that unless there is specific training to induce context-specificity, reflex adaptation is sequential rather than simultaneous. Various experiments in this project investigate the behavioral properties, neurophysiological basis, and anatomical substrate of context-specific learning, using otolith (gravity) signals as a context cue. In the following, we outline the methods for all experiments in this project, and provide details and results on selected experiments.
Moving and adaptive grid methods for compressible flows
NASA Technical Reports Server (NTRS)
Trepanier, Jean-Yves; Camarero, Ricardo
1995-01-01
This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux-difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.
An adaptive pseudospectral method for discontinuous problems
NASA Technical Reports Server (NTRS)
Augenbaum, Jeffrey M.
1988-01-01
The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep gradient solutions of hyperbolic pde's by approximation in suitably chosen coordinate systems. Continuous, high gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.
Adaptable radiation monitoring system and method
Archer, Daniel E.; Beauchamp, Brock R.; Mauger, G. Joseph; Nelson, Karl E.; Mercer, Michael B.; Pletcher, David C.; Riot, Vincent J.; Schek, James L.; Knapp, David A.
2006-06-20
A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.
Adaptive computational methods for aerothermal heating analysis
NASA Technical Reports Server (NTRS)
Price, John M.; Oden, J. Tinsley
1988-01-01
The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations with concentration on shock and inviscid flow field capturing. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme are presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.
Adaptive mesh strategies for the spectral element method
NASA Technical Reports Server (NTRS)
Mavriplis, Catherine
1992-01-01
An adaptive spectral method was developed for the efficient solution of time-dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high-order spectral methods.
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
Regional projections of North Indian climate for adaptation studies.
Mathison, Camilla; Wiltshire, Andrew; Dimri, A P; Falloon, Pete; Jacob, Daniela; Kumar, Pankaj; Moors, Eddy; Ridley, Jeff; Siderius, Christian; Stoffel, Markus; Yasunari, T
2013-12-01
Adaptation is increasingly important for regions around the world where large changes in climate could have an impact on populations and industry. The Brahmaputra-Ganges catchments have a large population, a main industry of agriculture and a growing hydro-power industry, making the region susceptible to changes in the Indian Summer Monsoon, annually the main water source. The HighNoon project has completed four regional climate model simulations for India and the Himalaya at high resolution (25 km) from 1960 to 2100 to provide an ensemble of simulations for the region. In this paper we have assessed the ensemble for these catchments, comparing the simulations with observations, to give credence that the simulations provide a realistic representation of atmospheric processes and therefore future climate. We have illustrated how these simulations could be used to provide information on potential future climate impacts and therefore aid decision-making, using climatology and threshold analysis. The ensemble analysis shows an increase in temperature between the baseline (1970-2000) and the 2050s (2040-2070) of between 2 and 4°C, and an increase in the number of days with maximum temperatures above 28°C and 35°C. There is less certainty for precipitation and runoff, which show considerable variability, with projected changes spanning zero even in this relatively small ensemble. The HighNoon ensemble is the most complete dataset for the region, providing useful information on a wide range of variables for the regional climate of the Brahmaputra-Ganges region; however, there are processes not yet included in the models that could have an impact on the simulations of future climate. We have discussed these processes and show that the range from the HighNoon ensemble is similar in magnitude to potential changes in projections where these processes are included. Therefore strategies for adaptation must be robust and flexible, allowing for advances in the science and natural environmental changes. PMID
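For temperature, the threshold analysis described above reduces to counting days whose maximum exceeds fixed thresholds. A minimal sketch (the function name and the example values are illustrative, not from the paper; the 28°C and 35°C thresholds are those quoted in the abstract):

```python
def threshold_days(daily_tmax, thresholds=(28.0, 35.0)):
    """Count, for each threshold, how many days in the series have a
    maximum temperature strictly above it."""
    return {t: sum(1 for x in daily_tmax if x > t) for t in thresholds}
```

Applied to a baseline period and a future period of a simulation, the difference in counts gives the projected change in exceedance days.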
Adaptive sequential methods for detecting network intrusions
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Walker, Ernest
2013-06-01
In this paper, we propose new sequential methods for detecting port-scan attackers, which routinely perform random "portscans" of IP addresses to find vulnerable servers to compromise. In addition to rigorously controlling the probability of falsely implicating benign remote hosts as malicious, our method performs significantly faster than other current solutions. Moreover, our method guarantees that the maximum amount of observational time is bounded. In contrast to the previously most effective method, the Threshold Random Walk algorithm, which is explicit and analytical in nature, our proposed algorithm involves parameters to be determined by numerical methods. We have introduced computational techniques such as iterative minimax optimization for quick determination of the parameters of the new detection algorithm. A framework of multi-valued decisions for detecting port scanners and DoS attacks is also proposed.
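The Threshold Random Walk baseline mentioned above is essentially Wald's sequential probability ratio test applied to connection outcomes: scanners mostly hit closed ports, benign hosts mostly succeed. A minimal SPRT-style sketch, with illustrative success probabilities and error rates rather than the paper's numerically determined parameters:

```python
import math

def trw_classify(outcomes, p_benign=0.8, p_scanner=0.2,
                 alpha=0.01, beta=0.01):
    """Sequential likelihood-ratio walk over connection outcomes
    (True = connection succeeded). Returns 'benign', 'scanner', or
    'undecided'. All parameter values here are illustrative."""
    upper = math.log((1 - beta) / alpha)   # crossing it declares 'scanner'
    lower = math.log(beta / (1 - alpha))   # crossing it declares 'benign'
    s = 0.0
    for ok in outcomes:
        if ok:
            # Successes are likelier for benign hosts, so the walk drops.
            s += math.log(p_scanner / p_benign)
        else:
            # Failures are likelier for scanners, so the walk rises.
            s += math.log((1 - p_scanner) / (1 - p_benign))
        if s >= upper:
            return 'scanner'
        if s <= lower:
            return 'benign'
    return 'undecided'
```

Note that this classical walk has no bound on observation time, which is one of the shortcomings the proposed method addresses.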
Adaptive finite-element method for diffraction gratings
NASA Astrophysics Data System (ADS)
Bao, Gang; Chen, Zhiming; Wu, Haijun
2005-06-01
A second-order finite-element adaptive strategy with error control for one-dimensional grating problems is developed. The unbounded computational domain is truncated to a bounded one by a perfectly-matched-layer (PML) technique. The PML parameters, such as the thickness of the layer and the medium properties, are determined through sharp a posteriori error estimates. The adaptive finite-element method is expected to increase significantly the accuracy and efficiency of the discretization as well as reduce the computation cost. Numerical experiments are included to illustrate the competitiveness of the proposed adaptive method.
Adaptive multiscale method for two-dimensional nanoscale adhesive contacts
NASA Astrophysics Data System (ADS)
Tong, Ruiting; Liu, Geng; Liu, Lan; Wu, Liyan
2013-05-01
There are two separate traditional approaches to modeling contact problems: continuum and atomistic theory. Continuum theory is used successfully in many domains, but when the scale of the model reaches the nanometer level, the continuum approximation meets challenges. Atomistic theory can capture the detailed behavior of individual atoms using molecular dynamics (MD) or quantum mechanics; although accurate, it is usually time-consuming. A multiscale method coupling MD and finite elements (FE) is presented. To mesh the FE region automatically, an adaptive method based on the strain energy gradient is introduced into the multiscale method to constitute an adaptive multiscale method. Using the proposed method, adhesive contacts between a rigid cylinder and an elastic substrate are studied, and the results are compared with full MD simulations. The process of FE mesh refinement shows that the adaptive multiscale method makes FE mesh generation more flexible. Comparison of the displacements of boundary atoms in the overlap region with the results from full MD simulations indicates that the adaptive multiscale method transfers displacements effectively. Displacements of atoms and FE nodes on the center line of the multiscale model agree well with those of atoms in full MD simulations, which demonstrates continuity in the overlap region. Furthermore, the von Mises stress contours and contact force distributions in the contact region are almost the same as in full MD simulations. The method presented combines a multiscale method with an adaptive technique, and provides a more effective approach for multiscale methods and for the investigation of nanoscale contact problems.
Fast adaptive composite grid methods on distributed parallel architectures
NASA Technical Reports Server (NTRS)
Lemke, Max; Quinlan, Daniel
1992-01-01
The fast adaptive composite (FAC) grid method is compared with the asynchronous fast adaptive composite (AFAC) method under a variety of conditions, including vectorization and parallelization. Results are given for distributed-memory multiprocessor architectures (SUPRENUM, Intel iPSC/2 and iPSC/860). It is shown that the good performance of AFAC, and its superiority over FAC in a parallel environment, is a property of the algorithm and not dependent on peculiarities of any machine.
An Adaptive De-Aliasing Strategy for Discontinuous Galerkin methods
NASA Astrophysics Data System (ADS)
Beck, Andrea; Flad, David; Frank, Hannes; Munz, Claus-Dieter
2015-11-01
Discontinuous Galerkin methods combine the accuracy of a local polynomial representation with the geometrical flexibility of an element-based discretization. In combination with their excellent parallel scalability, these methods are currently of great interest for DNS and LES. For high order schemes, the dissipation error approaches a cut-off behavior, which allows an efficient wave resolution per degree of freedom, but also reduces robustness against numerical errors. One important source of numerical error is the inconsistent discretization of the non-linear convective terms, which results in aliasing of kinetic energy and solver instability. Consistent evaluation of the inner products prevents this form of error, but is computationally very expensive. In this talk, we discuss the need for a consistent de-aliasing to achieve a neutrally stable scheme, and present a novel strategy for recovering a part of the incurred computational costs. By implementing the de-aliasing operation through a cell-local projection filter, we can perform adaptive de-aliasing in space and time, based on physically motivated indicators. We will present results for a homogeneous isotropic turbulence and the Taylor-Green vortex flow, and discuss implementation details, accuracy and efficiency.
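The adaptive strategy above filters cell-locally only where an indicator flags trouble. That logic can be caricatured on a single cell's modal coefficients; the smoothness indicator below is a Persson-Peraire-style modal-decay sensor chosen here for illustration, not necessarily the physically motivated indicator used in the talk, and the sharp cutoff stands in for the cell-local projection filter:

```python
def highest_mode_energy(coeffs):
    """Fraction of modal energy carried by the two highest modes of a
    cell's polynomial (e.g. Legendre) coefficients: a crude smoothness
    indicator in the spirit of Persson-Peraire sensors."""
    total = sum(c * c for c in coeffs)
    top = sum(c * c for c in coeffs[-2:])
    return top / total if total > 0 else 0.0

def adaptive_filter(coeffs, n_cut, threshold=1e-3):
    """Apply a sharp modal cutoff (keep modes 0..n_cut) only in cells
    where the indicator detects significant high-mode energy; leave
    smooth cells untouched to save the cost of filtering everywhere."""
    if highest_mode_energy(coeffs) > threshold:
        return [c if k <= n_cut else 0.0 for k, c in enumerate(coeffs)]
    return list(coeffs)
```

Skipping the operation in smooth cells is exactly the cost-recovery idea: de-aliasing is only paid for where aliasing energy is actually present.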
Adaptive upscaling with the dual mesh method
Guerillot, D.; Verdiere, S.
1997-08-01
The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity of considering different average relative permeability values depending on the direction in space. Moreover, these values can differ for the same average saturation. This proves that an a priori upscaling cannot be the answer, even in homogeneous cases, because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way, by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinement in accurate prediction of damage levels and failure time.
A massively parallel adaptive finite element method with dynamic load balancing
Devine, K.D.; Flaherty, J.E.; Wheat, S.R.; Maccabe, A.B.
1993-12-31
The authors construct massively parallel adaptive finite element methods for the solution of hyperbolic conservation laws. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. They demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. They present results using adaptive p-refinement to reduce the computational cost of the method, and tiling, a dynamic, element-based data migration system that maintains global load balance of the adaptive method by overlapping neighborhoods of processors that each perform local balancing.
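The spatial basis above is piecewise Legendre polynomials. As a one-element sketch, here is how a function is L2-projected onto such a basis on the reference interval [-1, 1]; the quadrature is composite trapezoid for simplicity, a stand-in for the Gauss rules a production DG code would use:

```python
def legendre(k, x):
    """Evaluate the Legendre polynomial P_k(x) via the three-term
    recurrence (n+1)P_{n+1} = (2n+1)x P_n - n P_{n-1}."""
    if k == 0:
        return 1.0
    p_prev, p = 1.0, x
    for n in range(1, k):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

def project(f, degree, samples=2001):
    """L2-project f on [-1, 1] onto Legendre modes 0..degree.
    Coefficient k is (2k+1)/2 * integral of f * P_k, computed here
    with composite-trapezoid quadrature on a uniform grid."""
    xs = [-1.0 + 2.0 * i / (samples - 1) for i in range(samples)]
    h = 2.0 / (samples - 1)
    coeffs = []
    for k in range(degree + 1):
        vals = [f(x) * legendre(k, x) for x in xs]
        integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
        coeffs.append((2 * k + 1) / 2.0 * integral)
    return coeffs
```

Because the basis is orthogonal, p-refinement simply appends higher modes without recomputing the existing coefficients, which is what makes adaptive p-refinement cheap in this setting.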
LDRD Final Report: Adaptive Methods for Laser Plasma Simulation
Dorr, M R; Garaizar, F X; Hittinger, J A
2003-01-29
The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are ''hydrodynamically large'', i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an
An auto-adaptive background subtraction method for Raman spectra.
Xie, Yi; Yang, Lidong; Sun, Xilong; Wu, Dewen; Chen, Qizhen; Zeng, Yongming; Liu, Guokun
2016-05-15
Background subtraction is a crucial step in the preprocessing of Raman spectra. Usually, manual parameter tuning of the background subtraction method is necessary for efficient removal of the background, which makes the quality of the spectrum empirically dependent. In order to avoid artificial bias, we propose an auto-adaptive background subtraction method without parameter adjustment. The main procedure is: (1) select the local minima of the spectrum while preserving major peaks, (2) apply an interpolation scheme to estimate the background, and (3) design an iteration scheme to improve the adaptability of the background subtraction. Both simulated data and Raman spectra have been used to evaluate the proposed method. Compared with the backgrounds obtained from three widely applied methods (polynomial fitting, Baek's method and airPLS), the auto-adaptive method meets the demand of practical applications in terms of efficiency and accuracy. PMID:26950502
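Steps (1)-(3) can be sketched directly. This is an illustrative reading of the procedure, not the paper's implementation: the interpolation is linear, the iteration count is fixed, and all function names are ours:

```python
def local_minima(y):
    """Indices of local minima of y; the endpoints are always kept so
    the interpolant spans the whole spectrum."""
    idx = [0]
    idx += [i for i in range(1, len(y) - 1)
            if y[i] <= y[i - 1] and y[i] <= y[i + 1]]
    idx.append(len(y) - 1)
    return idx

def interpolate(anchors, y, n):
    """Piecewise-linear curve through the anchor points of y."""
    bg = [0.0] * n
    for a, b in zip(anchors, anchors[1:]):
        for i in range(a, b + 1):
            t = (i - a) / (b - a) if b > a else 0.0
            bg[i] = y[a] + t * (y[b] - y[a])
    return bg

def subtract_background(y, iterations=5):
    """Iteratively clip the spectrum to the interpolant through its
    local minima, so peaks are stripped while the baseline survives,
    then subtract the resulting background estimate."""
    bg = list(y)
    for _ in range(iterations):
        anchors = local_minima(bg)
        est = interpolate(anchors, bg, len(bg))
        bg = [min(v, e) for v, e in zip(bg, est)]
    return [v - b for v, b in zip(y, bg)]
```

On a flat baseline with a single sharp peak, the peak is returned intact and the baseline maps to zero, which is the behavior the iteration scheme is designed to preserve.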
Track and vertex reconstruction: From classical to adaptive methods
Strandlie, Are; Fruehwirth, Rudolf
2010-04-15
This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.
Introduction to Adaptive Methods for Differential Equations
NASA Astrophysics Data System (ADS)
Eriksson, Kenneth; Estep, Don; Hansbo, Peter; Johnson, Claes
Knowing thus the Algorithm of this calculus, which I call Differential Calculus, all differential equations can be solved by a common method (Gottfried Wilhelm von Leibniz, 1646-1719).When, several years ago, I saw for the first time an instrument which, when carried, automatically records the number of steps taken by a pedestrian, it occurred to me at once that the entire arithmetic could be subjected to a similar kind of machinery so that not only addition and subtraction, but also multiplication and division, could be accomplished by a suitably arranged machine easily, promptly and with sure results. For it is unworthy of excellent men to lose hours like slaves in the labour of calculations, which could safely be left to anyone else if the machine was used. And now that we may give final praise to the machine, we may say that it will be desirable to all who are engaged in computations which, as is well known, are the managers of financial affairs, the administrators of others estates, merchants, surveyors, navigators, astronomers, and those connected with any of the crafts that use mathematics (Leibniz).
Stability and error estimation for Component Adaptive Grid methods
NASA Technical Reports Server (NTRS)
Oliger, Joseph; Zhu, Xiaolei
1994-01-01
Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDEs) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAGs using the stability results. Using these estimates, the error can be controlled on CAGs. Thus, the solution can be computed efficiently on CAGs within a given error tolerance. Computational results for time-dependent linear problems in one and two space dimensions are presented.
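The core idea of computing within a given error tolerance, estimate the error, then refine until the estimate drops below the tolerance, can be illustrated in a much simpler setting than CAG methods for PDEs: step-halving (Richardson) error control for trapezoid quadrature. This is a generic sketch of error-controlled refinement, not the CAG estimator itself:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n uniform subintervals."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b)
                + sum(f(a + i * h) for i in range(1, n)))

def adaptive_integrate(f, a, b, tol=1e-8):
    """Halve the step until the Richardson error estimate falls below
    tol. For an O(h^2) method, (fine - coarse)/3 estimates the error
    of the fine result. Returns (value, n_used, error_estimate)."""
    n = 4
    coarse = trapezoid(f, a, b, n)
    while True:
        fine = trapezoid(f, a, b, 2 * n)
        est = abs(fine - coarse) / 3.0
        if est <= tol:
            return fine, 2 * n, est
        coarse, n = fine, 2 * n
```

The same loop structure, solve, estimate, refine where the estimate is too large, underlies adaptive PDE solvers; the difference is that on grids the refinement is local rather than global.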
NASA Astrophysics Data System (ADS)
Tsai, Chi-Yi; Song, Kai-Tai
2006-02-01
A novel heterogeneity-projection hard-decision adaptive interpolation (HPHD-AI) algorithm is proposed in this paper for color reproduction from Bayer mosaic images. The proposed algorithm aims to estimate the optimal interpolation direction and perform hard-decision interpolation, in which the decision is made before interpolation. To do so, a new heterogeneity-projection scheme based on spectral-spatial correlation is proposed to decide the best interpolation direction from the original mosaic image directly. Exploiting the proposed heterogeneity-projection scheme, a hard-decision rule can be designed easily to perform the interpolation. We have compared this technique with three recently proposed demosaicing techniques: Lu's, Gunturk's and Li's methods, by utilizing twenty-five natural images from Kodak PhotoCD. The experimental results show that HPHD-AI outperforms all of them in both PSNR values and S-CIELab ΔE*ab measures.
Adaptive multiscale model reduction with Generalized Multiscale Finite Element Methods
NASA Astrophysics Data System (ADS)
Chung, Eric; Efendiev, Yalchin; Hou, Thomas Y.
2016-09-01
In this paper, we discuss a general multiscale model reduction framework based on multiscale finite element methods. We give a brief overview of related multiscale methods. Due to page limitations, the overview focuses on a few related methods and is not intended to be comprehensive. We present a general adaptive multiscale model reduction framework, the Generalized Multiscale Finite Element Method. Besides the method's basic outline, we discuss some important ingredients needed for the method's success. We also discuss several applications. The proposed method allows performing local model reduction in the presence of high contrast and no scale separation.
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.
1998-12-10
OAK-B135 Final Report: Symposium on Adaptive Methods for Partial Differential Equations. Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
EPACT II: project and methods.
Juillerat, Pascal; Froehlich, Florian; Felley, Christian; Pittet, Valérie; Mottet, Christian; Gonvers, Jean-Jacques; Michetti, Pierre; Vader, John-Paul
2007-01-01
Building on the first European Panel on the Appropriateness of Crohn's Disease Treatment (EPACT I), which was held in Lausanne at the beginning of March 2004, a new panel will be convened in Switzerland (EPACT II, November to December 2007) to update this work. A combined evidence- and panel-based method (RAND) will be applied to assess the appropriateness of therapy for Crohn's disease (CD). In preparation for the meeting of experts, reviews of evidence-based literature were prepared for major clinical presentations of CD. During the meeting, an international multidisciplinary panel that includes gastroenterologists, surgeons and general practitioners weighs the strength of evidence and applies clinical experience when assessing the appropriateness of therapy for 569 specific indications (clinical scenarios). This chapter describes in detail the process of updating the literature review and the systematic approach of the RAND Appropriateness Method used during the expert panel meeting. PMID:18239398
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics
Anderson, R W; Pember, R B; Elliott, N S
2004-01-28
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.
Adaptive wavelet collocation method simulations of Rayleigh-Taylor instability
NASA Astrophysics Data System (ADS)
Reckinger, S. J.; Livescu, D.; Vasilyev, O. V.
2010-12-01
Numerical simulations of single-mode, compressible Rayleigh-Taylor instability are performed using the adaptive wavelet collocation method (AWCM), which utilizes wavelets for dynamic grid adaptation. Due to the physics-based adaptivity and direct error control of the method, AWCM is ideal for resolving the wide range of scales present in the development of the instability. The problem is initialized consistent with the solutions from linear stability theory. Non-reflecting boundary conditions are applied to prevent the contamination of the instability growth by pressure waves created at the interface. AWCM is used to perform direct numerical simulations that match the early-time linear growth, the terminal bubble velocity and a reacceleration region.
Adaptive Management for Urban Watersheds: The Slavic Village Pilot Project
Adaptive management is an environmental management strategy that uses an iterative process of decision-making to reduce the uncertainty in environmental management via system monitoring. A central tenet of adaptive management is that management involves a learning process that ca...
Adaptive windowed range-constrained Otsu method using local information
NASA Astrophysics Data System (ADS)
Zheng, Jia; Zhang, Dinghua; Huang, Kuidong; Sun, Yuanxi; Tang, Shaojie
2016-01-01
An adaptive windowed range-constrained Otsu method using local information is proposed for improving the performance of image segmentation. First, the reasons why traditional thresholding methods perform poorly on the segmentation of complicated images are analyzed, and the influences of global and local thresholding on segmentation are compared. Second, two methods are proposed that adaptively change the size of the local window according to local information, and their characteristics are analyzed. Specifically, the number of edge pixels in the local window of the binarized variance image is used to adaptively change the local window size. Finally, the superiority of the proposed method over other methods such as the range-constrained Otsu, the active contour model, the double Otsu, Bradley's method, and distance-regularized level set evolution is demonstrated. Experiments validate that the proposed method preserves more details and achieves a much more satisfactory area overlap measure than the other conventional methods.
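For context, the base criterion that range-constrained and windowed variants build on is Otsu's maximization of between-class variance over the gray-level histogram. A minimal global sketch (not the authors' adaptive windowed variant; the histogram format is an assumption):

```python
def otsu_threshold(hist):
    """Return the gray level t maximizing between-class variance.

    hist: list of pixel counts for gray levels 0..L-1; pixels with
    level <= t form the background class.
    """
    total = sum(hist)
    grand_mean = sum(g * h for g, h in enumerate(hist)) / total
    best_t, best_var = 0, -1.0
    w0 = 0.0    # cumulative weight of the background class
    sum0 = 0.0  # cumulative first moment of the background class
    for t, h in enumerate(hist):
        w0 += h
        sum0 += t * h
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum0 / w0
        mu1 = (grand_mean * total - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

A windowed variant would apply the same criterion to the histogram of each local window rather than the whole image.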
Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.
ERIC Educational Resources Information Center
Butler, Ronald W.
The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
A Conditional Exposure Control Method for Multidimensional Adaptive Testing
ERIC Educational Resources Information Center
Finkelman, Matthew; Nering, Michael L.; Roussos, Louis A.
2009-01-01
In computerized adaptive testing (CAT), ensuring the security of test items is a crucial practical consideration. A common approach to reducing item theft is to define maximum item exposure rates, i.e., to limit the proportion of examinees to whom a given item can be administered. Numerous methods for controlling exposure rates have been proposed…
An analysis of European riverine flood risk and adaptation measures under projected climate change
NASA Astrophysics Data System (ADS)
Bouwer, Laurens; Burzel, Andreas; Holz, Friederike; Winsemius, Hessel; de Bruijn, Karin
2015-04-01
There is increasing need to assess costs and benefits of adaptation at scales beyond the river basin. In Europe, such estimates are required at the European scale in order to set priorities for action and financing, for instance in the context of the EU Adaptation Strategy. The goal of this work as part of the FP7 BASE project is to develop a flood impact model that can be applied at Pan-European scale and that is able to project changes in flood risk due to climate change and socio-economic developments, and costs of adaptation. For this research, we build upon the global flood hazard estimation method developed by Winsemius et al. (Hydrology and Earth System Sciences, 2013), which produces flood inundation maps at different return periods, for present day (EU WATCH) and future climate (IPCC scenarios RCP4.5 and 8.5, for five climate models). These maps are used for the assessment of flood impacts. We developed and tested a model for assessing direct economic flood damages by using large scale land use maps. We characterise vulnerable land use functions, in particular residential, commercial, industrial, infrastructure and agriculture, using depth-damage relationships. Furthermore, we apply up to NUTS3 level information on Gross Domestic Product, which is used as a proxy for relative differences in maximum damage values between different areas. Next, we test two adaptation measures, by adjusting flood protection levels and adjusting damage functions. The results show the projected changes in flood risk in the future. For example, at the NUTS2 level, flood risk increases in some regions up to 179% (between the baseline scenario 1960-1999 and time slice 2010-2049). At the country level there are increases up to 60% for selected climate models. The conference presentation will show the most relevant improvements in damage modelling on the continental scale, and results of the analysis of adaptation measures. The results will be critically discussed under the aspect of major
Adaptive frequency estimation by MUSIC (Multiple Signal Classification) method
NASA Astrophysics Data System (ADS)
Karhunen, Juha; Nieminen, Esko; Joutsensalo, Jyrki
During the last few years, the eigenvector-based method called MUSIC has become very popular for estimating the frequencies of sinusoids in additive white noise. Adaptive realizations of the MUSIC method are studied using simulated data. Several of the adaptive realizations give results in practice that are as good as those of the nonadaptive standard realization. The only exceptions are instantaneous gradient-type algorithms, which need considerably more samples to achieve comparable performance. A new method is proposed for constructing initial estimates of the signal subspace. The method often dramatically improves the performance of instantaneous gradient-type algorithms. The new signal subspace estimate can also be used to define a frequency estimator directly or to simplify eigenvector computation.
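The subspace machinery behind MUSIC can be sketched compactly. The toy below (pure Python; the dimensions, grid, and noise level are illustrative assumptions, not from the paper) estimates one complex tone's frequency: it forms a sample autocorrelation matrix, extracts the dominant signal-subspace eigenvector by power iteration, one of the simplest adaptive-style subspace estimators, and scans a frequency grid for the pseudospectrum peak:

```python
import cmath
import random

def music_single_tone(x, m=4, grid=500):
    """MUSIC-style frequency estimate for a single complex exponential.

    x: complex samples; m: autocorrelation-matrix dimension.
    Returns the frequency in [0, 0.5) maximizing the pseudospectrum.
    """
    n = len(x)
    # Sample autocorrelation matrix R (m x m) from sliding windows.
    R = [[0j] * m for _ in range(m)]
    for k in range(n - m + 1):
        w = x[k:k + m]
        for i in range(m):
            for j in range(m):
                R[i][j] += w[i] * w[j].conjugate()
    # Dominant (signal-subspace) eigenvector via power iteration.
    v = [1.0 + 0j] * m
    for _ in range(200):
        u = [sum(R[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(abs(c) ** 2 for c in u) ** 0.5
        v = [c / norm for c in u]
    best_f, best_p = 0.0, -1.0
    for g in range(grid):
        f = 0.5 * g / grid
        e = [cmath.exp(2j * cmath.pi * f * i) for i in range(m)]
        # e^H (I - v v^H) e: steering-vector energy in the noise subspace.
        dot = sum(v[i].conjugate() * e[i] for i in range(m))
        denom = m - abs(dot) ** 2
        p = 1.0 / max(denom, 1e-12)
        if p > best_p:
            best_f, best_p = f, p
    return best_f
```

With several sinusoids one would keep a higher-dimensional signal subspace; power iteration then gives way to full eigendecomposition or the adaptive subspace trackers the abstract compares.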
Adaptive reconnection-based arbitrary Lagrangian Eulerian method
Bo, Wurigen; Shashkov, Mikhail
2015-07-21
We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.
Method and system for environmentally adaptive fault tolerant computing
NASA Technical Reports Server (NTRS)
Copenhaver, Jason L. (Inventor); Ramos, Jeremy (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)
2010-01-01
A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
Workshop on adaptive grid methods for fusion plasmas
Wiley, J.C.
1995-07-01
The author describes a general `hp` finite element method with adaptive grids. The code was based on the work of Oden et al. The term `hp` refers to the method of spatial refinement (h) in conjunction with the order of the polynomials used in the finite element discretization (p). This finite element code seems to handle well the different mesh grid sizes occurring between abutted grids with different resolutions.
Solving Chemical Master Equations by an Adaptive Wavelet Method
Jahnke, Tobias; Galan, Steffen
2008-09-01
Solving chemical master equations is notoriously difficult due to the tremendous number of degrees of freedom. We present a new numerical method which efficiently reduces the size of the problem in an adaptive way. The method is based on a sparse wavelet representation and an algorithm which, in each time step, detects the essential degrees of freedom required to approximate the solution up to the desired accuracy.
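The core idea of detecting and keeping only the essential degrees of freedom each step can be illustrated with a much cruder stand-in than the authors' sparse wavelet representation: a dictionary-based solver for a birth-death master equation that drops states whose probability falls below a tolerance after each explicit Euler step (the reaction model, rates, and tolerance are illustrative assumptions):

```python
def cme_step(p, rates, dt, tol=1e-9):
    """One explicit-Euler step of a birth-death chemical master equation,
    followed by adaptive truncation of negligible states.

    p: dict {copy number: probability}; rates: (birth rate, death rate
    per molecule).  Only states above tol survive, so the active state
    space adapts to wherever probability mass currently lives.
    """
    birth, death = rates
    dp = {}
    for n, pn in p.items():
        dp[n] = dp.get(n, 0.0) - (birth + death * n) * pn   # outflow
        dp[n + 1] = dp.get(n + 1, 0.0) + birth * pn          # birth inflow
        if n > 0:
            dp[n - 1] = dp.get(n - 1, 0.0) + death * n * pn  # death inflow
    q = {}
    for n in set(p) | set(dp):
        v = p.get(n, 0.0) + dt * dp.get(n, 0.0)
        if v > tol:  # adaptively drop negligible degrees of freedom
            q[n] = v
    s = sum(q.values())  # renormalize the truncated distribution
    return {n: v / s for n, v in q.items()}
```

For this model the stationary distribution is Poisson with mean birth/death, so the adaptive support stays a narrow band around that mean rather than the full state space.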
ICASE/LaRC Workshop on Adaptive Grid Methods
NASA Technical Reports Server (NTRS)
South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)
1995-01-01
Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.
Lafaye, Murielle; Sall, Baba; Ndiaye, Youssou; Vignolles, Cecile; Tourre, Yves M; Borchi, François; Soubeyroux, Jean-Michel; Diallo, Mawlouth; Dia, Ibrahima; Ba, Yamar; Faye, Abdoulaye; Ba, Taibou; Ka, Alioune; Ndione, Jacques-André; Gauthier, Hélène; Lacaux, Jean-Pierre
2013-11-01
The multi-disciplinary French project "Adaptation à la Fièvre de la Vallée du Rift" (AdaptFVR) has concluded a 10-year constructive interaction between many scientists/partners involved with the Rift Valley fever (RVF) dynamics in Senegal. The three targeted objectives reached were (i) to produce--in near real-time--validated risk maps for parked livestock exposed to bites from RVF mosquito vectors; (ii) to assess the impacts on RVF vectors from climate variability at different time-scales including climate change; and (iii) to isolate processes improving local livestock management and animal health. Based on these results, concrete, pro-active adaptive actions were taken on site, which led to the establishment of a RVF early warning system (RVFews). Bulletins were released in a timely fashion during the project, tested and validated in close collaboration with the local populations, i.e. the primary users. Among the strategic, adaptive methods developed, conducted and evaluated in terms of cost/benefit analyses are the larvicide campaigns and the coupled bio-mathematical (hydrological and entomological) model technologies, which are being transferred to the staff of the "Centre de Suivi Ecologique" (CSE) in Dakar during 2013. Based on the results from the AdaptFVR project, other projects with similar conceptual and modelling approaches are currently being implemented, e.g. for urban and rural malaria and dengue in the French Antilles. PMID:24258902
An Adaptive Cross-Architecture Combination Method for Graph Traversal
You, Yang; Song, Shuaiwen; Kerbyson, Darren J.
2014-06-18
Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
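The top-down/bottom-up combination itself is easy to sketch. The toy below uses a fixed frontier-size heuristic in place of the paper's regression-based runtime predictor (the `alpha` parameter and the switching rule are illustrative assumptions):

```python
def hybrid_bfs(adj, source, alpha=2.0):
    """BFS that switches between top-down and bottom-up sweeps.

    adj: list of adjacency lists; returns BFS levels (-1 = unreached).
    Small frontiers expand top-down; large frontiers switch to
    bottom-up, where unvisited vertices probe for a frontier parent.
    """
    n = len(adj)
    level = [-1] * n
    level[source] = 0
    frontier = {source}
    depth = 0
    while frontier:
        depth += 1
        unvisited = n - sum(l >= 0 for l in level)
        if len(frontier) * alpha < unvisited:
            # Top-down: scan edges out of the (small) frontier.
            nxt = {w for v in frontier for w in adj[v] if level[w] < 0}
        else:
            # Bottom-up: each unvisited vertex looks for a frontier parent
            # and stops at the first one it finds.
            nxt = {v for v in range(n) if level[v] < 0
                   and any(w in frontier for w in adj[v])}
        for v in nxt:
            level[v] = depth
        frontier = nxt
    return level
```

Both sweeps produce the same levels; only the cost differs, which is why a predictor for the switching point pays off on scale-free graphs whose frontier balloons in a few hops.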
An adaptive over/under data combination method
NASA Astrophysics Data System (ADS)
He, Jian-Wei; Lu, Wen-Kai; Li, Zhong-Xiao
2013-12-01
The traditional "dephase and sum" algorithms for over/under data combination estimate the ghost operator by assuming a calm sea surface. However, the real sea surface is typically rough, which invalidates the calm sea surface assumption. Hence, the traditional "dephase and sum" algorithms might produce poor-quality results in rough sea conditions. We propose an adaptive over/under data combination method, which adaptively estimates the amplitude spectrum of the ghost operator from the over/under data, and then over/under data combinations are implemented using the estimated ghost operators. A synthetic single shot gather is used to verify the performance of the proposed method in rough sea surface conditions and a real triple over/under dataset demonstrates the method performance.
An Adaptive Derivative-based Method for Function Approximation
Tong, C
2008-10-22
To alleviate the high computational cost of large-scale multi-physics simulations to study the relationships between the model parameters and the outputs of interest, response surfaces are often used in place of the exact functional relationships. This report explores a method for response surface construction using adaptive sampling guided by derivative information at each selected sample point. This method is especially suitable for applications that can readily provide added information such as gradients and Hessians with respect to the input parameters under study. When higher-order terms (third order and above) in the Taylor series are negligible, the approximation error for this method can be controlled. We present details of the adaptive algorithm and numerical results on a few test problems.
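A minimal 1-D cartoon of derivative-guided adaptive sampling (illustrative, not the report's algorithm): each new sample is placed where first-order Taylor extrapolations from neighboring sample points disagree most, a proxy for the locally dominant second-order error term:

```python
import math

def adaptive_surrogate_1d(f, df, lo, hi, n_samples=6):
    """Greedily choose sample locations for a response surface of f.

    f, df: function and its derivative (the 'added information' the
    application provides).  Each round, the candidate midpoint where
    tangent-line predictions from the two bracketing samples disagree
    most is added to the sample set.
    """
    xs = [lo, hi]
    for _ in range(n_samples - 2):
        best_x, best_gap = None, -1.0
        pts = sorted(xs)
        for a, b in zip(pts, pts[1:]):
            mid = 0.5 * (a + b)
            # First-order Taylor predictions from each endpoint.
            pa = f(a) + df(a) * (mid - a)
            pb = f(b) + df(b) * (mid - b)
            gap = abs(pa - pb)  # large gap => large local curvature term
            if gap > best_gap:
                best_gap, best_x = gap, mid
        xs.append(best_x)
    return sorted(xs)
```

On f = exp over [0, 2] the gap scales like f'''(x) h^3, so samples cluster toward the right end where the function bends fastest, exactly the behavior an error-controlled surrogate wants.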
Development of a dynamically adaptive grid method for multidimensional problems
NASA Astrophysics Data System (ADS)
Holcomb, J. E.; Hindman, R. G.
1984-06-01
An approach to solution adaptive grid generation for use with finite difference techniques, previously demonstrated on model problems in one space dimension, has been extended to multidimensional problems. The method is based on the popular elliptic steady grid generators, but is 'dynamically' adaptive in the sense that a grid is maintained at all times satisfying the steady grid law driven by a solution-dependent source term. Testing has been carried out on Burgers' equation in one and two space dimensions. Results appear encouraging both for inviscid wave propagation cases and viscous boundary layer cases, suggesting that application to practical flow problems is now possible. In the course of the work, obstacles relating to grid correction, smoothing of the solution, and elliptic equation solvers have been largely overcome. Concern remains, however, about grid skewness, boundary layer resolution and the need for implicit integration methods. Also, the method in 3-D is expected to be very demanding of computer resources.
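The steady grid law driven by a solution-dependent source can be illustrated in one dimension. The sketch below (a 1-D cartoon, not the paper's multidimensional elliptic generator) relaxes toward the equidistribution law (w x_xi)_xi = 0 with an arc-length monitor, so grid points migrate into the high-gradient region:

```python
import math

def equidistribute(df, n=21, iters=2000):
    """Gauss-Seidel relaxation of the 1-D steady grid law on [0, 1].

    df: derivative of the solution being resolved; the monitor
    w = sqrt(1 + df(x)^2) attracts grid points to steep gradients.
    Each update is a w-weighted average of the two neighbors, the
    discrete form of (w x_xi)_xi = 0.
    """
    xs = [i / (n - 1) for i in range(n)]
    for _ in range(iters):
        for i in range(1, n - 1):
            wl = math.sqrt(1.0 + df(0.5 * (xs[i - 1] + xs[i])) ** 2)
            wr = math.sqrt(1.0 + df(0.5 * (xs[i] + xs[i + 1])) ** 2)
            xs[i] = (wl * xs[i - 1] + wr * xs[i + 1]) / (wl + wr)
    return xs
```

For a tanh-like front centered at x = 0.5, most of the arc length, and hence most of the grid points, ends up inside the transition layer; the multidimensional analogue replaces this scalar relaxation with the elliptic grid generators discussed in the abstract.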
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron
1998-12-08
Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
Advanced numerical methods in mesh generation and mesh adaptation
Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A
2010-01-01
Numerical solution of partial differential equations requires appropriate meshes, efficient solvers and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to build simplicial meshes efficiently. First, the advancing front technique (AFT) is combined with the incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an unattainable CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology. It significantly improves the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider the convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge
Kongsager, Rico; Locatelli, Bruno; Chazarin, Florie
2016-02-01
Adaptation and mitigation share the ultimate purpose of reducing climate change impacts. However, they tend to be considered separately in projects and policies because of their different objectives and scales. Agriculture and forestry are related to both adaptation and mitigation: they contribute to greenhouse gas emissions and removals, are vulnerable to climate variations, and form part of adaptive strategies for rural livelihoods. We assessed how climate change project design documents (PDDs) considered a joint contribution to adaptation and mitigation in forestry and agriculture in the tropics, by analyzing 201 PDDs from adaptation funds, mitigation instruments, and project standards [e.g., climate community and biodiversity (CCB)]. We analyzed whether PDDs established for one goal reported an explicit contribution to the other (i.e., whether mitigation PDDs contributed to adaptation and vice versa). We also examined whether the proposed activities or expected outcomes allowed for potential contributions to the two goals. Despite the separation between the two goals in international and national institutions, 37% of the PDDs explicitly mentioned a contribution to the other objective, although only half of those substantiated it. In addition, most adaptation (90%) and all mitigation PDDs could potentially contribute at least partially to the other goal. Some adaptation project developers were interested in mitigation for the prospect of carbon funding, whereas mitigation project developers integrated adaptation to achieve greater long-term sustainability or to attain CCB certification. International and national institutions can provide incentives for projects to harness synergies and avoid trade-offs between adaptation and mitigation. PMID:26306792
Methods for prismatic/tetrahedral grid generation and adaptation
NASA Astrophysics Data System (ADS)
Kallinderis, Y.
1995-10-01
The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.
Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.
2008-01-01
This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.
Vortical Flow Prediction Using an Adaptive Unstructured Grid Method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2003-01-01
A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.
Vortical Flow Prediction Using an Adaptive Unstructured Grid Method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2001-01-01
A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.
Adaptive θ-methods for pricing American options
NASA Astrophysics Data System (ADS)
Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran
2008-12-01
We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step, thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
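A projected theta-scheme gives the flavor of such methods. Note this plain projection onto the payoff differs from the abstract's linearly implicit, positivity-preserving formulation; the grid sizes, parameters, and boundary handling below are illustrative assumptions:

```python
def american_put_theta(K=100.0, r=0.05, sigma=0.2, T=1.0,
                       s_max=300.0, m=150, n=200, theta=0.5):
    """Projected theta-scheme for an American put on one asset.

    Finite differences in S, theta-weighting in time, and an explicit
    projection V = max(V, payoff) after each step to enforce the
    early-exercise constraint.  Returns the grid S and values V at t=0.
    """
    ds, dt = s_max / m, T / n
    S = [i * ds for i in range(m + 1)]
    payoff = [max(K - s, 0.0) for s in S]
    V = payoff[:]
    # dt * L at node i, where L V = 0.5 s^2 S^2 V_SS + r S V_S - r V.
    a = [0.5 * dt * (sigma**2 * i**2 - r * i) for i in range(m + 1)]
    b = [-dt * (sigma**2 * i**2 + r) for i in range(m + 1)]
    c = [0.5 * dt * (sigma**2 * i**2 + r * i) for i in range(m + 1)]
    for _ in range(n):
        # Explicit part: (I + (1 - theta) dt L) V on interior nodes.
        rhs = [V[i] + (1 - theta) * (a[i] * V[i - 1] + b[i] * V[i]
                                     + c[i] * V[i + 1])
               for i in range(1, m)]
        # Implicit part: Thomas algorithm for (I - theta dt L).
        lower = [-theta * a[i] for i in range(1, m)]
        diag = [1 - theta * b[i] for i in range(1, m)]
        upper = [-theta * c[i] for i in range(1, m)]
        rhs[0] += theta * a[1] * K          # boundary V(0, t) = K
        for i in range(1, m - 1):
            w = lower[i] / diag[i - 1]
            diag[i] -= w * upper[i - 1]
            rhs[i] -= w * rhs[i - 1]
        x = [0.0] * (m - 1)
        x[-1] = rhs[-1] / diag[-1]
        for i in range(m - 3, -1, -1):
            x[i] = (rhs[i] - upper[i] * x[i + 1]) / diag[i]
        V = [K] + x + [0.0]                 # V(s_max, t) = 0
        V = [max(v, p) for v, p in zip(V, payoff)]  # early exercise
    return S, V
```

theta = 0.5 gives Crank-Nicolson; theta = 1 gives fully implicit. The projection keeps the solution above the payoff everywhere, the discrete positivity constraint the abstract analyzes in a sharper, linearly implicit form.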
Space-time adaptive numerical methods for geophysical applications.
Castro, C E; Käser, M; Toro, E F
2009-11-28
In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems, with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher-order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen in a locally adaptive manner, such that the solution is evolved explicitly in time with an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves, and the new approach is compared with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost. PMID:19840984
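The payoff of local time stepping is easy to quantify: each element advances with its own CFL-limited step instead of the global minimum. A toy cost comparison, with invented element sizes and wave speeds (not from the paper):

```python
import numpy as np

# hypothetical unstructured mesh: element diameters and local wave speeds
h = np.array([0.05, 0.05, 0.2, 0.4, 0.4, 0.8])
c = np.array([2.0, 1.0, 1.0, 0.5, 1.0, 0.5])
cfl, T = 0.9, 10.0

dt_local = cfl * h / c                 # stability-limited step per element
dt_global = dt_local.min()             # classical global time stepping

# element updates needed to reach time T under each strategy
work_global = len(h) * T / dt_global
work_local = np.sum(T / dt_local)
speedup = work_global / work_local
```

The more the mesh mixes small fast elements with large slow ones, the larger the speedup factor becomes.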
Robust flicker evaluation method for low power adaptive dimming LCDs
NASA Astrophysics Data System (ADS)
Kim, Seul-Ki; Song, Seok-Jeong; Nam, Hyoungsik
2015-05-01
This paper describes a robust dimming-flicker evaluation method for adaptive dimming algorithms in low power liquid crystal displays (LCDs). While the previous methods use sum of square difference (SSD) values without excluding the image sequence information, the proposed modified SSD (mSSD) values are obtained only with the dimming flicker effects by making use of differential images. The proposed scheme is verified for eight dimming configurations of two dimming level selection methods and four temporal filters over three test videos. Furthermore, a new figure of merit is introduced to cover the dimming flicker as well as image qualities and power consumption.
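One plausible reading of the differential-image idea (the exact formula is not given in the abstract, so the expression below is an assumption): difference consecutive input and output frames so that scene motion common to both cancels, leaving only the dimming-induced temporal change.

```python
import numpy as np

def mssd(frames_in, frames_out):
    """Modified SSD on differential images: temporal changes present in
    the source video cancel, so only dimming flicker contributes."""
    d_in = np.diff(frames_in.astype(float), axis=0)
    d_out = np.diff(frames_out.astype(float), axis=0)
    return ((d_out - d_in) ** 2).sum(axis=(1, 2))

# static scene displayed with an alternating backlight dimming pattern
video = np.full((6, 4, 4), 200.0)
gains = np.array([1.0, 0.8, 1.0, 0.8, 1.0, 0.8])
dimmed = video * gains[:, None, None]
```

For an undimmed sequence the metric is identically zero, while the alternating gains above register as flicker.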
ERIC Educational Resources Information Center
Massachusetts Inst. of Tech., Cambridge. Dept. of Urban Studies and Planning.
The report of Project ADAPT (Aerospace and Defense Adaptation to Public Technology), describes the design, execution, and forthcoming evaluation of the program. The program's objective was to demonstrate the feasibility of redeploying surplus technical manpower into public service at State and local levels of government. The development of the…
Adaptive domain decomposition methods for advection-diffusion problems
Carlenzoli, C.; Quarteroni, A.
1995-12-31
Domain decomposition methods can perform poorly on advection-diffusion equations if diffusion is dominated by advection. Indeed, the hyperbolic part of the equations could affect the behavior of iterative schemes among subdomains, slowing down their rate of convergence dramatically. Taking into account the direction of the characteristic lines, we introduce suitable adaptive algorithms which are stable with respect to the magnitude of the convective field in the equations and very effective on boundary value problems.
Children's Ideas about Animal Adaptations: An Action Research Project
ERIC Educational Resources Information Center
Endreny, Anna Henderson
2006-01-01
In this paper, I describe the action research I conducted in my third-grade science classrooms over the course of two years. In order to gain an understanding of my third-grade students' ideas about animal adaptations and how the teaching of a unit on crayfish influenced these ideas, I used clinical interviews, observations, and written…
NASA Astrophysics Data System (ADS)
Domingues, Margarete O.; Gomes, Anna Karina F.; Mendes, Odim; Schneider, Kai
2013-10-01
We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with the mixed hyperbolic-parabolic correction type is used to control the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and numerical divergences of the magnetic field are reported and the accuracy of the adaptive computations is assessed by comparing with the available exact solution. This work was supported by the contract SiCoMHD (ANR-Blanc 2011-045).
An adaptive unsupervised hyperspectral classification method based on Gaussian distribution
NASA Astrophysics Data System (ADS)
Yue, Jiang; Wu, Jing-wei; Zhang, Yi; Bai, Lian-fa
2014-11-01
To achieve adaptive unsupervised clustering with high precision, this paper proposes a method that fits the inter-class similarity and the noise distribution with Gaussian distributions and then determines the automatic segmentation threshold from the fitting result. First, based on the similarity measure of the spectral curve, the method assumes that the target and the background are both Gaussian distributed; the distribution characteristics are obtained by fitting the similarity measures of minimum related windows and center pixels with a Gaussian function, and the adaptive threshold is then derived. Second, the pixel minimum related windows are used to merge adjacent similar pixels into picture-blocks, completing the dimensionality reduction and realizing the unsupervised classification. AVIRIS data and a set of hyperspectral data we collected are used to evaluate the performance of the proposed method. Experimental results show that the proposed algorithm not only achieves adaptive thresholding but also outperforms K-MEANS and ISODATA in classification accuracy, edge recognition and robustness.
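The thresholding step can be sketched as follows: treat the similarity scores of background pixels as Gaussian noise, fit by maximum likelihood (sample mean and standard deviation), and cut at mu + k*sigma. The synthetic scores and the choice k = 3 are illustrative, not taken from the paper.

```python
import numpy as np

def adaptive_threshold(similarities, k=3.0):
    """Fit a single Gaussian to the similarity scores and place the
    segmentation cut k standard deviations above the mean."""
    mu, sigma = similarities.mean(), similarities.std()
    return mu + k * sigma

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, 5000)   # background similarity scores
targets = rng.normal(8.0, 0.5, 50)        # targets sit far in the tail
thr = adaptive_threshold(background)
```

Because the cut adapts to the fitted background statistics, the same code works whether the similarity measure is a spectral angle, a correlation, or any other score with roughly Gaussian background behaviour.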
A New Online Calibration Method for Multidimensional Computerized Adaptive Testing.
Chen, Ping; Wang, Chun
2016-09-01
Multidimensional-Method A (M-Method A) has been proposed as an efficient and effective online calibration method for multidimensional computerized adaptive testing (MCAT) (Chen & Xin, Paper presented at the 78th Meeting of the Psychometric Society, Arnhem, The Netherlands, 2013). However, a key assumption of M-Method A is that it treats person parameter estimates as their true values, thus this method might yield erroneous item calibration when person parameter estimates contain non-ignorable measurement errors. To improve the performance of M-Method A, this paper proposes a new MCAT online calibration method, namely, the full functional MLE-M-Method A (FFMLE-M-Method A). This new method combines the full functional MLE (Jones & Jin in Psychometrika 59:59-75, 1994; Stefanski & Carroll in Annals of Statistics 13:1335-1351, 1985) with the original M-Method A in an effort to correct for the estimation error of ability vector that might otherwise adversely affect the precision of item calibration. Two correction schemes are also proposed when implementing the new method. A simulation study was conducted to show that the new method generated more accurate item parameter estimation than the original M-Method A in almost all conditions. PMID:26608960
Projection preconditioning for Lanczos-type methods
Bielawski, S.S.; Mulyarchik, S.G.; Popov, A.V.
1996-12-31
We show how auxiliary subspaces and related projectors may be used for preconditioning nonsymmetric systems of linear equations. It is shown that the system preconditioned in this way (or projected) is better conditioned than the original system (at least if the coefficient matrix of the system to be solved is symmetrizable). Two approaches for solving the projected system are outlined. The first implies straightforward computation of the projected matrix and subsequent use of a direct or iterative method. The second approach is projection preconditioning of a conjugate gradient-type solver. The latter approach is developed here in the context of biconjugate gradient iteration and some related Lanczos-type algorithms. Some possible particular choices of auxiliary subspaces are discussed. It is shown that one of them is equivalent to using colorings. Some results of numerical experiments are reported.
A novel adaptive force control method for IPMC manipulation
NASA Astrophysics Data System (ADS)
Hao, Lina; Sun, Zhiyong; Li, Zhi; Su, Yunquan; Gao, Jianchao
2012-07-01
IPMC is a type of electro-active polymer material, also called artificial muscle, which can generate a relatively large deformation under a relatively low input voltage (generally speaking, less than 5 V), and can be operated in a water environment. Due to these advantages, IPMC can be used in many fields such as biomimetics, service robots, bio-manipulation, etc. Until now, most existing methods for IPMC manipulation have been displacement control rather than direct force control; however, under most conditions, the success rate of manipulations of tiny fragile objects is limited by the contact force, for example when using an IPMC gripper to fix cells. Like most EAPs, a creep phenomenon exists in IPMC, whereby the generated force changes with time, and the creep model is influenced by changes in water content or other environmental factors, so a proper force control method is urgently needed. This paper presents a novel adaptive force control method (AIPOF control: adaptive integral periodic output feedback control), based on a creep model whose parameters are obtained using the FRLS on-line identification method. The AIPOF control method can achieve an arbitrary pole configuration as long as the plant is controllable and observable. This paper also designs POF and IPOF controllers to compare their test results. Simulations and experiments of micro-force-tracking tests are carried out, with results confirming that the proposed control method is viable.
Computerized adaptive control weld skate with CCTV weld guidance project
NASA Technical Reports Server (NTRS)
Wall, W. A.
1976-01-01
This report summarizes progress of the automatic computerized weld skate development portion of the Computerized Weld Skate with Closed Circuit Television (CCTV) Arc Guidance Project. The main goal of the project is to develop an automatic welding skate demonstration model equipped with CCTV weld guidance. The three main goals of the overall project are to: (1) develop a demonstration model computerized weld skate system, (2) develop a demonstration model automatic CCTV guidance system, and (3) integrate the two systems into a demonstration model of computerized weld skate with CCTV weld guidance for welding contoured parts.
Low Temperature Shape Memory Alloys for Adaptive, Autonomous Systems Project
NASA Technical Reports Server (NTRS)
Falker, John; Zeitlin, Nancy; Williams, Martha; Benafan, Othmane; Fesmire, James
2015-01-01
The objective of this joint activity between Kennedy Space Center (KSC) and Glenn Research Center (GRC) is to develop and evaluate the applicability of 2-way SMAs in proof-of-concept, low-temperature adaptive autonomous systems. As part of this low technology readiness level (TRL) activity, we will develop and train novel low-temperature, 2-way shape memory alloys (SMAs) with actuation temperatures ranging from 0 C to 150 C. These experimental alloys will also be preliminarily tested to evaluate their performance parameters and transformation (actuation) temperatures in low-temperature or cryogenic adaptive proof-of-concept systems. The challenge will be in the development, design, and training of the alloys for 2-way actuation at those temperatures.
Investigation of the Multiple Method Adaptive Control (MMAC) method for flight control systems
NASA Technical Reports Server (NTRS)
Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.
1979-01-01
The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.
Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G
2013-05-20
A Bayesian statistical model and estimation methodology based on forward projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a common and highly multi-typed sexually transmitted infection, with more than 100 types currently known. The two types studied in this paper, types 6 and 11, cause about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to estimate jointly the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data.
Enrollment Projections--Factors and Methods.
ERIC Educational Resources Information Center
Glass, Thomas E.; Fulmer, Connie L.
1991-01-01
Outlines the importance of enrollment projections for informed decision making in educational organizations. Discusses births, migration, and holding power as the three major factors that affect school populations. Describes in detail the cohort survival ratio technique, presents a sample calculation, and mentions alternative methods. (11…
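The cohort survival ratio technique mentioned above is simple enough to show in full: the ratio of this year's grade-g+1 enrollment to last year's grade-g enrollment is averaged over recent years and applied to the current grade-g count. The enrollment numbers below are invented for illustration.

```python
# historical enrollments for a hypothetical district
grade1 = {2019: 100, 2020: 110, 2021: 120}
grade2 = {2020: 95, 2021: 104.5}

# survival ratios: grade-2 enrollment over the prior year's grade-1
ratios = [grade2[y] / grade1[y - 1] for y in (2020, 2021)]
avg_ratio = sum(ratios) / len(ratios)

# projected grade-2 enrollment for 2022
projection = avg_ratio * grade1[2021]
```

Births feed the entering grade, while migration and holding power are absorbed into the survival ratios themselves, which is why the method works with enrollment counts alone.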
Extended abstract: Partial row projection methods
Bramley, R.; Lee, Y.
1996-12-31
Accelerated row projection (RP) algorithms for solving linear systems Ax = b are a class of iterative methods which in theory converge for any nonsingular matrix. RP methods are by definition ones that require finding the orthogonal projection of vectors onto the null space of block rows of the matrix. The Kaczmarz form, considered here because it has a better spectrum for iterative methods, has an iteration matrix that is the product of such projectors. Because the straightforward Kaczmarz method converges slowly for practical problems, an outer CG acceleration is typically applied. Definiteness, symmetry, or localization of the eigenvalues of the coefficient matrix is not required. In spite of this robustness, work has generally been limited to structured systems such as block tridiagonal matrices because, unlike many iterative solvers, RP methods cannot be implemented by simply supplying a matrix-vector multiplication routine. Finding the orthogonal projection of vectors onto the null space of block rows of the matrix in practice requires accessing the actual entries in the matrix. This report introduces a new partial RP algorithm which retains advantages of the RP methods.
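The Kaczmarz iteration underlying RP methods is compact enough to state directly: each step orthogonally projects the iterate onto the hyperplane defined by one row. This sketch uses single rows rather than the block rows discussed above, and omits the CG acceleration.

```python
import numpy as np

def kaczmarz(A, b, sweeps=500):
    """Cyclic Kaczmarz iteration: orthogonally project the iterate onto
    the hyperplane a_i^T x = b_i of each row in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x
```

Note that the update needs the actual row entries a_i, which is exactly why, as the abstract observes, RP methods cannot be driven by a matrix-vector multiplication routine alone.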
Project ADAPT: A Program to Assess Depression and Provide Proactive Treatment in Rural Areas
ERIC Educational Resources Information Center
Luptak, Marilyn; Kaas, Merrie J.; Artz, Margaret; McCarthy, Teresa
2008-01-01
Purpose: We describe and evaluate a project designed to pilot test an evidence-based clinical intervention for assessing and treating depression in older adults in rural primary care clinics. Project ADAPT--Assuring Depression Assessment and Proactive Treatment--utilized existing primary care resources to overcome barriers to sustainability…
Adaptive methods for nonlinear structural dynamics and crashworthiness analysis
NASA Technical Reports Server (NTRS)
Belytschko, Ted
1993-01-01
The objective is to describe three research thrusts in crashworthiness analysis: adaptivity; mixed time integration, or subcycling, in which different timesteps are used for different parts of the mesh in explicit methods; and methods for contact-impact which are highly vectorizable. The techniques are being developed to improve the accuracy of calculations, ease-of-use of crashworthiness programs, and the speed of calculations. The latter is still of importance because crashworthiness calculations are often made with models of 20,000 to 50,000 elements using explicit time integration and require on the order of 20 to 100 hours on current supercomputers. The methodologies are briefly reviewed and then some example calculations employing these methods are described. The methods are also of value to other nonlinear transient computations.
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
Planetary gearbox fault diagnosis using an adaptive stochastic resonance method
NASA Astrophysics Data System (ADS)
Lei, Yaguo; Han, Dong; Lin, Jing; He, Zhengjia
2013-07-01
Planetary gearboxes are widely used in aerospace, automotive and heavy industry applications due to their large transmission ratio, strong load-bearing capacity and high transmission efficiency. The tough operating conditions of heavy duty and intensive impact load may cause gear tooth damage such as fatigue cracks and missing teeth. The challenging issues in fault diagnosis of planetary gearboxes include selection of sensitive measurement locations, investigation of vibration transmission paths and weak feature extraction. One of these issues is how to effectively discover the weak characteristics of faulty components from noisy signals. To address this issue in fault diagnosis of planetary gearboxes, an adaptive stochastic resonance (ASR) method is proposed in this paper. The ASR method utilizes the optimization ability of ant colony algorithms and adaptively realizes the optimal stochastic resonance system matching the input signals. Using the ASR method, the noise may be weakened and weak characteristics highlighted, and therefore the faults can be diagnosed accurately. A planetary gearbox test rig is established and experiments with sun gear faults, including a chipped tooth and a missing tooth, are conducted. The vibration signals are collected under the loaded condition at various motor speeds. The proposed method is used to process the collected signals, and the results of feature extraction and fault diagnosis demonstrate its effectiveness.
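The core of any stochastic resonance scheme is driving a bistable system dx = (a·x − b·x³ + s(t)) dt + σ dW and tuning (a, b) so that the response at the signal frequency is maximized. The sketch below stands in a plain grid search for the paper's ant colony optimization, and all parameter values are invented.

```python
import numpy as np

def bistable_response(a, b, signal, sigma, dt, rng):
    """Euler-Maruyama integration of dx = (a*x - b*x^3 + s(t))dt + sigma dW."""
    x, out = 0.0, np.empty(len(signal))
    for k, s in enumerate(signal):
        x += dt * (a * x - b * x**3 + s) + sigma * np.sqrt(dt) * rng.normal()
        out[k] = x
    return out

dt, n, f0 = 0.05, 4000, 0.1
t = dt * np.arange(n)
signal = 0.3 * np.sin(2 * np.pi * f0 * t)      # weak periodic drive
k0 = int(round(f0 * n * dt))                   # FFT bin of the drive frequency

def score(a, b):
    out = bistable_response(a, b, signal, 0.5, dt, np.random.default_rng(0))
    return np.abs(np.fft.rfft(out))[k0]

# stand-in for the ant colony search: brute force over a small grid
best = max(((a, b) for a in (0.5, 1.0, 2.0) for b in (0.5, 1.0, 2.0)),
           key=lambda ab: score(*ab))
```

The same objective, spectral amplitude at the characteristic fault frequency, is what an adaptive optimizer would maximize when matching the resonance system to measured gearbox vibration.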
Parallel adaptive mesh refinement within the PUMAA3D Project
NASA Technical Reports Server (NTRS)
Freitag, Lori; Jones, Mark; Plassmann, Paul
1995-01-01
To enable the solution of large-scale applications on distributed memory architectures, we are designing and implementing parallel algorithms for the fundamental tasks of unstructured mesh computation. In this paper, we discuss efficient algorithms developed for two of these tasks: parallel adaptive mesh refinement and mesh partitioning. The algorithms are discussed in the context of two-dimensional finite element solution on triangular meshes, but are suitable for use with a variety of element types and with h- or p-refinement. Results demonstrating the scalability and efficiency of the refinement algorithm and the quality of the mesh partitioning are presented for several test problems on the Intel DELTA.
Projection methods for quantum channel construction
NASA Astrophysics Data System (ADS)
Drusvyatskiy, Dmitriy; Li, Chi-Kwong; Pelejo, Diane Christine; Voronin, Yuen-Lam; Wolkowicz, Henry
2015-08-01
We consider the problem of constructing quantum channels, if they exist, that transform a given set of quantum states into another such set. In other words, we must find a completely positive linear map, if it exists, that maps a given set of density matrices to another given set of density matrices, possibly of different dimension. Using the theory of completely positive linear maps, one can formulate the problem as an instance of a positive semidefinite feasibility problem with highly structured constraints. The nature of the constraints makes projection-based algorithms very appealing when the number of variables is huge and standard interior-point methods for semidefinite programming are not applicable. We provide empirical evidence to this effect. We moreover present heuristics for finding both high-rank and low-rank solutions. Our experiments are based on the method of alternating projections and the Douglas-Rachford reflection method.
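Alternating projections is easy to illustrate on the density-matrix constraints themselves: project onto the positive semidefinite cone (clip negative eigenvalues) and onto the affine set of unit-trace matrices, and alternate. This is a generic convex-feasibility sketch, not the structured channel construction of the paper.

```python
import numpy as np

def proj_psd(X):
    """Project a symmetric matrix onto the PSD cone (clip eigenvalues)."""
    w, V = np.linalg.eigh((X + X.T) / 2)
    return (V * np.clip(w, 0.0, None)) @ V.T

def proj_trace_one(X):
    """Project onto the affine set {trace(X) = 1}."""
    n = X.shape[0]
    return X + (1.0 - np.trace(X)) / n * np.eye(n)

def alternating_projections(X, iters=2000):
    """von Neumann alternating projections toward a unit-trace PSD matrix."""
    for _ in range(iters):
        X = proj_trace_one(proj_psd(X))
    return X
```

Because both sets are convex and their intersection (the density matrices) is nonempty, the iterates converge to a feasible point; the Douglas-Rachford variant replaces each projection with a reflection and averages.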
Spatially-Anisotropic Parallel Adaptive Wavelet Collocation Method
NASA Astrophysics Data System (ADS)
Vasilyev, Oleg V.; Brown-Dymkoski, Eric
2015-11-01
Despite the latest advancements in the development of robust wavelet-based adaptive numerical methodologies to solve partial differential equations, they all suffer from two major "curses": 1) the reliance on a rectangular domain and 2) the "curse of anisotropy" (i.e., homogeneous wavelet refinement and the inability to have a spatially varying aspect ratio of the mesh elements). The new method addresses both of these challenges by utilizing an adaptive anisotropic wavelet transform on curvilinear meshes that can be either algebraically prescribed or calculated on the fly using PDE-based mesh generation. In order to ensure accurate representation of spatial operators in physical space, an additional adaptation on spatial physical coordinates is also performed. It is important to note that when new nodes are added in computational space, the physical coordinates can be approximated by interpolation of the existing solution and additional local iterations to ensure that the solution of coordinate mapping PDEs is converged on the new mesh. In contrast to traditional mesh generation approaches, the cost of adding additional nodes is minimal, mainly due to the localized nature of the iterative mesh generation PDE solver requiring local iterations in the vicinity of newly introduced points. This work was supported by ONR MURI under grant N00014-11-1-069.
The SMART CLUSTER METHOD - adaptive earthquake cluster analysis and declustering
NASA Astrophysics Data System (ADS)
Schaefer, Andreas; Daniell, James; Wenzel, Friedemann
2016-04-01
Earthquake declustering is an essential part of almost any statistical analysis of spatial and temporal properties of seismic activity, with usual applications comprising probabilistic seismic hazard assessments (PSHAs) and earthquake prediction methods. The nature of earthquake clusters and the subsequent declustering of earthquake catalogues plays a crucial role in determining the magnitude-dependent earthquake return period and its respective spatial variation. Various methods have been developed by other researchers to address this issue, with complexity ranging from rather simple statistical window methods to complex epidemic models. This study introduces the smart cluster method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal identification. Hereby, an adaptive search algorithm for data point clusters is adopted. It uses the earthquake density in the spatio-temporal neighbourhood of each event to adjust the search properties. The identified clusters are subsequently analysed to determine directional anisotropy, focussing on a strong correlation along the rupture plane, and the search space is adjusted with respect to directional properties. In the case of rapid subsequent ruptures, like the 1992 Landers sequence or the 2010/2011 Darfield-Christchurch events, an adaptive classification procedure is applied to disassemble subsequent ruptures which may have been grouped into an individual cluster, using near-field searches, support vector machines and temporal splitting. The steering parameters of the search behaviour are linked to local earthquake properties like magnitude of completeness, earthquake density and Gutenberg-Richter parameters. The method is capable of identifying and classifying earthquake clusters in space and time. It is tested and validated using earthquake data from California and New Zealand. As a result of the cluster identification process, each event in
NASA Astrophysics Data System (ADS)
Rumore, D.; Kirshen, P. H.; Susskind, L.
2014-12-01
Despite scientific consensus that the climate is changing, local efforts to prepare for and manage climate change risks remain limited. How can we raise concern about climate change risks and enhance local readiness to adapt to climate change's effects? In this presentation, we will share the lessons learned from the New England Climate Adaptation Project (NECAP), a participatory action research project that tested science-based role-play simulations as a tool for educating the public about climate change risks and simulating collective risk management efforts. NECAP was a 2-year effort involving the Massachusetts Institute of Technology, the Consensus Building Institute, the National Estuarine Research Reserve System, and four coastal New England municipalities. During 2012-2013, the NECAP team produced downscaled climate change projections, a summary risk assessment, and a stakeholder assessment for each partner community. Working with local partners, we used these assessments to create a tailored, science-based role-play simulation for each site. Through a series of workshops in 2013, NECAP engaged between 115 and 170 diverse stakeholders and members of the public in each partner municipality in playing the simulation and holding a follow-up conversation about local climate change risks and possible adaptation strategies. Data were collected through before-and-after surveys administered to all workshop participants, follow-up interviews with 25 percent of workshop participants, public opinion polls conducted before and after our intervention, and meetings with public officials. This presentation will report our research findings and explain how science-based role-play simulations can be used to help communicate local climate change risks and enhance local readiness to adapt.
An adaptive pseudo-spectral method for reaction diffusion problems
NASA Technical Reports Server (NTRS)
Bayliss, A.; Gottlieb, D.; Matkowsky, B. J.; Minkoff, M.
1987-01-01
The spectral interpolation error was considered for both the Chebyshev pseudo-spectral and Galerkin approximations. A family of functionals I_r(u) was developed with the property that the maximum norm of the error is bounded by I_r(u)/J^r, where r is an integer and J is the degree of the polynomial approximation. These functionals are used in the adaptive procedure whereby the problem is dynamically transformed to minimize I_r(u). The number of collocation points is then chosen to maintain a prescribed error bound. The method is illustrated by various examples from combustion problems in one and two dimensions.
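The error-driven choice of the number of collocation points can be mimicked with NumPy's Chebyshev tools: raise the degree until the trailing coefficients, a standard proxy for the interpolation error, fall below a bound. The doubling strategy and tolerance are illustrative, not the functionals I_r(u) of the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def adaptive_chebfit(f, tol=1e-10, max_deg=256):
    """Double the Chebyshev interpolation degree until the trailing
    coefficients, a proxy for the interpolation error, drop below tol."""
    deg = 8
    while deg <= max_deg:
        coef = C.chebinterpolate(f, deg)
        if np.abs(coef[-2:]).max() < tol:
            return coef
        deg *= 2
    raise RuntimeError("tolerance not reached by max_deg")
```

For a smooth function the Chebyshev coefficients decay rapidly, so the loop terminates at a modest degree; a sharp reaction front would instead trigger the coordinate transformation described in the abstract before more points pay off.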
An h-adaptive finite element method for turbulent heat transfer
Carrington, David B.
2009-01-01
A two-equation turbulence closure model (k-ω) using an h-adaptive grid technique and the finite element method (FEM) has been developed to simulate low Mach number flow and heat transfer. These flows are applicable to many flows in engineering and environmental sciences. Of particular interest in the engineering modeling areas are: combustion, solidification, and heat exchanger design. Flows for indoor air quality modeling and atmospheric pollution transport are typical types of environmental flows modeled with this method. The numerical method is based on a hybrid finite element model using an equal-order projection process. The model includes thermal and species transport, localized mesh refinement (h-adaptive) and Petrov-Galerkin weighting for stabilizing the advection. This work develops the continuum model of a two-equation turbulence closure method. The fractional step solution method is stated along with the h-adaptive grid method (Carrington and Pepper, 2002). Solutions are presented for 2D flow over a backward-facing step.
A parallel adaptive method for pseudo-arclength continuation
NASA Astrophysics Data System (ADS)
Aruliah, D. A.; van Veen, L.; Dubitski, A.
2012-10-01
Pseudo-arclength continuation is a well-established method for constructing a numerical curve comprising solutions of a system of nonlinear equations. In many complicated high-dimensional systems, the corrector steps within pseudo-arclength continuation are extremely costly to compute; as a result, the step-length of the preceding prediction step must be adapted carefully to avoid prohibitively many failed steps. We describe the essence of a parallel method for adapting the step-length of pseudo-arclength continuation. Our method employs several predictor-corrector sequences with differing step-lengths running concurrently on distinct processors. Our parallel framework permits intermediate results of correction sequences that have not yet converged to seed new predictor-corrector sequences with various step-lengths; the goal is to amortize the cost of corrector steps to make further progress along the underlying numerical curve. Results from numerical experiments suggest a three-fold speedup is attainable when the continuation curve sought has great topological complexity and the corrector steps require significant processor time.
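A serial predictor-corrector skeleton makes the cost structure clear; the parallel scheme of the paper runs several such sequences with different step-lengths concurrently. The test problem f(x, λ) = x² + λ² − 1 (the unit circle, with a fold at x = 0) is an illustrative stand-in for a high-dimensional system.

```python
import numpy as np

def continue_circle(steps=100, ds=0.1):
    """Pseudo-arclength continuation of f(x, lam) = x^2 + lam^2 - 1 = 0;
    the arclength constraint lets the curve pass the fold at x = 0."""
    pts = [np.array([1.0, 0.0])]              # start on the curve
    tangent = np.array([0.0, 1.0])
    for _ in range(steps):
        y = pts[-1] + ds * tangent            # predictor
        for _ in range(20):                   # Newton corrector
            f = y[0]**2 + y[1]**2 - 1.0
            g = tangent @ (y - pts[-1]) - ds  # arclength condition
            J = np.array([[2.0 * y[0], 2.0 * y[1]], tangent])
            y = y - np.linalg.solve(J, np.array([f, g]))
        step = y - pts[-1]
        tangent = step / np.linalg.norm(step)  # secant tangent update
        pts.append(y)
    return np.array(pts)
```

When the corrector dominates the cost, as in the systems targeted by the paper, each of the 20 Newton solves above is expensive, which is what motivates seeding concurrent correction sequences with different values of ds.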
NASA Astrophysics Data System (ADS)
O'Donnell, T. Kevin; Galat, David L.
2008-01-01
Objective setting, performance measures, and accountability are important components of an adaptive-management approach to river-enhancement programs. Few lessons learned by river-enhancement practitioners in the United States have been documented and disseminated relative to the number of projects implemented. We conducted scripted telephone surveys with river-enhancement project managers and practitioners within the Upper Mississippi River Basin (UMRB) to determine the extent of setting project success criteria, monitoring, evaluation of monitoring data, and data dissemination. Investigation of these elements enabled a determination of those that inhibited adaptive management. Seventy river enhancement projects were surveyed. Only 34% of projects surveyed incorporated a quantified measure of project success. Managers most often relied on geophysical attributes of rivers when setting project success criteria, followed by biological communities. Ninety-one percent of projects that performed monitoring included biologic variables, but the lack of data collection before and after project completion and lack of field-based reference or control sites will make future assessments of ecologic success difficult. Twenty percent of projects that performed monitoring evaluated ≥1 variable but did not disseminate their evaluations outside their organization. Results suggest greater incentives may be required to advance the science of river enhancement. Future river-enhancement programs within the UMRB and elsewhere can increase knowledge gained from individual projects by offering better guidance on setting success criteria before project initiation and evaluation through established monitoring protocols.
Sweep-twist adaptive rotor blade : final project report.
Ashwill, Thomas D.
2010-02-01
Knight & Carver was contracted by Sandia National Laboratories to develop a Sweep Twist Adaptive Rotor (STAR) blade that reduced operating loads, thereby allowing a larger, more productive rotor. The blade design used outer blade sweep to create twist coupling without angled fiber. Knight & Carver successfully designed, fabricated, tested and evaluated STAR prototype blades. Through laboratory and field tests, Knight & Carver showed the STAR blade met the engineering design criteria and economic goals for the program. A STAR prototype was successfully tested in Tehachapi during 2008 and a large data set was collected to support engineering and commercial development of the technology. This report documents the methodology used to develop the STAR blade design and reviews the approach used for laboratory and field testing. The effort demonstrated that STAR technology can provide significantly greater energy capture without higher operating loads on the turbine.
Adaptive grid methods for RLV environment assessment and nozzle analysis
NASA Technical Reports Server (NTRS)
Thornburg, Hugh J.
1996-01-01
Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. To meet these requirements efficiently, a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surfaces, temporally varying geometries, and fluid-structure interaction, this coupling is unavoidable. In other cases the need is to rapidly generate and modify high-quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect; thus, excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaptation. The most widely used involve grid point redistribution, local grid point enrichment/derefinement, or local modification of the actual flow solver. However, the success of any one of these methods ultimately depends on the feature detection algorithm used to determine which solution domain regions require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different types as well as differing intensities, and to adequately address scaling and normalization across blocks. These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation
Second derivatives for approximate spin projection methods
Thompson, Lee M.; Hratchian, Hrant P.
2015-02-07
The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.
New methods and astrophysical applications of adaptive mesh fluid simulations
NASA Astrophysics Data System (ADS)
Wang, Peng
The formation of stars, galaxies, and supermassive black holes is among the most interesting unsolved problems in astrophysics. These problems are highly nonlinear and involve enormous dynamic ranges, so numerical simulations with spatial adaptivity are crucial to understanding the processes involved. In this thesis, we discuss the development and application of adaptive mesh refinement (AMR) multi-physics fluid codes to simulate these nonlinear structure formation problems. To simulate the formation of star clusters, we have developed an AMR magnetohydrodynamics (MHD) code coupled with radiative cooling. We have also developed novel algorithms for sink particle creation, accretion, merging, and outflows, all of which are coupled to the fluid algorithms using operator splitting. With this code, we have been able to perform the first AMR-MHD simulation of star cluster formation over several dynamical times, including sink particle and protostellar outflow feedback. The results demonstrate that protostellar outflows can drive supersonic turbulence in dense clumps and explain the observed slow and inefficient star formation. We also suggest that the global collapse rate is the most important factor controlling massive star accretion rates. On the topic of galaxy formation, we discuss the results of three projects. In the first project, using cosmological AMR hydrodynamics simulations, we found that isolated massive stars still form in cosmic string wakes even though the megaparsec-scale structure has been perturbed significantly by the cosmic strings. In the second project, we calculated the dynamical heating rate in galaxy formation; by balancing this heating rate against the atomic cooling rate, we obtain a critical halo mass that agrees with the results of numerical simulations. This demonstrates that the effect of dynamical heating should be incorporated into future semi-analytic work. In the third project, using our AMR-MHD code coupled with radiative
Methods for the drug effectiveness review project.
McDonagh, Marian S; Jonas, Daniel E; Gartlehner, Gerald; Little, Alison; Peterson, Kim; Carson, Susan; Gibson, Mark; Helfand, Mark
2012-01-01
The Drug Effectiveness Review Project was initiated in 2003 in response to dramatic increases in the cost of pharmaceuticals, which lessened the purchasing power of state Medicaid budgets. A collaborative group of state Medicaid agencies and other organizations formed to commission high-quality comparative effectiveness reviews to inform evidence-based decisions about drugs that would be available to Medicaid recipients. The Project is coordinated by the Center for Evidence-based Policy (CEbP) at Oregon Health & Science University (OHSU), and the systematic reviews are undertaken by the Evidence-based Practice Centers (EPCs) at OHSU and at the University of North Carolina. The reviews adhere to high standards for comparative effectiveness reviews. Because the investigators have direct, regular communication with policy-makers, the reports have direct impact on policy and decision-making, unlike many systematic reviews. The Project was an innovator of methods to involve stakeholders and continues to develop its methods in conducting reviews that are highly relevant to policy-makers. The methods used for selecting topics, developing key questions, searching, determining eligibility of studies, assessing study quality, conducting qualitative and quantitative syntheses, rating the strength of evidence, and summarizing findings are described. In addition, our on-going interactions with the policy-makers that use the reports are described. PMID:22970848
Turbulence profiling methods applied to ESO's adaptive optics facility
NASA Astrophysics Data System (ADS)
Valenzuela, Javier; Béchet, Clémentine; Garcia-Rissmann, Aurea; Gonté, Frédéric; Kolb, Johann; Le Louarn, Miska; Neichel, Benoît; Madec, Pierre-Yves; Guesalaga, Andrés.
2014-07-01
Two algorithms were recently studied for Cn2 profiling from wide-field Adaptive Optics (AO) measurements on GeMS (the Gemini Multi-Conjugate AO system). Both rely on the Slope Detection and Ranging (SLODAR) approach, using spatial covariances of the measurements from various wavefront sensors. The first algorithm estimates the Cn2 profile by applying the truncated least-squares inverse of a matrix modeling the response of slope covariances to turbulent layers at various heights. In the second method, the profile is estimated by deconvolution of these spatial cross-covariances of slopes. We compare these methods in the new configuration of ESO's Adaptive Optics Facility (AOF), a high-order multiple-laser system under integration. For this, we use measurements simulated by ESO's AO cluster. The impact of measurement noise and of the outer scale of atmospheric turbulence is analyzed. The strong influence of the outer scale on the results leads to the development of a new outer-scale fitting step included in each algorithm, which increases the reliability and robustness of the turbulence strength and profile estimations.
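The first profiling approach described above, a truncated least-squares inversion of a forward response matrix, can be sketched generically. In the toy example below the forward matrix `M` mapping layer strengths to covariance responses is a random stand-in for the real SLODAR response model, and the truncation threshold is arbitrary; this is an illustration of the inversion mechanism only.

```python
import numpy as np

# Recover a layered "Cn2 profile" from noisy covariance measurements by
# a truncated-SVD least-squares inverse: small singular values, which
# amplify measurement noise, are discarded before inverting.

rng = np.random.default_rng(2)
n_meas, n_layers = 60, 8
M = rng.random((n_meas, n_layers))            # forward response matrix (illustrative)
profile_true = np.array([3.0, 1.0, 0.2, 0.0, 0.5, 0.0, 0.1, 0.0])
cov = M @ profile_true + 1e-3 * rng.standard_normal(n_meas)  # noisy "measurements"

# Truncated least-squares inverse via the SVD of M
U, s, Vt = np.linalg.svd(M, full_matrices=False)
keep = s > 0.05 * s[0]                        # drop noise-amplifying modes
profile_est = Vt[keep].T @ ((U[:, keep].T @ cov) / s[keep])

print(np.round(profile_est, 2))               # close to profile_true
```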
Maier, Andreas; Wigstroem, Lars; Hofmann, Hannes G.; Hornegger, Joachim; Zhu Lei; Strobel, Norbert; Fahrig, Rebecca
2011-11-15
Purpose: The combination of a quickly rotating C-arm gantry with a digital flat-panel detector has enabled the acquisition of three-dimensional (3D) data in the interventional suite. However, image quality is still somewhat limited since the hardware has not been optimized for CT imaging. Adaptive anisotropic filtering can improve image quality by reducing the noise level, and with it the radiation dose, without introducing noticeable blurring. By applying the filtering prior to 3D reconstruction, noise-induced streak artifacts are reduced compared to processing in the image domain. Methods: 3D anisotropic adaptive filtering was used to process an ensemble of 2D x-ray views acquired along a circular trajectory around an object. After arranging the input data into a 3D space (2D projections + angle), the orientation of structures was estimated using a set of differently oriented filters. The resulting tensor representation of local orientation was used to control the anisotropic filtering: low-pass filtering is applied only along structures, maintaining the high-spatial-frequency components perpendicular to them. The evaluation of the proposed algorithm includes numerical simulations, phantom experiments, and in vivo data acquired using an AXIOM Artis dTA C-arm system (Siemens AG, Healthcare Sector, Forchheim, Germany). Spatial resolution and noise levels were compared with and without adaptive filtering. A human observer study was carried out to evaluate low-contrast detectability. Results: The adaptive anisotropic filtering algorithm was found to significantly improve low-contrast detectability by halving the noise level (reduction of the standard deviation in certain areas from 74 to 30 HU). Virtually no degradation of high-contrast spatial resolution was observed in the modulation transfer function (MTF) analysis. Although the algorithm is computationally intensive, hardware acceleration using Nvidia's CUDA Interface provided an 8.9-fold
Computerized Adaptive Assessment of Personality Disorder: Introducing the CAT-PD Project
Simms, Leonard J.; Goldberg, Lewis R.; Roberts, John E.; Watson, David; Welte, John; Rotterman, Jane H.
2011-01-01
Assessment of personality disorders (PD) has been hindered by reliance on the problematic categorical model embodied in the most recent Diagnostic and Statistical Model of Mental Disorders (DSM), lack of consensus among alternative dimensional models, and inefficient measurement methods. This article describes the rationale for and early results from an NIMH-funded, multi-year study designed to develop an integrative and comprehensive model and efficient measure of PD trait dimensions. To accomplish these goals, we are in the midst of a five-phase project to develop and validate the model and measure. The results of Phase 1 of the project—which was focused on developing the PD traits to be assessed and the initial item pool—resulted in a candidate list of 59 PD traits and an initial item pool of 2,589 items. Data collection and structural analyses in community and patient samples will inform the ultimate structure of the measure, and computerized adaptive testing (CAT) will permit efficient measurement of the resultant traits. The resultant Computerized Adaptive Test of Personality Disorder (CAT-PD) will be well positioned as a measure of the proposed DSM-5 PD traits. Implications for both applied and basic personality research are discussed. PMID:22804677
An adaptive PCA fusion method for remote sensing images
NASA Astrophysics Data System (ADS)
Guo, Qing; Li, An; Zhang, Hongqun; Feng, Zhongkui
2014-10-01
The principal component analysis (PCA) method is a popular fusion method, used for its efficiency and the large improvement in spatial resolution it provides. However, spectral distortion is often found in PCA fusion. In this paper, we propose an adaptive PCA method to enhance the spectral quality of the fused image. The amount of spatial detail of the panchromatic (PAN) image injected into each band of the multi-spectral (MS) image is appropriately determined by a weighting matrix, which is defined by the edges of the PAN image, the edges of the MS image, and the proportions between MS bands. To demonstrate the effectiveness of the proposed method, qualitative visual and quantitative analyses are presented. The correlation coefficient (CC), the spectral discrepancy (SPD), and the spectral angle mapper (SAM) are used to measure the spectral quality of each fused band image. The Q index is calculated to evaluate the global spectral quality of all the fused bands as a whole. The spatial quality is evaluated by the average gradient (AG) and the standard deviation (STD). Experimental results show that the proposed method substantially improves spectral quality compared with the original PCA method while maintaining its high spatial quality.
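As background, the baseline PCA fusion scheme that the paper adapts is component substitution: project the MS bands onto principal components, replace the first component with the histogram-matched PAN image, and invert the transform. The sketch below uses synthetic arrays and omits the paper's adaptive weighting matrix entirely; all names are illustrative.

```python
import numpy as np

# Plain PCA pan-sharpening on synthetic data (the non-adaptive baseline).
rng = np.random.default_rng(0)
h = w = 32
ms = rng.random((h, w, 4))                          # 4 synthetic MS bands
pan = ms.mean(axis=2) + 0.05 * rng.random((h, w))   # synthetic PAN image

X = ms.reshape(-1, 4)
mean = X.mean(axis=0)
Xc = X - mean

# Principal components from the band covariance matrix
cov = np.cov(Xc, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
evecs = evecs[:, np.argsort(evals)[::-1]]           # descending variance
pcs = Xc @ evecs                                    # component scores

# Histogram-match PAN to the first component, then substitute it
p, pc1 = pan.ravel(), pcs[:, 0]
pcs[:, 0] = (p - p.mean()) / p.std() * pc1.std() + pc1.mean()

# Invert the (orthogonal) transform to obtain the fused bands
fused = (pcs @ evecs.T + mean).reshape(h, w, 4)
print(fused.shape)                                  # (32, 32, 4)
```

Because the eigenvector matrix is orthogonal, the inverse transform is exact; spectral distortion enters only through the substituted first component, which is what the adaptive weighting in the paper targets.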
A spectral projection method for transmission eigenvalues
NASA Astrophysics Data System (ADS)
Zeng, Fang; Sun, JiGuang; Xu, LiWei
2016-08-01
In this paper, we consider a nonlinear integral eigenvalue problem, which is a reformulation of the transmission eigenvalue problem arising in inverse scattering theory. The boundary element method is employed for discretization, which leads to a generalized matrix eigenvalue problem. We propose a novel method based on spectral projection. The method probes a given region on the complex plane using contour integrals and decides whether the region contains eigenvalues. It is particularly suitable for testing whether zero is an eigenvalue of the generalized eigenvalue problem, which in turn implies that the associated wavenumber is a transmission eigenvalue. The effectiveness and efficiency of the new method are demonstrated by numerical examples.
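The contour-integral probe at the heart of such spectral projection methods can be sketched directly: for a generalized problem A x = λ B x, the integral of trace((zB − A)⁻¹ B) over a closed contour, divided by 2πi, equals the number of eigenvalues enclosed. The dense-matrix example below (with an identity B) is a generic illustration of this counting idea, not the paper's boundary-element formulation.

```python
import numpy as np

def count_eigs_in_circle(A, B, center, radius, n_quad=64):
    """Quadrature estimate of the number of eigenvalues of (A, B)
    inside the circle |z - center| = radius."""
    total = 0.0 + 0.0j
    for k in range(n_quad):
        theta = 2.0 * np.pi * k / n_quad
        z = center + radius * np.exp(1j * theta)
        dz = radius * 1j * np.exp(1j * theta) * (2.0 * np.pi / n_quad)
        resolvent = np.linalg.solve(z * B - A, B)   # (zB - A)^{-1} B
        total += np.trace(resolvent) * dz
    return total / (2j * np.pi)

A = np.diag([0.0, 1.0, 5.0])    # eigenvalues of (A, I): 0, 1, 5
B = np.eye(3)
print(int(round(count_eigs_in_circle(A, B, 0.0, 0.5).real)))  # 1 (zero is enclosed)
print(int(round(count_eigs_in_circle(A, B, 0.0, 2.0).real)))  # 2
```

The trapezoidal rule on a circle converges exponentially for this integrand, so modest quadrature orders suffice to decide whether zero lies inside the probed region.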
A Review on Effectiveness and Adaptability of the Design-Build Method
NASA Astrophysics Data System (ADS)
Kudo, Masataka; Miyatake, Ichiro; Baba, Kazuhito; Yokoi, Hiroyuki; Fueta, Toshiharu
In the Ministry of Land, Infrastructure, Transport and Tourism (MLIT), various approaches have been taken for the efficient implementation of public works projects, one of which is the ongoing trial use of the design-build method as a means to utilize the technical skills and knowledge of private companies. In 2005, MLIT further introduced the advanced technical proposal type, a kind of comprehensive evaluation method, as part of its efforts to improve tendering and contracting systems. Meanwhile, although the positive effects of the design-build method have been reported, they have not been widely published, which may be one reason that the number of MLIT projects using the design-build method is declining year by year. In this context, this paper discusses the results of a study concerning the extent of flexibility allowed in the process and design (proposal) of public works projects, and of follow-up surveys of actual test-case projects, conducted as basic research to examine measures to expand and promote use of the design-build method. The study objects were selected from tunnel construction projects using the shield tunneling method to develop common utility ducts, and from bridge construction projects ordering construction of superstructure work and substructure work in a single contract. In reporting the results of the studies, the structures and the temporary installations were examined separately, and the effectiveness and adaptability of the design-build method was discussed for each.
NASA Technical Reports Server (NTRS)
Kantor, A. V.; Timonin, V. G.; Azarova, Y. S.
1974-01-01
The method of adaptive discretization is the most promising for elimination of redundancy from telemetry messages characterized by signal shape. Adaptive discretization with associative sorting was considered as a way to avoid the shortcomings of adaptive discretization with buffer smoothing and adaptive discretization with logical switching in on-board information compression devices (OICD) in spacecraft. Mathematical investigations of OICD are presented.
NASA Astrophysics Data System (ADS)
Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar
2011-12-01
This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics; accordingly, an optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective-partial-update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective-regressor APA (VSS-SR-APA). In the VSS-SPU algorithms, the filter coefficients are only partially updated, which reduces the computational complexity. In VSS-SR-APA, an optimal selection of input regressors is performed during adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
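The core idea, letting the step size adapt instead of fixing it, can be illustrated in the same system-identification setting the paper uses. The sketch below is an NLMS filter whose step size shrinks as a running error-power estimate drops; this simple rule stands in for the MSD-optimal step-size vector derived in the paper, and all constants are illustrative.

```python
import numpy as np

# Identify an unknown 4-tap channel with an NLMS filter whose step size
# is driven by a running estimate of the error power.
rng = np.random.default_rng(0)
h_true = np.array([0.8, -0.4, 0.2, 0.1])    # unknown channel
n_taps = len(h_true)
w = np.zeros(n_taps)                         # adaptive filter weights

x_hist = np.zeros(n_taps)                    # input delay line
p_err = 1.0                                  # running error-power estimate
for n in range(2000):
    x = rng.standard_normal()
    x_hist = np.roll(x_hist, 1)
    x_hist[0] = x
    d = h_true @ x_hist + 1e-3 * rng.standard_normal()   # desired signal
    e = d - w @ x_hist                                   # a priori error
    p_err = 0.99 * p_err + 0.01 * e**2
    mu = min(1.0, p_err / (p_err + 0.01))    # large step early, small at steady state
    w += mu * e * x_hist / (x_hist @ x_hist + 1e-8)      # NLMS update

print(np.round(w, 2))                        # close to h_true
```

A large step size gives fast initial convergence while the shrinking step size keeps the steady-state MSE low, which is the trade-off the VSS family resolves analytically.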
Hwang, Wei-Chin
2010-01-01
How do we culturally adapt psychotherapy for ethnic minorities? Although there has been growing interest in doing so, few therapy adaptation frameworks have been developed, and the majority take a top-down theoretical approach to adapting psychotherapy. The purpose of this paper is to introduce a community-based developmental approach to modifying psychotherapy for ethnic minorities. The Formative Method for Adapting Psychotherapy (FMAP) is a bottom-up approach that involves collaborating with consumers to generate and support ideas for therapy adaptation. It involves five phases that target developing, testing, and reformulating therapy modifications: (a) generating knowledge and collaborating with stakeholders, (b) integrating generated information with theory and empirical and clinical knowledge, (c) reviewing the initial culturally adapted clinical intervention with stakeholders and revising it, (d) testing the culturally adapted intervention, and (e) finalizing the culturally adapted intervention. Application of the FMAP is illustrated using examples from a study adapting psychotherapy for Chinese Americans, but it can also be readily applied to modify therapy for other ethnic groups. PMID:20625458
A Spectral Adaptive Mesh Refinement Method for the Burgers equation
NASA Astrophysics Data System (ADS)
Nasr Azadani, Leila; Staples, Anne
2013-03-01
Adaptive mesh refinement (AMR) is a powerful technique in computational fluid dynamics (CFD). Many CFD problems have a wide range of scales that vary in time and space. In order to resolve all the scales numerically, high grid resolutions are required; the smaller the scales, the higher the resolution must be. However, small scales usually form in only a small portion of the domain or during a limited period of time. AMR is an efficient method for solving these types of problems, allowing high grid resolution where and when it is needed while minimizing memory and CPU time. Here we formulate a spectral version of AMR in order to accelerate simulations of a 1D model for isotropic homogeneous turbulence, the Burgers equation, as a first test of this method. Using pseudospectral methods, we apply AMR in Fourier space. The spectral AMR (SAMR) method presented here is applied to the Burgers equation, and the results are compared with those obtained using standard solution methods on a fine mesh.
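The uniform-resolution pseudospectral baseline that such a method would adapt can be sketched compactly. The example below solves the viscous Burgers equation u_t + u u_x = ν u_xx on a periodic domain with Fourier differentiation and forward-Euler time stepping; it omits the adaptive refinement itself, and the grid size, viscosity, and time step are illustrative choices kept inside the scheme's stability limits.

```python
import numpy as np

# Pseudospectral viscous Burgers solver on [0, 2*pi) with N modes.
N = 128
nu = 0.05                                   # viscosity
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
ik = np.fft.fftfreq(N, d=1.0 / N) * 1j      # spectral derivative operator
u = np.sin(x)                               # initial condition

dt = 1e-3
for _ in range(1000):                       # integrate to t = 1
    u_hat = np.fft.fft(u)
    u_x = np.real(np.fft.ifft(ik * u_hat))          # first derivative
    u_xx = np.real(np.fft.ifft(ik**2 * u_hat))      # second derivative
    u = u + dt * (-u * u_x + nu * u_xx)             # forward Euler step

print(float(np.max(np.abs(u))))             # amplitude has decayed below 1
```

In a spectral AMR variant, the set of retained Fourier modes (the effective resolution) would be adjusted in time as the solution steepens, rather than being fixed at N.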
Robust image registration using adaptive coherent point drift method
NASA Astrophysics Data System (ADS)
Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong
2016-04-01
The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, it considers only the global spatial structure of the point sets, without other forms of additional attribute information. The uniform simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed that automatically determines the mixing parameters by embedding local attribute information of the features into the construction of the GMM. In addition, the weight parameter is treated as an unknown and determined automatically within the expectation-maximization algorithm. In image registration applications, a block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. Experimental results on optical and remote sensing images show that the proposed method significantly improves matching performance.
Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method
NASA Astrophysics Data System (ADS)
Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony
Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed that targets the high-performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high-resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to the highly vectorized algorithms necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.
A locally adaptive kernel regression method for facies delineation
NASA Astrophysics Data System (ADS)
Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.
2015-12-01
Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial positions and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest-neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method improves significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough-curve performance.
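The kernel-based classification idea can be illustrated with a much-simplified sketch: Gaussian kernels centered on the sampled points vote for a facies, with a bandwidth that adapts to local sampling density (the m-th nearest-neighbour distance) instead of the paper's full steering-tensor construction. The data, labels, and parameters below are all synthetic and illustrative.

```python
import numpy as np

def classify(xy_data, facies, xy_query, m=3):
    """Assign each query location the facies with the largest
    kernel-weighted vote, using locally adaptive bandwidths."""
    preds = []
    for q in xy_query:
        d = np.linalg.norm(xy_data - q, axis=1)   # query-to-data distances
        scores = {}
        for i, (di, fi) in enumerate(zip(d, facies)):
            dd = np.linalg.norm(xy_data - xy_data[i], axis=1)
            h = np.sort(dd)[m]                    # m-th NN distance = local bandwidth
            w = np.exp(-0.5 * (di / h) ** 2)      # Gaussian kernel weight
            scores[fi] = scores.get(fi, 0.0) + w
        preds.append(max(scores, key=scores.get))
    return preds

rng = np.random.default_rng(1)
# Two synthetic facies separated at x = 0.5, sparsely sampled at random
pts = rng.random((40, 2))
labels = (pts[:, 0] > 0.5).astype(int)
queries = np.array([[0.1, 0.5], [0.9, 0.5]])
print(classify(pts, labels, queries))             # [0, 1]
```

Adapting the bandwidth to sampling density keeps the classifier smooth where data are sparse and sharp where data are dense, the same motivation as the steering kernels in the paper.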
Adapting Western Research Methods to Indigenous Ways of Knowing
Christopher, Suzanne
2013-01-01
Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid. PMID:23678897
Methods for cost estimation in software project management
NASA Astrophysics Data System (ADS)
Briciu, C. V.; Filip, I.; Indries, I. I.
2016-02-01
The speed with which the processes used in the software development field have changed makes forecasting the overall costs for a software project very difficult. Many researchers have considered this task unachievable, but there is a group of scientists for whom it can be solved using already known mathematical methods (e.g., multiple linear regression) and newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. In the first part of the paper, a summary of the major achievements in the research area of estimating overall project costs is presented, together with a description of existing software development process models. In the last part, a basic mathematical model based on genetic programming is proposed, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked with the current reality of software development, taking as a basis the software product life cycle and the current challenges and innovations in the software development area. Based on the authors' experience and the analysis of the existing models and product life cycles, it was concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
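For context, the COCOMO 81 model underlying the PROMISE datasets estimates effort as a power law of program size. The snippet below is a plain implementation of the basic COCOMO 81 formulas with their standard published coefficients, not the paper's evolved genetic-programming model.

```python
# Basic COCOMO 81: effort (person-months) = a * KLOC**b, with standard
# coefficients for the three project classes.
COEFFS = {
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

def basic_cocomo_effort(kloc, mode="organic"):
    """Estimated effort in person-months for a project of `kloc`
    thousand lines of code in the given development mode."""
    a, b = COEFFS[mode]
    return a * kloc**b

print(round(basic_cocomo_effort(32, "organic"), 1))   # 91.3 person-months
```

A genetic-programming approach such as the paper's would search over expression trees generalizing this fixed power-law form, with the fitness function scoring each candidate against the PROMISE project records.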
NASA Astrophysics Data System (ADS)
Vanderlinden, J. P.; Baztan, J.
2014-12-01
The purpose of this paper is to present the "Adaptation Research, a Transdisciplinary community and policy centered approach" (ARTisticc) project. ARTisticc's goal is to apply innovative, standardized, transdisciplinary art-and-science integrative approaches to foster robust, socially, culturally and scientifically grounded, community-centred adaptation to climate change. The approach used in the project is based on the strong understanding that adaptation is: (a) still "a concept of uncertain form"; (b) a concept dealing with uncertainty; (c) a concept that calls for an analysis that goes beyond the traditional disciplinary organization of science; and (d) an unconventional process in the realm of science and policy integration. The project is centered on case studies in France, Greenland, Russia, India, Canada, Alaska, and Senegal. At every site we jointly develop artwork while analyzing how natural science, essentially the geosciences, can be used to adapt better in the future, how society adapts to current changes, and how memories of past adaptations frame current and future processes. Art forms are mobilized to share scientific results with local communities and policy makers in a way that respects cultural specificities while empowering stakeholders; ARTisticc translates these "real life experiments" into stories and artwork that are meaningful to those affected by climate change. The scientific results and the culturally mediated productions will thereafter be used to co-construct, with NGOs and policy makers, policy briefs, i.e. robust and scientifically legitimate policy recommendations regarding coastal adaptation. This co-construction process will itself be analysed with the goal of increasing art's and science's performative functions in the universe of evidence-based policy making. The project involves scientists from the natural sciences, the social sciences and the humanities, as well as artists from the performing arts (playwrights
A forward method for optimal stochastic nonlinear and adaptive control
NASA Technical Reports Server (NTRS)
Bayard, David S.
1988-01-01
A computational approach is taken to solve the optimal nonlinear stochastic control problem. The approach is to systematically solve the stochastic dynamic programming equations forward in time, using a nested stochastic approximation technique. Although computationally intensive, this provides a straightforward numerical solution for this class of problems and provides an alternative to the usual dimensionality problem associated with solving the dynamic programming equations backward in time. It is shown that the cost degrades monotonically as the complexity of the algorithm is reduced. This provides a strategy for suboptimal control with clear performance/computation tradeoffs. A numerical study focusing on a generic optimal stochastic adaptive control example is included to demonstrate the feasibility of the method.
Neurology diagnostics security and terminal adaptation for PocketNeuro project.
Chemak, C; Bouhlel, M-S; Lapayre, J-C
2008-09-01
This paper presents new approaches to medical information security and mobile-phone terminal adaptation for the PocketNeuro project. The latter term refers to a project created for the management of neurological diseases. It consists of transmitting information about patients ("desk of patients") to a doctor's mobile phone during the visit and examination of a patient. These new approaches for the PocketNeuro project were analyzed in terms of medical information security and adaptation of the diagnostic images to the doctor's mobile phone. Images were extracted from a DICOM library. Matlab and its library were used as software to test our approaches and to validate our results. Experiments performed on a database of 30 neuronal medical images of 256 x 256 pixels indicated that our new approaches for the PocketNeuro project are valid and support plans for large-scale studies between French and Swiss hospitals using secured connections. PMID:18817496
Results of a Formal Methods Demonstration Project
NASA Technical Reports Server (NTRS)
Kelly, J.; Covington, R.; Hamilton, D.
1994-01-01
This paper describes the results of a cooperative study conducted by a team of researchers in formal methods at three NASA Centers to demonstrate FM techniques and to tailor them to critical NASA software systems. This pilot project applied FM to an existing critical software subsystem, the Shuttle's Jet Select subsystem (Phase I of an ongoing study). The present study shows that FM can be used successfully to uncover hidden issues in a highly critical and mature Functional Subsystem Software Requirements (FSSR) specification which are very difficult to discover by traditional means.
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what the most suitable criteria for adaptation are. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
Formal methods demonstration project for space applications
NASA Technical Reports Server (NTRS)
Divito, Ben L.
1995-01-01
The Space Shuttle program is cooperating in a pilot project to apply formal methods to live requirements analysis activities. As one of the larger ongoing shuttle Change Requests (CR's), the Global Positioning System (GPS) CR involves a significant upgrade to the Shuttle's navigation capability. Shuttles are to be outfitted with GPS receivers and the primary avionics software will be enhanced to accept GPS-provided positions and integrate them into navigation calculations. Prior to implementing the CR, requirements analysts at Loral Space Information Systems, the Shuttle software contractor, must scrutinize the CR to identify and resolve any requirements issues. We describe an ongoing task of the Formal Methods Demonstration Project for Space Applications whose goal is to find an effective way to use formal methods in the GPS CR requirements analysis phase. This phase is currently under way and a small team from NASA Langley, ViGYAN Inc. and Loral is now engaged in this task. Background on the GPS CR is provided and an overview of the hardware/software architecture is presented. We outline the approach being taken to formalize the requirements, only a subset of which is being attempted. The approach features the use of the PVS specification language to model 'principal functions', which are major units of Shuttle software. Conventional state machine techniques form the basis of our approach. Given this background, we present interim results based on a snapshot of work in progress. Samples of requirements specifications rendered in PVS are offered as illustrations. We walk through a specification sketch for the principal function known as GPS Receiver State processing. Results to date are summarized and feedback from Loral requirements analysts is highlighted. Preliminary data is shown comparing issues detected by the formal methods team versus those detected using existing requirements analysis methods. We conclude by discussing our plan to complete the remaining
Evaluation of Adaptive Subdivision Method on Mobile Device
NASA Astrophysics Data System (ADS)
Rahim, Mohd Shafry Mohd; Isa, Siti Aida Mohd; Rehman, Amjad; Saba, Tanzila
2013-06-01
Recently, there have been significant improvements in the capabilities of mobile devices, but rendering large 3D objects is still tedious because of the resource constraints of mobile devices. To reduce storage requirements, a 3D object is simplified, but certain areas of curvature are compromised and the surface is not smooth. Therefore a method to smooth selected areas of a curvature is implemented; one popular choice is the adaptive subdivision method. Experiments are performed using two datasets, with results based on processing time, rendering speed and the appearance of the object on the devices. The results show a drop in frame-rate performance due to the increase in the number of triangles at each level of iteration, while the processing time for generating the new mesh also increases significantly. Since the devices differ in screen size, the surface on the iPhone appears to have more triangles and to be more compact than the surface displayed on the iPad. [Figure not available: see fulltext.]
Random projection and SVD methods in hyperspectral imaging
NASA Astrophysics Data System (ADS)
Zhang, Jiani
Hyperspectral imaging provides researchers with abundant information with which to study the characteristics of objects in a scene. Processing the massive hyperspectral imagery datasets in a way that efficiently provides useful information becomes an important issue. In this thesis, we consider methods which reduce the dimension of hyperspectral data while retaining as much useful information as possible. Traditional deterministic methods for low-rank approximation are not always able to process huge datasets effectively, and therefore probabilistic methods are useful for dimension reduction of hyperspectral images. In this thesis, we begin by introducing the background and motivations of this work. Next, we summarize the preliminary knowledge and the applications of SVD and PCA. After these descriptions, we present a probabilistic method, randomized Singular Value Decomposition (rSVD), for the purposes of dimension reduction, compression, reconstruction, and classification of hyperspectral data. We discuss some variations of this method. These variations offer the opportunity to obtain a more accurate reconstruction of matrices whose singular values decay gradually, to process matrices without a target rank, and to obtain the rSVD with only a single pass over the original data. Moreover, we compare the method with Compressive-Projection Principal Component Analysis (CPPCA). From the numerical results, we can see that rSVD has better performance in compression and reconstruction than truncated SVD and CPPCA. We also apply rSVD to classification methods for the hyperspectral data provided by the National Geospatial-Intelligence Agency (NGA).
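The randomized SVD the thesis builds on can be sketched in a few lines of NumPy: sample the range of the matrix with a Gaussian test matrix, orthonormalize, and take the SVD of the small projected matrix. Matrix sizes and the oversampling value below are illustrative assumptions, not the thesis's settings:

```python
import numpy as np

def rsvd(A, rank, oversample=10, seed=0):
    """Randomized SVD (Halko-style): approximate the top `rank`
    singular triplets of A using a Gaussian range sketch."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    omega = rng.standard_normal((n, rank + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ omega)   # orthonormal basis for the sketched range
    B = Q.T @ A                      # small (rank+oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

if __name__ == "__main__":
    # An exactly rank-5 test matrix, standing in for a well-compressible
    # hyperspectral cube unfolded into (pixels x bands).
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 50))
    U, s, Vt = rsvd(A, rank=5)
    err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
    print(err)  # near machine precision for an exactly rank-5 matrix
```

For matrices with slowly decaying singular values, the variations the thesis mentions (e.g. power iterations on the sketch) tighten the approximation.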
ERIC Educational Resources Information Center
Polman, Joseph L.
This paper discusses research on activity structure design in a project-based science classroom and efforts to adapt designs from this setting to an after-school program involving historical inquiry. Common activity structures such as classroom lessons and Initiation-Reply-Evaluation (I-R-E) sequences are important cultural tools that help…
An Adaptation of Dual Labor Market Theory to the Evaluation of a Youth Employment Project.
ERIC Educational Resources Information Center
Spiessl, Ronald W.
This paper reports the problems arising out of, and the solutions developed in, adapting dual labor market theory to the evaluation of a CETA youth employment demonstration project. The theory posits that some jobs operate within a primary labor market, and are characterized by good wages and benefits, job security and potential for within firm…
ERIC Educational Resources Information Center
Melaragno, Ralph J.
The two-phase study compared two methods of adapting self-instructional materials to individual differences among learners. The methods were compared with each other and with a control condition involving only minimal adaptation. The first adaptation procedure was based on subjects' performances on a learning task in Phase I of the study; the…
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Method for removing tilt control in adaptive optics systems
Salmon, J.T.
1998-04-28
A new adaptive optics system and method of operation are disclosed, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^{-1} X^T) G (I - A). 3 figs.
Method for removing tilt control in adaptive optics systems
Salmon, Joseph Thaddeus
1998-01-01
A new adaptive optics system and method of operation, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^{-1} X^T) G (I - A)
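Both patent records state the same modified gain matrix, G' = (I - X(X^T X)^{-1} X^T) G (I - A), whose left factor is the orthogonal projector onto the complement of the tilt modes spanned by the columns of X. A small NumPy sketch makes the tilt-removal property explicit; the matrix shapes and the zero A below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def tilt_removed_gain(G, X, A):
    """Modified gain matrix G' = (I - X (X^T X)^{-1} X^T) G (I - A).
    The left factor projects out the tilt modes spanned by the columns
    of X; A models tilt cross-coupling in the actuator commands."""
    n = X.shape[0]
    P = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T   # orthogonal projector
    m = G.shape[1]
    return P @ G @ (np.eye(m) - A)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 8, 6
    X = rng.standard_normal((n, 2))    # two tilt modes (tip and tilt)
    G = rng.standard_normal((n, m))    # unmodified gain matrix
    A = np.zeros((m, m))               # no cross-coupling in this toy case
    Gp = tilt_removed_gain(G, X, A)
    # The modified gain contains no component along the tilt modes:
    print(np.allclose(X.T @ Gp, 0))
```

Since X^T P = 0 by construction, any command produced through G' leaves the tilt degrees of freedom to the steering mirror, which is the point of the method.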
Adapted G-mode Clustering Method applied to Asteroid Taxonomy
NASA Astrophysics Data System (ADS)
Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.
2013-11-01
The original G-mode is a clustering method developed by A. I. Gavrishin in the late 1960s for the geochemical classification of rocks; it has also been applied to asteroid photometry, cosmic rays, lunar samples and planetary science spectroscopy data. In this work, we used an adapted version to classify asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, previously written in FORTRAN 77, reimplemented in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy played an important role in speeding up G-mode: scipy.spatial.distance.mahalanobis was chosen as the distance estimator and numpy.histogramdd was applied to find the initial seeds from which clusters evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.
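The two SciPy/NumPy building blocks the abstract names can be demonstrated directly: numpy.histogramdd to locate a dense seed region, and scipy.spatial.distance.mahalanobis as the cluster-to-point metric. The toy two-cluster data below stand in for the SDSS photometry:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Toy data: two tight clusters standing in for SDSS color measurements.
rng = np.random.default_rng(0)
cluster_a = rng.normal([0, 0], 0.1, size=(200, 2))
cluster_b = rng.normal([3, 3], 0.1, size=(200, 2))
data = np.vstack([cluster_a, cluster_b])

# Densest bin of a coarse multidimensional histogram -> initial seed region,
# the role numpy.histogramdd plays in the ported G-mode.
hist, edges = np.histogramdd(data, bins=6)
i, j = np.unravel_index(np.argmax(hist), hist.shape)
seed_center = np.array([(edges[0][i] + edges[0][i + 1]) / 2,
                        (edges[1][j] + edges[1][j + 1]) / 2])

# Mahalanobis distance from cluster A's distribution to two test points.
VI = np.linalg.inv(np.cov(cluster_a, rowvar=False))  # inverse covariance
d_near = mahalanobis([0.05, 0.0], cluster_a.mean(axis=0), VI)
d_far = mahalanobis([3.0, 3.0], cluster_a.mean(axis=0), VI)
print(d_near < 3 < d_far)  # inside cluster A vs. a point from cluster B
```

In the full algorithm the seed grows by absorbing points within a critical Mahalanobis distance, re-estimating mean and covariance at each step; the snippet shows only the two primitives.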
Deng, Luzhen; Mi, Deling; He, Peng; Feng, Peng; Yu, Pengwei; Chen, Mianyi; Li, Zhichao; Wang, Jian; Wei, Biao
2015-01-01
Because Total Variation (TV), which uses only the x- and y-direction gradient transforms as its sparse representation during the iteration process, lacks directivity, this paper introduces Adaptive-weighted Diagonal Total Variation (AwDTV). AwDTV uses the diagonal-direction gradient to constrain the reconstructed image and adds associated weights, expressed as an exponential function and adaptively adjusted by the local image-intensity diagonal gradient, in order to preserve edge details; the resulting optimization problem is solved by the steepest descent method. Finally, two sets of numerical simulations show that the proposed algorithm can reconstruct high-quality CT images from few-view projections, with lower Root Mean Square Error (RMSE) and higher Universal Quality Index (UQI) than the Algebraic Reconstruction Technique (ART) and a TV-based reconstruction method. PMID:26405935
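The two quality metrics used in the comparison, RMSE and UQI, are easy to state in code. The following is a generic sketch (a global UQI on a synthetic phantom), not the paper's implementation, which would typically compute UQI over local windows:

```python
import numpy as np

def rmse(x, y):
    """Root mean square error between two images."""
    return np.sqrt(np.mean((x - y) ** 2))

def uqi(x, y):
    """Universal Quality Index (Wang & Bovik), computed globally here for
    brevity. It combines correlation, luminance and contrast similarity,
    and equals 1 only when the two images are identical."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.uniform(0.2, 1.0, size=(64, 64))      # stand-in "phantom"
    noisy = truth + rng.normal(0, 0.05, size=truth.shape)
    print(uqi(truth, truth))                          # 1.0 for a perfect match
    print(rmse(truth, noisy) > rmse(truth, truth))
```

Lower RMSE and higher UQI against the known phantom are exactly the criteria by which AwDTV is judged against ART and TV in the abstract.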
NASA Astrophysics Data System (ADS)
Brekke, L. D.; Clark, M. P.; Gutmann, E. D.; Wood, A.; Mizukami, N.; Mendoza, P. A.; Rasmussen, R.; Ikeda, K.; Pruitt, T.; Arnold, J. R.; Rajagopalan, B.
2015-12-01
Adaptation planning assessments often rely on single methods for climate projection downscaling and hydrologic analysis, do not reveal uncertainties from associated method choices, and thus likely produce overly confident decision-support information. Recent work by the authors has highlighted this issue by identifying strengths and weaknesses of widely applied methods for downscaling climate projections and assessing hydrologic impacts. This work has shown that many of the methodological choices made can alter the magnitude, and even the sign, of the climate change signal. Such results motivate consideration of both sources of method uncertainty within an impacts assessment. Consequently, the authors have pursued development of improved downscaling techniques spanning a range of method classes (quasi-dynamical and circulation-based statistical methods) and developed approaches to better account for hydrologic analysis uncertainty (multi-model; regional parameter estimation under forcing uncertainty). This presentation summarizes progress in the development of these methods, as well as implications of pursuing these developments. First, having access to these methods creates an opportunity to better reveal impacts uncertainty through multi-method ensembles, expanding on present-practice ensembles which are often based only on emissions scenarios and GCM choices. Second, such expansion of uncertainty treatment combined with an ever-expanding wealth of global climate projection information creates a challenge of how to use such a large ensemble for local adaptation planning. To address this challenge, the authors are evaluating methods for ensemble selection (considering the principles of fidelity, diversity and sensitivity) that are compatible with present-practice approaches for abstracting change scenarios from any "ensemble of opportunity". Early examples from this development will also be presented.
Nonlinear optimization with linear constraints using a projection method
NASA Technical Reports Server (NTRS)
Fox, T.
1982-01-01
Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear constraints is developed, and a program that uses this method is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
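A minimal sketch of the gradient-projection idea: for active linear constraints A x = b, project the gradient onto the null space of A so that a step along the projected direction stays feasible. QR factorization stands in here for the Gram-Schmidt orthogonalization the abstract mentions (QR is the numerically stable equivalent); the example is illustrative, not the paper's program:

```python
import numpy as np

def constraint_projector(A):
    """Projector onto the null space of the active constraints A x = b.
    Built from an orthonormal basis of A's row space; QR plays the role
    of classical Gram-Schmidt."""
    Q, _ = np.linalg.qr(A.T)            # columns of Q span the row space of A
    return np.eye(A.shape[1]) - Q @ Q.T

if __name__ == "__main__":
    # One constraint in R^3: x + y + z = const. Project the gradient so a
    # step along it keeps the constraint satisfied.
    A = np.array([[1.0, 1.0, 1.0]])
    P = constraint_projector(A)
    g = np.array([1.0, 2.0, 3.0])       # unprojected gradient
    pg = P @ g
    print(np.allclose(A @ pg, 0))       # projected direction is feasible
```

A line search along -pg then decreases the objective without leaving the constraint surface, which is the core of projection methods of this type.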
Adaptable Metadata Rich IO Methods for Portable High Performance IO
Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten
2009-01-01
Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as that provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small
The Project Method in Agricultural Education: Then and Now
ERIC Educational Resources Information Center
Roberts, T. Grady; Harlin, Julie F.
2007-01-01
The purpose of this philosophical paper was to synthesize theoretical and historical foundations of the project method and compare them to modern best-practices. A review of historical and contemporary literature related to the project method yielded six themes: 1) purpose of projects; 2) project classification; 3) the process; 4) the context; 5)…
An Alternative Method to Project Wind Patterns
NASA Astrophysics Data System (ADS)
Fadillioglu, Cagla; Kiyisuren, I. Cagatay; Collu, Kamil; Turp, M. Tufan; Kurnaz, M. Levent; Ozturk, Tugba
2016-04-01
Wind energy is one of the major clean and sustainable energy sources. Besides its various advantages, wind energy has the downside that its performance cannot be projected very accurately over the long term. In this study, we offer an alternative method for determining the best location to install a wind turbine in a large area, aiming at maximum energy performance in the long run. For this purpose, a regional climate model (RegCM4.4) is combined with software called Winds on Critical Streamline Surfaces (WOCSS) to identify wind patterns for any domain, even in a changing climate. As a special case, the Çanakkale region is examined because its terrain profile has both coastal and mountainous features. The WOCSS program was run twice for each month of the sample years in a doubly nested fashion, using provisional RegCM4.4 wind data for the years 2020 to 2040. The modified version of WOCSS provides terrain-following flow surfaces and, by processing those data, produces a wind profile output for heights specified by the user. The computational time of WOCSS is also within a reasonable range. Considering the lack of alternative methods for long-term wind performance projection, the model used in this study is a good way of obtaining quick indications of wind performance while taking terrain effects into account. This research has been supported by Boğaziçi University Research Fund Grant Number 10421.
A hybrid method for optimization of the adaptive Goldstein filter
NASA Astrophysics Data System (ADS)
Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue
2014-12-01
The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is applied as a power of the filtering function; depending on its value, the considered areas are filtered strongly or weakly. Several variants have been developed to determine alpha adaptively using different indicators such as coherence and phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in practice, and the optimal model for accurately determining the functional relationship between the indicators and alpha is also unclear. As a result, the filter tends to under- or over-filter and is rarely exactly right. The study presented in this paper aims at accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is merged into the filtering procedure to suppress the high noise over incoherent areas. Experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance than existing approaches.
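For orientation, a single-patch Goldstein filter can be sketched as weighting the interferogram spectrum by its magnitude raised to the power alpha. The patch handling, normalisation and test data below are simplifying assumptions, not the paper's adaptive alpha estimator:

```python
import numpy as np

def goldstein_filter(phase, alpha=0.6):
    """Minimal single-patch Goldstein filter sketch: weight the spectrum of
    the complex interferogram by its magnitude to the power alpha. Real
    implementations work on overlapping patches and smooth the response;
    alpha is the parameter the paper estimates adaptively."""
    ifg = np.exp(1j * phase)                 # complex interferogram
    Z = np.fft.fft2(ifg)
    S = np.abs(Z)
    S = S / S.max()                          # normalise the response
    Zf = (S ** alpha) * Z                    # alpha=0 leaves data unfiltered
    return np.angle(np.fft.ifft2(Zf))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = 3 * np.linspace(0, 2 * np.pi, 64, endpoint=False)
    clean = np.tile(x, (64, 1))              # smooth fringe pattern, 3 cycles
    noisy = clean + rng.normal(0, 0.8, clean.shape)
    filt = goldstein_filter(noisy, alpha=0.8)
    # Wrapped residual vs. the clean fringes shrinks after filtering:
    err = lambda p: np.abs(np.angle(np.exp(1j * (p - clean)))).mean()
    print(err(filt) < err(noisy))
```

Because the dominant fringe frequency keeps weight 1 while weak noise frequencies are attenuated, phase noise is suppressed; too large an alpha over low-noise areas distorts the signal, which is exactly the over-filtering the paper guards against.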
Tsunami modelling with adaptively refined finite volume methods
LeVeque, R.J.; George, D.L.; Berger, M.J.
2011-01-01
Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
Adaptation of a Psycho-Oncology Intervention for Black Breast Cancer Survivors: Project CARE
Lechner, Suzanne C.; Ennis-Whitehead, Nicole; Robertson, Belinda Ryan; Annane, Debra W.; Vargas, Sara; Carver, Charles S.; Antoni, Michael H.
2014-01-01
Black women are traditionally underserved in all aspects of cancer care. This disparity is particularly evident in the area of psychosocial interventions where there are few programs designed to specifically meet the needs of Black breast cancer survivors. Cognitive-behavioral stress management intervention (CBSM) has been shown to facilitate adjustment to cancer. Recently, this intervention model has been adapted for Black women who have recently completed treatment for breast cancer. We outline the components of the CBSM intervention, the steps we took to adapt the intervention to meet the needs of Black women (Project CARE) and discuss the preliminary findings regarding acceptability and retention of participants in this novel study. PMID:25544778
Supporting UK adaptation: building services for the next set of UK climate projections
NASA Astrophysics Data System (ADS)
Fung, Fai; Lowe, Jason
2016-04-01
As part of the Climate Change Act 2008, the UK Government sets out a national adaptation programme to address the risks and opportunities identified in a national climate change risk assessment (CCRA) every five years. The last risk assessment in 2012 was based on the probabilistic projections for the UK published in 2009 (UKCP09). The second risk assessment will also use information from UKCP09 alongside other evidence on climate projections. However, developments in the science of climate projection, and evolving user needs (based partly on what has been learnt about the diverse user requirements of the UK adaptation community from the seven years of delivering and managing UKCP09 products, market research and the peer-reviewed literature), suggest now is an appropriate time to update the projections and how they are delivered. A new set of UK climate projections is now being produced to upgrade UKCP09 to reflect the latest developments in climate science, the first phase of which will be delivered in 2018 to support the third CCRA. A major component of the work is the building of a tailored service to support users of the new projections during their development and to involve users in key decisions so that the projections are of most use. We will set out the plan for the new climate projections, which seeks to address evolving user needs. We will also present a framework which aims to (i) facilitate the dialogue between users, boundary organisations and producers, reflecting their different decision-making roles, (ii) produce scientifically robust, user-relevant climate information, and (iii) provide the building blocks for developing further climate services to support adaptation activities in the UK.
The importance of including variability in climate change projections used for adaptation
NASA Astrophysics Data System (ADS)
Sexton, David M. H.; Harris, Glen R.
2015-10-01
Our understanding of mankind's influence on the climate is largely based on computer simulations. Model output is typically averaged over several decades so that the anthropogenic climate change signal stands out from the largely unpredictable 'noise' of climate variability. Similar averaging periods (30-year) are used for regional climate projections to inform adaptation. According to two such projections, UKCIP02 (ref. ) and UKCP09 (ref. ), the UK will experience 'hotter drier summers and warmer wetter winters' in the future. This message is about a typical rather than any individual future season, and these projections should not be compared directly to observed weather as this neglects the sizeable contribution from year-to-year climate variability. Therefore, despite the apparent contradiction with the messages, it is a fallacy to suggest that recent cold UK winters, like that of 2009/2010, disprove human-made climate change. Nevertheless, such claims understandably cause public confusion and doubt. Here we include year-to-year variability to provide projections for individual seasons. This approach has two advantages. First, it allows fair comparisons with recent weather events, for instance showing that recent cold winters are within projected ranges. Second, it allows the projections to be expressed in terms of the extreme hot, cold, wet or dry seasons that impact society, providing a better idea of adaptation needs.
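The idea of adding year-to-year variability to a mean climate-change signal can be sketched as follows; the Gaussian variability model, the function name, and the numbers are illustrative assumptions, not the UKCP09 methodology.

```python
import numpy as np

def seasonal_projection(mean_change, interannual_sd, n_samples=10000, seed=None):
    """Distribution for an individual future season: a long-term mean
    climate-change signal plus idealized Gaussian year-to-year variability."""
    rng = np.random.default_rng(seed)
    return mean_change + rng.normal(0.0, interannual_sd, size=n_samples)

# Example: +2.0 degC mean winter warming with 1.5 degC interannual spread.
samples = seasonal_projection(2.0, 1.5, seed=0)
lo, hi = np.percentile(samples, [1, 99])
# Individual cold winters (negative anomalies) remain inside the projected range.
```

With these illustrative numbers, the 1st percentile of the distribution is below zero, showing how a cold season can fall within the projected range even under a warming mean.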
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods
Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
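The core artifact-removal step, projecting image-block arrays onto their leading principal components, can be sketched with ordinary (linear) PCA; the kernel transform, block matching, and k-space consistency steps of the actual method are omitted, and all names and parameters here are illustrative.

```python
import numpy as np

def pca_denoise_blocks(block_array, n_components):
    """Project rows (flattened image blocks) onto their leading principal
    components and back; incoherent artifact energy is assumed to live in
    the weak tail of the singular-value spectrum."""
    mean = block_array.mean(axis=0)
    U, s, Vt = np.linalg.svd(block_array - mean, full_matrices=False)
    s[n_components:] = 0.0  # discard the weak (artifact-dominated) components
    return (U * s) @ Vt + mean
```

Applied to an ensemble of similar blocks corrupted by incoherent noise, the back-projected blocks are closer to the underlying structure than the inputs.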
How to Select a Project Delivery Method for School Facilities
ERIC Educational Resources Information Center
Kalina, David
2007-01-01
In this article, the author discusses and explains three project delivery methods that are commonly used today in the United States. The first project delivery method mentioned is the design-bid-build, which is still the predominant method of project delivery for public works and school construction in the United States. The second is the…
Adaptive optical beam shaping for compensating projection-induced focus deformation
NASA Astrophysics Data System (ADS)
Pütsch, Oliver; Stollenwerk, Jochen; Loosen, Peter
2016-02-01
Scanner-based applications are already widely used for the processing of surfaces, as they allow for highly dynamic deflection of the laser beam. In particular, the processing of three-dimensional surfaces with laser radiation is driving the development of highly innovative manufacturing techniques. Unfortunately, the focused laser beam suffers from deformation caused by the involved projection mechanisms. The degree of deformation is field variant and depends on both the surface geometry and the working position of the laser beam. Depending on the process sensitivity, the deformation affects the process quality, which motivates a method of compensation. Current approaches are based on a local adaptation of the laser power to maintain constant intensity within the interaction zone. For advanced manufacturing, this approach is insufficient, as the residual deformation of the initially circular laser spot is not taken into account. In this paper, an alternative approach is discussed. Additional beam-shaping devices are integrated between the laser source and the scanner and allow for in situ compensation to ensure a field-invariant circular focus spot within the interaction zone. Beyond the optical design, the approach is challenging from a control-theory point of view, as the beam deflection and the compensation have to be synchronized.
Solution of Reactive Compressible Flows Using an Adaptive Wavelet Method
NASA Astrophysics Data System (ADS)
Zikoski, Zachary; Paolucci, Samuel; Powers, Joseph
2008-11-01
This work presents numerical simulations of reactive compressible flow, including detailed multicomponent transport, using an adaptive wavelet algorithm. The algorithm allows for dynamic grid adaptation which enhances our ability to fully resolve all physically relevant scales. The thermodynamic properties, equation of state, and multicomponent transport properties are provided by CHEMKIN and TRANSPORT libraries. Results for viscous detonation in a H2:O2:Ar mixture, and other problems in multiple dimensions, are included.
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
NASA Technical Reports Server (NTRS)
Wang, Ray (Inventor)
2009-01-01
A method and system for spatial data manipulation input and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver for automatically and adaptively controlling wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both short and long distances. The wireless transceiver is automatically adaptive, and wireless devices can send and receive wireless digital and analog data from various sources rapidly and in real time via available networks and network services.
Global Change adaptation in water resources management: the Water Change project.
Pouget, Laurent; Escaler, Isabel; Guiu, Roger; Mc Ennis, Suzy; Versini, Pierre-Antoine
2012-12-01
In recent years, water resources management has been facing new challenges due to increasing changes and their associated uncertainties, such as changes in climate, water demand or land use, which can be grouped under the term Global Change. The Water Change project (LIFE+ funding) developed a methodology and a tool to assess the Global Change impacts on water resources, thus helping river basin agencies and water companies in their long-term planning and in the definition of adaptation measures. The main result of the project was the creation of a step-by-step methodology to assess Global Change impacts and define strategies of adaptation. This methodology was tested in the Llobregat river basin (Spain) with the objective of being applicable to any water system. It includes several steps such as setting up the problem with a DPSIR framework, developing Global Change scenarios, running river basin models and performing a cost-benefit analysis to define optimal strategies of adaptation. This methodology was supported by the creation of a flexible modelling system, which can link a wide range of models, such as hydrological, water quality, and water management models. The tool allows users to integrate their own models into the system, which can then exchange information among themselves automatically. This makes it possible to simulate the interactions among multiple components of the water cycle and to quickly run a large number of Global Change scenarios. The outcomes of this project make it possible to define and test different sets of adaptation measures for the basin that can be further evaluated through cost-benefit analysis. The integration of the results contributes to efficient decision-making on how to adapt to Global Change impacts. PMID:22883209
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
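The Granule-generation step described above can be sketched with Python's standard library; the element names and identifiers below follow the SPASE pattern but are simplified, illustrative placeholders, not the full SPASE schema or the ADAPT code.

```python
import xml.etree.ElementTree as ET

def make_granule(parent_id, granule_id, source_url):
    """Build a minimal SPASE-style Granule record that links one data file
    to its parent data-resource description (illustrative elements only)."""
    spase = ET.Element("Spase")
    granule = ET.SubElement(spase, "Granule")
    ET.SubElement(granule, "ResourceID").text = granule_id
    ET.SubElement(granule, "ParentID").text = parent_id
    source = ET.SubElement(granule, "Source")
    ET.SubElement(source, "URL").text = source_url
    return ET.tostring(spase, encoding="unicode")

# Hypothetical identifiers and URL for illustration only.
xml_doc = make_granule(
    "spase://Example/NumericalData/MissionX/Instrument/PT1M",
    "spase://Example/Granule/MissionX/Instrument/PT1M/20150101",
    "https://example.gov/data/missionx_20150101.cdf",
)
```

A nightly batch job would call such a routine once per new or modified CDF file discovered in the repository listing.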
Riemannian mean and space-time adaptive processing using projection and inversion algorithms
NASA Astrophysics Data System (ADS)
Balaji, Bhashyam; Barbaresco, Frédéric
2013-05-01
The estimation of the covariance matrix from real data is required in the application of space-time adaptive processing (STAP) to an airborne ground moving target indication (GMTI) radar. A natural approach to estimation of the covariance matrix that is based on the information geometry has been proposed. In this paper, the output of the Riemannian mean is used in inversion and projection algorithms. It is found that the projection class of algorithms can yield very significant gains, even when the gains due to inversion-based algorithms are marginal over standard algorithms. The performance of the projection class of algorithms does not appear to be overly sensitive to the projected subspace dimension.
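The Riemannian (Karcher) mean of covariance matrices under the affine-invariant metric can be computed by a standard fixed-point iteration; this is a generic sketch assuming SciPy, not the authors' radar implementation, and the iteration count is an arbitrary choice.

```python
import numpy as np
from scipy.linalg import expm, inv, logm, sqrtm

def riemannian_mean(mats, iters=20):
    """Karcher mean of symmetric positive-definite matrices under the
    affine-invariant metric, via fixed-point iteration."""
    X = np.mean(mats, axis=0)  # start from the arithmetic mean
    for _ in range(iters):
        Xh = sqrtm(X)
        Xih = inv(Xh)
        # Average the data in the tangent space at the current estimate.
        T = np.mean([logm(Xih @ A @ Xih) for A in mats], axis=0)
        X = np.real(Xh @ expm(T) @ Xh)
    return X
```

For commuting matrices the Karcher mean reduces to the element-wise geometric mean, which gives a simple sanity check.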
On axis fringe projection: A new method for shape measurement
NASA Astrophysics Data System (ADS)
Sicardi-Segade, Analía; Estrada, J. C.; Martínez-García, Amalia; Garnica, Guillermo
2015-06-01
The traditional fringe projection technique requires a non-zero angle between the projection and observation directions to have sensitivity in the z direction. In this work, a new method for shape measurement using fringe projection is presented. In our case, the angle between the projection and observation directions is zero, but the system is nonetheless sensitive because the divergent projection changes the fringe frequency in each of the planes normal to the z-axis. The accuracy of the new method proposed here is validated against real measurements obtained with a coordinate measuring machine (CMM) and compared with the standard fringe projection technique. Finally, we discuss the advantages of the new method.
Zeng, Songjun; Liu, Hongrong; Yang, Qibin
2010-01-01
A method for three-dimensional (3D) reconstruction of macromolecular assemblies, the octahedral symmetry-adapted functions (OSAF) method, is introduced in this paper, and a series of formulations for reconstruction by the OSAF method is derived. To verify the feasibility and advantages of the method, two octahedrally symmetric macromolecules, the heat shock protein Degp24 and red-cell L ferritin, were used as examples for reconstruction by the OSAF method. The simulation was designed as follows: 2000 randomly oriented projections of single particles with predefined Euler angles and centers of origin were generated, and then different levels of noise, with signal-to-noise ratios (S/N) of 0.1, 0.5, and 0.8, were added. The structures reconstructed by the OSAF method were in good agreement with the standard models, and the relative errors of the reconstructed structures with respect to the standard structures were very small even at high noise levels. These results indicate that the OSAF method is a feasible and efficient approach for reconstructing macromolecular structures and is able to suppress the influence of noise. PMID:20150955
Adaptation of a-Stratified Method in Variable Length Computerized Adaptive Testing.
ERIC Educational Resources Information Center
Wen, Jian-Bing; Chang, Hua-Hua; Hau, Kit-Tai
Test security has often been a problem in computerized adaptive testing (CAT) because the traditional wisdom of item selection overly exposes high discrimination items. The a-stratified (STR) design advocated by H. Chang and his collaborators, which uses items of less discrimination in earlier stages of testing, has been shown to be very…
Study of adaptive methods for data compression of scanner data
NASA Technical Reports Server (NTRS)
1977-01-01
The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced, without introducing significant degradation, such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.
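The DPCM idea underlying the techniques above can be sketched in its basic closed-loop form; the adaptive variants studied in the report additionally vary the quantizer step with local image activity, which this simplified 1-D sketch omits. Function names and the step size are illustrative.

```python
import numpy as np

def dpcm_encode(row, step):
    """Predict each sample by the previous *reconstructed* value and emit
    the quantized prediction error (closed-loop DPCM along a scan line)."""
    codes, pred = [], 0.0
    for x in row:
        q = int(round((float(x) - pred) / step))
        codes.append(q)
        pred += q * step  # track the decoder's state exactly
    return codes

def dpcm_decode(codes, step):
    """Rebuild the scan line by accumulating dequantized residuals."""
    out, pred = [], 0.0
    for q in codes:
        pred += q * step
        out.append(pred)
    return np.array(out)
```

Because the encoder predicts from the decoder's reconstruction, quantization error does not accumulate: each reconstructed sample stays within half a quantizer step of the original.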
Systems and Methods for Derivative-Free Adaptive Control
NASA Technical Reports Server (NTRS)
Yucelen, Tansel (Inventor); Kim, Kilsoo (Inventor); Calise, Anthony J. (Inventor)
2015-01-01
An adaptive control system is disclosed. The control system can control uncertain dynamic systems. The control system can employ one or more derivative-free adaptive control architectures. The control system can further employ one or more derivative-free weight update laws. The derivative-free weight update laws can comprise a time-varying estimate of an ideal vector of weights. The control system of the present invention can therefore quickly stabilize systems that undergo sudden changes in dynamics, caused by, for example, sudden changes in weight. Embodiments of the present invention can also provide a less complex control system than existing adaptive control systems. The control system can control aircraft and other dynamic systems, such as, for example, those with non-minimum phase dynamics.
Lessons learned applying CASE methods/tools to Ada software development projects
NASA Technical Reports Server (NTRS)
Blumberg, Maurice H.; Randall, Richard L.
1993-01-01
This paper describes the lessons learned from introducing CASE methods/tools into organizations and applying them to actual Ada software development projects. This paper will be useful to any organization planning to introduce a software engineering environment (SEE) or evolving an existing one. It contains management level lessons learned, as well as lessons learned in using specific SEE tools/methods. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. They reflect the front end efforts by those projects to understand the tools/methods, initial experiences in their introduction and use, and later experiences in the use of specific tools/methods and the introduction of new ones.
A New Method to Cancel RFI---The Adaptive Filter
NASA Astrophysics Data System (ADS)
Bradley, R.; Barnbaum, C.
1996-12-01
An increasing amount of precious radio frequency spectrum in the VHF, UHF, and microwave bands is being utilized each year to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Some radio spectral lines of astronomical interest occur outside the protected radio astronomy bands and are unobservable due to heavy interference. Conventional approaches to deal with RFI include legislation, notch filters, RF shielding, and post-processing techniques. Although these techniques are somewhat successful, each suffers from insufficient interference cancellation. One concept of interference excision that has not been used before in radio astronomy is adaptive interference cancellation. The concept of adaptive interference canceling was first introduced in the mid-1970s as a way to reduce unwanted noise in low frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartment of automobiles. Only recently have high-speed digital filter chips made adaptive filtering possible in a bandwidth as large as a few megahertz, finally opening the door to astronomical uses. The system consists of two receivers: the main beam of the radio telescope receives the desired signal corrupted by RFI coming in the sidelobes, and the reference antenna receives only the RFI. The reference antenna is processed using a digital adaptive filter and then subtracted from the signal in the main beam, thus producing the system output. The weights of the digital filter are adjusted by way of an algorithm that minimizes, in a least-squares sense, the power output of the system. Through an adaptive-iterative process, the interference canceler will lock onto the RFI and the filter will adjust itself to minimize the effect of the RFI at the system output. We are building a prototype 100 MHz receiver and will measure the cancellation
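The two-receiver canceller described above is conventionally implemented with the least-mean-squares (LMS) algorithm. A minimal sketch follows; the tap count and step size are illustrative assumptions, not the prototype's design.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.005):
    """Adaptive noise canceller: filter the reference channel to estimate
    the interference seen in the primary channel, subtract it, and adapt
    the filter weights (LMS) to minimize the output power."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # newest sample first
        y = w @ x                # current estimate of the interference
        e = primary[n] - y       # system output: signal with RFI removed
        w += 2.0 * mu * e * x    # LMS weight update
        out[n] = e
    return out
```

Because the astronomical signal is uncorrelated with the reference channel, minimizing output power removes only the interference; after convergence the residual interference power is a small fraction of its original level.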
2012-01-01
Background Study-based global health interventions, especially those that are conducted on an international or multi-site basis, frequently require site-specific adaptations in order to (1) respond to socio-cultural differences in risk determinants, (2) to make interventions more relevant to target population needs, and (3) in recognition of 'global health diplomacy' issues. We report on the adaptations development, approval and implementation process from the Project Accept voluntary counseling and testing, community mobilization and post-test support services intervention. Methods We reviewed all relevant documentation collected during the study intervention period (e.g. monthly progress reports; bi-annual steering committee presentations) and conducted a series of semi-structured interviews with project directors and between 12 and 23 field staff at each study site in South Africa, Zimbabwe, Thailand and Tanzania during 2009. Respondents were asked to describe (1) the adaptations development and approval process and (2) the most successful site-specific adaptations from the perspective of facilitating intervention implementation. Results Across sites, proposed adaptations were identified by field staff and submitted to project directors for review on a formally planned basis. The cross-site intervention sub-committee then ensured fidelity to the study protocol before approval. Successfully-implemented adaptations included: intervention delivery adaptations (e.g. development of tailored counseling messages for immigrant labour groups in South Africa); political, environmental and infrastructural adaptations (e.g. use of local community centers as VCT venues in Zimbabwe); religious adaptations (e.g. dividing clients by gender in Muslim areas of Tanzania); economic adaptations (e.g. co-provision of income generating skills classes in Zimbabwe); epidemiological adaptations (e.g. provision of 'youth-friendly' services in South Africa, Zimbabwe and Tanzania), and
Adaptive region of interest method for analytical micro-CT reconstruction.
Yang, Wanneng; Xu, Xiaochun; Bi, Kun; Zeng, Shaoqun; Liu, Qian; Chen, Shangbin
2011-01-01
Real-time imaging is important in automatic successive inspection with micro-computerized tomography (micro-CT). Generally, the size of the detector is chosen according to the most probable size of the measured object so that all the projection data are acquired. Given sufficient imaging area and resolution, the X-ray detector is larger than the specimen's projection area, which results in redundant data in the sinogram. The process of real-time micro-CT is computation-intensive because of the large amounts of source and destination data. The speed of the reconstruction algorithm cannot always meet the requirements of real-time applications. A preprocessing method called adaptive region of interest (AROI), which detects the object's boundaries automatically to focus on the active sinogram regions, is introduced into the analytical reconstruction algorithm in this paper. The AROI method reduces the volume of the reconstruction data and thus directly accelerates the reconstruction process. It has been further shown that image quality is not compromised when applying AROI, while the reconstruction speed is increased as the square of the ratio of the sizes of the detector and the specimen slice. In practice, the conch reconstruction experiment indicated that the process is accelerated by 5.2 times with AROI and the imaging quality is not degraded. Therefore, the AROI method improves the speed of analytical micro-CT reconstruction significantly. PMID:21422587
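The boundary-detection step of an adaptive ROI can be sketched as a simple threshold test over the sinogram columns; the threshold and function names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def adaptive_roi(sinogram, threshold):
    """Crop a sinogram (angles x detector channels) to the channels that
    actually see the object, discarding the empty margins before
    reconstruction."""
    active = (sinogram > threshold).any(axis=0)  # channel hit at any angle?
    cols = np.flatnonzero(active)
    if cols.size == 0:
        return sinogram  # nothing detected: keep the full width
    return sinogram[:, cols[0]:cols[-1] + 1]
```

Cropping the detector axis from its full width down to the object's extent is what yields the quadratic speedup cited above, since analytical reconstruction cost scales with the square of the transverse size.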
Evaluation of intrinsic respiratory signal determination methods for 4D CBCT adapted for mice
Martin, Rachael; Pan, Tinsu; Rubinstein, Ashley; Court, Laurence; Ahmad, Moiz
2015-01-15
Purpose: 4D CT imaging in mice is important in a variety of areas including studies of lung function and tumor motion. A necessary step in 4D imaging is obtaining a respiratory signal, which can be done through an external system or intrinsically through the projection images. A number of methods have been developed that can successfully determine the respiratory signal from cone-beam projection images of humans; however, only a few have been utilized in a preclinical setting, and most of these rely on step-and-shoot style imaging. The purpose of this work is to assess and adapt several successful methods developed for humans to an image-guided preclinical radiation therapy system. Methods: Respiratory signals were determined from the projection images of free-breathing mice scanned on the X-RAD system using four methods: the so-called Amsterdam shroud method, a method based on the phase of the Fourier transform, a pixel intensity method, and a center of mass method. The Amsterdam shroud method was modified so the sharp inspiration peaks associated with anesthetized mouse breathing could be detected. Respiratory signals were used to sort projections into phase bins and 4D images were reconstructed. Error and standard deviation in the assignment of phase bins for the four methods compared to a manual method considered to be ground truth were calculated for a range of region of interest (ROI) sizes. Qualitative comparisons were additionally made between the 4D images obtained using each of the methods and the manual method. Results: 4D images were successfully created for all mice with each of the respiratory signal extraction methods. Only minimal qualitative differences were noted between each of the methods and the manual method. The average error (and standard deviation) in phase bin assignment was 0.24 ± 0.08 (0.49 ± 0.11) phase bins for the Fourier transform method, 0.09 ± 0.03 (0.31 ± 0.08) phase bins for the modified Amsterdam shroud method, 0
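A much-simplified sketch of the Amsterdam shroud idea: collapse each projection laterally and track the vertical centre of mass of the resulting 1-D profiles as a breathing surrogate. The real method's log transform, edge enhancement, and registration steps are omitted, and the centre-of-mass tracker is an assumption made here for brevity.

```python
import numpy as np

def amsterdam_shroud(projections):
    """Crude respiratory surrogate from cone-beam projections.
    projections: array of shape (n_projections, n_rows, n_cols)."""
    shroud = projections.sum(axis=2)   # collapse laterally: (n_proj, n_rows)
    rows = np.arange(shroud.shape[1])
    # Centre of mass of each collapsed profile tracks the dominant moving
    # edge (in mice, the sharp inspiration motion of the diaphragm).
    com = (shroud * rows).sum(axis=1) / shroud.sum(axis=1)
    return com - com.mean()            # zero-mean respiratory trace
```

On synthetic projections containing a bright band oscillating vertically, the extracted trace follows the band's motion almost exactly.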
NASA Astrophysics Data System (ADS)
Willems, Patrick
2015-04-01
case study), following the approach proposed by Ntegeka et al. (2014). When the consequences of given scenarios are high, they should be taken into account in the decision making process. For the Flanders' guidelines, it was agreed among the members of the regional Coordination Commission Integrated Water Management to consider (in addition to the traditional range of return periods up to 5 years) a 20-year design storm for scenario investigation. It was motivated by the outcome of this study that under the high climate scenario a 20-year storm would become - in order of magnitude - a 5-year storm. If, after a design for a 5-year storm, the 20-year scenario investigation concludes that specific zones along the sewer system would suffer severe additional impacts, it is recommended to apply changes to the system or to design flexible adaptation measures for the future (depending on which of the options would be most cost-efficient). Another adaptation action agreed was the installation of storm water infiltration devices at private houses, and to make these mandatory for new and renovated houses. Such installation was found to be cost-effective in any of the climate scenarios. This is one way of dealing with climate uncertainties, but lessons learned from other cases/applications are highly welcomed. References Ntegeka, V., Baguis, P., Roulin, E., Willems, P. (2014), 'Developing tailored climate change scenarios for hydrological impact assessments', Journal of Hydrology, 508C, 307-321 Willems, P. (2013). 'Revision of urban drainage design rules after assessment of climate change impacts on precipitation extremes at Uccle, Belgium', Journal of Hydrology, 496, 166-177 Willems, P., Arnbjerg-Nielsen, K., Olsson, J., Nguyen, V.T.V. (2012), 'Climate change impact assessment on urban rainfall extremes and urban drainage: methods and shortcomings', Atmospheric Research, 103, 106-118
The use of the spectral method within the fast adaptive composite grid method
McKay, S.M.
1994-12-31
The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on grids with varying discretizations, using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the resulting accuracy of this hybrid method outside of the subdomain will be investigated.
Hemakom, Apit; Goverdovsky, Valentin; Looney, David; Mandic, Danilo P
2016-04-13
An extension to multivariate empirical mode decomposition (MEMD), termed adaptive-projection intrinsically transformed MEMD (APIT-MEMD), is proposed to cater for power imbalances and inter-channel correlations in real-world multichannel data. It is shown that the APIT-MEMD exhibits similar or better performance than MEMD for a large number of projection vectors, whereas it outperforms MEMD for the critical case of a small number of projection vectors within the sifting algorithm. We also employ the noise-assisted APIT-MEMD within our proposed intrinsic multiscale analysis framework and illustrate the advantages of such an approach in notoriously noise-dominated cooperative brain-computer interface (BCI) based on the steady-state visual evoked potentials and the P300 responses. Finally, we show that for a joint cognitive BCI task, the proposed intrinsic multiscale analysis framework improves system performance in terms of the information transfer rate. PMID:26953174
NASA Technical Reports Server (NTRS)
Kopasakis, George
2005-01-01
This year, an improved adaptive-feedback control method was demonstrated that suppresses thermoacoustic instabilities in a liquid-fueled combustor of a type used in aircraft engines. Extensive research has been done to develop lean-burning (low fuel-to-air ratio) combustors that can reduce emissions throughout the mission cycle to reduce the environmental impact of aerospace propulsion systems. However, these lean-burning combustors are susceptible to thermoacoustic instabilities (high-frequency pressure waves), which can fatigue combustor components and even downstream turbine blades. This can significantly decrease the safe operating life of the combustor and turbine. Thus, suppressing the thermoacoustic combustor instabilities is an enabling technology for meeting the low-emission goals of the NASA Ultra-Efficient Engine Technology (UEET) Project.
Adaptive finite element methods for two-dimensional problems in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1994-01-01
Some recent results obtained using solution-adaptive finite element methods for two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issues of adaptive finite element methods; the new methodology is validated by computing demonstration problems and comparing the resulting stress intensity factors to analytical results.
Post-project appraisals in adaptive management of river channel restoration.
Downs, Peter W; Kondolf, G Mathias
2002-04-01
Post-project appraisals (PPAs) can evaluate river restoration schemes in relation to their compliance with design, their short-term performance attainment, and their longer-term geomorphological compatibility with the catchment hydrology and sediment transport processes. PPAs provide the basis for communicating the results of one restoration scheme to another, thereby improving future restoration designs. They also supply essential performance feedback needed for adaptive management, in which management actions are treated as experiments. PPAs allow river restoration success to be defined both in terms of the scheme attaining its performance objectives and in terms of its providing a significant learning experience. Different levels of investment in PPA, in terms of pre-project data and follow-up information, bring with them different degrees of understanding and thus different abilities to gauge both types of success. We present four case studies to illustrate how the commitment to PPA has determined the understanding achieved in each case. In Moore's Gulch (California, USA), understanding was severely constrained by the lack of pre-project data and post-implementation monitoring. Pre-project data existed for the Kitswell Brook (Hertfordshire, UK), but the monitoring consisted of only one site visit, and thus the understanding achieved relates primarily to design compliance issues. The monitoring undertaken for Deep Run (Maryland, USA) and the River Idle (Nottinghamshire, UK) enabled some understanding of the short-term performance of each scheme. The transferable understanding gained from each case study is used to develop an illustrative five-fold classification of geomorphological PPAs (full, medium-term, short-term, one-shot, and remains) according to their potential as learning experiences. The learning experience is central to adaptive management but rarely articulated in the literature. Here, we gauge the potential via superimposition onto a previous schematic
NASA Astrophysics Data System (ADS)
Menz, Christoph
2016-04-01
Climate change interferes with various aspects of the socio-economic system. One important aspect is its influence on animal husbandry, especially dairy farming. Dairy cows are usually kept in naturally ventilated barns (NVBs), which are particularly vulnerable to extreme events because of their limited capacity for adaptation. Effective adaptation to high outdoor temperatures, for example, is only possible under certain wind and humidity conditions. High-temperature extremes are expected to increase in number and strength under climate change. To assess the impact of this change on NVBs and dairy cows, the accompanying changes in wind and humidity must also be considered; that is, we need to consider the multivariate structure of future temperature extremes. The OptiBarn project aims to develop sustainable adaptation strategies for dairy housing in Europe under climate change by considering the multivariate structure of high-temperature extremes. In a first step, we identify various multivariate high-temperature extremes for three core regions in Europe. With respect to dairy cows in NVBs, we will focus on the wind and humidity fields during high-temperature events. In a second step, we will use the CORDEX-EUR-11 ensemble to evaluate the capability of the RCMs to model such events and to assess their potential for future change. By transferring outdoor conditions to indoor climate and animal well-being, the results of this assessment can be used to develop technical, architectural, and animal-specific adaptation strategies for high-temperature extremes.
Locally-Adaptive, Spatially-Explicit Projection of U.S. Population for 2030 and 2050
McKee, Jacob J.; Rose, Amy N.; Bright, Eddie A.; Huynh, Timmy N.; Bhaduri, Budhendra L.
2015-02-03
Localized adverse events, including natural hazards, epidemiological events, and human conflict, underscore the criticality of quantifying and mapping current population. Moreover, knowing the spatial distribution of future population allows for increased preparation in the event of an emergency. Building on the spatial interpolation technique previously developed for high resolution population distribution data (LandScan Global and LandScan USA), we have constructed an empirically-informed spatial distribution of the projected population of the contiguous U.S. for 2030 and 2050. Whereas most current large-scale, spatially explicit population projections typically rely on a population gravity model to determine areas of future growth, our projection model departs from these by accounting for multiple components that affect population distribution. Modelled variables, which included land cover, slope, distances to larger cities, and a moving average of current population, were locally adaptive and geographically varying. The resulting weighted surface was used to determine which areas had the greatest likelihood for future population change. Population projections of county level numbers were developed using a modified version of the U.S. Census's projection methodology, with the U.S. Census's official projection as the benchmark. Applications of our model include, but are not limited to, suitability modelling, service area planning for governmental agencies, consequence assessment, mitigation planning and implementation, and assessment of spatially vulnerable populations.
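As a rough illustration of a locally adaptive weighted surface of this kind, the sketch below combines synthetic layers (slope, distance to cities, and a moving average of current population) into a normalized weight surface and distributes a county-level growth increment across grid cells in proportion to it. All data, variable choices, and weighting decisions are hypothetical; this is not the LandScan model.

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (50, 50)
pop = rng.poisson(5.0, size=shape).astype(float)   # current gridded population (synthetic)
slope = rng.uniform(0.0, 1.0, size=shape)          # normalized terrain slope
dist_city = rng.uniform(0.0, 1.0, size=shape)      # normalized distance to the nearest large city

# 3x3 moving average of current population as a local attractiveness signal
pad = np.pad(pop, 1, mode="edge")
mov_avg = sum(pad[i:i + shape[0], j:j + shape[1]] for i in range(3) for j in range(3)) / 9.0

# Weighted suitability surface: flatter, closer-to-city, denser areas score higher
weight = (1.0 - slope) * (1.0 - dist_city) * mov_avg
weight /= weight.sum()

# Distribute a projected county-level increment across cells in proportion to the weights
county_growth = 10000.0
pop_2030 = pop + county_growth * weight
```

Because the weights are normalized, the distributed cell-level growth sums exactly to the county-level projection, which is the property that lets a gridded surface be benchmarked against an official aggregate projection.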
Locally adaptive, spatially explicit projection of US population for 2030 and 2050
McKee, Jacob J.; Rose, Amy N.; Bright, Edward A.; Huynh, Timmy; Bhaduri, Budhendra L.
2015-01-01
Localized adverse events, including natural hazards, epidemiological events, and human conflict, underscore the criticality of quantifying and mapping current population. Building on the spatial interpolation technique previously developed for high-resolution population distribution data (LandScan Global and LandScan USA), we have constructed an empirically informed spatial distribution of projected population of the contiguous United States for 2030 and 2050, depicting one of many possible population futures. Whereas most current large-scale, spatially explicit population projections typically rely on a population gravity model to determine areas of future growth, our projection model departs from these by accounting for multiple components that affect population distribution. Modeled variables, which included land cover, slope, distances to larger cities, and a moving average of current population, were locally adaptive and geographically varying. The resulting weighted surface was used to determine which areas had the greatest likelihood for future population change. Population projections of county level numbers were developed using a modified version of the US Census’s projection methodology, with the US Census’s official projection as the benchmark. Applications of our model include incorporating various scenario-driven events to produce a range of spatially explicit population futures for suitability modeling, service area planning for governmental agencies, consequence assessment, mitigation planning and implementation, and assessment of spatially vulnerable populations. PMID:25605882
Evaluation of an adaptive beamforming method for hearing aids.
Greenberg, J E; Zurek, P M
1992-03-01
In this paper evaluations of a two-microphone adaptive beamforming system for hearing aids are presented. The system, based on the constrained adaptive beamformer described by Griffiths and Jim [IEEE Trans. Antennas Propag. AP-30, 27-34 (1982)], adapts to preserve target signals from straight ahead and to minimize jammer signals arriving from other directions. Modifications of the basic Griffiths-Jim algorithm are proposed to alleviate problems of target cancellation and misadjustment that arise in the presence of strong target signals. The evaluations employ both computer simulations and a real-time hardware implementation and are restricted to the case of a single jammer. Performance is measured by the spectrally weighted gain in the target-to-jammer ratio in the steady state. Results show that in environments with relatively little reverberation: (1) the modifications allow good performance even with misaligned arrays and high input target-to-jammer ratios; and (2) performance is better with a broadside array with 7-cm spacing between microphones than with a 26-cm broadside or a 7-cm endfire configuration. Performance degrades in reverberant environments; at the critical distance of a room, improvement with a practical system is limited to a few dB. PMID:1564202
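The structure of a Griffiths-Jim beamformer can be sketched for an idealized two-microphone case: the sum channel preserves the broadside target, the difference channel cancels it and retains only the jammer, and an LMS filter subtracts the jammer component from the sum channel. The simple gain-difference jammer model and all parameters below are assumptions for illustration, not the modified algorithm evaluated in this work.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
target = rng.standard_normal(n)   # target from straight ahead: identical on both microphones
jammer = rng.standard_normal(n)   # off-axis jammer, modeled here as a simple gain difference
a = 0.5                           # assumed jammer gain on microphone 2

mic1 = target + jammer
mic2 = target + a * jammer

s = mic1 + mic2   # sum channel: target reinforced, jammer scaled by (1 + a)
x = mic1 - mic2   # difference channel: target cancelled, jammer scaled by (1 - a)

# LMS filter on the difference channel cancels the jammer component of the sum channel
L, mu = 8, 0.01
w = np.zeros(L)
out = np.zeros(n)
for k in range(L - 1, n):
    xk = x[k - L + 1 : k + 1][::-1]  # most recent L difference-channel samples, newest first
    out[k] = s[k] - w @ xk           # beamformer output: jammer-suppressed sum channel
    w += mu * out[k] * xk            # LMS weight update
```

After convergence the output approximates twice the target, and the weight on the current sample tends toward (1 + a) / (1 - a) = 3 for this toy model. The target-cancellation and misadjustment problems addressed by the proposed modifications arise precisely because, in practice, the target does not cancel perfectly in the difference channel.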
Recommendable Communication Method in Project Management
NASA Astrophysics Data System (ADS)
Watanabe, Kosei
Communication among project stakeholders plays a significant role in project execution; poor communication can derail a project. Generally speaking, most Japanese assume that communication between Japanese people is easy, because we live with a relatively homogeneous set of values. My experience suggests the opposite, because most Japanese have no training in communication theory or practice. For example, when making a request, they rarely state it explicitly, expecting the listener to infer it from the context. This is a typical pattern of Japanese communication. I recommend that anyone who wants to be a good communicator first clarify their mission, purpose, and goal before making a request of someone. It is self-evident that an unclear request never leads to proper results. Communication carries inherent difficulties in a high-context culture such as ours.
Method and apparatus for adaptive force and position control of manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1989-01-01
The present invention discloses systematic methods and apparatus for the design of real-time controllers. Real-time control employs adaptive force/position control using feedforward and feedback controllers: the feedforward controller is the inverse of the linearized model of robot dynamics and contains only proportional-double-derivative terms. The feedback controller, of the proportional-integral-derivative type, ensures that manipulator joints follow reference trajectories and achieves robust tracking of step-plus-exponential trajectories, all in real time. The adaptive controller includes adaptive force and position control within a hybrid control architecture. The adaptive force controller achieves tracking of desired force setpoints, and the adaptive position controller accomplishes tracking of desired position trajectories. Circuits in the adaptive feedback and feedforward controllers are varied by adaptation laws.
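A minimal sketch of the feedforward-plus-feedback structure described above, for a single joint with an assumed second-order dynamics model: a model-inverse feedforward term plus a PID feedback loop tracking a reference trajectory. The plant model, gains, and trajectory are illustrative assumptions, not the patented adaptive controller (no adaptation laws are shown).

```python
import numpy as np

# Assumed single-joint dynamics for illustration: m*qdd + b*qd = tau
m, b = 2.0, 0.5           # true plant parameters
m_hat, b_hat = 1.8, 0.6   # imperfect model used by the feedforward term

dt, T = 1e-3, 5.0
t = np.arange(0.0, T, dt)
q_ref, qd_ref, qdd_ref = np.sin(t), np.cos(t), -np.sin(t)  # reference trajectory

Kp, Ki, Kd = 400.0, 100.0, 40.0   # hand-tuned PID feedback gains
q, qd, ei = 0.0, 0.0, 0.0
err = np.zeros(len(t))
for k in range(len(t)):
    e = q_ref[k] - q
    ei += e * dt
    ed = qd_ref[k] - qd
    tau = (m_hat * qdd_ref[k] + b_hat * qd_ref[k]   # model-inverse feedforward
           + Kp * e + Ki * ei + Kd * ed)            # PID feedback
    qdd = (tau - b * qd) / m                        # true plant response
    qd += qdd * dt                                  # semi-implicit Euler integration
    q += qd * dt
    err[k] = e
```

The feedforward term carries most of the required torque; the feedback loop supplies only the residual caused by the model mismatch, so the tracking error settles to a small fraction of the reference amplitude. In the patented design, adaptation laws would additionally adjust the controller terms online.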
The Action-Project Method in Counseling Psychology
ERIC Educational Resources Information Center
Young, Richard A.; Valach, Ladislav; Domene, Jose F.
2005-01-01
The qualitative action-project method is described as an appropriate and heuristic qualitative research method for use in counseling psychology. Action theory, which addresses human intentional, goal-directed action, project, and career, provides the conceptual framework for the method. Data gathering and analysis involve multiple procedures to…
The image adaptive method for solder paste 3D measurement system
NASA Astrophysics Data System (ADS)
Xiaohui, Li; Changku, Sun; Peng, Wang
2015-03-01
The extensive application of Surface Mount Technology (SMT) requires various measurement methods to evaluate the circuit board. Solder paste 3D measurement, in which laser light is projected onto the printed circuit board (PCB) surface, is one of the critical methods. Local oversaturation, arising from the non-uniform reflectivity of the PCB surface, leads to inaccurate measurement. This paper reports a novel adaptive optical imaging method that remedies local oversaturation in solder paste measurement. A liquid crystal on silicon (LCoS) device and an image sensor (CCD or CMOS) are combined into a high dynamic range image (HDRI) acquisition system. The distinguishing characteristic of the new method is that the adjusted image is captured by this specially designed HDRI acquisition system, programmed via an LCoS mask. The LCoS mask is formed from an HDRI combined with an image fusion algorithm, and works by separating the laser light from the locally oversaturated region. Experimental results demonstrate that the method significantly improves the accuracy of the solder paste 3D measurement system under local oversaturation.
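The mask-and-fuse idea can be sketched with a toy simulation: capture with a fully open mask, detect saturated pixels, attenuate only those pixels on a second capture, and divide the attenuation back out during fusion. The sensor model, attenuation factor, and synthetic scene below are assumptions for illustration, not the paper's acquisition system.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 4.0, size=(64, 64))  # "true" radiance, exceeding the sensor range [0, 1]

def capture(radiance, mask):
    """Simulated sensor: the LCoS mask modulates the light, then each pixel clips at full well."""
    return np.clip(radiance * mask, 0.0, 1.0)

# First exposure with a fully open mask; locate the locally oversaturated pixels
mask = np.ones_like(scene)
img1 = capture(scene, mask)
sat = img1 >= 1.0

# Program the mask to attenuate only the saturated regions, then capture again
atten = 0.2
mask[sat] = atten
img2 = capture(scene, mask)

# Fuse: divide the attenuation back out where the mask was applied
hdr = np.where(sat, img2 / atten, img1)
```

In this idealized model the fused image recovers the full radiance range exactly, because every attenuated pixel falls back inside the sensor's linear range; a real system must additionally handle laser speckle, mask-to-sensor registration, and the non-uniform PCB reflectivity that motivates the method.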
Lorenz, Susanne; Dessai, Suraje; Forster, Piers M.; Paavola, Jouni
2015-01-01
Visualizations are widely used in the communication of climate projections. However, their effectiveness has rarely been assessed among their target audience. Given recent calls to increase the usability of climate information through the tailoring of climate projections, it is imperative to assess the effectiveness of different visualizations. This paper explores the complexities of tailoring through an online survey conducted with 162 local adaptation practitioners in Germany and the UK. The survey examined respondents’ assessed and perceived comprehension (PC) of visual representations of climate projections as well as preferences for using different visualizations in communicating and planning for a changing climate. Comprehension and use are tested using four different graph formats, which are split into two pairs. Within each pair the information content is the same but is visualized differently. We show that even within a fairly homogeneous user group, such as local adaptation practitioners, there are clear differences in respondents’ comprehension of and preference for visualizations. We do not find a consistent association between assessed comprehension and PC or use within the two pairs of visualizations that we analysed. There is, however, a clear link between PC and use of graph format. This suggests that respondents use what they think they understand the best, rather than what they actually understand the best. These findings highlight that audience-specific targeted communication may be more complex and challenging than previously recognized. PMID:26460109