Interactive solution-adaptive grid generation procedure
NASA Technical Reports Server (NTRS)
Henderson, Todd L.; Choo, Yung K.; Lee, Ki D.
1992-01-01
TURBO-AD is an interactive solution adaptive grid generation program under development. The program combines an interactive algebraic grid generation technique and a solution adaptive grid generation technique into a single interactive package. The control point form uses a sparse collection of control points to algebraically generate a field grid. This technique provides local grid control capability and is well suited to interactive work due to its speed and efficiency. A mapping from the physical domain to a parametric domain was used to improve difficulties encountered near outwardly concave boundaries in the control point technique. Therefore, all grid modifications are performed on the unit square in the parametric domain, and the new adapted grid is then mapped back to the physical domain. The grid adaption is achieved by adapting the control points to a numerical solution in the parametric domain using control sources obtained from the flow properties. Then a new modified grid is generated from the adapted control net. This process is efficient because the number of control points is much less than the number of grid points and the generation of the grid is an efficient algebraic process. TURBO-AD provides the user with both local and global controls.
Kim, D.; Ghanem, R.
1994-12-31
Multigrid solution technique to solve a material nonlinear problem in a visual programming environment using the finite element method is discussed. The nonlinear equation of equilibrium is linearized to incremental form using Newton-Rapson technique, then multigrid solution technique is used to solve linear equations at each Newton-Rapson step. In the process, adaptive mesh refinement, which is based on the bisection of a pair of triangles, is used to form grid hierarchy for multigrid iteration. The solution process is implemented in a visual programming environment with distributed computing capability, which enables more intuitive understanding of solution process, and more effective use of resources.
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1983-01-01
This paper will develop a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which will enable the solution of creep initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation will give a thorough overview of current solution schemes and their short comings, the development of constrained time stepping algorithms as well as illustrate the results of several numerical experiments which benchmark the new procedure.
Transonic airfoil calculations using solution-adaptive grids
NASA Technical Reports Server (NTRS)
Holst, T. L.; Brown, D.
1981-01-01
A new algorithm for generating solution-adaptive grids (SAG) about airfoil configurations embedded in transonic flow is presented. The present SAG approach uses only the airfoil surface solution to recluster grid points on the airfoil surface, i.e., the reclustering problem is one dimension smaller than the flow-field calculation problem. Special controls automatically built into the elliptic grid generation procedure are then used to obtain grids with suitable interior behavior. This concept of redistributing grid points greatly simplifies the idea of solution-adaptive grids. Numerical results indicate significant improvements in accuracy for SAG grids relative to standard grids using the same number of points.
Combined LAURA-UPS hypersonic solution procedure
NASA Technical Reports Server (NTRS)
Wood, William A.; Thompson, Richard A.
1993-01-01
A combined solution procedure for hypersonic flowfields around blunted slender bodies was implemented using a thin-layer Navier-Stokes code (LAURA) in the nose region and a parabolized Navier-Stokes code (UPS) on the after body region. Perfect gas, equilibrium air, and non-equilibrium air solutions to sharp cones and a sharp wedge were obtained using UPS alone as a preliminary step. Surface heating rates are presented for two slender bodies with blunted noses, having used LAURA to provide a starting solution to UPS downstream of the sonic line. These are an 8 deg sphere-cone in Mach 5, perfect gas, laminar flow at 0 and 4 deg angles of attack and the Reentry F body at Mach 20, 80,000 ft equilibrium gas conditions for 0 and 0.14 deg angles of attack. The results indicate that this procedure is a timely and accurate method for obtaining aerothermodynamic predictions on slender hypersonic vehicles.
Adaptive EAGLE dynamic solution adaptation and grid quality enhancement
NASA Technical Reports Server (NTRS)
Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.
1992-01-01
In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.
Multigrid solution of internal flows using unstructured solution adaptive meshes
NASA Technical Reports Server (NTRS)
Smith, Wayne A.; Blake, Kenneth R.
1992-01-01
This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will primarily be applied, but not necessarily limited, to high speed internal flows in turbomachinery.
Adaptive Distributed Environment for Procedure Training (ADEPT)
NASA Technical Reports Server (NTRS)
Domeshek, Eric; Ong, James; Mohammed, John
2013-01-01
ADEPT (Adaptive Distributed Environment for Procedure Training) is designed to provide more effective, flexible, and portable training for NASA systems controllers. When creating a training scenario, an exercise author can specify a representative rationale structure using the graphical user interface, annotating the results with instructional texts where needed. The author's structure may distinguish between essential and optional parts of the rationale, and may also include "red herrings" - hypotheses that are essential to consider, until evidence and reasoning allow them to be ruled out. The system is built from pre-existing components, including Stottler Henke's SimVentive? instructional simulation authoring tool and runtime. To that, a capability was added to author and exploit explicit control decision rationale representations. ADEPT uses SimVentive's Scalable Vector Graphics (SVG)- based interactive graphic display capability as the basis of the tool for quickly noting aspects of decision rationale in graph form. The ADEPT prototype is built in Java, and will run on any computer using Windows, MacOS, or Linux. No special peripheral equipment is required. The software enables a style of student/ tutor interaction focused on the reasoning behind systems control behavior that better mimics proven Socratic human tutoring behaviors for highly cognitive skills. It supports fast, easy, and convenient authoring of such tutoring behaviors, allowing specification of detailed scenario-specific, but content-sensitive, high-quality tutor hints and feedback. The system places relatively light data-entry demands on the student to enable its rationale-centered discussions, and provides a support mechanism for fostering coherence in the student/ tutor dialog by including focusing, sequencing, and utterance tuning mechanisms intended to better fit tutor hints and feedback into the ongoing context.
Effects of adaptive refinement on the inverse EEG solution
NASA Astrophysics Data System (ADS)
Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.
1995-10-01
One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive accesss to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allow for significantly better estimates of electric and potential fileds within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magneteic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
Symmetry-adapted Wannier functions in the maximal localization procedure
NASA Astrophysics Data System (ADS)
Sakuma, R.
2013-06-01
A procedure to construct symmetry-adapted Wannier functions in the framework of the maximally localized Wannier function approach [Marzari and Vanderbilt, Phys. Rev. BPRBMDO1098-012110.1103/PhysRevB.56.12847 56, 12847 (1997); Souza, Marzari, and Vanderbilt, Phys. Rev. BPRBMDO1098-012110.1103/PhysRevB.65.035109 65, 035109 (2001)] is presented. In this scheme, the minimization of the spread functional of the Wannier functions is performed with constraints that are derived from symmetry properties of the specified set of the Wannier functions and the Bloch functions used to construct them, therefore one can obtain a solution that does not necessarily yield the global minimum of the spread functional. As a test of this approach, results of atom-centered Wannier functions for GaAs and Cu are presented.
Adaptive resolution simulation of salt solutions
NASA Astrophysics Data System (ADS)
Bevc, Staš; Junghans, Christoph; Kremer, Kurt; Praprotnik, Matej
2013-10-01
We present an adaptive resolution simulation of aqueous salt (NaCl) solutions at ambient conditions using the adaptive resolution scheme. Our multiscale approach concurrently couples the atomistic and coarse-grained models of the aqueous NaCl, where water molecules and ions change their resolution while moving from one resolution domain to the other. We employ standard extended simple point charge (SPC/E) and simple point charge (SPC) water models in combination with AMBER and GROMOS force fields for ion interactions in the atomistic domain. Electrostatics in our model are described by the generalized reaction field method. The effective interactions for water-water and water-ion interactions in the coarse-grained model are derived using structure-based coarse-graining approach while the Coulomb interactions between ions are appropriately screened. To ensure an even distribution of water molecules and ions across the simulation box we employ thermodynamic forces. We demonstrate that the equilibrium structural, e.g. radial distribution functions and density distributions of all the species, and dynamical properties are correctly reproduced by our adaptive resolution method. Our multiscale approach, which is general and can be used for any classical non-polarizable force-field and/or types of ions, will significantly speed up biomolecular simulation involving aqueous salt.
An adaptive embedded mesh procedure for leading-edge vortex flows
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.
1989-01-01
A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
Procedure for Adapting Direct Simulation Monte Carlo Meshes
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.
1992-01-01
A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.
Automatic procedure for generating symmetry adapted wavefunctions.
Johansson, Marcus; Veryazov, Valera
2017-01-01
Automatic detection of point groups as well as symmetrisation of molecular geometry and wavefunctions are useful tools in computational quantum chemistry. Algorithms for developing these tools as well as an implementation are presented. The symmetry detection algorithm is a clustering algorithm for symmetry invariant properties, combined with logical deduction of possible symmetry elements using the geometry of sets of symmetrically equivalent atoms. An algorithm for determining the symmetry adapted linear combinations (SALCs) of atomic orbitals is also presented. The SALCs are constructed with the use of projection operators for the irreducible representations, as well as subgroups for determining splitting fields for a canonical basis. The character tables for the point groups are auto generated, and the algorithm is described. Symmetrisation of molecules use a projection into the totally symmetric space, whereas for wavefunctions projection as well and partner function determination and averaging is used. The software has been released as a stand-alone, open source library under the MIT license and integrated into both computational and molecular modelling software.Graphical abstract.
a Procedural Solution to Model Roman Masonry Structures
NASA Astrophysics Data System (ADS)
Cappellini, V.; Saleri, R.; Stefani, C.; Nony, N.; De Luca, L.
2013-07-01
The paper will describe a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly we have focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we have chosen an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac - PAM, developed by IGN (Paris). We have employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with an open source programming language called Processing, useful for visual, animated or static, 2D or 3D, interactive creations. Using this computer language, a Java environment has been developed. Therefore, even if the procedural modelling reveals an accuracy level inferior to the one obtained by manual modelling (brick by brick), this method can be useful when taking into account the static evaluation on buildings (requiring quantitative aspects) and metric measures for restoration purposes.
Anisotropic Solution Adaptive Unstructured Grid Generation Using AFLR
NASA Technical Reports Server (NTRS)
Marcum, David L.
2007-01-01
An existing volume grid generation procedure, AFLR3, was successfully modified to generate anisotropic tetrahedral elements using a directional metric transformation defined at source nodes. The procedure can be coupled with a solver and an error estimator as part of an overall anisotropic solution adaptation methodology. It is suitable for use with an error estimator based on an adjoint, optimization, sensitivity derivative, or related approach. This offers many advantages, including more efficient point placement along with robust and efficient error estimation. It also serves as a framework for true grid optimization wherein error estimation and computational resources can be used as cost functions to determine the optimal point distribution. Within AFLR3 the metric transformation is implemented using a set of transformation vectors and associated aspect ratios. The modified overall procedure is presented along with details of the anisotropic transformation implementation. Multiple two-and three-dimensional examples are also presented that demonstrate the capability of the modified AFLR procedure to generate anisotropic elements using a set of source nodes with anisotropic transformation metrics. The example cases presented use moderate levels of anisotropy and result in usable element quality. Future testing with various flow solvers and methods for obtaining transformation metric information is needed to determine practical limits and evaluate the efficacy of the overall approach.
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and /or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme wih Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to qualify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
DMM assessments of attachment and adaptation: Procedures, validity and utility.
Farnfield, Steve; Hautamäki, Airi; Nørbech, Peder; Sahhar, Nicola
2010-07-01
This article gives a brief over view of the Dynamic-Maturational Model of attachment and adaptation (DMM; Crittenden, 2008) together with the various DMM assessments of attachment that have been developed for specific stages of development. Each assessment is discussed in terms of procedure, outcomes, validity, advantages and limitations, comparable procedures and areas for further research and validation. The aims are twofold: to provide an introduction to DMM theory and its application that underlie the articles in this issue of CCPP; and to provide researchers and clinicians with a guide to DMM assessments.
A novel hyperbolic grid generation procedure with inherent adaptive dissipation
Tai, C.H.; Yin, S.L.; Soong, C.Y.
1995-01-01
This paper reports a novel hyperbolic grid-generation with an inherent adaptive dissipation (HGAD), which is capable of improving the oscillation and overlapping of grid lines. In the present work upwinding differencing is applied to discretize the hyperbolic system and, thereby, to develop the adaptive dissipation coefficient. Complex configurations with the features of geometric discontinuity, exceptional concavity and convexity are used as the test cases for comparison of the present HGAD procedure with the conventional hyerbolic and elliptic ones. The results reveal that the HGAD method is superior in orthogonality and smoothness of the grid system. In addition, the computational efficiency of the flow solver may be improved by using the present HGAD procedure. 15 refs., 8 figs.
NIF Anti-Reflective Coating Solutions: Preparation, Procedures and Specifications
Suratwala, T; Carman, L; Thomas, I
2003-07-01
The following document contains a detailed description of the preparation procedures for the antireflective coating solutions used for NIF optics. This memo includes preparation procedures for the coating solutions (sections 2.0-4.0), specifications and vendor information of the raw materials used and on all equipment used (section 5.0), and QA specifications (section 6.0) and procedures (section 7.0) to determine quality and repeatability of all the coating solutions. There are different five coating solutions that will be used to coat NIF optics. These solutions are listed below: (1) Colloidal silica (3%) in ethanol (2) Colloidal silica (2%) in sec-butanol (3) Colloidal silica (9%) in sec-butanol (deammoniated) (4) HMDS treated silica (10%) in decane (5) GR650 (3.3%) in ethanol/sec-butanol The names listed above are to be considered the official name for the solution. They will be referred to by these names in the remainder of this document. Table 1 gives a summary of all the optics to be coated including: (1) the surface to be coated; (2) the type of solution to be used; (3) the coating method (meniscus, dip, or spin coating) to be used; (4) the type of coating (broadband, 1?, 2?, 3?) to be made; (5) number of optics to be coated; and (6) the type of post processing required (if any). Table 2 gives a summary of the batch compositions and measured properties of all five of these solutions.
Solution-adaptive finite element method in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1993-01-01
Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.
Multigrid solution strategies for adaptive meshing problems
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1995-01-01
This paper discusses the issues which arise when combining multigrid strategies with adaptive meshing techniques for solving steady-state problems on unstructured meshes. A basic strategy is described, and demonstrated by solving several inviscid and viscous flow cases. Potential inefficiencies in this basic strategy are exposed, and various alternate approaches are discussed, some of which are demonstrated with an example. Although each particular approach exhibits certain advantages, all methods have particular drawbacks, and the formulation of a completely optimal strategy is considered to be an open problem.
Transmission Line Adapted Analytical Power Charts Solution
NASA Astrophysics Data System (ADS)
Sakala, Japhet D.; Daka, James S. J.; Setlhaolo, Ditiro; Malichi, Alec Pulu
2016-08-01
The performance of a transmission line has been assessed over the years using power charts. These are graphical representations, drawn to scale, of the equations that describe the performance of transmission lines. Various quantities that describe the performance, such as sending end voltage, sending end power and compensation to give zero voltage regulation, may be deduced from the power charts. Usually required values are read off and then converted using the appropriate scales and known relationships. In this paper, the authors revisit this area of circle diagrams for transmission line performance. The work presented here formulates the mathematical model that analyses the transmission line performance from the power charts relationships and then uses them to calculate the transmission line performance. In this proposed approach, it is not necessary to draw the power charts for the solution. However the power charts may be drawn for the visual presentation. The method is based on applying derived equations and is simple to use since it does not require rigorous derivations.
Space-Time Adaptive Solution of Richards' Equation
NASA Astrophysics Data System (ADS)
Abhishek, C.; Miller, C. T.; Farthing, M. W.
2003-12-01
Efficient, robust simulation of groundwater flow in the unsaturated zone remains computationally expensive, especially for problems characterized by sharp fronts in both space and time. Standard approaches that employ uniform spatial and temporal discretizations for the numerical solution of these problems lead to inefficient and expensive simulations. In this work, we solve Richards' equation using adaptive methods in both space and time. Spatial adaption is based upon a coarse grid solve and gradient-based error indicators, while the spatial step size is adjusted using a fixed-order approximation. Temporal adaption is accomplished using variable-order, variable-step-size approximations based upon the backward difference formulas up to fifth order. Since the advantages of similar adaptive methods in time are now established, we evaluate our method by comparison with a uniform spatial discretization that is adaptive in time for four different test problems. The numerical results demonstrate that the proposed method provides a robust and efficient alternative to standard approaches for simulating variably saturated flow.
A spatially and temporally adaptive solution of Richards’ equation
NASA Astrophysics Data System (ADS)
Miller, Cass T.; Abhishek, Chandra; Farthing, Matthew W.
2006-04-01
Efficient, robust simulation of groundwater flow in the unsaturated zone remains computationally expensive, especially for problems characterized by sharp fronts in both space and time. Standard approaches that employ uniform spatial and temporal discretizations for the numerical solution of these problems lead to inefficient and expensive simulations. In this work, we solve Richards' equation using adaptive methods in both space and time. Spatial adaption is based upon a coarse grid solve and a gradient error indicator using a fixed-order approximation. Temporal adaption is accomplished using variable order, variable step size approximations based upon the backward difference formulas up to fifth order. Since the advantages of similar adaptive methods in time are now established, we evaluate our method by comparison with a uniform spatial discretization that is adaptive in time for four different one-dimensional test problems. The numerical results demonstrate that the proposed method provides a robust and efficient alternative to standard approaches for simulating variably saturated flow in one spatial dimension.
Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1996-01-01
A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: A linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures is briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
NASA Technical Reports Server (NTRS)
Rebstock, Rainer
1987-01-01
Numerical methods are developed for control of three dimensional adaptive test sections. The physical properties of the design problem occurring in the external field computation are analyzed, and a design procedure suited for solution of the problem is worked out. To do this, the desired wall shape is determined by stepwise modification of an initial contour. The necessary changes in geometry are determined with the aid of a panel procedure, or, with incident flow near the sonic range, with a transonic small perturbation (TSP) procedure. The designed wall shape, together with the wall deflections set during the tunnel run, are the input to a newly derived one-step formula which immediately yields the adapted wall contour. This is particularly important since the classical iterative adaptation scheme is shown to converge poorly for 3D flows. Experimental results obtained in the adaptive test section with eight flexible walls are presented to demonstrate the potential of the procedure. Finally, a method is described to minimize wall interference in 3D flows by adapting only the top and bottom wind tunnel walls.
A "Rearrangement Procedure" for Scoring Adaptive Tests with Review Options
ERIC Educational Resources Information Center
Papanastasiou, Elena C.; Reckase, Mark D.
2007-01-01
Because of the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT from an examinee's point of view is that in…
Adaptive Multigrid Solution of Stokes' Equation on CELL Processor
NASA Astrophysics Data System (ADS)
Elgersma, M. R.; Yuen, D. A.; Pratt, S. G.
2006-12-01
We are developing an adaptive multigrid solver for treating nonlinear elliptic partial-differential equations, needed for mantle convection problems. Since multigrid is being used for the complete solution, not just as a preconditioner, spatial difference operators are kept nearly diagonally dominant by increasing density of the coarsest grid in regions where coefficients have rapid spatial variation. At each time step, the unstructured coarse grid is refined in regions where coefficients associated with the differential operators or boundary conditions have rapid spatial variation, and coarsened in regions where there is more gradual spatial variation. For three-dimensional problems, the boundary is two-dimensional, and regions where coefficients change rapidly are often near two-dimensional surfaces, so the coarsest grid is only fine near two-dimensional subsets of the three-dimensional space. Coarse grid density drops off exponentially with distance from boundary surfaces and rapid-coefficient-change surfaces. This unstructured coarse grid results in the number of coarse grid voxels growing proportional to surface area, rather than proportional to volume. This results in significant computational savings for the coarse-grid solution. This coarse-grid solution is then refined for the fine-grid solution, and multigrid methods have memory usage and runtime proportional to the number of fine-grid voxels. This adaptive multigrid algorithm is being implemented on the CELL processor, where each chip has eight floating point processors and each processor operates on four floating point numbers each clock cycle. Both the adaptive grid algorithm and the multigrid solver have very efficient parallel implementations, in order to take advantage of the CELL processor architecture.
Stratified and Maximum Information Item Selection Procedures in Computer Adaptive Testing
ERIC Educational Resources Information Center
Deng, Hui; Ansley, Timothy; Chang, Hua-Hua
2010-01-01
In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…
Solution adaptive grids applied to low Reynolds number flow
NASA Astrophysics Data System (ADS)
de With, G.; Holdø, A. E.; Huld, T. A.
2003-08-01
A numerical study has been undertaken to investigate the use of a solution adaptive grid for flow around a cylinder in the laminar flow regime. The main purpose of this work is twofold. The first aim is to investigate the suitability of a grid adaptation algorithm and the reduction in mesh size that can be obtained. Secondly, the uniform asymmetric flow structures are ideal to validate the mesh structures due to mesh refinement and consequently the selected refinement criteria. The refinement variable used in this work is a product of the rate of strain and the mesh cell size, and contains two variables Cm and Cstr which determine the order of each term. By altering the order of either one of these terms the refinement behaviour can be modified.
A Procedure for Empirical Initialization of Adaptive Testing Algorithms.
ERIC Educational Resources Information Center
van der Linden, Wim J.
In constrained adaptive testing, the numbers of constraints needed to control the content of the tests can easily run into the hundreds. Proper initialization of the algorithm becomes a requirement because the presence of large numbers of constraints slows down the convergence of the ability estimator. In this paper, an empirical initialization of…
ERIC Educational Resources Information Center
Barrouillet, Pierre; Camos, Valerie; Perruchet, Pierre; Seron, Xavier
2004-01-01
This article presents a new model of transcoding numbers from verbal to arabic form. This model, called ADAPT, is developmental, asemantic, and procedural. The authors' main proposal is that the transcoding process shifts from an algorithmic strategy to the direct retrieval from memory of digital forms. Thus, the model is evolutive, adaptive, and…
Zhang, M; Westerly, D C; Mackie, T R
2011-08-07
With on-line image guidance (IG), prostate shifts relative to the bony anatomy can be corrected by realigning the patient with respect to the treatment fields. In image guided intensity modulated proton therapy (IG-IMPT), because the proton range is more sensitive to the material it travels through, the realignment may introduce large dose variations. This effect is studied in this work and an on-line adaptive procedure is proposed to restore the planned dose to the target. A 2D anthropomorphic phantom was constructed from a real prostate patient's CT image. Two-field laterally opposing spot 3D-modulation and 24-field full arc distal edge tracking (DET) plans were generated with a prescription of 70 Gy to the planning target volume. For the simulated delivery, we considered two types of procedures: the non-adaptive procedure and the on-line adaptive procedure. In the non-adaptive procedure, only patient realignment to match the prostate location in the planning CT was performed. In the on-line adaptive procedure, on top of the patient realignment, the kinetic energy for each individual proton pencil beam was re-determined from the on-line CT image acquired after the realignment and subsequently used for delivery. Dose distributions were re-calculated for individual fractions for different plans and different delivery procedures. The results show, without adaptive, that both the 3D-modulation and the DET plans experienced delivered dose degradation by having large cold or hot spots in the prostate. The DET plan had worse dose degradation than the 3D-modulation plan. The adaptive procedure effectively restored the planned dose distribution in the DET plan, with delivered prostate D(98%), D(50%) and D(2%) values less than 1% from the prescription. In the 3D-modulation plan, in certain cases the adaptive procedure was not effective to reduce the delivered dose degradation and yield similar results as the non-adaptive procedure. In conclusion, based on this 2D phantom
A Solution Adaptive Technique Using Tetrahedral Unstructured Grids
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2000-01-01
An adaptive unstructured grid refinement technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid Simple functions of flow quantities are employed to detect dominant features of the flowfield The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.
Cooperative solutions coupling a geometry engine and adaptive solver codes
NASA Technical Reports Server (NTRS)
Dickens, Thomas P.
1995-01-01
Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.
An adaptive solution domain algorithm for solving multiphase flow equations
NASA Astrophysics Data System (ADS)
Katyal, A. K.; Parker, J. C.
1992-01-01
An adaptive solution domain (ASD) finite-element model for simulating hydrocarbon spills has been developed that is computationally more efficient than conventional numerical methods. Coupled flow of water and oil with an air phase at constant pressure is considered. In the ASD formulation, the solution domain for water- and oil-flow equations is restricted by eliminating elements from the global matrix assembly which are not experiencing significant changes in fluid saturations or pressures. When any nodes of an element exhibit changes in fluid pressures more than a stipulated tolerance τ, or changes in fluid saturations greater than tolerance τ 2 during the current time step, it is labeled active and included in the computations for the next iteration. This formulation achieves computational efficiency by solving the flow equations for only the part of the domain where changes in fluid pressure or the saturations take place above stipulated tolerances. Examples involving infiltration and redistribution of oil in 1- and 2-D spatial domains are described to illustrate the application of the ASD method and the savings in the processor time achieved by this formulation. Savings in the computational effort up to 84% during infiltration and 63% during redistribution were achieved for the 2-D example problem.
A mineral separation procedure using hot Clerici solution
Rosenblum, Sam
1974-01-01
Careful boiling of Clerici solution in a Pyrex test tube in an oil bath is used to float minerals with densities up to 5.0 in order to obtain purified concentrates of monazite (density 5.1) for analysis. The "sink" and "float" fractions are trapped in solidified Clerici salts on rapid chilling, and the fractions are washed into separate filter papers with warm water. The hazardous nature of Clerici solution requires unusual care in handling.
Comparison of Disinfection Procedures on the Catheter Adapter-Transfer Set Junction.
Firanek, Catherine; Szpara, Edward; Polanco, Patricia; Davis, Ira; Sloand, James
2016-01-01
Peritonitis is a significant complication of peritoneal dialysis (PD), contributing to mortality and technique failure. Suboptimal disinfection and/or a loose connection at the catheter adapter-transfer set junction are forms of touch contamination that can compromise the integrity of the sterile fluid path and lead to peritonitis. Proper use of the right disinfectants for connections at the PD catheter adapter-transfer set interface can help eliminate bacteria at surface interfaces, secure connections, and prevent bacteria from entering into the sterile fluid pathway. Three studies were conducted to assess the antibacterial effects of various disinfecting agents and procedures, and ensuing security of the catheter adapter-transfer set junction. An open-soak disinfection procedure with 10% povidone iodine improves disinfection and tightness/security of catheter adapter-transfer set connection.
Adaptive correction procedure for TVL1 image deblurring under impulse noise
NASA Astrophysics Data System (ADS)
Bai, Minru; Zhang, Xiongjun; Shao, Qianqian
2016-08-01
For the problem of image restoration of observed images corrupted by blur and impulse noise, the widely used TVL1 model may deviate from both the data-acquisition model and the prior model, especially for high noise levels. In order to seek a solution of high recovery quality beyond the reach of the TVL1 model, we propose an adaptive correction procedure for TVL1 image deblurring under impulse noise. Then, a proximal alternating direction method of multipliers (ADMM) is presented to solve the corrected TVL1 model and its convergence is also established under very mild conditions. It is verified by numerical experiments that our proposed approach outperforms the TVL1 model in terms of signal-to-noise ratio (SNR) values and visual quality, especially for high noise levels: it can handle salt-and-pepper noise as high as 90% and random-valued noise as high as 70%. In addition, a comparison with a state-of-the-art method, the two-phase method, demonstrates the superiority of the proposed approach.
Measurement of Actinides in Molybdenum-99 Solution Analytical Procedure
Soderquist, Chuck Z.; Weaver, Jamie L.
2015-11-01
This document is a companion report to a previous report, PNNL 24519, Measurement of Actinides in Molybdenum-99 Solution, A Brief Review of the Literature, August 2015. In this companion report, we report a fast, accurate, newly developed analytical method for measurement of trace alpha-emitting actinide elements in commercial high-activity molybdenum-99 solution. Molybdenum-99 is widely used to produce ^{99m}Tc for medical imaging. Because it is used as a radiopharmaceutical, its purity must be proven to be extremely high, particularly for the alpha emitting actinides. The sample of ^{99}Mo solution is measured into a vessel (such as a polyethylene centrifuge tube) and acidified with dilute nitric acid. A gadolinium carrier is added (50 µg). Tracers and spikes are added as necessary. Then the solution is made strongly basic with ammonium hydroxide, which causes the gadolinium carrier to precipitate as hydrous Gd(OH)_{3}. The precipitate of Gd(OH)_{3} carries all of the actinide elements. The suspension of gadolinium hydroxide is then passed through a membrane filter to make a counting mount suitable for direct alpha spectrometry. The high-activity ^{99}Mo and ^{99m}Tc pass through the membrane filter and are separated from the alpha emitters. The gadolinium hydroxide, carrying any trace actinide elements that might be present in the sample, forms a thin, uniform cake on the surface of the membrane filter. The filter cake is first washed with dilute ammonium hydroxide to push the last traces of molybdate through, then with water. The filter is then mounted on a stainless steel counting disk. Finally, the alpha emitting actinide elements are measured by alpha spectrometry.
Shen, Yi
2013-05-01
A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
An Investigation of Procedures for Computerized Adaptive Testing Using Partial Credit Scoring.
ERIC Educational Resources Information Center
Koch, William R.; Dodd, Barbara G.
1989-01-01
Various aspects of the computerized adaptive testing (CAT) procedure for partial credit scoring were manipulated, focusing on the effects of the manipulations on operational characteristics of the CAT. The effects of item-pool size, item-pool information, and stepsizes used along the trait continuum were assessed. (TJH)
The block adaptive multigrid method applied to the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Pantelelis, Nikos
1993-01-01
In the present study, a scheme capable of solving very fast and robust complex nonlinear systems of equations is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on the prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (18-fold acceleration of the solution) using one fourth of the volumes of a global grid with the same solution accuracy for two test cases.
NASA Astrophysics Data System (ADS)
Chen, Y. Y.; Chen, S. H.; Zhao, W.
2017-07-01
An improved procedure for perturbation method is presented for constructing homoclinic solutions of strongly nonlinear self-excited oscillators. Compared with current perturbation methods based on nonlinear time transformations, the preference of the present method is that the explicit solutions, in respect to the original time variable, can be derived. In the paper, the equivalence and unified perturbation procedure with nonlinear time transformations, by which implicit solutions can be derived at nonlinear time scales, are firstly presented. Then an explicit generating homoclinic solution for power-law strongly nonlinear oscillator is derived with proposed hyperbolic function balance procedure. An approximation scheme is presented to improve the perturbation procedure and the explicit expression for nonlinear time transformation can be achieved. Applications and comparisons with other methods are performed to assess the advantage of the present method.
Three-dimensional Navier-Stokes calculations using solution-adapted grids
NASA Technical Reports Server (NTRS)
Henderson, T. L.; Huang, W.; Lee, K. D.; Choo, Y. K.
1993-01-01
A three-dimensional solution-adaptive grid generation technique is presented. The adaptation technique redistributes grid points to improve the accuracy of a flow solution without increasing the number of grid points. It is applicable to structured grids with a multiblock topology. The method uses a numerical mapping and potential theory to modify the initial grid distribution based on the properties of the flow solution on the initial grid. The technique is demonstrated with two examples - a transonic finite wing and a supersonic blunt fin. The advantages are shown by comparing flow solutions on the adapted grids with those on the initial grids.
NASA Technical Reports Server (NTRS)
Gossard, Myron L
1952-01-01
An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.
A Two Stage Solution Procedure for Production Planning System with Advance Demand Information
NASA Astrophysics Data System (ADS)
Ueno, Nobuyuki; Kadomoto, Kiyotaka; Hasuike, Takashi; Okuhara, Koji
We model for ‘Naiji System’ which is a unique corporation technique between a manufacturer and suppliers in Japan. We propose a two stage solution procedure for a production planning problem with advance demand information, which is called ‘Naiji’. Under demand uncertainty, this model is formulated as a nonlinear stochastic programming problem which minimizes the sum of production cost and inventory holding cost subject to a probabilistic constraint and some linear production constraints. By the convexity and the special structure of correlation matrix in the problem where inventory for different periods is not independent, we propose a solution procedure with two stages which are named Mass Customization Production Planning & Management System (MCPS) and Variable Mesh Neighborhood Search (VMNS) based on meta-heuristics. It is shown that the proposed solution procedure is available to get a near optimal solution efficiently and practical for making a good master production schedule in the suppliers.
A structured multi-block solution-adaptive mesh algorithm with mesh quality assessment
NASA Technical Reports Server (NTRS)
Ingram, Clint L.; Laflin, Kelly R.; Mcrae, D. Scott
1995-01-01
The dynamic solution adaptive grid algorithm, DSAGA3D, is extended to automatically adapt 2-D structured multi-block grids, including adaption of the block boundaries. The extension is general, requiring only input data concerning block structure, connectivity, and boundary conditions. Imbedded grid singular points are permitted, but must be prevented from moving in space. Solutions for workshop cases 1 and 2 are obtained on multi-block grids and illustrate both increased resolution of and alignment with the solution. A mesh quality assessment criteria is proposed to determine how well a given mesh resolves and aligns with the solution obtained upon it. The criteria is used to evaluate the grid quality for solutions of workshop case 6 obtained on both static and dynamically adapted grids. The results indicate that this criteria shows promise as a means of evaluating resolution.
Auto-adaptive statistical procedure for tracking structural health monitoring data
NASA Astrophysics Data System (ADS)
Smith, R. Lowell; Jannarone, Robert J.
2004-07-01
Whatever specific methods come to be preferred in the field of structural health/integrity monitoring, the associated raw data will eventually have to provide inputs for appropriate damage accumulation models and decision making protocols. The status of hardware under investigation eventually will be inferred from the evolution in time of the characteristics of this kind of functional figure of merit. Irrespective of the specific character of raw and processed data, it is desirable to develop simple, practical procedures to support damage accumulation modeling, status discrimination, and operational decision making in real time. This paper addresses these concerns and presents an auto-adaptive procedure developed to process data output from an array of many dozens of correlated sensors. These represent a full complement of information channels associated with typical structural health monitoring applications. What the algorithm does is learn in statistical terms the normal behavior patterns of the system, and against that backdrop, is configured to recognize and flag departures from expected behavior. This is accomplished using standard statistical methods, with certain proprietary enhancements employed to address issues of ill conditioning that may arise. Examples have been selected to illustrate how the procedure performs in practice. These are drawn from the fields of nondestructive testing, infrastructure management, and underwater acoustics. The demonstrations presented include the evaluation of historical electric power utilization data for a major facility, and a quantitative assessment of the performance benefits of net-centric, auto-adaptive computational procedures as a function of scale.
Bhatt, Divesh; Bahar, Ivet
2012-01-01
We introduce an adaptive weighted-ensemble procedure (aWEP) for efficient and accurate evaluation of first-passage rates between states for two-state systems. The basic idea that distinguishes aWEP from conventional weighted-ensemble (WE) methodology is the division of the configuration space into smaller regions and equilibration of the trajectories within each region upon adaptive partitioning of the regions themselves into small grids. The equilibrated conditional/transition probabilities between each pair of regions lead to the determination of populations of the regions and the first-passage times between regions, which in turn are combined to evaluate the first passage times for the forward and backward transitions between the two states. The application of the procedure to a non-trivial coarse–grained model of a 70-residue calcium binding domain of calmodulin is shown to efficiently yield information on the equilibrium probabilities of the two states as well as their first passage times. Notably, the new procedure is significantly more efficient than the canonical implementation of the WE procedure, and this improvement becomes even more significant at low temperatures. PMID:22979844
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
NASA Technical Reports Server (NTRS)
Wang, Gang
2003-01-01
A multi grid solution procedure for the numerical simulation of turbulent flows in complex geometries has been developed. A Full Multigrid-Full Approximation Scheme (FMG-FAS) is incorporated into the continuity and momentum equations, while the scalars are decoupled from the multi grid V-cycle. A standard kappa-Epsilon turbulence model with wall functions has been used to close the governing equations. The numerical solution is accomplished by solving for the Cartesian velocity components either with a traditional grid staggering arrangement or with a multiple velocity grid staggering arrangement. The two solution methodologies are evaluated for relative computational efficiency. The solution procedure with traditional staggering arrangement is subsequently applied to calculate the flow and temperature fields around a model Short Take-off and Vertical Landing (STOVL) aircraft hovering in ground proximity.
1993-06-01
An adapted toxicity characteristic leaching procedure was used to determine toxicity of soils to Daphnia magna . Soil samples were collected from U.S...vol/vol). Contaminated boils, Munition residues, Daphnia magna , EC50 Toxicity.
Zhu, Hongjian
2016-12-12
Seamless phase II/III clinical trials have attracted increasing attention recently. They mainly use Bayesian response adaptive randomization (RAR) designs. There has been little research into seamless clinical trials using frequentist RAR designs because of the difficulty in performing valid statistical inference following this procedure. The well-designed frequentist RAR designs can target theoretically optimal allocation proportions, and they have explicit asymptotic results. In this paper, we study the asymptotic properties of frequentist RAR designs with adjusted target allocation proportions, and investigate statistical inference for this procedure. The properties of the proposed design provide an important theoretical foundation for advanced seamless clinical trials. Our numerical studies demonstrate that the design is ethical and efficient.
An adaptive nonlinear solution scheme for reservoir simulation
Lett, G.S.
1996-12-31
Numerical reservoir simulation involves solving large, nonlinear systems of PDE with strongly discontinuous coefficients. Because of the large demands on computer memory and CPU, most users must perform simulations on very coarse grids. The average properties of the fluids and rocks must be estimated on these grids. These coarse grid {open_quotes}effective{close_quotes} properties are costly to determine, and risky to use, since their optimal values depend on the fluid flow being simulated. Thus, they must be found by trial-and-error techniques, and the more coarse the grid, the poorer the results. This paper describes a numerical reservoir simulator which accepts fine scale properties and automatically generates multiple levels of coarse grid rock and fluid properties. The fine grid properties and the coarse grid simulation results are used to estimate discretization errors with multilevel error expansions. These expansions are local, and identify areas requiring local grid refinement. These refinements are added adoptively by the simulator, and the resulting composite grid equations are solved by a nonlinear Fast Adaptive Composite (FAC) Grid method, with a damped Newton algorithm being used on each local grid. The nonsymmetric linear system of equations resulting from Newton`s method are in turn solved by a preconditioned Conjugate Gradients-like algorithm. The scheme is demonstrated by performing fine and coarse grid simulations of several multiphase reservoirs from around the world.
Adaptive clinical trials in tuberculosis: applications, challenges and solutions.
Davies, G R; Phillips, P P J; Jaki, T
2015-06-01
Drug development for tuberculosis (TB) faces numerous practical obstacles, including the need for combination treatment with at least three drugs, reliance on possibly unrepresentative animal models which may not reproduce key features of human disease and the lack of a well-validated surrogate endpoint for stable cure. Pivotal Phase III trials are large, lengthy and expensive, and the funding and capacity to conduct them are limited worldwide. More rational methods for the selection of priority regimens for Phase III are urgently needed to avoid costly late-stage failures. We examine the suitability of adaptive clinical trial designs for drug development in TB, focusing on designs for Phase IIB and III trials, where we believe the biggest gains in efficiency can be made. Key areas that may be addressed by such designs are improvements in the selection of doses and combinations of drugs in early clinical development and in maximising the power of confirmatory trials in multidrug-resistant TB, where patient numbers and complexity pose practical limitations. We encourage trialists and regulators in this area to consider the advantages that may be offered by these designs and their potential to more effectively and rapidly identify better treatment regimens for TB patients worldwide.
A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania.
Bradford, Kathryn; Abrahams, Leslie; Hegglin, Miriam; Klima, Kelly
2015-10-06
With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare data sets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.
A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania
NASA Astrophysics Data System (ADS)
Klima, K.; Abrahams, L.; Bradford, K.; Hegglin, M.
2015-12-01
With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/ Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare datasets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.
Combined LAURA-UPS solution procedure for chemically-reacting flows. M.S. Thesis
NASA Technical Reports Server (NTRS)
Wood, William A.
1994-01-01
A new procedure seeks to combine the thin-layer Navier-Stokes solver LAURA with the parabolized Navier-Stokes solver UPS for the aerothermodynamic solution of chemically-reacting air flowfields. The interface protocol is presented and the method is applied to two slender, blunted shapes. Both axisymmetric and three dimensional solutions are included with surface pressure and heat transfer comparisons between the present method and previously published results. The case of Mach 25 flow over an axisymmetric six degree sphere-cone with a noncatalytic wall is considered to 100 nose radii. A stability bound on the marching step size was observed with this case and is attributed to chemistry effects resulting from the noncatalytic wall boundary condition. A second case with Mach 28 flow over a sphere-cone-cylinder-flare configuration is computed at both two and five degree angles of attack with a fully-catalytic wall. Surface pressures are seen to be within five percent with the present method compared to the baseline LAURA solution and heat transfers are within 10 percent. The effect of grid resolution is investigated and the nonequilibrium results are compared with a perfect gas solution, showing that while the surface pressure is relatively unchanged by the inclusion of reacting chemistry the nonequilibrium heating is 25 percent higher. The procedure demonstrates significant, order of magnitude reductions in solution time and required memory for the three dimensional case over an all thin-layer Navier-Stokes solution.
A solution procedure based on the Ateb function for a two-degree-of-freedom oscillator
NASA Astrophysics Data System (ADS)
Cveticanin, L.
2015-06-01
In this paper vibration of a two mass system with two degrees of freedom is considered. Two equal harmonic oscillators are coupled with a strong nonlinear viscoelastic connection. Mathematical model of the system is two coupled second-order strong nonlinear differential equations. Introducing new variables the system transforms into two uncoupled equations: one of them is linear and the other with a strong nonlinearity. In the paper a method for solving the strong nonlinear equation is developed. Based on the exact solution of a pure nonlinear differential equation, we assumed a perturbed version of the solution with time variable parameters. Due to the fact that the solution is periodical, the averaging procedure is introduced. As a special case vibrations of harmonic oscillators with fraction order nonlinear connection are considered. Depending on the order and coefficient of nonlinearities bounded and unbounded motion of masses is determined. Besides, the conditions for steady-state periodical solution are discussed. The procedure given in the paper is applied for investigation of the vibration of a vocal cord, which is modeled with two harmonic oscillators with strong nonlinear fraction order viscoelastic connection. Using the experimental data for the vocal cord the parameters for the steady-state solution which describes the flexural vibration of the vocal cord is analyzed. The influence of the order of nonlinearity on the amplitude and frequency of vibration of the vocal cord is obtained. The analytical results are close to those obtained experimentally.
Kim, S.
1994-12-31
Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods, in a Lagrangian framework, an efficient way for choosing the algorithm parameter as well as the algorithm convergence are indicated. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.
An efficient solution procedure for the thermoelastic analysis of truss space structures
NASA Technical Reports Server (NTRS)
Givoli, D.; Rand, O.
1992-01-01
A solution procedure is proposed for the thermal and thermoelastic analysis of truss space structures in periodic motion. In this method, the spatial domain is first descretized using a consistent finite element formulation. Then the resulting semi-discrete equations in time are solved analytically by using Fourier decomposition. Geometrical symmetry is taken advantage of completely. An algorithm is presented for the calculation of heat flux distribution. The method is demonstrated via a numerical example of a cylindrically shaped space structure.
Multigrid iteration solution procedure for solving two-dimensional sets of coupled equations. [HTGR
Vondy, D.R.
1984-07-01
A procedure of iterative solution was coded in Fortran to apply the multigrid scheme of iteration to a set of coupled equations for solving two-dimensional sets of coupled equations. The incentive for this effort was to make available an implemented procedure that may be readily used as an alternative to overrelaxation, of special interest in applications where the latter is ineffective. The multigrid process was found to be effective, although not always competitive with simple overrelaxation. Implementing an effective and flexible procedure is a time-consuming task. Absolute error level evaluation was found to be essential to support methods assessment. A code source listing is presented to allow simple application when the computer memory size is adequate, avoiding data transfer from auxiliary storage. Included are the capabilities for one-dimensional rebalance and a driver program illustrating use requirements. Feedback of additional experience from application is anticipated.
A solution procedure for three-dimensional incompressible Navier-Stokes equation and its application
NASA Technical Reports Server (NTRS)
Kwak, D.; Chang, J. L. C.; Shanks, S. P.
1984-01-01
An implicit, finite-difference procedure is presented for numerically solving viscous incompressible flows. For convenience of applying the present method to three-dimensional problems, primitive variables, namely the pressure and velocities, are used. One of the major difficulties in solving incompressible flows that use primitive variables is caused by the pressure field solution method which is used as a mapping procedure to obtain a divergence-free velocity field. The present method is designed to accelerate the pressure-field solution procedure. This is achieved by the method of pseudocompressibility in which the time derivative pressure term is introduced into the mass conservation equation. The pressure wave propagation and the spreading of the viscous effect is investigated using simple test problems. The present study clarifies physical and numerical characteristics of the pseudo-compressible approach in simulating incompressible flows. Computed results for external and internal flows are presented to verify the present procedure. The present algorithm has been shown to be very robust and accurate if the selection of the pseudo-compressibility parameter has been made according to the guidelines given.
qPR: An adaptive partial-report procedure based on Bayesian inference
Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin
2016-01-01
Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6–8 cue delays or 600–800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations. PMID:27580045
Construction and solution of an adaptive image-restoration model for removing blur and mixed noise
NASA Astrophysics Data System (ADS)
Wang, Youquan; Cui, Lihong; Cen, Yigang; Sun, Jianjun
2016-03-01
We establish a practical regularized least-squares model with adaptive regularization for dealing with blur and mixed noise in images. This model has some advantages, such as good adaptability for edge restoration and noise suppression due to the application of a priori spatial information obtained from a polluted image. We further focus on finding an important feature of image restoration using an adaptive restoration model with different regularization parameters in polluted images. A more important observation is that the gradient of an image varies regularly from one regularization parameter to another under certain conditions. Then, a modified graduated nonconvexity approach combined with a median filter version of a spatial information indicator is proposed to seek the solution of our adaptive image-restoration model by applying variable splitting and weighted penalty techniques. Numerical experiments show that the method is robust and effective for dealing with various blur and mixed noise levels in images.
An edge-based solution-adaptive method applied to the AIRPLANE code
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.
1995-01-01
Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.
An edge-based solution-adaptive method applied to the AIRPLANE code
NASA Astrophysics Data System (ADS)
Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.
1995-11-01
Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.
A new solution procedure for a nonlinear infinite beam equation of motion
NASA Astrophysics Data System (ADS)
Jang, T. S.
2016-10-01
Our goal of this paper is of a purely theoretical question, however which would be fundamental in computational partial differential equations: Can a linear solution-structure for the equation of motion for an infinite nonlinear beam be directly manipulated for constructing its nonlinear solution? Here, the equation of motion is modeled as mathematically a fourth-order nonlinear partial differential equation. To answer the question, a pseudo-parameter is firstly introduced to modify the equation of motion. And then, an integral formalism for the modified equation is found here, being taken as a linear solution-structure. It enables us to formulate a nonlinear integral equation of second kind, equivalent to the original equation of motion. The fixed point approach, applied to the integral equation, results in proposing a new iterative solution procedure for constructing the nonlinear solution of the original beam equation of motion, which consists luckily of just the simple regular numerical integration for its iterative process; i.e., it appears to be fairly simple as well as straightforward to apply. A mathematical analysis is carried out on both natures of convergence and uniqueness of the iterative procedure by proving a contractive character of a nonlinear operator. It follows conclusively,therefore, that it would be one of the useful nonlinear strategies for integrating the equation of motion for a nonlinear infinite beam, whereby the preceding question may be answered. In addition, it may be worth noticing that the pseudo-parameter introduced here has double roles; firstly, it connects the original beam equation of motion with the integral equation, second, it is related with the convergence of the iterative method proposed here.
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
NASA Astrophysics Data System (ADS)
Delgado, J. M. P. Q.
2013-06-01
The aim of this work is to present a mathematical and experimental formulation of a new simple procedure for the measurement of effective molecular diffusion coefficients of a salt solution in a water-saturated building material. This innovate experimental procedure and mathematical formulation is presented in detail and experimental values of "effective" molecular diffusion coefficient of sodium chloride in a concrete sample ( w/ c = 0.45), at five different temperatures (between 10 and 30 °C) and four different initial NaCl concentrations (between 0.1 and 0.5 M), are reported. The experimental results obtained are in good agreement with the theoretical and experimental values of molecular diffusion coefficient presented in literature. An empirical correlation is presented for the prediction of "effective" molecular diffusion coefficient over the entire range of temperatures and initial salt concentrations studied.
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Singhal, A. K.; Tam, L. T.
1984-01-01
The capability of simulating three dimensional two phase reactive flows with combustion in the liquid fuelled rocket engines is demonstrated. This was accomplished by modifying an existing three dimensional computer program (REFLAN3D) with Eulerian Lagrangian approach to simulate two phase spray flow, evaporation and combustion. The modified code is referred as REFLAN3D-SPRAY. The mathematical formulation of the fluid flow, heat transfer, combustion and two phase flow interaction of the numerical solution procedure, boundary conditions and their treatment are described.
Calculation procedures for potential and viscous flow solutions for engine inlets
NASA Technical Reports Server (NTRS)
Albers, J. A.; Stockman, N. O.
1973-01-01
The method and basic elements of computer solutions for both potential flow and viscous flow calculations for engine inlets are described. The procedure is applicable to subsonic conventional (CTOL), short-haul (STOL), and vertical takeoff (VTOL) aircraft engine nacelles operating in a compressible viscous flow. The calculated results compare well with measured surface pressure distributions for a number of model inlets. The paper discusses the uses of the program in both the design and analysis of engine inlets, with several examples given for VTOL lift fans, acoustic splitters, and for STOL engine nacelles. Several test support applications are also given.
Aguila-Camacho, Norelys; Duarte-Mermoud, Manuel A
2016-01-01
This paper presents the analysis of three classes of fractional differential equations appearing in the field of fractional adaptive systems, for the case when the fractional order is in the interval α ∈(0,1] and the Caputo definition for fractional derivatives is used. The boundedness of the solutions is proved for all three cases, and the convergence to zero of the mean value of one of the variables is also proved. Applications of the obtained results to fractional adaptive schemes in the context of identification and control problems are presented at the end of the paper, including numerical simulations which support the analytical results.
Software for the parallel adaptive solution of conservation laws by discontinous Galerkin methods.
Flaherty, J. E.; Loy, R. M.; Shephard, M. S.; Teresco, J. D.
1999-08-17
The authors develop software tools for the solution of conservation laws using parallel adaptive discontinuous Galerkin methods. In particular, the Rensselaer Partition Model (RPM) provides parallel mesh structures within an adaptive framework to solve the Euler equations of compressible flow by a discontinuous Galerkin method (LOCO). Results are presented for a Rayleigh-Taylor flow instability for computations performed on 128 processors of an IBM SP computer. In addition to managing the distributed data and maintaining a load balance, RPM provides information about the parallel environment that can be used to tailor partitions to a specific computational environment.
An Adaptive Landscape Classification Procedure using Geoinformatics and Artificial Neural Networks
Coleman, Andre Michael
2008-06-01
The Adaptive Landscape Classification Procedure (ALCP), which links the advanced geospatial analysis capabilities of Geographic Information Systems (GISs) and Artificial Neural Networks (ANNs) and particularly Self-Organizing Maps (SOMs), is proposed as a method for establishing and reducing complex data relationships. Its adaptive and evolutionary capability is evaluated for situations where varying types of data can be combined to address different prediction and/or management needs such as hydrologic response, water quality, aquatic habitat, groundwater recharge, land use, instrumentation placement, and forecast scenarios. The research presented here documents and presents favorable results of a procedure that aims to be a powerful and flexible spatial data classifier that fuses the strengths of geoinformatics and the intelligence of SOMs to provide data patterns and spatial information for environmental managers and researchers. This research shows how evaluation and analysis of spatial and/or temporal patterns in the landscape can provide insight into complex ecological, hydrological, climatic, and other natural and anthropogenic-influenced processes. Certainly, environmental management and research within heterogeneous watersheds provide challenges for consistent evaluation and understanding of system functions. For instance, watersheds over a range of scales are likely to exhibit varying levels of diversity in their characteristics of climate, hydrology, physiography, ecology, and anthropogenic influence. Furthermore, it has become evident that understanding and analyzing these diverse systems can be difficult not only because of varying natural characteristics, but also because of the availability, quality, and variability of spatial and temporal data. Developments in geospatial technologies, however, are providing a wide range of relevant data, and in many cases, at a high temporal and spatial resolution. Such data resources can take the form of high
ERIC Educational Resources Information Center
Colorado State Dept. of Education, Denver. Special Education Services Unit.
This document is intended to provide guidance in the delivery of motor services to Colorado students with impairments in movement, sensory feedback, and sensory motor areas. Presented first is a rationale for providing adapted physical education, occupational therapy, and/or physical therapy services. The next chapter covers definitions,…
Karmali, Faisal; Chaudhuri, Shomesh E; Yi, Yongwoo; Merfeld, Daniel M
2016-03-01
When measuring thresholds, careful selection of stimulus amplitude can increase efficiency by increasing the precision of psychometric fit parameters (e.g., decreasing the fit parameter error bars). To find efficient adaptive algorithms for psychometric threshold ("sigma") estimation, we combined analytic approaches, Monte Carlo simulations, and human experiments for a one-interval, binary forced-choice, direction-recognition task. To our knowledge, this is the first time analytic results have been combined and compared with either simulation or human results. Human performance was consistent with theory and not significantly different from simulation predictions. Our analytic approach provides a bound on efficiency, which we compared against the efficiency of standard staircase algorithms, a modified staircase algorithm with asymmetric step sizes, and a maximum likelihood estimation (MLE) procedure. Simulation results suggest that optimal efficiency at determining threshold is provided by the MLE procedure targeting a fraction correct level of 0.92, an asymmetric 4-down, 1-up staircase targeting between 0.86 and 0.92 or a standard 6-down, 1-up staircase. Psychometric test efficiency, computed by comparing simulation and analytic results, was between 41 and 58% for 50 trials for these three algorithms, reaching up to 84% for 200 trials. These approaches were 13-21% more efficient than the commonly used 3-down, 1-up symmetric staircase. We also applied recent advances to reduce accuracy errors using a bias-reduced fitting approach. Taken together, the results lend confidence that the assumptions underlying each approach are reasonable and that human threshold forced-choice decision making is modeled well by detection theory models and mimics simulations based on detection theory models.
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Solution of the Euler Equations
Anderson, R W; Elliott, N S; Pember, R B
2003-02-14
A new method that combines staggered grid arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the methods are driven by the need to reconcile traditional AMR techniques with the staggered variables and moving, deforming meshes associated with Lagrange based ALE schemes. We develop interlevel solution transfer operators and interlevel boundary conditions first in the case of purely Lagrangian hydrodynamics, and then extend these ideas into an ALE method by developing adaptive extensions of elliptic mesh relaxation techniques. Conservation properties of the method are analyzed, and a series of test problem calculations are presented which demonstrate the utility and efficiency of the method.
Fast multipole and space adaptive multiresolution methods for the solution of the Poisson equation
NASA Astrophysics Data System (ADS)
Bilek, Petr; Duarte, Max; Nečas, David; Bourdon, Anne; Bonaventura, Zdeněk
2016-09-01
This work focuses on the conjunction of the fast multipole method (FMM) with the space adaptive multiresolution (MR) technique for grid adaptation. Since both methods, MR and FMM provide a priori error estimates, both achieve O(N) computational complexity, and both operate on the same hierarchical space division, their conjunction represents a natural choice when designing a numerically efficient and robust strategy for time dependent problems. Special attention is given to the use of these methods in the simulation of streamer discharges in air. We have designed a FMM Poisson solver on multiresolution adapted grid in 2D. The accuracy and the computation complexity of the solver has been verified for a set of manufactured solutions. We confirmed that the developed solver attains desired accuracy and this accuracy is controlled only by the number of terms in the multipole expansion in combination with the multiresolution accuracy tolerance. The implementation has a linear computation complexity O(N).
NASA Technical Reports Server (NTRS)
Jawerth, Bjoern; Sweldens, Wim
1993-01-01
We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.
Roberts, Nathan V.; Demkowiz, Leszek; Moser, Robert
2015-11-15
The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates—the inf-sup constants governing the convergence are mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.
ERIC Educational Resources Information Center
Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.
2006-01-01
The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…
NASA Astrophysics Data System (ADS)
Gioiella, Lucia; Altobelli, Rosaria; de Luna, Martina Salzano; Filippone, Giovanni
2016-05-01
The efficacy of chitosan-based hydrogels in the removal of dyes from aqueous solutions has been investigated as a function of different parameters. Hydrogels were obtained by gelation of chitosan with a non-toxic gelling agent based on an aqueous basic solution. The preparation procedure has been optimized in terms of chitosan concentration in the starting solution, gelling agent concentration and chitosan-to-gelling agent ratio. The goal is to properly select the material- and process-related parameters in order to optimize the performances of the chitosan-based dye adsorbent. First, the influence of such factors on the gelling process has been studied from a kinetic point of view. Then, the effects on the adsorption capacity and kinetics of the chitosan hydrogels obtained in different conditions have been investigated. A common food dye (Indigo Carmine) has been used for this purpose. Noticeably, although the disk-shaped hydrogels are in the bulk form, their adsorption capacity is comparable to that reported in the literature for films and beads. In addition, the bulk samples can be easily separated from the liquid phase after the adsorption process, which is highly attractive from a practical point of view. Compression tests reveal that the samples do not breakup even after relatively large compressive strains. The obtained results suggest that the fine tuning of the process parameters allows the production of mechanical resistant and highly adsorbing chitosan-based hydrogels.
NASA Astrophysics Data System (ADS)
Wissmeier, L. C.; Barry, D. A.
2009-12-01
Computer simulations of water availability and quality play an important role in state-of-the-art water resources management. However, many of the most utilized software programs focus either on physical flow and transport phenomena (e.g., MODFLOW, MT3DMS, FEFLOW, HYDRUS) or on geochemical reactions (e.g., MINTEQ, PHREEQC, CHESS, ORCHESTRA). In recent years, several couplings between both genres of programs evolved in order to consider interactions between flow and biogeochemical reactivity (e.g., HP1, PHWAT). Software coupling procedures can be categorized as ‘close couplings’, where programs pass information via the memory stack at runtime, and ‘remote couplings’, where the information is exchanged at each time step via input/output files. The former generally involves modifications of software codes and therefore expert programming skills are required. We present a generic recipe for remotely coupling the PHREEQC geochemical modeling framework and flow and solute transport (FST) simulators. The iterative scheme relies on operator splitting with continuous re-initialization of PHREEQC and the FST of choice at each time step. Since PHREEQC calculates the geochemistry of aqueous solutions in contact with soil minerals, the procedure is primarily designed for couplings to FST’s for liquid phase flow in natural environments. It requires the accessibility of initial conditions and numerical parameters such as time and space discretization in the input text file for the FST and control of the FST via commands to the operating system (batch on Windows; bash/shell on Unix/Linux). The coupling procedure is based on PHREEQC’s capability to save the state of a simulation with all solid, liquid and gaseous species as a PHREEQC input file by making use of the dump file option in the TRANSPORT keyword. The output from one reaction calculation step is therefore reused as input for the following reaction step where changes in element amounts due to advection
Mission to Mars: Adaptive Identifier for the Solution of Inverse Optical Metrology Tasks
NASA Astrophysics Data System (ADS)
Krapivin, Vladimir F.; Varotsos, Costas A.; Christodoulakis, John
2016-06-01
A human mission to Mars requires the solution of many problems that mainly linked to the safety of life, the reliable operational control of drinking water as well as health care. The availability of liquid fuels is also an important issue since the existing tools cannot fully provide the required liquid fuels quantities for the mission return journey. This paper presents the development of new methods and technology for reliable, operational, and with high availability chemical analysis of liquid solutions of various types. This technology is based on the employment of optical sensors (such as the multi-channel spectrophotometers or spectroellipsometers and microwave radiometers) and the development of a database of spectral images for typical liquid solutions that could be the objects of life on Mars. This database exploits the adaptive recognition of optical images of liquids using specific algorithms that are based on spectral analysis, cluster analysis and methods for solving the inverse optical metrology tasks.
Triangle Based Adaptive Stencils for the Solution of Hyperbolic Conservation Laws
NASA Astrophysics Data System (ADS)
Durlofsky, Louis J.; Engquist, Bjorn; Osher, Stanley
1992-01-01
A triangle based adaptive difference stencil for the numerical approximation of hyperbolic conservation laws in two space dimensions is constructed. The novelty of the resulting scheme lies in the nature of the preprocessing of the cell averaged data, which is accomplished via a nearest neighbor linear interpolation followed by a slope limiting procedure. Two such limiting procedures are suggested. The resulting method is considerably more simple than other triangle based non-oscillatory approximations which, like this scheme, approximate the flux up to second-order accuracy. Numerical results for constant and variable coefficient linear advection, as well as for nonlinear flux functions (Burgers' equation and the Buckley-Leverett equation), are presented. The observed order of convergence, after local averaging, is from 1.7 to 2.0 in L1.
Adaptive resolution simulation of an atomistic DNA molecule in MARTINI salt solution
NASA Astrophysics Data System (ADS)
Zavadlav, J.; Podgornik, R.; Melo, M. N.; Marrink, S. J.; Praprotnik, M.
2016-10-01
We present a dual-resolution model of a deoxyribonucleic acid (DNA) molecule in a bathing solution, where we concurrently couple atomistic bundled water and ions with the coarse-grained MARTINI model of the solvent. We use our fine-grained salt solution model as a solvent in the inner shell surrounding the DNA molecule, whereas the solvent in the outer shell is modeled by the coarse-grained model. The solvent entities can exchange between the two domains and adapt their resolution accordingly. We critically asses the performance of our multiscale model in adaptive resolution simulations of an infinitely long DNA molecule, focusing on the structural characteristics of the solvent around DNA. Our analysis shows that the adaptive resolution scheme does not produce any noticeable artifacts in comparison to a reference system simulated in full detail. The effect of using a bundled-SPC model, required for multiscaling, compared to the standard free SPC model is also evaluated. Our multiscale approach opens the way for large scale applications of DNA and other biomolecules which require a large solvent reservoir to avoid boundary effects.
Adaptive finite element methods for the solution of inverse problems in optical tomography
NASA Astrophysics Data System (ADS)
Bangerth, Wolfgang; Joshi, Amit
2008-06-01
Optical tomography attempts to determine a spatially variable coefficient in the interior of a body from measurements of light fluxes at the boundary. Like in many other applications in biomedical imaging, computing solutions in optical tomography is complicated by the fact that one wants to identify an unknown number of relatively small irregularities in this coefficient at unknown locations, for example corresponding to the presence of tumors. To recover them at the resolution needed in clinical practice, one has to use meshes that, if uniformly fine, would lead to intractably large problems with hundreds of millions of unknowns. Adaptive meshes are therefore an indispensable tool. In this paper, we will describe a framework for the adaptive finite element solution of optical tomography problems. It takes into account all steps starting from the formulation of the problem including constraints on the coefficient, outer Newton-type nonlinear and inner linear iterations, regularization, and in particular the interplay of these algorithms with discretizing the problem on a sequence of adaptively refined meshes. We will demonstrate the efficiency and accuracy of these algorithms on a set of numerical examples of clinical relevance related to locating lymph nodes in tumor diagnosis.
Patched based methods for adaptive mesh refinement solutions of partial differential equations
Saltzman, J.
1997-09-02
This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year`s school which is ``Multiscale Methods and Wavelets in Numerical Simulation.`` The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patched based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundary and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods and elliptically constrained flow solution methods with diffusion.
Pérez-Jordá, José M
2011-11-28
A series of improvements for the solution of the three-dimensional Schrödinger equation over a method introduced by Gygi [F. Gygi, Europhys. Lett. 19, 617 (1992); F. Gygi, Phys. Rev. B 48, 11692 (1993)] are presented. As in the original Gygi's method, the solution (orbital) is expressed by means of plane waves in adaptive coordinates u, where u is mapped from Cartesian coordinates, u=f(r). The improvements implemented are threefold. First, maps are introduced that allow the application of the method to atoms and molecules without the assistance of the supercell approximation. Second, the electron-nucleus singularities are exactly removed, so that pseudo-potentials are no longer required. Third, the sampling error during integral evaluation is made negligible, which results in a true variational, second-order energy error procedure. The method is tested on the hydrogen atom (ground and excited states) and the H(2)(+) molecule, resulting in milli-Hartree accuracy with a moderate number of plane waves.
An Adaptive QoS Routing Solution for MANET Based Multimedia Communications in Emergency Cases
NASA Astrophysics Data System (ADS)
Ramrekha, Tipu Arvind; Politis, Christos
The Mobile Ad hoc Networks (MANET) is a wireless network deprived of any fixed central authoritative routing entity. It relies entirely on collaborating nodes forwarding packets from source to destination. This paper describes the design, implementation and performance evaluation of CHAMELEON, an adaptive Quality of Service (QoS) routing solution, with improved delay and jitter performances, enabling multimedia communication for MANETs in extreme emergency situations such as forest fire and terrorist attacks as defined in the PEACE project. CHAMELEON is designed to adapt its routing behaviour according to the size of a MANET. The reactive Ad Hoc on-Demand Distance Vector Routing (AODV) and proactive Optimized Link State Routing (OLSR) protocols are deemed appropriate for CHAMELEON through their performance evaluation in terms of delay and jitter for different MANET sizes in a building fire emergency scenario. CHAMELEON is then implemented in NS-2 and evaluated similarly. The paper concludes with a summary of findings so far and intended future work.
NASA Technical Reports Server (NTRS)
Stricklin, J. A.; Haisler, W. E.; Von Riesemann, W. A.
1972-01-01
This paper presents an assessment of the solution procedures available for the analysis of inelastic and/or large deflection structural behavior. A literature survey is given which summarized the contribution of other researchers in the analysis of structural problems exhibiting material nonlinearities and combined geometric-material nonlinearities. Attention is focused at evaluating the available computation and solution techniques. Each of the solution techniques is developed from a common equation of equilibrium in terms of pseudo forces. The solution procedures are applied to circular plates and shells of revolution in an attempt to compare and evaluate each with respect to computational accuracy, economy, and efficiency. Based on the numerical studies, observations and comments are made with regard to the accuracy and economy of each solution technique.
Adaptive-mesh-based algorithm for fluorescence molecular tomography using an analytical solution
NASA Astrophysics Data System (ADS)
Wang, Daifa; Song, Xiaolei; Bai, Jing
2007-07-01
Fluorescence molecular tomography (FMT) has become an important method for in-vivo imaging of small animals. It has been widely used for tumor genesis, cancer detection, metastasis, drug discovery, and gene therapy. In this study, an algorithm for FMT is proposed to obtain accurate and fast reconstruction by combining an adaptive mesh refinement technique and an analytical solution of diffusion equation. Numerical studies have been performed on a parallel plate FMT system with matching fluid. The reconstructions obtained show that the algorithm is efficient in computation time, and they also maintain image quality.
A constrained backpropagation approach for the adaptive solution of partial differential equations.
Rudd, Keith; Di Muro, Gianluca; Ferrari, Silvia
2014-03-01
This paper presents a constrained backpropagation (CPROP) methodology for solving nonlinear elliptic and parabolic partial differential equations (PDEs) adaptively, subject to changes in the PDE parameters or external forcing. Unlike existing methods based on penalty functions or Lagrange multipliers, CPROP solves the constrained optimization problem associated with training a neural network to approximate the PDE solution by means of direct elimination. As a result, CPROP reduces the dimensionality of the optimization problem, while satisfying the equality constraints associated with the boundary and initial conditions exactly, at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and nonhomogeneous terms.
2013-01-01
Analysis of Adaptive Mesh Refinement for IMEX Discontinuous Galerkin Solutions of the Compressible Euler Equations with Application to Atmospheric...order discontinuous Galerkin method on quadrilateral grids with non-conforming elements. We perform a detailed analysis of the cost of AMR by comparing...adaptive mesh refinement, discontinuous Galerkin method, non-conforming mesh, IMEX, compressible Euler equations, atmospheric simulations 1. Introduction
NASA Astrophysics Data System (ADS)
Ozcelikkale, Altug; Sert, Cuneyt
2012-05-01
Least-squares spectral element solution of steady, two-dimensional, incompressible flows are obtained by approximating velocity, pressure and vorticity variable set on Gauss-Lobatto-Legendre nodes. Constrained Approximation Method is used for h- and p-type nonconforming interfaces of quadrilateral elements. Adaptive solutions are obtained using a posteriori error estimates based on least squares functional and spectral coefficient. Effective use of p-refinement to overcome poor mass conservation drawback of least-squares formulation and successful use of h- and p-refinement together to solve problems with geometric singularities are demonstrated. Capabilities and limitations of the developed code are presented using Kovasznay flow, flow past a circular cylinder in a channel and backward facing step flow.
Comparison of Facts and DPD-Steadifac Procedures for Free and Combined Chlorine in Aqueous Solution.
1980-01-01
of Nitrogen Trichloride. 13 2. Interference of Monochloramine on the DPD Procedure in Synthetic Waters ........... ... ..... ... .. 17 3. Comparison... Monochloramine with the FACTS Procedure Using Synthetic Waters ..... ..... ......................... 14 2. Interference of Monochloramine with the DPD...Procedure in Synthetic W.aters .......... ......................... 16 3. Interference of Monochloramine with the DPD-STEADIFAC Modified Procedure in
Solution of three-dimensional groundwater flow equations using the strongly implicit procedure
Trescott, P.C.; Larson, S.P.
1977-01-01
A three-dimensional numerical model has been coded to use the strongly implicit procedure for solving the finite-difference approximations to the ground-water flow equation. The model allows for: (1) the representation of each aquifer and each confining bed by several layers; and (2) the use of an anisotropic hydraulic conductivity at each finite-difference block. The model is compared with a previously developed quasi-three-dimensional model by simulating the steady-state flow in an aquifer system in the Piceance Creek Basin, Colorado. The aquifer system consists of two aquifers separated by a leaky confining bed. The upper aquifer receives recharge from precipitation and is hydraulically connected to streams. For this problem, in order to make a valid comparison of results, a single layer was used to represent each aquifer. Furthermore, the need for a layer to represent the confining bed was eliminated by incorporating the effects of vertical leakage into the vertical component of the anisotropic hydraulic conductivity of the adjacent aquifers. Thus, the problem was represented by only two layers in each model with a total of about 2,100 equations. This restricted the effects of flow in the confining layer to the vertical component, but simulations with a third layer in the three-dimensional model permitting horizontal flow in the confining bed show that the two-layer approach is reasonable. Convergence to a solution of this problem takes about one minute of computer time on the IBM/155. This is about 30 times faster than the time required using the quasi-three-dimensional model. ?? 1977.
Error norms for the adaptive solution of the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Forester, C. K.
1982-01-01
The adaptive solution of the Navier-Stokes equations depends upon the successful interaction of three key elements: (1) the ability to flexibly select grid length scales in composite grids, (2) the ability to efficiently control residual error in composite grids, and (3) the ability to define reliable, convenient error norms to guide the grid adjustment and optimize the residual levels relative to the local truncation errors. An initial investigation was conducted to explore how to approach developing these key elements. Conventional error assessment methods were defined and defect and deferred correction methods were surveyed. The one dimensional potential equation was used as a multigrid test bed to investigate how to achieve successful interaction of these three key elements.
Zonal multigrid solution of compressible flow problems on unstructured and adaptive meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1989-01-01
The simultaneous use of adaptive meshing techniques with a multigrid strategy for solving the 2-D Euler equations in the context of unstructured meshes is studied. To obtain optimal efficiency, methods capable of computing locally improved solutions without recourse to global recalculations are pursued. A method for locally refining an existing unstructured mesh, without regenerating a new global mesh is employed, and the domain is automatically partitioned into refined and unrefined regions. Two multigrid strategies are developed. In the first, time-stepping is performed on a global fine mesh covering the entire domain, and convergence acceleration is achieved through the use of zonal coarse grid accelerator meshes, which lie under the adaptively refined regions of the global fine mesh. Both schemes are shown to produce similar convergence rates to each other, and also with respect to a previously developed global multigrid algorithm, which performs time-stepping throughout the entire domain, on each mesh level. However, the present schemes exhibit higher computational efficiency due to the smaller number of operations on each level.
Physiology driven adaptivity for the numerical solution of the bidomain equations.
Whiteley, Jonathan P
2007-09-01
Previous work [Whiteley, J. P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006] derived a stable, semi-implicit numerical scheme for solving the bidomain equations. This scheme allows the timestep used when solving the bidomain equations numerically to be chosen by accuracy considerations rather than stability considerations. In this study we modify this scheme to allow an adaptive numerical solution in both time and space. The spatial mesh size is determined by the gradient of the transmembrane and extracellular potentials while the timestep is determined by the values of: (i) the fast sodium current; and (ii) the calcium release from junctional sarcoplasmic reticulum to myoplasm current. For two-dimensional simulations presented here, combining the numerical algorithm in the paper cited above with the adaptive algorithm presented here leads to an increase in computational efficiency by a factor of around 250 over previous work, together with significantly less computational memory being required. The speedup for three-dimensional simulations is likely to be more impressive.
Operational Characteristics of Adaptive Testing Procedures Using the Graded Response Model.
ERIC Educational Resources Information Center
Dodd, Barbara G.; And Others
1989-01-01
General guidelines are developed to assist practitioners in devising operational computerized adaptive testing systems based on the graded response model. The effects of the following major variables were examined: item pool size; stepsize used along the trait continuum until maximum likelihood estimation could be calculated; and stopping rule…
Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure
Salehi, M.; Smith, D.R.
2005-01-01
Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. ?? 2005 American Statistical Association and the International Biometric Society.
Impact of Metal Nanoform Colloidal Solution on the Adaptive Potential of Plants
NASA Astrophysics Data System (ADS)
Taran, Nataliya; Batsmanova, Ludmila; Kovalenko, Mariia; Okanenko, Alexander
2016-02-01
Nanoparticles are a known cause of oxidative stress and so induce antistress action. The latter property was the purpose of our study. The effect of two concentrations (120 and 240 mg/l) of nanoform biogenic metal (Ag, Cu, Fe, Zn, Mn) colloidal solution on antioxidant enzymes, superoxide dismutase and catalase; the level of the factor of the antioxidant state; and the content of thiobarbituric acid reactive substances (TBARSs) of soybean plant in terms of field experience were studied. It was found that the oxidative processes developed a metal nanoparticle pre-sowing seed treatment variant at a concentration of 120 mg/l, as evidenced by the increase in the content of TBARS in photosynthetic tissues by 12 %. Pre-sowing treatment in a double concentration (240 mg/l) resulted in a decrease in oxidative processes (19 %), and pre-sowing treatment combined with vegetative treatment also contributed to the reduction of TBARS (10 %). Increased activity of superoxide dismutase (SOD) was observed in a variant by increasing the content of TBARS; SOD activity was at the control level in two other variants. Catalase activity decreased in all variants. The factor of antioxidant activity was highest (0.3) in a variant with nanoparticle double treatment (pre-sowing and vegetative) at a concentration of 120 mg/l. Thus, the studied nanometal colloidal solution when used in small doses, in a certain time interval, can be considered as a low-level stress factor which according to hormesis principle promoted adaptive response reaction.
Bayesian Procedures for Identifying Aberrant Response-Time Patterns in Adaptive Testing
ERIC Educational Resources Information Center
van der Linden, Wim J.; Guo, Fanmin
2008-01-01
In order to identify aberrant response-time patterns on educational and psychological tests, it is important to be able to separate the speed at which the test taker operates from the time the items require. A lognormal model for response times with this feature was used to derive a Bayesian procedure for detecting aberrant response times.…
NASA Astrophysics Data System (ADS)
Saanouni, Kkemais; Labergère, Carl; Issa, Mazen; Rassineux, Alain
2010-06-01
This work proposes a complete adaptive numerical methodology which uses `advanced' elastoplastic constitutive equations coupling: thermal effects, large elasto-viscoplasticity with mixed non linear hardening, ductile damage and contact with friction, for 2D machining simulation. Fully coupled (strong coupling) thermo-elasto-visco-plastic-damage constitutive equations based on the state variables under large plastic deformation developed for metal forming simulation are presented. The relevant numerical aspects concerning the local integration scheme as well as the global resolution strategy and the adaptive remeshing facility are briefly discussed. Applications are made to the orthogonal metal cutting by chip formation and segmentation under high velocity. The interactions between hardening, plasticity, ductile damage and thermal effects and their effects on the adiabatic shear band formation including the formation of cracks are investigated.
EEG-Based BCI System Using Adaptive Features Extraction and Classification Procedures
Mangia, Anna Lisa; Cappello, Angelo
2016-01-01
Motor imagery is a common control strategy in EEG-based brain-computer interfaces (BCIs). However, voluntary control of sensorimotor (SMR) rhythms by imagining a movement can be skilful and unintuitive and usually requires a varying amount of user training. To boost the training process, a whole class of BCI systems have been proposed, providing feedback as early as possible while continuously adapting the underlying classifier model. The present work describes a cue-paced, EEG-based BCI system using motor imagery that falls within the category of the previously mentioned ones. Specifically, our adaptive strategy includes a simple scheme based on a common spatial pattern (CSP) method and support vector machine (SVM) classification. The system's efficacy was proved by online testing on 10 healthy participants. In addition, we suggest some features we implemented to improve a system's “flexibility” and “customizability,” namely, (i) a flexible training session, (ii) an unbalancing in the training conditions, and (iii) the use of adaptive thresholds when giving feedback. PMID:27635129
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Simulation of metal forming processes with a 3D adaptive remeshing procedure
NASA Astrophysics Data System (ADS)
Zeramdini, Bessam; Robert, Camille; Germain, Guenael; Pottier, Thomas
2016-10-01
In this paper, a fully adaptive 3D numerical methodology based on a tetrahedral element was proposed in order to improve the finite element simulation of any metal forming process. This automatic methodology was implemented in a computational platform which integrates a finite element solver, 3D mesh generation and a field transfer algorithm. The proposed remeshing method was developed in order to solve problems associated with the severe distortion of elements subject to large deformations, to concentrate the elements where the error is large and to coarsen the mesh where the error is small. This leads to a significant reduction in the computation times while maintaining simulation accuracy. In addition, in order to enhance the contact conditions, this method has been coupled with a specific operator to maintain the initial contact between the workpiece nodes and the rigid tool after each remeshing step. In this paper special attention is paid to the data transfer methods and the necessary adaptive remeshing steps are given. Finally, a numerical example is detailed to demonstrate the efficiency of the approach and to compare the results for the different field transfer strategies.
A goal-oriented adaptive procedure for the quasi-continuum method with cluster approximation
NASA Astrophysics Data System (ADS)
Memarnahavandi, Arash; Larsson, Fredrik; Runesson, Kenneth
2015-04-01
We present a strategy for adaptive error control for the quasi-continuum (QC) method applied to molecular statics problems. The QC-method is introduced in two steps: Firstly, introducing QC-interpolation while accounting for the exact summation of all the bond-energies, we compute goal-oriented error estimators in a straight-forward fashion based on the pertinent adjoint (dual) problem. Secondly, for large QC-elements the bond energy and its derivatives are typically computed using an appropriate discrete quadrature using cluster approximations, which introduces a model error. The combined error is estimated approximately based on the same dual problem in conjunction with a hierarchical strategy for approximating the residual. As a model problem, we carry out atomistic-to-continuum homogenization of a graphene monolayer, where the Carbon-Carbon energy bonds are modeled via the Tersoff-Brenner potential, which involves next-nearest neighbor couplings. In particular, we are interested in computing the representative response for an imperfect lattice. Within the goal-oriented framework it becomes natural to choose the macro-scale (continuum) stress as the "quantity of interest". Two different formulations are adopted: The Basic formulation and the Global formulation. The presented numerical investigation shows the accuracy and robustness of the proposed error estimator and the pertinent adaptive algorithm.
Uystepruyst, Ch; Coghe, J; Dorts, Th; Harmegnies, N; Delsemme, M-H; Art, T; Lekeux, P
2002-01-01
The purpose of this study was to evaluate the effects of three resuscitation procedures on respiratory and metabolic adaptation to extra-uterine life during the first 24 h after birth in healthy newborn calves. Twenty-four newborn calves were randomly grouped into four categories: six calves did not receive any specific resuscitation procedure and were considered as controls (C); six received pharyngeal and nasal suctioning immediately after birth by use of a hand-powered vacuum pump (SUC); six received five litres of cold water poured over their heads immediately after birth (CW) and six were housed in a calf pen with an infrared radiant heater for 24 h after birth (IR). Calves were examined at birth, 5, 15, 30, 45 and 60 min, 2, 3, 6, 12 and 24 h after birth and the following measurements were recorded: physical and clinical examination, arterial blood gas analysis, pulmonary function tests using the oesophageal balloon catheter technique, arterial and venous blood acid-base balance analysis, jugular venous blood sampling for determination of metabolic, haematological and passive immune transfer variables. SUC was accompanied by improved pulmonary function efficiency and by a less pronounced decrease in body temperature. The "head shaking movement" and the subsequent temporary increase in total pulmonary resistance as well as the greater lactic acidosis due to CW were accompanied by more efficient, but statistically non-significant, pulmonary gas exchanges. IR allowed maintenance of higher body temperature without requiring increased catabolism of energetic stores. IR also caused a change in breathing pattern which contributed to better distribution of the ventilation and to slightly improved gas exchange. The results indicate that use of SUC, CW and IR modified respiratory and metabolic adaptation during the first 24 h after birth without side-effects. These resuscitation procedures should be recommended for their specific indication, i.e. cleansing of fetal
Lazar, Ann A; Zerbe, Gary O
2011-12-01
Researchers often compare the relationship between an outcome and covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the "significance region," or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA), the Johnson-Neyman procedure can be used to determine the significance region; for the hierarchical linear model (HLM), the Miyazaki and Maier (M-M) procedure has been suggested. However, neither procedure can assume nonnormally distributed data. Furthermore, the M-M procedure produces biased (downward) results because it uses the Wald test, does not control the inflated Type I error rate due to multiple testing, and requires implementing multiple software packages to determine the significance region. In this article, we address these limitations by proposing solutions for determining the significance region suitable for generalized linear (mixed) model (GLM or GLMM). These proposed solutions incorporate test statistics that resolve the biased results, control the Type I error rate using Scheffé's method, and uses a single statistical software package to determine the significance region.
A solution-adaptive mesh algorithm for dynamic/static refinement of two and three dimensional grids
NASA Technical Reports Server (NTRS)
Benson, Rusty A.; Mcrae, D. S.
1991-01-01
An adaptive grid algorithm has been developed in two and three dimensions that can be used dynamically with a solver or as part of a grid refinement process. The algorithm employs a transformation from the Cartesian coordinate system to a general coordinate space, which is defined as a parallelepiped in three dimensions. A weighting function, independent for each coordinate direction, is developed that will provide the desired refinement criteria in regions of high solution gradient. The adaptation is performed in the general coordinate space and the new grid locations are returned to the Cartesian space via a simple, one-step inverse mapping. The algorithm for relocation of the mesh points in the parametric space is based on the center of mass for distributed weights. Dynamic solution-adaptive results are presented for laminar flows in two and three dimensions.
Kreitler, Jason; Stoms, David M; Davis, Frank W
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
NASA Technical Reports Server (NTRS)
Rudy, D. H.; Morris, D. J.; Blanchard, D. K.; Cooke, C. H.; Rubin, S. G.
1975-01-01
The status of an investigation of four numerical techniques for the time-dependent compressible Navier-Stokes equations is presented. Results for free shear layer calculations in the Reynolds number range from 1000 to 81000 indicate that a sequential alternating-direction implicit (ADI) finite-difference procedure requires longer computing times to reach steady state than a low-storage hopscotch finite-difference procedure. A finite-element method with cubic approximating functions was found to require excessive computer storage and computation times. A fourth method, an alternating-direction cubic spline technique which is still being tested, is also described.
Churchill, Nathan W; Strother, Stephen C
2013-11-15
The presence of physiological noise in functional MRI can greatly limit the sensitivity and accuracy of BOLD signal measurements, and produce significant false positives. There are two main types of physiological confounds: (1) high-variance signal in non-neuronal tissues of the brain including vascular tracts, sinuses and ventricles, and (2) physiological noise components which extend into gray matter tissue. These physiological effects may also be partially coupled with stimuli (and thus the BOLD response). To address these issues, we have developed PHYCAA+, a significantly improved version of the PHYCAA algorithm (Churchill et al., 2011) that (1) down-weights the variance of voxels in probable non-neuronal tissue, and (2) identifies the multivariate physiological noise subspace in gray matter that is linked to non-neuronal tissue. This model estimates physiological noise directly from EPI data, without requiring external measures of heartbeat and respiration, or manual selection of physiological components. The PHYCAA+ model significantly improves the prediction accuracy and reproducibility of single-subject analyses, compared to PHYCAA and a number of commonly-used physiological correction algorithms. Individual subject denoising with PHYCAA+ is independently validated by showing that it consistently increased between-subject activation overlap, and minimized false-positive signal in non gray-matter loci. The results are demonstrated for both block and fast single-event task designs, applied to standard univariate and adaptive multivariate analysis models.
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann; Usab, William J., Jr.
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann
1993-01-01
A general solution adaptive scheme-based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding assess URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
A procedure to construct exact solutions of nonlinear fractional differential equations.
Güner, Özkan; Cevikel, Adem C
2014-01-01
We use the fractional transformation to convert the nonlinear partial fractional differential equations with the nonlinear ordinary differential equations. The Exp-function method is extended to solve fractional partial differential equations in the sense of the modified Riemann-Liouville derivative. We apply the Exp-function method to the time fractional Sharma-Tasso-Olver equation, the space fractional Burgers equation, and the time fractional fmKdV equation. As a result, we obtain some new exact solutions.
A Procedure to Construct Exact Solutions of Nonlinear Fractional Differential Equations
Güner, Özkan; Cevikel, Adem C.
2014-01-01
We use the fractional transformation to convert the nonlinear partial fractional differential equations with the nonlinear ordinary differential equations. The Exp-function method is extended to solve fractional partial differential equations in the sense of the modified Riemann-Liouville derivative. We apply the Exp-function method to the time fractional Sharma-Tasso-Olver equation, the space fractional Burgers equation, and the time fractional fmKdV equation. As a result, we obtain some new exact solutions. PMID:24737972
A Simple Procedure for Constructing 5'-Amino-Terminated Oligodeoxynucleotides in Aqueous Solution
NASA Technical Reports Server (NTRS)
Bruick, Richard K.; Koppitz, Marcus; Joyce, Gerald F.; Orgel, Leslie E.
1997-01-01
A rapid method for the synthesis of oligodeoxynucleotides (ODNs) terminated by 5'-amino-5'-deoxythymidine is described. A 3'-phosphorylated ODN (the donor) is incubated in aqueous solution with 5'-amino- 5'-deoxythymidine in the presence of N-(3-dimethylaminopropyl)-)N'-ethylcarbodiimide hydrochloride (EDC), extending the donor by one residue via a phosphoramidate bond. Template- directed ligation of the extended donor and an acceptor ODN, followed by acid hydrolysis, yields the acceptor ODN extended by a single 5'-amino-5'-deoxythymidine residue at its 5'terminus.
NASA Astrophysics Data System (ADS)
Sartoros, Christine; Salin, Eric D.
1998-05-01
Lines available while running a blank solution were used to monitor the analytical performance of an inductively coupled plasma atomic emission spectrometry (ICP-AES) system in real time. Using H and Ar lines and their signal-to-background ratios (SBRs), simple rules in the form of a prediction table were developed by inspection of the data. These rules could be used for predicting changes in radio-frequency power, carrier gas flow rates, and sample introduction rate. The performance of the prediction table was good but not excellent. Another set of rules in the form of a decision tree was developed in an automated fashion using the C4.5 induction engine. The performance of the decision tree was superior to that of the prediction table. It appears that blank spectral information can be used to predict with over 90% accuracy when an ICP-AES is breaking down. However this is not as definitive at identifying the exact fault as some more exhaustive approaches involving the use of standard solutions.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1994-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.
This standard operating procedure describes the method used for preparing internal standard, surrogate recovery standard and calibration standard solutions for neutral analytes used for gas chromatography/mass spectrometry analysis.
Adaptive Filtering for Large Space Structures: A Closed-Form Solution
NASA Technical Reports Server (NTRS)
Rauch, H. E.; Schaechter, D. B.
1985-01-01
In a previous paper Schaechter proposes using an extended Kalman filter to estimate adaptively the (slowly varying) frequencies and damping ratios of a large space structure. The time varying gains for estimating the frequencies and damping ratios can be determined in closed form so it is not necessary to integrate the matrix Riccati equations. After certain approximations, the time varying adaptive gain can be written as the product of a constant matrix times a matrix derived from the components of the estimated state vector. This is an important savings of computer resources and allows the adaptive filter to be implemented with approximately the same effort as the nonadaptive filter. The success of this new approach for adaptive filtering was demonstrated using synthetic data from a two mode system.
A procedure to create isoconcentration surfaces in low-chemical-partitioning, high-solute alloys.
Hornbuckle, B C; Kapoor, M; Thompson, G B
2015-12-01
A proximity histogram or proxigram is the prevailing technique of calculating 3D composition profiles of a second phase in atom probe tomography. The second phase in the reconstruction is delineated by creating an isoconcentration surface, i.e. the precipitate-matrix interface. The 3D composition profile is then calculated with respect to this user-defined isoconcentration surface. Hence, the selection of the correct isoconcentration surface is critical. In general, the preliminary selection of an isoconcentration value is guided by the visual observation of a chemically partitioned second phase. However, in low-chemical -partitioning systems, such a visual guide is absent. The lack of a priori composition information of the precipitate phase may further confound the issue. This paper presents a methodology of selecting an appropriate elemental species and subsequently obtaining an isoconcentration value to create an accurate isoconcentration surface that will act as the precipitate-matrix interface. We use the H-phase precipitate in the Ni-Ti-Hf shape memory alloy as our case study to illustrate the procedure.
Brahme, Anders; Nyman, Peter; Skatt, Björn
2008-05-01
A four-dimensional (4D) laser camera (LC) has been developed for accurate patient imaging in diagnostic and therapeutic radiology. A complementary metal-oxide semiconductor camera images the intersection of a scanned fan shaped laser beam with the surface of the patient and allows real time recording of movements in a three-dimensional (3D) or four-dimensional (4D) format (3D +time). The LC system was first designed as an accurate patient setup tool during diagnostic and therapeutic applications but was found to be of much wider applicability as a general 4D photon "tag" for the surface of the patient in different clinical procedures. It is presently used as a 3D or 4D optical benchmark or tag for accurate delineation of the patient surface as demonstrated for patient auto setup, breathing and heart motion detection. Furthermore, its future potential applications in gating, adaptive therapy, 3D or 4D image fusion between most imaging modalities and image processing are discussed. It is shown that the LC system has a geometrical resolution of about 0, 1 mm and that the rigid body repositioning accuracy is about 0, 5 mm below 20 mm displacements, 1 mm below 40 mm and better than 2 mm at 70 mm. This indicates a slight need for repeated repositioning when the initial error is larger than about 50 mm. The positioning accuracy with standard patient setup procedures for prostate cancer at Karolinska was found to be about 5-6 mm when independently measured using the LC system. The system was found valuable for positron emission tomography-computed tomography (PET-CT) in vivo tumor and dose delivery imaging where it potentially may allow effective correction for breathing artifacts in 4D PET-CT and image fusion with lymph node atlases for accurate target volume definition in oncology. With a LC system in all imaging and radiation therapy rooms, auto setup during repeated diagnostic and therapeutic procedures may save around 5 min per session, increase accuracy and allow
ERIC Educational Resources Information Center
Riley, Barth B.; Dennis, Michael L.; Conrad, Kendon J.
2010-01-01
This simulation study sought to compare four different computerized adaptive testing (CAT) content-balancing procedures designed for use in a multidimensional assessment with respect to measurement precision, symptom severity classification, validity of clinical diagnostic recommendations, and sensitivity to atypical responding. The four…
Warren, Rachel
2011-01-13
The papers in this volume discuss projections of climate change impacts upon humans and ecosystems under a global mean temperature rise of 4°C above preindustrial levels. Like most studies, they are mainly single-sector or single-region-based assessments. Even the multi-sector or multi-region approaches generally consider impacts in sectors and regions independently, ignoring interactions. Extreme weather and adaptation processes are often poorly represented and losses of ecosystem services induced by climate change or human adaptation are generally omitted. This paper addresses this gap by reviewing some potential interactions in a 4°C world, and also makes a comparison with a 2°C world. In a 4°C world, major shifts in agricultural land use and increased drought are projected, and an increased human population might increasingly be concentrated in areas remaining wet enough for economic prosperity. Ecosystem services that enable prosperity would be declining, with carbon cycle feedbacks and fire causing forest losses. There is an urgent need for integrated assessments considering the synergy of impacts and limits to adaptation in multiple sectors and regions in a 4°C world. By contrast, a 2°C world is projected to experience about one-half of the climate change impacts, with concomitantly smaller challenges for adaptation. Ecosystem services, including the carbon sink provided by the Earth's forests, would be expected to be largely preserved, with much less potential for interaction processes to increase challenges to adaptation. However, demands for land and water for biofuel cropping could reduce the availability of these resources for agricultural and natural systems. Hence, a whole system approach to mitigation and adaptation, considering interactions, potential human and species migration, allocation of land and water resources and ecosystem services, will be important in either a 2°C or a 4°C world.
Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D
2012-10-01
Accurate predictive simulations of complex real world applications require numerical approximations to first, oppose the curse of dimensionality and second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method, that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus, optimal adaptation is achieved. Error estimates and numerical examples will used to compare the efficiency of the method with several other techniques.
AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov–Poisson equation
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin
2016-07-01
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin
2016-04-19
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; ...
2016-04-19
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes ofmore » computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.« less
AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov-Poisson equation
NASA Astrophysics Data System (ADS)
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin
2016-07-01
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov-Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
de Abreu, Igor Renato Louro Bruno; Abrão, Fernando Conrado; Silva, Alessandra Rodrigues; Corrêa, Larissa Teresa Cirera; Younes, Riad Nain
2015-05-01
Currently, there is a tendency to perform surgical procedures via laparoscopic or thoracoscopic access. However, even with the impressive technological advancement in surgical materials, such as improvement in quality of monitors, light sources, and optical fibers, surgeons have to face simple problems that can greatly hinder surgery by video. One is the formation of "fog" or residue buildup on the lens, causing decreased visibility. Intracavitary techniques for cleaning surgical optics and preventing fog formation have been described; however, some of these techniques employ the use of expensive and complex devices designed solely for this purpose. Moreover, these techniques allow the cleaning of surgical optics when they becomes dirty, which does not prevent the accumulation of residue in the optics. To solve this problem we have designed a device that allows cleaning the optics with no surgical stops and prevents the fogging and residue accumulation. The objective of this study is to evaluate through experimental testing the effectiveness of a simple device that prevents the accumulation of residue and fogging of optics used in surgical procedures performed through thoracoscopic or laparoscopic access. Ex-vivo experiments were performed simulating the conditions of residue presence in surgical optics during a video surgery. The experiment consists in immersing the optics and catheter set connected to the IV line with crystalloid solution in three types of materials: blood, blood plus fat solution, and 200 mL of distilled water and 1 vial of methylene blue. The optics coupled to the device were immersed in 200 mL of each type of residue, repeating each immersion 10 times for each distinct residue for both thirty and zero degrees optics, totaling 420 experiments. A success rate of 98.1% was observed after the experiments, in these cases the device was able to clean and prevent the residue accumulation in the optics.
Tests of an adaptive QM/MM calculation on free energy profiles of chemical reactions in solution.
Várnai, Csilla; Bernstein, Noam; Mones, Letif; Csányi, Gábor
2013-10-10
We present reaction free energy calculations using the adaptive buffered force mixing quantum mechanics/molecular mechanics (bf-QM/MM) method. The bf-QM/MM method combines nonadaptive electrostatic embedding QM/MM calculations with extended and reduced QM regions to calculate accurate forces on all atoms, which can be used in free energy calculation methods that require only the forces and not the energy. We calculate the free energy profiles of two reactions in aqueous solution: the nucleophilic substitution reaction of methyl chloride with a chloride anion and the deprotonation reaction of the tyrosine side chain. We validate the bf-QM/MM method against a full QM simulation, and show that it correctly reproduces both geometrical properties and free energy profiles of the QM model, while the electrostatic embedding QM/MM method using a static QM region comprising only the solute is unable to do so. The bf-QM/MM method is not explicitly dependent on the details of the QM and MM methods, so long as it is possible to compute QM forces in a small region and MM forces in the rest of the system, as in a conventional QM/MM calculation. It is simple, with only a few parameters needed to control the QM calculation sizes, and allows (but does not require) a varying and adapting QM region which is necessary for simulating solutions.
NASA Astrophysics Data System (ADS)
Verdugo, Francesc; Parés, Núria; Díez, Pedro
2014-08-01
This article presents a space-time adaptive strategy for transient elastodynamics. The method aims at computing an optimal space-time discretization such that the computed solution has an error in the quantity of interest below a user-defined tolerance. The methodology is based on a goal-oriented error estimate that requires accounting for an auxiliary adjoint problem. The major novelty of this paper is using modal analysis to obtain a proper approximation of the adjoint solution. The idea of using a modal-based description was introduced in a previous work for error estimation purposes. Here this approach is used for the first time in the context of adaptivity. With respect to the standard direct time-integration methods, the modal solution of the adjoint problem is highly competitive in terms of computational effort and memory requirements. The performance of the proposed strategy is tested in two numerical examples. The two examples are selected to be representative of different wave propagation phenomena, one being a 2D bulky continuum and the second a 2D domain representing a structural frame.
NASA Technical Reports Server (NTRS)
Swei, Sean; Cheung, Kenneth
2016-01-01
This project is to develop a novel aerostructure concept that takes advantage of emerging digital composite materials and manufacturing methods to build high stiffness-to-density ratio, ultra-light structures that can provide mission adaptive and aerodynamically efficient future N+3N+4 air vehicles.
Triangle based adaptive stencils for the solution of hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Durlofsky, Louis J.; Engquist, Bjorn; Osher, Stanley
1992-01-01
A triangle based total variation diminishing (TVD) scheme for the numerical approximation of hyperbolic conservation laws in two space dimensions is constructed. The novelty of the scheme lies in the nature of the preprocessing of the cell averaged data, which is accomplished via a nearest neighbor linear interpolation followed by a slope limiting procedures. Two such limiting procedures are suggested. The resulting method is considerably more simple than other triangle based non-oscillatory approximations which, like this scheme, approximate the flux up to second order accuracy. Numerical results for linear advection and Burgers' equation are presented.
Panico, Francesco; Sagliano, Laura; Grossi, Dario; Trojano, Luigi
2016-06-01
The aim of this study is to clarify the specific role of the cerebellum during prism adaptation procedure (PAP), considering its involvement in early prism exposure (i.e., in the recalibration process) and in post-exposure phase (i.e., in the after-effect, related to spatial realignment). For this purpose we interfered with cerebellar activity by means of cathodal transcranial direct current stimulation (tDCS), while young healthy individuals were asked to perform a pointing task on a touch screen before, during and after wearing base-left prism glasses. The distance from the target dot in each trial (in terms of pixels) on horizontal and vertical axes was recorded and served as an index of accuracy. Results on horizontal axis, that was shifted by prism glasses, revealed that participants who received cathodal stimulation showed increased rightward deviation from the actual position of the target while wearing prisms and a larger leftward deviation from the target after prisms removal. Results on vertical axis, in which no shift was induced, revealed a general trend in the two groups to improve accuracy through the different phases of the task, and a trend, more visible in cathodal stimulated participants, to worsen accuracy from the first to the last movements in each phase. Data on horizontal axis allow to confirm that the cerebellum is involved in all stages of PAP, contributing to early strategic recalibration process, as well as to spatial realignment. On vertical axis, the improving performance across the different stages of the task and the worsening accuracy within each task phase can be ascribed, respectively, to a learning process and to the task-related fatigue.
Adaptive solution of the biharmonic problem with shortly supported cubic spline-wavelets
NASA Astrophysics Data System (ADS)
Černá, Dana; Finěk, Václav
2012-09-01
In our contribution, we design a cubic spline-wavelet basis on the interval. The basis functions have small support and wavelets have vanishing moments. We show that stiffness matrices arising from discretization of the two-dimensional biharmonic problem using a constructed wavelet basis have uniformly bounded condition numbers and these condition numbers are very small. We compare quantitative behavior of adaptive wavelet method with a constructed basis and other cubic spline-wavelet bases, and show the superiority of our construction.
Embedded pitch adapters: A high-yield interconnection solution for strip sensors
NASA Astrophysics Data System (ADS)
Ullán, M.; Allport, P. P.; Baca, M.; Broughton, J.; Chisholm, A.; Nikolopoulos, K.; Pyatt, S.; Thomas, J. P.; Wilson, J. A.; Kierstead, J.; Kuczewski, P.; Lynn, D.; Hommels, L. B. A.; Fleta, C.; Fernandez-Tejero, J.; Quirion, D.; Bloch, I.; Díez, S.; Gregor, I. M.; Lohwasser, K.; Poley, L.; Tackmann, K.; Hauser, M.; Jakobs, K.; Kuehn, S.; Mahboubi, K.; Mori, R.; Parzefall, U.; Clark, A.; Ferrere, D.; Gonzalez Sevilla, S.; Ashby, J.; Blue, A.; Bates, R.; Buttar, C.; Doherty, F.; McMullen, T.; McEwan, F.; O'Shea, V.; Kamada, S.; Yamamura, K.; Ikegami, Y.; Nakamura, K.; Takubo, Y.; Unno, Y.; Takashima, R.; Chilingarov, A.; Fox, H.; Affolder, A. A.; Casse, G.; Dervan, P.; Forshaw, D.; Greenall, A.; Wonsak, S.; Wormald, M.; Cindro, V.; Kramberger, G.; Mandić, I.; Mikuž, M.; Gorelov, I.; Hoeferkamp, M.; Palni, P.; Seidel, S.; Taylor, A.; Toms, K.; Wang, R.; Hessey, N. P.; Valencic, N.; Hanagaki, K.; Dolezal, Z.; Kodys, P.; Bohm, J.; Mikestikova, M.; Bevan, A.; Beck, G.; Milke, C.; Domingo, M.; Fadeyev, V.; Galloway, Z.; Hibbard-Lubow, D.; Liang, Z.; Sadrozinski, H. F.-W.; Seiden, A.; To, K.; French, R.; Hodgson, P.; Marin-Reyes, H.; Parker, K.; Jinnouchi, O.; Hara, K.; Bernabeu, J.; Civera, J. V.; Garcia, C.; Lacasta, C.; Marti i Garcia, S.; Rodriguez, D.; Santoyo, D.; Solaz, C.; Soldevila, U.
2016-09-01
A proposal to fabricate large area strip sensors with integrated, or embedded, pitch adapters is presented for the End-cap part of the Inner Tracker in the ATLAS experiment. To implement the embedded pitch adapters, a second metal layer is used in the sensor fabrication, for signal routing to the ASICs. Sensors with different embedded pitch adapters have been fabricated in order to optimize the design and technology. Inter-strip capacitance, noise, pick-up, cross-talk, signal efficiency, and fabrication yield have been taken into account in their design and fabrication. Inter-strip capacitance tests taking into account all channel neighbors reveal the important differences between the various designs considered. These tests have been correlated with noise figures obtained in full assembled modules, showing that the tests performed on the bare sensors are a valid tool to estimate the final noise in the full module. The full modules have been subjected to test beam experiments in order to evaluate the incidence of cross-talk, pick-up, and signal loss. The detailed analysis shows no indication of cross-talk or pick-up as no additional hits can be observed in any channel not being hit by the beam above 170 mV threshold, and the signal in those channels is always below 1% of the signal recorded in the channel being hit, above 100 mV threshold. First results on irradiated mini-sensors with embedded pitch adapters do not show any change in the interstrip capacitance measurements with only the first neighbors connected.
Advances in sensor adaptation to changes in ambient light: a bio-inspired solution - biomed 2010.
Dean, Brian; Wright, Cameron H G; Barrett, Stephen F
2010-01-01
Fly-inspired sensors have been shown to have many interesting qualities such as hyperacuity (or an ability to achieve movement resolution beyond the theoretical limit), extreme sensitivity to motion, and (through software simulation) image edge extraction, motion detection, and orientation and location of a line. Many of these qualities are beyond the ability of traditional computer vision sensors such as charge-coupled device (CCD) arrays. To obtain these characteristics, a prototype fly-inspired sensor has been built and tested in a laboratory environment and shows promise. Any sophisticated visual system, whether man made or natural, must adequately adapt to lighting conditions; therefore, light adaptation is a vital milestone in getting the fly eye vision sensor prototype working in real-world conditions. A design based on the common house fly, Musca domestica, was suggested in a paper presented to RMBS 2009 and showed an ability to remove 72-86% of effects due to ambient light changes. In this paper, a more advanced version of this design is discussed. This new design is able to remove 97-99% of the effects due to changes in ambient light, by more accurately approximating the light adaptation process used by the common house fly.
Copper-Adapted Suillus luteus, a Symbiotic Solution for Pines Colonizing Cu Mine Spoils
Adriaensen, K.; Vrålstad, T.; Noben, J.-P.; Vangronsveld, J.; Colpaert, J. V.
2005-01-01
Natural populations thriving in heavy-metal-contaminated ecosystems are often subjected to selective pressures for increased resistance to toxic metals. In the present study we describe a population of the ectomycorrhizal fungus Suillus luteus that colonized a toxic Cu mine spoil in Norway. We hypothesized that this population had developed adaptive Cu tolerance and was able to protect pine trees against Cu toxicity. We also tested for the existence of cotolerance to Cu and Zn in S. luteus. Isolates from Cu-polluted, Zn-polluted, and nonpolluted sites were grown in vitro on Cu- or Zn-supplemented medium. The Cu mine isolates exhibited high Cu tolerance, whereas the Zn-tolerant isolates were shown to be Cu sensitive, and vice versa. This indicates the evolution of metal-specific tolerance mechanisms is strongly triggered by the pollution in the local environment. Cotolerance does not occur in the S. luteus isolates studied. In a dose-response experiment, the Cu sensitivity of nonmycorrhizal Pinus sylvestris seedlings was compared to the sensitivity of mycorrhizal seedlings colonized either by a Cu-sensitive or Cu-tolerant S. luteus isolate. In nonmycorrhizal plants and plants colonized by the Cu-sensitive isolate, root growth and nutrient uptake were strongly inhibited under Cu stress conditions. In contrast, plants colonized by the Cu-tolerant isolate were hardly affected. The Cu-adapted S. luteus isolate provided excellent insurance against Cu toxicity in pine seedlings exposed to elevated Cu levels. Such a metal-adapted Suillus-Pinus combination might be suitable for large-scale land reclamation at phytotoxic metalliferous and industrial sites. PMID:16269769
Copper-adapted Suillus luteus, a symbiotic solution for pines colonizing Cu mine spoils.
Adriaensen, K; Vrålstad, T; Noben, J-P; Vangronsveld, J; Colpaert, J V
2005-11-01
Natural populations thriving in heavy-metal-contaminated ecosystems are often subjected to selective pressures for increased resistance to toxic metals. In the present study we describe a population of the ectomycorrhizal fungus Suillus luteus that colonized a toxic Cu mine spoil in Norway. We hypothesized that this population had developed adaptive Cu tolerance and was able to protect pine trees against Cu toxicity. We also tested for the existence of cotolerance to Cu and Zn in S. luteus. Isolates from Cu-polluted, Zn-polluted, and nonpolluted sites were grown in vitro on Cu- or Zn-supplemented medium. The Cu mine isolates exhibited high Cu tolerance, whereas the Zn-tolerant isolates were shown to be Cu sensitive, and vice versa. This indicates the evolution of metal-specific tolerance mechanisms is strongly triggered by the pollution in the local environment. Cotolerance does not occur in the S. luteus isolates studied. In a dose-response experiment, the Cu sensitivity of nonmycorrhizal Pinus sylvestris seedlings was compared to the sensitivity of mycorrhizal seedlings colonized either by a Cu-sensitive or Cu-tolerant S. luteus isolate. In nonmycorrhizal plants and plants colonized by the Cu-sensitive isolate, root growth and nutrient uptake were strongly inhibited under Cu stress conditions. In contrast, plants colonized by the Cu-tolerant isolate were hardly affected. The Cu-adapted S. luteus isolate provided excellent insurance against Cu toxicity in pine seedlings exposed to elevated Cu levels. Such a metal-adapted Suillus-Pinus combination might be suitable for large-scale land reclamation at phytotoxic metalliferous and industrial sites.
Brantley, P S
2006-08-08
The double spherical harmonics angular approximation in the lowest order, i.e. double P{sub 0} (DP{sub 0}), is developed for the solution of time-dependent non-equilibrium grey radiative transfer problems in planar geometry. Although the DP{sub 0} diffusion approximation is expected to be less accurate than the P{sub 1} diffusion approximation at and near thermodynamic equilibrium, the DP{sub 0} angular approximation can more accurately capture the complicated angular dependence near a non-equilibrium radiation wave front. In addition, the DP{sub 0} approximation should be more accurate in non-equilibrium optically thin regions where the positive and negative angular domains are largely decoupled. We develop an adaptive angular technique that locally uses either the DP{sub 0} or P{sub 1} flux-limited diffusion approximation depending on the degree to which the radiation and material fields are in thermodynamic equilibrium. Numerical results are presented for two test problems due to Su and Olson and to Ganapol and Pomraning for which semi-analytic transport solutions exist. These numerical results demonstrate that the adaptive P{sub 1}-DP{sub 0} diffusion approximation can yield improvements in accuracy over the standard P{sub 1} diffusion approximation, both without and with flux-limiting, for non-equilibrium grey radiative transfer.
Brantley, P S
2005-12-13
The double spherical harmonics angular approximation in the lowest order, i.e. double P{sub 0} (DP{sub 0}), is developed for the solution of time-dependent non-equilibrium grey radiative transfer problems in planar geometry. Although the DP{sub 0} diffusion approximation is expected to be less accurate than the P{sub 1} diffusion approximation at and near thermodynamic equilibrium, the DP{sub 0} angular approximation can more accurately capture the complicated angular dependence near a non-equilibrium radiation wave front. In addition, the DP{sub 0} approximation should be more accurate in non-equilibrium optically thin regions where the positive and negative angular domains are largely decoupled. We develop an adaptive angular technique that locally uses either the DP{sub 0} or P{sub 1} flux-limited diffusion approximation depending on the degree to which the radiation and material fields are in thermodynamic equilibrium. Numerical results are presented for two test problems due to Su and Olson and to Ganapol and Pomraning for which semi-analytic transport solutions exist. These numerical results demonstrate that the adaptive P{sub 1}-DP{sub 0} diffusion approximation can yield improvements in accuracy over the standard P{sub 1} diffusion approximation, both without and with flux-limiting, for non-equilibrium grey radiative transfer.
NASA Technical Reports Server (NTRS)
Schoenauer, W.; Daeubler, H. G.; Glotz, G.; Gruening, J.
1986-01-01
An implicit difference procedure for the solution of equations for a chemically reacting hypersonic boundary layer is described. Difference forms of arbitrary error order in the x and y coordinate plane were used to derive estimates for discretization error. Computational complexity and time were minimized by the use of this difference method and the iteration of the nonlinear boundary layer equations was regulated by discretization error. Velocity and temperature profiles are presented for Mach 20.14 and Mach 18.5; variables are velocity profiles, temperature profiles, mass flow factor, Stanton number, and friction drag coefficient; three figures include numeric data.
NASA Technical Reports Server (NTRS)
Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.
1995-01-01
Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods have been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.
NASA Astrophysics Data System (ADS)
Liolios, K.; Tsihrintzis, V.; Angelidis, P.; Georgiev, K.; Georgiev, I.
2016-10-01
Current developments on modeling of groundwater flow and contaminant transport and removal in the porous media of Horizontal Subsurface Flow Constructed Wetlands (HSF CWs) are first reviewed in a short way. The two usual environmental engineering approaches, the black-box and the process-based one, are briefly presented. Next, recent research results obtained by using these two approaches are briefly discussed as application examples, where emphasis is given to the evaluation of the optimal design and operation parameters concerning HSF CWs. For the black-box approach, the use of Artificial Neural Networks is discussed for the formulation of models, which predict the removal performance of HSF CWs. A novel mathematical prove is presented, which concerns the dependence of the first-order removal coefficient on the Temperature and the Hydraulic Residence Time. For the process-based approach, an application example is first discussed which concerns procedures to evaluate the optimal range of values for the removal coefficient, dependent on either the Temperature or the Hydraulic Residence Time. This evaluation is based on simulating available experimental results of pilot-scale units operated in Democritus University of Thrace, Xanthi, Greece. Further, in a second example, a novel enlargement of the system of Partial Differential Equations is presented, in order to include geothermal effects. Finally, in a third example, the case of parameters uncertainty concerning biodegradation procedures is considered and the use of upper and a novel approach is presented, which concerns the upper and the lower solution bound for the practical draft design of HSF CWs.
NASA Astrophysics Data System (ADS)
Zheng, H. W.; Shu, C.; Chew, Y. T.
2008-07-01
In this paper, an object-oriented and quadrilateral-mesh based solution adaptive algorithm for the simulation of compressible multi-fluid flows is presented. The HLLC scheme (Harten, Lax and van Leer approximate Riemann solver with the Contact wave restored) is extended to adaptively solve the compressible multi-fluid flows under complex geometry on unstructured mesh. It is also extended to the second-order of accuracy by using MUSCL extrapolation. The node, edge and cell are arranged in such an object-oriented manner that each of them inherits from a basic object. A home-made double link list is designed to manage these objects so that the inserting of new objects and removing of the existing objects (nodes, edges and cells) are independent of the number of objects and only of the complexity of O( 1). In addition, the cells with different levels are further stored in different lists. This avoids the recursive calculation of solution of mother (non-leaf) cells. Thus, high efficiency is obtained due to these features. Besides, as compared to other cell-edge adaptive methods, the separation of nodes would reduce the memory requirement of redundant nodes, especially in the cases where the level number is large or the space dimension is three. Five two-dimensional examples are used to examine its performance. These examples include vortex evolution problem, interface only problem under structured mesh and unstructured mesh, bubble explosion under the water, bubble-shock interaction, and shock-interface interaction inside the cylindrical vessel. Numerical results indicate that there is no oscillation of pressure and velocity across the interface and it is feasible to apply it to solve compressible multi-fluid flows with large density ratio (1000) and strong shock wave (the pressure ratio is 10,000) interaction with the interface.
Shave, Steven; Auer, Manfred
2013-12-23
Combinatorial chemical libraries produced on solid support offer fast and cost-effective access to a large number of unique compounds. If such libraries are screened directly on-bead, the speed at which chemical space can be explored by chemists is much greater than that addressable using solution based synthesis and screening methods. Solution based screening has a large supporting body of software such as structure-based virtual screening tools which enable the prediction of protein-ligand complexes. Use of these techniques to predict the protein bound complexes of compounds synthesized on solid support neglects to take into account the conjugation site on the small molecule ligand. This may invalidate predicted binding modes, the linker may be clashing with protein atoms. We present CSBB-ConeExclusion, a methodology and computer program which provides a measure of the applicability of solution dockings to solid support. Output is given in the form of statistics for each docking pose, a unique 2D visualization method which can be used to determine applicability at a glance, and automatically generated PyMol scripts allowing visualization of protein atom incursion into a defined exclusion volume. CSBB-ConeExclusion is then exemplarically used to determine the optimum attachment point for a purine library targeting cyclin-dependent kinase 2 CDK2.
A local anisotropic adaptive algorithm for the solution of low-Mach transient combustion problems
NASA Astrophysics Data System (ADS)
Carpio, Jaime; Prieto, Juan Luis; Vera, Marcos
2016-02-01
A novel numerical algorithm for the simulation of transient combustion problems at low Mach and moderately high Reynolds numbers is presented. These problems are often characterized by the existence of a large disparity of length and time scales, resulting in the development of directional flow features, such as slender jets, boundary layers, mixing layers, or flame fronts. This makes local anisotropic adaptive techniques quite advantageous computationally. In this work we propose a local anisotropic refinement algorithm using, for the spatial discretization, unstructured triangular elements in a finite element framework. For the time integration, the problem is formulated in the context of semi-Lagrangian schemes, introducing the semi-Lagrange-Galerkin (SLG) technique as a better alternative to the classical semi-Lagrangian (SL) interpolation. The good performance of the numerical algorithm is illustrated by solving a canonical laminar combustion problem: the flame/vortex interaction. First, a premixed methane-air flame/vortex interaction with simplified transport and chemistry description (Test I) is considered. Results are found to be in excellent agreement with those in the literature, proving the superior performance of the SLG scheme when compared with the classical SL technique, and the advantage of using anisotropic adaptation instead of uniform meshes or isotropic mesh refinement. As a more realistic example, we then conduct simulations of non-premixed hydrogen-air flame/vortex interactions (Test II) using a more complex combustion model which involves state-of-the-art transport and chemical kinetics. In addition to the analysis of the numerical features, this second example allows us to perform a satisfactory comparison with experimental visualizations taken from the literature.
A cellular automaton model adapted to sandboxes to simulate the transport of solutes
NASA Astrophysics Data System (ADS)
Lora, Boris; Donado, Leonardo; Castro, Eduardo; Bayuelo, Alfredo
2016-04-01
The increasingly use of groundwater sources for human consumption and the growth of the levels of these hydric sources contamination make imperative to reach a deeper understanding how the contaminants are transported by the water, in particular through a heterogeneous porous medium. Accordingly, the present research aims to design a model, which simulates the transport of solutes through a heterogeneous porous medium, using cellular automata. Cellular automata (CA) are a class of spatially (pixels) and temporally discrete mathematical systems characterized by local interaction (neighborhoods). The pixel size and the CA neighborhood were determined in order to reproduce accurately the solute behavior (Ilachinski, 2001). For the design and corresponding validation of the CA model were developed different conservative tracer tests using a sandbox packed heterogeneously with a coarse sand (size # 20 grain diameter 0,85 to 0,6 mm) and clay. We use Uranine and a saline solution with NaCl as a tracer which were measured taking snapshots each 20 seconds. A calibration curve (pixel intensity Vs Concentration) was used to obtain concentration maps. The sandbox was constructed of acrylic (caliber 0,8 cms) with 70 x 45 x 4 cms of dimensions. The "sandbox" had a grid of 35 transversal holes with a diameter of 4 mm each and an uniform separation from one to another of 10 cms. To validate the CA-model it was used a metric consisting in rating the number of correctly predicted pixels over the total per image throughout the entire test run. The CA-model shows that calibrations of pixels and neighborhoods allow reaching results over the 60 % of correctly predictions usually. This makes possible to think that the application of the CA- model could be useful in further researches regarding the transport of contaminants in hydrogeology.
Hoste, H; Torres-Acosta, J F J
2011-08-04
Infections with gastrointestinal nematodes (GIN) remain a major threat for ruminant production, health and welfare associated with outdoor breeding. The control of these helminth parasites has relied on the strategic or tactical use of chemical anthelmintic (AH) drugs. However, the expanding development and diffusion of anthelmintic resistance in nematode populations imposes the need to explore and validate novel solutions (or to re-discover old knowledge) for a more sustainable control of GIN. The different solutions refer to three main principles of action. The first one is to limit the contact between the hosts and the infective larvae in the field through grazing management methods. The latter were described since the 1970s and, at present, they benefit from innovations based on computer models. Several biological control agents have also been studied in the last three decades as potential tools to reduce the infective larvae in the field. The second principle aims at improving the host response against GIN infections relying on the genetic selection between or within breeds of sheep or goats, crossbreeding of resistant and susceptible breeds and/or the manipulation of nutrition. These approaches may benefit from a better understanding of the potential underlying mechanisms, in particular in regard of the host immune response against the worms. The third principle is the control of GIN based on non-conventional AH materials (plant or mineral compounds). Worldwide studies show that non conventional AH materials can eliminate worms and/or negatively affect the parasite's biology. The recent developments and pros and cons concerning these various options are discussed. Last, some results are presented which illustrate how the integration of these different solutions can be efficient and applicable in different systems of production and/or epidemiological conditions. The integration of different control tools seems to be a pre-requisite for the sustainable
Practical Study and Solutions Adapted For The Road Noise In The Algiers City
NASA Astrophysics Data System (ADS)
Iddir, R.; Boukhaloua, N.; Saadi, T.
At the present hour where the city spreads on a big space, the road network devel- opment was a following logical of this movement. Generating a considerable impact thus on the environment. This last is a resulting open system of the interaction be- tween the man and the nature, it's affected all side by the different means of transport and by their increasing demand of mobility. The contemporary city development be- got problems bound to the environment and among it : The road noise. This last is a complex phenomenon, essentially by reason of its humans sensory effects, its impact on the environment is considerable, this one concerns the life quality directly, mainly in population zones to strong density. The resonant pollution reached its paroxysm; the road network of Algiers is not conceived to satisfy requirements in resonant pol- lution matter. For it arrangements soundproof should be adapted in order to face of it these new requirements in matter of the acoustic comfort. All these elements drove to the process aiming the attenuation of the hindrance caused by the road traffic and it by actions essentially aiming: vehicles, the structure of the road and the immediate envi- ronment of the system road - structure. From these results, we note that the situation in resonant nuisance matter in this zone with strong traffic is disturbing, and especially on the residents health.
Brantley, P S
2005-06-06
The double spherical harmonics angular approximation in the lowest order, i.e. double P{sub 0} (DP{sub 0}), is developed for the solution of time-dependent non-equilibrium grey radiative transfer problems in planar geometry. The standard P{sub 1} angular approximation represents the angular dependence of the radiation specific intensity using a linear function in the angular domain -1 {le} {mu} {le} 1. In contrast, the DP{sub 0} angular approximation represents the angular dependence as isotropic in each half angular range -1 {le} {mu} < 0 and 0 < {mu} {le} 1. Neglecting the time derivative of the radiation flux, both the P{sub 1} and DP{sub 0} equations can be written as a single diffusion equation for the radiation energy density. Although the DP{sub 0} diffusion approximation is expected to be less accurate than the P{sub 1} diffusion approximation at and near thermodynamic equilibrium, the DP{sub 0} angular approximation can more accurately capture the complicated angular dependence near the non-equilibrium wave front. We develop an adaptive angular technique that locally uses either the DP{sub 0} or the P{sub 1} diffusion approximation depending on the degree to which the radiation and material fields are in thermodynamic equilibrium. Numerical results are presented for a test problem due to Su and Olson for which a semi-analytic transport solution exists. The numerical results demonstrate that the adaptive P{sub 1}-DP{sub 0} diffusion approximation can yield improvements in accuracy over the standard P{sub 1} diffusion approximation for non-equilibrium grey radiative transfer.
Heidenreich, Elvio A; Ferrero, José M; Doblaré, Manuel; Rodríguez, José F
2010-07-01
Many problems in biology and engineering are governed by anisotropic reaction-diffusion equations with a very rapidly varying reaction term. This usually implies the use of very fine meshes and small time steps in order to accurately capture the propagating wave while avoiding the appearance of spurious oscillations in the wave front. This work develops a family of macro finite elements amenable for solving anisotropic reaction-diffusion equations with stiff reactive terms. The developed elements are incorporated on a semi-implicit algorithm based on operator splitting that includes adaptive time stepping for handling the stiff reactive term. A linear system is solved on each time step to update the transmembrane potential, whereas the remaining ordinary differential equations are solved uncoupled. The method allows solving the linear system on a coarser mesh thanks to the static condensation of the internal degrees of freedom (DOF) of the macroelements while maintaining the accuracy of the finer mesh. The method and algorithm have been implemented in parallel. The accuracy of the method has been tested on two- and three-dimensional examples demonstrating excellent behavior when compared to standard linear elements. The better performance and scalability of different macro finite elements against standard finite elements have been demonstrated in the simulation of a human heart and a heterogeneous two-dimensional problem with reentrant activity. Results have shown a reduction of up to four times in computational cost for the macro finite elements with respect to equivalent (same number of DOF) standard linear finite elements as well as good scalability properties.
Ma Xiang; Zabaras, Nicholas
2010-05-20
A computational methodology is developed to address the solution of high-dimensional stochastic problems. It utilizes high-dimensional model representation (HDMR) technique in the stochastic space to represent the model output as a finite hierarchical correlated function expansion in terms of the stochastic inputs starting from lower-order to higher-order component functions. HDMR is efficient at capturing the high-dimensional input-output relationship such that the behavior for many physical systems can be modeled to good accuracy only by the first few lower-order terms. An adaptive version of HDMR is also developed to automatically detect the important dimensions and construct higher-order terms using only the important dimensions. The newly developed adaptive sparse grid collocation (ASGC) method is incorporated into HDMR to solve the resulting sub-problems. By integrating HDMR and ASGC, it is computationally possible to construct a low-dimensional stochastic reduced-order model of the high-dimensional stochastic problem and easily perform various statistic analysis on the output. Several numerical examples involving elementary mathematical functions and fluid mechanics problems are considered to illustrate the proposed method. The cases examined show that the method provides accurate results for stochastic dimensionality as high as 500 even with large-input variability. The efficiency of the proposed method is examined by comparing with Monte Carlo (MC) simulation.
Shen, Yi; Dai, Wei; Richards, Virginia M
2015-03-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
Borges, Sivanildo S.; Vieira, Gláucia P.; Reis, Boaventura F.
2007-01-01
In this work, an automatic device to deliver titrant solution into a titration chamber with the ability to determine the dispensed volume of solution, with good precision independent of both elapsed time and flow rate, is proposed. A glass tube maintained at the vertical position was employed as a container for the titrant solution. Electronic devices were coupled to the glass tube in order to control its filling with titrant solution, as well as the stepwise solution delivering into the titration chamber. The detection of the titration end point was performed employing a photometer designed using a green LED (λ=545 nm) and a phototransistor. The titration flow system comprised three-way solenoid valves, which were assembled to allow that the steps comprising the solution container loading and the titration run were carried out automatically. The device for the solution volume determination was designed employing an infrared LED (λ=930 nm) and a photodiode. When solution volume delivered from proposed device was within the range of 5 to 105 μl, a linear relationship (R = 0.999) between the delivered volumes and the generated potential difference was achieved. The usefulness of the proposed device was proved performing photometric titration of hydrochloric acid solution with a standardized sodium hydroxide solution and using phenolphthalein as an external indicator. The achieved results presented relative standard deviation of 1.5%. PMID:18317510
NASA Astrophysics Data System (ADS)
Castin, N.; Fernandez, J. R.; Terentyev, D.; Malerba, L.; Pasianot, R. C.
2014-06-01
We propose a novel approach for simulating, with atomistic kinetic Monte Carlo, the segregation or depletion of solute atoms at interfaces, via transport by vacancies. Differently from classical lattice KMC, no assumption is made regarding the crystallographic structure. The model can thus potentially be applied to any type of interfaces, e.g. grain boundaries. Fully off-lattice KMC models were already proposed in the literature, but are rather demanding in CPU time, mainly because of the necessity to perform static relaxation several times at every step of the simulation, and to calculate migration energies between different metastable states. In our LA-KMC model, we aim at performing static relaxation only once per step at the most, and define possible transitions to other metastable states following a generic predefined procedure. The corresponding migration energies can then be calculated using artificial neural networks, trained to predict them as a function of a full description of the local atomic environment, in term of both the exact location in space of atoms and in term of their chemical nature. Our model is thus a compromise between fully off-lattice and fully on-lattice models: (a) The description of the system is not bound to strict assumptions, but is readapted automatically performing the minimum required amount of static relaxation; (b) The procedure to define transition events is not guaranteed to find all important transitions, and is thereby potentially disregarding some mechanisms of system evolution. This shortcoming is in fact classical to non-fully off-lattice models, but is in our case limited thanks to the application of relaxation at every step; (c) Computing time is largely reduced thanks to the use of neural network to calculate the migration energies. In this presentation, we show the premises of this novel approach, in the case of grain-boundaries for bcc Fe-Cr alloys.
NASA Astrophysics Data System (ADS)
Schmitt, Kara Anne
This research aims to prove that strict adherence to procedures and rigid compliance to process in the US Nuclear Industry may not prevent incidents or increase safety. According to the Institute of Nuclear Power Operations, the nuclear power industry has seen a recent rise in events, and this research claims that a contributing factor to this rise is organizational, cultural, and based on peoples overreliance on procedures and policy. Understanding the proper balance of function allocation, automation and human decision-making is imperative to creating a nuclear power plant that is safe, efficient, and reliable. This research claims that new generations of operators are less engaged and thinking because they have been instructed to follow procedures to a fault. According to operators, they were once to know the plant and its interrelations, but organizationally more importance is now put on following procedure and policy. Literature reviews were performed, experts were questioned, and a model for context analysis was developed. The Context Analysis Method for Identifying Design Solutions (CAMIDS) Model was created, verified and validated through both peer review and application in real world scenarios in active nuclear power plant simulators. These experiments supported the claim that strict adherence and rigid compliance to procedures may not increase safety by studying the industry's propensity for following incorrect procedures, and when it directly affects the outcome of safety or security of the plant. The findings of this research indicate that the younger generations of operators rely highly on procedures, and the organizational pressures of required compliance to procedures may lead to incidents within the plant because operators feel pressured into following the rules and policy above performing the correct actions in a timely manner. The findings support computer based procedures, efficient alarm systems, and skill of the craft matrices. The solution to
Amiri, Mohammad J; Abedi-Koupai, Jahangir; Eslamian, Sayed S; Mousavi, Sayed F; Hasheminejad, Hasti
2013-01-01
To evaluate the performance of Adaptive Neural-Based Fuzzy Inference System (ANFIS) model in estimating the efficiency of Pb (II) ions removal from aqueous solution by ostrich bone ash, a batch experiment was conducted. Five operational parameters including adsorbent dosage (C(s)), initial concentration of Pb (II) ions (C(o)), initial pH, temperature (T) and contact time (t) were taken as the input data and the adsorption efficiency (AE) of bone ash as the output. Based on the 31 different structures, 5 ANFIS models were tested against the measured adsorption efficiency to assess the accuracy of each model. The results showed that ANFIS5, which used all input parameters, was the most accurate (RMSE = 2.65 and R(2) = 0.95) and ANFIS1, which used only the contact time input, was the worst (RMSE = 14.56 and R(2) = 0.46). In ranking the models, ANFIS4, ANFIS3 and ANFIS2 ranked second, third and fourth, respectively. The sensitivity analysis revealed that the estimated AE is more sensitive to the contact time, followed by pH, initial concentration of Pb (II) ions, adsorbent dosage, and temperature. The results showed that all ANFIS models overestimated the AE. In general, this study confirmed the capabilities of ANFIS model as an effective tool for estimation of AE.
Fukuda, Ryoichi Ehara, Masahiro; Cammi, Roberto
2014-02-14
A perturbative approximation of the state specific polarizable continuum model (PCM) symmetry-adapted cluster-configuration interaction (SAC-CI) method is proposed for efficient calculations of the electronic excitations and absorption spectra of molecules in solutions. This first-order PCM SAC-CI method considers the solvent effects on the energies of excited states up to the first-order with using the zeroth-order wavefunctions. This method can avoid the costly iterative procedure of the self-consistent reaction field calculations. The first-order PCM SAC-CI calculations well reproduce the results obtained by the iterative method for various types of excitations of molecules in polar and nonpolar solvents. The first-order contribution is significant for the excitation energies. The results obtained by the zeroth-order PCM SAC-CI, which considers the fixed ground-state reaction field for the excited-state calculations, are deviated from the results by the iterative method about 0.1 eV, and the zeroth-order PCM SAC-CI cannot predict even the direction of solvent shifts in n-hexane for many cases. The first-order PCM SAC-CI is applied to studying the solvatochromisms of (2,2{sup ′}-bipyridine)tetracarbonyltungsten [W(CO){sub 4}(bpy), bpy = 2,2{sup ′}-bipyridine] and bis(pentacarbonyltungsten)pyrazine [(OC){sub 5}W(pyz)W(CO){sub 5}, pyz = pyrazine]. The SAC-CI calculations reveal the detailed character of the excited states and the mechanisms of solvent shifts. The energies of metal to ligand charge transfer states are significantly sensitive to solvents. The first-order PCM SAC-CI well reproduces the observed absorption spectra of the tungsten carbonyl complexes in several solvents.
Lindskog, Marcus; Winman, Anders; Juslin, Peter; Poom, Leo
2013-01-01
Two studies investigated the reliability and predictive validity of commonly used measures and models of Approximate Number System acuity (ANS). Study 1 investigated reliability by both an empirical approach and a simulation of maximum obtainable reliability under ideal conditions. Results showed that common measures of the Weber fraction (w) are reliable only when using a substantial number of trials, even under ideal conditions. Study 2 compared different purported measures of ANS acuity as for convergent and predictive validity in a within-subjects design and evaluated an adaptive test using the ZEST algorithm. Results showed that the adaptive measure can reduce the number of trials needed to reach acceptable reliability. Only direct tests with non-symbolic numerosity discriminations of stimuli presented simultaneously were related to arithmetic fluency. This correlation remained when controlling for general cognitive ability and perceptual speed. Further, the purported indirect measure of ANS acuity in terms of the Numeric Distance Effect (NDE) was not reliable and showed no sign of predictive validity. The non-symbolic NDE for reaction time was significantly related to direct w estimates in a direction contrary to the expected. Easier stimuli were found to be more reliable, but only harder (7:8 ratio) stimuli contributed to predictive validity. PMID:23964256
NASA Astrophysics Data System (ADS)
Tari, H.; Scheidler, J. J.; Dapino, M. J.
2015-06-01
A reformulation of the Discrete Energy-Averaged model for the calculation of 3D hysteretic magnetization and magnetostriction of iron-gallium (Galfenol) alloys is presented in this paper. An analytical solution procedure based on an eigenvalue decomposition is developed. This procedure avoids the singularities present in the existing approximate solution by offering multiple local minimum energy directions for each easy crystallographic direction. This improved robustness is crucial for use in finite element codes. Analytical simplifications of the 3D model to 2D and 1D applications are also presented. In particular, the 1D model requires calculation for only one easy direction, while all six easy directions must be considered for general applications. Compared to the approximate solution procedure, it is shown that the resulting robustness comes at no expense for 1D applications, but requires almost twice the computational effort for 3D applications. To find model parameters, we employ the average of the hysteretic data, rather than anhysteretic curves, which would require additional measurements. An efficient optimization routine is developed that retains the dimensionality of the prior art. The routine decouples the parameters into exclusive sets, some of which are found directly through a fast preprocessing step to improve accuracy and computational efficiency. The effectiveness of the model is verified by comparison with existing measurement data.
NASA Astrophysics Data System (ADS)
Abramova, Victoriya V.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2016-10-01
Several modifications of scatter-plot-based method for mixed noise parameters estimation are proposed. The modifications relate to the stage of image segmentation and they are intended to adaptively separate image blocks into clusters taking into account image peculiarities and to choose a required number of clusters. Comparative performance analysis of the proposed modifications for images from TID2008 database is performed. It is shown that the best estimation accuracy is provided by a method with automatic determination of a required number of clusters followed by block separation into clusters using k-means method. This modification allows improving the accuracy of noise characteristics estimation by up to 5% for both signal-independent and signal-dependent noise components in comparison to the basic method. The results for real-life data are presented.
Tetrahedral and Hexahedral Mesh Adaptation for CFD Problems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger C.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
This paper presents two unstructured mesh adaptation schemes for problems in computational fluid dynamics. The procedures allow localized grid refinement and coarsening to efficiently capture aerodynamic flow features of interest. The first procedure is for purely tetrahedral grids; unfortunately, repeated anisotropic adaptation may significantly deteriorate the quality of the mesh. Hexahedral elements, on the other hand, can be subdivided anisotropically without mesh quality problems. Furthermore, hexahedral meshes yield more accurate solutions than their tetrahedral counterparts for the same number of edges. Both the tetrahedral and hexahedral mesh adaptation procedures use edge-based data structures that facilitate efficient subdivision by allowing individual edges to be marked for refinement or coarsening. However, for hexahedral adaptation, pyramids, prisms, and tetrahedra are used as buffer elements between refined and unrefined regions to eliminate hanging vertices. Computational results indicate that the hexahedral adaptation procedure is a viable alternative to adaptive tetrahedral schemes.
ERIC Educational Resources Information Center
Rittle-Johnson, Bethany; Star, Jon R.
2007-01-01
Encouraging students to share and compare solution methods is a key component of reform efforts in mathematics, and comparison is emerging as a fundamental learning mechanism. To experimentally evaluate the effects of comparison for mathematics learning, the authors randomly assigned 70 seventh-grade students to learn about algebra equation…
Edwards, Andrew G; Teoh, Mark; Hodges, Ryan J; Palma-Dias, Ricardo; Cole, Stephen A; Fung, Alison M; Walker, Susan P
2016-06-01
The benefits of fetoscopic laser photocoagulation (FLP) for treatment of twin-to-twin transfusion syndrome (TTTS) have been recognized for over a decade, yet access to FLP remains limited in many settings. This means at a population level, the potential benefits of FLP for TTTS are far from being fully realized. In part, this is because there are many centers where the case volume is relatively low. This creates an inevitable tension; on one hand, wanting FLP to be readily accessible to all women who may need it, yet on the other, needing to ensure that a high degree of procedural competence is maintained. Some of the solutions to these apparently competing priorities may be found in novel training solutions to achieve, and maintain, procedural proficiency, and with the increased utilization of 'competence based' assessment and credentialing frameworks. We suggest an under-utilized approach is the development of collaborative surgical services, where pooling of personnel and resources can improve timely access to surgery, improve standardized assessment and management of TTTS, minimize the impact of the surgical learning curve, and facilitate audit, education, and research. When deciding which centers should offer laser for TTTS and how we decide, we propose some solutions from a collaborative model.
NASA Technical Reports Server (NTRS)
Stein, M.; Stein, P. A.
1978-01-01
Approximate solutions for three nonlinear orthotropic plate problems are presented: (1) a thick plate attached to a pad having nonlinear material properties which, in turn, is attached to a substructure which is then deformed; (2) a long plate loaded in inplane longitudinal compression beyond its buckling load; and (3) a long plate loaded in inplane shear beyond its buckling load. For all three problems, the two dimensional plate equations are reduced to one dimensional equations in the y-direction by using a one dimensional trigonometric approximation in the x-direction. Each problem uses different trigonometric terms. Solutions are obtained using an existing algorithm for simultaneous, first order, nonlinear, ordinary differential equations subject to two point boundary conditions. Ordinary differential equations are derived to determine the variable coefficients of the trigonometric terms.
Mikulec, Anthony A.; Hartsock, Jared J.; Salt, Alec N.
2008-01-01
Introduction Intratympanic drug delivery has become widely used in the clinic but little is known about how clinically-utilized drug preparations affect round window membrane permeability or how much drug is actually delivered to the cochlea. This study evaluated the effect of clinically relevant carrier solutions and of suction near the round window membrane (RWM) on the permeability properties of the RWM. Methods RWM permeability was assessed by perfusion of the marker TMPA into the round window niche while monitoring entry into perilymph using TMPA-selective electrodes sealed into scala tympani. Results High osmolarity solution increased RWM permeability by a factor of 2 to 3, benzyl alcohol (a preservative used in some drug formulations) increased permeability by a factor of 3 to 5, and suctioning near the RWM increased permeability by a factor of 10 to 15. Conclusions Variations in available drug formulations can potentially alter RWM permeability properties and affect the amount of drug delivered to the inner ear. Drug solution osmolarity, benzyl alcohol content and possible drying of the round window membrane during suctioning the middle ear can all have a substantial influence of the perilymph levels of drug achieved. PMID:18758387
Tel, R M; Berends, G T
1980-10-01
Aqueous solutions of cholesterol and some cholesteryl esters were prepared. The hydrolysis of cholesteryl esters with enzymatic methods could therefore be studied in some detail. The total cholesterol concentration of the aqueous cholesterol and cholesteryl ester solutions was determined by 6 different enzymatic procedures as well as the Liebermann-Burchard method. For some esters (acetate and arachidonate esters) the esterase reaction is not complete within the usual reaction time, whereas most other esters gave analytical results lower than the theoretical. With the Liebermann-Burchard method all esters reacted completely within the reaction time. The esterase have very different specificities for the various cholesteryl esters. With the enzymatic method several commercial control sera as well as human sera gave lower cholesterol concentrations than the Liebermann-Burchard method. These differences can be explained mainly by this incomplete hydrolysis. Some practical recommendations are given.
Dean, Brian; Wright, Cameron H G; Barrett, Steven F
2009-01-01
Fly inspired vision sensors have been shown to have many interesting qualities such as hyperacuity (or an ability to achieve movement resolution beyond the theoretical limit), extreme sensitivity to motion, and (through software simulation) image edge extraction, motion detection, and orientation and location of a line. Many of these qualities are beyond the ability of traditional computer vision sensors such as charge-coupled device (CCD) arrays. To obtain these characteristics, a prototype fly inspired sensor has been built and tested in a laboratory environment and shows promise. Any sophisticated visual system, whether man made or natural, must adequately adapt to lighting conditions, therefore light adaptation is a vital milestone in getting the afore mentioned prototype working in real-world conditions. By studying how the common house fly, Musca domestica, achieves this adaptation it was possible to design an analog solution to this problem. The solution utilizes instrumentation amplifiers and an additional sensor to sense the ambient light. This paper will examine this circuitry in greater detail and will explore the characterization and limitations of this solution.
Self-adaptive incremental Newton-Raphson algorithms
NASA Technical Reports Server (NTRS)
Padovan, J.
1980-01-01
Multilevel self-adaptive Newton-Raphson type strategies are developed to improve the solution efficiency of nonlinear finite element simulations of statically loaded structures. The overall strategy involves three basic levels. The first level involves preliminary solution tunneling via primative operators. Secondly, the solution is constantly monitored via quality/convergence/nonlinearity tests. Lastly, the third level involves self-adaptive algorithmic update procedures aimed at improving the convergence characteristics of the Newton-Raphson strategy. Numerical experiments are included to illustrate the results of the procedure.
Lasne, Françoise
2009-01-01
Nonspecific interactions between blotted proteins and unrelated secondary antibodies generate false positives in immunoblotting techniques. Some procedures have been developed to reduce this adsorption but they may work in specific applications and be ineffective in other ones. "Double-blotting" has been developed to overcome this problem. It consists of interpolating a second blotting step between the usual probings of the blot membrane with the primary antibody and the secondary antibodies. This step, by isolating the primary antibody from the interfering proteins, guarantees the specificity of the probing with the secondary antibody. This method has been developed for the study of erythropoietin in concentrated urine since a strong nonspecific binding of biotinylated secondary antibodies to some urinary proteins is observed using classical immunoblotting protocols. However, its concept makes it usable in other applications that come up against this kind of problem. This method is expected to be especially useful for investigating proteins that are present in minute amounts in complex biological media.
Lasne, Françoise
2015-01-01
Nonspecific interactions between blotted proteins and unrelated secondary antibodies generate false positives in immunoblotting techniques. Some procedures have been developed to reduce this adsorption, but they may work in specific applications and be ineffective in others. "Double-blotting" has been developed to overcome this problem. It consists of interpolating a second blotting step between the usual probings of the blot membrane with the primary antibody and the secondary antibodies. This step, by isolating the primary antibody from the interfering proteins, guarantees the specificity of the probing with the secondary antibody. This method has been developed for the study of erythropoietin in concentrated urine since a strong nonspecific binding of biotinylated secondary antibodies to some urinary proteins is observed using classical immunoblotting protocols. However, its concept makes it usable in other applications that come up against this kind of problem. This method is expected to be especially useful for investigating proteins that are present in minute amounts in complex biological media.
Schneider, André
2006-01-01
The understanding of the availability of a metal in soil necessitates a minimum knowledge about its speciation in the soil solution. Here, we evaluated an alternative to the use of ion exchangers for estimating the free ionic fraction of cadmium (FCd) in solution. It is based on the exchange selectivity coefficient (VK) rather than the distribution coefficient (DK) to estimate FCd. Because VK for the Cd-Ca exchange for the used Amberlite resin was independent of the solution Ca concentration (0.5-7.5 mM) and pH (range: 4.5-6), the experiment on a solution mimicking the analyzed solution to estimate VK was not necessary. The influence of variable Ca and Mg concentrations in solution on FCd was assessed in synthetic solutions containing either citrate or malate. The best way to estimate FCd seemed to treat the exchange data as if Ca was solely present. However, neither the proposed approach nor those applying DK prevent the overestimation of FCd when Ca is partly complexed in the analyzed solution. A method intending to estimate two replicates of FCd for a given, unique solution was also studied on solutions issued from sorption-desorption experiments performed on a humic podzol. It consists of two successive supplies of a known resin mass to a unique sample. Both estimates were close and not significantly different.
Adaptive superposition of finite element meshes in linear and nonlinear dynamic analysis
NASA Astrophysics Data System (ADS)
Yue, Zhihua
2005-11-01
The numerical analysis of transient phenomena in solids, for instance, wave propagation and structural dynamics, is a very important and active area of study in engineering. Despite the current evolutionary state of modern computer hardware, practical analysis of large scale, nonlinear transient problems requires the use of adaptive methods where computational resources are locally allocated according to the interpolation requirements of the solution form. Adaptive analysis of transient problems involves obtaining solutions at many different time steps, each of which requires a sequence of adaptive meshes. Therefore, the execution speed of the adaptive algorithm is of paramount importance. In addition, transient problems require that the solution must be passed from one adaptive mesh to the next adaptive mesh with a bare minimum of solution-transfer error since this form of error compromises the initial conditions used for the next time step. A new adaptive finite element procedure (s-adaptive) is developed in this study for modeling transient phenomena in both linear elastic solids and nonlinear elastic solids caused by progressive damage. The adaptive procedure automatically updates the time step size and the spatial mesh discretization in transient analysis, achieving the accuracy and the efficiency requirements simultaneously. The novel feature of the s-adaptive procedure is the original use of finite element mesh superposition to produce spatial refinement in transient problems. The use of mesh superposition enables the s-adaptive procedure to completely avoid the need for cumbersome multipoint constraint algorithms and mesh generators, which makes the s-adaptive procedure extremely fast. Moreover, the use of mesh superposition enables the s-adaptive procedure to minimize the solution-transfer error. In a series of different solid mechanics problem types including 2-D and 3-D linear elastic quasi-static problems, 2-D material nonlinear quasi-static problems
NASA Astrophysics Data System (ADS)
Ranjan, Srikant
2005-11-01
Fatigue-induced failures in aircraft gas turbine and rocket engine turbopump blades and vanes are a pervasive problem. Turbine blades and vanes represent perhaps the most demanding structural applications due to the combination of high operating temperature, corrosive environment, high monotonic and cyclic stresses, long expected component lifetimes and the enormous consequence of structural failure. Single crystal nickel-base superalloy turbine blades are being utilized in rocket engine turbopumps and jet engines because of their superior creep, stress rupture, melt resistance, and thermomechanical fatigue capabilities over polycrystalline alloys. These materials have orthotropic properties making the position of the crystal lattice relative to the part geometry a significant factor in the overall analysis. Computation of stress intensity factors (SIFs) and the ability to model fatigue crack growth rate at single crystal cracks subject to mixed-mode loading conditions are important parts of developing a mechanistically based life prediction for these complex alloys. A general numerical procedure has been developed to calculate SIFs for a crack in a general anisotropic linear elastic material subject to mixed-mode loading conditions, using three-dimensional finite element analysis (FEA). The procedure does not require an a priori assumption of plane stress or plane strain conditions. The SIFs KI, KII, and KIII are shown to be a complex function of the coupled 3D crack tip displacement field. A comprehensive study of variation of SIFs as a function of crystallographic orientation, crack length, and mode-mixity ratios is presented, based on the 3D elastic orthotropic finite element modeling of tensile and Brazilian Disc (BD) specimens in specific crystal orientations. Variation of SIF through the thickness of the specimens is also analyzed. The resolved shear stress intensity coefficient or effective SIF, Krss, can be computed as a function of crack tip SIFs and the
Aich, Udayanath; Liu, Aston; Lakbub, Jude; Mozdzanowski, Jacek; Byrne, Michael; Shah, Nilesh; Galosy, Sybille; Patel, Pramthesh; Bam, Narendra
2016-03-01
Consistent glycosylation in therapeutic monoclonal antibodies is a major concern in the biopharmaceutical industry as it impacts the drug's safety and efficacy and manufacturing processes. Large numbers of samples are created for the analysis of glycans during various stages of recombinant proteins drug development. Profiling and quantifying protein N-glycosylation is important but extremely challenging due to its microheterogeneity and more importantly the limitations of existing time-consuming sample preparation methods. Thus, a quantitative method with fast sample preparation is crucial for understanding, controlling, and modifying the glycoform variance in therapeutic monoclonal antibody development. Presented here is a rapid and highly quantitative method for the analysis of N-glycans from monoclonal antibodies. The method comprises a simple and fast solution-based sample preparation method that uses nontoxic reducing reagents for direct labeling of N-glycans. The complete work flow for the preparation of fluorescently labeled N-glycans takes a total of 3 h with less than 30 min needed for the release of N-glycans from monoclonal antibody samples.
Digital adaptive flight controller development
NASA Technical Reports Server (NTRS)
Kaufman, H.; Alag, G.; Berry, P.; Kotob, S.
1974-01-01
A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Two designs are described for an example aircraft. Each of these designs uses a weighted least squares procedure to identify parameters defining the dynamics of the aircraft. The two designs differ in the way in which control law parameters are determined. One uses the solution of an optimal linear regulator problem to determine these parameters while the other uses a procedure called single stage optimization. Extensive simulation results and analysis leading to the designs are presented.
NASA Astrophysics Data System (ADS)
Kopera, Michal A.; Giraldo, Francis X.
2014-10-01
The resolutions of interests in atmospheric simulations require prohibitively large computational resources. Adaptive mesh refinement (AMR) tries to mitigate this problem by putting high resolution in crucial areas of the domain. We investigate the performance of a tree-based AMR algorithm for the high order discontinuous Galerkin method on quadrilateral grids with non-conforming elements. We perform a detailed analysis of the cost of AMR by comparing this to uniform reference simulations of two standard atmospheric test cases: density current and rising thermal bubble. The analysis shows up to 15 times speed-up of the AMR simulations with the cost of mesh adaptation below 1% of the total runtime. We pay particular attention to the implicit-explicit (IMEX) time integration methods and show that the ARK2 method is more robust with respect to dynamically adapting meshes than BDF2. Preliminary analysis of preconditioning reveals that it can be an important factor in the AMR overhead. The compiler optimizations provide significant runtime reduction and positively affect the effectiveness of AMR allowing for speed-ups greater than it would follow from the simple performance model.
Watanabe, Hiroshi C; Banno, Misa; Sakurai, Minoru
2016-03-14
Quantum effects in solute-solvent interactions, such as the many-body effect and the dipole-induced dipole, are known to be critical factors influencing the infrared spectra of species in the liquid phase. For accurate spectrum evaluation, the surrounding solvent molecules, in addition to the solute of interest, should be treated using a quantum mechanical method. However, conventional quantum mechanics/molecular mechanics (QM/MM) methods cannot handle free QM solvent molecules during molecular dynamics (MD) simulation because of the diffusion problem. To deal with this problem, we have previously proposed an adaptive QM/MM "size-consistent multipartitioning (SCMP) method". In the present study, as the first application of the SCMP method, we demonstrate the reproduction of the infrared spectrum of liquid-phase water, and evaluate the quantum effect in comparison with conventional QM/MM simulations.
McAllister, M; Billett, S; Moyle, W; Zimmer-Gembeck, M
2009-03-01
Self-harm is a risk factor for further episodes of self-harm and suicide. The most common service used by self-injurers is the emergency department. However, very often, nurses have received no special training to identify and address the needs of these patients. In addition this care context is typically biomedical and without psychosocial skills, nurses can tend to feel unprepared and lacking in confidence, particularly on the issue of self-harm. In a study that aimed to improve understanding and teach solution-focused skills to emergency nurses so that they may be more helpful with patients who self-harm, several outcome measures were considered, including knowledge, professional identity and clinical reasoning. The think-aloud procedure was used as a way of exploring and improving the solution-focused nature of nurses' clinical reasoning in a range of self-harm scenarios. A total of 28 emergency nurses completed the activity. Data were audiotaped, transcribed and analysed. The results indicated that significant improvements were noted in nurses' ability to consider the patients' psychosocial needs following the intervention. Thus this study has shown that interactive education not only improves attitude and confidence but enlarges nurses' reasoning skills to include psychosocial needs. This is likely to improve the quality of care provided to patients with mental health problems who present to emergency settings, reducing stigma for patients and providing the important first steps to enduring change - acknowledgment and respect.
Beauvais, Z S; Thompson, K H; Kearfott, K J
2009-07-01
Due to a recent upward trend in the price of uranium and subsequent increased interest in uranium mining, accurate modeling of baseline dose from environmental sources of radioactivity is of increasing interest. Residual radioactivity model and code (RESRAD) is a program used to model environmental movement and calculate the dose due to the inhalation, ingestion, and exposure to radioactive materials following a placement. This paper presents a novel use of RESRAD for the calculation of dose from non-enhanced, or ancient, naturally occurring radioactive material (NORM). In order to use RESRAD to calculate the total effective dose (TED) due to ancient NORM, a procedural adaptation was developed to negate the effects of time progressive distribution of radioactive materials. A dose due to United States' average concentrations of uranium, actinium, and thorium series radionuclides was then calculated. For adults exposed in a residential setting and assumed to eat significant amounts of food grown in NORM concentrated areas, the annual dose due to national average NORM concentrations was 0.935 mSv y(-1). A set of environmental dose factors were calculated for simple estimation of dose from uranium, thorium, and actinium series radionuclides for various age groups and exposure scenarios as a function of elemental uranium and thorium activity concentrations in groundwater and soil. The values of these factors for uranium were lowest for an adult exposed in an industrial setting: 0.00476 microSv kg Bq(-1) y(-1) for soil and 0.00596 microSv m(3) Bq(-1) y(-1) for water (assuming a 1:1 234U:238U activity ratio in water). The uranium factors were highest for infants exposed in a residential setting and assumed to ingest food grown onsite: 34.8 microSv kg Bq(-1) y(-1) in soil and 13.0 microSv m(3) Bq(-1) y(-1) in water.
NASA Astrophysics Data System (ADS)
Akhunov, R. R.; Gazizov, T. R.; Kuksenko, S. P.
2016-08-01
The mean time needed to solve a series of systems of linear algebraic equations (SLAEs) as a function of the number of SLAEs is investigated. It is proved that this function has an extremum point. An algorithm for adaptively determining the time when the preconditioner matrix should be recalculated when a series of SLAEs is solved is developed. A numerical experiment with multiply solving a series of SLAEs using the proposed algorithm for computing 100 capacitance matrices with two different structures—microstrip when its thickness varies and a modal filter as the gap between the conductors varies—is carried out. The speedups turned out to be close to the optimal ones.
NASA Astrophysics Data System (ADS)
Jiang, Xiao-Yu; Zong, Yan-Tao; Wang, Xi; Chen, Zhuo; Liu, Zhong-Xuan
2010-11-01
MEMS gyro is used in inertial measuring fields more and more widely, but random drift is considered as an important error restricting the precision of it. Establishing the proper models closed to actual state of movement and random drift, and designing a kind of effective filter are available to enhance the precision of the MEMS gyro. The dynamic model of angle movement is studied, the ARMA model describing random drift is established based on time series analysis method, and a modified self-adapted Kalman filter is designed for the signal processing. Finally, the random drift is distinguished and analyzed clearly by Allan variance. It is included that the above method can effectively eliminate the random drift and improve the precision of MEMS gyro.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
Fukuda, Ryoichi Ehara, Masahiro
2015-12-31
The effects from solvent environment are specific to the electronic states; therefore, a computational scheme for solvent effects consistent with the electronic states is necessary to discuss electronic excitation of molecules in solution. The PCM (polarizable continuum model) SAC (symmetry-adapted cluster) and SAC-CI (configuration interaction) methods are developed for such purposes. The PCM SAC-CI adopts the state-specific (SS) solvation scheme where solvent effects are self-consistently considered for every ground and excited states. For efficient computations of many excited states, we develop a perturbative approximation for the PCM SAC-CI method, which is called corrected linear response (cLR) scheme. Our test calculations show that the cLR PCM SAC-CI is a very good approximation of the SS PCM SAC-CI method for polar and nonpolar solvents.
Pérez-Jordá, José M
2010-01-14
A new method for solving the Schrödinger equation is proposed, based on the following details. First, a map u=u(r) from Cartesian coordinates r to a new coordinate system u is chosen. Second, the solution (orbital) psi(r) is written in terms of a function U depending on u so that psi(r)=/J(u)/(-1/2)U(u), where /J(u)/ is the Jacobian determinant of the map. Third, U is expressed as a linear combination of plane waves in the u coordinate, U(u)= sum (k)c(k)e(ik x u). Finally, the coefficients c(k) are variationally optimized to obtain the best energy, using a generalization of an algorithm originally developed for the Coulomb potential [J. M. Perez-Jorda, Phys. Rev. B 58, 1230 (1998)]. The method is tested for the radial Schrödinger equation in the hydrogen atom, resulting in micro-Hartree accuracy or better for the energy of ns and np orbitals (with n up to 5) using expansions of moderate length.
NASA Technical Reports Server (NTRS)
Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.
1992-01-01
A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
Allen, Craig R.; Garmestani, Ahjond S.
2015-01-01
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.
NASA Astrophysics Data System (ADS)
Ren, Zhengyong; Kalscheuer, Thomas; Greenhalgh, Stewart; Maurer, Hansruedi
2013-02-01
We have developed a generalized and stable surface integral formula for 3-D uniform inducing field and plane wave electromagnetic induction problems, which works reliably over a wide frequency range. Vector surface electric currents and magnetic currents, scalar surface electric charges and magnetic charges are treated as the variables. This surface integral formula is successfully applied to compute the electromagnetic responses of 3-D topography to low frequency magnetotelluric and high frequency radio-magnetotelluric fields. The standard boundary element method which is used to solve this surface integral formula quickly exceeds the memory capacity of modern computers for problems involving hundreds of thousands of unknowns. To make the surface integral formulation applicable and capable of dealing with large-scale 3-D geo-electromagnetic problems, we have developed a matrix-free adaptive multilevel fast multipole boundary element solver. By means of the fast multipole approach, the time-complexity of solving the final system of linear equations is reduced to O(m log m) and the memory cost is reduced to O(m), where m is the number of unknowns. The analytical solutions for a half-space model were used to verify our numerical solutions over the frequency range 0.001-300 kHz. In addition, our numerical solution shows excellent agreement with a published numerical solution for an edge-based finite-element method on a trapezoidal hill model at a frequency of 2 Hz. Then, a high frequency simulation for a similar trapezoidal hill model was used to study the effects of displacement currents in the radio-magnetotelluric frequency range. Finally, the newly developed algorithm was applied to study the effect of moderate topography and to evaluate the applicability of a 2-D RMT inversion code that assumes a flat air-Earth interface, on RMT field data collected at Smørgrav, southern Norway. This paper constitutes the first part of a hybrid boundary element-finite element
Cartesian Off-Body Grid Adaption for Viscous Time- Accurate Flow Simulation
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2011-01-01
An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an inmemory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
Johnson, Richard Wayne
2003-05-01
The application of collocation methods using spline basis functions to solve differential model equations has been in use for a few decades. However, the application of spline collocation to the solution of the nonlinear, coupled, partial differential equations (in primitive variables) that define the motion of fluids has only recently received much attention. The issues that affect the effectiveness and accuracy of B-spline collocation for solving differential equations include which points to use for collocation, what degree B-spline to use and what level of continuity to maintain. Success using higher degree B-spline curves having higher continuity at the knots, as opposed to more traditional approaches using orthogonal collocation, have recently been investigated along with collocation at the Greville points for linear (1D) and rectangular (2D) geometries. The development of automatic knot insertion techniques to provide sufficient accuracy for B-spline collocation has been underway. The present article reviews recent progress for the application of B-spline collocation to fluid motion equations as well as new work in developing a novel adaptive knot insertion algorithm for a 1D convection-diffusion model equation.
NASA Astrophysics Data System (ADS)
Aghajani, Khadijeh; Tayebi, Habib-Allah
2017-01-01
In this study, the Mesoporous material SBA-15 were synthesized and then, the surface was modified by the surfactant Cetyltrimethylammoniumbromide (CTAB). Finally, the obtained adsorbent was used in order to remove Reactive Red 198 (RR 198) from aqueous solution. Transmission electron microscope (TEM), Fourier transform infra-red spectroscopy (FTIR), Thermogravimetric analysis (TGA), X-ray diffraction (XRD), and BET were utilized for the purpose of examining the structural characteristics of obtained adsorbent. Parameters affecting the removal of RR 198 such as pH, the amount of adsorbent, and contact time were investigated at various temperatures and were also optimized. The obtained optimized condition is as follows: pH = 2, time = 60 min and adsorbent dose = 1 g/l. Moreover, a predictive model based on ANFIS for predicting the adsorption amount according to the input variables is presented. The presented model can be used for predicting the adsorption rate based on the input variables include temperature, pH, time, dosage, concentration. The error between actual and approximated output confirm the high accuracy of the proposed model in the prediction process. This fact results in cost reduction because prediction can be done without resorting to costly experimental efforts. SBA-15, CTAB, Reactive Red 198, adsorption study, Adaptive Neuro-Fuzzy Inference systems (ANFIS).
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spacially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.
An adaptive remeshing scheme for vortex dominated flows using three-dimensional unstructured grids
NASA Astrophysics Data System (ADS)
Parikh, Paresh
1995-10-01
An adaptive remeshing procedure for vortex dominated flows is described, which uses three-dimensional unstructured grids. Surface grid adaptation is achieved using the static pressure as an adaptation parameter, while entropy is used in the field to accurately identify high vorticity regions. An emphasis has been placed in making the scheme as automatic as possible so that a minimum user interaction is required between remeshing cycles. Adapted flow solutions are obtained on two sharp-edged configurations at low speed, high angle-of-attack flow conditions. The results thus obtained are compared with fine grid CFD solutions and experimental data, and conclusions are drawn as to the efficiency of the adaptive procedure.
This SOP describes the method used for preparing surrogate recovery standard and internal standard solutions for the analysis of polar target analytes. It also describes the method for preparing calibration standard solutions for polar analytes used for gas chromatography/mass sp...
Cao, Youfang; Liang, Jie
2013-07-14
Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively
NASA Astrophysics Data System (ADS)
Cao, Youfang; Liang, Jie
2013-07-01
Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively
Adaptive Image Denoising by Mixture Adaptation
NASA Astrophysics Data System (ADS)
Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.
2016-10-01
We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.
Adaptive Image Denoising by Mixture Adaptation.
Luo, Enming; Chan, Stanley H; Nguyen, Truong Q
2016-10-01
We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.
AEST: Adaptive Eigenvalue Stability Code
NASA Astrophysics Data System (ADS)
Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.
2002-11-01
An adaptive eigenvalue linear stability code is developed. The aim is on one hand to include the non-ideal MHD effects into the global MHD stability calculation for both low and high n modes and on the other hand to resolve the numerical difficulty involving MHD singularity on the rational surfaces at the marginal stability. Our code follows some parts of philosophy of DCON by abandoning relaxation methods based on radial finite element expansion in favor of an efficient shooting procedure with adaptive gridding. The δ W criterion is replaced by the shooting procedure and subsequent matrix eigenvalue problem. Since the technique of expanding a general solution into a summation of the independent solutions employed, the rank of the matrices involved is just a few hundreds. This makes easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as plasma rotation effect, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue ω problem as in the GS2 code will be employed in the future. Comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be employed to study as an application the transport barrier physics in tokamak discharges.
McCarey, Bernard E.; Edelhauser, Henry F.; Lynn, Michael J.
2010-01-01
Specular microscopy can provide a non-invasive morphological analysis of the corneal endothelial cell layer from subjects enrolled in clinical trials. The analysis provides a measure of the endothelial cell physiological reserve from aging, ocular surgical procedures, pharmaceutical exposure, and general health of the corneal endothelium. The purpose of this review is to discuss normal and stressed endothelial cell morphology, the techniques for determining the morphology parameters, and clinical trial applications. PMID:18245960
NASA Astrophysics Data System (ADS)
Barton, P.
1987-04-01
The basic principles of adaptive antennas are outlined in terms of the Wiener-Hopf expression for maximizing signal to noise ratio in an arbitrary noise environment; the analogy with generalized matched filter theory provides a useful aid to understanding. For many applications, there is insufficient information to achieve the above solution and thus non-optimum constrained null steering algorithms are also described, together with a summary of methods for preventing wanted signals being nulled by the adaptive system. The three generic approaches to adaptive weight control are discussed; correlation steepest descent, weight perturbation and direct solutions based on sample matrix conversion. The tradeoffs between hardware complexity and performance in terms of null depth and convergence rate are outlined. The sidelobe cancellor technique is described. Performance variation with jammer power and angular distribution is summarized and the key performance limitations identified. The configuration and performance characteristics of both multiple beam and phase scan array antennas are covered, with a brief discussion of performance factors.
Guy, Joshua H; Deakin, Glen B; Edwards, Andrew M; Miller, Catherine M; Pyne, David B
2015-03-01
Extreme environmental conditions present athletes with diverse challenges; however, not all sporting events are limited by thermoregulatory parameters. The purpose of this leading article is to identify specific instances where hot environmental conditions either compromise or augment performance and, where heat acclimation appears justified, evaluate the effectiveness of pre-event acclimation processes. To identify events likely to be receptive to pre-competition heat adaptation protocols, we clustered and quantified the magnitude of difference in performance of elite athletes competing in International Association of Athletics Federations (IAAF) World Championships (1999-2011) in hot environments (>25 °C) with those in cooler temperate conditions (<25 °C). Athletes in endurance events performed worse in hot conditions (~3 % reduction in performance, Cohen's d > 0.8; large impairment), while in contrast, performance in short-duration sprint events was augmented in the heat compared with temperate conditions (~1 % improvement, Cohen's d > 0.8; large performance gain). As endurance events were identified as compromised by the heat, we evaluated common short-term heat acclimation (≤7 days, STHA) and medium-term heat acclimation (8-14 days, MTHA) protocols. This process identified beneficial effects of heat acclimation on performance using both STHA (2.4 ± 3.5 %) and MTHA protocols (10.2 ± 14.0 %). These effects were differentially greater for MTHA, which also demonstrated larger reductions in both endpoint exercise heart rate (STHA: -3.5 ± 1.8 % vs MTHA: -7.0 ± 1.9 %) and endpoint core temperature (STHA: -0.7 ± 0.7 % vs -0.8 ± 0.3 %). It appears that worthwhile acclimation is achievable for endurance athletes via both short-and medium-length protocols but more is gained using MTHA. Conversely, it is also conceivable that heat acclimation may be counterproductive for sprinters. As high-performance athletes are often time-poor, shorter duration protocols may
NASA Astrophysics Data System (ADS)
Abdel Wahab, N. H.; Salah, Ahmed
2015-05-01
In this paper, the interaction of a three-level -configration atom and a one-mode quantized electromagnetic cavity field has been studied. The detuning parameters, the Kerr nonlinearity and the arbitrary form of both the field and intensity-dependent atom-field coupling have been taken into account. The wave function when the atom and the field are initially prepared in the excited state and coherent state, respectively, by using the Schrödinger equation has been given. The analytical approximation solution of this model has been obtained by using the modified homotopy analysis method (MHAM). The homotopy analysis method is mentioned summarily. MHAM can be obtained from the homotopy analysis method (HAM) applied to Laplace, inverse Laplace transform and Pade approximate. MHAM is used to increase the accuracy and accelerate the convergence rate of truncated series solution obtained by the HAM. The time-dependent parameters of the anti-bunching of photons, the amplitude-squared squeezing and the coherent properties have been calculated. The influence of the detuning parameters, Kerr nonlinearity and photon number operator on the temporal behavior of these phenomena have been analyzed. We noticed that the considered system is sensitive to variations in the presence of these parameters.
Gonçalves, F S; Barretto, L S S; Arruda, R P; Perri, S H V; Mingoti, G Z
2014-01-01
The presence of heparin and a mixture of penicillamine, hypotaurine, and epinephrine (PHE) solution in the in vitro fertilization (IVF) media seem to be a prerequisite when bovine spermatozoa are capacitated in vitro, in order to stimulate sperm motility and acrosome reaction. The present study was designed to determine the effect of the addition of heparin and PHE during IVF on the quality and penetrability of spermatozoa into bovine oocytes and on subsequent embryo development. Sperm quality, evaluated by the integrity of plasma and acrosomal membranes and mitochondrial function, was diminished (P<0.05) in the presence of heparin and PHE. Oocyte penetration and normal pronuclear formation rates, as well as the percentage of zygotes presenting more than two pronuclei, was higher (P<0.05) in the presence of heparin and PHE. No differences were observed in cleavage rates between treatment and control (P>0.05). However, the developmental rate to the blastocyst stage was increased in the presence of heparin and PHE (P>0.05). The quality of embryos that reached the blastocyst stage was evaluated by counting the inner cell mass (ICM) and trophectoderm (TE) cell numbers and total number of cells; the percentage of ICM and TE cells was unaffected (P>0.05) in the presence of heparin and PHE (P<0.05). In conclusion, this study demonstrated that while the supplementation of IVF media with heparin and PHE solution impairs spermatozoa quality, it plays an important role in sperm capacitation, improving pronuclear formation, and early embryonic development.
On the dynamics of some grid adaption schemes
NASA Technical Reports Server (NTRS)
Sweby, Peter K.; Yee, Helen C.
1994-01-01
The dynamics of a one-parameter family of mesh equidistribution schemes coupled with finite difference discretisations of linear and nonlinear convection-diffusion model equations is studied numerically. It is shown that, when time marched to steady state, the grid adaption not only influences the stability and convergence rate of the overall scheme, but can also introduce spurious dynamics to the numerical solution procedure.
Near-Body Grid Adaption for Overset Grids
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2016-01-01
A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
Brünisholz, H P; Schwarzwald, C C; Bettschart-Wolfensberger, R; Ringer, S K
2015-12-01
The aim of the present study was to investigate the effect of pentastarch on colloid osmotic pressure (COP) and cardiopulmonary function during and up to 24 h after anaesthesia in horses. Twenty-five systemically healthy horses were anaesthetised using isoflurane-medetomidine balanced anaesthesia. Twelve were assigned to treatment with hydroxyethyl starch (HES) (H group) and 13 to no HES (NH group). In the H group, 6 mL/kg of pentastarch 10% HES (200/0.5) was infused over 1 h starting 30 min after induction of anaesthesia. Horses of the NH group received an equal amount of lactated Ringer's solution (LRS). COP and blood biochemical, cardiopulmonary and anaesthesia-related variables were measured at different time points before and after treatment. Pentastarch was effective in correcting the decrease in COP observed with LRS administration. No differences between treatments were detected for blood glucose, lactate, total proteins and electrolytes. Packed cell volume was lower with the H group immediately after finishing HES-administration and for an additional 30 min. In all horses, all blood biochemical variables other than lactate returned to normal after 12 h. No clinically relevant differences between treatments were detected for cardiopulmonary variables, although 23.1% of the NH-horses needed rescue-HES to maintain cardiovascular function, while none of the H-horses needed additional colloids. Overall, 6 mL/kg HES (200/0.5) was found to be effective in maintaining COP during anaesthesia in systemically healthy horses. Intermediate and long-term effects were below the limit of detection. The potentially beneficial effects on cardiovascular function need further investigation, especially in critically ill horses.
Psychometric Function Reconstruction from Adaptive Tracking Procedures
1988-11-29
reduced variability and length of the track can be shown by the use of the "sweat factor" defined by Taylor and Creelman (1967). This is a measure of...Psychophysics, 35, 385-392. Taylor, M. M., and Creelman , C. D. (1967). PEST: Efficient estimates on probability functions. Journal of the Acoustical Society of
Adaptive neuro-control for large flexible structures
NASA Astrophysics Data System (ADS)
Krishna Kumar, K.; Montgomery, L.
1992-12-01
Special problems related to control system design for large flexible structures include the inherent low damping, wide range of modal frequencies, unmodeled dynamics, and possibility of system failures. Neuro-control, which combines concepts from artificial neural networks and adaptive control is investigated as a solution to some of these problems. Specifically, the roles of neutro-controllers in learning unmodeled dynamics and adaptive control for system failures are investigated. The neuro-controller synthesis procedure and its capabilities in adaptively controlling the structure are demonstrated using a mathematical model of an existing structure, the advanced control evaluation for systems test article located at NASA/Marshall Space Flight Center. Also, the real-time adaptive capability of neuro-controllers is demonstrated via an experiment utilizing a flexible clamped-free beam equipped with an actuator that uses a bang-bang controller.
hp-Adaptive time integration based on the BDF for viscous flows
NASA Astrophysics Data System (ADS)
Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.
2015-06-01
This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low order time integrators while accurate solutions require high order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the used method of integration operates inside of its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features a time-step rejection and quarantine mechanisms, a modified Newton method with a predictor and dense output techniques to compute solution at off-step points.
Numerical Boundary Condition Procedures
NASA Technical Reports Server (NTRS)
1981-01-01
Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.
Ryan, P C; Hillier, S; Wall, A J
2008-12-15
Sequential extraction procedures (SEPs) are commonly used to determine speciation of trace metals in soils and sediments. However, the non-selectivity of reagents for targeted phases has remained a lingering concern. Furthermore, potentially reactive phases such as phyllosilicate clay minerals often contain trace metals in structural sites, and their reactivity has not been quantified. Accordingly, the objective of this study is to analyze the behavior of trace metal-bearing clay minerals exposed to the revised BCR 3-step plus aqua regia SEP. Mineral quantification based on stoichiometric analysis and quantitative powder X-ray diffraction (XRD) documents progressive dissolution of chlorite (CCa-2 ripidolite) and two varieties of smectite (SapCa-2 saponite and SWa-1 nontronite) during steps 1-3 of the BCR procedure. In total, 8 (+/-1) % of ripidolite, 19 (+/-1) % of saponite, and 19 (+/-3) % of nontronite (% mineral mass) dissolved during extractions assumed by many researchers to release trace metals from exchange sites, carbonates, hydroxides, sulfides and organic matter. For all three reference clays, release of Ni into solution is correlated with clay dissolution. Hydrolysis of relatively weak Mg-O bonds (362 kJ/mol) during all stages, reduction of Fe(III) during hydroxylamine hydrochloride extraction and oxidation of Fe(II) during hydrogen peroxide extraction are the main reasons for clay mineral dissolution. These findings underscore the need for precise mineral quantification when using SEPs to understand the origin/partitioning of trace metals with solid phases.
Adaptive unstructured meshing for thermal stress analysis of built-up structures
NASA Technical Reports Server (NTRS)
Dechaumphai, Pramote
1992-01-01
An adaptive unstructured meshing technique for mechanical and thermal stress analysis of built-up structures has been developed. A triangular membrane finite element and a new plate bending element are evaluated on a panel with a circular cutout and a frame stiffened panel. The adaptive unstructured meshing technique, without a priori knowledge of the solution to the problem, generates clustered elements only where needed. An improved solution accuracy is obtained at a reduced problem size and analysis computational time as compared to the results produced by the standard finite element procedure.
Time domain and frequency domain design techniques for model reference adaptive control systems
NASA Technical Reports Server (NTRS)
Boland, J. S., III
1971-01-01
Some problems associated with the design of model-reference adaptive control systems are considered and solutions to these problems are advanced. The stability of the adapted system is a primary consideration in the development of both the time-domain and the frequency-domain design techniques. Consequentially, the use of Liapunov's direct method forms an integral part of the derivation of the design procedures. The application of sensitivity coefficients to the design of model-reference adaptive control systems is considered. An application of the design techniques is also presented.
Agyepong, Irene Akua; Kodua, Augustina; Adjei, Sam; Adam, Taghreed
2012-10-01
Implementation of policies (decisions) in the health sector is sometimes defeated by the system's response to the policy itself. This can lead to counter-intuitive, unanticipated, or more modest effects than expected by those who designed the policy. The health sector fits the characteristics of complex adaptive systems (CAS) and complexity is at the heart of this phenomenon. Anticipating both positive and negative effects of policy decisions, understanding the interests, power and interaction between multiple actors; and planning for the delayed and distal impact of policy decisions are essential for effective decision making in CAS. Failure to appreciate these elements often leads to a series of reductionist approach interventions or 'fixes'. This in turn can initiate a series of negative feedback loops that further complicates the situation over time. In this paper we use a case study of the Additional Duty Hours Allowance (ADHA) policy in Ghana to illustrate these points. Using causal loop diagrams, we unpack the intended and unintended effects of the policy and how these effects evolved over time. The overall goal is to advance our understanding of decision making in complex adaptive systems; and through this process identify some essential elements in formulating, updating and implementing health policy that can help to improve attainment of desired outcomes and minimize negative unintended effects.
Structured adaptive grid generation using algebraic methods
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.
1993-01-01
The accuracy of the numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively timely, computationally expensive, grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach where a function which contains a measure of grid smoothness, orthogonality and volume variation is minimized by using a variational principle. This approach provided a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm where the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring the points in the large error regions to attract other points and points in the low error region to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial step, is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and the last step is to reevaluate the flow property by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration
Sowers, K.R.; Gunsalus, R.P.
1995-12-01
The methanogenic Archaea, like the Bacteria and Eucarya, possess several osmoregulatory strategies that enable them to adapt to osmotic changes in their environment. The physiological responses of Methanosarcina species to different osmotic pressures were studied in extracellular osmolalities ranging from 0.3 to 2.0 osmol/kg. Regardless of the isolation source, the maximum rate of growth for species from freshwater, sewage, and marine sources occurred in extracellular osmolalities between 0.62 and 1.0 osmol/kg and decreased to minimal detectable growth as the solute concentration approached 2.0 osmol/kg. The distribution and concentration of compatible solutes in eight strains representing five Methanosarcina spp. were similar to those found in M. thermophila grown in extracellular osmolalities of 0.3 and 2.0 osmol/kg. Results of this study demonstrate that the mechanism of halotolerance in Methanosarcina spp. involves the regulation of K{sup +}, {alpha}-glutamate, N{sup {epsilon}}-acetyl-{beta}-lysine, and glycine betaine accumulation in response to the osmotic effects of extracellular solute.
Adapted Canoeing for the Handicapped.
ERIC Educational Resources Information Center
Frith, Greg H.; Warren, L. D.
1984-01-01
Safety as well as instructional recommendations are offered for adapting canoeing as a recreationial activity for handicapped students. Major steps of the instructional program feature orientation to the water and canoe, entry and exit techinques, and mobility procedures. (CL)
NASA Technical Reports Server (NTRS)
Narendra, K. S.; Annaswamy, A. M.
1985-01-01
Several concepts and results in robust adaptive control are are discussed and is organized in three parts. The first part surveys existing algorithms. Different formulations of the problem and theoretical solutions that have been suggested are reviewed here. The second part contains new results related to the role of persistent excitation in robust adaptive systems and the use of hybrid control to improve robustness. In the third part promising new areas for future research are suggested which combine different approaches currently known.
Three-dimensional adaptive grid-embedding Euler technique
NASA Astrophysics Data System (ADS)
Davis, Roger L.; Dannenhoffer, John F., III
1994-06-01
A new three-dimensional adaptive-grid Euler procedure is presented that automatically detects high-gradient regions in the flow and locally subdivides the computational grid in these regions to provide a uniform, high level of accuracy over the entire domain. A tunable, semistructured data system is utilized that provides global topological unstructured-grid flexibility along with the efficiency of a local, structured-grid system. In addition, this structure data allows for the flow solution algorithm to be executed on a wide variety of parallel/vector computing platforms. An explicit, time-marching, control volume procedure is used to integrate the Euler equations to a steady state. In addition, a multiple-grid procedure is used throughout the embedded-grid regions as well as on subgrids coarser than the initial grid to accelerate convergence and properly propagate disturbance waves through refined-grid regions. Upon convergence, high flow gradient regions, where it is assumed that large truncation errors in the solution exist, are detected using a combination of directional refinement vectors that have large components in areas of these gradients. The local computational grid is directionally subdivided in these regions and the flow solution is reinitiated. Overall convergence occurs when a prespecified level of accuracy is reached. Solutions are presented that demonstrate the efficiency and accuracy of the present procedure.
Coughlan, B M; Moroney, G A; van Pelt, F N A M; O'Brien, N M; Davenport, J; O'Halloran, J
2009-11-01
This study investigated the internal osmotic regulatory capabilities of the Manila clam (Ruditapes philippinarum) following in vivo exposure to a range of salinities. A second objective was to measure the health status of the Manila clam following exposure to different salinities using the neutral red retention (NRR) assay, and to compare results using a range of physiological saline solutions (PSS). On exposure to seawater of differing salinities, the Manila clam followed a pattern of an osmoconformer, although they seemed to partially regulate their circulatory haemolytic fluids to be hyperosmotic to the surrounding aqueous environment. Significant differences were found when different PSS were used, emphasizing the importance of using a suitable PSS to reduce additional osmotic stress. Using PSS in the NRR assay that do not exert additional damage to lysosomal membrane integrity will help to more accurately quantify the effects of exposure to pollutants on the organism(s) under investigation.
Prism Adaptation in Schizophrenia
ERIC Educational Resources Information Center
Bigelow, Nirav O.; Turner, Beth M.; Andreasen, Nancy C.; Paulsen, Jane S.; O'Leary, Daniel S.; Ho, Beng-Choon
2006-01-01
The prism adaptation test examines procedural learning (PL) in which performance facilitation occurs with practice on tasks without the need for conscious awareness. Dynamic interactions between frontostriatal cortices, basal ganglia, and the cerebellum have been shown to play key roles in PL. Disruptions within these neural networks have also…
ERIC Educational Resources Information Center
Flournoy, Nancy
Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…
Refined numerical solution of the transonic flow past a wedge
NASA Technical Reports Server (NTRS)
Liang, S.-M.; Fung, K.-Y.
1985-01-01
A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.
Ramponi, Denise R
2016-01-01
Dental problems are a common complaint in emergency departments in the United States. There are a wide variety of dental issues addressed in emergency department visits such as dental caries, loose teeth, dental trauma, gingival infections, and dry socket syndrome. Review of the most common dental blocks and dental procedures will allow the practitioner the opportunity to make the patient more comfortable and reduce the amount of analgesia the patient will need upon discharge. Familiarity with the dental equipment, tooth, and mouth anatomy will help prepare the practitioner for to perform these dental procedures.
SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE
NASA Technical Reports Server (NTRS)
Davies, C. B.
1994-01-01
SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
Webster, Michael A.
2015-01-01
Sensory systems continuously mold themselves to the widely varying contexts in which they must operate. Studies of these adaptations have played a long and central role in vision science. In part this is because the specific adaptations remain a powerful tool for dissecting vision, by exposing the mechanisms that are adapting. That is, “if it adapts, it's there.” Many insights about vision have come from using adaptation in this way, as a method. A second important trend has been the realization that the processes of adaptation are themselves essential to how vision works, and thus are likely to operate at all levels. That is, “if it's there, it adapts.” This has focused interest on the mechanisms of adaptation as the target rather than the probe. Together both approaches have led to an emerging insight of adaptation as a fundamental and ubiquitous coding strategy impacting all aspects of how we see. PMID:26858985
Adaptive process control using fuzzy logic and genetic algorithms
NASA Technical Reports Server (NTRS)
Karr, C. L.
1993-01-01
Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
Adaptive Process Control with Fuzzy Logic and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Karr, C. L.
1993-01-01
Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
Fukuda, Ryoichi Ehara, Masahiro
2014-10-21
Solvent effects on electronic excitation spectra are considerable in many situations; therefore, we propose an efficient and reliable computational scheme that is based on the symmetry-adapted cluster-configuration interaction (SAC-CI) method and the polarizable continuum model (PCM) for describing electronic excitations in solution. The new scheme combines the recently proposed first-order PCM SAC-CI method with the PTE (perturbation theory at the energy level) PCM SAC scheme. This is essentially equivalent to the usual SAC and SAC-CI computations with using the PCM Hartree-Fock orbital and integrals, except for the additional correction terms that represent solute-solvent interactions. The test calculations demonstrate that the present method is a very good approximation of the more costly iterative PCM SAC-CI method for excitation energies of closed-shell molecules in their equilibrium geometry. This method provides very accurate values of electric dipole moments but is insufficient for describing the charge-transfer (CT) indices in polar solvent. The present method accurately reproduces the absorption spectra and their solvatochromism of push-pull type 2,2{sup ′}-bithiophene molecules. Significant solvent and substituent effects on these molecules are intuitively visualized using the CT indices. The present method is the simplest and theoretically consistent extension of SAC-CI method for including PCM environment, and therefore, it is useful for theoretical and computational spectroscopy.
Topology and grid adaption for high-speed flow computations
NASA Astrophysics Data System (ADS)
Abolhassani, Jamshid S.; Tiwari, Surendra N.
1989-03-01
This study investigates the effects of grid topology and grid adaptation on numerical solutions of the Navier-Stokes equations. In the first part of this study, a general procedure is presented for computation of high-speed flow over complex three-dimensional configurations. The flow field is simulated on the surface of a Butler wing in a uniform stream. Results are presented for Mach number 3.5 and a Reynolds number of 2,000,000. The O-type and H-type grids have been used for this study, and the results are compared together and with other theoretical and experimental results. The results demonstrate that while the H-type grid is suitable for the leading and trailing edges, a more accurate solution can be obtained for the middle part of the wing with an O-type grid. In the second part of this study, methods of grid adaption are reviewed and a method is developed with the capability of adapting to several variables. This method is based on a variational approach and is an algebraic method. Also, the method has been formulated in such a way that there is no need for any matrix inversion. This method is used in conjunction with the calculation of hypersonic flow over a blunt-nose body. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.
Topology and grid adaption for high-speed flow computations
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Tiwari, Surendra N.
1989-01-01
This study investigates the effects of grid topology and grid adaptation on numerical solutions of the Navier-Stokes equations. In the first part of this study, a general procedure is presented for computation of high-speed flow over complex three-dimensional configurations. The flow field is simulated on the surface of a Butler wing in a uniform stream. Results are presented for Mach number 3.5 and a Reynolds number of 2,000,000. The O-type and H-type grids have been used for this study, and the results are compared together and with other theoretical and experimental results. The results demonstrate that while the H-type grid is suitable for the leading and trailing edges, a more accurate solution can be obtained for the middle part of the wing with an O-type grid. In the second part of this study, methods of grid adaption are reviewed and a method is developed with the capability of adapting to several variables. This method is based on a variational approach and is an algebraic method. Also, the method has been formulated in such a way that there is no need for any matrix inversion. This method is used in conjunction with the calculation of hypersonic flow over a blunt-nose body. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...
NASA Astrophysics Data System (ADS)
Webster, Michael A.; Webster, Shernaaz M.; MacDonald, Jennifer; Bahradwadj, Shrikant R.
2001-06-01
Blur is an intrinsic property of the retinal image that can vary substantially in natural viewing. We examined how processes of contrast adaptation might adjust the visual system to regulate the perception of blur. Observers viewed a blurred or sharpened image for 2-5 minutes, and then judged the apparent focus of a series of 0.5-sec test images interleaved with 6-sec of readaptation. A 2AFC staircase procedure was used to vary the amplitude spectrum of successive test to find the image that appeared in focus. Adapting to a blurred image causes a physically focused image to appear too sharp. Opposite after-effects occur for sharpened adapting images. Pronounced biases were observed over a wide range of magnitudes of adapting blur, and were similar for different types of blur. After-effects were also similar for different classes of images but were generally weaker when the adapting and test stimuli were different images, showing that the adaptation is not adjusting simply to blur per se. These adaptive adjustments may strongly influence the perception of blur in normal vision and how it changes with refractive errors.
Interdisciplinarity in Adapted Physical Activity
ERIC Educational Resources Information Center
Bouffard, Marcel; Spencer-Cavaliere, Nancy
2016-01-01
It is commonly accepted that inquiry in adapted physical activity involves the use of different disciplines to address questions. It is often advanced today that complex problems of the kind frequently encountered in adapted physical activity require a combination of disciplines for their solution. At the present time, individual research…
Adaptive management: Chapter 1
Allen, Craig R.; Garmestani, Ahjond S.; Allen, Craig R.; Garmestani, Ahjond S.
2015-01-01
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.
Inferential Aspects of Adaptive Allocation Rules.
ERIC Educational Resources Information Center
Berry, Donald A.
In clinical trials, adaptive allocation means that the therapies assigned to the next patient or patients depend on the results obtained thus far in the trial. Although many adaptive allocation procedures have been proposed for clinical trials, few have actually used adaptive assignment, largely because classical frequentist measures of inference…
Developing Flexible Procedural Knowledge in Undergraduate Calculus
ERIC Educational Resources Information Center
Maciejewski, Wes; Star, Jon R.
2016-01-01
Mathematics experts often choose appropriate procedures to produce an efficient or elegant solution to a mathematical task. This "flexible procedural knowledge" distinguishes novice and expert procedural performances. This article reports on an intervention intended to aid the development of undergraduate calculus students' flexible use…
NASA Technical Reports Server (NTRS)
Banks, D. W.; Hafez, M. M.
1996-01-01
Grid adaptation for structured meshes is the art of using information from an existing, but poorly resolved, solution to automatically redistribute the grid points in such a way as to improve the resolution in regions of high error, and thus the quality of the solution. This involves: (1) generate a grid vis some standard algorithm, (2) calculate a solution on this grid, (3) adapt the grid to this solution, (4) recalculate the solution on this adapted grid, and (5) repeat steps 3 and 4 to satisfaction. Steps 3 and 4 can be repeated until some 'optimal' grid is converged to but typically this is not worth the effort and just two or three repeat calculations are necessary. They also may be repeated every 5-10 time steps for unsteady calculations.
Cyclic creep analysis from elastic finite-element solutions
NASA Technical Reports Server (NTRS)
Kaufman, A.; Hwang, S. Y.
1986-01-01
A uniaxial approach was developed for calculating cyclic creep and stress relaxation at the critical location of a structure subjected to cyclic thermomechanical loading. This approach was incorporated into a simplified analytical procedure for predicting the stress-strain history at a crack initiation site for life prediction purposes. An elastic finite-element solution for the problem was used as input for the simplified procedure. The creep analysis includes a self-adaptive time incrementing scheme. Cumulative creep is the sum of the initial creep, the recovery from the stress relaxation and the incremental creep. The simplified analysis was exercised for four cases involving a benchmark notched plate problem. Comparisons were made with elastic-plastic-creep solutions for these cases using the MARC nonlinear finite-element computer code.
NASA Technical Reports Server (NTRS)
Georgeff, Michael P.; Lansky, Amy L.
1986-01-01
Much of commonsense knowledge about the real world is in the form of procedures or sequences of actions for achieving particular goals. In this paper, a formalism is presented for representing such knowledge using the notion of process. A declarative semantics for the representation is given, which allows a user to state facts about the effects of doing things in the problem domain of interest. An operational semantics is also provided, which shows how this knowledge can be used to achieve particular goals or to form intentions regarding their achievement. Given both semantics, the formalism additionally serves as an executable specification language suitable for constructing complex systems. A system based on this formalism is described, and examples involving control of an autonomous robot and fault diagnosis for NASA's Space Shuttle are provided.
The development and application of the self-adaptive grid code, SAGE
NASA Technical Reports Server (NTRS)
Davies, Carol B.
1993-01-01
The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.
Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.
2008-01-01
Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
Climate Literacy and Adaptation Solutions for Society
NASA Astrophysics Data System (ADS)
Sohl, L. E.; Chandler, M. A.
2011-12-01
Many climate literacy programs and resources are targeted specifically at children and young adults, as part of the concerted effort to improve STEM education in the U.S. This work is extremely important in building a future society that is well prepared to adopt policies promoting climate change resilience. What these climate literacy efforts seldom do, however, is reach the older adult population that is making economic decisions right now (or not, as the case may be) on matters that can be impacted by climate change. The result is a lack of appreciation of "climate intelligence" - information that could be incorporated into the decision-making process, to maximize opportunities, minimize risk, and create a climate-resilient economy. A National Climate Service, akin to the National Weather Service, would help provide legitimacy to the need for climate intelligence, and would certainly also be the first stop for both governments and private sector concerns seeking climate information for operational purposes. However, broader collaboration between the scientific and business communities is also needed, so that they become co-creators of knowledge that is beneficial and informative to all. The stakeholder-driven research that is the focus of NOAA's RISA (Regional Integrated Sciences and Assessments) projects is one example of how such collaborations can be developed.
A two-dimensional adaptive mesh generation method
NASA Astrophysics Data System (ADS)
Altas, Irfan; Stephenson, John W.
1991-05-01
The present, two-dimensional adaptive mesh-generation method allows selective modification of a small portion of the mesh without affecting large areas of adjacent mesh-points, and is applicable with or without boundary-fitted coordinate-generation procedures. The cases of differential equation discretization by, on the one hand, classical difference formulas designed for uniform meshes, and on the other the present difference formulas, are illustrated through the application of the method to the Hiemenz flow for which the Navier-Stokes equation's exact solution is known, as well as to a two-dimensional viscous internal flow problem.
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
Application of Sequential Interval Estimation to Adaptive Mastery Testing
ERIC Educational Resources Information Center
Chang, Yuan-chin Ivan
2005-01-01
In this paper, we apply sequential one-sided confidence interval estimation procedures with beta-protection to adaptive mastery testing. The procedures of fixed-width and fixed proportional accuracy confidence interval estimation can be viewed as extensions of one-sided confidence interval procedures. It can be shown that the adaptive mastery…
NASA Astrophysics Data System (ADS)
Kinzig, Ann P.
2015-03-01
This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lends some insight into factors that may or may not be important to successful adaptation.
Adaptive Force Control in Compliant Motion
NASA Technical Reports Server (NTRS)
Seraji, H.
1994-01-01
This paper addresses the problem of controlling a manipulator in compliant motion while in contact with an environment having an unknown stiffness. Two classes of solutions are discussed: adaptive admittance control and adaptive compliance control. In both admittance and compliance control schemes, compensator adaptation is used to ensure a stable and uniform system performance.
Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo
2016-04-01
Proposed methodology was originally developed by our scientific team in Split who designed multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) computational capabilities of Fup basis functions with compact support capable to resolve all spatial and temporal scales, 2) multi-resolution presentation of heterogeneity as well as all other input and output variables, 3) accurate, adaptive and efficient strategy and 4) semi-analytical properties which increase our understanding of usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is separately analyzed, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but it also describes subsurface processes closely related to their understood physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding the classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we will show recent improvements within the proposed methodology. Since "state of the art" multiresolution approach usually uses method of lines and only spatial adaptive procedure, temporal approximation was rarely considered as a multiscale. Therefore, novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. It means that algorithm uses smaller time steps only in lines where solution changes are intensive. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across
Adaptive Algebraic Multigrid Methods
Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J
2004-04-09
Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.
Solutions For Smart Metering Under Harsh Environmental Condicions
NASA Astrophysics Data System (ADS)
Kunicina, N.; Zabasta, A.; Kondratjevs, K.; Asmanis, G.
2015-02-01
The described case study concerns application of wireless sensor networks to the smart control of power supply substations. The solution proposed for metering is based on the modular principle and has been tested in the intersystem communication paradigm using selectable interface modules (IEEE 802.3, ISM radio interface, GSM/GPRS). The solution modularity gives 7 % savings of maintenance costs. The developed solution can be applied to the control of different critical infrastructure networks using adapted modules. The proposed smart metering is suitable for outdoor installation, indoor industrial installations, operation under electromagnetic pollution, temperature and humidity impact. The results of tests have shown a good electromagnetic compatibility of the prototype meter with other electronic devices. The metering procedure is exemplified by operation of a testing company's workers under harsh environmental conditions.
Feline onychectomy and elective procedures.
Young, William Phillip
2002-05-01
The development of the carbon dioxide (CO2) surgical laser has given veterinarians a new perspective in the field of surgery. Recently developed techniques and improvisations of established procedures have opened the field of surgery to infinite applications never before dreamed of as little as 10 years ago. Today's CO2 surgical laser is an adaptable, indispensable tool for the everyday veterinary practitioner. Its use is becoming a common occurrence in offices of veterinarians around the world.
Lattice model for water-solute mixtures
NASA Astrophysics Data System (ADS)
Furlan, A. P.; Almarza, N. G.; Barbosa, M. C.
2016-10-01
A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of interaction of solute/solvent is controlled by tuning the energy interactions between the patches of ALG model. We have studied three set of parameters, resulting in, hydrophilic, inert, and hydrophobic interactions. Extensive Monte Carlo simulations were carried out, and the behavior of pure components and the excess properties of the mixtures have been studied. The pure components, water (solvent) and solute, have quite similar phase diagrams, presenting gas, low density liquid, and high density liquid phases. In the case of solute, the regions of coexistence are substantially reduced when compared with both the water and the standard ALG models. A numerical procedure has been developed in order to attain series of results at constant pressure from simulations of the lattice gas model in the grand canonical ensemble. The excess properties of the mixtures, volume and enthalpy as the function of the solute fraction, have been studied for different interaction parameters of the model. Our model is able to reproduce qualitatively well the excess volume and enthalpy for different aqueous solutions. For the hydrophilic case, we show that the model is able to reproduce the excess volume and enthalpy of mixtures of small alcohols and amines. The inert case reproduces the behavior of large alcohols such as propanol, butanol, and pentanol. For the last case (hydrophobic), the excess properties reproduce the behavior of ionic liquids in aqueous solution.
Adaptive building skin structures
NASA Astrophysics Data System (ADS)
Del Grosso, A. E.; Basso, P.
2010-12-01
The concept of adaptive and morphing structures has gained considerable attention in the recent years in many fields of engineering. In civil engineering very few practical applications are reported to date however. Non-conventional structural concepts like deployable, inflatable and morphing structures may indeed provide innovative solutions to some of the problems that the construction industry is being called to face. To give some examples, searches for low-energy consumption or even energy-harvesting green buildings are amongst such problems. This paper first presents a review of the above problems and technologies, which shows how the solution to these problems requires a multidisciplinary approach, involving the integration of architectural and engineering disciplines. The discussion continues with the presentation of a possible application of two adaptive and dynamically morphing structures which are proposed for the realization of an acoustic envelope. The core of the two applications is the use of a novel optimization process which leads the search for optimal solutions by means of an evolutionary technique while the compatibility of the resulting configurations of the adaptive envelope is ensured by the virtual force density method.
ERIC Educational Resources Information Center
Exceptional Parent, 1987
1987-01-01
Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)
NASA Astrophysics Data System (ADS)
Van Den Daele, W.; Malaquin, C.; Baumel, N.; Kononchuk, O.; Cristoloveanu, S.
2013-10-01
This paper revisits and adapts of the pseudo-MOSFET (Ψ-MOSFET) characterization technique for advanced fully depleted silicon on insulator (FDSOI) wafers. We review the current challenges for standard Ψ-MOSFET set-up on ultra-thin body (12 nm) over ultra-thin buried oxide (25 nm BOX) and propose a novel set-up enabling the technique on FDSOI structures. This novel configuration embeds 4 probes with large tip radius (100-200 μm) and low pressure to avoid oxide damage. Compared with previous 4-point probe measurements, we introduce a simplified and faster methodology together with an adapted Y-function. The models for parameters extraction are revisited and calibrated through systematic measurements of SOI wafers with variable film thickness. We propose an in-depth analysis of the FDSOI structure through comparison of experimental data, TCAD (Technology Computed Aided Design) simulations, and analytical modeling. TCAD simulations are used to unify previously reported thickness-dependent analytical models by analyzing the BOX/substrate potential and the electrical field in ultrathin films. Our updated analytical models are used to explain the results and to extract correct electrical parameters such as low-field electron and hole mobility, subthreshold slope, and film/BOX interface traps density.
Davidson, Rebecca K; Oines, Oivind; Madslien, Knut; Mathis, Alexander
2009-02-01
Echinococcus multilocularis, causing alveolar echinococcosis in humans, is a highly pathogenic emerging zoonotic disease in central Europe. The gold standard for the identification of this parasite in the main host, the red fox, namely identification of the adult parasite in the intestine at necropsy, is very laborious. Copro-enzyme-linked immunosorbent assay (ELISA) with confirmatory polymerase chain reaction (PCR) has been suggested as an acceptable alternative, but no commercial copro-ELISA tests are currently available and an in-house test is therefore required. Published methods for taeniid egg isolation and a multiplex PCR assay for simultaneous identification of E. multilocularis, E. granulosus and other cestodes were adapted to be carried out on pooled faecal samples from red foxes in Norway. None of the 483 fox faecal samples screened were PCR-positive for E. multilocularis, indicating an apparent prevalence of between 0% and 1.5%. The advantages and disadvantages of using the adapted method are discussed as well as the results pertaining to taeniid and non-taeniid cestodes as identified by multiplex PCR.
Development of a Countermeasure to Enhance Postflight Locomotor Adaptability
NASA Technical Reports Server (NTRS)
Bloomberg, Jacob J.
2006-01-01
Astronauts returning from space flight experience locomotor dysfunction following their return to Earth. Our laboratory is currently developing a gait adaptability training program that is designed to facilitate recovery of locomotor function following a return to a gravitational environment. The training program exploits the ability of the sensorimotor system to generalize from exposure to multiple adaptive challenges during training so that the gait control system essentially learns to learn and therefore can reorganize more rapidly when faced with a novel adaptive challenge. We have previously confirmed that subjects participating in adaptive generalization training programs using a variety of visuomotor distortions can enhance their ability to adapt to a novel sensorimotor environment. Importantly, this increased adaptability was retained even one month after completion of the training period. Adaptive generalization has been observed in a variety of other tasks requiring sensorimotor transformations including manual control tasks and reaching (Bock et al., 2001, Seidler, 2003) and obstacle avoidance during walking (Lam and Dietz, 2004). Taken together, the evidence suggests that a training regimen exposing crewmembers to variation in locomotor conditions, with repeated transitions among states, may enhance their ability to learn how to reassemble appropriate locomotor patterns upon return from microgravity. We believe exposure to this type of training will extend crewmembers locomotor behavioral repertoires, facilitating the return of functional mobility after long duration space flight. Our proposed training protocol will compel subjects to develop new behavioral solutions under varying sensorimotor demands. Over time subjects will learn to create appropriate locomotor solution more rapidly enabling acquisition of mobility sooner after long-duration space flight. Our laboratory is currently developing adaptive generalization training procedures and the
Ritz Procedure for COSMIC/NASTRAN
NASA Technical Reports Server (NTRS)
Citerley, R. L.; Woytowitz, P. J.
1985-01-01
An analysis procedure has been developed and incorporated into COSMIC/NASTRAN that permits large dynamic degree of freedom models to be processed accurately with little or no extra effort required by the user. The method employs existing capabilities without the need for approximate Guyan reduction techniques. Comparisons to existing solution procedures presently within NASTRAN are discussed.
The benefits of using customized procedure packs.
Baines, R; Colquhoun, G; Jones, N; Bateman, R
2001-01-01
Discrete item purchasing is the traditional approach for hospitals to obtain consumable supplies for theatre procedures. Although most items are relatively low cost, the management and co-ordination of the supply chain, raising orders, controlling stock, picking and delivering to each operating theatre can be complex and costly. Customized procedure packs provide a solution.
Visualizing Search Behavior with Adaptive Discriminations
Cook, Robert G.; Qadri, Muhammad A. J.
2014-01-01
We examined different aspects of the visual search behavior of a pigeon using an open-ended, adaptive testing procedure controlled by a genetic algorithm. The animal had to accurately search for and peck a gray target element randomly located from among a variable number of surrounding darker and lighter distractor elements. Display composition was controlled by a genetic algorithm involving the multivariate configuration of different parameters or genes (number of distractors, element size, shape, spacing, target brightness, and distractor brightness). Sessions were composed of random displays, testing randomized combinations of these genes, and selected displays, representing the varied descendants of displays correctly identified by the pigeon. Testing a larger number of random displays than done previously, it was found that the bird’s solution to the search task was highly stable and did not change with extensive experience in the task. The location and shape of this attractor was visualized using multivariate behavioral surfaces in which element size and the number of distractors were the most important factors controlling search accuracy and search time. The resulting visualizations of the bird’s search behavior are discussed with reference to the potential of using adaptive, open-ended experimental techniques for investigating animal cognition and their implications for Bond and Kamil’s innovative development of virtual ecologies using an analogous methodology. PMID:24370702
Pipe Cleaning Operating Procedures
Clark, D.; Wu, J.; /Fermilab
1991-01-24
This cleaning procedure outlines the steps involved in cleaning the high purity argon lines associated with the DO calorimeters. The procedure is broken down into 7 cycles: system setup, initial flush, wash, first rinse, second rinse, final rinse and drying. The system setup involves preparing the pump cart, line to be cleaned, distilled water, and interconnecting hoses and fittings. The initial flush is an off-line flush of the pump cart and its plumbing in order to preclude contaminating the line. The wash cycle circulates the detergent solution (Micro) at 180 degrees Fahrenheit through the line to be cleaned. The first rinse is then intended to rid the line of the majority of detergent and only needs to run for 30 minutes and at ambient temperature. The second rinse (if necessary) should eliminate the remaining soap residue. The final rinse is then intended to be a check that there is no remaining soap or other foreign particles in the line, particularly metal 'chips.' The final rinse should be run at 180 degrees Fahrenheit for at least 90 minutes. The filters should be changed after each cycle, paying particular attention to the wash cycle and the final rinse cycle return filters. These filters, which should be bagged and labeled, prove that the pipeline is clean. Only distilled water should be used for all cycles, especially rinsing. The level in the tank need not be excessive, merely enough to cover the heater float switch. The final rinse, however, may require a full 50 gallons. Note that most of the details of the procedure are included in the initial flush description. This section should be referred to if problems arise in the wash or rinse cycles.
Adaptive Units of Learning and Educational Videogames
ERIC Educational Resources Information Center
Moreno-Ger, Pablo; Thomas, Pilar Sancho; Martinez-Ortiz, Ivan; Sierra, Jose Luis; Fernandez-Manjon, Baltasar
2007-01-01
In this paper, we propose three different ways of using IMS Learning Design to support online adaptive learning modules that include educational videogames. The first approach relies on IMS LD to support adaptation procedures where the educational games are considered as Learning Objects. These games can be included instead of traditional content…
Bremer, P. -T.
2014-08-26
ADAPT is a topological analysis code that allow to compute local threshold, in particular relevance based thresholds for features defined in scalar fields. The initial target application is vortex detection but the software is more generally applicable to all threshold based feature definitions.
Adaptive neuro-control for large flexible structures
NASA Astrophysics Data System (ADS)
Krishankumar, K.; Montgomery, L.
Special problems related to control system design for large flexible structures include the inherent low structural damping, wide range of modal frequencies, unmodeled dynamics, and possibility of system failures. Neuro-control, which combines concepts from artificial neural networks and adaptive control is investigated as a solution to some of these problems. Specifically, the roles of neuro-controllers in learning unmodeled dynamics and adaptive control for system failures are investigated. Satisfying these objectives requires training a neural network model (neuro-model) to simulate the actual structure, and then training a neural network controller (neuro-controller) to minimize structural response resulting from an arbitrary disturbance. The neuro-controller synthesis procedure and its capabilities in adaptively controlling the structure are demonstrated using a mathematical model of an existing structure, the Advanced Control Evaluation for Systems test article located at NASA/Marshall Space Flight Center, Huntsville, Alabama. Also, the real-time adaptive capability of neuro-controllers is demonstrated via an experiment utilizing a flexible clamped-free beam equipped with an actuator that uses a bang-bang controller.
Adaptive remeshing method in 2D based on refinement and coarsening techniques
NASA Astrophysics Data System (ADS)
Giraud-Moreau, L.; Borouchaki, H.; Cherouat, A.
2007-04-01
The analysis of mechanical structures using the Finite Element Method, in the framework of large elastoplastic strains, needs frequent remeshing of the deformed domain during computation. Remeshing is necessary for two main reasons, the large geometric distortion of finite elements and the adaptation of the mesh size to the physical behavior of the solution. This paper presents an adaptive remeshing method to remesh a mechanical structure in two dimensions subjected to large elastoplastic deformations with damage. The proposed remeshing technique includes adaptive refinement and coarsening procedures, based on geometrical and physical criteria. The proposed method has been integrated in a computational environment using the ABAQUS solver. Numerical examples show the efficiency of the proposed approach.
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive- grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
Adaptation of adaptive optics systems.
NASA Astrophysics Data System (ADS)
Xin, Yu; Zhao, Dazun; Li, Chen
1997-10-01
In the paper, a concept of an adaptation of adaptive optical system (AAOS) is proposed. The AAOS has certain real time optimization ability against the variation of the brightness of detected objects m, atmospheric coherence length rO and atmospheric time constant τ by means of changing subaperture number and diameter, dynamic range, and system's temporal response. The necessity of AAOS using a Hartmann-Shack wavefront sensor and some technical approaches are discussed. Scheme and simulation of an AAOS with variable subaperture ability by use of both hardware and software are presented as an example of the system.
Collected radiochemical and geochemical procedures
Kleinberg, J
1990-05-01
This revision of LA-1721, 4th Ed., Collected Radiochemical Procedures, reflects the activities of two groups in the Isotope and Nuclear Chemistry Division of the Los Alamos National Laboratory: INC-11, Nuclear and radiochemistry; and INC-7, Isotope Geochemistry. The procedures fall into five categories: I. Separation of Radionuclides from Uranium, Fission-Product Solutions, and Nuclear Debris; II. Separation of Products from Irradiated Targets; III. Preparation of Samples for Mass Spectrometric Analysis; IV. Dissolution Procedures; and V. Geochemical Procedures. With one exception, the first category of procedures is ordered by the positions of the elements in the Periodic Table, with separate parts on the Representative Elements (the A groups); the d-Transition Elements (the B groups and the Transition Triads); and the Lanthanides (Rare Earths) and Actinides (the 4f- and 5f-Transition Elements). The members of Group IIIB-- scandium, yttrium, and lanthanum--are included with the lanthanides, elements they resemble closely in chemistry and with which they occur in nature. The procedures dealing with the isolation of products from irradiated targets are arranged by target element.
Evaluating Content Alignment in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Wise, Steven L.; Kingsbury, G. Gage; Webb, Norman L.
2015-01-01
The alignment between a test and the content domain it measures represents key evidence for the validation of test score inferences. Although procedures have been developed for evaluating the content alignment of linear tests, these procedures are not readily applicable to computerized adaptive tests (CATs), which require large item pools and do…
... Center Access to Care Toolkit EHB Access Toolkit Bariatric Surgery Procedures Bariatric surgical procedures cause weight loss by ... minimally invasive techniques (laparoscopic surgery). The most common bariatric surgery procedures are gastric bypass, sleeve gastrectomy, adjustable gastric ...
Adaptive Texture Synthesis for Large Scale City Modeling
NASA Astrophysics Data System (ADS)
Despine, G.; Colleu, T.
2015-02-01
Large scale city models textured with aerial images are well suited for bird-eye navigation but generally the image resolution does not allow pedestrian navigation. One solution to face this problem is to use high resolution terrestrial photos but it requires huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance to organize the knowledge about elementary pattern in a texture catalogue allowing attaching physical information, semantic attributes and to execute selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We experimented this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the facades.
NASA Astrophysics Data System (ADS)
Qureshi, S. U. H.
1985-09-01
Theoretical work which has been effective in improving data transmission by telephone and radio links using adaptive equalization (AE) techniques is reviewed. AE has been applied to reducing the temporal dispersion effects, such as intersymbol interference, caused by the channel accessed. Attention is given to the Nyquist telegraph transmission theory, least mean square error adaptive filtering and the theory and structure of linear receive and transmit filters for reducing error. Optimum nonlinear receiver structures are discussed in terms of optimality criteria as a function of error probability. A suboptimum receiver structure is explored in the form of a decision-feedback equalizer. Consideration is also given to quadrature amplitude modulation and transversal equalization for receivers.
NASA Technical Reports Server (NTRS)
Hacker, Scott C. (Inventor); Dean, Richard J. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)
2007-01-01
An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.
Watson, B.L.; Aeby, I.
1980-08-26
An adaptive data compression device for compressing data is described. The device has a frequency content, including a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
Gault, M. H.
1973-01-01
Certain preventable complications in the treatment of renal failure, in part related to the composition of commercially prepared peritoneal dialysis solutions, continue to occur. Solutions are advocated which would contain sodium 132, calcium 3.5, magnesium 1.5, chloride 102 and lactate or acetate 35 mEq./1., and dextrose 1.5% or about 4.25%. Elimination of 7% dextrose solutions and a reduction of the sodium and lactate concentrations should reduce complications due to hypovolemia, hyperglycemia, hypernatremia and alkalosis. Reduction in the number of solutions should simplify the procedure and perhaps reduce costs. It is anticipated that some of the changes discussed will soon be introduced by industry. PMID:4691094
Climate adaptation: Holistic thinking beyond technology
NASA Astrophysics Data System (ADS)
Boyd, Emily
2017-02-01
The countries most vulnerable to climate change impacts are among the poorest in the world. A recent evaluation of Least Developed Countries Fund projects suggests that adaptation efforts must move beyond technological solutions.
Adaptive sampling for noisy problems
Cantu-Paz, E
2004-03-26
The usual approach to deal with noise present in many real-world optimization problems is to take an arbitrary number of samples of the objective function and use the sample average as an estimate of the true objective value. The number of samples is typically chosen arbitrarily and remains constant for the entire optimization process. This paper studies an adaptive sampling technique that varies the number of samples based on the uncertainty of deciding between two individuals. Experiments demonstrate the effect of adaptive sampling on the final solution quality reached by a genetic algorithm and the computational cost required to find the solution. The results suggest that the adaptive technique can effectively eliminate the need to set the sample size a priori, but in many cases it requires high computational costs.
40 CFR 60.648 - Optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure. 1
Code of Federal Regulations, 2013 CFR
2013-07-01
... per cubic feet of gas. (3) Starch solution. Rub into a thin paste about one teaspoonful of wheat starch with a little water; pour into about a pint of boiling water; stir; let cool and decant off clear solution. Make fresh solution every few days. (d) Procedure. Fill leveling bulb with starch solution....
40 CFR 60.648 - Optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure. 1
Code of Federal Regulations, 2014 CFR
2014-07-01
... per cubic feet of gas. (3) Starch solution. Rub into a thin paste about one teaspoonful of wheat starch with a little water; pour into about a pint of boiling water; stir; let cool and decant off clear solution. Make fresh solution every few days. (d) Procedure. Fill leveling bulb with starch solution....
Structured programming: Principles, notation, procedure
NASA Technical Reports Server (NTRS)
JOST
1978-01-01
Structured programs are best represented using a notation which gives a clear representation of the block encapsulation. In this report, a set of symbols which can be used until binding directives are republished is suggested. Structured programming also allows a new method of procedure for design and testing. Programs can be designed top down, that is, they can start at the highest program plane and can penetrate to the lowest plane by step-wise refinements. The testing methodology also is adapted to this procedure. First, the highest program plane is tested, and the programs which are not yet finished in the next lower plane are represented by so-called dummies. They are gradually replaced by the real programs.
Adaptive Process Control in Rubber Industry.
Brause, Rüdiger W; Pietruschka, Ulf
1998-01-01
This paper describes the problems and an adaptive solution for process control in rubber industry. We show that the human and economical benefits of an adaptive solution for the approximation of process parameters are very attractive. The modeling of the industrial problem is done by the means of artificial neural networks. For the example of the extrusion of a rubber profile in tire production our method shows good resuits even using only a few training samples.
Strategies: Office Procedures with Communications Math.
ERIC Educational Resources Information Center
Wyoming Univ., Laramie. Coll. of Education.
This booklet contains 30 one-page strategies for teaching mathematical skills needed for office procedures. All the strategies are suitable for or can be adapted for special needs students. Each strategy is a classroom activity and is matched with the skill that it develops and its technology/content area (communications and/or mathematics). Some…
Analyzing Hedges in Verbal Communication: An Adaptation-Based Approach
ERIC Educational Resources Information Center
Wang, Yuling
2010-01-01
Based on Adaptation Theory, the article analyzes the production process of hedges. The procedure consists of the continuous making of choices in linguistic forms and communicative strategies. These choices are made just for adaptation to the contextual correlates. Besides, the adaptation process is dynamic, intentional and bidirectional.
Implementation and Measurement Efficiency of Multidimensional Computerized Adaptive Testing
ERIC Educational Resources Information Center
Wang, Wen-Chung; Chen, Po-Hsi
2004-01-01
Multidimensional adaptive testing (MAT) procedures are proposed for the measurement of several latent traits by a single examination. Bayesian latent trait estimation and adaptive item selection are derived. Simulations were conducted to compare the measurement efficiency of MAT with those of unidimensional adaptive testing and random…
NASA Astrophysics Data System (ADS)
Robock, A.
2010-12-01
Geoengineering by carbon capture and storage (CCS) or solar radiation management (SRM) has been suggested as a possible solution to global warming. However, it is clear that mitigation should be the main response of society, quickly reducing emissions of greenhouse gases. While there is no concerted mitigation effort yet, even if the world moves quickly to reduce emissions, the gases that are already in the atmosphere will continue to warm the planet. CCS, if a system that is efficacious, safe, and not costly could be developed, would slowly remove CO2 from the atmosphere, but this will have a gradual effect on concentrations. SRM, if a system could be developed to produce stratospheric aerosols or brighten marine stratocumulus clouds, could be quickly effective in cooling, but could also have so many negative side effects that it would be better not do it at all. This means that, in spite of a concerted effort at mitigation and to develop CCS, there will be a certain amount of global warming in our future. Because CCS geoengineering will be too slow and SRM geoengineering is not a practical or safe solution to geoengineering, adaptation will be needed. Our current understanding of geoengineering makes it even more important to focus on adaptation responses to global warming.
Countermeasures to Enhance Sensorimotor Adaptability
NASA Technical Reports Server (NTRS)
Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Brady, R. A.; Batson, C. C.; Miller, C. A.; Cohen, H. S.
2011-01-01
adaptability. These results indicate that SA training techniques can be added to existing treadmill exercise equipment and procedures to produce a single integrated countermeasure system to improve performance of astro/cosmonauts during prolonged exploratory space missions.
Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G
2016-01-01
This article presents an Evolution Strategy (ES)--based algorithm, designed to self-adapt its mutation operators, guiding the search into the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of the Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different [Formula: see text]-Hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability in combining the application of move operations from distinct neighborhood structures along the optimization. The results gathered and reported in this article represent a collective evidence of the performance of the method in challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability of adapting the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework for solving other combinatorial optimization problems.
Computerized procedures system
Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.
2010-10-12
An online data driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges and revisions are version controlled. The procedures run on a server that is platform independent of the user workstations that the server interfaces with and the user interface supports diverse procedural views.
Designing Flightdeck Procedures
NASA Technical Reports Server (NTRS)
Barshi, Immanuel; Mauro, Robert; Degani, Asaf; Loukopoulou, Loukia
2016-01-01
The primary goal of this document is to provide guidance on how to design, implement, and evaluate flight deck procedures. It provides a process for developing procedures that meet clear and specific requirements. This document provides a brief overview of: 1) the requirements for procedures, 2) a process for the design of procedures, and 3) a process for the design of checklists. The brief overview is followed by amplified procedures that follow the above steps and provide details for the proper design, implementation and evaluation of good flight deck procedures and checklists.
Adaptive evolution of molecular phenotypes
NASA Astrophysics Data System (ADS)
Held, Torsten; Nourmohammad, Armita; Lässig, Michael
2014-09-01
Molecular phenotypes link genomic information with organismic functions, fitness, and evolution. Quantitative traits are complex phenotypes that depend on multiple genomic loci. In this paper, we study the adaptive evolution of a quantitative trait under time-dependent selection, which arises from environmental changes or through fitness interactions with other co-evolving phenotypes. We analyze a model of trait evolution under mutations and genetic drift in a single-peak fitness seascape. The fitness peak performs a constrained random walk in the trait amplitude, which determines the time-dependent trait optimum in a given population. We derive analytical expressions for the distribution of the time-dependent trait divergence between populations and of the trait diversity within populations. Based on this solution, we develop a method to infer adaptive evolution of quantitative traits. Specifically, we show that the ratio of the average trait divergence and the diversity is a universal function of evolutionary time, which predicts the stabilizing strength and the driving rate of the fitness seascape. From an information-theoretic point of view, this function measures the macro-evolutionary entropy in a population ensemble, which determines the predictability of the evolutionary process. Our solution also quantifies two key characteristics of adapting populations: the cumulative fitness flux, which measures the total amount of adaptation, and the adaptive load, which is the fitness cost due to a population's lag behind the fitness peak.
Adaptive Logistics Support for Combat
1990-09-01
is clear that under some circumstances such procedures can be useful adaptations. w C. GOALS AND SCOPE The present work attempts to exploit stochastic...problem directly. Ga,,er, Isaacson and Pilnick [Ref. 9] exploit these models and presented various applications. The results are summarized here. a...plan, for both FCFS and LAIN. The numerical example attempts to exploit a situation where the modules show large diversity in terms of failure and
Bullock, Jonathan S.; Harper, William L.; Peck, Charles G.
1976-06-22
This invention is directed to an aqueous halogen-free electromarking solution which possesses the capacity for marking a broad spectrum of metals and alloys selected from different classes. The aqueous solution comprises basically the nitrate salt of an amphoteric metal, a chelating agent, and a corrosion-inhibiting agent.
Time-dependent grid adaptation for meshes of triangles and tetrahedra
NASA Technical Reports Server (NTRS)
Rausch, Russ D.
1993-01-01
This paper presents in viewgraph form a method of optimizing grid generation for unsteady CFD flow calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich in regions of relatively large errors and to locally coarsen in regions of relatively small errors. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high aspect ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown, but in general requires user supervision for a more efficient solution.
Adaptive implicit-explicit and parallel element-by-element iteration schemes
NASA Astrophysics Data System (ADS)
Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.
Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.
Adaptive implicit-explicit and parallel element-by-element iteration schemes
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.
1989-01-01
Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.
Parallel Adaptive Mesh Refinement
Diachin, L; Hornung, R; Plassmann, P; WIssink, A
2005-03-04
As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of a impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the
Alternative Refractive Surgery Procedures
... LASIK Alternative Refractive Surgery Procedures Laser Surgery Recovery Alternative Refractive Surgery Procedures Dec. 12, 2015 Today's refractive ... that releases controlled amounts of radio frequency (RF) energy, instead of a laser, to apply heat to ...
... Stretch Marks Sun-damaged Skin Unwanted Hair Unwanted Tattoos Varicose Veins Vitiligo Wrinkles Treatments and Procedures Ambulatory ... Stretch Marks Sun-damaged Skin Unwanted Hair Unwanted Tattoos Varicose Veins Vitiligo Wrinkles Treatments and Procedures Ambulatory ...
Pelvic exenteration – reconsidering the procedure
Bacalbasa, N; Balescu, I
2015-01-01
Pelvic exenteration remains one of the most destructive surgical procedures in gynecologic oncology, performed in patients with locally advanced malignancies who were considered for a long time as unresectable. However, for these patients, an aggressive surgical approach seems to be the only potential curative solution. This is a literature review of the most important studies, which analyzes the benefits and the secondary risks of this demanding procedure. PMID:25866569
Optimum Testing Procedures for System Diagnosis and Fault Isolation.
1981-03-31
fault detection and isolation procedures are directed...Conference, Vol. 32 (1968), pp. 529-534. 4. Cohn, H. Y. and Ott, G., "Design of Adaptive Procedures for Fault Detection and Isolation ," IEEE... detection and isolation Built-in-test Optimum sequenceof testing Branch-and Bound 20. ABSTRACT (Contin... on revera. .ide f nwcider’ and identify
Adapting agriculture to climate change.
Howden, S Mark; Soussana, Jean-François; Tubiello, Francesco N; Chhetri, Netra; Dunlop, Michael; Meinke, Holger
2007-12-11
The strong trends in climate change already evident, the likelihood of further changes occurring, and the increasing scale of potential climate impacts give urgency to addressing agricultural adaptation more coherently. There are many potential adaptation options available for marginal change of existing agricultural systems, often variations of existing climate risk management. We show that implementation of these options is likely to have substantial benefits under moderate climate change for some cropping systems. However, there are limits to their effectiveness under more severe climate changes. Hence, more systemic changes in resource allocation need to be considered, such as targeted diversification of production systems and livelihoods. We argue that achieving increased adaptation action will necessitate integration of climate change-related issues with other risk factors, such as climate variability and market risk, and with other policy domains, such as sustainable development. Dealing with the many barriers to effective adaptation will require a comprehensive and dynamic policy approach covering a range of scales and issues, for example, from the understanding by farmers of change in risk profiles to the establishment of efficient markets that facilitate response strategies. Science, too, has to adapt. Multidisciplinary problems require multidisciplinary solutions, i.e., a focus on integrated rather than disciplinary science and a strengthening of the interface with decision makers. A crucial component of this approach is the implementation of adaptation assessment frameworks that are relevant, robust, and easily operated by all stakeholders, practitioners, policymakers, and scientists.
Properties of Some Bayesian Scoring Procedures for Computerized Adaptive Tests
1987-08-01
444))1 irdj 4 m -I Io, t ) ht7 , e R i 10208 -1cu* ~ I ia t ’inir 22 302-)28 o , S..:€ . -. S 5 ** -. 5l . . . . . ., "- " 0 , ABSTRACT The computerized...unlimited. 4 PERFORFsNG ORGA%, " ’ON REPORT \\%IMBER(SI 5 MAONITORING ORGANIZATION REPORT NUMBER(S) , - ’. x-. CRM 87-161 6j NAMEOFPEFORMIGORGANiZA’ON bo...Month, Day) 5 PAGE COUNT Final FROM TO August 1987 24 T6 SUPPLEMENTARY NOTATION 17 COSATI CODES T8 SUBJECT TERMS (Continue on reverse if necessary and
Crew procedures development techniques
NASA Technical Reports Server (NTRS)
Arbet, J. D.; Benbow, R. L.; Hawk, M. L.; Mangiaracina, A. A.; Mcgavern, J. L.; Spangler, M. C.
1975-01-01
The study developed requirements, designed, developed, checked out and demonstrated the Procedures Generation Program (PGP). The PGP is a digital computer program which provides a computerized means of developing flight crew procedures based on crew action in the shuttle procedures simulator. In addition, it provides a real time display of procedures, difference procedures, performance data and performance evaluation data. Reconstruction of displays is possible post-run. Data may be copied, stored on magnetic tape and transferred to the document processor for editing and documentation distribution.
Topical Hazard Evaluation Program Procedural Guide.
1982-01-01
conditions and are percent (w/v) Oil of tion reaction under test not expected to cause a Bergamot solution conditions. photochemical irritation...photochemical skin irritant ( Bergamot oil). d. All compounds-are handled with caution. Current test procedures cannot eliminate the possibility of individual...percent ethyl alcohol. One additional compound applied along with the test compounds is a 10 percent solution (w/v) of Bergamot oil" in 95 percent ethyl
Clause Elimination Procedures for CNF Formulas
NASA Astrophysics Data System (ADS)
Heule, Marijn; Järvisalo, Matti; Biere, Armin
We develop and analyze clause elimination procedures, a specific family of simplification techniques for conjunctive normal form (CNF) formulas. Extending known procedures such as tautology, subsumption, and blocked clause elimination, we introduce novel elimination procedures based on hidden and asymmetric variants of these techniques. We analyze the resulting nine (including five new) clause elimination procedures from various perspectives: size reduction, BCP-preservance, confluence, and logical equivalence. For the variants not preserving logical equivalence, we show how to reconstruct solutions to original CNFs from satisfying assignments to simplified CNFs. We also identify a clause elimination procedure that does a transitive reduction of the binary implication graph underlying any CNF formula purely on the CNF level.
Research in digital adaptive flight controllers
NASA Technical Reports Server (NTRS)
Kaufman, H.
1976-01-01
A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Both explicit controllers which directly utilize parameter identification and implicit controllers which do not require identification were considered. Extensive analytical and simulation efforts resulted in the recommendation of two explicit digital adaptive flight controllers. Interface weighted least squares estimation procedures with control logic were developed using either optimal regulator theory or with control logic based upon single stage performance indices.
Modified Sham Feeding of Sweet Solutions in Women with and without Bulimia Nervosa
Klein, DA; Schebendach, JE; Brown, AJ; Smith, GP; Walsh, BT
2009-01-01
Although it is possible that binge eating in humans is due to increased responsiveness of orosensory excitatory controls of eating, there is no direct evidence for this because food ingested during a test meal stimulates both orosensory excitatory and postingestive inhibitory controls. To overcome this problem, we adapted the modified sham feeding technique (MSF) to measure the orosensory excitatory control of intake of a series of sweetened solutions. Previously published data showed the feasibility of a “sip-and-spit” procedure in nine healthy control women using solutions flavored with cherry Kool Aid® and sweetened with sucrose (0-20%)1. The current study extended this technique to measure the intake of artificially sweetened solutions in women with bulimia nervosa (BN) and in women with no history of eating disorders. Ten healthy women and 11 women with BN were randomly presented with cherry Kool Aid® solutions sweetened with five concentrations of aspartame (0, 0.01, 0.03, 0.08 and 0.28%) in a closed opaque container fitted with a straw. They were instructed to sip as much as they wanted of the solution during 1-minute trials and to spit the fluid out into another opaque container. Across all subjects, presence of sweetener increased intake (p<0.001). Women with BN sipped 40.5-53.1% more of all solutions than controls (p=0.03 for total intake across all solutions). Self-report ratings of liking, wanting and sweetness of solutions did not differ between groups. These results support the feasibility of a MSF procedure using artificially sweetened solutions, and the hypothesis that the orosensory stimulation of MSF provokes larger intake in women with BN than controls. PMID:18773914
Local adaptive tone mapping for video enhancement
NASA Astrophysics Data System (ADS)
Lachine, Vladimir; Dai, Min (.
2015-03-01
As new technologies like High Dynamic Range cameras, AMOLED and high resolution displays emerge on consumer electronics market, it becomes very important to deliver the best picture quality for mobile devices. Tone Mapping (TM) is a popular technique to enhance visual quality. However, the traditional implementation of Tone Mapping procedure is limited by pixel's value to value mapping, and the performance is restricted in terms of local sharpness and colorfulness. To overcome the drawbacks of traditional TM, we propose a spatial-frequency based framework in this paper. In the proposed solution, intensity component of an input video/image signal is split on low pass filtered (LPF) and high pass filtered (HPF) bands. Tone Mapping (TM) function is applied to LPF band to improve the global contrast/brightness, and HPF band is added back afterwards to keep the local contrast. The HPF band may be adjusted by a coring function to avoid noise boosting and signal overshooting. Colorfulness of an original image may be preserved or enhanced by chroma components correction by means of saturation function. Localized content adaptation is further improved by dividing an image to a set of non-overlapped regions and modifying each region individually. The suggested framework allows users to implement a wide range of tone mapping applications with perceptional local sharpness and colorfulness preserved or enhanced. Corresponding hardware circuit may be integrated in camera, video or display pipeline with minimal hardware budget
Milne, Roger Brent
1995-12-01
This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.
Making Intelligent Systems Adaptive. (Revision)
1988-10-01
eventually produce solutions. BY contrast, human beinge and other intelligent animls continuously adapt to the demands and opportunities presented by a...such as monitoring critically ill medical patients or controlling a manufacturing process. Following the model set by human intelligence, we define...signs probabilistically, using a belief network, as well as from first principles, using explicit models of system structure and function. Concurrent
A grid generation and flow solution method for the Euler equations on unstructured grids
Anderson, W.K. )
1994-01-01
A grid generation and flow solution algorithm for the Euler equations on unstructured grids is presented. The grid generation scheme utilizes Delaunay triangulation and self-generates the field points for the mesh based on cell aspect ratios and allows for clustering near solid surfaces. The flow solution method is an implicit algorithm in which the linear set or equations arising at each time step is solved using a Gauss Seidel procedure which is completely vectorizable. In addition, a study is conducted to examine the number of subiterations required for good convergence of the overall algorithm. Grid generation results are shown in two dimensions for a NACA 0012 airfoil as well as two-element configuration. Flow solution results are shown for two-dimensional flow over the NACA 0012 airfoil and for a two-element configuration in which the solution has been obtained through an adaptation procedure and compared to an exact solution. Preliminary three-dimensional results are also shown in which subsonic flow over a business jet is computed. 31 refs. 30 figs.
Grid generation and flow solution method for Euler equations on unstructured grids
NASA Technical Reports Server (NTRS)
Anderson, W. Kyle
1992-01-01
A grid generation and flow solution algorithm for the Euler equations on unstructured grids is presented. The grid generation scheme, which uses Delaunay triangulation, generates the field points for the mesh based on cell aspect ratios and allows clustering of grid points near solid surfaces. The flow solution method is an implicit algorithm in which the linear set of equations arising at each time step is solved using a Gauss-Seidel procedure that is completely vectorizable. Also, a study is conducted to examine the number of subiterations required for good convergence of the overall algorithm. Grid generation results are shown in two dimensions for an NACA 0012 airfoil as well as a two element configuration. Flow solution results are shown for a two dimensional flow over the NACA 0012 airfoil and for a two element configuration in which the solution was obtained through an adaptation procedure and compared with an exact solution. Preliminary three dimensional results also are shown in which the subsonic flow over a business jet is computed.
Control of microorganisms in flowing nutrient solutions
NASA Astrophysics Data System (ADS)
Evans, R. D.
1994-11-01
Controlling microorganisms in flowing nutrient solutions involves different techniques when targeting the nutrient solution, hardware surfaces in contact with the solution, or the active root zone. This review presents basic principles and applications of a number of treatment techniques, including disinfection by chemicals, ultrafiltration, ultrasonics, and heat treatment, with emphasis on UV irradiation and ozone treatment. Procedures for control of specific pathogens by nutrient solution conditioning also are reviewed.
Control of microorganisms in flowing nutrient solutions.
Evans, R D
1994-11-01
Controlling microorganisms in flowing nutrient solutions involves different techniques when targeting the nutrient solution, hardware surfaces in contact with the solution, or the active root zone. This review presents basic principles and applications of a number of treatment techniques, including disinfection by chemicals, ultrafiltration, ultrasonics, and heat treatment, with emphasis on UV irradiation and ozone treatment. Procedures for control of specific pathogens by nutrient solution conditioning also are reviewed.
NASA Astrophysics Data System (ADS)
Lee, Go-Eun; Kim, Il-Ho; Lim, Young Soo; Seo, Won-Seon; Choi, Byeong-Jun; Hwang, Chang-Won
2014-06-01
Since Bi2Te3 and Bi2Se3 have the same crystal structure, they form a homogeneous solid solution. Therefore, the thermal conductivity of the solid solution can be reduced by phonon scattering. The thermoelectric figure of merit can be improved by controlling the carrier concentration through doping. In this study, Bi2Te2.85Se0.15:D m (D: dopants such as I, Cu, Ag, Ni, Zn) solid solutions were prepared by encapsulated melting and hot pressing. All specimens exhibited n-type conduction in the measured temperature range (323 K to 523 K), and their electrical conductivities decreased slightly with increasing temperature. The undoped solid solution showed a carrier concentration of 7.37 × 1019 cm-3, power factor of 2.1 mW m-1 K-1, and figure of merit of 0.56 at 323 K. The figure of merit ( ZT) was improved due to the increased power factor by I, Cu, and Ag dopings, and maximum ZT values were obtained as 0.76 at 323 K for Bi2Te2.85Se0.15:Cu0.01 and 0.90 at 423 K for Bi2Te2.85Se0.15:I0.005. However, the thermoelectric properties of Ni- and Zn-doped solid solutions were not enhanced.
Pyroshock prediction procedures
NASA Astrophysics Data System (ADS)
Piersol, Allan G.
2002-05-01
Given sufficient effort, pyroshock loads can be predicted by direct analytical procedures using Hydrocodes that analytically model the details of the pyrotechnic explosion and its interaction with adjacent structures, including nonlinear effects. However, it is more common to predict pyroshock environments using empirical procedures based upon extensive studies of past pyroshock data. Various empirical pyroshock prediction procedures are discussed, including those developed by the Jet Propulsion Laboratory, Lockheed-Martin, and Boeing.
Candidate CDTI procedures study
NASA Technical Reports Server (NTRS)
Ace, R. E.
1981-01-01
A concept with potential for increasing airspace capacity by involving the pilot in the separation control loop is discussed. Some candidate options are presented. Both enroute and terminal area procedures are considered and, in many cases, a technologically advanced Air Traffic Control structure is assumed. Minimum display characteristics recommended for each of the described procedures are presented. Recommended sequencing of the operational testing of each of the candidate procedures is presented.
Procedural pediatric dermatology.
Metz, Brandie J
2013-04-01
Due to many factors, including parental anxiety, a child's inability to understand the necessity of a procedure and a child's unwillingness to cooperate, it can be much more challenging to perform dermatologic procedures in children. This article reviews pre-procedural preparation of patients and parents, techniques for minimizing injection-related pain and optimal timing of surgical intervention. The risks and benefits of general anesthesia in the setting of pediatric dermatologic procedures are discussed. Additionally, the surgical approach to a few specific types of birthmarks is addressed.
Modified arthroscopic Brostrom procedure.
Lui, Tun Hing
2015-09-01
The open modified Brostrom anatomic repair technique is widely accepted as the reference standard for lateral ankle stabilization. However, there is high incidence of intra-articular pathologies associated with chronic lateral ankle instability which may not be addressed by an isolated open Brostrom procedure. Arthroscopic Brostrom procedure with suture anchor has been described for anatomic repair of chronic lateral ankle instability and management of intra-articular lesions. However, the complication rates seemed to be higher than open Brostrom procedure. Modification of the arthroscopic Brostrom procedure with the use of bone tunnel may reduce the risk of certain complications.
Zijp, Michiel C; Posthuma, Leo; Wintersen, Arjen; Devilee, Jeroen; Swartjes, Frank A
2016-05-01
This paper introduces Solution-focused Sustainability Assessment (SfSA), provides practical guidance formatted as a versatile process framework, and illustrates its utility for solving a wicked environmental management problem. Society faces complex and increasingly wicked environmental problems for which sustainable solutions are sought. Wicked problems are multi-faceted, and deriving of a management solution requires an approach that is participative, iterative, innovative, and transparent in its definition of sustainability and translation to sustainability metrics. We suggest to add the use of a solution-focused approach. The SfSA framework is collated from elements from risk assessment, risk governance, adaptive management and sustainability assessment frameworks, expanded with the 'solution-focused' paradigm as recently proposed in the context of risk assessment. The main innovation of this approach is the broad exploration of solutions upfront in assessment projects. The case study concerns the sustainable management of slightly contaminated sediments continuously formed in ditches in rural, agricultural areas. This problem is wicked, as disposal of contaminated sediment on adjacent land is potentially hazardous to humans, ecosystems and agricultural products. Non-removal would however reduce drainage capacity followed by increased risks of flooding, while contaminated sediment removal followed by offsite treatment implies high budget costs and soil subsidence. Application of the steps in the SfSA-framework served in solving this problem. Important elements were early exploration of a wide 'solution-space', stakeholder involvement from the onset of the assessment, clear agreements on the risk and sustainability metrics of the problem and on the interpretation and decision procedures, and adaptive management. Application of the key elements of the SfSA approach eventually resulted in adoption of a novel sediment management policy. The stakeholder
Higher-order numerical solutions using cubic splines
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1976-01-01
A cubic spline collocation procedure was developed for the numerical solution of partial differential equations. This spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy of a nonuniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, are presented for several model problems.
NASA Astrophysics Data System (ADS)
Eliyan, Faysal Fayez; Alfantazi, Akram
2014-11-01
This paper presents an electrochemical study on the corrosion behavior of API-X100 steel, heat-treated to have microstructures similar to those of the heat-affected zones (HAZs) of pipeline welding, in bicarbonate-CO2 saturated solutions. The corrosion reactions, onto the surface and through the passive films, are simulated by cyclic voltammetry. The interrelation between bicarbonate concentration and CO2 hydration is analyzed during the filming process at the open-circuit potentials. In dilute bicarbonate solutions, H2CO3 drives more dominantly the cathodic reduction and the passive films form slowly. In the concentrated solutions, bicarbonate catalyzes both the anodic and cathodic reactions, only initially, after which it drives a fast-forming thick passivation that inhibits the underlying dissolution and impedes the cathodic reduction. The significance of the substrate is as critical as that of passivation in controlling the course of the corrosion reactions in the dilute solutions. For fast-cooled (heat treatment) HAZs, its metallurgical significance becomes more comparable to that of slower-cooled HAZs as the bicarbonate concentration is higher.
ERIC Educational Resources Information Center
Starkman, Neal
2007-01-01
Poor classroom acoustics are impairing students' hearing and their ability to learn. However, technology has come up with a solution: tools that focus voices in a way that minimizes intrusive ambient noise and gets to the intended receiver--not merely amplifying the sound, but also clarifying and directing it. One provider of classroom audio…
Krawczyk, Gerhard Erich; Miller, Kevin Michael
2011-07-26
There is provided a method of making a polymer solution comprising polymerizing one or more monomer in a solvent, wherein said monomer comprises one or more ethylenically unsaturated monomer that is a multi-functional Michael donor, and wherein said solvent comprises 40% or more by weight, based on the weight of said solvent, one or more multi-functional Michael donor.
A multigrid method for steady Euler equations on unstructured adaptive grids
NASA Technical Reports Server (NTRS)
Riemslagh, Kris; Dick, Erik
1993-01-01
A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first order accurate inner iteration and a second-order correction performed only on the finest grid, is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured a Jacobi type is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removement. The adaption cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.
Procedural Learning and Dyslexia
ERIC Educational Resources Information Center
Nicolson, R. I.; Fawcett, A. J.; Brookes, R. L.; Needle, J.
2010-01-01
Three major "neural systems", specialized for different types of information processing, are the sensory, declarative, and procedural systems. It has been proposed ("Trends Neurosci.",30(4), 135-141) that dyslexia may be attributable to impaired function in the procedural system together with intact declarative function. We provide a brief…
ERIC Educational Resources Information Center
Davis, Kevin; Poston, George
This manual provides information on the enucleation procedure (removal of the eyes for organ banks). An introductory section focuses on the anatomy of the eye and defines each of the parts. Diagrams of the eye are provided. A list of enucleation materials follows. Other sections present outlines of (1) a sterile procedure; (2) preparation for eye…
Connectionist Learning Procedures.
ERIC Educational Resources Information Center
Hinton, Geoffrey E.
A major goal of research on networks of neuron-like processing units is to discover efficient learning procedures that allow these networks to construct complex internal representations of their environment. The learning procedures must be capable of modifying the connection strengths in such a way that internal units which are not part of the…
An adaptive gridless methodology in one dimension
Snyder, N.T.; Hailey, C.E.
1996-09-01
Gridless numerical analysis offers great potential for accurately solving for flow about complex geometries or moving boundary problems. Because gridless methods do not require point connection, the mesh cannot twist or distort. The gridless method utilizes a Taylor series about each point to obtain the unknown derivative terms from the current field variable estimates. The governing equation is then numerically integrated to determine the field variables for the next iteration. Effects of point spacing and Taylor series order on accuracy are studied, and they follow similar trends of traditional numerical techniques. Introducing adaption by point movement using a spring analogy allows the solution method to track a moving boundary. The adaptive gridless method models linear, nonlinear, steady, and transient problems. Comparison with known analytic solutions is given for these examples. Although point movement adaption does not provide a significant increase in accuracy, it helps capture important features and provides an improved solution.
Computerized Adaptive Testing with Item Cloning.
ERIC Educational Resources Information Center
Glas, Cees A. W.; van der Linden, Wim J.
2003-01-01
Developed a multilevel item response (IRT) model that allows for differences between the distributions of item parameters of families of item clones. Results from simulation studies based on an item pool from the Law School Admission Test illustrate the accuracy of the item pool calibration and adaptive testing procedures based on the model. (SLD)
Adapting Aquatic Circuit Training for Special Populations.
ERIC Educational Resources Information Center
Thome, Kathleen
1980-01-01
The author discusses how land activities can be adapted to water so that individuals with handicapping conditions can participate in circuit training activities. An initial section lists such organizational procedures as providing vocal and/or visual cues for activities, having assistants accompany the performers throughout the circuit, and…
Adaptive Discontinuous Galerkin Approximation to Richards' Equation
NASA Astrophysics Data System (ADS)
Li, H.; Farthing, M. W.; Miller, C. T.
2006-12-01
Due to the occurrence of large gradients in fluid pressure as a function of space and time resulting from nonlinearities in closure relations, numerical solutions to Richards' equations are notoriously difficult for certain media properties and auxiliary conditions that occur routinely in describing physical systems of interest. These difficulties have motivated a substantial amount of work aimed at improving numerical approximations to this physically important and mathematically rich model. In this work, we build upon recent advances in temporal and spatial discretization methods by developing spatially and temporally adaptive solution approaches based upon the local discontinuous Galerkin method in space and a higher order backward difference method in time. Spatial step-size adaption, h adaption, approaches are evaluated and a so-called hp-adaption strategy is considered as well, which adjusts both the step size and the order of the approximation. Solution algorithms are advanced and performance is evaluated. The spatially and temporally adaptive approaches are shown to be robust and offer significant increases in computational efficiency compared to similar state-of-the-art methods that adapt in time alone. In addition, we extend the proposed methods to two dimensions and provide preliminary numerical results.
Classical FEM-BEM coupling methods: nonlinearities, well-posedness, and adaptivity
NASA Astrophysics Data System (ADS)
Aurada, Markus; Feischl, Michael; Führer, Thomas; Karkulik, Michael; Melenk, Jens Markus; Praetorius, Dirk
2013-04-01
We consider a (possibly) nonlinear interface problem in 2D and 3D, which is solved by use of various adaptive FEM-BEM coupling strategies, namely the Johnson-Nédélec coupling, the Bielak-MacCamy coupling, and Costabel's symmetric coupling. We provide a framework to prove that the continuous as well as the discrete Galerkin solutions of these coupling methods additionally solve an appropriate operator equation with a Lipschitz continuous and strongly monotone operator. Therefore, the original coupling formulations are well-defined, and the Galerkin solutions are quasi-optimal in the sense of a Céa-type lemma. For the respective Galerkin discretizations with lowest-order polynomials, we provide reliable residual-based error estimators. Together with an estimator reduction property, we prove convergence of the adaptive FEM-BEM coupling methods. A key point for the proof of the estimator reduction are novel inverse-type estimates for the involved boundary integral operators which are advertized. Numerical experiments conclude the work and compare performance and effectivity of the three adaptive coupling procedures in the presence of generic singularities.
Residual bone growth after lengthening procedures.
Journeau, Pierre; Lascombes, Pierre; Barbier, Dominique; Popkov, Dmitry
2016-12-01
The prognosis of limb length discrepancy is a major subject in paediatric orthopaedic surgery. The strategy depends on the prognosis and must be adapted to each patient. The residual growth of the lengthened segment often remains unknown, but is dependent on age, the percentage of lengthening and other factors. Using a large cohort of 150 children who had undergone bone lengthening procedures, we describe five patterns of post-intervention growth and identify factors that are favourable for normal residual growth. The criteria for bone lengthening which should maintain good residual growth are-bone age at lengthening should be before the pubertal growth spurt; the interval between two lengthening procedures should be over three years; the percentage of lengthening should be <30% of the initial segment; and no more than two lengthening procedures should be carried out during infancy.
Application of the Flood-IMPAT procedure in the Valle d'Aosta Region, Italy
NASA Astrophysics Data System (ADS)
Minucci, Guido; Mendoza, Marina Tamara; Molinari, Daniela; Atun, Funda; Menoni, Scira; Ballio, Francesco
2016-04-01
Flood Risk Management Plans (FRMPs) established by European "Floods" Directive (Directive 2007/60/EU) to Member States in order to address all aspects of flood risk management, taking into account costs and benefits of proposed mitigation tools must be reviewed by the same law every six years. This is aimed at continuously increasing the effectiveness of risk management, on the bases of the most advanced knowledge of flood risk and most (economically) feasible solutions, also taking into consideration achievements of the previous management cycle. Within this context, the Flood-IMPAT (i.e. Integrated Meso-scale Procedure to Assess Territorial flood risk) procedure has been developed aiming at overcoming limits of risk maps produced by the Po River Basin Authority and adopted for the first version of the Po River FRMP. The procedure allows the estimation of flood risk at the meso-scale and it is characterized by three main peculiarities. First is its feasibility for the entire Italian territory. Second is the possibility to express risk in monetary terms (i.e. expected damage), at least for those categories of damage for which suitable models are available. Finally, independent modules compose the procedure: each module allows the estimation of a certain type of damage (i.e. direct, indirect, intangibles) on a certain sector (e.g. residential, industrial, agriculture, environment, etc.) separately, guaranteeing flexibility in the implementation. This paper shows the application of the Flood-IMPAT procedure and the recent advancements in the procedure, aiming at increasing its reliability and usability. Through a further implementation of the procedure in the Dora Baltea River Basin (North of Italy), it was possible to test the sensitivity of risk estimates supplied by Flood-IMPAT with respect to different damage models and different approaches for the estimation of assets at risk. Risk estimates were also compared with observed damage data in the investigated areas
Baltayiannis, Nikolaos; Michail, Chandrinos; Lazaridis, George; Anagnostopoulos, Dimitrios; Baka, Sofia; Mpoukovinas, Ioannis; Karavasilis, Vasilis; Lampaki, Sofia; Papaiwannou, Antonis; Karavergou, Anastasia; Kioumis, Ioannis; Pitsiou, Georgia; Katsikogiannis, Nikolaos; Tsakiridis, Kosmas; Rapti, Aggeliki; Trakada, Georgia; Zissimopoulos, Athanasios; Zarogoulidis, Konstantinos
2015-01-01
Minimally invasive procedures, which include laparoscopic surgery, use state-of-the-art technology to reduce the damage to human tissue when performing surgery. Minimally invasive procedures require small “ports” from which the surgeon inserts thin tubes called trocars. Carbon dioxide gas may be used to inflate the area, creating a space between the internal organs and the skin. Then a miniature camera (usually a laparoscope or endoscope) is placed through one of the trocars so the surgical team can view the procedure as a magnified image on video monitors in the operating room. Specialized equipment is inserted through the trocars based on the type of surgery. There are some advanced minimally invasive surgical procedures that can be performed almost exclusively through a single point of entry—meaning only one small incision, like the “uniport” video-assisted thoracoscopic surgery (VATS). Not only do these procedures usually provide equivalent outcomes to traditional “open” surgery (which sometimes require a large incision), but minimally invasive procedures (using small incisions) may offer significant benefits as well: (I) faster recovery; (II) the patient remains for less days hospitalized; (III) less scarring and (IV) less pain. In our current mini review we will present the minimally invasive procedures for thoracic surgery. PMID:25861610
Adaptivity in space and time for shallow water equations
NASA Astrophysics Data System (ADS)
Morandi Cecchi, M.; Marcuzzi, F.
1999-09-01
In this paper, adaptive algorithms for time and space discretizations are added to an existing solution method previously applied to the Venice Lagoon Tidal Circulation problem. An analysis of the interactions between space and time discretizations adaptation algorithms is presented. In particular, it turns out that both error estimations in space and time must be present for maintaining the adaptation efficiency. Several advantages, for adaptivity and for time decoupling of the equations, offered by the operator-splitting adopted for shallow water equations solution are presented. Copyright
Habituation of visual adaptation
Dong, Xue; Gao, Yi; Lv, Lili; Bao, Min
2016-01-01
Our sensory system adjusts its function driven by both shorter-term (e.g. adaptation) and longer-term (e.g. learning) experiences. Most past adaptation literature focuses on short-term adaptation. Only recently researchers have begun to investigate how adaptation changes over a span of days. This question is important, since in real life many environmental changes stretch over multiple days or longer. However, the answer to the question remains largely unclear. Here we addressed this issue by tracking perceptual bias (also known as aftereffect) induced by motion or contrast adaptation across multiple daily adaptation sessions. Aftereffects were measured every day after adaptation, which corresponded to the degree of adaptation on each day. For passively viewed adapters, repeated adaptation attenuated aftereffects. Once adapters were presented with an attentional task, aftereffects could either reduce for easy tasks, or initially show an increase followed by a later decrease for demanding tasks. Quantitative analysis of the decay rates in contrast adaptation showed that repeated exposure of the adapter appeared to be equivalent to adaptation to a weaker stimulus. These results suggest that both attention and a non-attentional habituation-like mechanism jointly determine how adaptation develops across multiple daily sessions. PMID:26739917
The Adaptive Analysis of Visual Cognition using Genetic Algorithms
Cook, Robert G.; Qadri, Muhammad A. J.
2014-01-01
Two experiments used a novel, open-ended, and adaptive test procedure to examine visual cognition in animals. Using a genetic algorithm, a pigeon was tested repeatedly from a variety of different initial conditions for its solution to an intermediate brightness search task. On each trial, the animal had to accurately locate and peck a target element of intermediate brightness from among a variable number of surrounding darker and lighter distractor elements. Displays were generated from six parametric variables, or genes (distractor number, element size, shape, spacing, target brightness, distractor brightness). Display composition changed over time, or evolved, as a function of the bird’s differential accuracy within the population of values for each gene. Testing three randomized initial conditions and one set of controlled initial conditions, element size and number of distractors were identified as the most important factors controlling search accuracy, with distractor brightness, element shape, and spacing making secondary contributions. The resulting changes in this multidimensional stimulus space suggested the existence of a set of conditions that the bird repeatedly converged upon regardless of initial conditions. This psychological “attractor” represents the cumulative action of the cognitive operations used by the pigeon in solving and performing this search task. The results are discussed regarding their implications for visual cognition in pigeons and the usefulness of adaptive, subject-driven experimentation for investigating human and animal cognition more generally. PMID:24000905
Adaptive Assessment of Young Children with Visual Impairment
ERIC Educational Resources Information Center
Ruiter, Selma; Nakken, Han; Janssen, Marleen; Van Der Meulen, Bieuwe; Looijestijn, Paul
2011-01-01
The aim of this study was to assess the effect of adaptations for children with low vision of the Bayley Scales, a standardized developmental instrument widely used to assess development in young children. Low vision adaptations were made to the procedures, item instructions and play material of the Dutch version of the Bayley Scales of Infant…
Canalith Repositioning Procedure
... repositioning procedure can help relieve benign paroxysmal positional vertigo (BPPV), a condition in which you have brief, but intense, episodes of dizziness that occur when you move your head. Vertigo ...
Extracorporeal shock wave lithotripsy (ESWL) is a procedure used to shatter simple stones in the kidney or upper urinary tract. Ultrasonic waves are passed through the body until they strike the dense stones. Pulses of ...
Dynamic alarm response procedures
Martin, J.; Gordon, P.; Fitch, K.
2006-07-01
The Dynamic Alarm Response Procedure (DARP) system provides a robust, Web-based alternative to existing hard-copy alarm response procedures. This paperless system improves performance by eliminating time wasted looking up paper procedures by number, looking up plant process values and equipment and component status at graphical display or panels, and maintenance of the procedures. Because it is a Web-based system, it is platform independent. DARP's can be served from any Web server that supports CGI scripting, such as Apache{sup R}, IIS{sup R}, TclHTTPD, and others. DARP pages can be viewed in any Web browser that supports Javascript and Scalable Vector Graphics (SVG), such as Netscape{sup R}, Microsoft Internet Explorer{sup R}, Mozilla Firefox{sup R}, Opera{sup R}, and others. (authors)
2016-01-01
The Nuss procedure is now the preferred operation for surgical correction of pectus excavatum (PE). It is a minimally invasive technique, whereby one to three curved metal bars are inserted behind the sternum in order to push it into a normal position. The bars are left in situ for three years and then removed. This procedure significantly improves quality of life and, in most cases, also improves cardiac performance. Previously, the modified Ravitch procedure was used with resection of cartilage and the use of posterior support. This article details the new modified Nuss procedure, which requires the use of shorter bars than specified by the original technique. This technique facilitates the operation as the bar may be guided manually through the chest wall and no additional stabilizing sutures are necessary. PMID:27747185
Controlling chaos in a defined trajectory using adaptive fuzzy logic algorithm
NASA Astrophysics Data System (ADS)
Sadeghi, Maryam; Menhaj, Bagher
2012-09-01
Chaos is a nonlinear behavior of chaotic system with the extreme sensitivity to the initial conditions. Chaos control is so complicated that solutions never converge to a specific numbers and vary chaotically from one amount to the other next. A tiny perturbation in a chaotic system may result in chaotic, periodic, or stationary behavior. Modern controllers are introduced for controlling the chaotic behavior. In this research an adaptive Fuzzy Logic Controller (AFLC) is proposed to control the chaotic system with two equilibrium points. This method is introduced as an adaptive progressed fashion with the full ability to control the nonlinear systems even in the undertrained conditions. Using AFLC designers are released to determine the precise mathematical model of system and satisfy the vast adaption that is needed for a rapid variation which may be caused in the dynamic of nonlinear system. Rules and system parameters are generated through the AFLC and expert knowledge is downright only in the initialization stage. So if the knowledge was not assuring the dynamic of system it could be changed through the adaption procedure of parameters values. AFLC methodology is an advanced control fashion in control yielding to both robustness and smooth motion in nonlinear system control.
Adaptive statistical pattern classifiers for remotely sensed data
NASA Technical Reports Server (NTRS)
Gonzalez, R. C.; Pace, M. O.; Raulston, H. S.
1975-01-01
A technique for the adaptive estimation of nonstationary statistics necessary for Bayesian classification is developed. The basic approach to the adaptive estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest and (2) a projection of the parameters in time or position. A divergence criterion is developed to monitor algorithm performance. Comparative results of adaptive and nonadaptive classifier tests are presented for simulated four dimensional spectral scan data.
A Comparison of Exposure Control Procedures in CATs Using the 3PL Model
ERIC Educational Resources Information Center
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G.
2013-01-01
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
ERIC Educational Resources Information Center
Geri, George A.; Hubbard, David C.
Two adaptive psychophysical procedures (tracking and "yes-no" staircase) for obtaining human visual contrast sensitivity functions (CSF) were evaluated. The procedures were chosen based on their proven validity and the desire to evaluate the practical effects of stimulus transients, since tracking procedures traditionally employ gradual…
Horst, Reto; Wüthrich, Kurt
2015-07-20
Reconstitution of integral membrane proteins (IMP) in aqueous solutions of detergent micelles has been extensively used in structural biology, using either X-ray crystallography or NMR in solution. Further progress could be achieved by establishing a rational basis for the selection of detergent and buffer conditions, since the stringent bottleneck that slows down the structural biology of IMPs is the preparation of diffracting crystals or concentrated solutions of stable isotope labeled IMPs. Here, we describe procedures to monitor the quality of aqueous solutions of [(2)H, (15)N]-labeled IMPs reconstituted in detergent micelles. This approach has been developed for studies of β-barrel IMPs, where it was successfully applied for numerous NMR structure determinations, and it has also been adapted for use with α-helical IMPs, in particular GPCRs, in guiding crystallization trials and optimizing samples for NMR studies (Horst et al., 2013). 2D [(15)N, (1)H]-correlation maps are used as "fingerprints" to assess the foldedness of the IMP in solution. For promising samples, these "inexpensive" data are then supplemented with measurements of the translational and rotational diffusion coefficients, which give information on the shape and size of the IMP/detergent mixed micelles. Using microcoil equipment for these NMR experiments enables data collection with only micrograms of protein and detergent. This makes serial screens of variable solution conditions viable, enabling the optimization of parameters such as the detergent concentration, sample temperature, pH and the composition of the buffer.
Horst, Reto; Wüthrich, Kurt
2016-01-01
Reconstitution of integral membrane proteins (IMP) in aqueous solutions of detergent micelles has been extensively used in structural biology, using either X-ray crystallography or NMR in solution. Further progress could be achieved by establishing a rational basis for the selection of detergent and buffer conditions, since the stringent bottleneck that slows down the structural biology of IMPs is the preparation of diffracting crystals or concentrated solutions of stable isotope labeled IMPs. Here, we describe procedures to monitor the quality of aqueous solutions of [2H, 15N]-labeled IMPs reconstituted in detergent micelles. This approach has been developed for studies of β-barrel IMPs, where it was successfully applied for numerous NMR structure determinations, and it has also been adapted for use with α-helical IMPs, in particular GPCRs, in guiding crystallization trials and optimizing samples for NMR studies (Horst et al., 2013). 2D [15N, 1H]-correlation maps are used as “fingerprints” to assess the foldedness of the IMP in solution. For promising samples, these “inexpensive” data are then supplemented with measurements of the translational and rotational diffusion coefficients, which give information on the shape and size of the IMP/detergent mixed micelles. Using microcoil equipment for these NMR experiments enables data collection with only micrograms of protein and detergent. This makes serial screens of variable solution conditions viable, enabling the optimization of parameters such as the detergent concentration, sample temperature, pH and the composition of the buffer. PMID:27077076
On Browne's Solution for Oblique Procrustes Rotation
ERIC Educational Resources Information Center
Cramer, Elliot M.
1974-01-01
A form of Browne's (1967) solution of finding a least squares fit to a specified factor structure is given which does not involve solution of an eigenvalue problem. It suggests the possible existence of a singularity, and a simple modification of Browne's computational procedure is proposed. (Author/RC)
Lehmann, S; Blödow, A; Flügel, W; Renner-Lützkendorf, H; Isbruch, A; Siegling, F; Untch, M; Strauß, J; Bloching, M B
2013-08-01
The ex utero intrapartum treatment (EXIT) procedure is used for unborn fetuses in cases of predictable complications of postpartum airway obstruction. Indications for the EXIT procedure are fetal neck tumors, obstruction of the trachea, hiatus hernia of the diaphragm and congenital high airway obstruction syndrome (CHAOS). Large cervical tumors prevent normal delivery of a fetus due to reclination of the head with airway obstruction. Therefore, a primary caesarean section or the EXIT procedure has to be considered. The EXIT procedure has time limitations as the blood supply by the placenta only lasts for 30-60 min. Airway protection has to be ensured during parturition.This article reports the case of an unborn fetus with a large cervical teratoma where an obstruction of the cervical airway was detected and monitored by ultrasound and magnetic resonance imaging (MRI) during pregnancy. The EXIT procedure was therefore used and successfully accomplished. The features of the interdisciplinary aspects of the EXIT procedure are described with the special aspects of each medical discipline.
Awareness of sensorimotor adaptation to visual rotations of different size.
Werner, Susen; van Aken, Bernice C; Hulst, Thomas; Frens, Maarten A; van der Geest, Jos N; Strüder, Heiko K; Donchin, Opher
2015-01-01
Previous studies on sensorimotor adaptation revealed no awareness of the nature of the perturbation after adaptation to an abrupt 30° rotation of visual feedback or after adaptation to gradually introduced perturbations. Whether the degree of awareness depends on the magnitude of the perturbation, though, has as yet not been tested. Instead of using questionnaires, as was often done in previous work, the present study used a process dissociation procedure to measure awareness and unawareness. A naïve, implicit group and a group of subjects using explicit strategies adapted to 20°, 40° and 60° cursor rotations in different adaptation blocks that were each followed by determination of awareness and unawareness indices. The awareness index differed between groups and increased from 20° to 60° adaptation. In contrast, there was no group difference for the unawareness index, but it also depended on the size of the rotation. Early adaptation varied between groups and correlated with awareness: The more awareness a participant had developed the more the person adapted in the beginning of the adaptation block. In addition, there was a significant group difference for savings but it did not correlate with awareness. Our findings suggest that awareness depends on perturbation size and that aware and strategic processes are differentially involved during adaptation and savings. Moreover, the use of the process dissociation procedure opens the opportunity to determine awareness and unawareness indices in future sensorimotor adaptation research.
Post-processing procedure for industrial quantum key distribution systems
NASA Astrophysics Data System (ADS)
Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey
2016-08-01
We present algorithmic solutions aimed on post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of classical public communication channel is also considered.
Expressing Adaptation Strategies Using Adaptation Patterns
ERIC Educational Resources Information Center
Zemirline, N.; Bourda, Y.; Reynaud, C.
2012-01-01
Today, there is a real challenge to enable personalized access to information. Several systems have been proposed to address this challenge including Adaptive Hypermedia Systems (AHSs). However, the specification of adaptation strategies remains a difficult task for creators of such systems. In this paper, we consider the problem of the definition…
Alpha-Stratified Multistage Computerized Adaptive Testing with beta Blocking.
ERIC Educational Resources Information Center
Chang, Hua-Hua; Qian, Jiahe; Yang, Zhiliang
2001-01-01
Proposed a refinement, based on the stratification of items developed by D. Weiss (1973), of the computerized adaptive testing item selection procedure of H. Chang and Z. Ying (1999). Simulation studies using an item bank from the Graduate Record Examination show the benefits of the new procedure. (SLD)
Some Properties of a Bayesian Adaptive Ability Testing Strategy.
ERIC Educational Resources Information Center
McBride, James R.; Weiss, David J.
Four monte carlo simulation studies of Owen's Bayesian sequential procedure for adaptive mental testing were conducted. Whereas previous simulation studies of this procedure have concentrated on evaluating it in terms of the correlation of its test scores with simulated ability in a normal population, these four studies explored a number of…
A Pilot Program in Adapted Physical Education: Hillsborough High School.
ERIC Educational Resources Information Center
Thompson, Vince
The instructor of an adapted physical education program describes his experiences and suggests guidelines for implementing other programs. Reviewed are such aspects as program orientation, class procedures, identification of student participants, and grading procedures. Objectives, lesson plans and evaluations are presented for the following units…
Assessing the Efficiency of Item Selection in Computerized Adaptive Testing.
ERIC Educational Resources Information Center
Weissman, Alexander
This study investigated the efficiency of item selection in a computerized adaptive test (CAT), where efficiency was defined in terms of the accumulated test information at an examinee's true ability level. A simulation methodology compared the efficiency of 2 item selection procedures with 5 ability estimation procedures for CATs of 5, 10, 15,…
Balancing Flexible Constraints and Measurement Precision in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Moyer, Eric L.; Galindo, Jennifer L.; Dodd, Barbara G.
2012-01-01
Managing test specifications--both multiple nonstatistical constraints and flexibly defined constraints--has become an important part of designing item selection procedures for computerized adaptive tests (CATs) in achievement testing. This study compared the effectiveness of three procedures: constrained CAT, flexible modified constrained CAT,…
Adaptive structures - Test hardware and experimental results
NASA Technical Reports Server (NTRS)
Wada, Ben K.; Fanson, James L.; Chen, Gun-Shing; Kuo, Chin-Po
1990-01-01
The facilities and procedures used at JPL to test adaptive structures such as the large deployable reflector (LDR) are described and preliminary results are reported. The applications of adaptive structures in future NASA missions are outlined, and the techniques which are employed to modify damping, stiffness, and isolation characteristics, as well as geometric changes, are listed. The development of adaptive structures is shown to be effective as a result of new actuators and sensors, and examples are listed for categories such as fiber optics, shape-memory materials, piezoelectrics, and electrorheological fluids. Some ground test results are described for laboratory truss structures and truss test beds, which are shown to be efficient and easy to assemble in space. Adaptive structures are shown to be important for precision space structures such as the LDR, and can alleviate ground test requirements.
A goal-oriented adaptive finite-element approach for plane wave 3-D electromagnetic modelling
NASA Astrophysics Data System (ADS)
Ren, Zhengyong; Kalscheuer, Thomas; Greenhalgh, Stewart; Maurer, Hansruedi
2013-08-01
We have developed a novel goal-oriented adaptive mesh refinement approach for finite-element methods to model plane wave electromagnetic (EM) fields in 3-D earth models based on the electric field differential equation. To handle complicated models of arbitrary conductivity, magnetic permeability and dielectric permittivity involving curved boundaries and surface topography, we employ an unstructured grid approach. The electric field is approximated by linear curl-conforming shape functions which guarantee the divergence-free condition of the electric field within each tetrahedron and continuity of the tangential component of the electric field across the interior boundaries. Based on the non-zero residuals of the approximated electric field and the yet to be satisfied boundary conditions of continuity of both the normal component of the total current density and the tangential component of the magnetic field strength across the interior interfaces, three a-posterior error estimators are proposed as a means to drive the goal-oriented adaptive refinement procedure. The first a-posterior error estimator relies on a combination of the residual of the electric field, the discontinuity of the normal component of the total current density and the discontinuity of the tangential component of the magnetic field strength across the interior faces shared by tetrahedra. The second a-posterior error estimator is expressed in terms of the discontinuity of the normal component of the total current density (conduction plus displacement current). The discontinuity of the tangential component of the magnetic field forms the third a-posterior error estimator. Analytical solutions for magnetotelluric (MT) and radiomagnetotelluric (RMT) fields impinging on a homogeneous half-space model are used to test the performances of the newly developed goal-oriented algorithms using the above three a-posterior error estimators. A trapezoidal topographical model, using normally incident EM waves
Procedural learning and dyslexia.
Nicolson, R I; Fawcett, A J; Brookes, R L; Needle, J
2010-08-01
Three major 'neural systems', specialized for different types of information processing, are the sensory, declarative, and procedural systems. It has been proposed (Trends Neurosci., 30(4), 135-141) that dyslexia may be attributable to impaired function in the procedural system together with intact declarative function. We provide a brief overview of the increasing evidence relating to the hypothesis, noting that the framework involves two main claims: first that 'neural systems' provides a productive level of description avoiding the underspecificity of cognitive descriptions and the overspecificity of brain structural accounts; and second that a distinctive feature of procedural learning is its extended time course, covering from minutes to months. In this article, we focus on the second claim. Three studies-speeded single word reading, long-term response learning, and overnight skill consolidation-are reviewed which together provide clear evidence of difficulties in procedural learning for individuals with dyslexia, even when the tasks are outside the literacy domain. The educational implications of the results are then discussed, and in particular the potential difficulties that impaired overnight procedural consolidation would entail. It is proposed that response to intervention could be better predicted if diagnostic tests on the different forms of learning were first undertaken.
Sheta, Saad A
2010-01-01
The number of noninvasive and minimally invasive procedures performed outside of the operating room has grown exponentially over the last several decades.Sedation, analgesia, or both may be needed for many of these interventional or diagnostic procedures. Individualized care is important when determining if a patient requires procedural sedation analgesia (PSA). The patient might need an anti-anxiety drug, pain medicine, immobilization, simple reassurance, or a combination of these interventions. The goals of PSA in four different multidisciplinary practices namely; emergency, dentistry, radiology and gastrointestinal endoscopy are discussed in this review article. Some procedures are painful, others painless. Therefore, goals of PSA vary widely. Sedation management can range from minimal sedation, to the extent of minimal anesthesia. Procedural sedation in emergency department (ED) usually requires combinations of multiple agents to reach desired effects of analgesia plus anxiolysis. However, in dental practice, moderate sedation analgesia (known to the dentists as conscious sedation) is usually what is required. It is usually most effective with the combined use of local anesthesia. The mainstay of success for painless imaging is absolute immobility. Immobility can be achieved by deep sedation or minimal anesthesia. On the other hand, moderate sedation, deep sedation, minimal anesthesia and conventional general anesthesia can be all utilized for management of gastrointestinal endoscopy. PMID:20668560
Mobile Energy Laboratory Procedures
Armstrong, P.R.; Batishko, C.R.; Dittmer, A.L.; Hadley, D.L.; Stoops, J.L.
1993-09-01
Pacific Northwest Laboratory (PNL) has been tasked to plan and implement a framework for measuring and analyzing the efficiency of on-site energy conversion, distribution, and end-use application on federal facilities as part of its overall technical support to the US Department of Energy (DOE) Federal Energy Management Program (FEMP). The Mobile Energy Laboratory (MEL) Procedures establish guidelines for specific activities performed by PNL staff. PNL provided sophisticated energy monitoring, auditing, and analysis equipment for on-site evaluation of energy use efficiency. Specially trained engineers and technicians were provided to conduct tests in a safe and efficient manner with the assistance of host facility staff and contractors. Reports were produced to describe test procedures, results, and suggested courses of action. These reports may be used to justify changes in operating procedures, maintenance efforts, system designs, or energy-using equipment. The MEL capabilities can subsequently be used to assess the results of energy conservation projects. These procedures recognize the need for centralized NM administration, test procedure development, operator training, and technical oversight. This need is evidenced by increasing requests fbr MEL use and the economies available by having trained, full-time MEL operators and near continuous MEL operation. DOE will assign new equipment and upgrade existing equipment as new capabilities are developed. The equipment and trained technicians will be made available to federal agencies that provide funding for the direct costs associated with MEL use.
Method of adaptive artificial viscosity
NASA Astrophysics Data System (ADS)
Popov, I. V.; Fryazinov, I. V.
2011-09-01
A new finite-difference method for the numerical solution of gas dynamics equations is proposed. This method is a uniform monotonous finite-difference scheme of second-order approximation on time and space outside of domains of shock and compression waves. This method is based on inputting adaptive artificial viscosity (AAV) into gas dynamics equations. In this paper, this method is analyzed for 2D geometry. The testing computations of the movement of contact discontinuities and shock waves and the breakup of discontinuities are demonstrated.
Computerized operating procedures
Ness, E.; Teigen, J.
1994-12-31
A number of observed and potential problems in the nuclear industry are related to the quality of operating procedures. Many of the problems identified in operating procedure preparation, implementation, and maintenance have a technical nature, which can be directly addressed by developing computerized procedure handling tools. The Halden Reactor Project (HRP) of the Organization for Economic Cooperation and Development has since 1985 performed research work within this field. A product of this effort is the development of a second version of the computerized operation manuals (COPMA) system. This paper summarizes the most important characteristics of the COPMA-II system and discusses some of the experiences in using a system like COPMA-II.
Reasoning about procedural knowledge
NASA Technical Reports Server (NTRS)
Georgeff, M. P.
1985-01-01
A crucial aspect of automated reasoning about space operations is that knowledge of the problem domain is often procedural in nature - that is, the knowledge is often in the form of sequences of actions or procedures for achieving given goals or reacting to certain situations. In this paper a system is described that explicitly represents and reasons about procedural knowledge. The knowledge representation used is sufficiently rich to describe the effects of arbitrary sequences of tests and actions, and the inference mechanism provides a means for directly using this knowledge to reach desired operational goals. Furthermore, the representation has a declarative semantics that provides for incremental changes to the system, rich explanatory capabilities, and verifiability. The approach also provides a mechanism for reasoning about the use of this knowledge, thus enabling the system to choose effectively between alternative courses of action.
Environmental Test Screening Procedure
NASA Technical Reports Server (NTRS)
Zeidler, Janet
2000-01-01
This procedure describes the methods to be used for environmental stress screening (ESS) of the Lightning Mapper Sensor (LMS) lens assembly. Unless otherwise specified, the procedures shall be completed in the order listed, prior to performance of the Acceptance Test Procedure (ATP). The first unit, S/N 001, will be subjected to the Qualification Vibration Levels, while the remainder will be tested at the Operational Level. Prior to ESS, all units will undergo Pre-ESS Functional Testing that includes measuring the on-axis and plus or minus 0.95 full field Modulation Transfer Function and Back Focal Length. Next, all units will undergo ESS testing, and then Acceptance testing per PR 460.
Mahillo-Isla, R; Gonźalez-Morales, M J; Dehesa-Martínez, C
2011-06-01
The slowly varying envelope approximation is applied to the radiation problems of the Helmholtz equation with a planar single-layer and dipolar sources. The analyses of such problems provide procedures to recover solutions of the Helmholtz equation based on the evaluation of solutions of the parabolic wave equation at a given plane. Furthermore, the conditions that must be fulfilled to apply each procedure are also discussed. The relations to previous work are given as well.
Procedure and Program Examples
NASA Astrophysics Data System (ADS)
Britz, Dieter
Here some modules, procedures and whole programs are described, that may be useful to the reader, as they have been, to the author. They are all in Fortran 90/95 and start with a generally useful module, that will be used in most procedures and programs in the examples, and another module useful for programs using a Rosenbrock variant. The source texts (except for the two modules) are not reproduced here, but can be downloaded from the web site www.springerlink.com/openurl.asp?genre=issue &issn=1616-6361&volume=666 (the two lines form one contiguous URL!).
Solution of plane cascade flow using improved surface singularity methods
NASA Technical Reports Server (NTRS)
Mcfarland, E. R.
1981-01-01
A solution method has been developed for calculating compressible inviscid flow through a linear cascade of arbitrary blade shapes. The method uses advanced surface singularity formulations which were adapted from those found in current external flow analyses. The resulting solution technique provides a fast flexible calculation for flows through turbomachinery blade rows. The solution method and some examples of the method's capabilities are presented.
Adaptive Finite-Element Computation In Fracture Mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1995-01-01
Report discusses recent progress in use of solution-adaptive finite-element computational methods to solve two-dimensional problems in linear elastic fracture mechanics. Method also shown extensible to three-dimensional problems.
Toddler test or procedure preparation
Preparing toddler for test/procedure; Test/procedure preparation - toddler; Preparing for a medical test or procedure - toddler ... Before the test, know that your child will probably cry. Even if you prepare, your child may feel some discomfort or ...
Preschooler test or procedure preparation
Preparing preschoolers for test/procedure; Test/procedure preparation - preschooler ... Preparing children for medical tests can reduce their anxiety. It can also make them less likely to cry and resist the procedure. Research shows that ...
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, ad- joint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
40 CFR 60.648 - Optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure. 1
Code of Federal Regulations, 2012 CFR
2012-07-01
...) Starch solution. Rub into a thin paste about one teaspoonful of wheat starch with a little water; pour... every few days. (d) Procedure. Fill leveling bulb with starch solution. Raise (L), open cock (G), open... the 100 ml mark, close (G) and (F), and disconnect sampling tube. Open (G) and bring starch...
40 CFR 60.648 - Optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure. 1
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Starch solution. Rub into a thin paste about one teaspoonful of wheat starch with a little water; pour... every few days. (d) Procedure. Fill leveling bulb with starch solution. Raise (L), open cock (G), open... the 100 ml mark, close (G) and (F), and disconnect sampling tube. Open (G) and bring starch...
40 CFR 60.648 - Optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure. 1
Code of Federal Regulations, 2011 CFR
2011-07-01
...) Starch solution. Rub into a thin paste about one teaspoonful of wheat starch with a little water; pour... every few days. (d) Procedure. Fill leveling bulb with starch solution. Raise (L), open cock (G), open... the 100 ml mark, close (G) and (F), and disconnect sampling tube. Open (G) and bring starch...
A Novel Approach for Adaptive Signal Processing
NASA Technical Reports Server (NTRS)
Chen, Ya-Chin; Juang, Jer-Nan
1998-01-01
Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high- order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed and considerable implementation has been made in our contributions in the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
NASA Astrophysics Data System (ADS)
Morris, Simon Conway
2004-11-01
Life's Solution builds a persuasive case for the predictability of evolutionary outcomes. The case rests on a remarkable compilation of examples of convergent evolution, in which two or more lineages have independently evolved similar structures and functions. The examples range from the aerodynamics of hovering moths and hummingbirds to the use of silk by spiders and some insects to capture prey. Going against the grain of Darwinian orthodoxy, this book is a must read for anyone grappling with the meaning of evolution and our place in the Universe. Simon Conway Morris is the Ad Hominen Professor in the Earth Science Department at the University of Cambridge and a Fellow of St. John's College and the Royal Society. His research focuses on the study of constraints on evolution, and the historical processes that lead to the emergence of complexity, especially with respect to the construction of the major animal body parts in the Cambrian explosion. Previous books include The Crucible of Creation (Getty Center for Education in the Arts, 1999) and co-author of Solnhofen (Cambridge, 1990). Hb ISBN (2003) 0-521-82704-3
NASA Astrophysics Data System (ADS)
Morris, Simon Conway
2003-09-01
Life's Solution builds a persuasive case for the predictability of evolutionary outcomes. The case rests on a remarkable compilation of examples of convergent evolution, in which two or more lineages have independently evolved similar structures and functions. The examples range from the aerodynamics of hovering moths and hummingbirds to the use of silk by spiders and some insects to capture prey. Going against the grain of Darwinian orthodoxy, this book is a must read for anyone grappling with the meaning of evolution and our place in the Universe. Simon Conway Morris is the Ad Hominen Professor in the Earth Science Department at the University of Cambridge and a Fellow of St. John's College and the Royal Society. His research focuses on the study of constraints on evolution, and the historical processes that lead to the emergence of complexity, especially with respect to the construction of the major animal body parts in the Cambrian explosion. Previous books include The Crucible of Creation (Getty Center for Education in the Arts, 1999) and co-author of Solnhofen (Cambridge, 1990). Hb ISBN (2003) 0-521-82704-3
Alloy solution hardening with solute pairs
Mitchell, John W.
1976-08-24
Solution hardened alloys are formed by using at least two solutes which form associated solute pairs in the solvent metal lattice. Copper containing equal atomic percentages of aluminum and palladium is an example.
Attractor mechanism as a distillation procedure
Levay, Peter; Szalay, Szilard
2010-07-15
In a recent paper it was shown that for double extremal static spherical symmetric BPS black hole solutions in the STU model the well-known process of moduli stabilization at the horizon can be recast in a form of a distillation procedure of a three-qubit entangled state of a Greenberger-Horne-Zeilinger type. By studying the full flow in moduli space in this paper we investigate this distillation procedure in more detail. We introduce a three-qubit state with amplitudes depending on the conserved charges, the warp factor, and the moduli. We show that for the recently discovered non-BPS solutions it is possible to see how the distillation procedure unfolds itself as we approach the horizon. For the non-BPS seed solutions at the asymptotically Minkowski region we are starting with a three-qubit state having seven nonequal nonvanishing amplitudes and finally at the horizon we get a Greenberger-Horne-Zeilinger state with merely four nonvanishing ones with equal magnitudes. The magnitude of the surviving nonvanishing amplitudes is proportional to the macroscopic black hole entropy. A systematic study of such attractor states shows that their properties reflect the structure of the fake superpotential. We also demonstrate that when starting with the very special values for the moduli corresponding to flat directions the uniform structure at the horizon deteriorates due to errors generalizing the usual bit flips acting on the qubits of the attractor states.
Evaluation Perspectives and Procedures.
ERIC Educational Resources Information Center
Scriven, Michael
This article on evaluation perspectives and procedures is divided into six sections. The first section briefly discusses qualitative and quantitative research and evaluation. In the second section there is an exploration of the utility and validity of a checklist that can be used to evaluate products, as an instrument for evaluating producers, for…
Educational Accounting Procedures.
ERIC Educational Resources Information Center
Tidwell, Sam B.
This chapter of "Principles of School Business Management" reviews the functions, procedures, and reports with which school business officials must be familiar in order to interpret and make decisions regarding the school district's financial position. Among the accounting functions discussed are financial management, internal auditing,…
Student Loan Collection Procedures.
ERIC Educational Resources Information Center
National Association of College and University Business Officers, Washington, DC.
This manual on the collection of student loans is intended for the use of business officers and loan collection personnel of colleges and universities of all sizes. The introductory chapter is an overview of sound collection practices and procedures. It discusses the making of a loan, in-school servicing of the accounts, the exit interview, the…
ERIC Educational Resources Information Center
Cubberley, Carol W.
1991-01-01
Discusses written procedures that explain library tasks and describes methods for writing them clearly and coherently. The use of appropriate terminology and vocabulary is discussed; the value of illustrations, typography, and format to enhance the visual effect is explained; the intended audience is considered; and examples are given. (seven…
Simulating Laboratory Procedures.
ERIC Educational Resources Information Center
Baker, J. E.; And Others
1986-01-01
Describes the use of computer assisted instruction in a medical microbiology course. Presents examples of how computer assisted instruction can present case histories in which the laboratory procedures are simulated. Discusses an authoring system used to prepare computer simulations and provides one example of a case history dealing with fractured…
ERIC Educational Resources Information Center
Blount, Ronald L.; Piira, Tiina; Cohen, Lindsey L.; Cheng, Patricia S.
2006-01-01
This article reviews the various settings in which infants, children, and adolescents experience pain during acute medical procedures and issues related to referral of children to pain management teams. In addition, self-report, reports by others, physiological monitoring, and direct observation methods of assessment of pain and related constructs…
Visual Screening: A Procedure.
ERIC Educational Resources Information Center
Williams, Robert T.
Vision is a complex process involving three phases: physical (acuity), physiological (integrative), and psychological (perceptual). Although these phases cannot be considered discrete, they provide the basis for the visual screening procedure used by the Reading Services of Colorado State University and described in this document. Ten tests are…
Special Education: Procedural Guide.
ERIC Educational Resources Information Center
Dependents Schools (DOD), Washington, DC.
The guide is intended to provide information to administrators and regional and local case study committees on special education procedures within Department of Defense Dependents Schools (DoDDS). The manual addresses a step-by step approach from referral to the implementation of individualized education programs (IEP). The following topics are…
ERIC Educational Resources Information Center
Ercikan, Kadriye; Alper, Naim
2009-01-01
This commentary first summarizes and discusses the analysis of the two translation processes described in the Oliveira, Colak, and Akerson article and the inferences these researchers make based on their research. In the second part of the commentary, we describe procedures and criteria used in adapting tests into different languages and how they…
Numerical implementation of the integral-transform solution to Lamb's point-load problem
NASA Astrophysics Data System (ADS)
Georgiadis, H. G.; Vamvatsikos, D.; Vardoulakis, I.
The present work describes a procedure for the numerical evaluation of the classical integral-transform solution of the transient elastodynamic point-load (axisymmetric) Lamb's problem. This solution involves integrals of rapidly oscillatory functions over semi-infinite intervals and inversion of one-sided (time) Laplace transforms. These features introduce difficulties for a numerical treatment and constitute a challenging problem in trying to obtain results for quantities (e.g. displacements) in the interior of the half-space. To deal with the oscillatory integrands, which in addition may take very large values (pseudo-pole behavior) at certain points, we follow the concept of Longman's method but using as accelerator in the summation procedure a modified Epsilon algorithm instead of the standard Euler's transformation. Also, an adaptive procedure using the Gauss 32-point rule is introduced to integrate in the vicinity of the pseudo-pole. The numerical Laplace-transform inversion is based on the robust Fourier-series technique of Dubner/Abate-Crump-Durbin. Extensive results are given for sub-surface displacements, whereas the limit-case results for the surface displacements compare very favorably with previous exact results.
NASA Astrophysics Data System (ADS)
Grothmann, T.; Grecksch, K.; Winges, M.; Siebenhüner, B.
2013-03-01
Several case studies show that "soft social factors" (e.g. institutions, perceptions, social capital) strongly affect social capacities to adapt to climate change. Many soft social factors can probably be changed faster than "hard social factors" (e.g. economic and technological development) and are therefore particularly important for building social capacities. However, there are almost no methodologies for the systematic assessment of soft social factors. Gupta et al. (2010) have developed the Adaptive Capacity Wheel (ACW) for assessing the adaptive capacity of institutions. The ACW differentiates 22 criteria to assess six dimensions: variety, learning capacity, room for autonomous change, leadership, availability of resources, fair governance. To include important psychological factors we extended the ACW by two dimensions: "adaptation motivation" refers to actors' motivation to realise, support and/or promote adaptation to climate. "Adaptation belief" refers to actors' perceptions of realisability and effectiveness of adaptation measures. We applied the extended ACW to assess adaptive capacities of four sectors - water management, flood/coastal protection, civil protection and regional planning - in North Western Germany. The assessments of adaptation motivation and belief provided a clear added value. The results also revealed some methodological problems in applying the ACW (e.g. overlap of dimensions), for which we propose methodological solutions.
NASA Astrophysics Data System (ADS)
Grothmann, T.; Grecksch, K.; Winges, M.; Siebenhüner, B.
2013-12-01
Several case studies show that social factors like institutions, perceptions and social capital strongly affect social capacities to adapt to climate change. Together with economic and technological development they are important for building social capacities. However, there are almost no methodologies for the systematic assessment of social factors. After reviewing existing methodologies we identify the Adaptive Capacity Wheel (ACW) by Gupta et al. (2010), developed for assessing the adaptive capacity of institutions, as the most comprehensive and operationalised framework to assess social factors. The ACW differentiates 22 criteria to assess 6 dimensions: variety, learning capacity, room for autonomous change, leadership, availability of resources, fair governance. To include important psychological factors we extended the ACW by two dimensions: "adaptation motivation" refers to actors' motivation to realise, support and/or promote adaptation to climate; "adaptation belief" refers to actors' perceptions of realisability and effectiveness of adaptation measures. We applied the extended ACW to assess adaptive capacities of four sectors - water management, flood/coastal protection, civil protection and regional planning - in northwestern Germany. The assessments of adaptation motivation and belief provided a clear added value. The results also revealed some methodological problems in applying the ACW (e.g. overlap of dimensions), for which we propose methodological solutions.
van Voorn, George A. K.; Ligtenberg, Arend; Molenaar, Jaap
2017-01-01
Adaptation of agents through learning or evolution is an important component of the resilience of Complex Adaptive Systems (CAS). Without adaptation, the flexibility of such systems to cope with outside pressures would be much lower. To study the capabilities of CAS to adapt, social simulations with agent-based models (ABMs) provide a helpful tool. However, the value of ABMs for studying adaptation depends on the availability of methodologies for sensitivity analysis that can quantify resilience and adaptation in ABMs. In this paper we propose a sensitivity analysis methodology that is based on comparing time-dependent probability density functions of output of ABMs with and without agent adaptation. The differences between the probability density functions are quantified by the so-called earth-mover’s distance. We use this sensitivity analysis methodology to quantify the probability of occurrence of critical transitions and other long-term effects of agent adaptation. To test the potential of this new approach, it is used to analyse the resilience of an ABM of adaptive agents competing for a common-pool resource. Adaptation is shown to contribute positively to the resilience of this ABM. If adaptation proceeds sufficiently fast, it may delay or avert the collapse of this system. PMID:28196372
Examining adaptations of evidence-based programs in natural contexts.
Moore, Julia E; Bumbarger, Brian K; Cooper, Brittany Rhoades
2013-06-01
When evidence-based programs (EBPs) are scaled up in natural, or non-research, settings, adaptations are commonly made. Given the fidelity-versus-adaptation debate, theoretical rationales have been provided for the pros and cons of adaptations. Yet the basis of this debate is theoretical; thus, empirical evidence is needed to understand the types of adaptations made in natural settings. In the present study, we introduce a taxonomy for understanding adaptations. This taxonomy addresses several aspects of adaptations made to programs including the fit (philosophical or logistical), timing (proactive or reactive), and valence, or the degree to which the adaptations align with the program's goals and theory, (positive, negative, or neutral). Self-reported qualitative data from communities delivering one of ten state-funded EBPs were coded based on the taxonomy constructs; additionally, quantitative data were used to examine the types and reasons for making adaptations under natural conditions. Forty-four percent of respondents reported making adaptations. Adaptations to the procedures, dosage, and content were cited most often. Lack of time, limited resources, and difficulty retaining participants were listed as the most common reasons for making adaptations. Most adaptations were made reactively, as a result of issues of logistical fit, and were not aligned with, or deviated from, the program's goals and theory.
PROCESS OF ELIMINATING HYDROGEN PEROXIDE IN SOLUTIONS CONTAINING PLUTONIUM VALUES
Barrick, J.G.; Fries, B.A.
1960-09-27
A procedure is given for peroxide precipitation processes for separating and recovering plutonium values contained in an aqueous solution. When plutonium peroxide is precipitated from an aqueous solution, the supernatant contains appreciable quantities of plutonium and peroxide. It is desirable to process this solution further to recover plutonium contained therein, but the presence of the peroxide introduces difficulties; residual hydrogen peroxide contained in the supernatant solution is eliminated by adding a nitrite or a sulfite to this solution.
NASA Astrophysics Data System (ADS)
Schwing, Alan Michael
For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source for error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable for unsteady simulations and refinement and coarsening of the grid does not impact the conservatism of the underlying numerics. The effect on high-order numerical fluxes of fourth- and sixth-order are explored. Provided the criteria for refinement is appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depend on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable
Huang, Liping; Wang, Qiang; Jiang, Linjie; Zhou, Peng; Quan, Xie; Logan, Bruce E
2015-08-18
Bioelectrochemical systems (BESs) have been shown to be useful in removing individual metals from solutions, but effective treatment of electroplating and mining wastewaters requires simultaneous removal of several metals in a single system. To develop multiple-reactor BESs for metals removal, biocathodes were first individually acclimated to three different metals using microbial fuel cells with Cr(VI) or Cu(II) as these metals have relatively high redox potentials, and microbial electrolysis cells for reducing Cd(II) as this metal has a more negative redox potential. The BESs were then acclimated to low concentrations of a mixture of metals, followed by more elevated concentrations. This procedure resulted in complete and selective metal reduction at rates of 1.24 ± 0.01 mg/L-h for Cr(VI), 1.07 ± 0.01 mg/L-h for Cu(II), and 0.98 ± 0.01 mg/L-h for Cd(II). These reduction rates were larger than the no adaptive controls by factors of 2.5 for Cr(VI), 2.9 for Cu(II), and 3.6 for Cd(II). This adaptive procedure produced less diverse microbial communities and changes in the microbial communities at the phylum and genus levels. These results demonstrated that bacterial communities can adaptively evolve to utilize solutions containing mixtures of metals, providing a strategy for remediating wastewaters containing Cr(VI), Cu(II), and Cd(II).
NASA Technical Reports Server (NTRS)
Coirier, William John
1994-01-01
A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds numbers flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a
Hybrid Surface Mesh Adaptation for Climate Modeling
Ahmed Khamayseh; Valmor de Almeida; Glen Hansen
2008-10-01
Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called “mesh motion” (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.
Dynamic Adaption of Vascular Morphology
Okkels, Fridolin; Jacobsen, Jens Christian Brings
2012-01-01
The structure of vascular networks adapts continuously to meet changes in demand of the surrounding tissue. Most of the known vascular adaptation mechanisms are based on local reactions to local stimuli such as pressure and flow, which in turn reflects influence from the surrounding tissue. Here we present a simple two-dimensional model in which, as an alternative approach, the tissue is modeled as a porous medium with intervening sharply defined flow channels. Based on simple, physiologically realistic assumptions, flow-channel structure adapts so as to reach a configuration in which all parts of the tissue are supplied. A set of model parameters uniquely determine the model dynamics, and we have identified the region of the best-performing model parameters (a global optimum). This region is surrounded in parameter space by less optimal model parameter values, and this separation is characterized by steep gradients in the related fitness landscape. Hence it appears that the optimal set of parameters tends to localize close to critical transition zones. Consequently, while the optimal solution is stable for modest parameter perturbations, larger perturbations may cause a profound and permanent shift in systems characteristics. We suggest that the system is driven toward a critical state as a consequence of the ongoing parameter optimization, mimicking an evolutionary pressure on the system. PMID:23060814
NASA Technical Reports Server (NTRS)
2001-01-01
REI Systems, Inc. developed a software solution that uses the Internet to eliminate the paperwork typically required to document and manage complex business processes. The data management solution, called Electronic Handbooks (EHBs), is presently used for the entire SBIR program processes at NASA. The EHB-based system is ideal for programs and projects whose users are geographically distributed and are involved in complex management processes and procedures. EHBs provide flexible access control and increased communications while maintaining security for systems of all sizes. Through Internet Protocol- based access, user authentication and user-based access restrictions, role-based access control, and encryption/decryption, EHBs provide the level of security required for confidential data transfer. EHBs contain electronic forms and menus, which can be used in real time to execute the described processes. EHBs use standard word processors that generate ASCII HTML code to set up electronic forms that are viewed within a web browser. EHBs require no end-user software distribution, significantly reducing operating costs. Each interactive handbook simulates a hard-copy version containing chapters with descriptions of participants' roles in the online process.
Organizational Adaptation and Higher Education.
ERIC Educational Resources Information Center
Cameron, Kim S.
1984-01-01
Organizational adaptation and types of adaptation needed in academe in the future are reviewed and major conceptual approaches to organizational adaptation are presented. The probable environment that institutions will face in the future that will require adaptation is discussed. (MLW)
Taylor, Nigel A S
2014-01-01
In this overview, human morphological and functional adaptations during naturally and artificially induced heat adaptation are explored. Through discussions of adaptation theory and practice, a theoretical basis is constructed for evaluating heat adaptation. It will be argued that some adaptations are specific to the treatment used, while others are generalized. Regarding ethnic differences in heat tolerance, the case is put that reported differences in heat tolerance are not due to natural selection, but can be explained on the basis of variations in adaptation opportunity. These concepts are expanded to illustrate how traditional heat adaptation and acclimatization represent forms of habituation, and thermal clamping (controlled hyperthermia) is proposed as a superior model for mechanistic research. Indeed, this technique has led to questioning the perceived wisdom of body-fluid changes, such as the expansion and subsequent decay of plasma volume, and sudomotor function, including sweat habituation and redistribution. Throughout, this contribution was aimed at taking another step toward understanding the phenomenon of heat adaptation and stimulating future research. In this regard, research questions are posed concerning the influence that variations in morphological configuration may exert upon adaptation, the determinants of postexercise plasma volume recovery, and the physiological mechanisms that modify the cholinergic sensitivity of sweat glands, and changes in basal metabolic rate and body core temperature following adaptation.
Assessing Children's Implicit Attitudes Using the Affect Misattribution Procedure
ERIC Educational Resources Information Center
Williams, Amanda; Steele, Jennifer R.; Lipman, Corey
2016-01-01
In the current research, we examined whether the Affect Misattribution Procedure (AMP) could be successfully adapted as an implicit measure of children's attitudes. We tested this possibility in 3 studies with 5- to 10-year-old children. In Study 1, we found evidence that children misattribute affect elicited by attitudinally positive (e.g., cute…
49 CFR 572.142 - Head assembly and test procedure.
Code of Federal Regulations, 2013 CFR
2013-10-01
... assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter plate.... (2) Prior to the test, clean the impact surface of the head skin and the steel impact plate surface... 49 Transportation 7 2013-10-01 2013-10-01 false Head assembly and test procedure. 572.142...
49 CFR 572.142 - Head assembly and test procedure.
Code of Federal Regulations, 2014 CFR
2014-10-01
... assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter plate.... (2) Prior to the test, clean the impact surface of the head skin and the steel impact plate surface... 49 Transportation 7 2014-10-01 2014-10-01 false Head assembly and test procedure. 572.142...
49 CFR 572.142 - Head assembly and test procedure.
Code of Federal Regulations, 2011 CFR
2011-10-01
... assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter plate.... (2) Prior to the test, clean the impact surface of the head skin and the steel impact plate surface... 49 Transportation 7 2011-10-01 2011-10-01 false Head assembly and test procedure. 572.142...
49 CFR 572.142 - Head assembly and test procedure.
Code of Federal Regulations, 2010 CFR
2010-10-01
... assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter plate.... (2) Prior to the test, clean the impact surface of the head skin and the steel impact plate surface... 49 Transportation 7 2010-10-01 2010-10-01 false Head assembly and test procedure. 572.142...
49 CFR 572.142 - Head assembly and test procedure.
Code of Federal Regulations, 2012 CFR
2012-10-01
... assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter plate.... (2) Prior to the test, clean the impact surface of the head skin and the steel impact plate surface... 49 Transportation 7 2012-10-01 2012-10-01 false Head assembly and test procedure. 572.142...
Aaroe, R.; Lund, B.F.; Onshus, T.
1995-12-31
The paper is based on a feasibility study investigating the possibilities of using a HIPPS (High Integrity Pressure Protection System) to protect a subsea pipeline that is not rated for full wellhead shut-in pressure. The study was called the Subsea OPPS Feasibility Study, and was performed by SINTEF, Norway. Here, OPPS is an acronym for Overpressure Pipeline Protection System. A design procedure for a subsea HIPPS is described, based on the experience and knowledge gained through the ``Subsea OPPS Feasibility Study``. Before a subsea HIPPS can be applied, its technical feasibility, reliability and profitability must be demonstrated. The subsea HIPPS design procedure will help to organize and plan the design activities both with respect to development and verification of a subsea HIPPS. The paper also gives examples of how some of the discussed design steps were performed in the Subsea OPPS Feasibility Study. Finally, further work required to apply a subsea HIPPS is discussed.
Procedure for freeze-drying molecules adsorbed to mica flakes.
Heuser, J E
1983-09-05
The quick-freeze, deep-etch, rotary-replication technique is useful for visualizing cells and cell fractions but does not work with suspensions of macromolecules. These inevitably clump or collapse during deep-etching, presumably due to surface tension forces that develop during their transfer from ice to vacuum. Previous protocols have attempted to overcome such forces by attaching macromolecules to freshly cleaved mica before drying and replication. I describe here an adaptation of this procedure to the deep-etch technique as otherwise practiced. My innovation is to mix the molecules with an aqueous suspension of tiny flakes of mica and then to quick-freeze and freeze-fracture the suspension exactly as if one were dealing with cells. The fracture inevitably strikes the surfaces of many mica flakes and thereby cleaves the adsorbed macromolecules cleanly enough to reveal interesting substructure within them. The subsequent step of deep-etching exposes large expanses of unfractured mica and thus reveals intact macromolecules. These macromolecules are not obscured by salt deposits, even if they were frozen in hypertonic solutions, apparently because the fracturing step removes nearly all of the overlying electrolyte. Moreover, these macromolecules are minimally freeze-dried (since exposure is sufficient after only 3 min of etching at -102 degrees C) so they retain their three-dimensional topology. I show that molluscan hemocyanin is a good internal standard for this new technique. It is available commercially in stable solutions, mixes well with all sizes of macromolecules, and consists of particles that display distinct five-start surface helices, which have been measured carefully in the past and which possess a known handedness, useful for determining the orientation of micrographs when examining the various helical patterns possessed by most types of extended macromolecules. The fractured hemocyanin particles also display characteristic internal structures, which
Adaptive Grid Generation for Numerical Solution of Partial Differential Equations.
1983-12-01
RETURN 65 Bibliography 1. Thompson , J . F ., "A Survey of Grid Generation Tecniques in Computational Fluid Dynamics," AIAA Paper No. 83-0447, 1-36...edited by K. N. Ghia and U. Ghia. ASME FED, 5: 35-47 (1983). 3. Thompson , J . F ., Thames, F. C., and Mastin, C. W., "Automated Numerical Generation...Equations," Numerical Grid Generation, Edited by J. F. Thompson. New York: North Holland, 1982. 10. Thompson , J . F ., and Mastin, C. W., "Grid Generation
Comparative study of infrared wavefront sensing solutions for adaptive optics
NASA Astrophysics Data System (ADS)
Plantet, C.; Fusco, T.; Guerineau, N.; Derelle, S.; Robert, C.
2016-07-01
The development of new low-noise infrared detectors, such as RAPID (CEA LETI/Sofradir) or SAPHIRA (Selex), has given the possibility to consider infrared wavefront sensing at low ux. We propose here a comparative study of near infrared (J and H bands) wavefront sensing concepts for mid and high orders estimation on a 8m- class telescope, relying on three existing wavefront sensors: the Shack-Hartmann sensor, the pyramid sensor and the quadri-wave lateral shearing interferometer. We consider several conceptual designs using the RAPID camera, making a trade-off between background flux, optical thickness and compatibility with a compact cryostat integration. We then study their sensitivity to noise in order to compare them in different practical scenarios. The pyramid provides the best performance, with a gain up to 0.5 magnitude, and has an advantageous setup.
Developing an Adaptive ADL Solution for Training Medical Teams
2003-01-01
The depth and extent of this pre-exercise briefing depends on students’ mastery levels and learning styles , with some students being given... learning styles , and motivational issues. RELATED WORK [Shaw et. al. 1999] have developed an agent-based intelligent tutoring system called ADELE
Adaptive local discontinuous Galerkin approximation to Richards’ equation
NASA Astrophysics Data System (ADS)
Li, H.; Farthing, M. W.; Miller, C. T.
2007-09-01
We propose a spatially and temporally adaptive solution to Richards' equation based upon a local discontinuous Galerkin approximation in space and a high-order, backward difference method in time. We cast our approach in terms of a general, decoupled adaption algorithm based upon operators. We define non-unique instances of all operators to result in an adaption method from within the general class of methods that is defined. We formally decouple the spatial adaption from the temporal adaption using a method of lines approach and limit the temporal truncation error so that the total error is dominated by the spatial component. We use a multiple grid approach to guide adaption and support the data structures. Spatial adaption decisions are based upon error and regularity indicators, which are economical to compute. The resultant methods are compared for two test problems. The results show that the proposed adaption methods are superior to methods that adapt only in time and that in cases in which the problem has sufficient smoothness, adapting the order of the elements in addition to the grid spacing can further improve the efficiency of this robust solution approach.
Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature
NASA Technical Reports Server (NTRS)
Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)
2000-01-01
This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.
Usuelli, F G; Montrasio, U Alfieri
2012-06-01
Flexible flatfoot is one of the most common deformities. Arthroereisis procedures are designed to correct this deformity. Among them, the calcaneo-stop is a procedure with both biomechanical and proprioceptive properties. It is designed for pediatric treatment. Results similar to endorthesis procedure are reported. Theoretically the procedure can be applied to adults if combined with other procedures to obtain a stable plantigrade foot, but medium-term follow up studies are missing.
Pollutant Assessments Group Procedures Manual
Chavarria, D.E.; Davidson, J.R.; Espegren, M.L.; Kearl, P.M.; Knott, R.R.; Pierce, G.A.; Retolaza, C.D.; Smuin, D.R.; Wilson, M.J.; Witt, D.A. ); Conklin, N.G.; Egidi, P.V.; Ertel, D.B.; Foster, D.S.; Krall, B.J.; Meredith, R.L.; Rice, J.A.; Roemer, E.K. )
1991-02-01
This procedures manual combines the existing procedures for radiological and chemical assessment of hazardous wastes used by the Pollutant Assessments Group at the time of manuscript completion (October 1, 1990). These procedures will be revised in an ongoing process to incorporate new developments in hazardous waste assessment technology and changes in administrative policy and support procedures. Format inconsistencies will be corrected in subsequent revisions of individual procedures.
Adaptation of Selenastrum capricornutum (Chlorophyceae) to copper
Kuwabara, J.S.; Leland, H.V.
1986-01-01
Selenastrum capricornutum Printz, growing in a chemically defined medium, was used as a model for studying adaptation of algae to a toxic metal (copper) ion. Cells exhibited lag-phase adaptation to 0.8 ??M total Cu (10-12 M free ion concentration) after 20 generations of Cu exposure. Selenastrum adapted to the same concentration when Cu was gradually introduced over an 8-h period using a specially designed apparatus that provided a transient increase in exposure concentration. Cu adaptation was not attributable to media conditioning by algal exudates. Duration of lag phase was a more sensitive index of copper toxicity to Selenastrum that was growth rate or stationary-phase cell density under the experimental conditions used. Chemical speciation of the Cu dosing solution influenced the duration of lag phase even when media formulations were identical after dosing. Selenastrum initially exposed to Cu in a CuCl2 injection solution exhibited a lag phase of 3.9 d, but this was reduced to 1.5 d when a CuEDTA solution was used to achieve the same total Cu and EDTA concentrations. Physical and chemical processes that accelerated the rate of increase in cupric ion concentration generally increased the duration of lag phase. ?? 1986.
Adaptive mesh strategies for the spectral element method
NASA Technical Reports Server (NTRS)
Mavriplis, Catherine
1992-01-01
An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burger equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
Physiotherapy following elective orthopaedic procedures.
De Kleijn, P; Blamey, G; Zourikian, N; Dalzell, R; Lobet, S
2006-07-01
As haemophilic arthropathy and chronic synovitis are still the most important clinical features in people with haemophilia, different kinds of invasive and orthopaedic procedures have become more common during the last decades. The availability of clotting factor has made arthroplasty of one, or even multiple joints possible. This article highlights the role of physiotherapy before and after such procedures. Synovectomies are sometimes advocated in people with haemophilia to stop repetitive cycles of intra-articular bleeds and/or chronic synovitis. The synovectomy itself, however, does not solve the muscle atrophy, loss of range of motion (ROM), instability and poor propriocepsis, often developed during many years. The key is in taking advantage of the subsequent, relatively safe, bleed-free period to address these important issues. Although the preoperative ROM is the most important variable influencing the postoperative ROM after total knee arthroplasty, there are a few key points that should be considered to improve the outcome. Early mobilization, either manual or by means of a continuous passive mobilization machine, can be an optimal solution during the very first postoperative days. Muscle isometric contractions and light open kinetic chain exercises should also be started in order to restore the quadriceps control. Partial weight bearing can be started shortly after, because of quadriceps inhibition and to avoid excessive swelling. The use of continuous clotting factor replacement permits earlier and intensive rehabilitation during the postoperative period. During the rehabilitation of shoulder arthroplasty restoring the function of the rotator cuff is of utmost importance. Often the rotator cuff muscles are inhibited in the presence of pain and loss of ROM. Physiotherapy also assists in improving pain and maintaining ROM and strength. Functional weight-bearing tasks, such as using the upper limbs to sit and stand, are often discouraged during the first 6
46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 5 2013-10-01 2013-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is...
46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 5 2014-10-01 2014-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is...
46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 5 2011-10-01 2011-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is...
46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is...
46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 5 2012-10-01 2012-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is...
Favorite Demonstrations: Exothermic Crystallization from a Supersaturated Solution.
ERIC Educational Resources Information Center
Kauffman, George B.; And Others
1986-01-01
The use of sodium acetate solution to show supersaturation is a favorite among lecture demonstrations. However, careful adjustment of the solute-to-water ratio must be made to attain the most spectacular effect--complete solidification of the solution. Procedures to accomplish this are provided and discussed. (JN)
46 CFR 153.1065 - Sodium chlorate solutions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 5 2011-10-01 2011-10-01 false Sodium chlorate solutions. 153.1065 Section 153.1065... Procedures § 153.1065 Sodium chlorate solutions. (a) No person may load sodium chlorate solutions into a... before loading. (b) The person in charge of cargo transfer shall make sure that spills of sodium...
46 CFR 153.1065 - Sodium chlorate solutions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 5 2012-10-01 2012-10-01 false Sodium chlorate solutions. 153.1065 Section 153.1065... Procedures § 153.1065 Sodium chlorate solutions. (a) No person may load sodium chlorate solutions into a... before loading. (b) The person in charge of cargo transfer shall make sure that spills of sodium...
46 CFR 153.1065 - Sodium chlorate solutions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Sodium chlorate solutions. 153.1065 Section 153.1065... Procedures § 153.1065 Sodium chlorate solutions. (a) No person may load sodium chlorate solutions into a... before loading. (b) The person in charge of cargo transfer shall make sure that spills of sodium...
Solution-Focused Therapy: Toward the Identification of Therapeutic Tasks.
ERIC Educational Resources Information Center
Molnar, Alex; de Shazer, Steve
1987-01-01
Notes that brief therapy has often been regarded as "problem solving therapy." Discusses development of a solution-focused approach to clinical practice and describes solution-focused therapeutic tasks and interventions. Outlines some of clinical procedures and interventions possible when a solution-focused approach is used. (Author/NB)
46 CFR 153.1065 - Sodium chlorate solutions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 5 2013-10-01 2013-10-01 false Sodium chlorate solutions. 153.1065 Section 153.1065... Procedures § 153.1065 Sodium chlorate solutions. (a) No person may load sodium chlorate solutions into a... before loading. (b) The person in charge of cargo transfer shall make sure that spills of sodium...
46 CFR 153.1065 - Sodium chlorate solutions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 5 2014-10-01 2014-10-01 false Sodium chlorate solutions. 153.1065 Section 153.1065... Procedures § 153.1065 Sodium chlorate solutions. (a) No person may load sodium chlorate solutions into a... before loading. (b) The person in charge of cargo transfer shall make sure that spills of sodium...
Gravitational adaptation of animals
NASA Technical Reports Server (NTRS)
Smith, A. H.; Burton, R. R.
1982-01-01
The effect of gravitational adaptation is studied in a group of five Leghorn cocks which had become physiologically adapted to 2 G after 162 days of centrifugation. After this period of adaptation, they are periodically exposed to a 2 G field, accompanied by five previously unexposed hatch-mates, and the degree of retained acceleration adaptation is estimated from the decrease in lymphocyte frequency after 24 hr at 2 G. Results show that the previously adapted birds exhibit an 84% greater lymphopenia than the unexposed birds, and that the lymphocyte frequency does not decrease to a level below that found at the end of 162 days at 2 G. In addition, the capacity for adaptation to chronic acceleration is found to be highly heritable. An acceleration tolerant strain of birds shows lesser mortality during chronic acceleration, particularly in intermediate fields, although the result of acceleration selection is largely quantitative (a greater number of survivors) rather than qualitative (behavioral or physiological changes).
Technology transfer for adaptation
NASA Astrophysics Data System (ADS)
Biagini, Bonizella; Kuhl, Laura; Gallagher, Kelly Sims; Ortiz, Claudia
2014-09-01
Technology alone will not be able to solve adaptation challenges, but it is likely to play an important role. As a result of the role of technology in adaptation and the importance of international collaboration for climate change, technology transfer for adaptation is a critical but understudied issue. Through an analysis of Global Environment Facility-managed adaptation projects, we find there is significantly more technology transfer occurring in adaptation projects than might be expected given the pessimistic rhetoric surrounding technology transfer for adaptation. Most projects focused on demonstration and early deployment/niche formation for existing technologies rather than earlier stages of innovation, which is understandable considering the pilot nature of the projects. Key challenges for the transfer process, including technology selection and appropriateness under climate change, markets and access to technology, and diffusion strategies are discussed in more detail.
Parallel Anisotropic Tetrahedral Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic is described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000-1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
Liongue, Clifford; John, Liza B; Ward, Alister
2011-01-01
Adaptive immunity, involving distinctive antibody- and cell-mediated responses to specific antigens based on "memory" of previous exposure, is a hallmark of higher vertebrates. It has been argued that adaptive immunity arose rapidly, as articulated in the "big bang theory" surrounding its origins, which stresses the importance of coincident whole-genome duplications. Through a close examination of the key molecules and molecular processes underpinning adaptive immunity, this review suggests a less-extreme model, in which adaptive immunity emerged as part of longer evolutionary journey. Clearly, whole-genome duplications provided additional raw genetic materials that were vital to the emergence of adaptive immunity, but a variety of other genetic events were also required to generate some of the key molecules, whereas others were preexisting and simply co-opted into adaptive immunity.
Adaptive Bayes classifiers for remotely sensed data
NASA Technical Reports Server (NTRS)
Raulston, H. S.; Pace, M. O.; Gonzalez, R. C.
1975-01-01
An algorithm is developed for a learning, adaptive, statistical pattern classifier for remotely sensed data. The estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest, and (2) a projection of the parameters in time and space. The results reported are for Gaussian data in which the mean vector of each class may vary with time or position after the classifier is trained.
NASA Astrophysics Data System (ADS)
Falugi, P.; Olaru, S.; Dumur, D.
2010-08-01
This article proposes an explicit robust predictive control solution based on linear matrix inequalities (LMIs). The considered predictive control strategy uses different local descriptions of the system dynamics and uncertainties and thus allows the handling of less conservative input constraints. The computed control law guarantees constraint satisfaction and asymptotic stability. The technique is effective for a class of nonlinear systems embedded into polytopic models. A detailed discussion of the procedures which adapt the partition of the state space is presented. For the practical implementation the construction of suitable (explicit) descriptions of the control law are described upon concrete algorithms.
Quantifying the adaptive cycle
Angeler, David G.; Allen, Craig R.; Garmestani, Ahjond S.; Gunderson, Lance H.; Hjerne, Olle; Winder, Monika
2015-01-01
The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994–2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.
Evans, G.W. Jacobs, S.V.; Frager, N.B.
1982-10-01
This study examined the health effects of human adaptation to photochemical smog. A group of recent arrivals to the Los Angeles air basin were compared to long-term residents of the basin. Evidence for adaptation included greater irritation and respiratory problems among the recent arrivals and desensitization among the long-term residents in their judgments of the severity of the smog problem to their health. There was no evidence for biochemical adaptation as measured by hemoglobin response to oxidant challenge. The results were discussed in terms of psychological adaption to chronic environmental stressors.
Quantifying the Adaptive Cycle.
Angeler, David G; Allen, Craig R; Garmestani, Ahjond S; Gunderson, Lance H; Hjerne, Olle; Winder, Monika
2015-01-01
The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994-2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.
Adaptive parallel logic networks
NASA Technical Reports Server (NTRS)
Martinez, Tony R.; Vidal, Jacques J.
1988-01-01
Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.
Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement
NASA Astrophysics Data System (ADS)
Shervani-Tabar, Navid; Vasilyev, Oleg V.
2016-11-01
This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the normal direction to the interface, thus, preserving the conservative level set properties, while away from the interfaces the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to need for a finer resolution in the vicinity of the interface in comparison with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.
Educational Software and Adaptive Technology for Students with Learning Disabilities.
ERIC Educational Resources Information Center
Payne, Mario D.; Sachs, Rose
Technological solutions have enabled postsecondary students with learning disabilities to compete equally with nondisabled peers in the educational environment. Such solutions have included a variety of educational software, word processing applications, and adaptive technology. Educational software has many benefits over more traditional…
Neural Adaptation Effects in Conceptual Processing.
Marino, Barbara F M; Borghi, Anna M; Gemmi, Luca; Cacciari, Cristina; Riggio, Lucia
2015-07-31
We investigated the conceptual processing of nouns referring to objects characterized by a highly typical color and orientation. We used a go/no-go task in which we asked participants to categorize each noun as referring or not to natural entities (e.g., animals) after a selective adaptation of color-edge neurons in the posterior LV4 region of the visual cortex was induced by means of a McCollough effect procedure. This manipulation affected categorization: the green-vertical adaptation led to slower responses than the green-horizontal adaptation, regardless of the specific color and orientation of the to-be-categorized noun. This result suggests that the conceptual processing of natural entities may entail the activation of modality-specific neural channels with weights proportional to the reliability of the signals produced by these channels during actual perception. This finding is discussed with reference to the debate about the grounded cognition view.
Neural Adaptation Effects in Conceptual Processing
Marino, Barbara F. M.; Borghi, Anna M.; Gemmi, Luca; Cacciari, Cristina; Riggio, Lucia
2015-01-01
We investigated the conceptual processing of nouns referring to objects characterized by a highly typical color and orientation. We used a go/no-go task in which we asked participants to categorize each noun as referring or not to natural entities (e.g., animals) after a selective adaptation of color-edge neurons in the posterior LV4 region of the visual cortex was induced by means of a McCollough effect procedure. This manipulation affected categorization: the green-vertical adaptation led to slower responses than the green-horizontal adaptation, regardless of the specific color and orientation of the to-be-categorized noun. This result suggests that the conceptual processing of natural entities may entail the activation of modality-specific neural channels with weights proportional to the reliability of the signals produced by these channels during actual perception. This finding is discussed with reference to the debate about the grounded cognition view. PMID:26264031
Robust adaptive control of MEMS triaxial gyroscope using fuzzy compensator.
Fei, Juntao; Zhou, Jian
2012-12-01
In this paper, a robust adaptive control strategy using a fuzzy compensator for MEMS triaxial gyroscope, which has system nonlinearities, including model uncertainties and external disturbances, is proposed. A fuzzy logic controller that could compensate for the model uncertainties and external disturbances is incorporated into the adaptive control scheme in the Lyapunov framework. The proposed adaptive fuzzy controller can guarantee the convergence and asymptotical stability of the closed-loop system. The proposed adaptive fuzzy control strategy does not depend on accurate mathematical models, which simplifies the design procedure. The innovative development of intelligent control methods incorporated with conventional control for the MEMS gyroscope is derived with the strict theoretical proof of the Lyapunov stability. Numerical simulations are investigated to verify the effectiveness of the proposed adaptive fuzzy control scheme and demonstrate the satisfactory tracking performance and robustness against model uncertainties and external disturbances compared with conventional adaptive control method.
Surface cleanliness measurement procedure
Schroder, Mark Stewart; Woodmansee, Donald Ernest; Beadie, Douglas Frank
2002-01-01
A procedure and tools for quantifying surface cleanliness are described. Cleanliness of a target surface is quantified by wiping a prescribed area of the surface with a flexible, bright white cloth swatch, preferably mounted on a special tool. The cloth picks up a substantial amount of any particulate surface contamination. The amount of contamination is determined by measuring the reflectivity loss of the cloth before and after wiping on the contaminated system and comparing that loss to a previous calibration with similar contamination. In the alternative, a visual comparison of the contaminated cloth to a contamination key provides an indication of the surface cleanliness.
Radiometric correction procedure study
NASA Technical Reports Server (NTRS)
Colby, C.; Sands, R.; Murphrey, S.
1978-01-01
A comparison of MSS radiometric processing techniques identified as a preferred radiometric processing technique a procedure which equalizes the mean and standard deviation of detector-specific histograms of uncalibrated scene data. Evaluation of MSS calibration data demonstrated that the relationship between detector responses is essentially linear over the range of intensities typically observed in MSS data, and that the calibration wedge data possess a high degree of temporal stability. An analysis of the preferred radiometric processing technique showed that it could be incorporated into the MDP-MSS system without a major redesign of the system, and with minimal impact on system throughput.
Interventional radiology neck procedures.
Zabala Landa, R M; Korta Gómez, I; Del Cura Rodríguez, J L
2016-05-01
Ultrasonography has become extremely useful in the evaluation of masses in the head and neck. It enables us to determine the anatomic location of the masses as well as the characteristics of the tissues that compose them, thus making it possible to orient the differential diagnosis toward inflammatory, neoplastic, congenital, traumatic, or vascular lesions, although it is necessary to use computed tomography or magnetic resonance imaging to determine the complete extension of certain lesions. The growing range of interventional procedures, mostly guided by ultrasonography, now includes biopsies, drainages, infiltrations, sclerosing treatments, and tumor ablation.
Augmented reality in surgical procedures
NASA Astrophysics Data System (ADS)
Samset, E.; Schmalstieg, D.; Vander Sloten, J.; Freudenthal, A.; Declerck, J.; Casciaro, S.; Rideng, Ø.; Gersak, B.
2008-02-01
Minimally invasive therapy (MIT) is one of the most important trends in modern medicine. It includes a wide range of therapies in videoscopic surgery and interventional radiology and is performed through small incisions. It reduces hospital stay-time by allowing faster recovery and offers substantially improved cost-effectiveness for the hospital and the society. However, the introduction of MIT has also led to new problems. The manipulation of structures within the body through small incisions reduces dexterity and tactile feedback. It requires a different approach than conventional surgical procedures, since eye-hand co-ordination is not based on direct vision, but more predominantly on image guidance via endoscopes or radiological imaging modalities. ARIS*ER is a multidisciplinary consortium developing a new generation of decision support tools for MIT by augmenting visual and sensorial feedback. We will present tools based on novel concepts in visualization, robotics and haptics providing tailored solutions for a range of clinical applications. Examples from radio-frequency ablation of liver-tumors, laparoscopic liver surgery and minimally invasive cardiac surgery will be presented. Demonstrators were developed with the aim to provide a seamless workflow for the clinical user conducting image-guided therapy.
Adaptive Management for a Turbulent Future
Allen, Craig R.; Fontaine, Joseph J.; Pope, Kevin L.; Garmestani, Ahjond S.
2011-01-01
The challenges that face humanity today differ from the past because as the scale of human influence has increased, our biggest challenges have become global in nature, and formerly local problems that could be addressed by shifting populations or switching resources, now aggregate (i.e., "scale up") limiting potential management options. Adaptive management is an approach to natural resource management that emphasizes learning through management based on the philosophy that knowledge is incomplete and much of what we think we know is actually wrong. Adaptive management has explicit structure, including careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. It is evident that adaptive management has matured, but it has also reached a crossroads. Practitioners and scientists have developed adaptive management and structured decision making techniques, and mathematicians have developed methods to reduce the uncertainties encountered in resource management, yet there continues to be misapplication of the method and misunderstanding of its purpose. Ironically, the confusion over the term "adaptive management" may stem from the flexibility inherent in the approach, which has resulted in multiple interpretations of "adaptive management" that fall along a continuum of complexity and a priori design. Adaptive management is not a panacea for the navigation of 'wicked problems' as it does not produce easy answers, and is only appropriate in a subset of natural resource management problems where both uncertainty and controllability are high. Nonetheless, the conceptual underpinnings of adaptive management are simple; there will always be inherent uncertainty and unpredictability in the dynamics and behavior of complex social-ecological systems, but management decisions must still be made, and whenever possible, we should incorporate
Adaptive management for a turbulent future
Allen, C.R.; Fontaine, J.J.; Pope, K.L.; Garmestani, A.S.
2011-01-01
The challenges that face humanity today differ from the past because as the scale of human influence has increased, our biggest challenges have become global in nature, and formerly local problems that could be addressed by shifting populations or switching resources, now aggregate (i.e., "scale up") limiting potential management options. Adaptive management is an approach to natural resource management that emphasizes learning through management based on the philosophy that knowledge is incomplete and much of what we think we know is actually wrong. Adaptive management has explicit structure, including careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. It is evident that adaptive management has matured, but it has also reached a crossroads. Practitioners and scientists have developed adaptive management and structured decision making techniques, and mathematicians have developed methods to reduce the uncertainties encountered in resource management, yet there continues to be misapplication of the method and misunderstanding of its purpose. Ironically, the confusion over the term "adaptive management" may stem from the flexibility inherent in the approach, which has resulted in multiple interpretations of "adaptive management" that fall along a continuum of complexity and a priori design. Adaptive management is not a panacea for the navigation of 'wicked problems' as it does not produce easy answers, and is only appropriate in a subset of natural resource management problems where both uncertainty and controllability are high. Nonetheless, the conceptual underpinnings of adaptive management are simple; there will always be inherent uncertainty and unpredictability in the dynamics and behavior of complex social-ecological systems, but management decisions must still be made, and whenever possible, we should incorporate
Manifold For Flushing Tubes With Cleaning Solution
NASA Technical Reports Server (NTRS)
Morgan, Gene E.; Fogel, Irving
1995-01-01
Custom-built manifold mounted on cleaning basket enables simultaneous flushing of 80 tubes with cleaning solution. In original application, tubes components of rocket-engine nozzle under construction. However, basic manifold configuration adapted to other applications (e.g., fabrication of heat exchangers) in which there is need for simultaneous cleaning of many tubes of identical size and shape.
Advanced crew procedures development techniques: Procedures and performance program description
NASA Technical Reports Server (NTRS)
Arbet, J. D.; Mangiaracina, A. A.
1975-01-01
The Procedures and Performance Program (PPP) for operation in conjunction with the Shuttle Procedures Simulator (SPS) is described. The PPP user interface, the SPS/PPP interface, and the PPP applications software are discussed.
Physiologic adaptation to space - Space adaptation syndrome
NASA Technical Reports Server (NTRS)
Vanderploeg, J. M.
1985-01-01
The adaptive changes of the neurovestibular system to microgravity, which result in space motion sickness (SMS), are studied. A list of symptoms, which range from vomiting to drowsiness, is provided. The two patterns of symptom development, rapid and gradual, and the duration of the symptoms are described. The concept of sensory conflict and rearrangements to explain SMS is being investigated.
Adaptive optical interconnects: the ADDAPT project
NASA Astrophysics Data System (ADS)
Henker, Ronny; Pliva, Jan; Khafaji, Mahdi; Ellinger, Frank; Toifl, Thomas; Offrein, Bert; Cevrero, Alessandro; Oezkaya, Ilter; Seifried, Marc; Ledentsov, Nikolay; Kropp, Joerg-R.; Shchukin, Vitaly; Zoldak, Martin; Halmo, Leos; Turkiewicz, Jaroslaw; Meredith, Wyn; Eddie, Iain; Georgiades, Michael; Charalambides, Savvas; Duis, Jeroen; van Leeuwen, Pieter
2015-09-01
Existing optical networks are driven by dynamic user and application demands but operate statically at their maximum performance. Thus, optical links do not offer much adaptability and are not very energy-efficient. In this paper a novel approach of implementing performance and power adaptivity from system down to optical device, electrical circuit and transistor level is proposed. Depending on the actual data load, the number of activated link paths and individual device parameters like bandwidth, clock rate, modulation format and gain are adapted to enable lowering the components supply power. This enables flexible energy-efficient optical transmission links which pave the way for massive reductions of CO2 emission and operating costs in data center and high performance computing applications. Within the FP7 research project Adaptive Data and Power Aware Transceivers for Optical Communications (ADDAPT) dynamic high-speed energy-efficient transceiver subsystems are developed for short-range optical interconnects taking up new adaptive technologies and methods. The research of eight partners from industry, research and education spanning seven European countries includes the investigation of several adaptive control types and algorithms, the development of a full transceiver system, the design and fabrication of optical components and integrated circuits as well as the development of high-speed, low loss packaging solutions. This paper describes and discusses the idea of ADDAPT and provides an overview about the latest research results in this field.
Regulations and Procedures Manual
Young, Lydia J.
2011-07-25
The purpose of the Regulations and Procedures Manual (RPM) is to provide LBNL personnel with a reference to University and Lawrence Berkeley National Laboratory (LBNL or Laboratory) policies and regulations by outlining normal practices and answering most policy questions that arise in the day-to-day operations of Laboratory organizations. Much of the information in this manual has been condensed from detail provided in LBNL procedure manuals, Department of Energy (DOE) directives, and Contract DE-AC02-05CH11231. This manual is not intended, however, to replace any of those documents. RPM sections on personnel apply only to employees who are not represented by unions. Personnel policies pertaining to employees represented by unions may be found in their labor agreements. Questions concerning policy interpretation should be directed to the LBNL organization responsible for the particular policy. A link to the Managers Responsible for RPM Sections is available on the RPM home page. If it is not clear which organization is responsible for a policy, please contact Requirements Manager Lydia Young or the RPM Editor.
Designing Flight Deck Procedures
NASA Technical Reports Server (NTRS)
Degani, Asaf; Wiener, Earl
2005-01-01
Three reports address the design of flight-deck procedures and various aspects of human interaction with cockpit systems that have direct impact on flight safety. One report, On the Typography of Flight- Deck Documentation, discusses basic research about typography and the kind of information needed by designers of flight deck documentation. Flight crews reading poorly designed documentation may easily overlook a crucial item on the checklist. The report surveys and summarizes the available literature regarding the design and typographical aspects of printed material. It focuses on typographical factors such as proper typefaces, character height, use of lower- and upper-case characters, line length, and spacing. Graphical aspects such as layout, color coding, fonts, and character contrast are discussed; and several cockpit conditions such as lighting levels and glare are addressed, as well as usage factors such as angular alignment, paper quality, and colors. Most of the insights and recommendations discussed in this report are transferable to paperless cockpit systems of the future and computer-based procedure displays (e.g., "electronic flight bag") in aerospace systems and similar systems that are used in other industries such as medical, nuclear systems, maritime operations, and military systems.
Regulations and Procedures Manual
Young, Lydia
2010-09-30
The purpose of the Regulations and Procedures Manual (RPM) is to provide Laboratory personnel with a reference to University and Lawrence Berkeley National Laboratory policies and regulations by outlining the normal practices and answering most policy questions that arise in the day-to-day operations of Laboratory departments. Much of the information in this manual has been condensed from detail provided in Laboratory procedure manuals, Department of Energy (DOE) directives, and Contract DE-AC02-05CH11231. This manual is not intended, however, to replace any of those documents. The sections on personnel apply only to employees who are not represented by unions. Personnel policies pertaining to employees represented by unions may be found in their labor agreements. Questions concerning policy interpretation should be directed to the department responsible for the particular policy. A link to the Managers Responsible for RPM Sections is available on the RPM home page. If it is not clear which department should be called, please contact the Associate Laboratory Director of Operations.
Water Resource Adaptation Program
The Water Resource Adaptation Program (WRAP) contributes to the U.S. Environmental Protection Agency’s (U.S. EPA) efforts to provide water resource managers and decision makers with the tools needed to adapt water resources to demographic and economic development, and future clim...
ERIC Educational Resources Information Center
Corno, Lyn
2008-01-01
New theory on adaptive teaching reflects the social dynamics of classrooms to explain what practicing teachers do to address student differences related to learning. In teaching adaptively, teachers respond to learners as they work. Teachers read student signals to diagnose needs on the fly and tap previous experience with similar learners to…
Computerized Adaptive Ability Measurement.
ERIC Educational Resources Information Center
Weiss, David J.
The general objective of a research program on adaptive testing was to identify several sources of potential error in test scores, and to study adaptive testing as a means for reducing these errors. Errors can result from the mismatch of item difficulty to the individual's ability; the psychological effects of testing and the test environment; the…
Uncertainty in adaptive capacity
NASA Astrophysics Data System (ADS)
Adger, W. Neil; Vincent, Katharine
2005-03-01
The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, and literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. To cite this article: W.N. Adger, K. Vincent, C. R. Geoscience 337 (2005).
Retinal Imaging: Adaptive Optics
NASA Astrophysics Data System (ADS)
Goncharov, A. S.; Iroshnikov, N. G.; Larichev, Andrey V.
This chapter describes several factors influencing the performance of ophthalmic diagnostic systems with adaptive optics compensation of human eye aberration. Particular attention is paid to speckle modulation, temporal behavior of aberrations, and anisoplanatic effects. The implementation of a fundus camera with adaptive optics is considered.
Research, Adaptation, & Change.
ERIC Educational Resources Information Center
Morris, Lee A., Ed.; And Others
Research adaptation is an endeavor that implies solid collaboration among school practitioners and university and college researchers. This volume addresses the broad issues of research as an educational endeavor, adaptation as a necessary function associated with applying research findings to school situations, and change as an inevitable…
Szu, H.; Hsu, C.
1996-12-31
Human sensors systems (HSS) may be approximately described as an adaptive or self-learning version of the Wavelet Transforms (WT) that are capable to learn from several input-output associative pairs of suitable transform mother wavelets. Such an Adaptive WT (AWT) is a redundant combination of mother wavelets to either represent or classify inputs.
Fast autodidactic adaptive equalization algorithms
NASA Astrophysics Data System (ADS)
Hilal, Katia
Autodidactic equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, using an adaptive stochastic gradient Bussgang type algorithm, to deduce two low cost computation algorithms is given: one equivalent to the initial algorithm and the other having improved convergence properties thanks to a block criteria minimization. Two start algorithms are reworked: the Godard algorithm and the decision controlled algorithm. Using a normalization procedure, and block normalization, the performances are improved, and their common points are evaluated. These common points are used to propose an algorithm retaining the advantages of the two initial algorithms. This thus inherits the robustness of the Godard algorithm and the precision and phase correction of the decision control algorithm. The work is completed by a study of the stable states of Bussgang type algorithms and of the stability of the Godard algorithms, initial and normalized. The simulation of these algorithms, carried out in a mobile radio communications context, and under severe conditions on the propagation channel, gave a 75% reduction in the number of samples required for the processing in relation with the initial algorithms. The improvement of the residual error was of a much lower return. These performances are close to making possible the use of autodidactic equalization in the mobile radio system.
Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis
NASA Technical Reports Server (NTRS)
Padovan, J.
1981-01-01
A multiphase selfadaptive predictor corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pro/post buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upperbound dependent iterate excursions during successive incremental Newton Ramphson (INR) type iterations; (20 uses an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality of convergence checks which enable various self adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switch to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.
Expressed emotion measure adaptation into a foreign language.
Rein, Z; Duclos, J; Perdereau, F; Curt, F; Apfel, Alexandre; Wallier, J; Verdier, A; Fermanian, J; Falissard, B; Zaden, S; Godart, N T
2011-01-01
Expressed emotion (EE) measures have been created in English; adaptation into a foreign language is difficult. The aim of this study was to adapt the five minutes speech sample (FMSS), with a designed procedure ensuring optimum quality of the adaptation, and thus better trans-cultural validity. A strategy for improving inter-rater agreement comprised three phases: (1) phase of initial ratings (70 French samples), (2) experimental phase in two steps: ratings of 40 other samples in French, followed by analysis of differences between the French-language ratings and English-language ratings and (3) final rating phase of the initial 70 samples. For each phase, the κ coefficients measuring inter-rater agreement were calculated and compared using a bootstrap procedure. The improvements between these scorings were significant at p < 0.05 (phase 2 initial versus phase 2 final and phases 1 versus 3). The French inter-rater agreement significantly improved after this procedure.
75 FR 57859 - Specially Adapted Housing and Special Home Adaptation
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-23
... AFFAIRS 38 CFR Part 3 RIN 2900-AN21 Specially Adapted Housing and Special Home Adaptation AGENCY... housing and special home adaptation grants. This final rule incorporates certain provisions from the... adapted housing (SAH) grants and special home adaptation (SHA) grants. The public comment period ended...
Financing climate change adaptation.
Bouwer, Laurens M; Aerts, Jeroen C J H
2006-03-01
This paper examines the topic of financing adaptation in future climate change policies. A major question is whether adaptation in developing countries should be financed under the 1992 United Nations Framework Convention on Climate Change (UNFCCC), or whether funding should come from other sources. We present an overview of financial resources and propose the employment of a two-track approach: one track that attempts to secure climate change adaptation funding under the UNFCCC; and a second track that improves mainstreaming of climate risk management in development efforts. Developed countries would need to demonstrate much greater commitment to the funding of adaptation measures if the UNFCCC were to cover a substantial part of the costs. The mainstreaming of climate change adaptation could follow a risk management path, particularly in relation to disaster risk reduction. 'Climate-proofing' of development projects that currently do not consider climate and weather risks could improve their sustainability.
Bayesian adaptive estimation of the auditory filter.
Shen, Yi; Richards, Virginia M
2013-08-01
A Bayesian adaptive procedure for estimating the auditory-filter shape was proposed and evaluated using young, normal-hearing listeners at moderate stimulus levels. The resulting quick-auditory-filter (qAF) procedure assumed the power spectrum model of masking with the auditory-filter shape being modeled using a spectrally symmetric, two-parameter rounded-exponential (roex) function. During data collection using the qAF procedure, listeners detected the presence of a pure-tone signal presented in the spectral notch of a noise masker. Dependent on the listener's response on each trial, the posterior probability distributions of the model parameters were updated, and the resulting parameter estimates were then used to optimize the choice of stimulus parameters for the subsequent trials. Results showed that the qAF procedure gave similar parameter estimates to the traditional threshold-based procedure in many cases and was able to reasonably predict the masked signal thresholds. Additional measurements suggested that occasional failures of the qAF procedure to reliably converge could be a consequence of incorrect responses early in a qAF track. The addition of a parameter describing lapses of attention reduced the likelihood of such failures.
An adaptive pseudospectral method for discontinuous problems
NASA Technical Reports Server (NTRS)
Augenbaum, Jeffrey M.
1988-01-01
The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep gradient solutions of hyperbolic pde's by approximation in suitably chosen coordinate systems. Continuous, high gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.
A conjugate heat transfer procedure for gas turbine blades.
Croce, G
2001-05-01
A conjugate heat transfer procedure, allowing for the use of different solvers on the solid and fluid domain(s), is presented. Information exchange between solid and fluid solution is limited to boundary condition values, and this exchange is carried out at any pseudo-time step. Global convergence rate of the procedure is, thus, of the same order of magnitude of stand-alone computations.
An Efficient Microscale Procedure for the Synthesis of Aspirin
NASA Astrophysics Data System (ADS)
Pandita, Sangeeta; Goyal, Samta
1998-06-01
The synthesis of aspirin is a part of many undergraduate organic synthesis labs and is frequently used in qualitative organic analysis laboratory for the identification of salicylic acid. We have found that aspirin can be synthesized on microscale by a simple and efficient procedure that eliminates the heating step employed in literature procedures and gives a pure, ferric-negative product (no purple color with alcoholic ferric chloride solution).
Asymptotic Linearity of Optimal Control Modification Adaptive Law with Analytical Stability Margins
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
Optimal control modification has been developed to improve robustness to model-reference adaptive control. For systems with linear matched uncertainty, optimal control modification adaptive law can be shown by a singular perturbation argument to possess an outer solution that exhibits a linear asymptotic property. Analytical expressions of phase and time delay margins for the outer solution can be obtained. Using the gradient projection operator, a free design parameter of the adaptive law can be selected to satisfy stability margins.
Thermodynamics of Dilute Solutions.
ERIC Educational Resources Information Center
Jancso, Gabor; Fenby, David V.
1983-01-01
Discusses principles and definitions related to the thermodynamics of dilute solutions. Topics considered include dilute solution, Gibbs-Duhem equation, reference systems (pure gases and gaseous mixtures, liquid mixtures, dilute solutions), real dilute solutions (focusing on solute and solvent), terminology, standard states, and reference systems.…
Group Syntality and Parliamentary Procedure.
ERIC Educational Resources Information Center
Winn, Larry James; Kell, Carl L.
The group syntality concept of Raymond B. Cattell furnishes a useful framework for teaching parliamentary procedure. Although there are contrasts between the histories, subject matters, and perspectives of the areas of parliamentary procedure and group dynamics, teachers and students of parliamentary procedure might profitably draw from some of…
Policy and Procedures Manual. Revised.
ERIC Educational Resources Information Center
Mississippi State Board for Community and Junior Colleges, Jackson.
The Mississippi State Board for Community and Junior College Policy and Procedures Manual has been established by the State Board to govern its actions and activities and those of the staff. It describes polices and procedures regarding board operations, staff employment, staff workplace, employee performance/grievance procedure, staff positions,…
Grid adaption using Chimera composite overlapping meshes
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1993-01-01
The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.