Interactive solution-adaptive grid generation procedure
NASA Technical Reports Server (NTRS)
Henderson, Todd L.; Choo, Yung K.; Lee, Ki D.
1992-01-01
TURBO-AD is an interactive solution-adaptive grid generation program under development. The program combines an interactive algebraic grid generation technique and a solution-adaptive grid generation technique into a single interactive package. The control point form uses a sparse collection of control points to algebraically generate a field grid. This technique provides local grid control capability and is well suited to interactive work due to its speed and efficiency. A mapping from the physical domain to a parametric domain was used to alleviate difficulties encountered near outwardly concave boundaries in the control point technique. Therefore, all grid modifications are performed on the unit square in the parametric domain, and the new adapted grid is then mapped back to the physical domain. The grid adaptation is achieved by adapting the control points to a numerical solution in the parametric domain using control sources obtained from the flow properties. Then a new modified grid is generated from the adapted control net. This process is efficient because the number of control points is much less than the number of grid points and the generation of the grid is an efficient algebraic process. TURBO-AD provides the user with both local and global controls.
Kim, D.; Ghanem, R.
1994-12-31
A multigrid solution technique for solving a material nonlinear problem in a visual programming environment using the finite element method is discussed. The nonlinear equation of equilibrium is linearized to incremental form using the Newton-Raphson technique, and a multigrid technique is then used to solve the linear equations at each Newton-Raphson step. In the process, adaptive mesh refinement, which is based on the bisection of a pair of triangles, is used to form the grid hierarchy for multigrid iteration. The solution process is implemented in a visual programming environment with distributed computing capability, which enables a more intuitive understanding of the solution process and more effective use of resources.
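The incremental Newton-Raphson loop described in this abstract can be sketched as follows. The scalar material law k(u) = 1 + u**2 is a hypothetical stand-in for a real constitutive model, and the linear solve that multigrid would perform at each step in the paper collapses here to a scalar division.

```python
# Newton-Raphson solution of a nonlinear equilibrium equation, reduced to a
# single scalar "element": find u such that k(u)*u = f, with k(u) = 1 + u**2
# (hypothetical nonlinear stiffness, not the paper's material model).

def newton_raphson(f, u0=0.0, tol=1e-12, max_iter=50):
    """Solve the residual equation r(u) = k(u)*u - f = 0 by Newton's method."""
    u = u0
    for _ in range(max_iter):
        k = 1.0 + u * u                  # nonlinear stiffness k(u)
        r = k * u - f                    # residual (out-of-balance force)
        if abs(r) < tol:
            break
        kt = 1.0 + 3.0 * u * u           # tangent stiffness dr/du
        u -= r / kt                      # linearized correction
                                         # (this linear solve is multigrid's
                                         # job in the full FEM setting)
    return u

u = newton_raphson(2.0)                  # solves u**3 + u = 2
```

For f = 2 the iteration converges quadratically to the root u = 1, at which (1 + u**2)*u = 2 holds to machine precision.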
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1983-01-01
This paper develops a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which enables the solution of creep-initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation gives a thorough overview of current solution schemes and their shortcomings, develops the constrained time stepping algorithms, and illustrates the results of several numerical experiments which benchmark the new procedure.
Parallel automated adaptive procedures for unstructured meshes
NASA Technical Reports Server (NTRS)
Shephard, M. S.; Flaherty, J. E.; Decougny, H. L.; Ozturan, C.; Bottasso, C. L.; Beall, M. W.
1995-01-01
Consideration is given to the techniques required to support adaptive analysis of automatically generated unstructured meshes on distributed memory MIMD parallel computers. The key areas of new development are focused on the support of effective parallel computations when the structure of the numerical discretization, the mesh, is evolving, and in fact constructed, during the computation. All the procedures presented operate in parallel on already distributed mesh information. Starting from a mesh definition in terms of a topological hierarchy, techniques to support the distribution, redistribution, and communication of the mesh entities over the processors are given, and algorithms to dynamically balance processor workload based on the migration of mesh entities are given. A procedure to automatically generate meshes in parallel, starting from CAD geometric models, is given. Parallel procedures to enrich the mesh through local mesh modifications are also given. Finally, the combination of these techniques to produce a parallel automated finite element analysis procedure for rotorcraft aerodynamics calculations is discussed and demonstrated.
Interactive solution-adaptive grid generation
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Henderson, Todd L.
1992-01-01
TURBO-AD is an interactive solution-adaptive grid generation program under development. The program combines an interactive algebraic grid generation technique and a solution-adaptive grid generation technique into a single interactive solution-adaptive grid generation package. The control point form uses a sparse collection of control points to algebraically generate a field grid. This technique provides local grid control capability and is well suited to interactive work due to its speed and efficiency. A mapping from the physical domain to a parametric domain was used to alleviate difficulties that had been encountered near outwardly concave boundaries in the control point technique. Therefore, all grid modifications are performed on a unit square in the parametric domain, and the new adapted grid in the parametric domain is then mapped back to the physical domain. The grid adaptation is achieved by first adapting the control points to a numerical solution in the parametric domain using control sources obtained from flow properties. Then a new modified grid is generated from the adapted control net. This solution-adaptive grid generation process is efficient because the number of control points is much less than the number of grid points and the generation of a new grid from the adapted control net is an efficient algebraic process. TURBO-AD provides the user with both local and global grid controls.
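The control-point idea can be illustrated with a minimal sketch: a sparse control net algebraically generates a fine grid on the parametric unit square, so regenerating the grid after adapting a control point is cheap. The 2x2 net and plain bilinear blending below are simplifying assumptions; TURBO-AD's control point form uses a richer blending formulation.

```python
# A sparse control net (here a hypothetical 2x2 corner net) algebraically
# generates an ni x nj field grid via tensor-product (bilinear) blending.
# Moving a control point and re-running generate_grid is an O(ni*nj)
# algebraic operation -- no PDE solve is needed.

def generate_grid(net, ni, nj):
    """Bilinearly blend a 2x2 control net into an ni x nj grid of (x, y)."""
    (p00, p01), (p10, p11) = net
    grid = []
    for i in range(ni):
        u = i / (ni - 1)
        row = []
        for j in range(nj):
            v = j / (nj - 1)
            x = (1-u)*(1-v)*p00[0] + (1-u)*v*p01[0] + u*(1-v)*p10[0] + u*v*p11[0]
            y = (1-u)*(1-v)*p00[1] + (1-u)*v*p01[1] + u*(1-v)*p10[1] + u*v*p11[1]
            row.append((x, y))
        grid.append(row)
    return grid

# identity net: control points at the corners of the unit square
net = [[(0.0, 0.0), (0.0, 1.0)], [(1.0, 0.0), (1.0, 1.0)]]
grid = generate_grid(net, 11, 11)
```

The generated grid interpolates the control net at the corners, which is the property that lets local control-point edits translate into local grid edits.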
Adaptive EAGLE dynamic solution adaptation and grid quality enhancement
NASA Technical Reports Server (NTRS)
Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.
1992-01-01
In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.
Multigrid solution of internal flows using unstructured solution adaptive meshes
NASA Astrophysics Data System (ADS)
Smith, Wayne A.; Blake, Kenneth R.
1992-11-01
This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will be applied primarily, though not exclusively, to high-speed internal flows in turbomachinery.
Combined LAURA-UPS hypersonic solution procedure
NASA Technical Reports Server (NTRS)
Wood, William A.; Thompson, Richard A.
1993-01-01
A combined solution procedure for hypersonic flowfields around blunted slender bodies was implemented using a thin-layer Navier-Stokes code (LAURA) in the nose region and a parabolized Navier-Stokes code (UPS) on the afterbody region. Perfect gas, equilibrium air, and nonequilibrium air solutions for sharp cones and a sharp wedge were obtained using UPS alone as a preliminary step. Surface heating rates are presented for two slender bodies with blunted noses, with LAURA providing a starting solution to UPS downstream of the sonic line. These are an 8 deg sphere-cone at Mach 5, perfect gas, laminar flow at 0 and 4 deg angles of attack, and the Reentry F body at Mach 20, 80,000 ft equilibrium gas conditions for 0 and 0.14 deg angles of attack. The results indicate that this procedure is a timely and accurate method for obtaining aerothermodynamic predictions on slender hypersonic vehicles.
Sweet solutions for procedural pain in infants.
2013-08-01
A sweet solution, such as sucrose or glucose, can be used for analgesia for minor short-term procedural pain, such as immunisation, in infants up to 12 months of age. The sweet solution is given orally and provides short-term analgesia. It has National Health and Medical Research Council (NHMRC) Level I evidence of efficacy, and no serious adverse effects have been reported. This article is part of a series on non-drug treatments summarising indications, considerations, evidence and where clinicians and patients can get further information. PMID:23971067
Staggered solution procedures for multibody dynamics simulation
NASA Technical Reports Server (NTRS)
Park, K. C.; Chiou, J. C.; Downer, J. D.
1990-01-01
The numerical solution procedure for multibody dynamics (MBD) systems is termed a staggered MBD solution procedure that solves the generalized coordinates in a separate module from that for the constraint force. This requires a reformulation of the constraint conditions so that the constraint forces can also be integrated in time. A major advantage of such a partitioned solution procedure is that additional analysis capabilities such as active controller and design optimization modules can be easily interfaced without embedding them into a monolithic program. After introducing the basic equations of motion for MBD systems in the second section, Section 3 briefly reviews some constraint handling techniques and introduces the staggered stabilized technique for the solution of the constraint forces as independent variables. The numerical direct time integration of the equations of motion is described in Section 4. As accurate damping treatment is important for the dynamics of space structures, we have employed the central difference method and the mid-point form of the trapezoidal rule since they engender no numerical damping. This is in contrast to the current practice in dynamic simulations of ground vehicles, which employ a set of backward difference formulas. First, the equations of motion are partitioned according to the translational and the rotational coordinates. This sets the stage for an efficient treatment of the rotational motions via the singularity-free Euler parameters. The resulting partitioned equations of motion are then integrated via a two-stage explicit stabilized algorithm for updating both the translational coordinates and angular velocities. Once the angular velocities are obtained, the angular orientations are updated via the mid-point implicit formula employing the Euler parameters. When the two algorithms, namely, the two-stage explicit algorithm for the generalized coordinates and the implicit staggered procedure for the constraint Lagrange
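The mid-point implicit update of the Euler parameters (the orientation quaternion) from a known angular velocity can be sketched as follows. This assumes a constant body-frame angular velocity and omits the two-stage explicit stage and the constraint treatment; the fixed-point solve of the implicit step is an illustrative choice.

```python
# Mid-point implicit integration of the Euler-parameter kinematics
#   dq/dt = (1/2) q (x) (0, w),
# which, like the trapezoidal rule, engenders no numerical damping: it
# preserves the quaternion norm for this (linear, skew-symmetric) system.

from math import pi

def quat_mul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qdot(q, w):
    """Kinematic relation dq/dt = (1/2) q (x) (0, w)."""
    return tuple(0.5 * c for c in quat_mul(q, (0.0,) + tuple(w)))

def midpoint_step(q, w, dt, iters=8):
    """One implicit mid-point step, solved by fixed-point iteration."""
    q1 = q
    for _ in range(iters):
        qm = tuple(0.5 * (a + b) for a, b in zip(q, q1))  # mid-point state
        q1 = tuple(a + dt * d for a, d in zip(q, qdot(qm, w)))
    return q1

q = (1.0, 0.0, 0.0, 0.0)        # identity orientation
w = (0.0, 0.0, 1.0)             # constant spin about z, 1 rad/s
dt = pi / 1000.0
for _ in range(1000):           # integrate to t = pi: a 180 deg rotation
    q = midpoint_step(q, w, dt)
```

After integrating to t = pi, q is close to (0, 0, 0, 1), the quaternion for a half-turn about z, and its norm stays at 1 without any renormalization, which is the property the abstract highlights for space-structure dynamics.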
Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.
1993-01-01
Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description of the enrichment and coarsening procedures is presented, and comparisons with experimental data for an ONERA M6 wing and an exact solution for a shock-tube problem are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results, obtained using spatial adaptation procedures, are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.
Adaptive Distributed Environment for Procedure Training (ADEPT)
NASA Technical Reports Server (NTRS)
Domeshek, Eric; Ong, James; Mohammed, John
2013-01-01
ADEPT (Adaptive Distributed Environment for Procedure Training) is designed to provide more effective, flexible, and portable training for NASA systems controllers. When creating a training scenario, an exercise author can specify a representative rationale structure using the graphical user interface, annotating the results with instructional texts where needed. The author's structure may distinguish between essential and optional parts of the rationale, and may also include "red herrings" - hypotheses that are essential to consider, until evidence and reasoning allow them to be ruled out. The system is built from pre-existing components, including Stottler Henke's SimVentive instructional simulation authoring tool and runtime. To that, a capability was added to author and exploit explicit control decision rationale representations. ADEPT uses SimVentive's Scalable Vector Graphics (SVG)-based interactive graphic display capability as the basis of the tool for quickly noting aspects of decision rationale in graph form. The ADEPT prototype is built in Java, and will run on any computer using Windows, MacOS, or Linux. No special peripheral equipment is required. The software enables a style of student/tutor interaction focused on the reasoning behind systems control behavior that better mimics proven Socratic human tutoring behaviors for highly cognitive skills. It supports fast, easy, and convenient authoring of such tutoring behaviors, allowing specification of detailed scenario-specific, but content-sensitive, high-quality tutor hints and feedback. The system places relatively light data-entry demands on the student to enable its rationale-centered discussions, and provides a support mechanism for fostering coherence in the student/tutor dialog by including focusing, sequencing, and utterance tuning mechanisms intended to better fit tutor hints and feedback into the ongoing context.
Effects of adaptive refinement on the inverse EEG solution
NASA Astrophysics Data System (ADS)
Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.
1995-10-01
One of the fundamental problems in electroencephalography can be characterized as an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed; i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
Self-adaptive closed constrained solution algorithms for nonlinear conduction
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1982-01-01
Self-adaptive solution algorithms are developed for nonlinear heat conduction problems encountered in analyzing materials for use in high temperature or cryogenic conditions. The nonlinear effects are noted to occur due to convection and radiation effects, as well as temperature-dependent properties of the materials. Incremental successive substitution (ISS) and Newton-Raphson (NR) procedures are treated as extrapolation schemes which have solution projections bounded by a hyperline with an externally applied thermal load vector arising from internal heat generation and boundary conditions. Closed constraints are formulated which improve the efficiency and stability of the procedures by employing closed ellipsoidal surfaces to control the size of successive iterations. Governing equations are defined for nonlinear finite element models, and comparisons are made of results using the new method and the ISS and NR schemes for epoxy, PVC, and CuGe.
An adaptive embedded mesh procedure for leading-edge vortex flows
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.
1989-01-01
A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
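The threshold-selection idea, picking the "knee" of the curve relating the number of flagged cells to the threshold value, might be sketched as follows. The second-difference curvature proxy and the toy refinement-parameter data are illustrative assumptions, not the paper's exact formulation.

```python
# Choose a refinement threshold from the curvature of the curve
# N(tau) = number of cells whose refinement parameter exceeds tau.
# The "knee" of that curve separates the few strongly flagged cells
# from the smooth bulk of the mesh.

def flag_count(params, tau):
    """Number of cells whose refinement parameter exceeds the threshold."""
    return sum(1 for p in params if p > tau)

def knee_threshold(params, taus):
    """Pick the threshold at the maximum curvature of the count-vs-threshold
    curve, using the discrete second difference as a curvature proxy."""
    counts = [flag_count(params, t) for t in taus]
    best_i, best_c = 1, float("-inf")
    for i in range(1, len(taus) - 1):
        c = abs(counts[i - 1] - 2 * counts[i] + counts[i + 1])
        if c > best_c:
            best_i, best_c = i, c
    return taus[best_i]

# toy data: 90 cells in smooth flow, 10 cells sitting in a strong gradient
params = [0.01] * 90 + [0.9] * 10
taus = [i / 10 for i in range(10)]
tau_star = knee_threshold(params, taus)
```

On this toy data the count curve drops sharply once the threshold passes the smooth-flow parameter values, so the knee lands just above them and only the high-gradient cells are flagged.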
Anisotropic Solution Adaptive Unstructured Grid Generation Using AFLR
NASA Technical Reports Server (NTRS)
Marcum, David L.
2007-01-01
An existing volume grid generation procedure, AFLR3, was successfully modified to generate anisotropic tetrahedral elements using a directional metric transformation defined at source nodes. The procedure can be coupled with a solver and an error estimator as part of an overall anisotropic solution adaptation methodology. It is suitable for use with an error estimator based on an adjoint, optimization, sensitivity derivative, or related approach. This offers many advantages, including more efficient point placement along with robust and efficient error estimation. It also serves as a framework for true grid optimization wherein error estimation and computational resources can be used as cost functions to determine the optimal point distribution. Within AFLR3 the metric transformation is implemented using a set of transformation vectors and associated aspect ratios. The modified overall procedure is presented along with details of the anisotropic transformation implementation. Multiple two- and three-dimensional examples are also presented that demonstrate the capability of the modified AFLR procedure to generate anisotropic elements using a set of source nodes with anisotropic transformation metrics. The example cases presented use moderate levels of anisotropy and result in usable element quality. Future testing with various flow solvers and methods for obtaining transformation metric information is needed to determine practical limits and evaluate the efficacy of the overall approach.
An Upgrading Procedure for Adaptive Assessment of Knowledge.
Anselmi, Pasquale; Robusto, Egidio; Stefanutti, Luca; de Chiusole, Debora
2016-06-01
In knowledge space theory, existing adaptive assessment procedures can only be applied when suitable estimates of their parameters are available. In this paper, an iterative procedure is proposed, which upgrades its parameters with the increasing number of assessments. The first assessments are run using parameter values that favor accuracy over efficiency. Subsequent assessments are run using new parameter values estimated on the incomplete response patterns from previous assessments. Parameter estimation is carried out through a new probabilistic model for missing-at-random data. Two simulation studies show that, with the increasing number of assessments, the performance of the proposed procedure approaches that of gold standards. PMID:27071952
A Procedural Solution to Model Roman Masonry Structures
NASA Astrophysics Data System (ADS)
Cappellini, V.; Saleri, R.; Stefani, C.; Nony, N.; De Luca, L.
2013-07-01
The paper will describe a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in Roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly, we have focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we have chosen an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac - PAM, developed by IGN (Paris). We have employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in a parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with an open-source programming language called Processing, useful for visual, animated or static, 2D or 3D, interactive creations. Using this computer language, a Java environment has been developed. Therefore, even if the procedural modelling reveals an accuracy level inferior to the one obtained by manual modelling (brick by brick), this method can be useful when taking into account the static evaluation on buildings (requiring quantitative aspects) and metric measures for restoration purposes.
Effect of cooling procedure on final denture base adaptation.
Ganzarolli, S M; Rached, R N; Garcia, R C M R; Del Bel Cury, A A
2002-08-01
Well-fitted dentures prevent hyperplasic lesions, provide chewing efficiency and promote patient's comfort. Several factors may affect the final adaptation of dentures, such as the type of acrylic resin, the flask cooling procedure and the water uptake. This investigation evaluated the effect of water storage and two different cooling procedures [bench cooling (BC) for 2 h; running water (RW) at 20 degrees C for 45 min] on the final adaptation of denture bases. A heat-cured acrylic resin (CL, Clássico, Clássico Artigos Odontológicos) and two microwave-cured acrylic resins [Acron MC, (AC) GC Dent. Ind. Corp.; Onda Cryl (OC), Clássico Artigos Odontológicos] were used to make the bases. Adaptation was assessed by measuring the weight of an intervening layer of silicone impression material between the base and the master die. Data were submitted to ANOVA and Tukey's test (0.05). The following means were found: (BC) CL=0.72 +/- 0.03 a; AC=0.70 +/- 0.03 b; OC=0.76 +/- 0.04 c//(RW) CL=1.00 +/- 0.11 a; AC=1.00 +/- 0.12 a; OC=0.95 +/- 0.10 a. Different labels join groups that are not statistically different (P > 0.05). Comparisons are made among groups submitted to the same cooling procedure (BC or RW). The conclusions are: interaction of type of material and cooling procedure had a statistically significant effect on the final adaptation of the denture bases (P < 0.05); water storage was not detected as a source of variance (P > 0.05) on the final adaptation. PMID:12220348
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
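The Richardson-extrapolation error estimate can be illustrated on a generic second-order approximation. A central-difference derivative stands in here for the MacCormack solution; the same two-resolution formula applies to any scheme of known order p.

```python
# Richardson's extrapolation as a discretization-error estimate: for a
# scheme of order p, solutions at spacings h and 2h satisfy
#   D_h  ~ D + C*h**p,   D_2h ~ D + C*(2h)**p,
# so (D_h - D_2h)/(2**p - 1) estimates the remaining error D - D_h.

from math import sin, cos

def central_diff(f, x, h):
    """Second-order central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson_error(f, x, h, p=2):
    """Estimate the remaining error (f'(x) - D_h) of the fine result
    from approximations at spacings h and 2h."""
    d_fine = central_diff(f, x, h)
    d_coarse = central_diff(f, x, 2.0 * h)
    return (d_fine - d_coarse) / (2 ** p - 1)

x, h = 1.0, 1e-3
est = richardson_error(sin, x, h)            # estimated error of the h result
true_err = cos(x) - central_diff(sin, x, h)  # actual error, since (sin)' = cos
```

For smooth data the estimate matches the true error to higher order in h, which is what makes it usable as a refinement indicator: cells where the estimate exceeds a tolerance are the ones flagged for refinement.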
DMM assessments of attachment and adaptation: Procedures, validity and utility.
Farnfield, Steve; Hautamäki, Airi; Nørbech, Peder; Sahhar, Nicola
2010-07-01
This article gives a brief overview of the Dynamic-Maturational Model of attachment and adaptation (DMM; Crittenden, 2008) together with the various DMM assessments of attachment that have been developed for specific stages of development. Each assessment is discussed in terms of procedure, outcomes, validity, advantages and limitations, comparable procedures and areas for further research and validation. The aims are twofold: to provide an introduction to DMM theory and its application that underlie the articles in this issue of CCPP; and to provide researchers and clinicians with a guide to DMM assessments. PMID:20603420
A novel hyperbolic grid generation procedure with inherent adaptive dissipation
Tai, C.H.; Yin, S.L.; Soong, C.Y.
1995-01-01
This paper reports a novel hyperbolic grid-generation procedure with an inherent adaptive dissipation (HGAD), which is capable of alleviating the oscillation and overlapping of grid lines. In the present work upwind differencing is applied to discretize the hyperbolic system and, thereby, to develop the adaptive dissipation coefficient. Complex configurations with the features of geometric discontinuity, exceptional concavity and convexity are used as the test cases for comparison of the present HGAD procedure with the conventional hyperbolic and elliptic ones. The results reveal that the HGAD method is superior in orthogonality and smoothness of the grid system. In addition, the computational efficiency of the flow solver may be improved by using the present HGAD procedure. 15 refs., 8 figs.
Solution-adaptive finite element method in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1993-01-01
Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.
Multigrid solution strategies for adaptive meshing problems
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1995-01-01
This paper discusses the issues which arise when combining multigrid strategies with adaptive meshing techniques for solving steady-state problems on unstructured meshes. A basic strategy is described, and demonstrated by solving several inviscid and viscous flow cases. Potential inefficiencies in this basic strategy are exposed, and various alternate approaches are discussed, some of which are demonstrated with an example. Although each particular approach exhibits certain advantages, all methods have particular drawbacks, and the formulation of a completely optimal strategy is considered to be an open problem.
NIF Anti-Reflective Coating Solutions: Preparation, Procedures and Specifications
Suratwala, T; Carman, L; Thomas, I
2003-07-01
The following document contains a detailed description of the preparation procedures for the antireflective coating solutions used for NIF optics. This memo includes preparation procedures for the coating solutions (sections 2.0-4.0), specifications and vendor information for the raw materials and all equipment used (section 5.0), and QA specifications (section 6.0) and procedures (section 7.0) to determine quality and repeatability of all the coating solutions. There are five different coating solutions that will be used to coat NIF optics. These solutions are listed below: (1) Colloidal silica (3%) in ethanol (2) Colloidal silica (2%) in sec-butanol (3) Colloidal silica (9%) in sec-butanol (deammoniated) (4) HMDS treated silica (10%) in decane (5) GR650 (3.3%) in ethanol/sec-butanol. The names listed above are to be considered the official names for the solutions. They will be referred to by these names in the remainder of this document. Table 1 gives a summary of all the optics to be coated including: (1) the surface to be coated; (2) the type of solution to be used; (3) the coating method (meniscus, dip, or spin coating) to be used; (4) the type of coating (broadband, 1ω, 2ω, 3ω) to be made; (5) the number of optics to be coated; and (6) the type of post-processing required (if any). Table 2 gives a summary of the batch compositions and measured properties of all five of these solutions.
An Adaptive Ridge Procedure for L0 Regularization
Frommlet, Florian; Nuel, Grégory
2016-01-01
Penalized selection criteria like AIC or BIC are among the most popular methods for variable selection. Their theoretical properties have been studied intensively and are well understood, but making use of them in case of high-dimensional data is difficult due to the non-convex optimization problem induced by L0 penalties. In this paper we introduce an adaptive ridge procedure (AR), where iteratively weighted ridge problems are solved whose weights are updated in such a way that the procedure converges towards selection with L0 penalties. After introducing AR its specific shrinkage properties are studied in the particular case of orthogonal linear regression. Based on extensive simulations for the non-orthogonal case as well as for Poisson regression the performance of AR is studied and compared with SCAD and adaptive LASSO. Furthermore an efficient implementation of AR in the context of least-squares segmentation is presented. The paper ends with an illustrative example of applying AR to analyze GWAS data. PMID:26849123
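In the orthonormal-design case mentioned in the abstract, the weighted ridge problem decouples per coordinate, so the adaptive ridge (AR) iteration can be sketched in a few lines. The values of `lam` and `delta` below are illustrative choices, not the paper's settings.

```python
# Adaptive ridge (AR) iteration, orthonormal design: each coordinate solves a
# scalar weighted ridge problem, and the weight update w_j = 1/(beta_j^2 + d^2)
# drives the penalty lam*w_j*beta_j^2 toward an L0-style count -- roughly lam
# for surviving coefficients and 0 for coefficients shrunk to zero.

def adaptive_ridge(z, lam=0.5, delta=1e-5, iters=200):
    """Iteratively reweighted ridge on OLS coordinates z (orthonormal X)."""
    beta = list(z)                                   # start from OLS
    for _ in range(iters):
        w = [1.0 / (b * b + delta * delta) for b in beta]
        beta = [zj / (1.0 + lam * wj) for zj, wj in zip(z, w)]
    return beta

z = [3.0, 0.1, -2.0, 0.05]        # toy OLS coefficients: two signals, two noise
beta = adaptive_ridge(z)
```

On this toy input the two large coordinates survive (mildly shrunk) while the two small ones are driven to essentially zero, mimicking L0-style selection without solving a non-convex problem directly.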
A new procedure for dynamic adaption of three-dimensional unstructured grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1993-01-01
A new procedure is presented for the simultaneous coarsening and refinement of three-dimensional unstructured tetrahedral meshes. This algorithm allows for localized grid adaption that is used to capture aerodynamic flow features such as vortices and shock waves in helicopter flowfield simulations. The mesh-adaption algorithm is implemented in the C programming language and uses a data structure consisting of a series of dynamically-allocated linked lists. These lists allow the mesh connectivity to be rapidly reconstructed when individual mesh points are added and/or deleted. The algorithm allows the mesh to change in an anisotropic manner in order to efficiently resolve directional flow features. The procedure has been successfully implemented on a single processor of a Cray Y-MP computer. Two sample cases are presented involving three-dimensional transonic flow. Computed results show good agreement with conventional structured-grid solutions for the Euler equations.
A Procedure for Identifying Problems and Solutions in Desegregated Schools.
ERIC Educational Resources Information Center
Uhl, Norman P.
The purpose of this study was to investigate the usefulness of a procedure (a modification of the Delphi technique) for identifying racially-related problems and achieving some consensus on solutions to these problems among students, parents, and the school staff. The students who participated attended six classes which were selected to provide a…
Radiographic skills learning: procedure simulation using adaptive hypermedia.
Costaridou, L; Panayiotakis, G; Pallikarakis, N; Proimos, B
1996-10-01
The design and development of a simulation tool supporting the learning of radiographic skills is reported. The tool comprises textual, graphical, and iconic resources, organized according to a building-block, adaptive hypermedia approach, and is supported by an image base of radiographs. It offers interactive, user-controlled simulation of radiographic imaging procedures. The development is based on a commercially available environment (Toolbook 3.0, Asymetrix Corporation). The core of the system is an attributed precedence (priority) graph, which represents a task outline (concept and resource structure) that is dynamically adjusted to selected procedures. The user interface imitates a conventional radiography system, i.e. operating console, tube, table, patient and cassette. System parameters, such as patient positioning, focus-to-patient distance, magnification, field dimensions, tube voltage and mAs, are under user control. Their effects on image quality are presented by means of an image base acquired under controlled exposure conditions. Innovative use of hypermedia, computer-based learning, and simulation principles and technology in the development of this tool resulted in an enhanced interactive environment providing radiographic parameter control and visualization of parameter effects on image quality. PMID:9038530
Impact of space-time mesh adaptation on solute transport modeling in porous media
NASA Astrophysics Data System (ADS)
Esfandiar, Bahman; Porta, Giovanni; Perotto, Simona; Guadagnini, Alberto
2015-02-01
We implement a space-time grid adaptation procedure to efficiently improve the accuracy of numerical simulations of solute transport in porous media in the context of model parameter estimation. We focus on the Advection Dispersion Equation (ADE) for the interpretation of nonreactive transport experiments in laboratory-scale heterogeneous porous media. When compared to a numerical approximation based on a fixed space-time discretization, our approach is grounded on a joint automatic selection of the spatial grid and the time step to capture the main (space-time) system dynamics. Spatial mesh adaptation is driven by an anisotropic recovery-based error estimator which enables us to properly select the size, shape, and orientation of the mesh elements. Adaptation of the time step is performed through an ad hoc local reconstruction of the temporal derivative of the solution via a recovery-based approach. The impact of the proposed adaptation strategy on the ability to provide reliable estimates of the key parameters of an ADE model is assessed on the basis of experimental solute breakthrough data measured following tracer injection in a nonuniform porous system. Model calibration is performed in a Maximum Likelihood (ML) framework upon relying on the representation of the ADE solution through a generalized Polynomial Chaos Expansion (gPCE). Our results show that the proposed anisotropic space-time grid adaptation leads to ML parameter estimates and to model results of markedly improved quality when compared to classical inversion approaches based on a uniform space-time discretization.
NASA Technical Reports Server (NTRS)
Usab, William J., Jr.; Jiang, Yi-Tsann
1991-01-01
The objective of the present research is to develop a general solution adaptive scheme for the accurate prediction of inviscid quasi-three-dimensional flow in advanced compressor and turbine designs. The adaptive solution scheme combines an explicit finite-volume time-marching scheme for unstructured triangular meshes and an advancing front triangular mesh scheme with a remeshing procedure for adapting the mesh as the solution evolves. The unstructured flow solver has been tested on a series of two-dimensional airfoil configurations including a three-element analytic test case presented here. Mesh adapted quasi-three-dimensional Euler solutions are presented for three spanwise stations of the NASA rotor 67 transonic fan. Computed solutions are compared with available experimental data.
Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1996-01-01
A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: A linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures are briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
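The recursive-subdivision idea behind such Cartesian approaches can be sketched with a small quadtree. This is a minimal 2D illustration, not the paper's actual data structure (which is a binary tree with cut cells); the `Cell` class, the refinement criterion, and the level cap are all illustrative assumptions.

```python
class Cell:
    """Quadtree cell for a Cartesian, solution-adaptive grid (2D sketch)."""
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = None            # None => leaf cell

    def refine(self, needs_refinement, max_level=6):
        """Recursively subdivide wherever the supplied criterion asks."""
        if self.level < max_level and needs_refinement(self):
            h = self.size / 2
            self.children = [Cell(self.x + dx * h, self.y + dy * h, h,
                                  self.level + 1)
                             for dx in (0, 1) for dy in (0, 1)]
            for c in self.children:
                c.refine(needs_refinement, max_level)

    def leaves(self):
        """Collect the leaf cells, i.e. the active computational grid."""
        if self.children is None:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# demo: refine toward a hypothetical flow feature near x = 0.5
root = Cell(0.0, 0.0, 1.0)
root.refine(lambda c: abs(c.x + c.size / 2 - 0.5) < c.size)
```

The tree gives cell-to-cell connectivity for free (parent/child links), which is the property the abstract highlights.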
Adaptive Multigrid Solution of Stokes' Equation on CELL Processor
NASA Astrophysics Data System (ADS)
Elgersma, M. R.; Yuen, D. A.; Pratt, S. G.
2006-12-01
We are developing an adaptive multigrid solver for treating nonlinear elliptic partial-differential equations, needed for mantle convection problems. Since multigrid is being used for the complete solution, not just as a preconditioner, spatial difference operators are kept nearly diagonally dominant by increasing density of the coarsest grid in regions where coefficients have rapid spatial variation. At each time step, the unstructured coarse grid is refined in regions where coefficients associated with the differential operators or boundary conditions have rapid spatial variation, and coarsened in regions where there is more gradual spatial variation. For three-dimensional problems, the boundary is two-dimensional, and regions where coefficients change rapidly are often near two-dimensional surfaces, so the coarsest grid is only fine near two-dimensional subsets of the three-dimensional space. Coarse grid density drops off exponentially with distance from boundary surfaces and rapid-coefficient-change surfaces. This unstructured coarse grid results in the number of coarse grid voxels growing proportional to surface area, rather than proportional to volume. This results in significant computational savings for the coarse-grid solution. This coarse-grid solution is then refined for the fine-grid solution, and multigrid methods have memory usage and runtime proportional to the number of fine-grid voxels. This adaptive multigrid algorithm is being implemented on the CELL processor, where each chip has eight floating point processors and each processor operates on four floating point numbers each clock cycle. Both the adaptive grid algorithm and the multigrid solver have very efficient parallel implementations, in order to take advantage of the CELL processor architecture.
NASA Technical Reports Server (NTRS)
Rebstock, Rainer
1987-01-01
Numerical methods are developed for control of three dimensional adaptive test sections. The physical properties of the design problem occurring in the external field computation are analyzed, and a design procedure suited for solution of the problem is worked out. To do this, the desired wall shape is determined by stepwise modification of an initial contour. The necessary changes in geometry are determined with the aid of a panel procedure, or, with incident flow near the sonic range, with a transonic small perturbation (TSP) procedure. The designed wall shape, together with the wall deflections set during the tunnel run, are the input to a newly derived one-step formula which immediately yields the adapted wall contour. This is particularly important since the classical iterative adaptation scheme is shown to converge poorly for 3D flows. Experimental results obtained in the adaptive test section with eight flexible walls are presented to demonstrate the potential of the procedure. Finally, a method is described to minimize wall interference in 3D flows by adapting only the top and bottom wind tunnel walls.
ALPS: A framework for parallel adaptive PDE solution
NASA Astrophysics Data System (ADS)
Burstedde, Carsten; Burtscher, Martin; Ghattas, Omar; Stadler, Georg; Tu, Tiankai; Wilcox, Lucas C.
2009-07-01
Adaptive mesh refinement and coarsening (AMR) is essential for the numerical solution of partial differential equations (PDEs) that exhibit behavior over a wide range of length and time scales. Because of the complex dynamic data structures and communication patterns and frequent data exchange and redistribution, scaling dynamic AMR to tens of thousands of processors has long been considered a challenge. We are developing ALPS, a library for dynamic mesh adaptation of PDEs that is designed to scale to hundreds of thousands of compute cores. Our approach uses parallel forest-of-octree-based hexahedral finite element meshes and dynamic load balancing based on space-filling curves. ALPS supports arbitrary-order accurate continuous and discontinuous finite element/spectral element discretizations on general geometries. We present scalability and performance results for two applications from geophysics: seismic wave propagation and mantle convection.
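Space-filling-curve load balancing of the kind mentioned above typically orders octree cells by a Morton (Z-order) key and then splits the sorted list evenly across processes. The sketch below shows only the key computation; it is an illustrative example of the technique, not code from ALPS.

```python
def morton_key(ix, iy, iz, bits=10):
    """Interleave the bits of integer octree coordinates into a Morton
    (Z-order) key. Sorting cells by this key yields a space-filling-curve
    ordering with good spatial locality (illustrative sketch)."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)       # x bit -> position 3b
        key |= ((iy >> b) & 1) << (3 * b + 1)   # y bit -> position 3b+1
        key |= ((iz >> b) & 1) << (3 * b + 2)   # z bit -> position 3b+2
    return key
```

Because the key is a bijection on the integer lattice, cutting the sorted key range into equal pieces partitions the mesh without any neighbor lists, which is what makes the approach scale.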
NASA Astrophysics Data System (ADS)
Shi, Lei; Wang, Z. J.
2015-08-01
Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction formulation (CPR) to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Super-convergent functional and error estimate for the output with the CPR method are obtained. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost comparing with uniform grid refinement.
Parallel partitioning strategies for the adaptive solution of conservation laws
Devine, K.D.; Flaherty, J.E.; Loy, R.M.
1995-12-31
We describe and examine the performance of adaptive methods for solving hyperbolic systems of conservation laws on massively parallel computers. The differential system is approximated by a discontinuous Galerkin finite element method with a hierarchical Legendre piecewise polynomial basis for the spatial discretization. Fluxes at element boundaries are computed by solving an approximate Riemann problem; a projection limiter is applied to keep the average solution monotone; time discretization is performed by Runge-Kutta integration; and a p-refinement-based error estimate is used as an enrichment indicator. Adaptive order (p-) and mesh (h-) refinement algorithms are presented and demonstrated. Using an element-based dynamic load balancing algorithm called tiling and adaptive p-refinement, parallel efficiencies of over 60% are achieved on a 1024-processor nCUBE/2 hypercube. We also demonstrate a fast, tree-based parallel partitioning strategy for three-dimensional octree-structured meshes. This method produces partition quality comparable to recursive spectral bisection at a greatly reduced cost.
Adapting Assessment Procedures for Delivery via an Automated Format.
ERIC Educational Resources Information Center
Kelly, Karen L.; And Others
The Office of Personnel Management (OPM) decided to explore alternative examining procedures for positions covered by the Administrative Careers with America (ACWA) examination. One requirement for new procedures was that they be automated for use with OPM's recently developed Microcomputer Assisted Rating System (MARS), a highly efficient system…
Adaptive multigrid domain decomposition solutions for viscous interacting flows
NASA Technical Reports Server (NTRS)
Rubin, Stanley G.; Srinivasan, Kumar
1992-01-01
Several viscous incompressible flows with strong pressure interaction and/or axial flow reversal are considered with an adaptive multigrid domain decomposition procedure. Specific examples include the triple deck structure surrounding the trailing edge of a flat plate, the flow recirculation in a trough geometry, and the flow in a rearward facing step channel. For the latter case, there are multiple recirculation zones, of different character, for laminar and turbulent flow conditions. A pressure-based form of flux-vector splitting is applied to the Navier-Stokes equations, which are represented by an implicit lowest-order reduced Navier-Stokes (RNS) system and a purely diffusive, higher-order, deferred-corrector. A trapezoidal or box-like form of discretization insures that all mass conservation properties are satisfied at interfacial and outflow boundaries, even for this primitive-variable, non-staggered grid computation.
A Solution Adaptive Technique Using Tetrahedral Unstructured Grids
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2000-01-01
An adaptive unstructured grid refinement technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid. Simple functions of flow quantities are employed to detect dominant features of the flowfield. The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.
Cooperative solutions coupling a geometry engine and adaptive solver codes
NASA Technical Reports Server (NTRS)
Dickens, Thomas P.
1995-01-01
Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.
High-order solution-adaptive central essentially non-oscillatory (CENO) method for viscous flows
NASA Astrophysics Data System (ADS)
Ivan, Lucian; Groth, Clinton P. T.
2014-01-01
A high-order, central, essentially non-oscillatory (CENO), finite-volume scheme in combination with a block-based adaptive mesh refinement (AMR) algorithm is proposed for solution of the Navier-Stokes equations on body-fitted multi-block mesh. In contrast to other ENO schemes which require reconstruction on multiple stencils, the proposed CENO method uses a hybrid reconstruction approach based on a fixed central stencil. This feature is crucial to avoiding the complexities associated with multiple stencils of ENO schemes, providing high-order accuracy at relatively lower computational cost as well as being very well suited for extension to unstructured meshes. The spatial discretization of the inviscid (hyperbolic) fluxes combines an unlimited high-order k-exact least-squares reconstruction technique following from the optimal central stencil with a monotonicity-preserving, limited, linear, reconstruction algorithm. This hybrid reconstruction procedure retains the unlimited high-order k-exact reconstruction for cells in which the solution is fully resolved and reverts to the limited lower-order counterpart for cells with under-resolved/discontinuous solution content. Switching in the hybrid procedure is determined by a smoothness indicator. The high-order viscous (elliptic) fluxes are computed to the same order of accuracy as the hyperbolic fluxes based on a k-order accurate cell interface gradient derived from the unlimited, cell-centred, reconstruction. A somewhat novel h-refinement criterion based on the solution smoothness indicator is used to direct the steady and unsteady mesh adaptation. The proposed numerical procedure is thoroughly analyzed for advection-diffusion problems characterized by the full range of Péclet numbers, and its predictive capabilities are also demonstrated for several inviscid and laminar flows. The ability of the scheme to accurately represent solutions with smooth extrema and yet robustly handle under-resolved and/or non
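The switching idea behind such hybrid reconstructions can be illustrated in one dimension: use the unlimited (high-order) slope where a smoothness indicator says the solution is resolved, and revert to a limited slope otherwise. This is only a schematic analogue of the CENO strategy; the indicator, the tolerance, and the minmod fallback below are illustrative choices, not the paper's formulation.

```python
import numpy as np

def hybrid_slopes(u, dx, tol=0.5):
    """1D sketch of a hybrid reconstruction: unlimited central slopes in
    smooth cells, limited (minmod) slopes where an indicator flags
    under-resolved or discontinuous content."""
    fwd = (u[2:] - u[1:-1]) / dx            # one-sided slopes
    bwd = (u[1:-1] - u[:-2]) / dx
    central = (u[2:] - u[:-2]) / (2 * dx)   # unlimited reconstruction
    minmod = np.where(fwd * bwd > 0.0,
                      np.sign(fwd) * np.minimum(np.abs(fwd), np.abs(bwd)),
                      0.0)
    # mark a cell smooth when its one-sided slopes roughly agree
    smooth = np.abs(fwd - bwd) <= tol * (np.abs(fwd) + np.abs(bwd) + 1e-12)
    return np.where(smooth, central, minmod)

# demo: linear data is "fully resolved", so the unlimited slope is kept
x = np.linspace(0.0, 1.0, 11)
slopes = hybrid_slopes(2.0 * x, x[1] - x[0])
```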
Differential Effects of Two Spelling Procedures on Acquisition, Maintenance and Adaption to Reading
ERIC Educational Resources Information Center
Cates, Gary L.; Dunne, Megan; Erkfritz, Karyn N.; Kivisto, Aaron; Lee, Nicole; Wierzbicki, Jennifer
2007-01-01
An alternating treatments design was used to assess the effects of a constant time delay (CTD) procedure and a cover-copy-compare (CCC) procedure on three students' acquisition, subsequent maintenance, and adaptation (i.e., application) of acquired spelling words to reading passages. Students were randomly presented two trials of word lists from…
Adaptive correction procedure for TVL1 image deblurring under impulse noise
NASA Astrophysics Data System (ADS)
Bai, Minru; Zhang, Xiongjun; Shao, Qianqian
2016-08-01
For the problem of image restoration of observed images corrupted by blur and impulse noise, the widely used TVL1 model may deviate from both the data-acquisition model and the prior model, especially for high noise levels. In order to seek a solution of high recovery quality beyond the reach of the TVL1 model, we propose an adaptive correction procedure for TVL1 image deblurring under impulse noise. Then, a proximal alternating direction method of multipliers (ADMM) is presented to solve the corrected TVL1 model and its convergence is also established under very mild conditions. It is verified by numerical experiments that our proposed approach outperforms the TVL1 model in terms of signal-to-noise ratio (SNR) values and visual quality, especially for high noise levels: it can handle salt-and-pepper noise as high as 90% and random-valued noise as high as 70%. In addition, a comparison with a state-of-the-art method, the two-phase method, demonstrates the superiority of the proposed approach.
A mineral separation procedure using hot Clerici solution
Rosenblum, Sam
1974-01-01
Careful boiling of Clerici solution in a Pyrex test tube in an oil bath is used to float minerals with densities up to 5.0 in order to obtain purified concentrates of monazite (density 5.1) for analysis. The "sink" and "float" fractions are trapped in solidified Clerici salts on rapid chilling, and the fractions are washed into separate filter papers with warm water. The hazardous nature of Clerici solution requires unusual care in handling.
Element-by-element Solution Procedures for Nonlinear Structural Analysis
NASA Technical Reports Server (NTRS)
Hughes, T. J. R.; Winget, J. M.; Levit, I.
1984-01-01
Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in nonlinear structural mechanics. Architectural and data base advantages of the present algorithms over traditional direct elimination schemes are noted. Results of calculations suggest considerable potential for the methods described.
A Procedure for Controlling General Test Overlap in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Chen, Shu-Ying
2010-01-01
To date, exposure control procedures that are designed to control test overlap in computerized adaptive tests (CATs) are based on the assumption of item sharing between pairs of examinees. However, in practice, examinees may obtain test information from more than one previous test taker. This larger scope of information sharing needs to be…
Solution of Reactive Compressible Flows Using an Adaptive Wavelet Method
NASA Astrophysics Data System (ADS)
Zikoski, Zachary; Paolucci, Samuel; Powers, Joseph
2008-11-01
This work presents numerical simulations of reactive compressible flow, including detailed multicomponent transport, using an adaptive wavelet algorithm. The algorithm allows for dynamic grid adaptation which enhances our ability to fully resolve all physically relevant scales. The thermodynamic properties, equation of state, and multicomponent transport properties are provided by CHEMKIN and TRANSPORT libraries. Results for viscous detonation in a H2:O2:Ar mixture, and other problems in multiple dimensions, are included.
A structured multi-block solution-adaptive mesh algorithm with mesh quality assessment
NASA Technical Reports Server (NTRS)
Ingram, Clint L.; Laflin, Kelly R.; Mcrae, D. Scott
1995-01-01
The dynamic solution adaptive grid algorithm, DSAGA3D, is extended to automatically adapt 2-D structured multi-block grids, including adaption of the block boundaries. The extension is general, requiring only input data concerning block structure, connectivity, and boundary conditions. Imbedded grid singular points are permitted, but must be prevented from moving in space. Solutions for workshop cases 1 and 2 are obtained on multi-block grids and illustrate both increased resolution of and alignment with the solution. A mesh quality assessment criterion is proposed to determine how well a given mesh resolves and aligns with the solution obtained upon it. The criterion is used to evaluate the grid quality for solutions of workshop case 6 obtained on both static and dynamically adapted grids. The results indicate that this criterion shows promise as a means of evaluating resolution.
Measurement of Actinides in Molybdenum-99 Solution Analytical Procedure
Soderquist, Chuck Z.; Weaver, Jamie L.
2015-11-01
This document is a companion report to a previous report, PNNL 24519, Measurement of Actinides in Molybdenum-99 Solution, A Brief Review of the Literature, August 2015. In this companion report, we report a fast, accurate, newly developed analytical method for measurement of trace alpha-emitting actinide elements in commercial high-activity molybdenum-99 solution. Molybdenum-99 is widely used to produce Tc-99m for medical imaging. Because it is used as a radiopharmaceutical, its purity must be proven to be extremely high, particularly for the alpha-emitting actinides. The sample of Mo-99 solution is measured into a vessel (such as a polyethylene centrifuge tube) and acidified with dilute nitric acid. A gadolinium carrier is added (50 µg). Tracers and spikes are added as necessary. Then the solution is made strongly basic with ammonium hydroxide, which causes the gadolinium carrier to precipitate as hydrous Gd(OH)3. The precipitate of Gd(OH)3 carries all of the actinide elements. The suspension of gadolinium hydroxide is then passed through a membrane filter to make a counting mount suitable for direct alpha spectrometry. The high-activity Mo-99 and Tc-99m pass through the membrane filter and are separated from the alpha emitters. The gadolinium hydroxide, carrying any trace actinide elements that might be present in the sample, forms a thin, uniform cake on the surface of the membrane filter. The filter cake is first washed with dilute ammonium hydroxide to push the last traces of molybdate through, then with water. The filter is then mounted on a stainless steel counting disk. Finally, the alpha-emitting actinide elements are measured by alpha spectrometry.
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
The use of solution adaptive grids in solving partial differential equations
NASA Technical Reports Server (NTRS)
Anderson, D. A.; Rai, M. M.
1982-01-01
The grid point distribution used in solving a partial differential equation using a numerical method has a substantial influence on the quality of the solution. An adaptive grid which adjusts as the solution changes provides the best results when the number of grid points available for use during the calculation is fixed. Basic concepts used in generating and applying adaptive grids are reviewed in this paper, and examples illustrating applications of these concepts are presented.
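A basic concept of the kind the review covers is equidistribution: redistribute a fixed number of grid points so that each interval carries an equal share of a monitor function tied to the solution. The sketch below uses an arc-length monitor in 1D; the monitor choice and the interpolation-based inversion are illustrative assumptions, not a method from the paper.

```python
import numpy as np

def equidistribute(x, u, n_new=None):
    """Redistribute 1D grid points so each interval holds an equal share
    of the arc-length monitor M = sqrt(1 + u_x^2). Illustrative sketch
    of solution-adaptive point placement."""
    n_new = n_new or len(x)
    ux = np.gradient(u, x)
    M = np.sqrt(1.0 + ux**2)
    # cumulative integral of the monitor (trapezoid rule)
    s = np.concatenate([[0.0],
                        np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(x))])
    # invert: place the new points at equal increments of s
    targets = np.linspace(0.0, s[-1], n_new)
    return np.interp(targets, s, x)

# demo: points cluster near the steep front of tanh(20x)
x = np.linspace(-1.0, 1.0, 41)
x_new = equidistribute(x, np.tanh(20 * x))
```

Where the solution varies rapidly the monitor is large, so the same number of points ends up concentrated there, which is exactly the behavior the abstract attributes to adaptive grids.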
SAXO, the eXtreme Adaptive Optics System of SPHERE: overview and calibration procedure
NASA Astrophysics Data System (ADS)
Sauvage, J.-F.; Fusco, T.; Petit, C.; Meimon, S.; Fedrigo, E.; Suarez Valles, M.; Kasper, M.; Hubin, N.; Beuzit, J.-L.; Charton, J.; Costille, A.; Rabou, P., .; Mouillet, D.; Baudoz, P.; Buey, T.; Sevin, A.; Wildi, F.; Dohlen, K.
2010-07-01
The direct imaging of exoplanets is a challenging goal of today's astronomy. The light transmitted by an exoplanet's atmosphere is of great interest, as it may carry signatures of life. SPHERE is a second-generation instrument for the VLT, dedicated to exoplanet imaging, detection, and characterisation. SPHERE is a global project of a European consortium of 11 institutes from 5 countries. We present here the state of the art of the AIT (assembly, integration, and testing) of the adaptive optics part of the instrument. In addition, we present fine calibration procedures dedicated to eXtreme Adaptive Optics systems. First we emphasize vibration and turbulence identification for optimization of the control law. Then we describe a procedure able to measure and compensate for NCPA (non-common-path aberrations) with a coronagraphic system.
NASA Technical Reports Server (NTRS)
Gossard, Myron L
1952-01-01
An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.
A Two Stage Solution Procedure for Production Planning System with Advance Demand Information
NASA Astrophysics Data System (ADS)
Ueno, Nobuyuki; Kadomoto, Kiyotaka; Hasuike, Takashi; Okuhara, Koji
We model the ‘Naiji’ system, a cooperation technique between a manufacturer and suppliers that is unique to Japan. We propose a two-stage solution procedure for a production planning problem with advance demand information, which is called ‘Naiji’. Under demand uncertainty, this model is formulated as a nonlinear stochastic programming problem which minimizes the sum of production cost and inventory holding cost subject to a probabilistic constraint and some linear production constraints. Exploiting the convexity and the special structure of the correlation matrix in the problem, where inventories for different periods are not independent, we propose a two-stage solution procedure comprising the Mass Customization Production Planning & Management System (MCPS) and Variable Mesh Neighborhood Search (VMNS) based on meta-heuristics. It is shown that the proposed solution procedure obtains a near-optimal solution efficiently and is practical for producing a good master production schedule for the suppliers.
A block-corrected subdomain solution procedure for recirculating flow calculations
NASA Technical Reports Server (NTRS)
Braaten, M. E.; Patankar, S. V.
1989-01-01
This paper describes a robust and efficient subdomain solution procedure for two-dimensional recirculating flows. The solution domain is divided into a number of overlapping subdomains, and a direct fully coupled solution is obtained for each subdomain using a sparse matrix form of LU decomposition. An effective parabolic block correction procedure, which calculates global corrections to the tentative solution by a marching technique similar to that used for boundary layer flows, is used to accelerate the convergence of the basic procedure. The use of effective block correction is found to be essential for the success of the subdomain approach on strongly recirculating flows. In a number of laminar two-dimensional flows, the new block-corrected method performed extremely well, rivaling the best direct methods in execution time, while requiring substantially less computer storage. The new method proved to be from two to ten times faster than conventional iterative methods, while requiring only a moderate increase in storage.
An adaptive nonlinear solution scheme for reservoir simulation
Lett, G.S.
1996-12-31
Numerical reservoir simulation involves solving large, nonlinear systems of PDEs with strongly discontinuous coefficients. Because of the large demands on computer memory and CPU, most users must perform simulations on very coarse grids. The average properties of the fluids and rocks must be estimated on these grids. These coarse-grid "effective" properties are costly to determine, and risky to use, since their optimal values depend on the fluid flow being simulated. Thus, they must be found by trial-and-error techniques, and the coarser the grid, the poorer the results. This paper describes a numerical reservoir simulator which accepts fine-scale properties and automatically generates multiple levels of coarse-grid rock and fluid properties. The fine-grid properties and the coarse-grid simulation results are used to estimate discretization errors with multilevel error expansions. These expansions are local, and identify areas requiring local grid refinement. These refinements are added adaptively by the simulator, and the resulting composite grid equations are solved by a nonlinear Fast Adaptive Composite (FAC) Grid method, with a damped Newton algorithm being used on each local grid. The nonsymmetric linear systems of equations resulting from Newton's method are in turn solved by a preconditioned Conjugate Gradients-like algorithm. The scheme is demonstrated by performing fine- and coarse-grid simulations of several multiphase reservoirs from around the world.
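A damped Newton iteration of the kind used on each local grid can be sketched as follows. The backtracking rule, tolerances, and the dense Jacobian solve (standing in for the preconditioned Conjugate Gradients-like solver) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def damped_newton(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Damped Newton iteration: halve the step until the residual norm decreases.

    `residual` and `jacobian` are callables supplied by the discretization; the
    simple backtracking rule here stands in for an unspecified damping strategy.
    """
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        du = np.linalg.solve(jacobian(u), -r)   # in practice: a preconditioned CG-like solver
        lam = 1.0
        while lam > 1e-4 and np.linalg.norm(residual(u + lam * du)) >= np.linalg.norm(r):
            lam *= 0.5                          # damp the step if it fails to reduce the residual
        u = u + lam * du
    return u

# Tiny illustration: solve u^2 - 2 = 0 component-wise from two starting points.
res = lambda u: u**2 - 2.0
jac = lambda u: np.diag(2.0 * u)
root = damped_newton(res, jac, np.array([1.0, 3.0]))
```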
Prism Adaptation and Aftereffect: Specifying the Properties of a Procedural Memory System
Fernández-Ruiz, Juan; Díaz, Rosalinda
1999-01-01
Prism adaptation, a form of procedural learning, is a phenomenon in which the motor system adapts to new visuospatial coordinates imposed by prisms that displace the visual field. Once the prisms are withdrawn, the degree and strength of the adaptation can be measured by the spatial deviation of the motor actions in the direction opposite to the visual displacement imposed by the prisms, a phenomenon known as aftereffect. This study was designed to define the variables that affect the acquisition and retention of the aftereffect. Subjects were required to throw balls to a target in front of them before, during, and after lateral displacement of the visual field with prismatic spectacles. The diopters of the prisms and the number of throws were varied among different groups of subjects. The results show that the adaptation process is dependent on the number of interactions between the visual and motor system, and not on the time spent wearing the prisms. The results also show that the magnitude of the aftereffect is highly correlated with the magnitude of the adaptation, regardless of the diopters of the prisms or the number of throws. Finally, the results suggest that persistence of the aftereffect depends on the number of throws after the adaptation is complete. On the basis of these results, we propose that the system underlying this kind of learning stores at least two different parameters, the contents (measured as the magnitude of displacement) and the persistence (measured as the number of throws to return to the baseline) of the learned information. PMID:10355523
Crane, N K; Parsons, I D; Hjelmstad, K D
2002-03-21
Adaptive mesh refinement selectively subdivides the elements of a coarse user supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundreds of processors.
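A geometric multigrid V-cycle of the kind described can be sketched on a 1D Poisson model problem. The weighted-Jacobi smoother, linear interpolation, and Galerkin coarse operators below are standard textbook choices, not the paper's parallel elasticity discretization:

```python
import numpy as np

def poisson_1d(n):
    """n interior points of -u'' on a uniform grid with Dirichlet boundaries."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def prolongation(nc):
    """Linear interpolation from nc coarse to 2*nc + 1 fine interior points."""
    P = np.zeros((2 * nc + 1, nc))
    for j in range(nc):
        P[2 * j + 1, j] = 1.0
        P[2 * j, j] = 0.5
        P[2 * j + 2, j] = 0.5
    return P

def v_cycle(A, b, u, depth, nu=3):
    """V-cycle with weighted-Jacobi smoothing and Galerkin coarse-grid operators."""
    D = np.diag(A)
    for _ in range(nu):                          # pre-smoothing
        u = u + 0.8 * (b - A @ u) / D
    if depth == 0 or A.shape[0] < 3:
        return np.linalg.solve(A, b)             # coarsest level: direct solve
    P = prolongation((A.shape[0] - 1) // 2)
    R = 0.5 * P.T                                # full-weighting restriction
    ec = v_cycle(R @ A @ P, R @ (b - A @ u), np.zeros(P.shape[1]), depth - 1, nu)
    u = u + P @ ec                               # coarse-grid correction
    for _ in range(nu):                          # post-smoothing
        u = u + 0.8 * (b - A @ u) / D
    return u

n = 31
A, b = poisson_1d(n), np.ones(n)
u = np.zeros(n)
for _ in range(10):                              # a few V-cycles suffice to converge
    u = v_cycle(A, b, u, depth=3)
```

In the adaptive setting of the paper, the hierarchy of operators would come from the successively refined meshes rather than from uniform coarsening of a fixed grid.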
A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania
NASA Astrophysics Data System (ADS)
Klima, K.; Abrahams, L.; Bradford, K.; Hegglin, M.
2015-12-01
With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/ Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare datasets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.
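The simplest of the index constructions such studies compare, a sum of standardized indicators per census tract, can be sketched as follows; the tract values and the three indicators are hypothetical, and a full factor analysis would weight the indicators by their loadings:

```python
import numpy as np

# Hypothetical census indicators per tract (rows): % elderly, % in poverty, % without AC.
X = np.array([[0.12, 0.20, 0.35],
              [0.25, 0.10, 0.50],
              [0.08, 0.30, 0.20],
              [0.30, 0.25, 0.60]])

# Standardize each indicator to a z-score, then sum across indicators:
# an unweighted stand-in for a factor-analysis-based heat vulnerability index.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
hvi = Z.sum(axis=1)
most_vulnerable = int(np.argmax(hvi))   # tract where a cooling center helps most
```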
A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania.
Bradford, Kathryn; Abrahams, Leslie; Hegglin, Miriam; Klima, Kelly
2015-10-01
With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare data sets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought. PMID:26333158
Hierarchical Adaptive Solution of Radiation Transport Problems on Unstructured Grids
Dr. Cassiano R. E de Oliveira
2008-06-30
Computational radiation transport has steadily gained acceptance in the last decade as a viable modeling tool due to the rapid advancements in computer software and hardware technologies. It can be applied to the analysis of a wide range of problems which arise in nuclear reactor physics, medical physics, atmospheric physics, astrophysics and other areas of engineering physics. However, radiation transport is an extremely challenging computational problem since the governing equation is seven-dimensional (3 in space, 2 in direction, 1 in energy, and 1 in time) with a high degree of coupling between these variables. If one is not careful, this relatively large number of independent variables, when discretized, can potentially lead to sets of linear equations of intractable size. Though parallel computing has allowed the solution of very large problems, available computational resources will always be finite because ever more sophisticated multiphysics models are being demanded by industry. There is thus a pressing requirement to optimize the discretizations so as to minimize the effort and maximize the accuracy.
Gandhi, P A; Sawant, A D; Wilson, L A; Ahearn, D G
1993-01-01
Serratia marcescens (11 of 12 strains) demonstrated an ability to grow in certain chlorhexidine-based disinfecting solutions recommended for rigid gas-permeable contact lenses. For a representative strain, cells that were grown in nutrient-rich medium, washed, and inoculated into disinfecting solution went into a nonrecoverable phase within 24 h. However, after 4 days, cells that had the ability to grow in the disinfectant (doubling time, g = 5.7 h) emerged. Solutions supporting growth of S. marcescens were filter sterilized. These solutions, even after removal of the cells, showed bactericidal activity against Pseudomonas aeruginosa and a biphasic survival curve when rechallenged with S. marcescens. Adaptation to chlorhexidine by S. marcescens was not observed in solutions formulated with borate ions. For chlorhexidine-adapted cells, the MIC of chlorhexidine in saline was eightfold higher than that for unadapted cells. Cells adapted to chlorhexidine showed alterations in the proteins of the outer membrane and increased adherence to polyethylene. Cells adapted to chlorhexidine persisted or grew in several other contact lens solutions with different antimicrobial agents, including benzalkonium chloride. PMID:8439148
General tuning procedure for the nonlinear balance-based adaptive controller
NASA Astrophysics Data System (ADS)
Stebel, Krzysztof; Czeczot, Jacek; Laszczyk, Piotr
2014-01-01
This paper presents an intuitive, ready-to-use, general procedure for tuning the balance-based adaptive controller (B-BAC), based on its equivalence to a controller with a PI term and additional improvements, shown for a linearised approximation of the dynamics of the nonlinear controlled process. Simple formulas are suggested to calculate the B-BAC tunings from the PI tunings determined by any PI tuning procedure chosen according to the desired closed-loop performance. This methodology is verified by comparing the closed-loop performance of the equivalently tuned B-BAC and PI/PI+feedforward controllers under the same scenario, both in simulation and in practical experiments.
An adaptive procedure for the numerical parameters of a particle simulation
NASA Astrophysics Data System (ADS)
Galitzine, Cyril; Boyd, Iain D.
2015-01-01
In this article, a computational procedure that automatically determines the optimum time step, cell weight and species weights for steady-state multi-species DSMC (direct simulation Monte Carlo) simulations is presented. The time step is required to satisfy the basic requirements of the DSMC method while the weight and relative weights fields are chosen so as to obtain a user-specified average number of particles in all cells of the domain. The procedure allows the conduct of efficient DSMC simulations with minimal user input and is integrable into existing DSMC codes. The adaptive method is used to simulate a test case consisting of two counterflowing jets at a Knudsen number of 0.015. Large accuracy gains for sampled number densities and velocities over a standard simulation approach for the same number of particles are observed.
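The cell-weight selection can be sketched as follows, assuming the common DSMC convention that a cell's expected particle count equals number density times cell volume divided by the particle weight; the function name and the target of 50 particles per cell are illustrative, not the paper's values:

```python
import numpy as np

def adapt_cell_weights(number_density, cell_volume, n_target=50.0):
    """Choose per-cell particle weights so each cell holds ~n_target simulated particles.

    weight = real molecules represented by one simulated particle, so the
    expected simulated particle count per cell is n * V / weight.
    """
    number_density = np.asarray(number_density, dtype=float)
    cell_volume = np.asarray(cell_volume, dtype=float)
    return number_density * cell_volume / n_target

n = np.array([1e20, 4e20, 2.5e21])      # sampled number density, molecules / m^3
V = np.array([1e-9, 1e-9, 1e-9])        # cell volumes, m^3
w = adapt_cell_weights(n, V)
particles = n * V / w                   # expected simulated particles per cell
```

Equalizing the particle count across cells is what lets dense and rarefied regions of the counterflowing-jet case be sampled with comparable statistical quality.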
Application of a solution adaptive grid scheme, SAGE, to complex three-dimensional flows
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1991-01-01
A new three-dimensional (3D) adaptive grid code based on the algebraic, solution-adaptive scheme of Nakahashi and Deiwert is developed and applied to a variety of problems. The new computer code, SAGE, is an extension of the same-named two-dimensional (2D) solution-adaptive program that has already proven to be a powerful tool in computational fluid dynamics applications. The new code has been applied to a range of complex three-dimensional, supersonic and hypersonic flows. Examples discussed are a tandem-slot fuel injector, the hypersonic forebody of the Aeroassist Flight Experiment (AFE), the 3D base flow behind the AFE, the supersonic flow around a 3D swept ramp and a generic, hypersonic, 3D nozzle-plume flow. The associated adapted grids and the solution enhancements resulting from the grid adaption are presented for these cases. Three-dimensional adaption is more complex than its 2D counterpart, and the complexities unique to the 3D problems are discussed.
NASA Technical Reports Server (NTRS)
Wang, Gang
2003-01-01
A multigrid solution procedure for the numerical simulation of turbulent flows in complex geometries has been developed. A Full Multigrid-Full Approximation Scheme (FMG-FAS) is incorporated into the continuity and momentum equations, while the scalars are decoupled from the multigrid V-cycle. A standard k-epsilon turbulence model with wall functions has been used to close the governing equations. The numerical solution is accomplished by solving for the Cartesian velocity components either with a traditional grid staggering arrangement or with a multiple velocity grid staggering arrangement. The two solution methodologies are evaluated for relative computational efficiency. The solution procedure with traditional staggering arrangement is subsequently applied to calculate the flow and temperature fields around a model Short Take-off and Vertical Landing (STOVL) aircraft hovering in ground proximity.
An adaptive gating approach for x-ray dose reduction during cardiac interventional procedures
Abdel-Malek, A.; Yassa, F.; Bloomer, J.
1994-03-01
The increasing number of cardiac interventional procedures has resulted in a tremendous increase in the absorbed x-ray dose by radiologists as well as patients. A new method is presented for x-ray dose reduction which utilizes adaptive tube pulse-rate scheduling in pulsed fluoroscopic systems. In the proposed system, pulse-rate scheduling depends on the heart muscle activity phase determined through continuous guided segmentation of the patient's electrocardiogram (ECG). Displaying images generated at the proposed adaptive nonuniform rate is visually unacceptable; therefore, a frame-filling approach is devised to ensure a 30 frame/sec display rate. The authors adopted two approaches for the frame-filling portion of the system depending on the imaging mode used in the procedure. During cine-mode imaging (high x-ray dose), collected image frame-to-frame pixel motion is estimated using a pel-recursive algorithm followed by motion-based pixel interpolation to estimate the frames necessary to increase the rate to 30 frames/sec. The other frame-filling approach is adopted during fluoro-mode imaging (low x-ray dose), characterized by low signal-to-noise ratio images. This approach consists of simply holding the last collected frame for as many frames as necessary to maintain the real-time display rate.
Global solution for a kinetic chemotaxis model with internal dynamics and its fast adaptation limit
NASA Astrophysics Data System (ADS)
Liao, Jie
2015-12-01
A nonlinear kinetic chemotaxis model with internal dynamics incorporating signal transduction and adaptation is considered. This paper is concerned with: (i) the global solution for this model, and (ii) its fast adaptation limit to an Othmer-Dunbar-Alt type model. This limit gives some insight into the molecular origin of the chemotaxis behaviour. First, by using the Schauder fixed point theorem, the global existence of a weak solution is proved based on detailed a priori estimates, under quite general assumptions. However, the Schauder theorem does not provide uniqueness, so additional analysis is developed to establish it. Next, the fast adaptation limit of this model is derived by extracting a weakly convergent subsequence in measure space. For this limit, the first difficulty is to show the concentration effect on the internal state. Another difficulty is the strong compactness argument on the chemical potential, which is essential for passing the nonlinear kinetic equation to the weak limit.
Transonic flow solutions using a composite velocity procedure for potential, Euler and RNS equations
NASA Technical Reports Server (NTRS)
Gordnier, R. E.; Rubin, S. G.
1986-01-01
Solutions for transonic viscous and inviscid flows using a composite velocity procedure are presented. The velocity components of the compressible flow equations are written in terms of a multiplicative composite consisting of a viscous or rotational velocity and an inviscid, irrotational, potential-like function. This provides for an efficient solution procedure that is locally representative of both asymptotic inviscid and boundary layer theories. A modified conservative form of the axial momentum equation that is required to obtain rotational solutions in the inviscid region is presented and a combined conservation/nonconservation form is applied for evaluation of the reduced Navier-Stokes (RNS), Euler and potential equations. A variety of results is presented and the effects of the approximations on entropy production, shock capturing, and viscous interaction are discussed.
Transonic flow solutions using a composite velocity procedure for potential, Euler and RNS equations
NASA Technical Reports Server (NTRS)
Gordnier, R. E.; Rubin, S. G.
1989-01-01
Solutions for transonic viscous and inviscid flows using a composite velocity procedure are presented. The velocity components of the compressible flow equations are written in terms of a multiplicative composite consisting of a viscous or rotational velocity and an inviscid, irrotational, potential-like function. This provides for an efficient solution procedure that is locally representative of both asymptotic inviscid and boundary layer theories. A modified conservative form of the axial momentum equation that is required to obtain rotational solutions in the inviscid region is presented and a combined conservation/nonconservation form is applied for evaluation of the reduced Navier-Stokes (RNS), Euler and potential equations. A variety of results is presented and the effects of the approximations on entropy production, shock capturing, and viscous interaction are discussed.
Construction and solution of an adaptive image-restoration model for removing blur and mixed noise
NASA Astrophysics Data System (ADS)
Wang, Youquan; Cui, Lihong; Cen, Yigang; Sun, Jianjun
2016-03-01
We establish a practical regularized least-squares model with adaptive regularization for dealing with blur and mixed noise in images. This model has some advantages, such as good adaptability for edge restoration and noise suppression due to the application of a priori spatial information obtained from a polluted image. We further focus on finding an important feature of image restoration using an adaptive restoration model with different regularization parameters in polluted images. A more important observation is that the gradient of an image varies regularly from one regularization parameter to another under certain conditions. Then, a modified graduated nonconvexity approach combined with a median filter version of a spatial information indicator is proposed to seek the solution of our adaptive image-restoration model by applying variable splitting and weighted penalty techniques. Numerical experiments show that the method is robust and effective for dealing with various blur and mixed noise levels in images.
Evaluation of discretization procedures for transition elements in adaptive mesh refinement
NASA Technical Reports Server (NTRS)
Park, K. C.; Levit, Itzak; Stanley, Gary M.
1991-01-01
Three transition interpolation schemes for use in h- or r-refinement have been analyzed in terms of accuracy, ease of implementation, and extendability. They include blending-function interpolation, displacement averaging, and strain matching at discrete points along the transition edge lines. The results suggest that the choice of matching depends strongly on the element formulation (viz., displacement or assumed strain, etc.) and the mesh refinement criteria employed, and to a lesser extent on the choice of computer architecture (serial vs. parallel) and the equation solution procedures. A recommended pairing of some of the elements with the choice factors is suggested.
Dean, Brian K; Wright, Cameron H G; Barrett, Steven F
2011-01-01
Two previous papers, presented at RMBS in 2009 and 2010, introduced a fly-inspired vision sensor that could adapt to indoor light conditions by mimicking the light adaptation process of the common housefly, Musca domestica. A new system has been designed that should allow the sensor to adapt to outdoor light conditions, which will enable the sensor's use in applications such as unmanned aerial vehicle (UAV) obstacle avoidance, UAV landing support, target tracking, wheelchair guidance, large structure monitoring, and many other outdoor applications. A sensor of this type is especially suited for these applications due to features of hyperacuity (an ability to achieve movement resolution beyond the theoretical limit), extreme sensitivity to motion, and (through software simulation) image edge extraction, motion detection, and orientation and location of a line. Many of these qualities are beyond the ability of traditional computer vision sensors such as charge-coupled device (CCD) arrays. To achieve outdoor light adaptation, a variety of design obstacles have to be overcome, such as infrared interference, dynamic range expansion, and light saturation. The newly designed system overcomes the latter two design obstacles by mimicking the fly's solution of logarithmic compression followed by removal of the average background light intensity. This paper presents the new design and the preliminary tests that were conducted to determine its effectiveness. PMID:21525612
Combined LAURA-UPS solution procedure for chemically-reacting flows. M.S. Thesis
NASA Technical Reports Server (NTRS)
Wood, William A.
1994-01-01
A new procedure seeks to combine the thin-layer Navier-Stokes solver LAURA with the parabolized Navier-Stokes solver UPS for the aerothermodynamic solution of chemically-reacting air flowfields. The interface protocol is presented and the method is applied to two slender, blunted shapes. Both axisymmetric and three dimensional solutions are included with surface pressure and heat transfer comparisons between the present method and previously published results. The case of Mach 25 flow over an axisymmetric six degree sphere-cone with a noncatalytic wall is considered to 100 nose radii. A stability bound on the marching step size was observed with this case and is attributed to chemistry effects resulting from the noncatalytic wall boundary condition. A second case with Mach 28 flow over a sphere-cone-cylinder-flare configuration is computed at both two and five degree angles of attack with a fully-catalytic wall. Surface pressures are seen to be within five percent with the present method compared to the baseline LAURA solution and heat transfers are within 10 percent. The effect of grid resolution is investigated and the nonequilibrium results are compared with a perfect gas solution, showing that while the surface pressure is relatively unchanged by the inclusion of reacting chemistry the nonequilibrium heating is 25 percent higher. The procedure demonstrates significant, order of magnitude reductions in solution time and required memory for the three dimensional case over an all thin-layer Navier-Stokes solution.
An edge-based solution-adaptive method applied to the AIRPLANE code
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.
1995-01-01
Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.
qPR: An adaptive partial-report procedure based on Bayesian inference
Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin
2016-01-01
Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6–8 cue delays or 600–800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations. PMID:27580045
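The Bayesian adaptive loop described, selecting the cue delay that maximizes expected information gain and then updating the posterior over the decay parameters, can be sketched on a discretized parameter grid. The assumed decay form, parameter ranges, and grid sizes below are illustrative, not the paper's exact settings:

```python
import numpy as np

def decay(t, a, b, tau):
    """Iconic-memory decay: probability correct vs. cue delay (assumed exponential form)."""
    return b + a * np.exp(-t / tau)

# Discretized parameter grid with a uniform prior (a: span, b: floor, tau: decay constant).
A, B, TAU = np.meshgrid(np.linspace(0.1, 0.5, 12),
                        np.linspace(0.2, 0.4, 12),
                        np.linspace(0.05, 1.0, 12), indexing="ij")
prior = np.ones(A.shape)
prior /= prior.sum()
delays = np.linspace(0.0, 1.0, 9)        # candidate cue delays in seconds

def best_delay(prior):
    """Pick the delay maximizing mutual information between response and parameters."""
    gains = []
    for t in delays:
        p = decay(t, A, B, TAU)          # P(correct | theta) over the grid
        pc = (prior * p).sum()           # predictive probability of a correct response
        h_pred = -pc * np.log(pc) - (1 - pc) * np.log(1 - pc)
        h_cond = (prior * (-p * np.log(p) - (1 - p) * np.log(1 - p))).sum()
        gains.append(h_pred - h_cond)    # expected information gain at delay t
    return delays[int(np.argmax(gains))]

def update(prior, t, correct):
    """Bayes update of the parameter grid after one trial."""
    p = decay(t, A, B, TAU)
    post = prior * (p if correct else 1.0 - p)
    return post / post.sum()

t_star = best_delay(prior)
post = update(prior, t_star, correct=True)
```

The loop would repeat trial by trial until the stopping criterion on trial count or parameter precision is met.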
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
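The recursive Cartesian subdivision can be sketched with a small tree class; the refinement criterion and depth below are hypothetical (the paper refines toward cut cells and flow features, and stores the grid in a binary tree rather than the quadtree shown):

```python
class Cell:
    """Cartesian cell in a tree; leaves form the grid, interior nodes hold children."""

    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # lower-left corner and edge length
        self.children = []

    def refine(self, criterion, max_depth):
        """Recursively subdivide while `criterion(cell)` flags the cell for refinement."""
        if max_depth == 0 or not criterion(self):
            return
        h = self.size / 2.0
        self.children = [Cell(self.x + i * h, self.y + j * h, h)
                         for j in (0, 1) for i in (0, 1)]
        for c in self.children:
            c.refine(criterion, max_depth - 1)

    def leaves(self):
        """Collect leaf cells; these make up the computational grid."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Refine toward a hypothetical "body" at the origin of a [-1, 1]^2 domain.
near_body = lambda c: (c.x**2 + c.y**2) ** 0.5 < c.size
root = Cell(-1.0, -1.0, 2.0)
root.refine(near_body, max_depth=4)
grid = root.leaves()
```

The same traversal that gathers leaves also supplies cell-to-cell connectivity for flux evaluation, which is the role the binary tree plays in the scheme described above.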
Aguila-Camacho, Norelys; Duarte-Mermoud, Manuel A
2016-01-01
This paper presents the analysis of three classes of fractional differential equations appearing in the field of fractional adaptive systems, for the case when the fractional order is in the interval α∈(0,1] and the Caputo definition for fractional derivatives is used. The boundedness of the solutions is proved for all three cases, and the convergence to zero of the mean value of one of the variables is also proved. Applications of the obtained results to fractional adaptive schemes in the context of identification and control problems are presented at the end of the paper, including numerical simulations which support the analytical results. PMID:26632495
Software for the parallel adaptive solution of conservation laws by discontinuous Galerkin methods.
Flaherty, J. E.; Loy, R. M.; Shephard, M. S.; Teresco, J. D.
1999-08-17
The authors develop software tools for the solution of conservation laws using parallel adaptive discontinuous Galerkin methods. In particular, the Rensselaer Partition Model (RPM) provides parallel mesh structures within an adaptive framework to solve the Euler equations of compressible flow by a discontinuous Galerkin method (LOCO). Results are presented for a Rayleigh-Taylor flow instability for computations performed on 128 processors of an IBM SP computer. In addition to managing the distributed data and maintaining a load balance, RPM provides information about the parallel environment that can be used to tailor partitions to a specific computational environment.
A solution procedure based on the Ateb function for a two-degree-of-freedom oscillator
NASA Astrophysics Data System (ADS)
Cveticanin, L.
2015-06-01
In this paper, the vibration of a two-mass system with two degrees of freedom is considered. Two equal harmonic oscillators are coupled with a strongly nonlinear viscoelastic connection. The mathematical model of the system is a pair of coupled second-order strongly nonlinear differential equations. Introducing new variables, the system transforms into two uncoupled equations: one linear and the other strongly nonlinear. In the paper a method for solving the strongly nonlinear equation is developed. Based on the exact solution of a pure nonlinear differential equation, we assume a perturbed version of the solution with time-variable parameters. Because the solution is periodic, an averaging procedure is introduced. As a special case, vibrations of harmonic oscillators with a fractional-order nonlinear connection are considered. Depending on the order and coefficient of the nonlinearities, bounded or unbounded motion of the masses is determined. In addition, the conditions for a steady-state periodic solution are discussed. The procedure given in the paper is applied to investigate the vibration of a vocal cord, which is modeled with two harmonic oscillators having a strongly nonlinear fractional-order viscoelastic connection. Using experimental data for the vocal cord, the parameters of the steady-state solution describing the flexural vibration of the vocal cord are analyzed. The influence of the order of nonlinearity on the amplitude and frequency of vibration of the vocal cord is obtained. The analytical results are close to those obtained experimentally.
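The decoupling step described can be sketched for a symmetric pair of oscillators with a nonlinear coupling force $f$; this form is an assumption consistent with the abstract, not the paper's exact model:

```latex
% Two equal oscillators with a strongly nonlinear viscoelastic coupling f:
\ddot{x}_1 + \omega^2 x_1 + f(x_1 - x_2, \dot{x}_1 - \dot{x}_2) = 0, \qquad
\ddot{x}_2 + \omega^2 x_2 - f(x_1 - x_2, \dot{x}_1 - \dot{x}_2) = 0.
% New variables: the sum s = x_1 + x_2 and the difference d = x_1 - x_2 give
\ddot{s} + \omega^2 s = 0 \quad \text{(linear)}, \qquad
\ddot{d} + \omega^2 d + 2 f(d, \dot{d}) = 0 \quad \text{(strongly nonlinear)}.
% A hypothetical fractional-order viscoelastic connection would be, e.g.,
% f(d, \dot{d}) = c_\alpha \lvert d \rvert^{\alpha} \operatorname{sgn}(d) + \mu \dot{d}.
```

Adding and subtracting the two equations of motion yields the uncoupled pair: the sum coordinate obeys a linear oscillator, while the difference coordinate carries the strong nonlinearity treated by the Ateb-function solution.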
Adaptive remapping procedure for electronic cleansing of fecal tagging CT colonography images
NASA Astrophysics Data System (ADS)
Morra, Lia; Delsanto, Silvia; Campanella, Delia; Regge, Daniele; Bert, Alberto
2009-02-01
Fecal tagging preparations are attracting notable interest as a way to increase patient compliance with virtual colonoscopy. Patient-friendly preparations, however, often result in less homogeneous tagging. Electronic cleansing algorithms should be capable of dealing with such preparations and yield good-quality 2D and 3D images; moreover, successful electronic cleansing lays the basis for the application of Computer Aided Detection (CAD) schemes. In this work, we present a cleansing algorithm based on an adaptive remapping procedure, built on a model of how partial volume affects both the air-tissue and the soft-tissue interfaces. Partial volume at the stool-soft tissue interface is characterized in terms of the local characteristics of tagged regions, in order to account for variations in tagging intensity throughout the colon. The two models are then combined to obtain a remapping equation relating the observed intensity to that of the cleansed colon. The electronically cleansed datasets were then processed by a CAD scheme composed of three main steps: colon surface extraction, polyp candidate segmentation through curvature-based features, and linear classifier-based discrimination between true polyps and false alarms. Results were compared with a previous version of the cleansing algorithm, which used a simpler remapping procedure. Performance is improved both in terms of the visual quality of the 2D cleansed images and 3D rendered volumes, and in terms of CAD performance on a same-day fecal-tagging virtual colonoscopy dataset.
NASA Astrophysics Data System (ADS)
Gotovac, Hrvoje; Srzic, Veljko
2014-05-01
Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian-Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, with explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines: they are compactly supported basis functions that exactly describe algebraic polynomials and enable multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near-minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. According to our recent achievements there is no need for solving the large
Kim, S.
1994-12-31
Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods, in a Lagrangian framework, an efficient way for choosing the algorithm parameter as well as the algorithm convergence are indicated. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Elliott, J. P.; Spreiter, J. R.
1981-01-01
Perturbation procedures and associated computational codes for determining nonlinear flow solutions were developed to establish a method for minimizing computational requirements associated with parametric studies of transonic flows in turbomachines. The procedure that was developed and evaluated was found to be capable of determining highly accurate approximations to families of strongly nonlinear solutions which are either continuous or discontinuous, and which represent variations in some arbitrary parameter. Coordinate straining is employed to account for the movement of discontinuities and maxima of high gradient regions due to the perturbation. The development and results reported are for the single parameter perturbation problem. Flows past both isolated airfoils and compressor cascades involving a wide variety of flow and geometry parameter changes are reported. Attention is focused in particular on transonic flows which are strongly supercritical and exhibit large surface shock movement over the parametric range studied; and on subsonic flows which display large pressure variations in the stagnation and peak suction pressure regions. Comparisons with the corresponding 'exact' nonlinear solutions indicate a remarkable accuracy and range of validity of such a procedure.
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Solution of the Euler Equations
Anderson, R W; Elliott, N S; Pember, R B
2003-02-14
A new method that combines staggered grid arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the methods are driven by the need to reconcile traditional AMR techniques with the staggered variables and moving, deforming meshes associated with Lagrange based ALE schemes. We develop interlevel solution transfer operators and interlevel boundary conditions first in the case of purely Lagrangian hydrodynamics, and then extend these ideas into an ALE method by developing adaptive extensions of elliptic mesh relaxation techniques. Conservation properties of the method are analyzed, and a series of test problem calculations are presented which demonstrate the utility and efficiency of the method.
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Biswas, Rupak; Strawn, Roger C.
1995-01-01
This paper summarizes a method that solves both the three dimensional thin-layer Navier-Stokes equations and the Euler equations using overset structured and solution adaptive unstructured grids with applications to helicopter rotor flowfields. The overset structured grids use an implicit finite-difference method to solve the thin-layer Navier-Stokes/Euler equations while the unstructured grid uses an explicit finite-volume method to solve the Euler equations. Solutions on a helicopter rotor in hover show the ability to accurately convect the rotor wake. However, isotropic subdivision of the tetrahedral mesh rapidly increases the overall problem size.
NASA Technical Reports Server (NTRS)
Jawerth, Bjoern; Sweldens, Wim
1993-01-01
We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.
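The central idea, making the basis (bi)orthogonal with respect to the inner product induced by the operator so that the Galerkin stiffness matrix becomes diagonal, can be illustrated with a minimal sketch. This is not the authors' wavelet construction: plain Gram-Schmidt on a 1D Poisson stiffness matrix stands in for the operator-adapted wavelets, and all names below are illustrative.

```python
import numpy as np

def operator_orthogonalize(B, A):
    """Modified Gram-Schmidt on the columns of B with respect to the
    inner product <u, v>_A = u^T A v (A symmetric positive definite).
    Returns W with W^T A W = I, i.e. a diagonal (identity) stiffness matrix."""
    W = B.astype(float).copy()
    for j in range(W.shape[1]):
        for k in range(j):
            W[:, j] -= (W[:, k] @ (A @ W[:, j])) * W[:, k]
        W[:, j] /= np.sqrt(W[:, j] @ (A @ W[:, j]))
    return W

# 1D Poisson stiffness matrix (tridiagonal, SPD) as the operator
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
W = operator_orthogonalize(np.eye(n), A)
S = W.T @ A @ W                    # Galerkin stiffness in the adapted basis
print(np.allclose(S, np.eye(n)))   # → True: the system is trivially invertible
```

Because the stiffness matrix is the identity in the adapted basis, solving the Galerkin system reduces to a diagonal solve; the paper achieves the same effect with operator-adapted wavelets while retaining an O(N) algorithm.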
Roberts, Nathan V.; Demkowicz, Leszek; Moser, Robert
2015-11-15
The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates—the inf-sup constants governing the convergence are mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.
An Adaptive Landscape Classification Procedure using Geoinformatics and Artificial Neural Networks
Coleman, Andre Michael
2008-06-01
The Adaptive Landscape Classification Procedure (ALCP), which links the advanced geospatial analysis capabilities of Geographic Information Systems (GISs) and Artificial Neural Networks (ANNs), particularly Self-Organizing Maps (SOMs), is proposed as a method for establishing and reducing complex data relationships. Its adaptive and evolutionary capability is evaluated for situations where varying types of data can be combined to address different prediction and/or management needs such as hydrologic response, water quality, aquatic habitat, groundwater recharge, land use, instrumentation placement, and forecast scenarios. The research presented here documents, with favorable results, a procedure that aims to be a powerful and flexible spatial data classifier, one that fuses the strengths of geoinformatics with the intelligence of SOMs to provide data patterns and spatial information for environmental managers and researchers. This research shows how evaluation and analysis of spatial and/or temporal patterns in the landscape can provide insight into complex ecological, hydrological, climatic, and other natural and anthropogenic-influenced processes. Certainly, environmental management and research within heterogeneous watersheds provide challenges for consistent evaluation and understanding of system functions. For instance, watersheds over a range of scales are likely to exhibit varying levels of diversity in their characteristics of climate, hydrology, physiography, ecology, and anthropogenic influence. Furthermore, it has become evident that understanding and analyzing these diverse systems can be difficult not only because of varying natural characteristics, but also because of the availability, quality, and variability of spatial and temporal data. Developments in geospatial technologies, however, are providing a wide range of relevant data, and in many cases, at a high temporal and spatial resolution. Such data resources can take the form of high
Karmali, Faisal; Chaudhuri, Shomesh E; Yi, Yongwoo; Merfeld, Daniel M
2016-03-01
When measuring thresholds, careful selection of stimulus amplitude can increase efficiency by increasing the precision of psychometric fit parameters (e.g., decreasing the fit parameter error bars). To find efficient adaptive algorithms for psychometric threshold ("sigma") estimation, we combined analytic approaches, Monte Carlo simulations, and human experiments for a one-interval, binary forced-choice, direction-recognition task. To our knowledge, this is the first time analytic results have been combined and compared with either simulation or human results. Human performance was consistent with theory and not significantly different from simulation predictions. Our analytic approach provides a bound on efficiency, which we compared against the efficiency of standard staircase algorithms, a modified staircase algorithm with asymmetric step sizes, and a maximum likelihood estimation (MLE) procedure. Simulation results suggest that optimal efficiency at determining threshold is provided by the MLE procedure targeting a fraction correct level of 0.92, an asymmetric 4-down, 1-up staircase targeting between 0.86 and 0.92 or a standard 6-down, 1-up staircase. Psychometric test efficiency, computed by comparing simulation and analytic results, was between 41 and 58% for 50 trials for these three algorithms, reaching up to 84% for 200 trials. These approaches were 13-21% more efficient than the commonly used 3-down, 1-up symmetric staircase. We also applied recent advances to reduce accuracy errors using a bias-reduced fitting approach. Taken together, the results lend confidence that the assumptions underlying each approach are reasonable and that human threshold forced-choice decision making is modeled well by detection theory models and mimics simulations based on detection theory models. PMID:26645306
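A minimal Monte Carlo sketch (hypothetical, not the authors' code) of one of the staircase algorithms discussed: an n-down/1-up staircase on a cumulative-Gaussian psychometric function. The values of `sigma_true`, the step size, and the starting level are illustrative choices.

```python
import numpy as np
from math import erf, sqrt

def run_staircase(sigma_true, n_trials=200, n_down=3, step=0.5, start=4.0, seed=0):
    """Simulate an n_down-down/1-up staircase for a binary direction-recognition
    task whose probability correct is a cumulative Gaussian of width sigma_true."""
    rng = np.random.default_rng(seed)
    p_correct = lambda x: 0.5 + 0.5 * erf(x / (sigma_true * sqrt(2)))
    level, run, levels = start, 0, []
    for _ in range(n_trials):
        levels.append(level)
        if rng.random() < p_correct(level):   # correct response
            run += 1
            if run == n_down:                 # n_down in a row -> make it harder
                level, run = max(level - step, 1e-6), 0
        else:                                 # wrong -> make it easier
            level, run = level + step, 0
    return np.mean(levels[n_trials // 2:])    # estimate from the later, converged trials

# A 3-down/1-up staircase converges near the 79.4%-correct point,
# i.e. a stimulus level of roughly 0.82 * sigma_true
est = run_staircase(sigma_true=1.0)
```

Repeating such simulations over many seeds, and comparing the spread of `est` against the analytic bound, is the kind of efficiency comparison the abstract describes for the various staircase and MLE procedures.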
Mission to Mars: Adaptive Identifier for the Solution of Inverse Optical Metrology Tasks
NASA Astrophysics Data System (ADS)
Krapivin, Vladimir F.; Varotsos, Costas A.; Christodoulakis, John
2016-06-01
A human mission to Mars requires the solution of many problems, mainly linked to the safety of life, the reliable operational control of drinking water, and health care. The availability of liquid fuels is also an important issue, since existing tools cannot fully provide the liquid fuel quantities required for the return journey. This paper presents the development of new methods and technology for reliable, operational, and highly available chemical analysis of liquid solutions of various types. This technology is based on the employment of optical sensors (such as multi-channel spectrophotometers or spectroellipsometers and microwave radiometers) and on the development of a database of spectral images for typical liquid solutions that could be objects of life on Mars. This database supports the adaptive recognition of optical images of liquids using algorithms based on spectral analysis, cluster analysis and methods for solving inverse optical metrology tasks.
The development of a solution-adaptive 3D Navier-Stokes solver for turbomachinery
NASA Astrophysics Data System (ADS)
Dawes, W. N.
1991-06-01
This paper describes the early stages in the development of a solution-adaptive, fully three-dimensional Navier-Stokes solver. The compressible Navier-Stokes equations, closed with k-epsilon turbulence modeling, are discretized on an unstructured mesh formed from tetrahedral computational control volumes. At the mesh generation stage, and at stages during the solution process itself, mesh refinement is carried out by flagging cells which satisfy particular criteria. These criteria include geometric features, such as proximity to wetted surfaces, and features associated with the particular flowfield, such as the fractional variation of a flow variable over cell faces. Solutions are presented for the highly three-dimensional flows associated with a truncated cylinder in a cross flow, a three-dimensional swept transonic bump, and the corner stall and secondary flow in a transonic compressor cascade.
ERIC Educational Resources Information Center
Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.
2006-01-01
The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…
Adaptive resolution simulation of an atomistic DNA molecule in MARTINI salt solution
NASA Astrophysics Data System (ADS)
Zavadlav, J.; Podgornik, R.; Melo, M. N.; Marrink, S. J.; Praprotnik, M.
2016-07-01
We present a dual-resolution model of a deoxyribonucleic acid (DNA) molecule in a bathing solution, where we concurrently couple atomistic bundled water and ions with the coarse-grained MARTINI model of the solvent. We use our fine-grained salt solution model as the solvent in the inner shell surrounding the DNA molecule, whereas the solvent in the outer shell is modeled by the coarse-grained model. The solvent entities can exchange between the two domains and adapt their resolution accordingly. We critically assess the performance of our multiscale model in adaptive resolution simulations of an infinitely long DNA molecule, focusing on the structural characteristics of the solvent around DNA. Our analysis shows that the adaptive resolution scheme does not produce any noticeable artifacts in comparison to a reference system simulated in full detail. The effect of using a bundled-SPC water model, required for multiscaling, compared to the standard free SPC model is also evaluated. Our multiscale approach opens the way for large-scale applications involving DNA and other biomolecules that require a large solvent reservoir to avoid boundary effects.
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Singhal, A. K.; Tam, L. T.
1984-01-01
The capability of simulating three-dimensional two-phase reactive flows with combustion in liquid-fuelled rocket engines is demonstrated. This was accomplished by modifying an existing three-dimensional computer program (REFLAN3D) with an Eulerian-Lagrangian approach to simulate two-phase spray flow, evaporation and combustion. The modified code is referred to as REFLAN3D-SPRAY. The mathematical formulation of the fluid flow, heat transfer, combustion and two-phase flow interaction, the numerical solution procedure, and the boundary conditions and their treatment are described.
Calculation procedures for potential and viscous flow solutions for engine inlets
NASA Technical Reports Server (NTRS)
Albers, J. A.; Stockman, N. O.
1973-01-01
The method and basic elements of computer solutions for both potential flow and viscous flow calculations for engine inlets are described. The procedure is applicable to subsonic conventional (CTOL), short-haul (STOL), and vertical takeoff (VTOL) aircraft engine nacelles operating in a compressible viscous flow. The calculated results compare well with measured surface pressure distributions for a number of model inlets. The paper discusses the uses of the program in both the design and analysis of engine inlets, with several examples given for VTOL lift fans, acoustic splitters, and for STOL engine nacelles. Several test support applications are also given.
Patch-based methods for adaptive mesh refinement solutions of partial differential equations
Saltzman, J.
1997-09-02
This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs, with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes; these schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundaries and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods, and elliptically constrained flow solution methods with diffusion.
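The flag-then-patch step at the core of such adaptive algorithms can be sketched in one dimension. This is a hypothetical simplification: real patch-based AMR clusters flagged cells in 2D/3D with efficiency heuristics, whereas here we simply flag cells whose undivided difference exceeds a tolerance and group flagged cells, plus a safety buffer, into contiguous refinement patches.

```python
import numpy as np

def flag_cells(u, tol):
    """Flag cells adjacent to an undivided difference larger than tol."""
    grad = np.abs(np.diff(u))
    flags = np.zeros(len(u), dtype=bool)
    flags[:-1] |= grad > tol
    flags[1:] |= grad > tol
    return flags

def cluster_patches(flags, buffer=1):
    """Group flagged cells (padded by a buffer of cells) into contiguous
    patch index ranges, merging patches that touch or overlap."""
    idx = np.flatnonzero(flags)
    if idx.size == 0:
        return []
    patches, lo, hi = [], idx[0], idx[0]
    for i in idx[1:]:
        if i <= hi + 1 + buffer:
            hi = i
        else:
            patches.append((max(lo - buffer, 0), hi + buffer))
            lo = hi = i
    patches.append((max(lo - buffer, 0), hi + buffer))
    return patches

# A step profile: only cells around the discontinuity get refined
x = np.linspace(0, 1, 64)
u = np.where(x < 0.5, 1.0, 0.0)
patches = cluster_patches(flag_cells(u, tol=0.1))
print(patches)   # → [(30, 33)]: a single small patch around the jump
```

On each such patch, a finer grid would be overlaid and the difference scheme re-applied, which is the structure the notes build up for hyperbolic conservation laws.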
Pérez-Jordá, José M
2011-11-28
A series of improvements for the solution of the three-dimensional Schrödinger equation over a method introduced by Gygi [F. Gygi, Europhys. Lett. 19, 617 (1992); F. Gygi, Phys. Rev. B 48, 11692 (1993)] are presented. As in Gygi's original method, the solution (orbital) is expressed by means of plane waves in adaptive coordinates u, where u is mapped from Cartesian coordinates, u = f(r). The improvements implemented are threefold. First, maps are introduced that allow the application of the method to atoms and molecules without the assistance of the supercell approximation. Second, the electron-nucleus singularities are exactly removed, so that pseudo-potentials are no longer required. Third, the sampling error during integral evaluation is made negligible, which results in a true variational, second-order energy error procedure. The method is tested on the hydrogen atom (ground and excited states) and the H₂⁺ molecule, resulting in milli-Hartree accuracy with a moderate number of plane waves. PMID:22128925
NASA Astrophysics Data System (ADS)
Gioiella, Lucia; Altobelli, Rosaria; de Luna, Martina Salzano; Filippone, Giovanni
2016-05-01
The efficacy of chitosan-based hydrogels in the removal of dyes from aqueous solutions has been investigated as a function of different parameters. Hydrogels were obtained by gelation of chitosan with a non-toxic gelling agent based on an aqueous basic solution. The preparation procedure has been optimized in terms of the chitosan concentration in the starting solution, the gelling agent concentration, and the chitosan-to-gelling agent ratio. The goal is to properly select the material- and process-related parameters in order to optimize the performance of the chitosan-based dye adsorbent. First, the influence of these factors on the gelling process has been studied from a kinetic point of view. Then, the effects on the adsorption capacity and kinetics of the chitosan hydrogels obtained under different conditions have been investigated. A common food dye (Indigo Carmine) has been used for this purpose. Notably, although the disk-shaped hydrogels are in bulk form, their adsorption capacity is comparable to that reported in the literature for films and beads. In addition, the bulk samples can be easily separated from the liquid phase after the adsorption process, which is highly attractive from a practical point of view. Compression tests reveal that the samples do not break up even after relatively large compressive strains. The obtained results suggest that fine tuning of the process parameters allows the production of mechanically resistant and highly adsorbing chitosan-based hydrogels.
NASA Astrophysics Data System (ADS)
Wissmeier, L. C.; Barry, D. A.
2009-12-01
Computer simulations of water availability and quality play an important role in state-of-the-art water resources management. However, many of the most utilized software programs focus either on physical flow and transport phenomena (e.g., MODFLOW, MT3DMS, FEFLOW, HYDRUS) or on geochemical reactions (e.g., MINTEQ, PHREEQC, CHESS, ORCHESTRA). In recent years, several couplings between the two genres of programs have evolved in order to consider interactions between flow and biogeochemical reactivity (e.g., HP1, PHWAT). Software coupling procedures can be categorized as ‘close couplings’, where programs pass information via the memory stack at runtime, and ‘remote couplings’, where the information is exchanged at each time step via input/output files. The former generally involves modifications of software codes, and therefore expert programming skills are required. We present a generic recipe for remotely coupling the PHREEQC geochemical modeling framework and flow and solute transport (FST) simulators. The iterative scheme relies on operator splitting with continuous re-initialization of PHREEQC and the FST of choice at each time step. Since PHREEQC calculates the geochemistry of aqueous solutions in contact with soil minerals, the procedure is primarily designed for couplings to FSTs for liquid-phase flow in natural environments. It requires the accessibility of initial conditions and numerical parameters such as time and space discretization in the input text file for the FST, and control of the FST via commands to the operating system (batch on Windows; bash/shell on Unix/Linux). The coupling procedure is based on PHREEQC's capability to save the state of a simulation, with all solid, liquid and gaseous species, as a PHREEQC input file by making use of the dump file option in the TRANSPORT keyword. The output from one reaction calculation step is therefore reused as input for the following reaction step, where changes in element amounts due to advection
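The operator-splitting loop underlying such a remote coupling can be sketched as follows. The stubs below are hypothetical stand-ins (explicit upwind advection for the FST step, first-order decay for the geochemistry step); the real procedure would instead invoke the external FST executable and a re-initialized PHREEQC run via OS commands, exchanging state through dump/input files rather than in memory.

```python
import numpy as np

def transport_step(c, v, dt, dx):
    """Stub flow-and-solute-transport (FST) step: explicit upwind advection.
    In the real coupling this would be a call to the external FST program."""
    cn = c.copy()
    cn[1:] -= v * dt / dx * (c[1:] - c[:-1])
    return cn

def reaction_step(c, k, dt):
    """Stub geochemistry step: first-order decay standing in for a PHREEQC run
    re-initialized from the previous step's dump file."""
    return c * np.exp(-k * dt)

# Operator splitting: alternate transport and reaction each time step,
# passing the full state between the two "programs" (here in memory;
# in the remote coupling, via input/output files on disk).
dx, dt, v, k = 0.1, 0.05, 1.0, 0.2
c = np.zeros(50)
c[0] = 1.0                      # fixed-concentration inflow boundary
for _ in range(40):
    c = transport_step(c, v, dt, dx)
    c = reaction_step(c, k, dt)
```

The splitting error of such a scheme shrinks with the time step, which is why the recipe re-initializes both codes at every step rather than once per simulation.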
NASA Astrophysics Data System (ADS)
Kuraz, Michal
2016-06-01
This paper presents a pseudo-deterministic catchment runoff model based on the Richards equation model [1], the governing equation for subsurface flow. The subsurface flow in a catchment is described here by two-dimensional variably saturated flow (unsaturated and saturated). The governing equation is the Richards equation with a slight modification of the time derivative term, as considered e.g. by Neuman [2]. The nonlinear nature of this problem appears in the unsaturated zone only; however, the delineation of the saturated zone boundary is a nonlinear, computationally expensive issue. The simple one-dimensional Boussinesq equation is used here as a rough estimator of the saturated zone boundary. With this estimate the dd-adaptivity algorithm (see Kuraz et al. [4, 5, 6]) can always start with an optimal subdomain split, so it is now possible to avoid solving huge systems of linear equations at the initial iteration level of our Richards equation based runoff model.
NASA Technical Reports Server (NTRS)
Stricklin, J. A.; Haisler, W. E.; Von Riesemann, W. A.
1972-01-01
This paper presents an assessment of the solution procedures available for the analysis of inelastic and/or large-deflection structural behavior. A literature survey is given which summarizes the contributions of other researchers to the analysis of structural problems exhibiting material nonlinearities and combined geometric-material nonlinearities. Attention is focused on evaluating the available computational and solution techniques. Each of the solution techniques is developed from a common equation of equilibrium in terms of pseudo forces. The solution procedures are applied to circular plates and shells of revolution in an attempt to compare and evaluate each with respect to computational accuracy, economy, and efficiency. Based on the numerical studies, observations and comments are made with regard to the accuracy and economy of each solution technique.
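The incremental Newton-Raphson iteration common to several of the surveyed solution techniques can be sketched on a one-degree-of-freedom hardening spring. This is an illustrative example, not taken from the paper; `k1`, `k3` and the load increments are arbitrary choices.

```python
def newton_raphson(residual, tangent, u0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration for a nonlinear equilibrium equation R(u) = 0."""
    u = u0
    for _ in range(max_iter):
        r = residual(u)
        if abs(r) < tol:
            break
        u -= r / tangent(u)        # solve the linearized equation at each step
    return u

# Hardening spring: internal force k1*u + k3*u**3 balancing the external load F,
# applied in increments (the incremental form of the equilibrium equation)
k1, k3 = 1.0, 0.5
u = 0.0
for F in (0.5, 1.0, 1.5, 2.0):     # load increments, each started from the last state
    u = newton_raphson(lambda x: k1*x + k3*x**3 - F,
                       lambda x: k1 + 3*k3*x**2, u)
print(round(k1*u + k3*u**3, 6))    # → 2.0: equilibrium at the final load
```

The trade-off the paper evaluates, accuracy versus economy, shows up even here: smaller load increments cost more steps but keep each Newton iteration safely within its convergence radius.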
Zonal multigrid solution of compressible flow problems on unstructured and adaptive meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1989-01-01
The simultaneous use of adaptive meshing techniques with a multigrid strategy for solving the 2-D Euler equations in the context of unstructured meshes is studied. To obtain optimal efficiency, methods capable of computing locally improved solutions without recourse to global recalculations are pursued. A method for locally refining an existing unstructured mesh, without regenerating a new global mesh is employed, and the domain is automatically partitioned into refined and unrefined regions. Two multigrid strategies are developed. In the first, time-stepping is performed on a global fine mesh covering the entire domain, and convergence acceleration is achieved through the use of zonal coarse grid accelerator meshes, which lie under the adaptively refined regions of the global fine mesh. Both schemes are shown to produce similar convergence rates to each other, and also with respect to a previously developed global multigrid algorithm, which performs time-stepping throughout the entire domain, on each mesh level. However, the present schemes exhibit higher computational efficiency due to the smaller number of operations on each level.
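The coarse-grid acceleration idea can be illustrated with a minimal two-grid cycle for a 1D Poisson problem. This is a structured, single-zone sketch under stated assumptions (Galerkin coarsening, weighted-Jacobi smoothing, a direct coarse solve); the paper's zonal scheme applies the same principle on unstructured, adaptively refined meshes.

```python
import numpy as np

def poisson_matrix(m):
    """1D Poisson stiffness matrix on m interior points."""
    return 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)

def interpolation(mc, m):
    """Linear interpolation from mc coarse to m = 2*mc + 1 fine interior points."""
    P = np.zeros((m, mc))
    for j in range(mc):
        P[2 * j, j] += 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] += 0.5
    return P

def two_grid_cycle(A, b, u, P, n_smooth=3, w=2 / 3):
    """Weighted-Jacobi smoothing plus a coarse-grid correction."""
    D = np.diag(A)
    for _ in range(n_smooth):
        u = u + w * (b - A @ u) / D          # pre-smoothing
    R = 0.5 * P.T                            # full-weighting restriction
    Ac = R @ A @ P                           # Galerkin coarse-grid operator
    u = u + P @ np.linalg.solve(Ac, R @ (b - A @ u))
    for _ in range(n_smooth):
        u = u + w * (b - A @ u) / D          # post-smoothing
    return u

m, mc = 31, 15                               # fine and coarse interior point counts
A, b, u = poisson_matrix(m), np.ones(m), np.zeros(m)
for _ in range(5):
    u = two_grid_cycle(A, b, u, interpolation(mc, m))
err = np.linalg.norm(b - A @ u) / np.linalg.norm(b)
```

A handful of cycles drives the relative residual down by many orders of magnitude, which is the convergence acceleration that the zonal coarse accelerator meshes provide under the refined regions.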
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Impact of Metal Nanoform Colloidal Solution on the Adaptive Potential of Plants
NASA Astrophysics Data System (ADS)
Taran, Nataliya; Batsmanova, Ludmila; Kovalenko, Mariia; Okanenko, Alexander
2016-02-01
Nanoparticles are a known cause of oxidative stress and can thereby induce an antistress response; the latter property was the focus of our study. The effect of two concentrations (120 and 240 mg/l) of a nanoform biogenic metal (Ag, Cu, Fe, Zn, Mn) colloidal solution on the antioxidant enzymes superoxide dismutase and catalase, the level of the factor of the antioxidant state, and the content of thiobarbituric acid reactive substances (TBARSs) of soybean plants was studied under field conditions. It was found that oxidative processes developed in the metal nanoparticle pre-sowing seed treatment variant at a concentration of 120 mg/l, as evidenced by a 12 % increase in the content of TBARS in photosynthetic tissues. Pre-sowing treatment at the double concentration (240 mg/l) resulted in a decrease in oxidative processes (19 %), and pre-sowing treatment combined with vegetative treatment also contributed to the reduction of TBARS (10 %). Increased activity of superoxide dismutase (SOD) was observed in the variant with the increased content of TBARS; SOD activity was at the control level in the two other variants. Catalase activity decreased in all variants. The factor of antioxidant activity was highest (0.3) in the variant with double nanoparticle treatment (pre-sowing and vegetative) at a concentration of 120 mg/l. Thus, the studied nanometal colloidal solution, when used in small doses over a certain time interval, can be considered a low-level stress factor which, according to the hormesis principle, promotes an adaptive response.
Sukkay, Sasicha
2016-01-01
Based on a 2013 statistic published by the Thai with Disability Foundation, five percent of Thailand's population are disabled people. Six hundred thousand of them have a mobility disability, and the number is increasing every year. To support them, the Thai government has implemented a number of disability laws and policies. One of the policies is to improve disabled people's quality of life by adapting their houses to facilitate their activities. However, the policy has not been fully realized yet: there is still no specific guideline for housing adaptation for people with disabilities. This study is an attempt to address the lack of standardized criteria for such adaptation by developing a number of effective ones. Our development had three objectives: first, to identify the body functioning of a group of people with mobility disability according to the International Classification of Functioning (ICF) concept; second, to perform post-occupancy evaluation of this group and their houses; and third, with the collected data, to have a group of multidisciplinary experts cooperatively develop criteria for housing adaptation. The major findings were that room dimensions and furniture materials had a real impact on accessibility, and that the toilet and bedroom were the most difficult areas to access. PMID:27534326
EEG-Based BCI System Using Adaptive Features Extraction and Classification Procedures
Mangia, Anna Lisa; Cappello, Angelo
2016-01-01
Motor imagery is a common control strategy in EEG-based brain-computer interfaces (BCIs). However, voluntary control of sensorimotor (SMR) rhythms by imagining a movement can be demanding and unintuitive and usually requires a varying amount of user training. To boost the training process, a whole class of BCI systems has been proposed, providing feedback as early as possible while continuously adapting the underlying classifier model. The present work describes a cue-paced, EEG-based BCI system using motor imagery that falls within this category. Specifically, our adaptive strategy includes a simple scheme based on the common spatial pattern (CSP) method and support vector machine (SVM) classification. The system's efficacy was proved by online testing on 10 healthy participants. In addition, we suggest some features we implemented to improve the system's “flexibility” and “customizability,” namely, (i) a flexible training session, (ii) an unbalancing in the training conditions, and (iii) the use of adaptive thresholds when giving feedback.
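The CSP step mentioned above can be sketched in a few lines. This is a minimal illustration under assumed two-class covariance matrices: the function name `csp_filters` and the toy covariances are ours, and a real pipeline would estimate covariances from band-pass-filtered EEG trials and feed log-variance features of the filtered signals to an SVM.

```python
import numpy as np

def csp_filters(cov_a, cov_b, n_pairs=1):
    """Common spatial patterns: eigenvectors of inv(cov_a + cov_b) @ cov_a,
    keeping the filters with the most extreme eigenvalues (largest variance
    contrast between the two classes)."""
    vals, vecs = np.linalg.eig(np.linalg.solve(cov_a + cov_b, cov_a))
    order = np.argsort(vals.real)
    keep = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs.real[:, keep]  # columns are spatial filters

# Toy two-channel covariances with opposite variance patterns
cov_left = np.array([[2.0, 0.0], [0.0, 0.5]])
cov_right = np.array([[0.5, 0.0], [0.0, 2.0]])
W = csp_filters(cov_left, cov_right)
```

For these diagonal covariances the extreme filters are simply the two channel axes, which is the expected behaviour of CSP on uncorrelated channels.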
ERIC Educational Resources Information Center
Chang, Hua-Hua; And Others
Recently, R. Shealy and W. Stout (1993) proposed a procedure for detecting differential item functioning (DIF) called SIBTEST. Current versions of SIBTEST can only be used for dichotomously scored items, but this paper presents an extension to handle polytomous items. The paper presents: (1) a discussion of an appropriate definition of DIF for…
Bayesian Procedures for Identifying Aberrant Response-Time Patterns in Adaptive Testing
ERIC Educational Resources Information Center
van der Linden, Wim J.; Guo, Fanmin
2008-01-01
In order to identify aberrant response-time patterns on educational and psychological tests, it is important to be able to separate the speed at which the test taker operates from the time the items require. A lognormal model for response times with this feature was used to derive a Bayesian procedure for detecting aberrant response times.…
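The separation of person speed from item time demands can be illustrated with a small residual check under a lognormal response-time model. Everything here is a simplified stand-in: the plug-in estimate of the speed parameter `tau`, the fixed `sigma`, and the z-score cutoff are ours, not the paper's full Bayesian procedure.

```python
def flag_aberrant(log_times, item_intensity, sigma=0.5, z_crit=3.0):
    """Flag responses whose log response time deviates strongly from a
    lognormal model with mean beta_i - tau, where beta_i is the item's
    time intensity and tau the test taker's speed."""
    n = len(log_times)
    # Plug-in estimate of the test taker's speed from all responses
    tau = sum(b - t for b, t in zip(item_intensity, log_times)) / n
    flags = [i for i, (b, t) in enumerate(zip(item_intensity, log_times))
             if abs((t - (b - tau)) / sigma) > z_crit]
    return tau, flags

# Three ordinary responses and one suspiciously slow one
tau, flags = flag_aberrant([2.5, 2.5, 2.5, 6.0], [3.0, 3.0, 3.0, 3.0])
```

Only the fourth response is flagged: its log time is far above what the common speed estimate predicts for an item of that intensity.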
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were subsequently queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
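The Granule-description step can be sketched with the standard library's ElementTree. The element names and identifiers below are illustrative stand-ins, not the official SPASE schema; the point is only the pattern of associating a file with a parent resource, an identifier, and an access URL.

```python
import xml.etree.ElementTree as ET

def granule_description(resource_id, parent_id, url):
    """Build a minimal Granule-style XML description linking one data
    file to its parent resource (element names are illustrative)."""
    granule = ET.Element("Granule")
    ET.SubElement(granule, "ResourceID").text = resource_id
    ET.SubElement(granule, "ParentID").text = parent_id
    source = ET.SubElement(granule, "Source")
    ET.SubElement(source, "URL").text = url
    return ET.tostring(granule, encoding="unicode")

xml_text = granule_description(
    "spase://Example/Granule/AC_H0_MFI/20230101",
    "spase://Example/NumericalData/AC_H0_MFI",
    "https://example.gov/data/ac_h0_mfi_20230101.cdf")
```

A nightly update job would regenerate or delete such descriptions as the file lists change.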
Dissociating proportion congruent and conflict adaptation effects in a Simon-Stroop procedure.
Torres-Quesada, Maryem; Funes, Maria Jesús; Lupiáñez, Juan
2013-02-01
Proportion congruent and conflict adaptation are two well-known effects associated with cognitive control. A critical open question is whether they reflect the same or separate cognitive control mechanisms. In this experiment, in a training phase we introduced a proportion congruency manipulation for one conflict type (i.e. Simon), whereas in pre-training and post-training phases two conflict types (i.e. Simon and Spatial Stroop) were displayed with the same incongruent-to-congruent ratio. The results supported the sustained nature of the proportion congruent effect, as it transferred from the training to the post-training phase. Furthermore, this transfer generalized to both conflict types. By contrast, the conflict adaptation effect was specific to conflict type, as it was only observed when the same conflict type (either Simon or Stroop) was presented on two consecutive trials (no effect was observed on conflict type alternation trials). Results are interpreted as supporting the reactive and proactive control mechanisms distinction. PMID:23337083
Test procedure for anion exchange testing with Argonne 10-L solutions
Compton, J.A.
1995-05-17
Four anion exchange resins will be tested to confirm that they will sorb and release plutonium from/to the appropriate solutions in the presence of other cations. Certain cations need to be removed from the test solutions to minimize adverse behavior in other processing equipment. The ion exchange resins will be tested using old laboratory solutions from Argonne National Laboratory; results will be compared to results from other similar processes for application to all plutonium solutions stored in the Plutonium Finishing Plant.
Two-Dimensional Solutions of MHD Equations with AN Adapted ROE Method
NASA Astrophysics Data System (ADS)
Aslan, Necdet
1996-12-01
In this paper a higher-order Godunov method for two-dimensional solutions of the ideal MHD (magnetohydrodynamic) equations is presented. The method utilizes the finite volume approach with quadrilateral cells. In Section 2 the MHD equations (including flux and source terms) in conservative form are given. The momentum flux is rearranged such that while a source vector is produced, the eigenstructure of the Jacobian matrix does not change. This rearrangement allows a full Roe averaging of the density, velocity and pressure for any value of adiabatic index (contrary to Brio and Wu's conclusion (J. Comput. Phys., 75, 400 (1988)). Full Roe averaging for the magnetic field is possible only when the normal gradient of the magnetic field is negligible; otherwise an arithmetic averaging can be used. This new procedure to get Roe-averaged MHD fields at the interfaces between left and right states has been presented by Aslan (Ph.D. Thesis, University of Michigan, 1993; Int. J. Numer. Methods Fluids, 22, 569-580 (1996)). This section also includes the shock structure and an eigensystem for MHD problems. The eigenvalues, right eigenvectors and wave strengths for MHD are given in detail to provide the reader with a full description. The second-order, limited finite volume approach which utilizes quadrilateral cells is given in full detail in Section 3. Section 4 gives one- and two-dimensional numerical results obtained from this method. Finally, conclusions are given in Section 5.
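The square-root-density weighting behind Roe averaging can be shown for a single velocity component. This is the generic Roe average of gas dynamics, given here only to illustrate the averaging rule the abstract refers to, not the paper's full MHD eigensystem.

```python
import math

def roe_average(rho_l, rho_r, u_l, u_r):
    """Square-root-density (Roe) average of density and one velocity
    component between left and right states of an interface."""
    wl, wr = math.sqrt(rho_l), math.sqrt(rho_r)
    rho_hat = wl * wr                         # Roe-averaged density
    u_hat = (wl * u_l + wr * u_r) / (wl + wr)  # density-weighted velocity
    return rho_hat, u_hat

rho_hat, u_hat = roe_average(1.0, 4.0, 10.0, 40.0)
```

The averaged velocity is biased toward the denser state, which is what makes the linearized flux difference consistent with the exact jump conditions.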
Dispensing an enzyme-conjugated solution into an ELISA plate by adapting ink-jet printers.
Lonini, Luca; Accoto, Dino; Petroni, Silvia; Guglielmelli, Eugenio
2008-04-24
The rapid and precise delivery of small volumes of bio-fluids (from picoliters to nanoliters) is a key feature of modern bioanalytical assays. Commercial ink-jet printers are low-cost systems which enable the dispensing of tiny droplets at a rate which may exceed 10^4 Hz per nozzle. Currently, the main ejection technologies are piezoelectric and bubble-jet. We adapted two commercial printers, respectively a piezoelectric and a bubble-jet one, for the deposition of immunoglobulins into an ELISA plate. The objective was to perform a comparative evaluation of the two classes of ink-jet technologies in terms of required hardware modifications and possible damage on the dispensed molecules. The hardware of the two printers was modified to dispense an enzyme conjugate solution, containing polyclonal rabbit anti-human IgG labelled with HRP in 7 wells of an ELISA plate. Moreover, the ELISA assay was used to assess the functional activity of the biomolecules after ejection. ELISA is a common and well-assessed technique to detect the presence of particular antigens or antibodies in a sample. We employed an ELISA diagnostic kit for the qualitative screening of anti-ENA antibodies to verify the ability of the dispensed immunoglobulins to bind the primary antibodies in the wells. Experimental tests showed that the dispensing of immunoglobulins using the piezoelectric printer does not cause any detectable difference on the outcome of the ELISA test if compared to manual dispensing using micropipettes. On the contrary, the thermal printhead was not able to reliably dispense the bio-fluid, which may mean that a surfactant is required to modify the wetting properties of the liquid. PMID:17588671
A formal protocol test procedure for the Survivable Adaptable Fiber Optic Embedded Network (SAFENET)
NASA Astrophysics Data System (ADS)
High, Wayne
1993-03-01
This thesis focuses upon a new method for verifying the correct operation of a complex, high speed fiber optic communication network. These networks are of growing importance to the military because of their increased connectivity, survivability, and reconfigurability. With the introduction and increased dependence on sophisticated software and protocols, it is essential that their operation be correct. Because of the speed and complexity of fiber optic networks being designed today, they are becoming increasingly difficult to test. Previously, testing was accomplished by application of conformance test methods which had little connection with an implementation's specification. The major goal of conformance testing is to ensure that the implementation of a profile is consistent with its specification. Formal specification is needed to ensure that the implementation performs its intended operations while exhibiting desirable behaviors. The new conformance test method presented is based upon the System of Communicating Machine model which uses a formal protocol specification to generate a test sequence. The major contribution of this thesis is the application of the System of Communicating Machine model to formal profile specifications of the Survivable Adaptable Fiber Optic Embedded Network (SAFENET) standard which results in the derivation of test sequences for a SAFENET profile. The results applying this new method to SAFENET's OSI and Lightweight profiles are presented.
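Deriving a test sequence from a state-machine specification can be illustrated with a transition tour over a deterministic FSM. This toy generator is our own simplification, not the System of Communicating Machines model itself: it repeatedly walks (breadth-first) to the nearest transition not yet exercised and takes it.

```python
from collections import deque

def transition_tour(transitions, start):
    """Produce one input sequence that exercises every transition of a
    deterministic FSM given as {(state, input): next_state}."""
    untested, seq, state = set(transitions), [], start
    while untested:
        queue, seen, path = deque([(state, [])]), {state}, None
        while queue:
            s, p = queue.popleft()
            hit = sorted(a for (s2, a) in untested if s2 == s)
            if hit:  # nearest state with an untested outgoing transition
                path = p + [hit[0]]
                untested.discard((s, hit[0]))
                state = transitions[(s, hit[0])]
                break
            for (s2, a), nxt in transitions.items():
                if s2 == s and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, p + [a]))
        if path is None:
            break  # remaining transitions are unreachable
        seq.extend(path)
    return seq

# Toy protocol: open a connection, optionally send, then close
fsm = {("idle", "open"): "active",
       ("active", "send"): "active",
       ("active", "close"): "idle"}
seq = transition_tour(fsm, "idle")
```

The tour revisits already-tested transitions ("open" a second time) only when needed to reach an untested one, which keeps the sequence short.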
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann; Usab, William J., Jr.
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
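The error-equidistribution criterion can be illustrated in one dimension: given a piecewise-constant error indicator on a uniform background grid over [0, 1], node locations are chosen so that each interval between consecutive nodes carries an equal share of the total error. This is a 1-D sketch only; the scheme in the abstract redistributes points over an unstructured triangular mesh.

```python
def equidistribute(error, n_nodes):
    """Place n_nodes in [0, 1] so each interval between consecutive
    nodes holds an equal share of the summed error indicator."""
    m, total = len(error), float(sum(error))
    cum = [0.0]
    for e in error:              # cumulative error over background cells
        cum.append(cum[-1] + e)
    nodes, j = [], 0
    for k in range(n_nodes):
        t = k * total / (n_nodes - 1)   # target cumulative error level
        while j < m and cum[j + 1] < t:
            j += 1
        frac = 0.0 if error[j] == 0 else (t - cum[j]) / error[j]
        nodes.append((j + frac) / m)    # linear interpolation in cell j
    return nodes

# Error concentrated in the last quarter of the domain
nodes = equidistribute([1.0, 1.0, 1.0, 3.0], 4)
```

Nodes cluster where the indicator is large, so no solution points are wasted where they are not required.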
Dawes, W. N.
1993-04-01
This paper describes recent developments to a three-dimensional, unstructured mesh, solution-adaptive Navier-Stokes solver. By adopting a simple, pragmatic but systematic approach to mesh generation, the range of simulations that can be attempted is extended toward arbitrary geometries. The combined benefits of the approach result in a powerful analytical ability. Solutions for a wide range of flows are presented, including a transonic compressor rotor, a centrifugal impeller, a steam turbine nozzle guide vane with casing extraction belt, the internal coolant passage of a radial inflow turbine, and a turbine disk cavity flow.
NASA Astrophysics Data System (ADS)
Alrachid, Houssam; Lelièvre, Tony; Talhouk, Raafat
2016-05-01
We prove global existence, uniqueness and regularity of the mild, L^p and classical solutions of a non-linear Fokker-Planck equation arising in an adaptive importance sampling method for molecular dynamics calculations. The non-linear term is related to a conditional expectation, and is thus non-local. The proof uses tools from the theory of semigroups of linear operators for the local existence result, and an a priori estimate based on a supersolution for the global existence result.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1994-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.
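The recursive-subdivision idea can be sketched with a quadtree in which a cell is split whenever its sample points disagree about being inside the body. Point sampling at the corners and centre is our simplification; the actual method uses exact polygon clipping to form the N-sided cut cells described above.

```python
def refine(cell, inside_body, depth, max_depth):
    """Recursively subdivide a square cell (x, y, size) that straddles
    the body boundary, judged from its four corners and its centre."""
    x, y, s = cell
    samples = [(x, y), (x + s, y), (x, y + s), (x + s, y + s),
               (x + s / 2, y + s / 2)]
    flags = [inside_body(px, py) for px, py in samples]
    if depth == max_depth or all(flags) or not any(flags):
        return [cell]  # leaf: uniformly in/out, or at the depth limit
    h = s / 2.0
    leaves = []
    for cx, cy in [(x, y), (x + h, y), (x, y + h), (x + h, y + h)]:
        leaves.extend(refine((cx, cy, h), inside_body, depth + 1, max_depth))
    return leaves

# "Body": a disc of radius 0.3 centred in the unit square
inside = lambda px, py: (px - 0.5) ** 2 + (py - 0.5) ** 2 < 0.09
leaves = refine((0.0, 0.0, 1.0), inside, 0, 4)
```

The leaf cells tile the domain exactly, with the smallest cells concentrated along the body boundary, which is where solution-adaptive refinement would continue from.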
Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
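The gap between a greedy heuristic and the exact optimum can be seen on a toy budgeted utility-maximization instance. All numbers here are illustrative, and the brute-force search stands in for the study's integer-programming solver, which scales far beyond toy sizes.

```python
from itertools import combinations

def greedy_select(utilities, costs, budget):
    """Greedy heuristic: repeatedly take the affordable parcel with the
    best utility-per-cost ratio."""
    order = sorted(range(len(utilities)),
                   key=lambda i: utilities[i] / costs[i], reverse=True)
    chosen, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            chosen.append(i)
            spent += costs[i]
    return sorted(chosen)

def optimal_select(utilities, costs, budget):
    """Exhaustive search for the true optimum (small instances only)."""
    best, best_u = [], 0.0
    for r in range(len(utilities) + 1):
        for combo in combinations(range(len(utilities)), r):
            if sum(costs[i] for i in combo) <= budget:
                u = sum(utilities[i] for i in combo)
                if u > best_u:
                    best, best_u = list(combo), u
    return best, best_u

utils, costs = [10.0, 7.0, 6.0], [5.0, 4.0, 3.0]
greedy = greedy_select(utils, costs, budget=7.0)
optimal, opt_u = optimal_select(utils, costs, budget=7.0)
```

Here the greedy pick (utility 10) forecloses the better pair worth 13, a small-scale analogue of the up-to-12% optimization gains reported above.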
Adaptive Filtering for Large Space Structures: A Closed-Form Solution
NASA Technical Reports Server (NTRS)
Rauch, H. E.; Schaechter, D. B.
1985-01-01
In a previous paper, Schaechter proposed using an extended Kalman filter to estimate adaptively the (slowly varying) frequencies and damping ratios of a large space structure. The time-varying gains for estimating the frequencies and damping ratios can be determined in closed form, so it is not necessary to integrate the matrix Riccati equations. After certain approximations, the time-varying adaptive gain can be written as the product of a constant matrix and a matrix derived from the components of the estimated state vector. This represents an important savings in computer resources and allows the adaptive filter to be implemented with approximately the same effort as the nonadaptive filter. The success of this new approach to adaptive filtering was demonstrated using synthetic data from a two-mode system.
Yancey, Paul H; Siebenaller, Joseph F
2015-06-01
Organisms experience a wide range of environmental factors such as temperature, salinity and hydrostatic pressure, which pose challenges to biochemical processes. Studies on adaptations to such factors have largely focused on macromolecules, especially intrinsic adaptations in protein structure and function. However, micromolecular cosolutes can act as cytoprotectants in the cellular milieu to affect biochemical function and they are now recognized as important extrinsic adaptations. These solutes, both inorganic and organic, have been best characterized as osmolytes, which accumulate to reduce osmotic water loss. Singly, and in combination, many cosolutes have properties beyond simple osmotic effects, e.g. altering the stability and function of proteins in the face of numerous stressors. A key example is the marine osmolyte trimethylamine oxide (TMAO), which appears to enhance water structure and is excluded from peptide backbones, favoring protein folding and stability and counteracting destabilizers like urea and temperature. Co-evolution of intrinsic and extrinsic adaptations is illustrated with high hydrostatic pressure in deep-living organisms. Cytosolic and membrane proteins and G-protein-coupled signal transduction in fishes under pressure show inhibited function and stability, while revealing a number of intrinsic adaptations in deep species. Yet, intrinsic adaptations are often incomplete, and those fishes accumulate TMAO linearly with depth, suggesting a role for TMAO as an extrinsic 'piezolyte' or pressure cosolute. Indeed, TMAO is able to counteract the inhibitory effects of pressure on the stability and function of many proteins. Other cosolutes are cytoprotective in other ways, such as via antioxidation. Such observations highlight the importance of considering the cellular milieu in biochemical and cellular adaptation. PMID:26085665
Analytical solutions and numerical procedures for minimum-weight Michell structures
NASA Astrophysics Data System (ADS)
Dewhurst, Peter
2001-03-01
A power-series method developed for plane-strain slip-line field theory is applied to the construction of minimum-weight Michell frameworks. The relationship between the space and force diagrams is defined as a basis for weight calculations. Analytical solutions obtained by the method are shown to agree with known solutions that were obtained through virtual displacement calculations. Framework boundary conditions are investigated, and matrix operators used in slip-line field theory are shown to apply to the force-free straight framework boundary-value problem. The matrix operator method is used to illustrate the transition from circular arc-based to cycloid-based Michell solutions. Finally, an example is given in the use of the method for evaluation of support boundary conditions.
NASA Astrophysics Data System (ADS)
Shigeta, Takemi; Young, D. L.; Liu, Chein-Shan
2012-08-01
The mixed boundary value problem of the Laplace equation is considered. The method of fundamental solutions (MFS) approximates the exact solution to the Laplace equation by a linear combination of independent fundamental solutions with different source points. The accuracy of the numerical solution depends on the distribution of source points. In this paper, a weighted greedy QR decomposition (GQRD) is proposed to choose significant source points by introducing a weighting parameter. An index called an average degree of approximation is defined to show the efficiency of the proposed method. From numerical experiments, it is concluded that the numerical solution tends to be more accurate when the average degree of approximation is larger, and that the proposed method can yield more accurate solutions with fewer source points than the conventional GQRD.
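The basic MFS collocation the abstract builds on can be sketched for a Dirichlet problem on the unit disc. This is a bare version with uniformly placed source points and a plain least-squares solve, without the weighted greedy QR selection that is the paper's contribution; all names are ours.

```python
import numpy as np

def mfs_matrix(pts, sources):
    """Collocation matrix of 2-D Laplace fundamental solutions ln|x - s_j|."""
    r = np.linalg.norm(pts[:, None, :] - sources[None, :, :], axis=2)
    return np.log(r)

# Dirichlet problem on the unit disc with exact harmonic solution u = x
theta = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
boundary = circle        # collocation points on the boundary
sources = 2.5 * circle   # source points placed outside the domain
coef, *_ = np.linalg.lstsq(mfs_matrix(boundary, sources),
                           boundary[:, 0], rcond=None)

# Evaluate the MFS approximation at an interior point; exact value is 0.3
u_inner = mfs_matrix(np.array([[0.3, 0.2]]), sources) @ coef
err = abs(u_inner[0] - 0.3)
```

Because each basis function is exactly harmonic inside the domain, only the boundary fit limits accuracy, which is why the choice of source points (the subject of the paper) matters so much.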
Finding the Genomic Basis of Local Adaptation: Pitfalls, Practical Solutions, and Future Directions.
Hoban, Sean; Kelley, Joanna L; Lotterhos, Katie E; Antolin, Michael F; Bradburd, Gideon; Lowry, David B; Poss, Mary L; Reed, Laura K; Storfer, Andrew; Whitlock, Michael C
2016-10-01
Uncovering the genetic and evolutionary basis of local adaptation is a major focus of evolutionary biology. The recent development of cost-effective methods for obtaining high-quality genome-scale data makes it possible to identify some of the loci responsible for adaptive differences among populations. Two basic approaches for identifying putatively locally adaptive loci have been developed and are broadly used: one that identifies loci with unusually high genetic differentiation among populations (differentiation outlier methods) and one that searches for correlations between local population allele frequencies and local environments (genetic-environment association methods). Here, we review the promises and challenges of these genome scan methods, including correcting for the confounding influence of a species' demographic history, biases caused by missing aspects of the genome, matching scales of environmental data with population structure, and other statistical considerations. In each case, we make suggestions for best practices for maximizing the accuracy and efficiency of genome scans to detect the underlying genetic basis of local adaptation. With attention to their current limitations, genome scan methods can be an important tool in finding the genetic basis of adaptive evolutionary change. PMID:27622873
A coupled multi-block solution procedure for spray combustion in complex geometries
NASA Technical Reports Server (NTRS)
Chen, Kuo-Huey; Shuen, Jian-Shun
1993-01-01
Turbulent spray-combusting flow in complex geometries is treated by a coupled implicit procedure that employs finite-rate chemistry and real-gas properties for the combustion, a stochastic separated model for the spray, and a multiblock treatment for the complex geometries. Illustrative numerical tests encompass a steady-state nonreacting backward-facing step flow, a premixed single-phase combustion flow, and a spray-combustion flow in a gas turbine combustor.
Warren, Rachel
2011-01-13
The papers in this volume discuss projections of climate change impacts upon humans and ecosystems under a global mean temperature rise of 4°C above preindustrial levels. Like most studies, they are mainly single-sector or single-region-based assessments. Even the multi-sector or multi-region approaches generally consider impacts in sectors and regions independently, ignoring interactions. Extreme weather and adaptation processes are often poorly represented and losses of ecosystem services induced by climate change or human adaptation are generally omitted. This paper addresses this gap by reviewing some potential interactions in a 4°C world, and also makes a comparison with a 2°C world. In a 4°C world, major shifts in agricultural land use and increased drought are projected, and an increased human population might increasingly be concentrated in areas remaining wet enough for economic prosperity. Ecosystem services that enable prosperity would be declining, with carbon cycle feedbacks and fire causing forest losses. There is an urgent need for integrated assessments considering the synergy of impacts and limits to adaptation in multiple sectors and regions in a 4°C world. By contrast, a 2°C world is projected to experience about one-half of the climate change impacts, with concomitantly smaller challenges for adaptation. Ecosystem services, including the carbon sink provided by the Earth's forests, would be expected to be largely preserved, with much less potential for interaction processes to increase challenges to adaptation. However, demands for land and water for biofuel cropping could reduce the availability of these resources for agricultural and natural systems. Hence, a whole system approach to mitigation and adaptation, considering interactions, potential human and species migration, allocation of land and water resources and ecosystem services, will be important in either a 2°C or a 4°C world. PMID:21115521
Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D
2012-10-01
Accurate predictive simulations of complex real-world applications require numerical approximations that, first, counter the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations, or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting, and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.
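The surplus-driven adaptation at the heart of hierarchical sparse-grid and wavelet collocation can be illustrated in one dimension with piecewise-linear hat functions, a simplified stand-in for the Riesz/wavelet basis the paper uses; the tolerance, level cap, and interval bookkeeping here are illustrative choices:

```python
import numpy as np

def adaptive_hierarchy(f, levels=8, tol=1e-3):
    """Surplus-driven 1D refinement sketch on [0, 1].

    A hierarchical node at an interval midpoint is kept only when its
    surplus, the difference between f and the piecewise-linear
    interpolant from coarser nodes, exceeds tol. Refinement thus
    clusters automatically around steep gradients.
    """
    nodes = {0.0: f(0.0), 1.0: f(1.0)}   # coarsest-level boundary nodes
    active = [(0.0, 1.0)]
    for _ in range(levels):
        nxt = []
        for a, b in active:
            m = 0.5 * (a + b)
            surplus = f(m) - 0.5 * (nodes[a] + nodes[b])
            if abs(surplus) > tol:       # refine only where error is large
                nodes[m] = f(m)
                nxt += [(a, m), (m, b)]
        active = nxt
    return nodes
```

Applied to a function with a sharp transition (e.g. a steep tanh profile), the returned node set is dense near the transition and sparse where the function is flat.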
A Procedure to Construct Exact Solutions of Nonlinear Fractional Differential Equations
Güner, Özkan; Cevikel, Adem C.
2014-01-01
We use the fractional transformation to convert nonlinear fractional partial differential equations into nonlinear ordinary differential equations. The Exp-function method is extended to solve fractional partial differential equations in the sense of the modified Riemann-Liouville derivative. We apply the Exp-function method to the time fractional Sharma-Tasso-Olver equation, the space fractional Burgers equation, and the time fractional fmKdV equation. As a result, we obtain some new exact solutions. PMID:24737972
AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin
2016-04-19
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov-Poisson equation
NASA Astrophysics Data System (ADS)
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin
2016-07-01
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov-Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
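The generalized finite difference (GFD) idea used by AP-Cloud, discretizing differential operators via a weighted least-squares fit over scattered neighbors, can be sketched for a 2D Laplacian. The inverse-distance weighting and the quadratic Taylor basis below are common illustrative choices, not necessarily the paper's exact formulation:

```python
import numpy as np

def gfd_laplacian(center, neighbors, values, value_c):
    """Estimate the Laplacian at `center` from scattered neighbors.

    Fits the 2D quadratic Taylor expansion
        f(c + d) - f(c) ~ fx*dx + fy*dy + fxx*dx^2/2 + fyy*dy^2/2 + fxy*dx*dy
    by weighted least squares and returns fxx + fyy. Needs at least
    5 neighbors in general position.
    """
    d = neighbors - center                   # offsets, shape (n, 2)
    dx, dy = d[:, 0], d[:, 1]
    # Columns multiply [fx, fy, fxx, fyy, fxy] respectively.
    M = np.column_stack([dx, dy, 0.5 * dx**2, 0.5 * dy**2, dx * dy])
    w = 1.0 / np.linalg.norm(d, axis=1)      # closer nodes count more
    coef, *_ = np.linalg.lstsq(M * w[:, None], (values - value_c) * w,
                               rcond=None)
    return coef[2] + coef[3]                 # fxx + fyy
```

Because the fit is done on arbitrary point clouds rather than a lattice, the stencil adapts to whatever node distribution the adaptive selection produces.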
ERIC Educational Resources Information Center
Riley, Barth B.; Dennis, Michael L.; Conrad, Kendon J.
2010-01-01
This simulation study sought to compare four different computerized adaptive testing (CAT) content-balancing procedures designed for use in a multidimensional assessment with respect to measurement precision, symptom severity classification, validity of clinical diagnostic recommendations, and sensitivity to atypical responding. The four…
A Simple Procedure for Constructing 5'-Amino-Terminated Oligodeoxynucleotides in Aqueous Solution
NASA Technical Reports Server (NTRS)
Bruick, Richard K.; Koppitz, Marcus; Joyce, Gerald F.; Orgel, Leslie E.
1997-01-01
A rapid method for the synthesis of oligodeoxynucleotides (ODNs) terminated by 5'-amino-5'-deoxythymidine is described. A 3'-phosphorylated ODN (the donor) is incubated in aqueous solution with 5'-amino-5'-deoxythymidine in the presence of N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide hydrochloride (EDC), extending the donor by one residue via a phosphoramidate bond. Template-directed ligation of the extended donor and an acceptor ODN, followed by acid hydrolysis, yields the acceptor ODN extended by a single 5'-amino-5'-deoxythymidine residue at its 5' terminus.
Triangle based adaptive stencils for the solution of hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Durlofsky, Louis J.; Engquist, Bjorn; Osher, Stanley
1992-01-01
A triangle-based total variation diminishing (TVD) scheme for the numerical approximation of hyperbolic conservation laws in two space dimensions is constructed. The novelty of the scheme lies in the nature of the preprocessing of the cell-averaged data, which is accomplished via a nearest-neighbor linear interpolation followed by a slope-limiting procedure. Two such limiting procedures are suggested. The resulting method is considerably simpler than other triangle-based non-oscillatory approximations that, like this scheme, approximate the flux to second-order accuracy. Numerical results for linear advection and Burgers' equation are presented.
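The slope-limiting step can be illustrated in one dimension with the classical minmod limiter; this is a 1D analogue of the limiting described above, not the paper's triangle-based construction:

```python
def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope when a and b share a
    sign, else zero. Zeroing the slope at extrema prevents the linear
    reconstruction from creating new oscillations (the TVD property)."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u, h=1.0):
    """Limited cell slopes from the two one-sided differences of cell
    averages u (interior cells only), a 1D stand-in for the
    nearest-neighbor interpolation plus limiting step."""
    return [minmod((u[i] - u[i - 1]) / h, (u[i + 1] - u[i]) / h)
            for i in range(1, len(u) - 1)]
```

For a step profile the limited slopes vanish at the discontinuity, so the reconstruction stays monotone there.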
NASA Astrophysics Data System (ADS)
Benfenati, A.; La Camera, A.; Carbillet, M.
2016-02-01
Aims: High-dynamic-range images of astrophysical objects are difficult to restore because very bright point-wise sources are surrounded by faint and smooth structures. We propose a method that enables the restoration of this kind of image by taking such sources into account while, at the same time, improving the contrast enhancement in the final image. Moreover, the proposed approach can help to detect the positions of the bright sources. Methods: The classical variational scheme in the presence of Poisson noise seeks the minimum of a functional composed of the generalized Kullback-Leibler function and a regularization functional; the latter is employed to preserve some characteristic of the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. The proposed scheme allows us to control the level of inexactness arising in the computed solution and permits us to employ an overestimate of the regularization parameter (which balances the trade-off between the Kullback-Leibler function and the Bregman distance). This aspect is fundamental, since estimating this kind of parameter is very difficult in the presence of Poisson noise. Results: The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources' positions are exactly known, the scheme provides very satisfactory results. In the case of inexact knowledge of the sources' positions, it can in addition give useful information on the true positions. Finally, the inexact Bregman scheme can also be used when information about the binary star's position concerns a connected region instead of isolated pixels.
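In standard notation, the variational model underlying such Poisson-noise restoration can be written as follows. This is a hedged reconstruction: the symbols $H$ (blur operator), $b$ (background), $y$ (observed image), and $\beta$ (regularization parameter) are assumed names, not taken from the paper:

```latex
% Classical variational scheme under Poisson noise (notation assumed):
\min_{x \ge 0} \; \mathrm{KL}(Hx + b,\, y) + \beta\, R(x),
\qquad
\mathrm{KL}(u, y) = \sum_i \Big( y_i \ln \frac{y_i}{u_i} + u_i - y_i \Big),
% The inexact Bregman procedure replaces R(x) at iteration k by an
% inexact Bregman distance D_R^{\varepsilon_k}(x, x^{(k)}), which is what
% permits the overestimated beta described in the abstract.
```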
This standard operating procedure describes the method used for preparing internal standard, surrogate recovery standard and calibration standard solutions for neutral analytes used for gas chromatography/mass spectrometry analysis.
An adaptive-mesh finite-difference solution method for the Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Luchini, Paolo
1987-02-01
An adjustable variable-spacing grid is presented which permits the addition or deletion of single points during iterative solutions of the Navier-Stokes equations by finite difference methods. The grid is designed for application to two-dimensional steady-flow problems which can be described by partial differential equations whose second derivatives are constrained to the Laplacian operator. An explicit Navier-Stokes equations solution technique defined for use with the grid incorporates a hybrid form of the convective terms. Three methods are developed for automatic modifications of the mesh during calculations.
Panico, Francesco; Sagliano, Laura; Grossi, Dario; Trojano, Luigi
2016-06-01
The aim of this study is to clarify the specific role of the cerebellum during the prism adaptation procedure (PAP), considering its involvement in early prism exposure (i.e., in the recalibration process) and in the post-exposure phase (i.e., in the after-effect, related to spatial realignment). For this purpose we interfered with cerebellar activity by means of cathodal transcranial direct current stimulation (tDCS), while young healthy individuals were asked to perform a pointing task on a touch screen before, during, and after wearing base-left prism glasses. The distance from the target dot in each trial (in pixels) on the horizontal and vertical axes was recorded and served as an index of accuracy. Results on the horizontal axis, which was shifted by the prism glasses, revealed that participants who received cathodal stimulation showed increased rightward deviation from the actual position of the target while wearing prisms and a larger leftward deviation from the target after prism removal. Results on the vertical axis, in which no shift was induced, revealed a general trend in the two groups to improve accuracy through the different phases of the task, and a trend, more visible in cathodally stimulated participants, toward worsening accuracy from the first to the last movements in each phase. Data on the horizontal axis allow us to confirm that the cerebellum is involved in all stages of the PAP, contributing to the early strategic recalibration process as well as to spatial realignment. On the vertical axis, the improving performance across the different stages of the task and the worsening accuracy within each task phase can be ascribed, respectively, to a learning process and to task-related fatigue. PMID:27031676
Copper-Adapted Suillus luteus, a Symbiotic Solution for Pines Colonizing Cu Mine Spoils
Adriaensen, K.; Vrålstad, T.; Noben, J.-P.; Vangronsveld, J.; Colpaert, J. V.
2005-01-01
Natural populations thriving in heavy-metal-contaminated ecosystems are often subjected to selective pressures for increased resistance to toxic metals. In the present study we describe a population of the ectomycorrhizal fungus Suillus luteus that colonized a toxic Cu mine spoil in Norway. We hypothesized that this population had developed adaptive Cu tolerance and was able to protect pine trees against Cu toxicity. We also tested for the existence of cotolerance to Cu and Zn in S. luteus. Isolates from Cu-polluted, Zn-polluted, and nonpolluted sites were grown in vitro on Cu- or Zn-supplemented medium. The Cu mine isolates exhibited high Cu tolerance, whereas the Zn-tolerant isolates were shown to be Cu sensitive, and vice versa. This indicates that the evolution of metal-specific tolerance mechanisms is strongly triggered by the pollution in the local environment. Cotolerance does not occur in the S. luteus isolates studied. In a dose-response experiment, the Cu sensitivity of nonmycorrhizal Pinus sylvestris seedlings was compared to the sensitivity of mycorrhizal seedlings colonized either by a Cu-sensitive or a Cu-tolerant S. luteus isolate. In nonmycorrhizal plants and plants colonized by the Cu-sensitive isolate, root growth and nutrient uptake were strongly inhibited under Cu stress conditions. In contrast, plants colonized by the Cu-tolerant isolate were hardly affected. The Cu-adapted S. luteus isolate provided excellent insurance against Cu toxicity in pine seedlings exposed to elevated Cu levels. Such a metal-adapted Suillus-Pinus combination might be suitable for large-scale land reclamation at phytotoxic metalliferous and industrial sites. PMID:16269769
NASA Technical Reports Server (NTRS)
Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.
1995-01-01
Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods have been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.
A scalable and adaptable solution framework within components of the CCSM
Evans, Katherine J; Rouson, Damian; Salinger, Andy; Taylor, Mark; White III, James B; Weijer, Wilbert
2009-01-01
A framework for a fully implicit solution method is implemented into (1) the High Order Methods Modeling Environment (HOMME), which is a spectral element dynamical core option in the Community Atmosphere Model (CAM), and (2) the Parallel Ocean Program (POP) model of the global ocean. Both of these models are components of the Community Climate System Model (CCSM). HOMME is a development version of CAM and provides a scalable alternative when run with an explicit time integrator. However, it suffers from the typical time-step size limit needed to maintain stability. POP uses a time-split semi-implicit time integrator that allows larger time steps but less accuracy when used with scale-interacting physics. A fully implicit solution framework allows larger time-step sizes and additional climate analysis capabilities, such as efficiency gains for model steady-state and spin-up calculations, without a loss in scalability. This framework is implemented into HOMME and POP using a new Fortran interface to the Trilinos solver library, ForTrilinos, which leverages several new capabilities in the current Fortran standard to maximize robustness and speed. The ForTrilinos solution template was also designed for interchangeability; other solution methods and capability improvements can be more easily implemented into the models as they are developed, without severely interacting with the code structure. The utility of this approach is illustrated with a test case for each of the climate component models.
NASA Astrophysics Data System (ADS)
Ghattas, O.; Burstedde, C.; Stadler, G.; Wilcox, L. C.; Tu, T.; Issac, T.; Gurnis, M.; Alisic, L.; Tan, E.; Zhong, S.
2009-12-01
Many problems in solid earth geophysics are characterized by dynamics occurring on a wide range of length and time scales, placing the solution of the governing partial differential equations (PDEs) for such problems among the grand challenges of computational geophysics. One approach to overcoming the tyranny of scales is adaptive mesh refinement (AMR), which locally and dynamically adapts the mesh to resolve spatio-temporal scales and features of interest. For example, we are interested in modeling global mantle convection with nonlinear rheology and kilometer-scale resolution at faulted plate boundaries. Another problem of interest is modeling the dynamics of polar ice sheets with fine resolution in the vicinity of stick-slip transitions. Geophysical inverse problems characterized by a wide range of medium properties can also benefit from AMR as the earth model is updated. While AMR promises to help overcome the challenges inherent in modeling multiscale problems, the benefits are difficult to achieve in practice, particularly on petascale computers that are essential for frontier problems. Due to the complex dynamic data structures and communication patterns, and frequent data exchange and redistribution, scaling dynamic AMR to tens of thousands of processors has long been considered a challenge. Another difficulty is extending parallel AMR techniques to high-order-accurate, complex-geometry-conforming finite element methods that are favored for many classes of solid earth geophysical problems. Here, we present the ALPS (Adaptive Large-scale Parallel Simulations) framework for parallel adaptive solution of PDEs. ALPS includes the octor and p4est libraries for parallel dynamic mesh adaptivity on single-octree-based and forest-of-octree-based geometries, respectively, and the mangll library for arbitrary-order hexahedral continuous and discontinuous finite/spectral element discretizations on general multi-octree geometries. ALPS has been shown to scale well
NASA Astrophysics Data System (ADS)
Zheng, H. W.; Shu, C.; Chew, Y. T.
2008-07-01
In this paper, an object-oriented, quadrilateral-mesh-based solution adaptive algorithm for the simulation of compressible multi-fluid flows is presented. The HLLC scheme (Harten, Lax and van Leer approximate Riemann solver with the Contact wave restored) is extended to adaptively solve compressible multi-fluid flows under complex geometry on unstructured mesh. It is also extended to second-order accuracy by using MUSCL extrapolation. The node, edge and cell are arranged in such an object-oriented manner that each of them inherits from a basic object. A home-made doubly linked list is designed to manage these objects, so that inserting new objects and removing existing objects (nodes, edges and cells) is independent of the number of objects, with complexity of only O(1). In addition, the cells with different levels are further stored in different lists. This avoids the recursive calculation of solutions of mother (non-leaf) cells. Thus, high efficiency is obtained due to these features. Besides, as compared to other cell-edge adaptive methods, the separation of nodes reduces the memory required for redundant nodes, especially in cases where the level number is large or the space dimension is three. Five two-dimensional examples are used to examine its performance. These examples include a vortex evolution problem, an interface-only problem under structured mesh and unstructured mesh, a bubble explosion under water, bubble-shock interaction, and shock-interface interaction inside a cylindrical vessel. Numerical results indicate that there is no oscillation of pressure or velocity across the interface, and that it is feasible to apply the method to compressible multi-fluid flows with large density ratio (1000) and strong shock wave (pressure ratio of 10,000) interaction with the interface.
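The O(1) insert/remove property that the abstract attributes to its home-made doubly linked list comes from splicing by node handle rather than searching. A minimal sketch (generic, not the paper's C++ implementation):

```python
class Node:
    """List element; the owning grid object (cell/edge/node) would keep
    this handle so it can be removed later without a search."""
    __slots__ = ("data", "prev", "next")
    def __init__(self, data):
        self.data, self.prev, self.next = data, None, None

class DList:
    """Doubly linked list with O(1) push and O(1) remove-by-handle."""
    def __init__(self):
        self.head = self.tail = None

    def push(self, data):
        n = Node(data)
        n.prev = self.tail
        if self.tail:
            self.tail.next = n
        else:
            self.head = n
        self.tail = n
        return n                      # handle for later O(1) removal

    def remove(self, n):
        # Splice out using the stored links; cost is independent of length.
        if n.prev: n.prev.next = n.next
        else:      self.head = n.next
        if n.next: n.next.prev = n.prev
        else:      self.tail = n.prev
```

Keeping one such list per refinement level, as the abstract describes, then lets the solver iterate over leaf cells of a given level directly.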
NASA Technical Reports Server (NTRS)
Schoenauer, W.; Daeubler, H. G.; Glotz, G.; Gruening, J.
1986-01-01
An implicit difference procedure for the solution of the equations for a chemically reacting hypersonic boundary layer is described. Difference forms of arbitrary error order in the x-y coordinate plane were used to derive estimates of the discretization error. Computational complexity and time were minimized by the use of this difference method, and the iteration of the nonlinear boundary-layer equations was regulated by the discretization error. Velocity and temperature profiles are presented for Mach 20.14 and Mach 18.5, along with the mass-flow factor, Stanton number, and friction-drag coefficient.
A local anisotropic adaptive algorithm for the solution of low-Mach transient combustion problems
NASA Astrophysics Data System (ADS)
Carpio, Jaime; Prieto, Juan Luis; Vera, Marcos
2016-02-01
A novel numerical algorithm for the simulation of transient combustion problems at low Mach and moderately high Reynolds numbers is presented. These problems are often characterized by the existence of a large disparity of length and time scales, resulting in the development of directional flow features, such as slender jets, boundary layers, mixing layers, or flame fronts. This makes local anisotropic adaptive techniques quite advantageous computationally. In this work we propose a local anisotropic refinement algorithm using, for the spatial discretization, unstructured triangular elements in a finite element framework. For the time integration, the problem is formulated in the context of semi-Lagrangian schemes, introducing the semi-Lagrange-Galerkin (SLG) technique as a better alternative to the classical semi-Lagrangian (SL) interpolation. The good performance of the numerical algorithm is illustrated by solving a canonical laminar combustion problem: the flame/vortex interaction. First, a premixed methane-air flame/vortex interaction with simplified transport and chemistry description (Test I) is considered. Results are found to be in excellent agreement with those in the literature, proving the superior performance of the SLG scheme when compared with the classical SL technique, and the advantage of using anisotropic adaptation instead of uniform meshes or isotropic mesh refinement. As a more realistic example, we then conduct simulations of non-premixed hydrogen-air flame/vortex interactions (Test II) using a more complex combustion model which involves state-of-the-art transport and chemical kinetics. In addition to the analysis of the numerical features, this second example allows us to perform a satisfactory comparison with experimental visualizations taken from the literature.
A cellular automaton model adapted to sandboxes to simulate the transport of solutes
NASA Astrophysics Data System (ADS)
Lora, Boris; Donado, Leonardo; Castro, Eduardo; Bayuelo, Alfredo
2016-04-01
The increasing use of groundwater for human consumption and the rising contamination levels of these water sources make it imperative to gain a deeper understanding of how contaminants are transported by water, in particular through a heterogeneous porous medium. Accordingly, the present research aims to design a model that simulates the transport of solutes through a heterogeneous porous medium using cellular automata. Cellular automata (CA) are a class of spatially (pixels) and temporally discrete mathematical systems characterized by local interactions (neighborhoods). The pixel size and the CA neighborhood were determined so as to reproduce the solute behavior accurately (Ilachinski, 2001). For the design and corresponding validation of the CA model, different conservative tracer tests were carried out in a sandbox packed heterogeneously with coarse sand (size #20, grain diameter 0.85 to 0.6 mm) and clay. Uranine and a saline NaCl solution were used as tracers, with snapshots taken every 20 seconds. A calibration curve (pixel intensity vs. concentration) was used to obtain concentration maps. The sandbox was constructed of acrylic (0.8 cm thick) with dimensions of 70 x 45 x 4 cm and had a grid of 35 transversal holes, each 4 mm in diameter, with a uniform separation of 10 cm between them. To validate the CA model, a metric was used consisting of the fraction of correctly predicted pixels per image over the entire test run. The CA model shows that calibrating the pixels and neighborhoods usually yields over 60% correct predictions. This suggests that the CA model could be useful in further research on the transport of contaminants in hydrogeology.
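A minimal conservative CA update rule of the general kind described, in which each pixel exchanges solute with its von Neumann (4-cell) neighborhood, might look like the sketch below. The exchange fraction `kappa` and the periodic boundaries are illustrative assumptions; the paper's calibrated rule and neighborhood are determined experimentally:

```python
import numpy as np

def ca_step(c, kappa=0.2):
    """One update of a toy solute-transport CA on a 2D concentration
    grid c. Each cell gains kappa times the difference between its
    4-neighborhood and itself; the rule conserves total solute mass.
    np.roll gives periodic boundaries for simplicity."""
    up    = np.roll(c, -1, axis=0)
    down  = np.roll(c,  1, axis=0)
    left  = np.roll(c, -1, axis=1)
    right = np.roll(c,  1, axis=1)
    return c + kappa * (up + down + left + right - 4.0 * c)
```

Heterogeneity of the porous medium could be modeled by making `kappa` a per-pixel array (low in clay pixels, high in coarse sand), which is one way to connect such a rule to the sandbox experiments.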
Practical Study and Solutions Adapted For The Road Noise In The Algiers City
NASA Astrophysics Data System (ADS)
Iddir, R.; Boukhaloua, N.; Saadi, T.
Today, as the city spreads over a large area, the development of the road network has logically followed this movement, generating a considerable impact on the environment. The environment is an open system resulting from the interaction between man and nature, and it is affected on all sides by the different means of transport and by their growing demand for mobility. The development of the contemporary city has created problems tied to the environment, among them road noise. Road noise is a complex phenomenon, essentially because of its human sensory effects; its impact on the environment is considerable and directly concerns quality of life, mainly in densely populated zones. Noise pollution has reached its paroxysm: the road network of Algiers was not designed to satisfy noise-pollution requirements, so soundproofing arrangements should be adopted to meet these new requirements for acoustic comfort. All of these elements led to a process aimed at attenuating the nuisance caused by road traffic, essentially through actions targeting the vehicles, the structure of the road, and the immediate environment of the road-structure system. From these results, we note that the noise-nuisance situation in this heavily trafficked zone is worrying, especially for residents' health.
Tzvetkova, G.V.; Resconi, G.
1999-10-01
The paper deals with the forward dynamics problem in robotics. The solution of the problem is found on the basis of a new theory, called general system logical theory, which uses operators and transformations of operators extensively to study objects and their relations in the real world. The basic notions and operator equations are given. The forward dynamics problem is presented as a diagram, called an elementary logical system. The diagram unites a set of variables, a set of operators, and a set of relations between the operators. A generic form of a recursive process using the operators and the Lie product is described, and the convergence of the process is discussed. Original operator procedures dedicated to the links of the robot are proposed. The desired solution is found at the limit of the recursive process. An example is given as well. The result obtained illustrates the ability of the theory to study robotics problems. The forward dynamics problem is solved in a new way, without inversion of the mass matrix of the robot.
NASA Astrophysics Data System (ADS)
Grenga, Temistocle
The aim of this research is to further develop a dynamically adaptive algorithm based on wavelets that is able to solve efficiently multi-dimensional compressible reactive flow problems. This work demonstrates the great potential for the method to perform direct numerical simulation (DNS) of combustion with detailed chemistry and multi-component diffusion. In particular, it addresses the performance obtained using a massive parallel implementation and demonstrates important savings in memory storage and computational time over conventional methods. In addition, fully-resolved simulations of challenging three dimensional problems involving mixing and combustion processes are performed. These problems are particularly challenging due to their strong multiscale characteristics. For these solutions, it is necessary to combine the advanced numerical techniques applied to modern computational resources.
NASA Astrophysics Data System (ADS)
Wang, Shuoqin; Verbrugge, Mark; Wang, John S.; Liu, Ping
2011-10-01
We report the development of an adaptive, multi-parameter battery state estimator based on the direct solution of the differential equations that govern an equivalent circuit representation of the battery. The core of the estimator includes two sets of inter-related equations corresponding to discharge and charge events respectively. Simulation results indicate that the estimator gives accurate prediction and numerically stable performance in the regression of model parameters. The estimator is implemented in a vehicle-simulated environment to predict the state of charge (SOC) and the charge and discharge power capabilities (state of power, SOP) of a lithium ion battery. Predictions for the SOC and SOP agree well with experimental measurements, demonstrating the estimator's application in battery management systems. In particular, this new approach appears to be very stable for high-frequency data streams.
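A generic recursive-least-squares (RLS) update is one common way to regress equivalent-circuit parameters, such as open-circuit voltage and ohmic resistance, from streaming voltage/current data. The sketch below is an assumed stand-in for illustration, not the paper's exact estimator, which solves the circuit's governing differential equations directly:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One recursive-least-squares step with forgetting factor lam.

    theta : current parameter estimate (e.g. [OCV, R] of a simple
            equivalent circuit), P : covariance matrix,
    x : regressor (e.g. [1, -current]), y : measured terminal voltage.
    """
    Px = P @ x
    k = Px / (lam + x @ Px)               # gain vector
    theta = theta + k * (y - x @ theta)   # correct by prediction error
    P = (P - np.outer(k, Px)) / lam       # covariance update
    return theta, P
```

Feeding the recursion a stream of `(current, voltage)` samples recovers the circuit parameters, from which SOC and power capability would then be derived in a full battery-management implementation.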
Ma Xiang; Zabaras, Nicholas
2010-05-20
A computational methodology is developed to address the solution of high-dimensional stochastic problems. It utilizes the high-dimensional model representation (HDMR) technique in the stochastic space to represent the model output as a finite hierarchical correlated function expansion in terms of the stochastic inputs, starting from lower-order and proceeding to higher-order component functions. HDMR is efficient at capturing the high-dimensional input-output relationship, such that the behavior of many physical systems can be modeled to good accuracy by only the first few lower-order terms. An adaptive version of HDMR is also developed to automatically detect the important dimensions and construct higher-order terms using only those dimensions. The newly developed adaptive sparse grid collocation (ASGC) method is incorporated into HDMR to solve the resulting sub-problems. By integrating HDMR and ASGC, it is computationally possible to construct a low-dimensional stochastic reduced-order model of the high-dimensional stochastic problem and easily perform various statistical analyses on the output. Several numerical examples involving elementary mathematical functions and fluid mechanics problems are considered to illustrate the proposed method. The cases examined show that the method provides accurate results for stochastic dimensionality as high as 500, even with large input variability. The efficiency of the proposed method is examined by comparison with Monte Carlo (MC) simulation.
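The first-order truncation of a cut-HDMR expansion can be sketched directly: a constant term at a reference point plus one-dimensional component functions along each coordinate axis. The tabulation on fixed per-dimension grids below is a simplification; the method described above would resolve each component function adaptively with sparse-grid collocation:

```python
import numpy as np

def hdmr_first_order(f, xbar, grids):
    """Cut-HDMR through reference point xbar.

    Returns f0 = f(xbar) and, for each dimension i, the first-order
    component function f_i(x_i) = f(xbar with entry i set to x_i) - f0,
    tabulated on the given 1D grids. Higher-order interaction terms are
    omitted, as in the truncated expansions the abstract describes.
    """
    f0 = f(xbar)
    comps = []
    for i, g in enumerate(grids):
        vals = []
        for xi in g:
            x = xbar.copy()
            x[i] = xi                  # vary one input, freeze the rest
            vals.append(f(x) - f0)
        comps.append(np.array(vals))
    return f0, comps
```

For a purely additive model the first-order expansion is exact, which is why the first few HDMR terms often suffice when input interactions are weak.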
ERIC Educational Resources Information Center
Colorado State Dept. of Education, Denver.
This manual is designed to assist Colorado personnel in developing and providing adapted physical education, occupational therapy, and physical therapy to Colorado public school students who have needs in the motor area. Guidelines are presented that have been developed to focus on the problems encountered by students with needs in the physical…
Benantar, M.; Flaherty, J.E.
1990-01-01
We consider the parallel assembly and solution on shared-memory computers of linear algebraic systems arising from the finite element discretization of two-dimensional linear self-adjoint elliptic problems. Stiffness matrix assembly and conjugate gradient solution of the linear system using element-by-element and symmetric successive over-relaxation preconditioners are processed in parallel, with computations scheduled on noncontiguous regions in order to minimize process synchronization. An underlying quadtree structure, used for automatic mesh generation and solution-based mesh refinement, is separated into disjoint regions called quadrants using a six-color procedure with linear time complexity.
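The scheduling idea — color the regions so that same-colored regions share no boundary and can therefore be assembled concurrently without locking — can be sketched with a generic greedy coloring. The paper's six-color procedure exploits the quadtree structure for its linear-time bound; this generic version is only illustrative.

```python
# Greedy coloring of a region-adjacency graph: no two adjacent regions
# receive the same color, so each color class can be processed in parallel
# without synchronization on shared matrix entries.
def color_regions(adjacency):
    """adjacency: dict region -> set of neighboring regions; returns region -> color."""
    color = {}
    for r in sorted(adjacency):
        used = {color[n] for n in adjacency[r] if n in color}
        c = 0
        while c in used:                 # smallest color unused by neighbors
            c += 1
        color[r] = c
    return color
```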
Amiri, Mohammad J; Abedi-Koupai, Jahangir; Eslamian, Sayed S; Mousavi, Sayed F; Hasheminejad, Hasti
2013-01-01
To evaluate the performance of the Adaptive Neural-Based Fuzzy Inference System (ANFIS) model in estimating the efficiency of Pb (II) ion removal from aqueous solution by ostrich bone ash, a batch experiment was conducted. Five operational parameters, including adsorbent dosage (C(s)), initial concentration of Pb (II) ions (C(o)), initial pH, temperature (T) and contact time (t), were taken as the input data and the adsorption efficiency (AE) of bone ash as the output. Based on 31 different structures, 5 ANFIS models were tested against the measured adsorption efficiency to assess the accuracy of each model. The results showed that ANFIS5, which used all input parameters, was the most accurate (RMSE = 2.65 and R(2) = 0.95) and ANFIS1, which used only the contact time input, was the worst (RMSE = 14.56 and R(2) = 0.46). In ranking the models, ANFIS4, ANFIS3 and ANFIS2 ranked second, third and fourth, respectively. The sensitivity analysis revealed that the estimated AE is most sensitive to the contact time, followed by pH, initial concentration of Pb (II) ions, adsorbent dosage, and temperature. The results showed that all ANFIS models overestimated the AE. In general, this study confirmed the capability of the ANFIS model as an effective tool for estimation of AE. PMID:23383640
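The RMSE and R² scores used to rank the five ANFIS models can be computed as in this small pure-Python sketch (the data values in the usage test are made up, not the paper's measurements).

```python
# Goodness-of-fit metrics used to compare model predictions against
# measured adsorption efficiencies.
import math

def rmse(obs, pred):
    """Root-mean-square error between observations and predictions."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```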
Fukuda, Ryoichi Ehara, Masahiro; Cammi, Roberto
2014-02-14
A perturbative approximation of the state-specific polarizable continuum model (PCM) symmetry-adapted cluster-configuration interaction (SAC-CI) method is proposed for efficient calculations of the electronic excitations and absorption spectra of molecules in solution. This first-order PCM SAC-CI method considers the solvent effects on the energies of excited states up to first order, using the zeroth-order wavefunctions. The method avoids the costly iterative procedure of self-consistent reaction field calculations. The first-order PCM SAC-CI calculations reproduce well the results obtained by the iterative method for various types of excitations of molecules in polar and nonpolar solvents. The first-order contribution is significant for the excitation energies. The results obtained by the zeroth-order PCM SAC-CI, which uses the fixed ground-state reaction field for the excited-state calculations, deviate from the iterative results by about 0.1 eV, and the zeroth-order PCM SAC-CI cannot predict even the direction of solvent shifts in n-hexane in many cases. The first-order PCM SAC-CI is applied to the solvatochromism of (2,2′-bipyridine)tetracarbonyltungsten [W(CO)₄(bpy), bpy = 2,2′-bipyridine] and bis(pentacarbonyltungsten)pyrazine [(OC)₅W(pyz)W(CO)₅, pyz = pyrazine]. The SAC-CI calculations reveal the detailed character of the excited states and the mechanisms of the solvent shifts. The energies of the metal-to-ligand charge transfer states are highly sensitive to the solvent. The first-order PCM SAC-CI reproduces well the observed absorption spectra of the tungsten carbonyl complexes in several solvents.
2010-01-01
Background Standardised translation and cross-cultural adaptation (TCCA) procedures are vital to describe language translation and cultural adaptation, and to evaluate quality factors of transformed outcome measures. No TCCA procedure for objectively-assessed outcome (OAO) measures exists. Furthermore, no official German version of the Canadian Chedoke Arm and Hand Activity Inventory (CAHAI) is available. Methods An eight-step TCCA procedure for OAO measures (TCCA-OAO) was developed, based on the existing TCCA procedure for patient-reported outcomes. The TCCA-OAO procedure was applied to develop a German version of the CAHAI (CAHAI-G). Inter-rater reliability of the CAHAI-G was determined through video rating of the CAHAI-G. Validity of the CAHAI-G was assessed using the Chedoke-McMaster Stroke Assessment (CMSA). All ratings were performed by trained, independent raters. In a cross-sectional study, patients were tested within 31 hours after the initial CAHAI-G scoring for their motor function level, using the subscales for arm and hand of the CMSA. Inpatients and outpatients of the occupational therapy department who had experienced a cerebrovascular accident or an intracerebral haemorrhage were included. Results Performance of 23 patients (mean age 69.4, SD 12.9; six females; mean time since stroke onset: 1.5 years, SD 2.5 years) was assessed. High inter-rater reliability was found, with ICCs for the 4 CAHAI-G versions (13, 9, 8, 7 items) ranging between r = 0.96 and r = 0.99 (p < 0.001). Correlation between the CAHAI-G and the CMSA subscales for hand and arm was r = 0.74 (p < 0.001) and r = 0.67 (p < 0.001), respectively. Internal consistency of the CAHAI-G for all four versions ranged between α = 0.974 and α = 0.979. Conclusions The TCCA-OAO procedure was validated regarding its feasibility and applicability for objectively-assessed outcome measures. The resulting German CAHAI can be used as a valid and reliable assessment for bilateral upper limb performance in
Lindskog, Marcus; Winman, Anders; Juslin, Peter; Poom, Leo
2013-01-01
Two studies investigated the reliability and predictive validity of commonly used measures and models of Approximate Number System (ANS) acuity. Study 1 investigated reliability by both an empirical approach and a simulation of maximum obtainable reliability under ideal conditions. Results showed that common measures of the Weber fraction (w) are reliable only when using a substantial number of trials, even under ideal conditions. Study 2 compared different purported measures of ANS acuity with respect to convergent and predictive validity in a within-subjects design and evaluated an adaptive test using the ZEST algorithm. Results showed that the adaptive measure can reduce the number of trials needed to reach acceptable reliability. Only direct tests with non-symbolic numerosity discriminations of stimuli presented simultaneously were related to arithmetic fluency. This correlation remained when controlling for general cognitive ability and perceptual speed. Further, the purported indirect measure of ANS acuity in terms of the Numeric Distance Effect (NDE) was not reliable and showed no sign of predictive validity. The non-symbolic NDE for reaction time was significantly related to direct w estimates in the direction contrary to expectation. Easier stimuli were found to be more reliable, but only harder (7:8 ratio) stimuli contributed to predictive validity. PMID:23964256
NASA Astrophysics Data System (ADS)
Movahed, Pooya; Johnsen, Eric
2013-04-01
The evolution of high-speed initially laminar multicomponent flows into a turbulent multi-material mixing entity, e.g., in the Richtmyer-Meshkov instability, poses significant challenges for high-fidelity numerical simulations. Although high-order shock- and interface-capturing schemes represent such flows well at early times, the excessive numerical dissipation thereby introduced and the resulting computational cost prevent the resolution of small-scale features. Furthermore, unless special care is taken, shock-capturing schemes generate spurious pressure oscillations at material interfaces where the specific heats ratio varies. To remedy these problems, a solution-adaptive high-order central/shock-capturing finite difference scheme is presented for efficient computations of compressible multi-material flows, including turbulence. A new discontinuity sensor discriminates between smooth and discontinuous regions. The appropriate split form of (energy preserving) central schemes is derived for flows of smoothly varying specific heats ratio, such that spurious pressure oscillations are prevented. High-order accurate weighted essentially non-oscillatory (WENO) schemes are applied only at discontinuities; the standard approach is followed for shocks and contacts, but material discontinuities are treated by interpolating the primitive variables. The hybrid nature of the method allows for efficient and accurate computations of shocks and broadband motions, and is shown to prevent pressure oscillations for varying specific heats ratios. The method is assessed through one-dimensional problems with shocks, sharp interfaces and smooth distributions of specific heats ratio, and the two-dimensional single-mode inviscid and viscous Richtmyer-Meshkov instability with re-shock.
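A minimal one-dimensional analogue of the solution-adaptive hybrid idea can be sketched as follows: a sensor based on normalized second differences flags non-smooth cells, where a dissipative upwind flux replaces the non-dissipative central flux. The sensor form, threshold, and flux choices are simplified assumptions; the paper's discontinuity sensor and WENO treatment are more elaborate.

```python
# Hybrid central/upwind interface fluxes for linear advection u_t + a u_x = 0
# on a periodic grid. A normalized second-difference sensor selects the flux.
def hybrid_flux(u, a=1.0, threshold=0.1):
    """Return one flux per interface (between cell i and i+1, periodic)."""
    n = len(u)
    flux = []
    for i in range(n):
        l, r = u[i], u[(i + 1) % n]
        d2 = abs(u[(i + 1) % n] - 2.0 * u[i] + u[i - 1])          # 2nd difference
        scale = abs(u[(i + 1) % n]) + 2.0 * abs(u[i]) + abs(u[i - 1]) + 1e-12
        if d2 / scale > threshold:
            flux.append(a * (l if a > 0 else r))   # discontinuity: upwind
        else:
            flux.append(0.5 * a * (l + r))         # smooth: central
    return flux
```

In the smooth limit the sensor vanishes and the scheme is purely central (non-dissipative); near a jump the upwind flux supplies the dissipation needed for stability, mirroring the central/WENO switch of the paper.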
Borges, Sivanildo S; Vieira, Gláucia P; Reis, Boaventura F
2007-01-01
In this work, an automatic device to deliver titrant solution into a titration chamber with the ability to determine the dispensed volume of solution, with good precision independent of both elapsed time and flow rate, is proposed. A glass tube maintained at the vertical position was employed as a container for the titrant solution. Electronic devices were coupled to the glass tube in order to control its filling with titrant solution, as well as the stepwise solution delivery into the titration chamber. The detection of the titration end point was performed employing a photometer designed using a green LED (λ = 545 nm) and a phototransistor. The titration flow system comprised three-way solenoid valves, which were assembled to allow the solution-container loading and the titration run to be carried out automatically. The device for the solution volume determination was designed employing an infrared LED (λ = 930 nm) and a photodiode. When the solution volume delivered from the proposed device was within the range of 5 to 105 μl, a linear relationship (R = 0.999) between the delivered volumes and the generated potential difference was achieved. The usefulness of the proposed device was proved by performing photometric titration of hydrochloric acid solution with a standardized sodium hydroxide solution, using phenolphthalein as an external indicator. The achieved results presented a relative standard deviation of 1.5%. PMID:18317510
NASA Astrophysics Data System (ADS)
Hawken, D. F.; Gottlieb, J. J.; Hansen, J. S.
1991-08-01
Results are presented from a search of the literature on adaptive numerical methods for the solution of PDEs that use node movement to achieve low truncation-error levels while minimizing the number of nodes required in the calculation. The applications in question encompass nonstationary flow problems containing moving regions of rapid flow-variable change amid regions of comparatively smooth variation. Flows involving shock waves, contact surfaces, slipstreams, boundary layers, and phase-change interfaces are shown to be modeled with both great precision and economy of execution if the nodes are concentrated in the regions of rapid flow-variable change.
NASA Astrophysics Data System (ADS)
Schmitt, Kara Anne
This research aims to show that strict adherence to procedures and rigid compliance with process in the US nuclear industry may not prevent incidents or increase safety. According to the Institute of Nuclear Power Operations, the nuclear power industry has seen a recent rise in events, and this research claims that a contributing factor to this rise is organizational and cultural, rooted in people's overreliance on procedures and policy. Understanding the proper balance of function allocation, automation, and human decision-making is imperative to creating a nuclear power plant that is safe, efficient, and reliable. This research claims that new generations of operators are less engaged in thinking because they have been instructed to follow procedures to a fault. According to operators, they were once expected to know the plant and its interrelations, but organizationally more importance is now placed on following procedure and policy. Literature reviews were performed, experts were questioned, and a model for context analysis was developed. The Context Analysis Method for Identifying Design Solutions (CAMIDS) model was created, verified, and validated through both peer review and application to real-world scenarios in active nuclear power plant simulators. These experiments supported the claim that strict adherence and rigid compliance to procedures may not increase safety, by examining the industry's propensity for following incorrect procedures and the cases in which doing so directly affects the safety or security of the plant. The findings of this research indicate that younger generations of operators rely heavily on procedures, and that organizational pressure for required compliance may lead to incidents within the plant because operators feel pressured into following rules and policy over performing the correct actions in a timely manner. The findings support computer-based procedures, efficient alarm systems, and skill-of-the-craft matrices. The solution to
NASA Astrophysics Data System (ADS)
Zhu, Li; He, Yongxiang; Xue, Haidong; Chen, Leichen
Traditional genetic algorithms (GAs) suffer from premature convergence when applied to scheduling problems. To make the crossover and mutation operators self-adaptive, this paper proposes a self-adaptive GA targeting multitask scheduling optimization under limited resources. Experimental results show that the proposed algorithm outperforms the traditional GA in its ability to handle complex task-scheduling optimization.
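One common way to realize self-adaptive operators, sketched below, is to let each individual carry its own mutation rate, which evolves along with the solution and counters premature convergence. The fitness function, bitstring encoding, and all parameters are illustrative assumptions, not the paper's algorithm.

```python
# Self-adaptive GA sketch: each individual is (bitstring, mutation_rate);
# the rate is inherited, averaged, and perturbed, so selection also tunes it.
import random

def self_adaptive_ga(fitness, length, pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [([rng.randint(0, 1) for _ in range(length)], rng.uniform(0.01, 0.2))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: fitness(ind[0]), reverse=True)
        elite = pop[: pop_size // 2]               # elitism: best half survives
        children = []
        while len(elite) + len(children) < pop_size:
            (p1, r1), (p2, r2) = rng.sample(elite, 2)
            cut = rng.randrange(1, length)
            child = p1[:cut] + p2[cut:]            # one-point crossover
            rate = 0.5 * (r1 + r2) * rng.uniform(0.8, 1.25)  # rate evolves too
            child = [b ^ (rng.random() < rate) for b in child]
            children.append((child, rate))
        pop = elite + children
    best = max(pop, key=lambda ind: fitness(ind[0]))
    return best[0], fitness(best[0])
```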
Edwards, Andrew G; Teoh, Mark; Hodges, Ryan J; Palma-Dias, Ricardo; Cole, Stephen A; Fung, Alison M; Walker, Susan P
2016-06-01
The benefits of fetoscopic laser photocoagulation (FLP) for treatment of twin-to-twin transfusion syndrome (TTTS) have been recognized for over a decade, yet access to FLP remains limited in many settings. This means at a population level, the potential benefits of FLP for TTTS are far from being fully realized. In part, this is because there are many centers where the case volume is relatively low. This creates an inevitable tension; on one hand, wanting FLP to be readily accessible to all women who may need it, yet on the other, needing to ensure that a high degree of procedural competence is maintained. Some of the solutions to these apparently competing priorities may be found in novel training solutions to achieve, and maintain, procedural proficiency, and with the increased utilization of 'competence based' assessment and credentialing frameworks. We suggest an under-utilized approach is the development of collaborative surgical services, where pooling of personnel and resources can improve timely access to surgery, improve standardized assessment and management of TTTS, minimize the impact of the surgical learning curve, and facilitate audit, education, and research. When deciding which centers should offer laser for TTTS and how we decide, we propose some solutions from a collaborative model. PMID:27087260
NASA Technical Reports Server (NTRS)
Felici, Helene M.; Drela, Mark
1993-01-01
A new approach based on the coupling of an Eulerian and a Lagrangian solver, aimed at reducing the numerical diffusion errors of standard Eulerian time-marching finite-volume solvers, is presented. The approach is applied to the computation of the secondary flow in two bent pipes and the flow around a 3D wing. Using convective point markers the Lagrangian approach provides a correction of the basic Eulerian solution. The Eulerian flow in turn integrates in time the Lagrangian state-vector. A comparison of coarse and fine grid Eulerian solutions makes it possible to identify numerical diffusion. It is shown that the Eulerian/Lagrangian approach is an effective method for reducing numerical diffusion errors.
NASA Technical Reports Server (NTRS)
Stein, M.; Stein, P. A.
1978-01-01
Approximate solutions for three nonlinear orthotropic plate problems are presented: (1) a thick plate attached to a pad having nonlinear material properties which, in turn, is attached to a substructure which is then deformed; (2) a long plate loaded in inplane longitudinal compression beyond its buckling load; and (3) a long plate loaded in inplane shear beyond its buckling load. For all three problems, the two-dimensional plate equations are reduced to one-dimensional equations in the y-direction by using a one-dimensional trigonometric approximation in the x-direction. Each problem uses different trigonometric terms. Solutions are obtained using an existing algorithm for simultaneous, first-order, nonlinear, ordinary differential equations subject to two-point boundary conditions. Ordinary differential equations are derived to determine the variable coefficients of the trigonometric terms.
NASA Astrophysics Data System (ADS)
Dawes, W. N.
1992-06-01
This paper describes the application of a solution-adaptive, three-dimensional Navier-Stokes solver to the problem of the flow in turbine internal coolant passages. First the variation of Nusselt number in a cylindrical, multi-ribbed duct is predicted and found to be in acceptable agreement with experimental data. Then the flow is computed in the serpentine coolant passage of a radial inflow turbine including modeling the internal baffles and pin fins. The aerodynamics of the passage, particularly that associated with the pin fins, is found to be complex. The predicted heat transfer coefficients allow zones of poor coolant penetration and potential hot spots to be identified.
Ragusa, Jean C.
2015-01-01
In this paper, we propose a piece-wise linear discontinuous (PWLD) finite element discretization of the diffusion equation for arbitrary polygonal meshes. It is based on the standard diffusion form and uses the symmetric interior penalty technique, which yields a symmetric positive definite linear system matrix. A preconditioned conjugate gradient algorithm is employed to solve the linear system. Piece-wise linear approximations also allow a straightforward implementation of local mesh adaptation by allowing unrefined cells to be interpreted as polygons with an increased number of vertices. Several test cases, taken from the literature on the discretization of the radiation diffusion equation, are presented: random, sinusoidal, Shestakov, and Z meshes are used. The last numerical example demonstrates the application of the PWLD discretization to adaptive mesh refinement.
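Because the symmetric interior penalty discretization yields a symmetric positive definite matrix, a preconditioned conjugate gradient iteration applies directly. The sketch below uses a Jacobi (diagonal) preconditioner on a small dense system for illustration; the paper's solver operates on the sparse PWLD system.

```python
# Jacobi-preconditioned conjugate gradient for a symmetric positive
# definite system A x = b (dense, pure-Python, for illustration only).
def pcg(A, b, tol=1e-10, max_iter=200):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                        # r = b - A x with x = 0
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    z = [r[i] / A[i][i] for i in range(n)]          # apply Jacobi preconditioner
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:   # residual small enough
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```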
NASA Astrophysics Data System (ADS)
Ranjan, Srikant
2005-11-01
Fatigue-induced failures in aircraft gas turbine and rocket engine turbopump blades and vanes are a pervasive problem. Turbine blades and vanes represent perhaps the most demanding structural applications due to the combination of high operating temperature, corrosive environment, high monotonic and cyclic stresses, long expected component lifetimes and the enormous consequence of structural failure. Single crystal nickel-base superalloy turbine blades are being utilized in rocket engine turbopumps and jet engines because of their superior creep, stress rupture, melt resistance, and thermomechanical fatigue capabilities over polycrystalline alloys. These materials have orthotropic properties making the position of the crystal lattice relative to the part geometry a significant factor in the overall analysis. Computation of stress intensity factors (SIFs) and the ability to model fatigue crack growth rate at single crystal cracks subject to mixed-mode loading conditions are important parts of developing a mechanistically based life prediction for these complex alloys. A general numerical procedure has been developed to calculate SIFs for a crack in a general anisotropic linear elastic material subject to mixed-mode loading conditions, using three-dimensional finite element analysis (FEA). The procedure does not require an a priori assumption of plane stress or plane strain conditions. The SIFs KI, KII, and KIII are shown to be a complex function of the coupled 3D crack tip displacement field. A comprehensive study of variation of SIFs as a function of crystallographic orientation, crack length, and mode-mixity ratios is presented, based on the 3D elastic orthotropic finite element modeling of tensile and Brazilian Disc (BD) specimens in specific crystal orientations. Variation of SIF through the thickness of the specimens is also analyzed. The resolved shear stress intensity coefficient or effective SIF, Krss, can be computed as a function of crack tip SIFs and the
Aich, Udayanath; Liu, Aston; Lakbub, Jude; Mozdzanowski, Jacek; Byrne, Michael; Shah, Nilesh; Galosy, Sybille; Patel, Pramthesh; Bam, Narendra
2016-03-01
Consistent glycosylation in therapeutic monoclonal antibodies is a major concern in the biopharmaceutical industry, as it impacts the drug's safety and efficacy and its manufacturing processes. Large numbers of samples are created for the analysis of glycans during the various stages of recombinant protein drug development. Profiling and quantifying protein N-glycosylation is important but extremely challenging due to its microheterogeneity and, more importantly, the limitations of existing time-consuming sample preparation methods. Thus, a quantitative method with fast sample preparation is crucial for understanding, controlling, and modifying the glycoform variance in therapeutic monoclonal antibody development. Presented here is a rapid and highly quantitative method for the analysis of N-glycans from monoclonal antibodies. The method comprises a simple and fast solution-based sample preparation step that uses nontoxic reducing reagents for direct labeling of N-glycans. The complete workflow for the preparation of fluorescently labeled N-glycans takes a total of 3 h, with less than 30 min needed for the release of N-glycans from monoclonal antibody samples. PMID:26886304
Beauvais, Z S; Thompson, K H; Kearfott, K J
2009-07-01
Due to a recent upward trend in the price of uranium and subsequent increased interest in uranium mining, accurate modeling of baseline dose from environmental sources of radioactivity is of increasing interest. Residual radioactivity model and code (RESRAD) is a program used to model environmental movement and calculate the dose due to the inhalation, ingestion, and exposure to radioactive materials following a placement. This paper presents a novel use of RESRAD for the calculation of dose from non-enhanced, or ancient, naturally occurring radioactive material (NORM). In order to use RESRAD to calculate the total effective dose (TED) due to ancient NORM, a procedural adaptation was developed to negate the effects of time-progressive distribution of radioactive materials. A dose due to United States' average concentrations of uranium, actinium, and thorium series radionuclides was then calculated. For adults exposed in a residential setting and assumed to eat significant amounts of food grown in NORM concentrated areas, the annual dose due to national average NORM concentrations was 0.935 mSv y⁻¹. A set of environmental dose factors were calculated for simple estimation of dose from uranium, thorium, and actinium series radionuclides for various age groups and exposure scenarios as a function of elemental uranium and thorium activity concentrations in groundwater and soil. The values of these factors for uranium were lowest for an adult exposed in an industrial setting: 0.00476 μSv kg Bq⁻¹ y⁻¹ for soil and 0.00596 μSv m³ Bq⁻¹ y⁻¹ for water (assuming a 1:1 ²³⁴U:²³⁸U activity ratio in water). The uranium factors were highest for infants exposed in a residential setting and assumed to ingest food grown onsite: 34.8 μSv kg Bq⁻¹ y⁻¹ in soil and 13.0 μSv m³ Bq⁻¹ y⁻¹ in water. PMID:19509509
Mihajlovic, Vojkan; Patki, Shrishail; Grundlehner, Bernard
2014-01-01
Designing and developing a comfortable and convenient EEG system for daily usage that can provide a reliable and robust EEG signal encompasses a number of challenges. Among them, the most ambitious is the reduction of artifacts due to body movements. This paper studies the effect of head movement artifacts on the EEG signal and on the dry electrode-tissue impedance (ETI), monitored continuously using imec's wireless EEG headset. We show that motion artifacts have a strong impact on the EEG spectral content in the frequency range below 20 Hz. Coherence and spectral analysis revealed that ETI cannot describe disturbances at very low frequencies (below 2 Hz). Therefore, we devised a motion artifact reduction (MAR) method that uses a combination of band-pass filtering and multi-channel adaptive filtering (AF), suitable for real-time MAR. This method substantially reduced artifacts produced by head movements. PMID:25571131
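The adaptive-filtering step can be illustrated with a single-reference normalized LMS canceller: a signal correlated with the motion artifact (e.g., an ETI or accelerometer channel) is adaptively filtered and subtracted from the contaminated EEG channel. The filter length, step size, and single-reference setup are simplifying assumptions, not the paper's exact multi-channel method.

```python
# Normalized LMS (NLMS) artifact canceller: estimate the artifact from the
# reference channel and subtract it; the error signal is the cleaned output.
def nlms_cancel(primary, reference, taps=4, mu=0.5, eps=1e-8):
    """Return the cleaned signal: primary minus the adaptively filtered reference."""
    w = [0.0] * taps
    out = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))    # artifact estimate
        e = primary[n] - y                          # cleaned sample
        norm = sum(xk * xk for xk in x) + eps
        w = [wk + mu * e * xk / norm for wk, xk in zip(w, x)]  # NLMS update
        out.append(e)
    return out
```

When the primary channel is pure scaled reference (no EEG), the output should decay toward zero as the weights converge, which is the behavior the test checks.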
Cathcart, Nicole; Mistry, Pretesh; Makra, Christy; Pietrobon, Brendan; Coombs, Neil; Jelokhani-Niaraki, Masoud; Kitaev, Vladimir
2009-05-19
A novel approach of cyclic reduction in oxidative conditions has been developed to prepare a single dominant species of chiral thiol-stabilized silver nanoclusters (AgNCs). Such AgNCs, which are stable in solution for up to a few days, have been obtained for the first time. The generality of the established procedure is proven by using several enantiomeric water-soluble thiols, including glutathione, as protective ligands. The prepared AgNCs featured prominent optical properties including a single pattern of UV-vis absorption with well-resolved peaks. The chirality of the clusters has been investigated by circular dichroism (CD) spectroscopy. CD spectra displayed strong characteristic signatures in the visible range. Tentative identification of the cluster composition is discussed. PMID:19358597
Coliţă, Andrei; Coliţă, Anca; Zamfirescu, Dragos; Lupu, Anca Roxana
2012-09-01
Hematopoietic stem cell transplantation (HSCT) is a standard therapeutic option for several diseases. The success of the procedure depends on the quality and quantity of the transplanted cells and on the capacity of the stroma to create an optimal microenvironment that supports survival and development of the hematopoietic elements. Conditions associated with stromal dysfunction lead to slower or insufficient engraftment and/or immune reconstitution. A possible solution to this problem is a combined graft of hematopoietic stem cells together with the medullary stroma, in the form of a vascularized bone marrow transplant (VBMT). Another major drawback of HSCT is the risk of graft-versus-host disease (GVHD). Recently, mesenchymal stromal cells (MSCs) have demonstrated the capacity to down-regulate alloreactive T-cells and to enhance engraftment. Cotransplantation of MSCs could be a therapeutic option for better engraftment and GVHD prevention. PMID:22677297
NASA Astrophysics Data System (ADS)
Zawadzki, Robert J.; Jones, Steven M.; Kim, Dae Yu; Poyneer, Lisa; Capps, Arlie G.; Hamann, Bernd; Olivier, Scot S.; Werner, John S.
2012-03-01
Recent progress in retinal image acquisition techniques, including optical coherence tomography (OCT) and scanning laser ophthalmoscopy (SLO), combined with improved performance of adaptive optics (AO) instrumentation, has improved the quality of in vivo images of cellular structures in the outer layers of the human retina. Despite the significant progress in imaging cone and rod photoreceptor mosaics, visualization of cellular structures in the inner retina has been achieved only with extrinsic contrast agents that have not been approved for use with humans. In this paper we describe the main limiting factors in visualizing inner retinal cells and the methods we implemented to reduce their effects on images acquired with AO-OCT. These include improving the system point spread function (AO performance), monitoring of motion artifacts (retinal motion tracking), and speckle pattern reduction (temporal and spatial averaging). Results of imaging inner retinal morphology and the improvement offered by the new UC Davis AO-OCT system with spatio-temporal image averaging are presented.
Grid quality improvement by a grid adaptation technique
NASA Technical Reports Server (NTRS)
Lee, K. D.; Henderson, T. L.; Choo, Y. K.
1991-01-01
A grid adaptation technique is presented which improves grid quality. The method begins with an assessment of grid quality by defining an appropriate grid quality measure. Then, undesirable grid properties are eliminated by a grid-quality-adaptive grid generation procedure. The same concept has been used for geometry-adaptive and solution-adaptive grid generation. The difference lies in the definition of the grid control sources; here, they are extracted from the distribution of a particular grid property. Several examples are presented to demonstrate the versatility and effectiveness of the method.
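One concrete example of a grid quality measure of the kind such a method adapts to is cell skewness. The angle-deviation definition below is an illustrative choice, not necessarily the measure used in the paper.

```python
# Skewness of a quadrilateral grid cell, taken here as the maximum deviation
# of its corner angles from 90 degrees (0 = perfectly orthogonal cell).
import math

def cell_skewness(cell):
    """cell: four (x, y) corners in order; returns max angle deviation in degrees."""
    worst = 0.0
    for i in range(4):
        # edge vectors from corner i to its two neighbors
        ax, ay = cell[i - 1][0] - cell[i][0], cell[i - 1][1] - cell[i][1]
        bx, by = cell[(i + 1) % 4][0] - cell[i][0], cell[(i + 1) % 4][1] - cell[i][1]
        cosang = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
        worst = max(worst, abs(angle - 90.0))
    return worst
```

Evaluating such a measure over the whole grid gives the distribution from which grid control sources can be extracted, in the spirit of the grid-quality-adaptive procedure described above.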
A framework for constructing adaptive and reconfigurable systems
Poirot, Pierre-Etienne; Nogiec, Jerzy; Ren, Shangping; /IIT, Chicago
2007-05-01
This paper presents a software approach to augmenting existing real-time systems with self-adaptation capabilities. In this approach, based on the control loop paradigm commonly used in industrial control, self-adaptation is decomposed into observing system events, inferring necessary changes based on a system's functional model, and activating appropriate adaptation procedures. The solution adopts an architectural decomposition that emphasizes independence and separation of concerns. It encapsulates observation, modeling and correction into separate modules to allow for easier customization of the adaptive behavior and flexibility in selecting implementation technologies.
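The observe → infer → correct decomposition can be sketched with three encapsulated modules, as in this hypothetical load-adaptation example (the event names, thresholds, and actions are invented for illustration and are not from the paper).

```python
# Control-loop self-adaptation: observation, functional model, and correction
# are separate objects, so each can be customized or replaced independently.
class Observer:
    def observe(self, system):
        return {"queue_depth": system["queue_depth"]}   # monitored system events

class Model:
    """Functional model: infers which adaptation, if any, is required."""
    def infer(self, events):
        if events["queue_depth"] > 100:
            return "add_worker"
        if events["queue_depth"] < 10:
            return "remove_worker"
        return None

class Corrector:
    def apply(self, system, action):                    # adaptation procedures
        if action == "add_worker":
            system["workers"] += 1
        elif action == "remove_worker" and system["workers"] > 1:
            system["workers"] -= 1

def control_loop_step(system, observer, model, corrector):
    action = model.infer(observer.observe(system))
    if action:
        corrector.apply(system, action)
    return system
```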
Fukuda, Ryoichi; Ehara, Masahiro
2015-12-31
The effects of the solvent environment are specific to each electronic state; therefore, a computational scheme for solvent effects that is consistent with the electronic states is necessary to discuss electronic excitation of molecules in solution. The PCM (polarizable continuum model) SAC (symmetry-adapted cluster) and SAC-CI (configuration interaction) methods are developed for such purposes. The PCM SAC-CI adopts the state-specific (SS) solvation scheme, in which solvent effects are self-consistently considered for each ground and excited state. For efficient computation of many excited states, we develop a perturbative approximation to the PCM SAC-CI method, called the corrected linear response (cLR) scheme. Our test calculations show that cLR PCM SAC-CI is a very good approximation to the SS PCM SAC-CI method for polar and nonpolar solvents.
Sowers, K. R.; Gunsalus, R. P.
1995-01-01
The methanogenic Archaea, like the Bacteria and Eucarya, possess several osmoregulatory strategies that enable them to adapt to osmotic changes in their environment. The physiological responses of Methanosarcina species to different osmotic pressures were studied at extracellular osmolalities ranging from 0.3 to 2.0 osmol/kg. Regardless of the isolation source, the maximum rate of growth for species from freshwater, sewage, and marine sources occurred at extracellular osmolalities between 0.62 and 1.0 osmol/kg and decreased to minimal detectable growth as the solute concentration approached 2.0 osmol/kg. The steady-state water-accessible volume of Methanosarcina thermophila showed a disproportionate decrease of 30% between 0.3 and 0.6 osmol/kg and then a linear decrease of 22% as the solute concentration in the media increased from 0.6 to 2.0 osmol/kg. The total intracellular K+ ion concentration in M. thermophila increased from 0.12 to 0.5 mol/kg as the medium osmolality was raised from 0.3 to 1.0 osmol/kg and then remained above 0.4 mol/kg as extracellular osmolality was increased to 2.0 osmol/kg. Concurrent with K+ accumulation, M. thermophila synthesized and accumulated α-glutamate as the predominant intracellular osmoprotectant in media containing up to 1.0 osmol of solute per kg. At medium osmolalities greater than 1.0 osmol/kg, the α-glutamate concentration leveled off and the zwitterionic β-amino acid Nε-acetyl-β-lysine was synthesized, accumulating to an intracellular concentration exceeding 1.1 osmol/kg at an osmolality of 2.0 osmol/kg. When glycine betaine was added to culture medium, it caused partial repression of de novo α-glutamate and Nε-acetyl-β-lysine synthesis and was accumulated by the cell as the predominant compatible solute. The distribution and concentration of compatible solutes in eight strains representing five Methanosarcina spp. were similar to those found in M
Pérez-Jordá, José M
2010-01-14
A new method for solving the Schrödinger equation is proposed, based on the following details. First, a map u = u(r) from Cartesian coordinates r to a new coordinate system u is chosen. Second, the solution (orbital) ψ(r) is written in terms of a function U depending on u, so that ψ(r) = |J(u)|^(-1/2) U(u), where |J(u)| is the Jacobian determinant of the map. Third, U is expressed as a linear combination of plane waves in the u coordinate, U(u) = Σ_k c_k e^(ik·u). Finally, the coefficients c_k are variationally optimized to obtain the best energy, using a generalization of an algorithm originally developed for the Coulomb potential [J. M. Perez-Jorda, Phys. Rev. B 58, 1230 (1998)]. The method is tested on the radial Schrödinger equation for the hydrogen atom, resulting in micro-Hartree accuracy or better for the energy of ns and np orbitals (with n up to 5) using expansions of moderate length. PMID:20095666
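The core variational step described above — expanding the unknown function in a finite plane-wave basis and optimizing the coefficients for the lowest energy — can be illustrated with a minimal sketch for a 1-D periodic potential of Mathieu form, H = -d²/dx² + 2q·cos(2x). The potential, basis sizes, and numerical values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def plane_wave_ground_state(q, n_max):
    """Variational estimate of the lowest eigenvalue of
    H = -d^2/dx^2 + 2*q*cos(2x) on [0, 2*pi), using the
    plane waves e^{inx}, n = -n_max..n_max, as the basis."""
    ns = np.arange(-n_max, n_max + 1)
    # Kinetic energy is diagonal in the plane-wave basis: <n|T|n> = n^2
    H = np.diag(ns.astype(float) ** 2)
    # 2*q*cos(2x) = q*(e^{2ix} + e^{-2ix}) couples indices differing by 2
    for i, n in enumerate(ns):
        for j, m in enumerate(ns):
            if abs(n - m) == 2:
                H[i, j] += q
    # Lowest eigenvalue = best variational energy in this basis
    return np.linalg.eigvalsh(H)[0]

e_small = plane_wave_ground_state(1.0, 3)   # small basis
e_big = plane_wave_ground_state(1.0, 10)    # larger, nested basis
```

Because the small basis is contained in the larger one, enlarging the expansion can only lower (or keep) the variational estimate, which converges here toward the Mathieu characteristic value a0(1) ≈ -0.4551.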
NASA Technical Reports Server (NTRS)
Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.
1992-01-01
A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
Dynamic Load Balancing for Adaptive Unstructured Grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Dynamic mesh adaptation on unstructured grids is a powerful tool for computing unsteady three-dimensional problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture phenomena of interest, such procedures make standard computational methods more cost effective. Highly refined meshes are required to accurately capture shock waves, contact discontinuities, vortices, and shear layers in fluid flow problems. Adaptive meshes have also proved to be useful in several other areas of computational science and engineering like computer vision and graphics, semiconductor device modeling, and structural mechanics. Local mesh adaptation provides the opportunity to obtain solutions that are comparable to those obtained on globally-refined grids but at a much lower cost. Additional information is contained in the original extended abstract.
Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2011-01-01
An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
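As a rough illustration of a refinement sensor based on the undivided second difference, the sketch below flags cells of a 1-D grid where the sensor exceeds a threshold. The grid, test function, and threshold are hypothetical choices for demonstration, not OVERFLOW's actual implementation:

```python
import numpy as np

def refine_flags(u, threshold):
    """Mark points for refinement where the undivided second
    difference |u[i-1] - 2*u[i] + u[i+1]| exceeds a threshold.
    The undivided form needs no mesh-spacing normalization.
    Endpoints are never flagged."""
    d2 = np.abs(u[:-2] - 2.0 * u[1:-1] + u[2:])
    flags = np.zeros(len(u), dtype=bool)
    flags[1:-1] = d2 > threshold
    return flags

x = np.linspace(-1.0, 1.0, 101)
u = np.tanh(20.0 * x)            # a smeared shock-like profile near x = 0
flags = refine_flags(u, 0.01)    # refinement is requested only near the layer
```

In a solver, the flagged region would then be covered by a finer Cartesian grid level, and the sensor re-applied on each level to build the refinement hierarchy.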
NASA Astrophysics Data System (ADS)
Weller, Hilary; Browne, Philip; Budd, Chris; Cullen, Mike
2016-03-01
An equation of Monge-Ampère type has, for the first time, been solved numerically on the surface of the sphere in order to generate optimally transported (OT) meshes, equidistributed with respect to a monitor function. Optimal transport generates meshes that keep the same connectivity as the original mesh, making them suitable for r-adaptive simulations, in which the equations of motion can be solved in a moving frame of reference in order to avoid mapping the solution between old and new meshes and to avoid load balancing problems on parallel computers. The semi-implicit solution of the Monge-Ampère type equation involves a new linearisation of the Hessian term, and exponential maps are used to map from old to new meshes on the sphere. The determinant of the Hessian is evaluated as the change in volume between old and new mesh cells, rather than using numerical approximations to the gradients. OT meshes are generated to compare with centroidal Voronoi tessellations on the sphere and are found to have advantages and disadvantages; OT equidistribution is more accurate, the number of iterations to convergence is independent of the mesh size, face skewness is reduced and the connectivity does not change. However, anisotropy is higher and the OT meshes are non-orthogonal. It is shown that optimal transport on the sphere leads to meshes that do not tangle. However, tangling can be introduced by numerical errors in calculating the gradient of the mesh potential. Methods for alleviating this problem are explored. Finally, OT meshes are generated using observed precipitation as a monitor function, in order to demonstrate the potential power of the technique.
ERIC Educational Resources Information Center
Ho, Tsung-Han
2010-01-01
Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…
Self-adaptive Solution Strategies
NASA Technical Reports Server (NTRS)
Padovan, J.
1984-01-01
The development of enhancements to current-generation nonlinear finite element algorithms of the incremental Newton-Raphson type is overviewed. Work on alternative formulations is introduced, leading to improved algorithms that avoid the need for global-level updating and inversion. To quantify the enhanced Newton-Raphson scheme and the new alternative algorithm, the results of several benchmarks are presented.
The purpose of this SOP is to describe procedures for preparing calibration curve solutions used for gas chromatography/mass spectrometry (GC/MS) analysis of chlorpyrifos, diazinon, malathion, DDT, DDE, DDD, a-chlordane, and g-chlordane in dust, soil, air, and handwipe sample ext...
Adaptive Image Denoising by Mixture Adaptation.
Luo, Enming; Chan, Stanley H; Nguyen, Truong Q
2016-10-01
We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide a full derivation of the EM adaptation algorithm and demonstrate methods to reduce its computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms. PMID:27416593
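A heavily simplified sketch of the underlying idea — adapting the means of a generic Gaussian-mixture prior toward observed data, with a hyper-prior weight pulling back toward the generic model — is given below. The component count, shared variance, and the weight rho are invented for illustration; the actual algorithm operates on image patches with full covariances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic "external" prior: two Gaussian components in 2-D
mu = np.array([[-3.0, 0.0], [3.0, 0.0]])
var = 1.0     # shared isotropic variance (assumed)
rho = 10.0    # hyper-prior strength: trust placed in the generic prior

# "Internal" data drawn near shifted component centres
data = np.vstack([
    rng.normal([-2.0, 1.0], 0.5, size=(200, 2)),
    rng.normal([4.0, -1.0], 0.5, size=(200, 2)),
])

def em_adapt_step(mu, data, var, rho):
    """One EM-adaptation step: E-step responsibilities under the
    current means; M-step as a hyper-prior-weighted average of the
    data and the generic means (a sketch of MAP adaptation)."""
    d2 = ((data[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
    logp = -0.5 * d2 / var
    logp -= logp.max(axis=1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)        # responsibilities
    nk = r.sum(axis=0)                       # effective counts per component
    weighted = r.T @ data                    # responsibility-weighted data sums
    return (weighted + rho * mu) / (nk + rho)[:, None]

mu_adapted = em_adapt_step(mu, data, var, rho)
```

With abundant data the adapted means follow the data clusters; with little data (small nk relative to rho) they stay close to the generic prior, which is the point of the hyper-prior weighting.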
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.
NASA Astrophysics Data System (ADS)
Barton, P.
1987-04-01
The basic principles of adaptive antennas are outlined in terms of the Wiener-Hopf expression for maximizing signal-to-noise ratio in an arbitrary noise environment; the analogy with generalized matched filter theory provides a useful aid to understanding. For many applications there is insufficient information to achieve the above solution, and thus non-optimum constrained null-steering algorithms are also described, together with a summary of methods for preventing wanted signals from being nulled by the adaptive system. The three generic approaches to adaptive weight control are discussed: correlation steepest descent, weight perturbation, and direct solutions based on sample matrix inversion. The tradeoffs between hardware complexity and performance in terms of null depth and convergence rate are outlined. The sidelobe canceller technique is described. Performance variation with jammer power and angular distribution is summarized and the key performance limitations identified. The configuration and performance characteristics of both multiple-beam and phase-scan array antennas are covered, with a brief discussion of performance factors.
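The direct-solution (sample matrix inversion) approach mentioned above can be sketched for a small uniform linear array: estimate the interference-plus-noise covariance from snapshots, then solve for weights with unit gain in the look direction. The array size, angles, and powers below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                                   # elements, half-wavelength spacing

def steer(theta_deg):
    """Array steering vector for a uniform linear array."""
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(np.radians(theta_deg)))

a_look = steer(0.0)                     # desired look direction
a_jam = steer(30.0)                     # jammer direction

# Snapshots: a strong jammer plus unit thermal noise (no desired signal
# present in the training data, as in a sidelobe-cancelling scenario)
K = 2000
jam = 10.0 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
noise = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
X = jam[:, None] * a_jam[None, :] + noise

R = X.conj().T @ X / K                  # sample covariance matrix
w = np.linalg.solve(R, a_look)          # direct solution: w ∝ R^{-1} a_look
w /= (a_look.conj() @ w)                # distortionless gain on the look direction

gain_look = abs(w.conj() @ a_look)      # held at unity by the constraint
gain_jam = abs(w.conj() @ a_jam)        # adapted null on the jammer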
AEST: Adaptive Eigenvalue Stability Code
NASA Astrophysics Data System (ADS)
Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.
2002-11-01
An adaptive eigenvalue linear stability code is developed. The aim is, on one hand, to include non-ideal MHD effects in the global MHD stability calculation for both low and high n modes and, on the other hand, to resolve the numerical difficulty involving the MHD singularity on the rational surfaces at marginal stability. Our code follows in part the philosophy of DCON, abandoning relaxation methods based on radial finite element expansion in favor of an efficient shooting procedure with adaptive gridding. The δW criterion is replaced by the shooting procedure and a subsequent matrix eigenvalue problem. Since the technique of expanding a general solution into a summation of independent solutions is employed, the rank of the matrices involved is only a few hundred. This makes it easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as plasma rotation, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue ω problem, as in the GS2 code, will be employed in the future. Comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be employed to study, as an application, transport barrier physics in tokamak discharges.
Large spatial, temporal, and algorithmic adaptivity for implicit nonlinear finite element analysis
Engelmann, B.E.; Whirley, R.G.
1992-07-30
The development of effective solution strategies to solve the global nonlinear equations which arise in implicit finite element analysis has been the subject of much research in recent years. Robust algorithms are needed to handle the complex nonlinearities that arise in many implicit finite element applications, such as metal-forming process simulation. The authors' experience indicates that robustness can best be achieved through adaptive solution strategies. In the course of their research, this adaptivity and flexibility has been refined into a production tool through the development of a solution control language called ISLAND. This paper discusses aspects of adaptive solution strategies, including iterative procedures to solve the global equations and remeshing techniques to extend the domain of Lagrangian methods. Examples using the newly developed ISLAND language are presented to illustrate the advantages of embedding temporal, algorithmic, and spatial adaptivity in a modern implicit nonlinear finite element analysis code.
This SOP describes the method used for preparing surrogate recovery standard and internal standard solutions for the analysis of polar target analytes. It also describes the method for preparing calibration standard solutions for polar analytes used for gas chromatography/mass sp...
Genetic algorithms in adaptive fuzzy control
NASA Technical Reports Server (NTRS)
Karr, C. Lucas; Harper, Tony R.
1992-01-01
Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust fuzzy membership functions in response to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific computer-simulated chemical system is used to demonstrate the ideas presented.
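A toy GA of the kind described — selection, crossover, and mutation driving candidate parameters toward a near-optimum — can be sketched as follows. The fitness function here merely stands in for a fuzzy-controller performance measure (e.g., a score for candidate membership-function centres); all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(p):
    # Stand-in for a fuzzy-controller performance measure,
    # peaked at the (hypothetical) best parameter pair (1, -2).
    return -((p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2)

pop = rng.uniform(-5.0, 5.0, size=(60, 2))   # initial random population
best = max(pop, key=fitness)
for gen in range(80):
    scored = sorted(pop, key=fitness, reverse=True)
    parents = np.array(scored[:30])            # truncation selection
    children = []
    for _ in range(59):
        pa = parents[rng.integers(30)]
        pb = parents[rng.integers(30)]
        alpha = rng.random()
        child = alpha * pa + (1 - alpha) * pb  # blend crossover
        child += rng.normal(0.0, 0.1, size=2)  # Gaussian mutation
        children.append(child)
    best = max([best] + children, key=fitness) # elitism on the running best
    pop = np.array(children + [best])
```

In the adaptive control setting described in the abstract, `fitness` would be replaced by a simulation of the controlled process, and the decision vector by the fuzzy membership-function parameters being tuned.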
Adaptive Finite Element Methods in Geodynamics
NASA Astrophysics Data System (ADS)
Davies, R.; Davies, H.; Hassan, O.; Morgan, K.; Nithiarasu, P.
2006-12-01
Adaptive finite element methods are presented for improving the quality of solutions to two-dimensional (2D) and three-dimensional (3D) convection dominated problems in geodynamics. The methods demonstrate the application of existing technology in the engineering community to problems within the 'solid' Earth sciences. Two-Dimensional 'Adaptive Remeshing': The 'remeshing' strategy introduced in 2D adapts the mesh automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. The approach requires the coupling of an automatic mesh generator, a finite element flow solver and an error estimator. In this study, the procedure is implemented in conjunction with the well-known geodynamical finite element code 'ConMan'. An unstructured quadrilateral mesh generator is utilised, with mesh adaptation accomplished through regeneration. This regeneration employs information provided by an interpolation based local error estimator, obtained from the computed solution on an existing mesh. The technique is validated by solving thermal and thermo-chemical problems with known benchmark solutions. In a purely thermal context, results illustrate that the method is highly successful, improving solution accuracy whilst increasing computational efficiency. For thermo-chemical simulations the same conclusions can be drawn. However, results also demonstrate that the grid based methods employed for simulating the compositional field are not competitive with the other methods (tracer particle and marker chain) currently employed in this field, even at the higher spatial resolutions allowed by the adaptive grid strategies. Three-Dimensional Adaptive Multigrid: We extend the ideas from our 2D work into the 3D realm in the context of a pre-existing 3D-spherical mantle dynamics code, 'TERRA'. In its original format, 'TERRA' is computationally highly efficient since it employs a multigrid solver that depends upon a grid utilizing a clever
McCarey, Bernard E.; Edelhauser, Henry F.; Lynn, Michael J.
2010-01-01
Specular microscopy can provide a non-invasive morphological analysis of the corneal endothelial cell layer from subjects enrolled in clinical trials. The analysis provides a measure of the endothelial cell physiological reserve from aging, ocular surgical procedures, pharmaceutical exposure, and general health of the corneal endothelium. The purpose of this review is to discuss normal and stressed endothelial cell morphology, the techniques for determining the morphology parameters, and clinical trial applications. PMID:18245960
Near-Body Grid Adaption for Overset Grids
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2016-01-01
A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
Adaptive mesh generation for viscous flows using Delaunay triangulation
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1988-01-01
A method for generating an unstructured triangular mesh in two dimensions, suitable for computing high Reynolds number flows over arbitrary configurations is presented. The method is based on a Delaunay triangulation, which is performed in a locally stretched space, in order to obtain very high aspect ratio triangles in the boundary layer and the wake regions. It is shown how the method can be coupled with an unstructured Navier-Stokes solver to produce a solution adaptive mesh generation procedure for viscous flows.
Tiwari, Rahul; Heuser, Thomas; Weyandt, Elisabeth; Wang, Baochun; Walther, Andreas
2015-11-14
Microgels with internal and reconfigurable complex nanostructure are emerging as possible adaptive particles, yet they remain challenging to design synthetically. Here, we report the synthesis of highly charged poly(methacrylic acid) (PMAA) microgels incorporating permanent (poly(methyl methacrylate), PMMA) and switchable hydrophobic pockets (poly(N,N'-diethylaminoethyl methacrylate), PDEAEMA) via emulsion polymerization. We demonstrate detailed tuning of the size and crosslinking density and tailored incorporation of functional comonomers into the polyacid microgels. Analysis via cryo-TEM and pyrene probe measurements reveals switchable hydrophobic pockets inside the microgels as a function of pH. The particles show a rich diversity of internal phase segregation that adapts to the surrounding conditions. Large amounts of hydrophobic pockets even lead to hydrophobic bridging between particles. The study shows ways towards tailored polyelectrolyte microgels with narrow dispersity, high charge density, as well as tailored and reconfigurable hydrophobic compartments and interactions. PMID:26350118
2014-01-01
Background Simple and effective cryopreservation of human oocytes would have an enormous impact on the financial and ethical constraints of human assisted reproduction. Recently, studies have demonstrated the potential for cryopreservation in an ice-free glassy state by equilibrating oocytes with high concentrations of cryoprotectants (CPAs) and rapidly cooling to liquid nitrogen temperatures. A major difficulty with this approach is that the high concentrations required for the avoidance of crystal formation (vitrification) also increase the risk of osmotic and toxic damage. We recently described a mathematical optimization approach for designing CPA equilibration procedures that avoid osmotic damage and minimize toxicity, and we presented optimized procedures for human oocytes involving continuous changes in solution composition. Methods Here we adapt and refine our previous algorithm to predict piecewise-constant changes in extracellular solution concentrations in order to make the predicted procedures easier to implement. Importantly, we investigate the effects of using alternate equilibration endpoints on predicted protocol toxicity. Finally, we compare the resulting procedures to previously described experimental methods, as well as mathematically optimized procedures involving continuous changes in solution composition. Results For equilibration with CPA, our algorithm predicts an optimal first step consisting of exposure to a solution containing only water and CPA. This is predicted to cause the cells to initially shrink and then swell to the maximum cell volume limit. To reach the target intracellular CPA concentration, the cells are then induced to shrink to the minimum cell volume limit by exposure to a high CPA concentration. For post-thaw equilibration to remove CPA, the optimal procedures involve exposure to CPA-free solutions that are predicted to cause swelling to the maximum volume limit. The toxicity associated with these procedures is predicted
NASA Astrophysics Data System (ADS)
Abdel Wahab, N. H.; Salah, Ahmed
2015-05-01
In this paper, the interaction of a three-level -configuration atom and a one-mode quantized electromagnetic cavity field has been studied. The detuning parameters, the Kerr nonlinearity, and arbitrary forms of both the field and the intensity-dependent atom-field coupling have been taken into account. The wave function has been obtained by using the Schrödinger equation, with the atom and the field initially prepared in the excited state and a coherent state, respectively. An approximate analytical solution of this model has been obtained by using the modified homotopy analysis method (MHAM). The homotopy analysis method is summarized briefly. MHAM can be obtained from the homotopy analysis method (HAM) combined with the Laplace transform, the inverse Laplace transform, and Padé approximants. MHAM is used to increase the accuracy and accelerate the convergence rate of the truncated series solution obtained by the HAM. The time-dependent parameters of the anti-bunching of photons, the amplitude-squared squeezing, and the coherence properties have been calculated. The influence of the detuning parameters, the Kerr nonlinearity, and the photon number operator on the temporal behavior of these phenomena has been analyzed. We note that the considered system is sensitive to variations in these parameters.
Adapting Assessment Procedures: The Black Child.
ERIC Educational Resources Information Center
Hilliard, Asa G., III
This speech deals with the assumptions and approaches underlying educational assessment and suggests alternatives to standardized testing. It is proposed that the assumption that test items can be standardized is at the base of assessment problems; while there are standard mental functions which children develop, there are no standard items that…
Clarification Procedure for Gels
NASA Technical Reports Server (NTRS)
Barber, Patrick G.; Simpson, Norman R.
1987-01-01
Procedure developed to obtain transparent gels with consistencies suitable for crystal growth, by replacing sodium ions in silicate solution with potassium ions. Clarification process uses cation-exchange resin to replace sodium ions in stock solution with potassium ions, placed in 1M solution of soluble potassium salt. Slurry stirred for several hours to allow potassium ions to replace all other cations on resin. Supernatant solution decanted through filter, and beads rinsed with distilled water. Rinsing removes excess salt but leaves cation-exchange beads fully charged with potassium ions.
Gonçalves, F S; Barretto, L S S; Arruda, R P; Perri, S H V; Mingoti, G Z
2014-01-01
The presence of heparin and a mixture of penicillamine, hypotaurine, and epinephrine (PHE) solution in the in vitro fertilization (IVF) media seem to be a prerequisite when bovine spermatozoa are capacitated in vitro, in order to stimulate sperm motility and acrosome reaction. The present study was designed to determine the effect of the addition of heparin and PHE during IVF on the quality and penetrability of spermatozoa into bovine oocytes and on subsequent embryo development. Sperm quality, evaluated by the integrity of plasma and acrosomal membranes and mitochondrial function, was diminished (P<0.05) in the presence of heparin and PHE. Oocyte penetration and normal pronuclear formation rates, as well as the percentage of zygotes presenting more than two pronuclei, was higher (P<0.05) in the presence of heparin and PHE. No differences were observed in cleavage rates between treatment and control (P>0.05). However, the developmental rate to the blastocyst stage was increased in the presence of heparin and PHE (P>0.05). The quality of embryos that reached the blastocyst stage was evaluated by counting the inner cell mass (ICM) and trophectoderm (TE) cell numbers and total number of cells; the percentage of ICM and TE cells was unaffected (P>0.05) in the presence of heparin and PHE (P<0.05). In conclusion, this study demonstrated that while the supplementation of IVF media with heparin and PHE solution impairs spermatozoa quality, it plays an important role in sperm capacitation, improving pronuclear formation, and early embryonic development. PMID:23949783
Time domain and frequency domain design techniques for model reference adaptive control systems
NASA Technical Reports Server (NTRS)
Boland, J. S., III
1971-01-01
Some problems associated with the design of model-reference adaptive control systems are considered and solutions to these problems are advanced. The stability of the adapted system is a primary consideration in the development of both the time-domain and the frequency-domain design techniques. Consequently, the use of Liapunov's direct method forms an integral part of the derivation of the design procedures. The application of sensitivity coefficients to the design of model-reference adaptive control systems is considered. An application of the design techniques is also presented.
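For a first-order plant, a Lyapunov-based model-reference adaptive controller of the kind discussed can be sketched as below. The plant, reference model, and gain values are illustrative assumptions; the adaptation laws are the standard ones chosen so that the Lyapunov function's derivative is negative semidefinite:

```python
import numpy as np

# Plant (parameters unknown to the controller; only sign(b) > 0 assumed):
#   dx/dt = -a*x + b*u
a, b = 1.0, 2.0
# Reference model the plant should follow: dxm/dt = -am*xm + bm*r
am, bm = 4.0, 4.0
gamma = 2.0                      # adaptation gain (illustrative)

dt, T = 0.001, 50.0
steps = int(T / dt)
x = xm = 0.0
theta_r = theta_x = 0.0          # adjustable feedforward/feedback gains
e_hist = np.empty(steps)

for k in range(steps):
    r = 1.0 if (k * dt) % 10.0 < 5.0 else -1.0   # square-wave reference
    u = theta_r * r - theta_x * x                # adjustable control law
    e = x - xm                                   # model-following error
    # Lyapunov-derived adaptation laws, from V = e^2/2 + (b/2/gamma)*
    # (parameter errors squared), giving dV/dt = -am*e^2 <= 0:
    theta_r += dt * (-gamma * e * r)
    theta_x += dt * (gamma * e * x)
    x += dt * (-a * x + b * u)                   # forward-Euler plant step
    xm += dt * (-am * xm + bm * r)               # reference-model step
    e_hist[k] = e
```

The error is guaranteed bounded and driven toward zero by the Lyapunov construction; with the persistently exciting square-wave reference, the gains also drift toward their matching values (theta_r → bm/b, theta_x → (am - a)/b).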
Adaptive unstructured meshing for thermal stress analysis of built-up structures
NASA Technical Reports Server (NTRS)
Dechaumphai, Pramote
1992-01-01
An adaptive unstructured meshing technique for mechanical and thermal stress analysis of built-up structures has been developed. A triangular membrane finite element and a new plate bending element are evaluated on a panel with a circular cutout and a frame stiffened panel. The adaptive unstructured meshing technique, without a priori knowledge of the solution to the problem, generates clustered elements only where needed. An improved solution accuracy is obtained at a reduced problem size and analysis computational time as compared to the results produced by the standard finite element procedure.
Developing Competency in Payroll Procedures
ERIC Educational Resources Information Center
Jackson, Allen L.
1975-01-01
The author describes a sequence of units that provides for competency in payroll procedures. The units could be the basis for a four to six week minicourse and are adaptable, so that the student, upon completion, will be able to apply his learning to any payroll procedures system. (Author/AJ)
Structured adaptive grid generation using algebraic methods
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.
1993-01-01
The accuracy of the numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively fine, computationally expensive grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach where a function which contains a measure of grid smoothness, orthogonality and volume variation is minimized by using a variational principle. This approach provides a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm where the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring the points in the large error regions to attract other points and points in the low error regions to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial step, is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and the last step is to reevaluate the flow property by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration
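The equidistribution law named in the first step can be sketched in one dimension: build a weight from the flow solution and place the new points so that each interval carries an equal share of the integrated weight. The weight function and model profile below are illustrative choices, not the ones used in the paper:

```python
import numpy as np

def equidistribute(x, w):
    """Place grid points so each interval carries an equal share of weight w."""
    # cumulative weight W(x) by the trapezoidal rule
    W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    # invert W: new points sit at equal increments of the total weight
    targets = np.linspace(0.0, W[-1], len(x))
    return np.interp(targets, W, x)

# cluster a uniform grid around a steep front at x = 0.5
x = np.linspace(0.0, 1.0, 41)
u = np.tanh(50.0 * (x - 0.5))            # model flow-field profile
w = 1.0 + np.abs(np.gradient(u, x))      # weight from the solution gradient
x_new = equidistribute(x, w)
```

The redistributed grid keeps the domain endpoints but concentrates points where the gradient, and hence the weight, is large.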
NASA Technical Reports Server (NTRS)
Narendra, K. S.; Annaswamy, A. M.
1985-01-01
Several concepts and results in robust adaptive control are discussed, and the presentation is organized in three parts. The first part surveys existing algorithms. Different formulations of the problem and theoretical solutions that have been suggested are reviewed here. The second part contains new results related to the role of persistent excitation in robust adaptive systems and the use of hybrid control to improve robustness. In the third part, promising new areas for future research are suggested which combine different approaches currently known.
Adaptive challenges in medical practices.
Daiker, Barbara L
2013-01-01
The purpose of this qualitative grounded theory study was to describe the theoretical structures of the strategies used by medical practices to navigate adaptive challenges. The process of responding to adaptive challenges in five medical practices was studied using a grounded theory approach, collecting data from interviews with the organizations' leaders and managers. The leadership of these medical practices had successfully navigated adaptive challenges within two years of the study. The analysis revealed a model that describes the key elements in finding solutions to adaptive challenges. The model was named the Adaptation Solution Dynamic, which explains the elements of Rational Tools, Relationship Commitment, and Achievement Drive. The findings from the results of this study provide a theoretical basis for studying how leaders support identifying solutions to adaptive challenges. PMID:23866647
NASA Astrophysics Data System (ADS)
Romeo, A.; Finocchio, G.; Carpentieri, M.; Torres, L.; Consolo, G.; Azzerboni, B.
2008-02-01
The Landau-Lifshitz-Gilbert (LLG) equation is the fundamental equation to describe magnetization dynamics in microscale and nanoscale magnetic systems. In this paper we present a brief overview of a time-domain numerical method related to the fifth order Runge-Kutta formula, which has been applied to the solution of the LLG equation successfully. We discuss advantages of the method, describing the results of a numerical experiment based on the standard problem #4. The results are in good agreement with the ones present in literature. By including thermal effects in our framework, our simulations show magnetization dynamics slightly dependent on the spatial discretization.
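A single-macrospin version of this approach can be sketched with SciPy's Dormand-Prince integrator (method 'RK45') standing in for the fifth-order Runge-Kutta formula. The field strength, damping constant, and time span below are illustrative values, not the settings of the paper, which treats a fully discretized micromagnetic problem:

```python
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 2.211e5   # gyromagnetic ratio in m/(A s) (illustrative value)
ALPHA = 0.02      # Gilbert damping constant (illustrative value)

def llg(t, m, H):
    """Landau-Lifshitz-Gilbert right-hand side for a single macrospin."""
    mxH = np.cross(m, H)
    return -GAMMA / (1.0 + ALPHA**2) * (mxH + ALPHA * np.cross(m, mxH))

H = np.array([0.0, 0.0, 8.0e4])      # static field along z, in A/m
m0 = np.array([1.0, 0.0, 0.0])       # magnetization starts along x
sol = solve_ivp(llg, (0.0, 5.0e-9), m0, args=(H,),
                method='RK45', rtol=1e-8, atol=1e-10)
m_final = sol.y[:, -1]               # precesses about z while relaxing toward +z
```

With tight tolerances the integrator preserves |m| = 1 to high accuracy, one practical check on the quality of the time stepping.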
Prism Adaptation in Schizophrenia
ERIC Educational Resources Information Center
Bigelow, Nirav O.; Turner, Beth M.; Andreasen, Nancy C.; Paulsen, Jane S.; O'Leary, Daniel S.; Ho, Beng-Choon
2006-01-01
The prism adaptation test examines procedural learning (PL) in which performance facilitation occurs with practice on tasks without the need for conscious awareness. Dynamic interactions between frontostriatal cortices, basal ganglia, and the cerebellum have been shown to play key roles in PL. Disruptions within these neural networks have also…
ERIC Educational Resources Information Center
Flournoy, Nancy
Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…
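The play-the-winner rule described above is simple to make concrete. In this sketch the success probabilities are made-up numbers; the point is that the better treatment accumulates more assignments:

```python
import random

def play_the_winner(success_prob, n_subjects, seed=0):
    """Two-treatment play-the-winner rule: keep a treatment after each
    success, switch after each failure.  Returns allocation counts."""
    rng = random.Random(seed)
    current = rng.randrange(2)                  # random start
    counts = [0, 0]
    for _ in range(n_subjects):
        counts[current] += 1
        if rng.random() >= success_prob[current]:   # failure: switch
            current = 1 - current
    return counts

# hypothetical success probabilities: treatment 0 is the better one
counts = play_the_winner([0.8, 0.3], 1000)
```

In the long run the allocation to each treatment is roughly proportional to the reciprocal of its failure probability, so the rule adapts toward the more successful arm.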
The electromagnetic spike solutions
NASA Astrophysics Data System (ADS)
Nungesser, Ernesto; Lim, Woei Chet
2013-12-01
The aim of this paper is to use the existing relation between polarized electromagnetic Gowdy spacetimes and vacuum Gowdy spacetimes to find explicit solutions for electromagnetic spikes by a procedure which has been developed by one of the authors for gravitational spikes. We present new inhomogeneous solutions which we call the EME and MEM electromagnetic spike solutions.
Goffin, Mark A.; Baker, Christopher M.J.; Buchan, Andrew G.; Pain, Christopher C.; Eaton, Matthew D.; Smith, Paul N.
2013-06-01
This article presents a goal-based anisotropic adaptive method for the finite element solution of the Boltzmann transport equation. The neutron multiplication factor, k_eff, is used as the goal of the adaptive procedure. The anisotropic adaptive algorithm requires error measures for k_eff with directional dependence. General error estimators are derived for any given functional of the flux and applied to k_eff to acquire the driving force for the adaptive procedure. The error estimators require the solution of an appropriately formed dual equation. Forward and dual error indicators are calculated by weighting the Hessian of each solution with the dual and forward residual respectively. The Hessian is used as an approximation of the interpolation error in the solution which gives rise to the directional dependence. The two indicators are combined to form a single error metric that is used to adapt the finite element mesh. The residual is approximated using a novel technique arising from the sub-grid scale finite element discretisation. Two adaptive routes are demonstrated: (i) a single mesh is used to solve all energy groups, and (ii) a different mesh is used to solve each energy group. The second method aims to capture the benefit from representing the flux from each energy group on a specifically optimised mesh. The k_eff goal-based adaptive method was applied to three examples which illustrate the superior accuracy in criticality problems that can be obtained.
Matsuda, Ikki; Sha, John C M; Ortmann, Sylvia; Schwarm, Angela; Grandl, Florian; Caton, Judith; Jens, Warner; Kreuzer, Michael; Marlena, Diana; Hagen, Katharina B; Clauss, Marcus
2015-10-01
Behavioral observations and small fecal particles compared to other primates indicate that free-ranging proboscis monkeys (Nasalis larvatus) have a strategy of facultative merycism (rumination). In functional ruminants (ruminants and camelids), rumination is facilitated by a particle sorting mechanism in the forestomach that selectively retains larger particles and subjects them to repeated mastication. Using a set of a solute and three particle markers of different sizes (<2, 5 and 8 mm), we displayed digesta passage kinetics and measured mean retention times (MRTs) in four captive proboscis monkeys (6-18 kg) and compared the marker excretion patterns to those in domestic cattle. In addition, we evaluated various methods of calculating and displaying passage characteristics. The mean ± SD dry matter intake was 98 ± 22 g kg(-0.75) d(-1), 68 ± 7% of which was browse. Accounting for sampling intervals in MRT calculation yielded results that were not affected by the sampling frequency. Displaying marker excretion patterns using fecal marker concentrations (rather than amounts) facilitated comparisons with reactor theory outputs and indicated that both proboscis and cattle digestive tracts represent a series of very few tank reactors. However, the separation of the solute and particle marker and the different-sized particle markers, evident in cattle, did not occur in proboscis monkeys, in which all markers moved together, at MRTs of approximately 40 h. The results indicate that the digestive physiology of proboscis monkeys does not show typical characteristics of ruminants, which may explain why merycism is only a facultative strategy in this species. PMID:26004169
SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE
NASA Technical Reports Server (NTRS)
Davies, C. B.
1994-01-01
SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
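The one-dimensional subproblem along each coordinate line reduces, as the abstract notes, to a tridiagonal system for the grid point locations. The standard Thomas algorithm such a solver relies on can be sketched as follows; the matrix in the example is a generic symmetric system, not SAGE's actual spring equations:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (a[0] and c[-1] unused)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# generic example: [[2,-1,0],[-1,2,-1],[0,-1,2]] x = [1,0,1], solution x = [1,1,1]
x = thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0],
           [1.0, 0.0, 1.0])
```

The O(n) cost of this solve is what makes the sequential, line-by-line adaption strategy cheap compared with solving a coupled multi-dimensional system.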
Ryan, P C; Hillier, S; Wall, A J
2008-12-15
Sequential extraction procedures (SEPs) are commonly used to determine speciation of trace metals in soils and sediments. However, the non-selectivity of reagents for targeted phases has remained a lingering concern. Furthermore, potentially reactive phases such as phyllosilicate clay minerals often contain trace metals in structural sites, and their reactivity has not been quantified. Accordingly, the objective of this study is to analyze the behavior of trace metal-bearing clay minerals exposed to the revised BCR 3-step plus aqua regia SEP. Mineral quantification based on stoichiometric analysis and quantitative powder X-ray diffraction (XRD) documents progressive dissolution of chlorite (CCa-2 ripidolite) and two varieties of smectite (SapCa-2 saponite and SWa-1 nontronite) during steps 1-3 of the BCR procedure. In total, 8 (+/-1) % of ripidolite, 19 (+/-1) % of saponite, and 19 (+/-3) % of nontronite (% mineral mass) dissolved during extractions assumed by many researchers to release trace metals from exchange sites, carbonates, hydroxides, sulfides and organic matter. For all three reference clays, release of Ni into solution is correlated with clay dissolution. Hydrolysis of relatively weak Mg-O bonds (362 kJ/mol) during all stages, reduction of Fe(III) during hydroxylamine hydrochloride extraction and oxidation of Fe(II) during hydrogen peroxide extraction are the main reasons for clay mineral dissolution. These findings underscore the need for precise mineral quantification when using SEPs to understand the origin/partitioning of trace metals with solid phases. PMID:18951614
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...
Vassault, A; Arnaud, J; Szymanovicz, A
2010-12-01
Examination procedures have to be written for each examination according to the standard requirements. When CE-marked devices are used, their technical inserts can serve as documentation, but because these inserts lack homogeneity, it may be easier to document their use as a standard procedure. The document control policy applies to those procedures, the content of which could be as provided in this document. Electronic manuals can be used as well. PMID:21613016
Numerical Boundary Condition Procedures
NASA Technical Reports Server (NTRS)
1981-01-01
Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.
Adaptive process control using fuzzy logic and genetic algorithms
NASA Technical Reports Server (NTRS)
Karr, C. L.
1993-01-01
Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
Adaptive Process Control with Fuzzy Logic and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Karr, C. L.
1993-01-01
Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
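The GA half of such a system can be sketched in miniature. For brevity the fuzzy logic controller is replaced here by a hypothetical two-gain control law on a first-order plant; the GA loop (elitist selection, blend crossover, Gaussian mutation) is the part the abstracts describe, and all numerical settings are illustrative:

```python
import random

def simulate(kp, ki, steps=200, dt=0.05):
    """Track a unit setpoint with a first-order plant x' = -x + u."""
    x, integ, err_sum = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - x
        integ += e * dt
        u = kp * e + ki * integ          # stand-in for the fuzzy controller
        x += dt * (-x + u)               # explicit Euler plant update
        err_sum += abs(e) * dt
    return err_sum                       # fitness: integrated tracking error

def ga_tune(pop_size=20, gens=30, seed=1):
    """Minimal GA: keep the fitter half, breed children by blend crossover
    plus Gaussian mutation, and iterate for a fixed number of generations."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=lambda g: simulate(*g))[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            child = tuple(max(0.0, 0.5 * (a + b) + rng.gauss(0, 0.3))
                          for a, b in zip(p1, p2))
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda g: simulate(*g))

best = ga_tune()
```

In a full adaptive controller of the kind described, the genome would encode the fuzzy membership functions rather than two scalar gains, but the search loop is the same.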
Topology and grid adaption for high-speed flow computations
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Tiwari, Surendra N.
1989-01-01
This study investigates the effects of grid topology and grid adaptation on numerical solutions of the Navier-Stokes equations. In the first part of this study, a general procedure is presented for computation of high-speed flow over complex three-dimensional configurations. The flow field is simulated on the surface of a Butler wing in a uniform stream. Results are presented for Mach number 3.5 and a Reynolds number of 2,000,000. The O-type and H-type grids have been used for this study, and the results are compared with each other and with other theoretical and experimental results. The results demonstrate that while the H-type grid is suitable for the leading and trailing edges, a more accurate solution can be obtained for the middle part of the wing with an O-type grid. In the second part of this study, methods of grid adaption are reviewed and a method is developed with the capability of adapting to several variables. This method is based on a variational approach and is an algebraic method. Also, the method has been formulated in such a way that there is no need for any matrix inversion. This method is used in conjunction with the calculation of hypersonic flow over a blunt-nose body. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.
NASA Astrophysics Data System (ADS)
Fukuda, Ryoichi; Ehara, Masahiro
2014-10-01
Solvent effects on electronic excitation spectra are considerable in many situations; therefore, we propose an efficient and reliable computational scheme that is based on the symmetry-adapted cluster-configuration interaction (SAC-CI) method and the polarizable continuum model (PCM) for describing electronic excitations in solution. The new scheme combines the recently proposed first-order PCM SAC-CI method with the PTE (perturbation theory at the energy level) PCM SAC scheme. This is essentially equivalent to the usual SAC and SAC-CI computations with using the PCM Hartree-Fock orbital and integrals, except for the additional correction terms that represent solute-solvent interactions. The test calculations demonstrate that the present method is a very good approximation of the more costly iterative PCM SAC-CI method for excitation energies of closed-shell molecules in their equilibrium geometry. This method provides very accurate values of electric dipole moments but is insufficient for describing the charge-transfer (CT) indices in polar solvent. The present method accurately reproduces the absorption spectra and their solvatochromism of push-pull type 2,2'-bithiophene molecules. Significant solvent and substituent effects on these molecules are intuitively visualized using the CT indices. The present method is the simplest and theoretically consistent extension of SAC-CI method for including PCM environment, and therefore, it is useful for theoretical and computational spectroscopy.
Fukuda, Ryoichi; Ehara, Masahiro
2014-10-21
Solvent effects on electronic excitation spectra are considerable in many situations; therefore, we propose an efficient and reliable computational scheme that is based on the symmetry-adapted cluster-configuration interaction (SAC-CI) method and the polarizable continuum model (PCM) for describing electronic excitations in solution. The new scheme combines the recently proposed first-order PCM SAC-CI method with the PTE (perturbation theory at the energy level) PCM SAC scheme. This is essentially equivalent to the usual SAC and SAC-CI computations with using the PCM Hartree-Fock orbital and integrals, except for the additional correction terms that represent solute-solvent interactions. The test calculations demonstrate that the present method is a very good approximation of the more costly iterative PCM SAC-CI method for excitation energies of closed-shell molecules in their equilibrium geometry. This method provides very accurate values of electric dipole moments but is insufficient for describing the charge-transfer (CT) indices in polar solvent. The present method accurately reproduces the absorption spectra and their solvatochromism of push-pull type 2,2′-bithiophene molecules. Significant solvent and substituent effects on these molecules are intuitively visualized using the CT indices. The present method is the simplest and theoretically consistent extension of SAC-CI method for including PCM environment, and therefore, it is useful for theoretical and computational spectroscopy.
Interdisciplinarity in Adapted Physical Activity
ERIC Educational Resources Information Center
Bouffard, Marcel; Spencer-Cavaliere, Nancy
2016-01-01
It is commonly accepted that inquiry in adapted physical activity involves the use of different disciplines to address questions. It is often advanced today that complex problems of the kind frequently encountered in adapted physical activity require a combination of disciplines for their solution. At the present time, individual research…
Adaptive Assessment for Nonacademic Secondary Reading.
ERIC Educational Resources Information Center
Hittleman, Daniel R.
Adaptive assessment procedures are a means of determining the quality of a reader's performance in a variety of reading situations and on a variety of written materials. Such procedures are consistent with the idea that there are functional competencies which change with the reading task. Adaptive assessment takes into account that a lack of…
Refined numerical solution of the transonic flow past a wedge
NASA Technical Reports Server (NTRS)
Liang, S.-M.; Fung, K.-Y.
1985-01-01
A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.
NASA Technical Reports Server (NTRS)
Banks, D. W.; Hafez, M. M.
1996-01-01
Grid adaptation for structured meshes is the art of using information from an existing, but poorly resolved, solution to automatically redistribute the grid points in such a way as to improve the resolution in regions of high error, and thus the quality of the solution. This involves: (1) generate a grid via some standard algorithm, (2) calculate a solution on this grid, (3) adapt the grid to this solution, (4) recalculate the solution on this adapted grid, and (5) repeat steps 3 and 4 to satisfaction. Steps 3 and 4 can be repeated until some 'optimal' grid is converged to, but typically this is not worth the effort and just two or three repeat calculations are necessary. They may also be repeated every 5-10 time steps for unsteady calculations.
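The five steps amount to a simple driver loop. The sketch below uses a toy one-dimensional "solver" and a midpoint-insertion "adapter" purely to make the loop runnable; all of the function names and the feature location are hypothetical:

```python
def adaptive_solve(generate_grid, solve, adapt, n_cycles=3):
    """Steps 1-5: build a grid, solve, then alternate adapt/re-solve."""
    grid = generate_grid()
    solution = solve(grid)
    for _ in range(n_cycles):            # two or three cycles usually suffice
        grid = adapt(grid, solution)
        solution = solve(grid)
    return grid, solution

def generate_grid():
    return [i / 10.0 for i in range(11)]

def solve(grid):
    # toy "solution" with a sharp feature near x = 0.55
    return [0.0 if x < 0.55 else 1.0 for x in grid]

def adapt(grid, u, n_insert=5):
    # insert midpoints into the intervals with the largest jump in u
    jumps = sorted(range(len(grid) - 1),
                   key=lambda i: abs(u[i + 1] - u[i]), reverse=True)
    new = grid + [0.5 * (grid[i] + grid[i + 1]) for i in jumps[:n_insert]]
    return sorted(new)

grid, u = adaptive_solve(generate_grid, solve, adapt)
```

Each cycle concentrates new points around the sharp feature, which is exactly the behavior one repeats "to satisfaction" in step 5.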
Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.
2008-01-01
Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
Investigations in adaptive processing of multispectral data
NASA Technical Reports Server (NTRS)
Kriegler, F. J.; Horwitz, H. M.
1973-01-01
Adaptive data processing procedures are applied to the problem of classifying objects in a scene scanned by a multispectral sensor. These procedures show a performance improvement over standard nonadaptive techniques. Some sources of error in classification are identified and those correctable by adaptive processing are discussed. Experiments in adaptation of signature means by decision-directed methods are described. Some of these methods assume correlation between the trajectories of different signature means; for others this assumption is not made.
Prism adaptation in schizophrenia.
Bigelow, Nirav O; Turner, Beth M; Andreasen, Nancy C; Paulsen, Jane S; O'Leary, Daniel S; Ho, Beng-Choon
2006-08-01
The prism adaptation test examines procedural learning (PL) in which performance facilitation occurs with practice on tasks without the need for conscious awareness. Dynamic interactions between frontostriatal cortices, basal ganglia, and the cerebellum have been shown to play key roles in PL. Disruptions within these neural networks have also been implicated in schizophrenia, and such disruptions may manifest as impairment in prism adaptation test performance in schizophrenia patients. This study examined prism adaptation in a sample of patients diagnosed with schizophrenia (N=91) and healthy normal controls (N=58). Quantitative indices of performance during prism adaptation conditions with and without visual feedback were studied. Schizophrenia patients were significantly more impaired in adapting to prism distortion and demonstrated poorer quality of PL. Patients did not differ from healthy controls on aftereffects when the prisms were removed, but they had significantly greater difficulties in reorientation. Deficits in prism adaptation among schizophrenia patients may be due to abnormalities in motor programming arising from the disruptions within the neural networks that subserve PL. PMID:16510223
Adaptive explicit and implicit finite element methods for transient thermal analysis
NASA Technical Reports Server (NTRS)
Probert, E. J.; Hassan, O.; Morgan, K.; Peraire, J.
1992-01-01
The application of adaptive finite element methods to the solution of transient heat conduction problems in two dimensions is investigated. The computational domain is represented by an unstructured assembly of linear triangular elements and the mesh adaptation is achieved by local regeneration of the grid, using an error estimation procedure coupled to an automatic triangular mesh generator. Two alternative solution procedures are considered. In the first procedure, the solution is advanced by explicit timestepping, with domain decomposition being used to improve the computational efficiency of the method. In the second procedure, an algorithm for constructing continuous lines which pass only once through each node of the mesh is employed. The lines are used as the basis of a fully implicit method, in which the equation system is solved by line relaxation using a block tridiagonal equation solver. The numerical performance of the two procedures is compared for the analysis of a problem involving a moving heat source applied to a convectively cooled cylindrical leading edge.
Vortex-dominated conical-flow computations using unstructured adaptively-refined meshes
NASA Technical Reports Server (NTRS)
Batina, John T.
1989-01-01
A conical Euler/Navier-Stokes algorithm is presented for the computation of vortex-dominated flows. The flow solver involves a multistage Runge-Kutta time stepping scheme which uses a finite-volume spatial discretization on an unstructured grid made up of triangles. The algorithm also employs an adaptive mesh refinement procedure which enriches the mesh locally to more accurately resolve the vortical flow features. Results are presented for several highly-swept delta wing and circular cone cases at high angles of attack and at supersonic freestream flow conditions. Accurate solutions were obtained more efficiently when adaptive mesh refinement was used in contrast with refining the grid globally. The paper presents descriptions of the conical Euler/Navier-Stokes flow solver and adaptive mesh refinement procedures along with results which demonstrate the capability.
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. Unfortunately, an efficient parallel implementation is difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive large-scale numerical computations in a message-passing environment. First, we present an efficient parallel implementation of a tetrahedral mesh adaption scheme. Extremely promising parallel performance is achieved for various refinement and coarsening strategies on a realistic-sized domain. Next we describe PLUM, a novel method for dynamically balancing the processor workloads in adaptive grid computations. This research includes interfacing the parallel mesh adaption procedure based on actual flow solutions to a data remapping module, and incorporating an efficient parallel mesh repartitioner. A significant runtime improvement is achieved by observing that data movement for a refinement step should be performed after the edge-marking phase but before the actual subdivision. We also present optimal and heuristic remapping cost metrics that can accurately predict the total overhead for data redistribution. Several experiments are performed to verify the effectiveness of PLUM on sequences of dynamically adapted unstructured grids. Portability is demonstrated by presenting results on the two vastly different architectures of the SP2 and the Origin2000. Additionally, we evaluate the performance of five state-of-the-art partitioning algorithms that can be used within PLUM. It is shown that for certain classes of unsteady adaption, globally repartitioning the computational mesh produces higher quality results than diffusive repartitioning schemes. We also demonstrate that a coarse starting mesh produces high quality load balancing, at
Adaptive triangular mesh generation
NASA Technical Reports Server (NTRS)
Erlebacher, G.; Eiseman, P. R.
1984-01-01
A general adaptive grid algorithm is developed on triangular grids. The adaptivity is provided by a combination of node addition, dynamic node connectivity and a simple node movement strategy. While the local restructuring process and the node addition mechanism take place in the physical plane, the nodes are displaced on a monitor surface, constructed from the salient features of the physical problem. An approximation to mean curvature detects changes in the direction of the monitor surface, and provides the pulling force on the nodes. Solutions to the axisymmetric Grad-Shafranov equation demonstrate the capturing, by triangles, of the plasma-vacuum interface in a free-boundary equilibrium configuration.
NASA Astrophysics Data System (ADS)
Davies, D. R.; Davies, J. H.; Hassan, O.; Morgan, K.; Nithiarasu, P.
2007-05-01
An adaptive finite element procedure is presented for improving the quality of solutions to convection-dominated problems in geodynamics. The method adapts the mesh automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. The approach requires the coupling of an automatic mesh generator, a finite element flow solver, and an error estimator. In this study, the procedure is implemented in conjunction with the well-known geodynamical finite element code ConMan. An unstructured quadrilateral mesh generator is utilized, with mesh adaptation accomplished through regeneration. This regeneration employs information provided by an interpolation-based local error estimator, obtained from the computed solution on an existing mesh. The technique is validated by solving thermal and thermochemical problems with well-established benchmark solutions. In a purely thermal context, results illustrate that the method is highly successful, improving solution accuracy while increasing computational efficiency. For thermochemical simulations the same conclusions can be drawn. However, results also demonstrate that the grid-based methods employed for simulating the compositional field are not competitive with the other methods (tracer particle and marker chain) currently employed in this field, even at the higher spatial resolutions allowed by the adaptive grid strategies.
Ramponi, Denise R
2016-01-01
Dental problems are a common complaint in emergency departments in the United States. There are a wide variety of dental issues addressed in emergency department visits such as dental caries, loose teeth, dental trauma, gingival infections, and dry socket syndrome. Review of the most common dental blocks and dental procedures will allow the practitioner the opportunity to make the patient more comfortable and reduce the amount of analgesia the patient will need upon discharge. Familiarity with the dental equipment, tooth, and mouth anatomy will help prepare the practitioner to perform these dental procedures. PMID:27482994
The development and application of the self-adaptive grid code, SAGE
NASA Technical Reports Server (NTRS)
Davies, Carol B.
1993-01-01
The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.
Adaptive Force Control in Compliant Motion
NASA Technical Reports Server (NTRS)
Seraji, H.
1994-01-01
This paper addresses the problem of controlling a manipulator in compliant motion while in contact with an environment having an unknown stiffness. Two classes of solutions are discussed: adaptive admittance control and adaptive compliance control. In both admittance and compliance control schemes, compensator adaptation is used to ensure a stable and uniform system performance.
Error analysis of finite element solutions for postbuckled cylinders
NASA Technical Reports Server (NTRS)
Sistla, Rajaram; Thurston, Gaylen A.
1989-01-01
A general method of error analysis and correction is investigated for the discrete finite-element results for cylindrical shell structures. The method for error analysis is an adaptation of the method of successive approximation. When applied to the equilibrium equations of shell theory, successive approximations derive an approximate continuous solution from the discrete finite-element results. The advantage of this continuous solution is that it contains continuous partial derivatives of an order higher than the basis functions of the finite-element solution. Preliminary numerical results are presented in this paper for the error analysis of finite-element results for a postbuckled stiffened cylindrical panel modeled by a general purpose shell code. Numerical results from the method have previously been reported for postbuckled stiffened plates. A procedure for correcting the continuous approximate solution by Newton's method is outlined.
Climate Literacy and Adaptation Solutions for Society
NASA Astrophysics Data System (ADS)
Sohl, L. E.; Chandler, M. A.
2011-12-01
Many climate literacy programs and resources are targeted specifically at children and young adults, as part of the concerted effort to improve STEM education in the U.S. This work is extremely important in building a future society that is well prepared to adopt policies promoting climate change resilience. What these climate literacy efforts seldom do, however, is reach the older adult population that is making economic decisions right now (or not, as the case may be) on matters that can be impacted by climate change. The result is a lack of appreciation of "climate intelligence" - information that could be incorporated into the decision-making process, to maximize opportunities, minimize risk, and create a climate-resilient economy. A National Climate Service, akin to the National Weather Service, would help provide legitimacy to the need for climate intelligence, and would certainly also be the first stop for both governments and private sector concerns seeking climate information for operational purposes. However, broader collaboration between the scientific and business communities is also needed, so that they become co-creators of knowledge that is beneficial and informative to all. The stakeholder-driven research that is the focus of NOAA's RISA (Regional Integrated Sciences and Assessments) projects is one example of how such collaborations can be developed.
QUEST - A Bayesian adaptive psychometric method
NASA Technical Reports Server (NTRS)
Watson, A. B.; Pelli, D. G.
1983-01-01
An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
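The trial-placement loop described above can be sketched in a few lines (an illustrative Python reconstruction, not the authors' implementation; the Weibull parameters, prior, grid range, and trial count are assumptions):

```python
import numpy as np

def psychometric(intensity, threshold, slope=3.5, guess=0.5, lapse=0.01):
    """Weibull psychometric function of log intensity (assumed form)."""
    p = 1.0 - np.exp(-10.0 ** (slope * (intensity - threshold)))
    return guess + (1.0 - guess - lapse) * p

def quest_estimate(true_threshold, n_trials=200, seed=0):
    """Place each trial at the current Bayesian estimate of threshold."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(-1.0, 1.0, 401)            # candidate log thresholds
    posterior = np.exp(-grid**2 / (2 * 0.5**2))   # broad prior
    posterior /= posterior.sum()
    for _ in range(n_trials):
        test = grid @ posterior                   # posterior-mean placement
        seen = rng.random() < psychometric(test, true_threshold)
        like = psychometric(test, grid)           # likelihood over the grid
        posterior *= like if seen else (1.0 - like)
        posterior /= posterior.sum()
    return grid @ posterior                       # final threshold estimate
```

Because trials concentrate near the current estimate, the posterior sharpens quickly, which is the source of the procedure's efficiency.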
Adaptive Algebraic Multigrid Methods
Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J
2004-04-09
Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.
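The multigrid hierarchy the abstract refers to can be illustrated with a geometric two-grid cycle for the 1-D Poisson equation (a deliberately simple sketch; the paper's adaptive AMG builds its coarse levels from the matrix alone rather than from a known grid):

```python
import numpy as np

def two_grid_poisson(n=63, cycles=20):
    """Two-grid V-cycle for -u'' = f on (0,1), u(0)=u(1)=0.
    Returns the max error against the exact solution sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)               # manufactured right-hand side
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    def smooth(u, sweeps=3, w=2.0 / 3.0):
        for _ in range(sweeps):                    # weighted Jacobi, D = 2/h^2
            u = u + w * (f - A @ u) * h**2 / 2.0
        return u
    nc = (n - 1) // 2
    P = np.zeros((n, nc))                          # linear interpolation
    for j in range(nc):
        P[2*j, j], P[2*j + 1, j], P[2*j + 2, j] = 0.5, 1.0, 0.5
    R = P.T / 2.0                                  # full-weighting restriction
    Ac = R @ A @ P                                 # Galerkin coarse operator
    u = np.zeros(n)
    for _ in range(cycles):
        u = smooth(u)                              # pre-smooth
        u = u + P @ np.linalg.solve(Ac, R @ (f - A @ u))  # coarse correction
        u = smooth(u)                              # post-smooth
    return np.max(np.abs(u - np.sin(np.pi * x)))
```

The smoother damps oscillatory error while the coarse-grid correction removes smooth error; algebraic multigrid replaces the geometric `P` above with one inferred from the matrix's near-null space, which is exactly the assumption the adaptive method in this paper learns rather than prescribes.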
Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo
2016-04-01
The proposed methodology was originally developed by our scientific team in Split, which designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales; 2) multiresolution presentation of heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multiscale nature of the methodology enables not only computational efficiency and accuracy, but also describes subsurface processes closely related to their understood physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we will show recent improvements within the proposed methodology. Since "state of the art" multiresolution approaches usually use the method of lines and only a spatial adaptive procedure, temporal approximation has rarely been considered as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in lines where solution changes are intensive. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across
Ghosh, A
1989-06-15
There are two different approaches for improving the accuracy of analog optical associative processors: postprocessing with a bimodal system and preprocessing with a preconditioner. These two approaches can be combined to develop an adaptive optical multiprocessor that can adjust the computational steps depending on the data and produce solutions of linear algebra problems with a specified accuracy in a given amount of time. PMID:19752909
Application of Sequential Interval Estimation to Adaptive Mastery Testing
ERIC Educational Resources Information Center
Chang, Yuan-chin Ivan
2005-01-01
In this paper, we apply sequential one-sided confidence interval estimation procedures with beta-protection to adaptive mastery testing. The procedures of fixed-width and fixed proportional accuracy confidence interval estimation can be viewed as extensions of one-sided confidence interval procedures. It can be shown that the adaptive mastery…
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
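The adjoint error-estimation idea above can be demonstrated on a linear model problem (an illustrative sketch, not the paper's Euler solver). For a functional J(u) = g·u of the solution of Au = f, solving the adjoint system Aᵀψ = g turns the residual of an approximate solution u_h into a computable functional error, J(u) − J(u_h) = ψ·(f − A u_h):

```python
import numpy as np

# Model setup: a random well-conditioned linear system (illustrative).
rng = np.random.default_rng(1)
n = 50
A = np.eye(n) * 4.0 + rng.standard_normal((n, n)) * 0.1
f = rng.standard_normal(n)
g = rng.standard_normal(n)     # defines the output functional J(u) = g @ u

u_exact = np.linalg.solve(A, f)
u_h = u_exact + rng.standard_normal(n) * 0.01   # stand-in "discrete" solution

psi = np.linalg.solve(A.T, g)                   # adjoint solve
estimate = psi @ (f - A @ u_h)                  # computable error estimate
true_err = g @ u_exact - g @ u_h                # what we want but cannot know
```

For a linear problem and linear functional the estimate is exact; in the nonlinear CFD setting it is a leading-order correction, and its remaining uncertainty is what drives the mesh adaptation.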
Adaptive building skin structures
NASA Astrophysics Data System (ADS)
Del Grosso, A. E.; Basso, P.
2010-12-01
The concept of adaptive and morphing structures has gained considerable attention in recent years in many fields of engineering. In civil engineering, however, very few practical applications have been reported to date. Non-conventional structural concepts like deployable, inflatable and morphing structures may indeed provide innovative solutions to some of the problems that the construction industry is being called to face. To give some examples, searches for low-energy-consumption or even energy-harvesting green buildings are amongst such problems. This paper first presents a review of the above problems and technologies, which shows how the solution to these problems requires a multidisciplinary approach, involving the integration of architectural and engineering disciplines. The discussion continues with the presentation of a possible application of two adaptive and dynamically morphing structures which are proposed for the realization of an acoustic envelope. The core of the two applications is the use of a novel optimization process which leads the search for optimal solutions by means of an evolutionary technique, while the compatibility of the resulting configurations of the adaptive envelope is ensured by the virtual force density method.
Patel, Aalpen A; Glaiberman, Craig; Gould, Derek A
2007-06-01
In the past few decades, medicine has started to look at the potential use of simulators in medical education. Procedural medicine lends itself well to the use of simulators. Efforts are under way to establish national agendas to change the way medical education is approached and thereby improve patient safety. Universities, credentialing organizations, and hospitals are investing large sums of money to build and use simulation centers for undergraduate and graduate medical education. PMID:17574195
Cyclic creep analysis from elastic finite-element solutions
NASA Technical Reports Server (NTRS)
Kaufman, A.; Hwang, S. Y.
1986-01-01
A uniaxial approach was developed for calculating cyclic creep and stress relaxation at the critical location of a structure subjected to cyclic thermomechanical loading. This approach was incorporated into a simplified analytical procedure for predicting the stress-strain history at a crack initiation site for life prediction purposes. An elastic finite-element solution for the problem was used as input for the simplified procedure. The creep analysis includes a self-adaptive time incrementing scheme. Cumulative creep is the sum of the initial creep, the recovery from the stress relaxation and the incremental creep. The simplified analysis was exercised for four cases involving a benchmark notched plate problem. Comparisons were made with elastic-plastic-creep solutions for these cases using the MARC nonlinear finite-element computer code.
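A self-adaptive time-incrementing scheme of the kind mentioned above can be sketched with step doubling on a simple stress-relaxation law (the material model, constants, and controller are illustrative, not the report's):

```python
import math

def relax_stress(s0, k, t_end, n=1.0, tol=1e-8):
    """Integrate stress relaxation d(sigma)/dt = -k * sigma**n with an
    adaptive time step chosen from a step-doubling error estimate."""
    def rate(s):
        return -k * s ** n
    s, t = s0, 0.0
    dt = t_end / 100.0
    while t < t_end:
        dt = min(dt, t_end - t)
        full = s + dt * rate(s)                  # one Euler step
        half = s + 0.5 * dt * rate(s)            # two half steps
        two = half + 0.5 * dt * rate(half)
        err = abs(two - full)                    # local error estimate
        if err <= tol:                           # accept with extrapolation
            s, t = 2.0 * two - full, t + dt
        # grow or shrink the next step from the estimate
        dt = max(dt * 0.9 * math.sqrt(tol / (err + 1e-30)), 1e-12)
    return s
```

The step size shrinks where the solution changes rapidly (early relaxation) and grows as the response flattens, which is the purpose of the self-adaptive incrementing in the creep analysis.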
Adaptive rezoner in a two-dimensional Lagrangian hydrodynamic code
Pyun, J.J.; Saltzman, J.S.; Scannapieco, A.J.; Carroll, D.
1985-01-01
In an effort to increase spatial resolution without adding additional meshes, an adaptive mesh was incorporated into a two-dimensional Lagrangian hydrodynamics code along with a two-dimensional flux-corrected transport (FCT) remapper. The adaptive mesh automatically generates a mesh based on smoothness and orthogonality, and at the same time also tracks physical conditions of interest by focusing mesh points in regions that exhibit those conditions; this is done by defining a weighting function associated with the physical conditions to be tracked. The FCT remapper calculates the net transportive fluxes based on a weighted average of two fluxes computed by a low-order scheme and a high-order scheme. This averaging procedure produces solutions which are conservative and nondiffusive, and maintains positivity. 10 refs., 12 figs.
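The low-order/high-order flux blending described in the abstract can be sketched in one dimension with a Zalesak-style limiter (a generic textbook reconstruction, not the report's exact remapper):

```python
import numpy as np

def fct_advect_step(u, c):
    """One FCT step for u_t + a u_x = 0 on a periodic grid, with Courant
    number c = a*dt/dx (0 < c < 1). Interface i sits between cells i, i+1;
    all fluxes below are pre-multiplied by dt/dx."""
    fl = c * u                                           # low-order (upwind) flux
    fh = c * (u + 0.5 * (1 - c) * (np.roll(u, -1) - u))  # high-order (Lax-Wendroff)
    ad = fh - fl                                         # antidiffusive flux
    utd = u - (fl - np.roll(fl, 1))                      # transported-diffused field
    umax = np.maximum.reduce([np.roll(utd, 1), utd, np.roll(utd, -1)])
    umin = np.minimum.reduce([np.roll(utd, 1), utd, np.roll(utd, -1)])
    # sums of antidiffusive fluxes into (+) and out of (-) each cell
    ppos = np.maximum(np.roll(ad, 1), 0) - np.minimum(ad, 0)
    pneg = np.maximum(ad, 0) - np.minimum(np.roll(ad, 1), 0)
    rpos = np.where(ppos > 0, np.minimum(1, (umax - utd) / (ppos + 1e-30)), 0)
    rneg = np.where(pneg > 0, np.minimum(1, (utd - umin) / (pneg + 1e-30)), 0)
    # per-interface weight: keep as much high-order flux as the bounds allow
    cc = np.where(ad >= 0,
                  np.minimum(np.roll(rpos, -1), rneg),
                  np.minimum(rpos, np.roll(rneg, -1)))
    g = cc * ad
    return utd - (g - np.roll(g, 1))
```

The limiter weight `cc` interpolates between the positivity-preserving low-order flux (`cc = 0`) and the accurate high-order flux (`cc = 1`), which is exactly the conservative, nondiffusive, positivity-maintaining average the abstract claims.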
Adaptive methods and parallel computation for partial differential equations. Final report
Biswas, R.; Benantar, M.; Flaherty, J.E.
1992-05-01
Consider the adaptive solution of two-dimensional vector systems of hyperbolic and elliptic partial differential equations on shared-memory parallel computers. Hyperbolic systems are approximated by an explicit finite volume technique and solved by a recursive local mesh refinement procedure on a tree-structured grid. Local refinement of the time steps and spatial cells of a coarse base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. Computational procedures that sequentially traverse the tree while processing solutions on each grid in parallel, that process solutions at the same tree level in parallel, and that dynamically assign processors to nodes of the tree have been developed and applied to an example. Computational results comparing a variety of heuristic processor load balancing techniques and refinement strategies are presented.
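The refine-where-the-indicator-exceeds-a-tolerance loop above can be illustrated in one dimension (a minimal sketch with an invented midpoint-interpolation-error indicator; the report works on tree-structured 2-D meshes with space-time refinement):

```python
import math

def adapt_intervals(f, a, b, tol, max_depth=12):
    """Recursively bisect [a, b] wherever a simple refinement indicator
    (the midpoint error of linear interpolation of f) exceeds tol.
    Returns the list of leaf cells, i.e. the adapted 1-D mesh."""
    mid = 0.5 * (a + b)
    indicator = abs(f(mid) - 0.5 * (f(a) + f(b)))
    if indicator <= tol or max_depth == 0:
        return [(a, b)]                      # cell is fine enough: keep it
    return (adapt_intervals(f, a, mid, tol, max_depth - 1)
            + adapt_intervals(f, mid, b, tol, max_depth - 1))
```

Running this on a function with a sharp internal layer concentrates small cells around the layer while leaving the smooth regions coarse, which is the cost advantage of local refinement over uniform refinement.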
NASA Technical Reports Server (NTRS)
2005-01-01
The goal of this research is to develop and demonstrate innovative adaptive seal technologies that can lead to dramatic improvements in engine performance, life, range, and emissions, and enhance operability for next generation gas turbine engines. This work is concentrated on the development of self-adaptive clearance control systems for gas turbine engines. Researchers have targeted the high-pressure turbine (HPT) blade tip seal location for the following reasons: Current active clearance control (ACC) systems (e.g., thermal case-cooling schemes) cannot respond to blade tip clearance changes due to mechanical, thermal, and aerodynamic loads. As such, they are prone to wear due to the required tight running clearances during operation. Blade tip seal wear (increased clearances) reduces engine efficiency, performance, and service life. Adaptive sealing technology research has inherent impact on all envisioned 21st century propulsion systems (e.g., distributed vectored, hybrid and electric drive propulsion concepts).
Knowledge Retrieval Solutions.
ERIC Educational Resources Information Center
Khan, Kamran
1998-01-01
Excalibur RetrievalWare offers true knowledge retrieval solutions. Its fundamental technologies, Adaptive Pattern Recognition Processing and Semantic Networks, have capabilities for knowledge discovery and knowledge management of full-text, structured and visual information. The software delivers a combination of accuracy, extensibility,…
ERIC Educational Resources Information Center
Wedman, John; Wedman, Judy
1985-01-01
The "Animals" program found on the Apple II and IIe system master disk can be adapted for use in the mathematics classroom. Instructions for making the necessary changes and suggestions for using it in lessons related to geometric shapes are provided. (JN)
Bremer, P. -T.
2014-08-26
ADAPT is a topological analysis code that allows the computation of local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.
Davies, Kelvin J A
2016-06-01
Homeostasis is a central pillar of modern Physiology. The term homeostasis was invented by Walter Bradford Cannon in an attempt to extend and codify the principle of 'milieu intérieur,' or a constant interior bodily environment, that had previously been postulated by Claude Bernard. Clearly, 'milieu intérieur' and homeostasis have served us well for over a century. Nevertheless, research on signal transduction systems that regulate gene expression, or that cause biochemical alterations to existing enzymes, in response to external and internal stimuli, makes it clear that biological systems are continuously making short-term adaptations both to set-points, and to the range of 'normal' capacity. These transient adaptations typically occur in response to relatively mild changes in conditions, to programs of exercise training, or to sub-toxic, non-damaging levels of chemical agents; thus, the terms hormesis, heterostasis, and allostasis are not accurate descriptors. Therefore, an operational adjustment to our understanding of homeostasis suggests that the modified term, Adaptive Homeostasis, may be useful especially in studies of stress, toxicology, disease, and aging. Adaptive Homeostasis may be defined as follows: 'The transient expansion or contraction of the homeostatic range in response to exposure to sub-toxic, non-damaging, signaling molecules or events, or the removal or cessation of such molecules or events.' PMID:27112802
NASA Technical Reports Server (NTRS)
Georgeff, Michael P.; Lansky, Amy L.
1986-01-01
Much of commonsense knowledge about the real world is in the form of procedures or sequences of actions for achieving particular goals. In this paper, a formalism is presented for representing such knowledge using the notion of process. A declarative semantics for the representation is given, which allows a user to state facts about the effects of doing things in the problem domain of interest. An operational semantics is also provided, which shows how this knowledge can be used to achieve particular goals or to form intentions regarding their achievement. Given both semantics, the formalism additionally serves as an executable specification language suitable for constructing complex systems. A system based on this formalism is described, and examples involving control of an autonomous robot and fault diagnosis for NASA's Space Shuttle are provided.
Development of a Countermeasure to Enhance Postflight Locomotor Adaptability
NASA Technical Reports Server (NTRS)
Bloomberg, Jacob J.
2006-01-01
Astronauts returning from space flight experience locomotor dysfunction following their return to Earth. Our laboratory is currently developing a gait adaptability training program that is designed to facilitate recovery of locomotor function following a return to a gravitational environment. The training program exploits the ability of the sensorimotor system to generalize from exposure to multiple adaptive challenges during training so that the gait control system essentially learns to learn and therefore can reorganize more rapidly when faced with a novel adaptive challenge. We have previously confirmed that subjects participating in adaptive generalization training programs using a variety of visuomotor distortions can enhance their ability to adapt to a novel sensorimotor environment. Importantly, this increased adaptability was retained even one month after completion of the training period. Adaptive generalization has been observed in a variety of other tasks requiring sensorimotor transformations, including manual control tasks and reaching (Bock et al., 2001; Seidler, 2003) and obstacle avoidance during walking (Lam and Dietz, 2004). Taken together, the evidence suggests that a training regimen exposing crewmembers to variation in locomotor conditions, with repeated transitions among states, may enhance their ability to learn how to reassemble appropriate locomotor patterns upon return from microgravity. We believe exposure to this type of training will extend crewmembers' locomotor behavioral repertoires, facilitating the return of functional mobility after long-duration space flight. Our proposed training protocol will compel subjects to develop new behavioral solutions under varying sensorimotor demands. Over time, subjects will learn to create appropriate locomotor solutions more rapidly, enabling acquisition of mobility sooner after long-duration space flight. Our laboratory is currently developing adaptive generalization training procedures and the
Organization of Distributed Adaptive Learning
ERIC Educational Resources Information Center
Vengerov, Alexander
2009-01-01
The growing sensitivity of various systems and parts of industry, society, and even everyday individual life leads to the increased volume of changes and needs for adaptation and learning. This creates a new situation where learning from being purely academic knowledge transfer procedure is becoming a ubiquitous always-on essential part of all…
Camera lens adapter magnifies image
NASA Technical Reports Server (NTRS)
Moffitt, F. L.
1967-01-01
Polaroid Land camera with an illuminated 7-power magnifier adapted to the lens, photographs weld flaws. The flaws are located by inspection with a 10-power magnifying glass and then photographed with this device, thus providing immediate pictorial data for use in remedial procedures.
Solutions For Smart Metering Under Harsh Environmental Conditions
NASA Astrophysics Data System (ADS)
Kunicina, N.; Zabasta, A.; Kondratjevs, K.; Asmanis, G.
2015-02-01
The described case study concerns the application of wireless sensor networks to the smart control of power supply substations. The solution proposed for metering is based on the modular principle and has been tested in the intersystem communication paradigm using selectable interface modules (IEEE 802.3, ISM radio interface, GSM/GPRS). The modularity of the solution yields 7 % savings in maintenance costs. The developed solution can be applied to the control of different critical infrastructure networks using adapted modules. The proposed smart metering is suitable for outdoor installation, indoor industrial installations, and operation under electromagnetic pollution and temperature and humidity impact. The results of tests have shown good electromagnetic compatibility of the prototype meter with other electronic devices. The metering procedure is exemplified by the operation of a testing company's workers under harsh environmental conditions.
NASA Astrophysics Data System (ADS)
Van Den Daele, W.; Malaquin, C.; Baumel, N.; Kononchuk, O.; Cristoloveanu, S.
2013-10-01
This paper revisits and adapts the pseudo-MOSFET (Ψ-MOSFET) characterization technique for advanced fully depleted silicon on insulator (FDSOI) wafers. We review the current challenges for the standard Ψ-MOSFET set-up on an ultra-thin body (12 nm) over an ultra-thin buried oxide (25 nm BOX) and propose a novel set-up enabling the technique on FDSOI structures. This novel configuration embeds 4 probes with large tip radius (100-200 μm) and low pressure to avoid oxide damage. Compared with previous 4-point probe measurements, we introduce a simplified and faster methodology together with an adapted Y-function. The models for parameter extraction are revisited and calibrated through systematic measurements of SOI wafers with variable film thickness. We propose an in-depth analysis of the FDSOI structure through comparison of experimental data, TCAD (Technology Computer Aided Design) simulations, and analytical modeling. TCAD simulations are used to unify previously reported thickness-dependent analytical models by analyzing the BOX/substrate potential and the electrical field in ultrathin films. Our updated analytical models are used to explain the results and to extract correct electrical parameters such as low-field electron and hole mobility, subthreshold slope, and film/BOX interface trap density.
Adaptive Units of Learning and Educational Videogames
ERIC Educational Resources Information Center
Moreno-Ger, Pablo; Thomas, Pilar Sancho; Martinez-Ortiz, Ivan; Sierra, Jose Luis; Fernandez-Manjon, Baltasar
2007-01-01
In this paper, we propose three different ways of using IMS Learning Design to support online adaptive learning modules that include educational videogames. The first approach relies on IMS LD to support adaptation procedures where the educational games are considered as Learning Objects. These games can be included instead of traditional content…
NASA Technical Reports Server (NTRS)
Hacker, Scott C. (Inventor); Dean, Richard J. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)
2007-01-01
An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.
Watson, Bobby L.; Aeby, Ian
1982-01-01
An adaptive data compression device for compressing data having variable frequency content, including a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
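The idea of matching the memory clock rate to the analyzed frequency content can be sketched as follows (an illustrative reconstruction; the patent uses a bank of digital filters and a variable-rate clock, whereas this sketch uses an FFT and returns a decimation factor, and all names and thresholds are invented):

```python
import numpy as np

def adaptive_rate(block, rates=(1, 2, 4, 8), threshold=0.01):
    """Pick a decimation factor for a data block from its spectral
    occupancy: low-bandwidth blocks can be stored at a lower rate."""
    spectrum = np.abs(np.fft.rfft(block))
    spectrum /= spectrum.max() + 1e-30            # normalize to the peak
    # highest frequency bin carrying significant energy
    occupied = np.nonzero(spectrum > threshold)[0].max()
    nyquist_bin = len(spectrum) - 1
    for r in sorted(rates, reverse=True):         # prefer strongest compression
        if occupied * r <= nyquist_bin:           # decimation preserves the band
            return r
    return 1
```

A slowly varying block is stored with 8x decimation while a block near the Nyquist limit keeps every sample, mirroring the patent's variable-rate clocking of data into memory.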
Watson, B.L.; Aeby, I.
1980-08-26
An adaptive data compression device for compressing data having variable frequency content is described. The device includes a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
Feline onychectomy and elective procedures.
Young, William Phillip
2002-05-01
The development of the carbon dioxide (CO2) surgical laser has given veterinarians a new perspective in the field of surgery. Recently developed techniques and improvisations of established procedures have opened the field of surgery to infinite applications never before dreamed of as little as 10 years ago. Today's CO2 surgical laser is an adaptable, indispensable tool for the everyday veterinary practitioner. Its use is becoming a common occurrence in offices of veterinarians around the world. PMID:12064043
Reentry vehicle adaptive telemetry
Kidner, R.E.
1993-09-01
In RF telemetry (TM), the allowable RF bandwidth limits the amount of data in the telemetered data set. Typically the data set is less than ideal to accommodate all aspects of a test. In the case of diagnostic data, the compromise often leaves insufficient diagnostic data when problems occur. As a solution, intelligence was designed into a TM, allowing it to adapt to changing data requirements. To minimize the computational requirements for an intelligent TM, a fuzzy logic inference engine was developed. This inference engine was simulated on a PC and then loaded into a TM hardware package for final testing.
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
Adaptation strategies for high order discontinuous Galerkin methods based on Tau-estimation
NASA Astrophysics Data System (ADS)
Kompenhans, Moritz; Rubio, Gonzalo; Ferrer, Esteban; Valero, Eusebio
2016-02-01
In this paper, three p-adaptation strategies based on the minimization of the truncation error are presented for high order discontinuous Galerkin methods. The truncation error is approximated by means of a τ-estimation procedure and enables the identification of mesh regions that require adaptation. Three adaptation strategies are developed and termed a posteriori, quasi-a priori and quasi-a priori corrected. All strategies require fine solutions, which are obtained by enriching the polynomial order, but while the former needs time-converged solutions, the last two rely on non-converged solutions, which leads to faster computations. In addition, the high order method permits spatial decoupling of the estimated errors and enables anisotropic p-adaptation. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. It is shown that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain a speedup of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
Evaluating Content Alignment in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Wise, Steven L.; Kingsbury, G. Gage; Webb, Norman L.
2015-01-01
The alignment between a test and the content domain it measures represents key evidence for the validation of test score inferences. Although procedures have been developed for evaluating the content alignment of linear tests, these procedures are not readily applicable to computerized adaptive tests (CATs), which require large item pools and do…
Adaptive sequential testing for multiple comparisons.
Gao, Ping; Liu, Lingyun; Mehta, Cyrus
2014-01-01
We propose a Markov process theory-based adaptive sequential testing procedure for multiple comparisons. The procedure can be used for confirmative trials involving multi-comparisons, including dose selection or population enrichment. Dose or subpopulation selection and sample size modification can be made at any interim analysis. Type I error control is exact. PMID:24926848
Adaptive sampling for noisy problems
Cantu-Paz, E
2004-03-26
The usual approach to deal with noise present in many real-world optimization problems is to take an arbitrary number of samples of the objective function and use the sample average as an estimate of the true objective value. The number of samples is typically chosen arbitrarily and remains constant for the entire optimization process. This paper studies an adaptive sampling technique that varies the number of samples based on the uncertainty of deciding between two individuals. Experiments demonstrate the effect of adaptive sampling on the final solution quality reached by a genetic algorithm and the computational cost required to find the solution. The results suggest that the adaptive technique can effectively eliminate the need to set the sample size a priori, but in many cases it requires high computational costs.
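The uncertainty-driven sampling idea described in the abstract above can be sketched in a few lines; the confidence threshold, sample bounds, and function names below are illustrative assumptions, not details taken from the paper.

```python
import random
import statistics

def adaptive_compare(f, a, b, z=1.96, min_samples=5, max_samples=200):
    """Sample the noisy objective f for individuals a and b until the
    difference in sample means is statistically distinguishable from
    zero, then return the apparent winner and the samples used per
    individual. z, min_samples and max_samples are illustrative."""
    sa, sb = [], []
    for _ in range(min_samples):
        sa.append(f(a))
        sb.append(f(b))
    while len(sa) < max_samples:
        d = statistics.mean(sa) - statistics.mean(sb)
        se = (statistics.variance(sa) / len(sa)
              + statistics.variance(sb) / len(sb)) ** 0.5
        if abs(d) > z * se:   # difference exceeds its uncertainty: stop
            break
        sa.append(f(a))       # still ambiguous: draw another sample each
        sb.append(f(b))
    winner = a if statistics.mean(sa) >= statistics.mean(sb) else b
    return winner, len(sa)
```

With a clear fitness gap the comparison stops after the minimum number of samples; near-ties consume the full budget, which is exactly the behavior that lets the sample size vary with decision uncertainty.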
School Solutions for Cyberbullying
ERIC Educational Resources Information Center
Sutton, Susan
2009-01-01
This article offers solutions and steps to prevent cyberbullying. Schools can improve their ability to handle cyberbullying by educating staff members, students, and parents and by implementing rules and procedures for how to handle possible incidents. Among the steps is to include a section about cyberbullying and expectations in the student…
Adaptive Texture Synthesis for Large Scale City Modeling
NASA Astrophysics Data System (ADS)
Despine, G.; Colleu, T.
2015-02-01
Large scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing knowledge about elementary patterns in a texture catalogue that allows attaching physical information and semantic attributes and executing selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the façades.
The Adaptability Evaluation of Enterprise Information Systems
NASA Astrophysics Data System (ADS)
Liu, Junjuan; Xue, Chaogai; Dong, Lili
In this paper, an evaluation system for enterprise information systems is constructed using GQM (Goal-Question-Metric). Then, based on the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), an evaluation model is proposed to evaluate the adaptability of enterprise information systems. Finally, the application of the evaluation system and model is demonstrated via a case study, which provides references for optimizing the adaptability of enterprise information systems.
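TOPSIS itself is a standard procedure (vector-normalize the decision matrix, apply weights, then rank alternatives by relative closeness to the ideal versus the anti-ideal alternative). A minimal sketch follows; the function and argument names are illustrative and not taken from the paper.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    matrix: rows = alternatives, columns = criteria.
    benefit[j]: True when larger values of criterion j are better."""
    m, n = len(matrix), len(matrix[0])
    # Vector normalization, then weighting
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # Ideal and anti-ideal alternatives, column by column
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient
    return scores
```

A higher score means the alternative sits closer to the ideal solution; in the paper's setting the alternatives would be candidate information-system configurations and the criteria the GQM-derived adaptability metrics.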
Regularized Estimate of the Weight Vector of an Adaptive Interference Canceller
NASA Astrophysics Data System (ADS)
Ermolayev, V. T.; Sorokin, I. S.; Flaksman, A. G.; Yastrebov, A. V.
2016-05-01
We consider an adaptive multi-channel interference canceller, which ensures the minimum value of the average output power of interference. It is proposed to form the weight vector of such a canceller as the power-vector expansion. It is shown that this approach allows one to obtain an exact analytical solution for the optimal weight vector by using the procedure of the power-vector orthogonalization. In the case of a limited number of the input-process samples, the solution becomes ill-defined and its regularization is required. An effective regularization method, which ensures a high degree of the interference suppression and does not involve the procedure of inversion of the correlation matrix of interference, is proposed, which significantly reduces the computational cost of the weight-vector estimation.
Countermeasures to Enhance Sensorimotor Adaptability
NASA Technical Reports Server (NTRS)
Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Brady, R. A.; Batson, C. C.; Miller, C. A.; Cohen, H. S.
2011-01-01
adaptability. These results indicate that SA training techniques can be added to existing treadmill exercise equipment and procedures to produce a single integrated countermeasure system to improve performance of astro/cosmonauts during prolonged exploratory space missions.
Analyzing Hedges in Verbal Communication: An Adaptation-Based Approach
ERIC Educational Resources Information Center
Wang, Yuling
2010-01-01
Based on Adaptation Theory, the article analyzes the production process of hedges. The procedure consists of the continuous making of choices in linguistic forms and communicative strategies. These choices are made just for adaptation to the contextual correlates. Besides, the adaptation process is dynamic, intentional and bidirectional.
Population extinction and the genetics of adaptation.
Orr, H Allen; Unckless, Robert L
2008-08-01
Theories of adaptation typically ignore the effect of environmental change on population size. But some environmental challenges--challenges to which populations must adapt--may depress absolute fitness below 1, causing populations to decline. Under this scenario, adaptation is a race; beneficial alleles that adapt a population to the new environment must sweep to high frequency before the population becomes extinct. We derive simple, though approximate, solutions to the probability of successful adaptation (population survival) when adaptation involves new mutations, the standing genetic variation, or a mixture of the two. Our results show that adaptation to such environmental challenges can be difficult when relying on new mutations at one or a few loci, and populations will often decline to extinction. PMID:18662122
Improved procedures for in vitro skin irritation testing of sticky and greasy natural botanicals.
Molinari, J; Eskes, C; Andres, E; Remoué, N; Sá-Rocha, V M; Hurtado, S P; Barrichello, C
2013-02-01
Skin irritation evaluation is an important endpoint for the safety assessment of cosmetic ingredients, required by various regulatory authorities for notification and/or import of test substances. The present study was undertaken to investigate possible protocol adaptations of the currently validated in vitro skin irritation test methods based on reconstructed human epidermis (RhE) for the testing of plant extracts and natural botanicals. Due to their specific physico-chemical properties, such as lipophilicity, sticky/buttery-like texture, and waxy/creamy foam characteristics, normal washing procedures can lead to an incomplete removal of these materials and/or to mechanical damage to the tissues, resulting in an impaired prediction of the true skin irritation potential of the materials. For this reason, different refined washing procedures were evaluated for their ability to ensure appropriate removal of greasy and sticky substances while not altering the normal responses of the validated RhE test method. Amongst the different procedures evaluated, the use of an SDS 0.1% PBS solution to remove the sticky and greasy test material prior to the normal washing procedures was found to be the most suitable adaptation to ensure efficient removal of greasy and sticky in-house controls without affecting the results of the negative control. The predictive capacity of the refined SDS 0.1% washing procedure was investigated by using twelve oily and viscous compounds having known skin irritation effects supported by raw and/or peer-reviewed in vivo data. The normal washing procedure resulted in 8 out of 10 correctly predicted compounds, as compared to 9 out of 10 with the refined washing procedures, showing an increase in the predictive ability of the assay. The refined washing procedure allowed correct identification of all in vivo skin irritant materials, showing the same sensitivity as the normal washing procedures, and further increased the specificity of the assay from 5 to 6 correct
Adaptive evolution of molecular phenotypes
NASA Astrophysics Data System (ADS)
Held, Torsten; Nourmohammad, Armita; Lässig, Michael
2014-09-01
Molecular phenotypes link genomic information with organismic functions, fitness, and evolution. Quantitative traits are complex phenotypes that depend on multiple genomic loci. In this paper, we study the adaptive evolution of a quantitative trait under time-dependent selection, which arises from environmental changes or through fitness interactions with other co-evolving phenotypes. We analyze a model of trait evolution under mutations and genetic drift in a single-peak fitness seascape. The fitness peak performs a constrained random walk in the trait amplitude, which determines the time-dependent trait optimum in a given population. We derive analytical expressions for the distribution of the time-dependent trait divergence between populations and of the trait diversity within populations. Based on this solution, we develop a method to infer adaptive evolution of quantitative traits. Specifically, we show that the ratio of the average trait divergence and the diversity is a universal function of evolutionary time, which predicts the stabilizing strength and the driving rate of the fitness seascape. From an information-theoretic point of view, this function measures the macro-evolutionary entropy in a population ensemble, which determines the predictability of the evolutionary process. Our solution also quantifies two key characteristics of adapting populations: the cumulative fitness flux, which measures the total amount of adaptation, and the adaptive load, which is the fitness cost due to a population's lag behind the fitness peak.
The benefits of using customized procedure packs.
Baines, R; Colquhoun, G; Jones, N; Bateman, R
2001-01-01
Discrete item purchasing is the traditional approach for hospitals to obtain consumable supplies for theatre procedures. Although most items are relatively low cost, the management and co-ordination of the supply chain, raising orders, controlling stock, picking and delivering to each operating theatre can be complex and costly. Customized procedure packs provide a solution. PMID:11892113
Multiple Comparison Procedures when Population Variances Differ.
ERIC Educational Resources Information Center
Olejnik, Stephen; Lee, JaeShin
A review of the literature on multiple comparison procedures suggests several alternative approaches for comparing means when population variances differ. These include: (1) the approach of P. A. Games and J. F. Howell (1976); (2) C. W. Dunnett's C confidence interval (1980); and (3) Dunnett's T3 solution (1980). These procedures control the…
Adaptive wavelets and relativistic magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Hirschmann, Eric; Neilsen, David; Anderson, Matthe; Debuhr, Jackson; Zhang, Bo
2016-03-01
We present a method for integrating the relativistic magnetohydrodynamics equations using iterated interpolating wavelets. These provide an adaptive implementation for simulations in multiple dimensions. A measure of the local approximation error for the solution is provided by the wavelet coefficients. They place collocation points in locations naturally adapted to the flow while providing the expected conservation. We present demanding 1D and 2D tests, including the Kelvin-Helmholtz instability and the Rayleigh-Taylor instability. Finally, we consider an outgoing blast wave that models a GRB outflow.
Parallel Adaptive Mesh Refinement
Diachin, L; Hornung, R; Plassmann, P; WIssink, A
2005-03-04
As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of the fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the
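The flag-and-refine idea behind AMR can be illustrated in one dimension. This sketch (the names and tolerance are illustrative, not from the report) inserts a midpoint wherever the solution jump across a cell exceeds a tolerance, concentrating points near sharp features while leaving smooth regions coarse:

```python
import math

def refine(xs, f, tol):
    """One pass of a 1-D adaptive refinement sketch: bisect any cell
    [a, b] whose solution jump |f(b) - f(a)| exceeds tol."""
    out = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        if abs(f(b) - f(a)) > tol:
            out.append(0.5 * (a + b))  # flag the cell: add its midpoint
        out.append(b)
    return out
```

Applying the pass repeatedly drives the grid toward the steep region (e.g. a tanh front), which is the one-dimensional analogue of the dynamic local refinement described above.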
On the solution of creep induced buckling in general structure
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1982-01-01
This paper considers the pre- and post-buckling behavior of general structures exposed to high temperature fields for long durations wherein creep effects become significant. The solution to this problem is made possible through the use of closed upper bounding constraint surfaces, which enable the development of a new time stepping algorithm. This permits the stable and efficient solution of structural problems which exhibit indefinite tangent properties. Due to the manner of constraining/bounding successive iterates, the algorithm developed herein is largely self-adaptive, inherently stable, sufficiently flexible to handle geometric, material, and boundary-induced nonlinearity, and can be incorporated into either finite element or difference simulations. To illustrate the capability of the procedure, as well as the physics of creep-induced pre- and post-buckling behavior, the results of several numerical experiments are included.
Limits of adaptation, residual interferences
NASA Technical Reports Server (NTRS)
Mokry, Miroslav (Editor); Erickson, J. C., Jr.; Goodyer, Michael J.; Mignosi, Andre; Russo, Giuseppe P.; Smith, J.; Wedemeyer, Erich H.; Newman, Perry A.
1990-01-01
Methods of determining linear residual wall interference appear to be well established theoretically; however they need to be validated, for example by comparative studies of test data on the same model in different adaptive-wall wind tunnels as well as in passive, ventilated-wall tunnels. The GARTEur CAST 7 and the CAST 10/DOA 2 investigations are excellent examples of such comparative studies. Results to date in both one-variable and two-variable methods for nonlinear wall interference indicate that a great deal more research and validation are required. The status in 2D flow is advanced over that in 3D flow as is the case generally with adaptive-wall development. Nevertheless, it is now well established that for transonic testing with extensive supercritical flow present, significant wall interference is likely to exist in conventional ventilated test sections. Consequently, residual correction procedures require further development hand-in-hand with further adaptive-wall development.
Pipe Cleaning Operating Procedures
Clark, D.; Wu, J.; /Fermilab
1991-01-24
This cleaning procedure outlines the steps involved in cleaning the high purity argon lines associated with the DO calorimeters. The procedure is broken down into 7 cycles: system setup, initial flush, wash, first rinse, second rinse, final rinse and drying. The system setup involves preparing the pump cart, line to be cleaned, distilled water, and interconnecting hoses and fittings. The initial flush is an off-line flush of the pump cart and its plumbing in order to preclude contaminating the line. The wash cycle circulates the detergent solution (Micro) at 180 degrees Fahrenheit through the line to be cleaned. The first rinse is then intended to rid the line of the majority of detergent and only needs to run for 30 minutes and at ambient temperature. The second rinse (if necessary) should eliminate the remaining soap residue. The final rinse is then intended to be a check that there is no remaining soap or other foreign particles in the line, particularly metal 'chips.' The final rinse should be run at 180 degrees Fahrenheit for at least 90 minutes. The filters should be changed after each cycle, paying particular attention to the wash cycle and the final rinse cycle return filters. These filters, which should be bagged and labeled, prove that the pipeline is clean. Only distilled water should be used for all cycles, especially rinsing. The level in the tank need not be excessive, merely enough to cover the heater float switch. The final rinse, however, may require a full 50 gallons. Note that most of the details of the procedure are included in the initial flush description. This section should be referred to if problems arise in the wash or rinse cycles.
Adapting agriculture to climate change
Howden, S. Mark; Soussana, Jean-François; Tubiello, Francesco N.; Chhetri, Netra; Dunlop, Michael; Meinke, Holger
2007-01-01
The strong trends in climate change already evident, the likelihood of further changes occurring, and the increasing scale of potential climate impacts give urgency to addressing agricultural adaptation more coherently. There are many potential adaptation options available for marginal change of existing agricultural systems, often variations of existing climate risk management. We show that implementation of these options is likely to have substantial benefits under moderate climate change for some cropping systems. However, there are limits to their effectiveness under more severe climate changes. Hence, more systemic changes in resource allocation need to be considered, such as targeted diversification of production systems and livelihoods. We argue that achieving increased adaptation action will necessitate integration of climate change-related issues with other risk factors, such as climate variability and market risk, and with other policy domains, such as sustainable development. Dealing with the many barriers to effective adaptation will require a comprehensive and dynamic policy approach covering a range of scales and issues, for example, from the understanding by farmers of change in risk profiles to the establishment of efficient markets that facilitate response strategies. Science, too, has to adapt. Multidisciplinary problems require multidisciplinary solutions, i.e., a focus on integrated rather than disciplinary science and a strengthening of the interface with decision makers. A crucial component of this approach is the implementation of adaptation assessment frameworks that are relevant, robust, and easily operated by all stakeholders, practitioners, policymakers, and scientists. PMID:18077402
Linearly-Constrained Adaptive Signal Processing Methods
NASA Astrophysics Data System (ADS)
Griffiths, Lloyd J.
1988-01-01
In adaptive least-squares estimation problems, a desired signal d(n) is estimated using a linear combination of L observation samples x1(n), x2(n), ..., xL(n), denoted by the vector X(n). The estimate is formed as the inner product of this vector with a corresponding L-dimensional weight vector W. One particular weight vector of interest is Wopt, which minimizes the mean-square difference between d(n) and the estimate. In this context, the term `mean-square difference' is a quadratic measure such as statistical expectation or time average. The specific value of W which achieves the minimum is given by the product of the inverse data covariance matrix and the cross-correlation between the data vector and the desired signal. The latter is often referred to as the P-vector. For those cases in which time samples of both the desired and data vector signals are available, a variety of adaptive methods have been proposed which will guarantee that an iterative weight vector Wa(n) converges (in some sense) to the optimal solution. Two which have been extensively studied are the recursive least-squares (RLS) method and the LMS gradient approximation approach. There are several problems of interest in the communication and radar environment in which the optimal least-squares weight set is of interest and in which time samples of the desired signal are not available. Examples can be found in array processing, in which only the direction of arrival of the desired signal is known, and in single channel filtering, where the spectrum of the desired response is known a priori. One approach to these problems which has been suggested is the P-vector algorithm, which is an LMS-like approximate gradient method. Although it is easy to derive the mean and variance of the weights which result with this algorithm, there has never been an identification of the corresponding underlying error surface which the procedure searches. The purpose of this paper is to suggest an alternative
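The LMS gradient approximation mentioned above is simple enough to sketch. The step size, tap count, and signal names here are illustrative choices, not values from the paper:

```python
import numpy as np

def lms(x, d, L=4, mu=0.01):
    """LMS adaptive filter sketch: iteratively adjust an L-tap weight
    vector W so that the inner product W . X(n) tracks the desired
    signal d(n), where X(n) holds the most recent L input samples."""
    w = np.zeros(L)
    errs = np.zeros(len(x))
    for n in range(L, len(x)):
        X = x[n - L:n][::-1]   # X(n): most recent L samples, newest first
        y = w @ X              # current estimate of d(n)
        e = d[n] - y           # estimation error
        w += 2 * mu * e * X    # stochastic-gradient weight update
        errs[n] = e
    return w, errs
```

When d(n) is available the filter converges toward the least-squares weights; the P-vector variant discussed in the paper replaces the error-driven update with one driven by a priori knowledge of the cross-correlation, for cases where d(n) itself cannot be sampled.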
A direct element resequencing procedure
NASA Technical Reports Server (NTRS)
Akin, J. E.; Fulford, R. E.
1978-01-01
Element-by-element frontal solution algorithms are utilized in many existing finite element codes. The overall computational efficiency of this type of procedure is directly related to the element data input sequence. Thus, it is important to have a pre-processor which will resequence these data so as to reduce the element wavefronts encountered in the solution algorithm. A direct element resequencing algorithm for reducing element wavefronts is detailed. It also generates computational byproducts that can be utilized in pre-front calculations and in various post-processors. Sample problems are presented and compared with other algorithms.
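A standard technique for the same goal, wavefront/bandwidth reduction by resequencing, is Cuthill-McKee ordering; the sketch below illustrates that family of methods, not the paper's specific algorithm. The graph representation and names are illustrative assumptions.

```python
from collections import deque

def cuthill_mckee(adj):
    """Cuthill-McKee-style resequencing sketch for a connected graph:
    breadth-first search from a minimum-degree node, visiting each
    node's neighbours in order of increasing degree. The returned
    ordering tends to keep connected nodes close together, which
    shrinks the wavefront a frontal solver must hold in memory."""
    start = min(adj, key=lambda v: len(adj[v]))  # low-degree start node
    order, seen = [], {start}
    q = deque([start])
    while q:
        v = q.popleft()
        order.append(v)
        for w in sorted(adj[v], key=lambda u: len(adj[u])):
            if w not in seen:
                seen.add(w)
                q.append(w)
    return order
```

In a frontal finite element code the same idea is applied to elements (via shared nodes) rather than to graph vertices, so that elements activating the same unknowns are assembled close together in the input sequence.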
Vortical Flow Prediction Using an Adaptive Unstructured Grid Method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2003-01-01
A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.
Vortical Flow Prediction Using an Adaptive Unstructured Grid Method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2001-01-01
A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.
Adaptive θ-methods for pricing American options
NASA Astrophysics Data System (ADS)
Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran
2008-12-01
We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step, thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
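The time stepping described above can be sketched for a single-asset American put. The sketch below is a generic θ-scheme in Python with illustrative option data; it enforces the early-exercise (positivity) constraint by a simple projection after each linear solve, a stand-in for the paper's linearly implicit penalty treatment, not the authors' method itself.

```python
import numpy as np

# Illustrative option data and grids (assumptions, not values from the paper)
r, sigma, K, T = 0.05, 0.2, 100.0, 1.0
S_max, M, N = 300.0, 300, 200
theta = 0.5                               # theta = 1/2 gives Crank-Nicolson

S = np.linspace(0.0, S_max, M + 1)
dS, dt = S[1] - S[0], T / N
payoff = np.maximum(K - S, 0.0)           # American put payoff
V = payoff.copy()

i = np.arange(1, M)                       # interior nodes
diff = 0.5 * sigma**2 * S[i]**2 / dS**2
adv = 0.5 * r * S[i] / dS
# Discrete Black-Scholes operator: L V = a V_{i-1} + b V_i + c V_{i+1}
a, b, c = diff - adv, -2.0 * diff - r, diff + adv

# theta-scheme: (I - theta dt L) V^{n+1} = (I + (1-theta) dt L) V^n
A = np.diag(1.0 - theta * dt * b) \
    - theta * dt * (np.diag(a[1:], -1) + np.diag(c[:-1], 1))
B = np.diag(1.0 + (1.0 - theta) * dt * b) \
    + (1.0 - theta) * dt * (np.diag(a[1:], -1) + np.diag(c[:-1], 1))

for n in range(N):                        # march backwards from expiry
    rhs = B @ V[i]
    rhs[0] += dt * a[0] * K               # boundary condition V(0, t) = K
    V[1:M] = np.linalg.solve(A, rhs)
    V = np.maximum(V, payoff)             # enforce the early-exercise constraint
```

With these parameters the value at the money lands near the textbook American-put price of roughly 6.1, and the projection keeps V above the payoff everywhere, which is the discrete positivity constraint the paper analyzes.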
NASA Astrophysics Data System (ADS)
Emamzadeh, Seyed Shahab; Ahmadi, Mohammad Taghi; Mohammadi, Soheil; Biglarkhani, Masoud
2015-07-01
In this paper, an investigation into the propagation of far-field explosion waves in water and their effects on nearby structures is carried out. For a far-field structure, the motion of the fluid surrounding the structure may be assumed small, allowing linearization of the governing fluid equations. A complete analysis of the problem must involve simultaneous solution of the dynamic response of the structure and the propagation of the explosion wave in the surrounding fluid. In this study, a dynamic adaptive finite element procedure is proposed, and its application to the solution of a 2D fluid-structure interaction problem is investigated in the time domain. The research includes: a) calculation of the far-field scattered wave due to underwater explosion, including solution of the time-dependent acoustic wave equation; b) fluid-structure interaction analysis using a coupled Euler-Lagrangian approach; and c) adaptive finite element procedures employing error estimates and re-meshing. The temporal mesh adaptation is achieved by local regeneration of the grid using a time-dependent error indicator based on the curvature of the pressure function. As a result, the overall response is better predicted by a moving mesh than by an equivalent uniform mesh. In addition, the cost of computation for large problems is reduced while the accuracy is improved.
Advances in adaptive structures at Jet Propulsion Laboratory
NASA Technical Reports Server (NTRS)
Wada, Ben K.; Garba, John A.
1993-01-01
Future proposed NASA missions with the need for large deployable or erectable precision structures will require solutions to many technical problems. The Jet Propulsion Laboratory (JPL) is developing new technologies in Adaptive Structures to meet these challenges. The technology requirements, approaches to meet the requirements using Adaptive Structures, and the recent JPL research results in Adaptive Structures are described.
Collected radiochemical and geochemical procedures
Kleinberg, J
1990-05-01
This revision of LA-1721, 4th Ed., Collected Radiochemical Procedures, reflects the activities of two groups in the Isotope and Nuclear Chemistry Division of the Los Alamos National Laboratory: INC-11, Nuclear and radiochemistry; and INC-7, Isotope Geochemistry. The procedures fall into five categories: I. Separation of Radionuclides from Uranium, Fission-Product Solutions, and Nuclear Debris; II. Separation of Products from Irradiated Targets; III. Preparation of Samples for Mass Spectrometric Analysis; IV. Dissolution Procedures; and V. Geochemical Procedures. With one exception, the first category of procedures is ordered by the positions of the elements in the Periodic Table, with separate parts on the Representative Elements (the A groups); the d-Transition Elements (the B groups and the Transition Triads); and the Lanthanides (Rare Earths) and Actinides (the 4f- and 5f-Transition Elements). The members of Group IIIB-- scandium, yttrium, and lanthanum--are included with the lanthanides, elements they resemble closely in chemistry and with which they occur in nature. The procedures dealing with the isolation of products from irradiated targets are arranged by target element.
Time-dependent grid adaptation for meshes of triangles and tetrahedra
NASA Technical Reports Server (NTRS)
Rausch, Russ D.
1993-01-01
This paper presents in viewgraph form a method of optimizing grid generation for unsteady CFD flow calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich in regions of relatively large errors and to locally coarsen in regions of relatively small errors. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high aspect ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown, but in general requires user supervision for a more efficient solution.
Adaptive Confidence Bands for Nonparametric Regression Functions
Cai, T. Tony; Low, Mark; Ma, Zongming
2014-01-01
A new formulation for the construction of adaptive confidence bands in non-parametric function estimation problems is proposed. Confidence bands are constructed which have size that adapts to the smoothness of the function while guaranteeing that both the relative excess mass of the function lying outside the band and the measure of the set of points where the function lies outside the band are small. It is shown that the bands adapt over a maximum range of Lipschitz classes. The adaptive confidence band can be easily implemented in standard statistical software with wavelet support. Numerical performance of the procedure is investigated using both simulated and real datasets. The numerical results agree well with the theoretical analysis. The procedure can be easily modified and used for other nonparametric function estimation models. PMID:26269661
MITG test procedure and results
Eck, M.B.; Mukunda, M.
1983-01-01
Elements and modules for the Radioisotope Thermoelectric Generator (RTG) have been performance tested since the inception of the RTG program. These test articles seldom resembled flight hardware and often lacked adequate diagnostic instrumentation. Because of this, performance problems were not identified in the early stages of program development, and the lack of test data in an unexpected area often hampered the development of a problem solution. A procedure for conducting the MITG Test was developed in an effort to obtain data in a systematic, unambiguous manner. This procedure required the development of extensive data acquisition software and test automation. The development of a facility to implement the test procedure, the facility hardware and software requirements, and the results of the MITG testing are the subject of this paper.
Adaptation and Adaptability, the Bellefaire Followup Study.
ERIC Educational Resources Information Center
ALLERHAND, MELVIN E.; AND OTHERS
A research team studied influences, adaptation, and adaptability in 50 poorly adapting boys at Bellefaire, a regional child care center for emotionally disturbed children. The team attempted to gauge the success of the residential treatment center in terms of the psychological patterns and role performances of the boys during individual casework…
Structured programming: Principles, notation, procedure
NASA Technical Reports Server (NTRS)
JOST
1978-01-01
Structured programs are best represented using a notation which gives a clear representation of the block encapsulation. In this report, a set of symbols is suggested which can be used until binding directives are republished. Structured programming also permits a new procedure for design and testing. Programs can be designed top-down; that is, they start at the highest program plane and penetrate to the lowest plane by step-wise refinements. The testing methodology is adapted to this procedure: first, the highest program plane is tested, with programs not yet finished on the next lower plane represented by so-called dummies, which are gradually replaced by the real programs.
A Procedure for Estimating Intrasubject Behavior Consistency
ERIC Educational Resources Information Center
Hernandez, Jose M.; Rubio, Victor J.; Revuelta, Javier; Santacreu, Jose
2006-01-01
Trait psychology implicitly assumes consistency of the personal traits. Mischel, however, argued against the idea of a general consistency of human beings. The present article aims to design a statistical procedure based on an adaptation of the pi* statistic to measure the degree of intraindividual consistency independently of the measure used.…
Strategies: Office Procedures with Communications Math.
ERIC Educational Resources Information Center
Wyoming Univ., Laramie. Coll. of Education.
This booklet contains 30 one-page strategies for teaching mathematical skills needed for office procedures. All the strategies are suitable for or can be adapted for special needs students. Each strategy is a classroom activity and is matched with the skill that it develops and its technology/content area (communications and/or mathematics). Some…
Milne, R.B.
1995-12-01
This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.
Survey of adaptive control using Liapunov design
NASA Technical Reports Server (NTRS)
Lindorff, D. P.; Carroll, R. L.
1972-01-01
A survey was made of the literature devoted to the synthesis of model-tracking adaptive systems based on application of Liapunov's second method. The basic synthesis procedure is introduced and a critical review of extensions made to the theory since 1966 is made. The extensions relate to design for relative stability, reduction of order techniques, design with disturbance, design with time variable parameters, multivariable systems, identification, and an adaptive observer.
Auto-adaptive finite element meshes
NASA Technical Reports Server (NTRS)
Richter, Roland; Leyland, Penelope
1995-01-01
Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which hence enable structural optimization as well as geometrical optimization. The methods described are applied to a number of the ICASE test cases that are particularly interesting for unsteady flow simulations.
Research in digital adaptive flight controllers
NASA Technical Reports Server (NTRS)
Kaufman, H.
1976-01-01
A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Both explicit controllers, which directly utilize parameter identification, and implicit controllers, which do not require identification, were considered. Extensive analytical and simulation efforts resulted in the recommendation of two explicit digital adaptive flight controllers. Weighted least squares estimation procedures were interfaced with control logic developed using either optimal regulator theory or single-stage performance indices.
Local adaptive tone mapping for video enhancement
NASA Astrophysics Data System (ADS)
Lachine, Vladimir; Dai, Min
2015-03-01
As new technologies like High Dynamic Range cameras, AMOLED and high-resolution displays emerge on the consumer electronics market, it becomes very important to deliver the best picture quality for mobile devices. Tone Mapping (TM) is a popular technique to enhance visual quality. However, the traditional implementation of the Tone Mapping procedure is limited to pixel-by-pixel value mapping, and the performance is restricted in terms of local sharpness and colorfulness. To overcome the drawbacks of traditional TM, we propose a spatial-frequency based framework in this paper. In the proposed solution, the intensity component of an input video/image signal is split into low-pass filtered (LPF) and high-pass filtered (HPF) bands. A Tone Mapping (TM) function is applied to the LPF band to improve the global contrast/brightness, and the HPF band is added back afterwards to keep the local contrast. The HPF band may be adjusted by a coring function to avoid noise boosting and signal overshooting. Colorfulness of an original image may be preserved or enhanced by chroma component correction by means of a saturation function. Localized content adaptation is further improved by dividing an image into a set of non-overlapped regions and modifying each region individually. The suggested framework allows users to implement a wide range of tone mapping applications with perceptual local sharpness and colorfulness preserved or enhanced. The corresponding hardware circuit may be integrated in a camera, video or display pipeline with minimal hardware budget.
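The LPF/HPF split described above can be sketched in a few lines of numpy. The box-blur low-pass filter, the power-law tone curve, and the coring thresholds below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tone_map_local(intensity, tm=lambda y: y ** 0.6, radius=2):
    """Spatial-frequency tone mapping sketch: apply the TM curve to the
    low-pass band only, then add the high-pass detail band back."""
    # Simple box blur as the low-pass filter (illustrative choice)
    pad = np.pad(intensity, radius, mode='edge')
    k = 2 * radius + 1
    lpf = np.zeros_like(intensity, dtype=float)
    for dy in range(k):
        for dx in range(k):
            lpf += pad[dy:dy + intensity.shape[0], dx:dx + intensity.shape[1]]
    lpf /= k * k
    hpf = intensity - lpf                  # local detail band
    hpf = np.clip(hpf, -0.1, 0.1)          # coring to limit noise/overshoot
    return np.clip(tm(lpf) + hpf, 0.0, 1.0)

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)   # synthetic gradient "image"
out = tone_map_local(img)
```

Because the power-law curve is applied only to the blurred band, global brightness rises while the re-added high-pass band preserves local contrast, which is the core idea of the framework.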
The adaptive deep brain stimulation challenge.
Arlotti, Mattia; Rosa, Manuela; Marceglia, Sara; Barbieri, Sergio; Priori, Alberto
2016-07-01
Sub-optimal clinical outcomes of conventional deep brain stimulation (cDBS) in treating Parkinson's Disease (PD) have boosted the development of new solutions to improve DBS therapy. Adaptive DBS (aDBS), consisting of closed-loop, real-time changing of stimulation parameters according to the patient's clinical state, promises to achieve this goal and is attracting increasing interest in overcoming all of the challenges posed by its development and adoption. In the design, implementation, and application of aDBS, the choice of the control variable and of the control algorithm represents the core challenge. The proposed approaches, in fact, differ in the choice of the control variable and control policy, in the system design and its technological limits, in the patient's target symptom, and in the surgical procedure needed. Here, we review the current proposals for aDBS systems, focusing on the choice of the control variable and its advantages and drawbacks, thus providing a general overview of the possible pathways for the clinical translation of aDBS with its benefits, limitations and unsolved issues. PMID:27079257
Bullock, Jonathan S.; Harper, William L.; Peck, Charles G.
1976-06-22
This invention is directed to an aqueous halogen-free electromarking solution which possesses the capacity for marking a broad spectrum of metals and alloys selected from different classes. The aqueous solution comprises basically the nitrate salt of an amphoteric metal, a chelating agent, and a corrosion-inhibiting agent.
Adaption of unstructured meshes using node movement
Carpenter, J.G.; McRae, V.D.S.
1996-12-31
The adaption algorithm of Benson and McRae is modified for application to unstructured grids. The weight function generation was adapted to unstructured grids, and node movement was limited to prevent grid crossover. A NACA 0012 airfoil is used as a test case to evaluate the modified algorithm on unstructured grids, with results compared to those obtained by Warren. An adaptive mesh solution for the Sudhoo and Hall four-element airfoil is included as a demonstration case.
An adaptive gridless methodology in one dimension
Snyder, N.T.; Hailey, C.E.
1996-09-01
Gridless numerical analysis offers great potential for accurately solving for flow about complex geometries or moving boundary problems. Because gridless methods do not require point connection, the mesh cannot twist or distort. The gridless method utilizes a Taylor series about each point to obtain the unknown derivative terms from the current field variable estimates. The governing equation is then numerically integrated to determine the field variables for the next iteration. Effects of point spacing and Taylor series order on accuracy are studied, and they follow similar trends of traditional numerical techniques. Introducing adaption by point movement using a spring analogy allows the solution method to track a moving boundary. The adaptive gridless method models linear, nonlinear, steady, and transient problems. Comparison with known analytic solutions is given for these examples. Although point movement adaption does not provide a significant increase in accuracy, it helps capture important features and provides an improved solution.
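The Taylor-series step described above can be illustrated in one dimension: fit a truncated expansion about a point to its scattered neighbours by least squares, and read the derivative estimates off the coefficients. The point locations and sampled function below are arbitrary examples, not data from the report.

```python
import math
import numpy as np

def taylor_derivatives(x, u, x0, order=2):
    """Least-squares fit of a truncated Taylor series about x0 to scattered
    data (x, u); returns estimates [u(x0), u'(x0), u''(x0), ...]."""
    dx = x - x0
    # Design matrix columns: 1, dx, dx^2/2!, ... up to the chosen order
    A = np.column_stack([dx ** k / math.factorial(k) for k in range(order + 1)])
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    return coef

# Scattered "cloud" of points (no connectivity), sampled from u = x^2
x = np.array([0.1, 0.35, 0.5, 0.72, 0.9])
coef = taylor_derivatives(x, x ** 2, 0.5)
```

Since u = x² lies exactly in the quadratic basis, the fit recovers u(0.5) = 0.25, u'(0.5) = 1 and u''(0.5) = 2; because no point connectivity is required, the stencil cannot twist or distort as points move.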
Cortazar, E; Usobiaga, A; Fernández, L A; de, Diego A; Madariaga, J M
2002-02-01
A MATHEMATICA package, 'CONDU.M', has been developed to find the polynomial in concentration and temperature which best fits conductimetric data of the type (kappa, c, T) or (kappa, c1, c2, T) for electrolyte solutions (kappa: specific conductivity; ci: concentration of component i; T: temperature). In addition, an interface, 'TKONDU', has been written in the TCL/Tk language to facilitate the use of CONDU.M by an operator not familiarised with MATHEMATICA. All this software is available online (UPV/EHU, 2001). 'CONDU.M' has been programmed to: (i) select the optimum grade in c1 and/or c2; (ii) compare models with linear or quadratic terms in temperature; (iii) calculate the set of adjustable parameters which best fits the data; (iv) simplify the model by eliminating adjustable parameters included a priori which, after the regression analysis, show low statistical significance; (v) facilitate the location of outlier data by graphical analysis of the residuals; and (vi) provide quantitative statistical information on the quality of the fit, allowing a critical comparison among different models. Due to the multiple options offered, the software allows different conductivity models to be tested in a short time, even when a large set of conductivity data is considered simultaneously. The user can then choose the best model, making use of the graphical and statistical information provided in the output file. Although the program was initially designed to treat conductimetric data, it can also be applied to data with a similar structure, e.g. (P, c, T) or (P, c1, c2, T), where P is any appropriate transport, physical or thermodynamic property. PMID:11868914
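The core fitting step, item (iii) above, amounts to ordinary least squares on a polynomial basis in c and T. The sketch below is a generic Python reimplementation under assumed polynomial grades, not the CONDU.M package itself; the synthetic data model is purely illustrative.

```python
import numpy as np

def fit_kappa(c, T, kappa, deg_c=2, deg_T=1):
    """Least-squares fit kappa ~ sum_{i,j} a_ij c^i T^j; the grades
    deg_c, deg_T are assumptions standing in for selection step (i)."""
    terms = [(i, j) for i in range(deg_c + 1) for j in range(deg_T + 1)]
    A = np.column_stack([c ** i * T ** j for i, j in terms])
    coefs, *_ = np.linalg.lstsq(A, kappa, rcond=None)
    return terms, coefs

# Synthetic (kappa, c, T) data from a known model, for illustration only
rng = np.random.default_rng(0)
c = rng.uniform(0.1, 1.0, 50)
T = rng.uniform(15.0, 35.0, 50)
kappa = 2.0 * c - 0.5 * c ** 2 + 0.01 * c * T
terms, coefs = fit_kappa(c, T, kappa)
```

Because the synthetic model lies inside the assumed basis, the fit reproduces the data to machine precision; with real conductimetric data, the residuals would drive the model-simplification and outlier checks in steps (iv)-(v).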
Habituation of visual adaptation
Dong, Xue; Gao, Yi; Lv, Lili; Bao, Min
2016-01-01
Our sensory system adjusts its function driven by both shorter-term (e.g. adaptation) and longer-term (e.g. learning) experiences. Most past adaptation literature focuses on short-term adaptation. Only recently researchers have begun to investigate how adaptation changes over a span of days. This question is important, since in real life many environmental changes stretch over multiple days or longer. However, the answer to the question remains largely unclear. Here we addressed this issue by tracking perceptual bias (also known as aftereffect) induced by motion or contrast adaptation across multiple daily adaptation sessions. Aftereffects were measured every day after adaptation, which corresponded to the degree of adaptation on each day. For passively viewed adapters, repeated adaptation attenuated aftereffects. Once adapters were presented with an attentional task, aftereffects could either reduce for easy tasks, or initially show an increase followed by a later decrease for demanding tasks. Quantitative analysis of the decay rates in contrast adaptation showed that repeated exposure of the adapter appeared to be equivalent to adaptation to a weaker stimulus. These results suggest that both attention and a non-attentional habituation-like mechanism jointly determine how adaptation develops across multiple daily sessions. PMID:26739917
Computerized procedures system
Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.
2010-10-12
An online data driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges and revisions are version controlled. The procedures run on a server that is platform independent of the user workstations that the server interfaces with and the user interface supports diverse procedural views.
Inverse solutions for electric and potential field imaging
NASA Astrophysics Data System (ADS)
Johnson, Christopher R.; MacLeod, Robert S.
1993-08-01
One of the fundamental problems in theoretical electrocardiography can be characterized by an inverse problem. In this paper, we present new methods for achieving better estimates of heart surface potential distributions in terms of torso potentials through an inverse procedure. First, an adaptive meshing algorithm is described which minimizes the error in the forward problem due to spatial discretization. We have found that since the inverse problem relies directly on the accuracy of the forward solution, adaptive meshing produces a more accurate inverse transfer matrix. Secondly, we introduce a new local regularization procedure. This method works by breaking the global transfer matrix into sub-matrices and performing regularization only on those sub-matrices which have large condition numbers. Furthermore, the regularization parameters are specifically 'tuned' for each sub-matrix using an a priori scheme based on the L-curve method. This local regularization method provides substantial increases in accuracy when compared to global regularization schemes. Finally, we present specific examples of the implementation of these schemes using models derived from magnetic resonance imaging data from a human subject.
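The regularization being localized in the scheme above is standard Tikhonov smoothing. A minimal sketch of the global version, which the paper applies per ill-conditioned sub-matrix with an L-curve-tuned parameter, is given below; the matrix and data in the test are toy values, not torso-potential data.

```python
import numpy as np

def tikhonov_inverse(A, b, lam):
    """Regularized solve: x = argmin ||Ax - b||^2 + lam^2 ||x||^2,
    via the normal equations. lam is supplied by the caller here;
    the paper tunes it per sub-matrix using an L-curve criterion."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)
```

With lam = 0 this reduces to the unregularized least-squares solution; a positive lam damps the components associated with small singular values, which is exactly what stabilizes the ill-posed inverse transfer matrix.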
49 CFR 572.142 - Head assembly and test procedure.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 7 2014-10-01 2014-10-01 false Head assembly and test procedure. 572.142 Section...-year-Old Child Crash Test Dummy, Alpha Version § 572.142 Head assembly and test procedure. (a) The head assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter...
49 CFR 572.142 - Head assembly and test procedure.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 7 2013-10-01 2013-10-01 false Head assembly and test procedure. 572.142 Section...-year-Old Child Crash Test Dummy, Alpha Version § 572.142 Head assembly and test procedure. (a) The head assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter...
Evaluation of the CATSIB DIF Procedure in a Pretest Setting
ERIC Educational Resources Information Center
Nandakumar, Ratna; Roussos, Louis
2004-01-01
A new procedure, CATSIB, for assessing differential item functioning (DIF) on computerized adaptive tests (CATs) is proposed. CATSIB, a modified SIBTEST procedure, matches test takers on estimated ability and controls for impact-induced Type 1 error inflation by employing a CAT version of the IBTEST "regression correction." The performance of…
An adaptive grid with directional control
NASA Technical Reports Server (NTRS)
Brackbill, J. U.
1993-01-01
An adaptive grid generator for adaptive node movement is here derived by combining a variational formulation of Winslow's (1981) variable-diffusion method with a directional control functional. By applying harmonic-function theory, it becomes possible to define conditions under which there exist unique solutions of the resulting elliptic equations. The results obtained by applying the grid generator to the complex problem posed by fluid instability-driven magnetic field reconnection demonstrate one-tenth the computational cost of either an Eulerian grid or an adaptive grid without directional control.
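A one-dimensional analogue conveys the idea of adaptive node movement: relax interior nodes until the weighted spacing equidistributes. This is a much-simplified stand-in for the variational Winslow formulation, with an assumed weight function peaked where resolution is wanted.

```python
import numpy as np

def equidistribute(x, w, sweeps=200):
    """Relax interior nodes so the weighted spacing w*dx equidistributes:
    a 1D, much-simplified stand-in for variable-diffusion node movement."""
    x = x.copy()
    for _ in range(sweeps):
        wi = w(0.5 * (x[:-1] + x[1:]))   # weight at each interval midpoint
        for k in range(1, len(x) - 1):
            # Each interior node moves to balance the weighted lengths of
            # its two neighbouring intervals (a Gauss-Seidel-style sweep).
            x[k] = (wi[k - 1] * x[k - 1] + wi[k] * x[k + 1]) / (wi[k - 1] + wi[k])
    return x

# Concentrate nodes where the weight is large (around x = 0.5)
w = lambda s: 1.0 + 10.0 * np.exp(-50.0 * (s - 0.5) ** 2)
xa = equidistribute(np.linspace(0.0, 1.0, 21), w)
```

Each update is a convex combination of the two neighbours, so node ordering is preserved (no grid crossover) while spacing shrinks where the weight, here a stand-in for a solution-feature indicator, is large.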
40 CFR 60.648 - Optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure. 1
Code of Federal Regulations, 2014 CFR
2014-07-01
... hydrogen sulfide in acid gas-Tutwiler Procedure. 1 60.648 Section 60.648 Protection of Environment..., 2011 § 60.648 Optional procedure for measuring hydrogen sulfide in acid gas—Tutwiler Procedure. 1 1 Gas... dilute solutions are used. In principle, this method consists of titrating hydrogen sulfide in a...
Code of Federal Regulations, 2014 CFR
2014-07-01
... measuring hydrogen sulfide in acid gas-Tutwiler Procedure? 60.5408 Section 60.5408 Protection of Environment... § 60.5408 What is an optional procedure for measuring hydrogen sulfide in acid gas—Tutwiler Procedure... of titrating hydrogen sulfide in a gas sample directly with a standard solution of iodine....
40 CFR 60.648 - Optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure. 1
Code of Federal Regulations, 2013 CFR
2013-07-01
... hydrogen sulfide in acid gas-Tutwiler Procedure. 1 60.648 Section 60.648 Protection of Environment..., 2011 § 60.648 Optional procedure for measuring hydrogen sulfide in acid gas—Tutwiler Procedure. 1 1 Gas... dilute solutions are used. In principle, this method consists of titrating hydrogen sulfide in a...
Code of Federal Regulations, 2013 CFR
2013-07-01
... measuring hydrogen sulfide in acid gas-Tutwiler Procedure? 60.5408 Section 60.5408 Protection of Environment... § 60.5408 What is an optional procedure for measuring hydrogen sulfide in acid gas—Tutwiler Procedure... of titrating hydrogen sulfide in a gas sample directly with a standard solution of iodine....
A multigrid method for steady Euler equations on unstructured adaptive grids
NASA Technical Reports Server (NTRS)
Riemslagh, Kris; Dick, Erik
1993-01-01
A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first-order accurate inner iteration and a second-order correction, performed only on the finest grid, is used. A multi-stage Jacobi relaxation method is employed as a smoother: since the grid is unstructured, a Jacobi type is chosen, and the multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with a more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid; nodes of coarser grids reappear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid levels are generated by successive coarsening through point removal. The adaption cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.
Computerized Adaptive Testing with Item Cloning.
ERIC Educational Resources Information Center
Glas, Cees A. W.; van der Linden, Wim J.
2003-01-01
Developed a multilevel item response (IRT) model that allows for differences between the distributions of item parameters of families of item clones. Results from simulation studies based on an item pool from the Law School Admission Test illustrate the accuracy of the item pool calibration and adaptive testing procedures based on the model. (SLD)
Adapting Aquatic Circuit Training for Special Populations.
ERIC Educational Resources Information Center
Thome, Kathleen
1980-01-01
The author discusses how land activities can be adapted to water so that individuals with handicapping conditions can participate in circuit training activities. An initial section lists such organizational procedures as providing vocal and/or visual cues for activities, having assistants accompany the performers throughout the circuit, and…
Employer coverage of experimental medical procedures.
Mora, J
1986-01-01
Should an employer's medical plan pay for organ transplants, in vitro fertilization or other "experimental" or "high-risk" procedures? Most employers have looked to insurance companies to decide, but the rising frequency and cost of such procedures, coupled with the litigation potential they pose, raise policy making issues that employers themselves must face. A Hewitt Associates Consultant describes some problems and some solutions for insured and self-funded medical plans. PMID:10279242
Organic compatible solutes of halotolerant and halophilic microorganisms
Roberts, Mary F
2005-01-01
Microorganisms that adapt to moderate and high salt environments use a variety of solutes, organic and inorganic, to counter external osmotic pressure. The organic solutes can be zwitterionic, noncharged, or anionic (along with an inorganic cation such as K+). The range of solutes, their diverse biosynthetic pathways, and physical properties of the solutes that effect molecular stability are reviewed. PMID:16176595
Expressing Adaptation Strategies Using Adaptation Patterns
ERIC Educational Resources Information Center
Zemirline, N.; Bourda, Y.; Reynaud, C.
2012-01-01
Today, there is a real challenge to enable personalized access to information. Several systems have been proposed to address this challenge including Adaptive Hypermedia Systems (AHSs). However, the specification of adaptation strategies remains a difficult task for creators of such systems. In this paper, we consider the problem of the definition…
Grid generation and flow solution method for Euler equations on unstructured grids
NASA Technical Reports Server (NTRS)
Anderson, W. Kyle
1992-01-01
A grid generation and flow solution algorithm for the Euler equations on unstructured grids is presented. The grid generation scheme, which uses Delaunay triangulation, generates the field points for the mesh based on cell aspect ratios and allows clustering of grid points near solid surfaces. The flow solution method is an implicit algorithm in which the linear set of equations arising at each time step is solved using a Gauss-Seidel procedure that is completely vectorizable. Also, a study is conducted to examine the number of subiterations required for good convergence of the overall algorithm. Grid generation results are shown in two dimensions for an NACA 0012 airfoil as well as a two element configuration. Flow solution results are shown for a two dimensional flow over the NACA 0012 airfoil and for a two element configuration in which the solution was obtained through an adaptation procedure and compared with an exact solution. Preliminary three dimensional results also are shown in which the subsonic flow over a business jet is computed.
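The inner iteration named above, a Gauss-Seidel sweep over the linear system arising at each implicit time step, can be sketched generically. The paper's version is additionally vectorized over the unstructured mesh; the diagonally dominant matrix below is a toy example, not a flow Jacobian.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, sweeps=50):
    """Plain Gauss-Seidel sweeps for Ax = b: each unknown is updated in
    place using the latest values of its neighbours."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(sweeps):
        for k in range(n):
            s = A[k] @ x - A[k, k] * x[k]   # off-diagonal contribution
            x[k] = (b[k] - s) / A[k, k]
    return x
```

For a diagonally dominant system the sweeps converge to the direct solution; in the flow solver, a fixed number of such subiterations per time step trades exactness of the linear solve against overall convergence of the implicit algorithm, which is the trade-off the subiteration study examines.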
A grid generation and flow solution method for the Euler equations on unstructured grids
NASA Technical Reports Server (NTRS)
Anderson, W. Kyle
1994-01-01
A grid generation and flow solution algorithm for the Euler equations on unstructured grids is presented. The grid generation scheme utilizes Delaunay triangulation and self-generates the field points for the mesh based on cell aspect ratios and allows for clustering near solid surfaces. The flow solution method is an implicit algorithm in which the linear set of equations arising at each time step is solved using a Gauss Seidel procedure which is completely vectorizable. In addition, a study is conducted to examine the number of subiterations required for good convergence of the overall algorithm. Grid generation results are shown in two dimensions for a National Advisory Committee for Aeronautics (NACA) 0012 airfoil as well as a two-element configuration. Flow solution results are shown for two-dimensional flow over the NACA 0012 airfoil and for a two-element configuration in which the solution has been obtained through an adaptation procedure and compared to an exact solution. Preliminary three-dimensional results are also shown in which subsonic flow over a business jet is computed.
A grid generation and flow solution method for the Euler equations on unstructured grids
Anderson, W. K.
1994-01-01
A grid generation and flow solution algorithm for the Euler equations on unstructured grids is presented. The grid generation scheme utilizes Delaunay triangulation and self-generates the field points for the mesh based on cell aspect ratios and allows for clustering near solid surfaces. The flow solution method is an implicit algorithm in which the linear set of equations arising at each time step is solved using a Gauss-Seidel procedure which is completely vectorizable. In addition, a study is conducted to examine the number of subiterations required for good convergence of the overall algorithm. Grid generation results are shown in two dimensions for a NACA 0012 airfoil as well as a two-element configuration. Flow solution results are shown for two-dimensional flow over the NACA 0012 airfoil and for a two-element configuration in which the solution has been obtained through an adaptation procedure and compared to an exact solution. Preliminary three-dimensional results are also shown in which subsonic flow over a business jet is computed. 31 refs. 30 figs.
NASA Astrophysics Data System (ADS)
Erdogan, Eren; Durmaz, Murat; Liang, Wenjing; Kappelsberger, Maria; Dettmering, Denise; Limberger, Marco; Schmidt, Michael; Seitz, Florian
2015-04-01
This project focuses on the development of a novel near real-time data adaptive filtering framework for global modeling of the vertical total electron content (VTEC). Ionospheric data can be acquired from various space geodetic observation techniques such as GNSS, altimetry, DORIS and radio occultation. The project aims to model the temporal and spatial variations of the ionosphere by a combination of these techniques in an adaptive data assimilation framework, which utilizes appropriate basis functions to represent the VTEC. The measurements naturally have inhomogeneous data distribution both in time and space. Therefore, integrating the aforementioned observation techniques into data adaptive basis selection methods (e.g. Multivariate Adaptive Regression B-Splines) with recursive filtering (e.g. Kalman filtering) to model the daily global ionosphere may deliver important improvements over classical estimation methods. Since ionospheric inverse problems are ill-posed, a suitable regularization procedure might stabilize the solution. In this contribution we present first results related to the selected evaluation procedure. Comparisons are made with respect to applicability, efficiency, accuracy, and numerical effort.
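The recursive filtering step mentioned above can be illustrated with a minimal linear Kalman filter. The state and measurement models below (`F`, `Q`, `H`, `R`, and the scalar "VTEC coefficient" being tracked) are hypothetical placeholders for illustration, not the project's actual assimilation setup:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P : prior state estimate and covariance
    z    : new measurement
    F, Q : state-transition model and process noise
    H, R : observation model and measurement noise"""
    # Predict.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the new measurement.
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Track a (hypothetical) constant scalar coefficient from noisy data.
rng = np.random.default_rng(0)
F = np.eye(1); Q = np.eye(1) * 1e-6
H = np.eye(1); R = np.eye(1) * 0.5
x, P = np.zeros(1), np.eye(1)
truth = 4.0
for _ in range(200):
    z = np.array([truth + rng.normal(scale=0.7)])
    x, P = kalman_step(x, P, z, F, Q, H, R)
```

As measurements accumulate, the covariance `P` shrinks and the estimate settles near the true value, which is the behavior a recursive filter exploits for near real-time updating.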
Public Sector Impasse Procedures.
ERIC Educational Resources Information Center
Vadakin, James C.
The subject of collective bargaining negotiation impasse procedures in the public sector, which includes public school systems, is a broad one. In this speech, the author introduces the various procedures, explains how they are used, and lists their advantages and disadvantages. Procedures discussed are mediation, fact-finding, arbitration,…
Rapid, generalized adaptation to asynchronous audiovisual speech
Van der Burg, Erik; Goodbourn, Patrick T.
2015-01-01
The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. PMID:25716790
Rapid, generalized adaptation to asynchronous audiovisual speech.
Van der Burg, Erik; Goodbourn, Patrick T
2015-04-01
The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. PMID:25716790
NASA Astrophysics Data System (ADS)
Kevlahan, N. N.; Vasilyev, O. V.; Yuen, D. A.
2003-12-01
An adaptive multilevel wavelet collocation method for solving multi-dimensional elliptic problems with localized structures is developed. The method is based on the general class of multi-dimensional second generation wavelets and is an extension of the dynamically adaptive second generation wavelet collocation method for evolution problems. Wavelet decomposition is used for grid adaptation and interpolation, while an O(N) hierarchical finite difference scheme, which takes advantage of the wavelet multilevel decomposition, is used for derivative calculations. The multilevel structure of the wavelet approximation provides a natural way to obtain the solution on a near optimal grid. In order to accelerate the convergence of the iterative solver, an iterative procedure analogous to the multigrid algorithm is developed. For problems with slowly varying viscosity, simple diagonal preconditioning works. For problems with large laterally varying viscosity contrasts, either a direct solver on shared-memory machines or a multilevel iterative solver with an incomplete LU preconditioner may be used. The method is demonstrated for the solution of a number of two-dimensional elliptic test problems with both constant and spatially varying viscosity with multiscale character.
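The multigrid-analogous acceleration mentioned above can be illustrated with a classic two-grid correction scheme for the 1D Poisson model problem. This is a textbook sketch, not the wavelet collocation method itself: smooth with weighted Jacobi, restrict the residual to a coarse grid, solve the coarse problem directly, then prolong and correct:

```python
import numpy as np

def relax(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f with u = 0 at the ends."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def v_cycle(u, f, h):
    """Two-grid correction: smooth, restrict residual, coarse solve,
    prolong and correct, smooth again."""
    u = relax(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2   # residual
    rc = r[::2].copy()                                           # restriction by injection
    n2 = len(rc) - 1
    # Direct coarse solve of -e'' = rc on the grid with spacing 2h.
    A = (np.diag(np.full(n2 - 1, 2.0)) - np.diag(np.ones(n2 - 2), 1)
         - np.diag(np.ones(n2 - 2), -1)) / (2 * h) ** 2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    # Linear-interpolation prolongation of the coarse correction.
    e = np.interp(np.arange(len(u)), np.arange(len(u))[::2], ec)
    u += e
    return relax(u, f, h)

n = 64
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)   # -u'' = f has exact solution sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, h)
```

The key idea, shared with the multilevel wavelet solver, is that smoothing damps high-frequency error on the fine grid while the coarse correction removes the smooth error components cheaply.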
Bellitto, M.W.; Williams, H.T.; Ward, J.N.
1999-07-01
High altitude, historic gold and silver tailings deposits, which included a more recent cyanide heap leach operation, were decommissioned, detoxified, re-contoured and revegetated. Detoxification of the heap included rinsing with hydrogen peroxide, lime and ferric chloride, followed by evaporation and land application of the remaining solution. Grading included the removal of solution ponds, construction of a geosynthetic/clay-lined pond, heap removal and site drainage development. Ameliorative and adaptive revegetation methodologies were utilized. Revegetation was complicated by limited access, lack of topsoil, low pH and elevated metals concentrations in the tailings, and a harsh climate. Water quality sampling results for the first year following revegetation indicate that reclamation activities have contributed to a decrease in metals and sediment loading to surface waters downgradient of the site. Procedures, methodologies and results following the first year of vegetation growth are provided.
NASA Astrophysics Data System (ADS)
Bijl, Piet; Valeton, J. Mathieu
1999-10-01
The characterization of electro-optical system performance by means of the standard minimum resolvable temperature difference (MRTD) or the minimum resolvable contrast (MRC) has a number of serious disadvantages. One of the problems is that they depend on the subjective decision criterion of the observer. We present an improved measurement procedure in which the results are free from observer bias. In an adaptive two-alternative forced-choice procedure, both the standard four-bar pattern and a five-bar reference pattern of the same size and contrast are presented consecutively in random order. The observer decides which of the two presentations contains the four-bar pattern. Misjudgments are made if the bars cannot be resolved or are distorted by sampling. The procedure converges to the contrast at which 75% of the observer responses are correct. The reliability of the responses is tested statistically. Curves cut off near the Nyquist frequency, so that it is not necessary to artificially set a frequency limit for sampling array cameras. The procedure enables better and easier measurement, yields more stable results than the standard procedure, and avoids disputes between different measuring teams. The presented procedure is a "quick fix" solution for some of the problems with the MRTD and MRC, and is recommended as long as bar patterns are used as the stimulus. A new and fundamentally better method to characterize electro-optical system performance, called the triangle orientation discrimination threshold, was recently proposed by Bijl and Valeton (1998).
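An adaptive procedure that converges to the 75%-correct point can be illustrated with a weighted up/down staircase (Kaernbach-style), in which the step up after an error is three times the step down after a correct response. The exact adaptive rule of the paper is not specified here; the logistic observer model and its parameters below are hypothetical, used only to simulate the convergence:

```python
import math
import random

def simulate_staircase(threshold, trials=400, seed=1):
    """Weighted up/down staircase: down step 0.01 after a correct
    response, up step 0.03 after an error, so the equilibrium is at
    p = up / (up + down) = 0.75 correct. The simulated observer
    follows a logistic psychometric function around `threshold`."""
    random.seed(seed)
    contrast, down, up = 1.0, 0.01, 0.03
    track = []
    for _ in range(trials):
        # 2AFC: 50% guessing floor rising to 100% well above threshold.
        p = 0.5 + 0.5 / (1.0 + math.exp(-(contrast - threshold) / 0.05))
        if random.random() < p:
            contrast -= down   # correct -> make the task harder
        else:
            contrast += up     # wrong   -> make it easier
        contrast = max(contrast, 0.0)
        track.append(contrast)
    # Threshold estimate: mean of the second half of the track.
    return sum(track[trials // 2:]) / (trials // 2)

est = simulate_staircase(threshold=0.4)
```

Because the psychometric function used here reaches 75% correct exactly at `threshold`, the track hovers around that contrast once the initial descent is over, which is the behavior the forced-choice procedure exploits.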
Zijp, Michiel C; Posthuma, Leo; Wintersen, Arjen; Devilee, Jeroen; Swartjes, Frank A
2016-05-01
This paper introduces Solution-focused Sustainability Assessment (SfSA), provides practical guidance formatted as a versatile process framework, and illustrates its utility for solving a wicked environmental management problem. Society faces complex and increasingly wicked environmental problems for which sustainable solutions are sought. Wicked problems are multi-faceted, and deriving of a management solution requires an approach that is participative, iterative, innovative, and transparent in its definition of sustainability and translation to sustainability metrics. We suggest to add the use of a solution-focused approach. The SfSA framework is collated from elements from risk assessment, risk governance, adaptive management and sustainability assessment frameworks, expanded with the 'solution-focused' paradigm as recently proposed in the context of risk assessment. The main innovation of this approach is the broad exploration of solutions upfront in assessment projects. The case study concerns the sustainable management of slightly contaminated sediments continuously formed in ditches in rural, agricultural areas. This problem is wicked, as disposal of contaminated sediment on adjacent land is potentially hazardous to humans, ecosystems and agricultural products. Non-removal would however reduce drainage capacity followed by increased risks of flooding, while contaminated sediment removal followed by offsite treatment implies high budget costs and soil subsidence. Application of the steps in the SfSA-framework served in solving this problem. Important elements were early exploration of a wide 'solution-space', stakeholder involvement from the onset of the assessment, clear agreements on the risk and sustainability metrics of the problem and on the interpretation and decision procedures, and adaptive management. Application of the key elements of the SfSA approach eventually resulted in adoption of a novel sediment management policy. The stakeholder
NASA Astrophysics Data System (ADS)
Pardo, A.; Camacho, J. J.; Poyato, J. M. L.; Fernandez-Alonso, J. I.
1986-03-01
Potential energy curves for the X ¹Σ⁺ state of 6LiH, 7LiH and 6LiD, 7LiD molecules have been calculated by the third-order RKR inversion procedure, including the Kaise correction. The results are in agreement with curves previously obtained by other authors using different methods. As a check, the exact vibrational eigenfunctions appropriate to these potentials are obtained by direct numerical solution of the radial Schrödinger equation.
Parallel Adaptive Multi-Mechanics Simulations using Diablo
Parsons, D; Solberg, J
2004-12-03
Coupled multi-mechanics simulations (such as thermal-stress and fluidstructure interaction problems) are of substantial interest to engineering analysts. In addition, adaptive mesh refinement techniques present an attractive alternative to current mesh generation procedures and provide quantitative error bounds that can be used for model verification. This paper discusses spatially adaptive multi-mechanics implicit simulations using the Diablo computer code. (U)
Adaptive Assessment of Young Children with Visual Impairment
ERIC Educational Resources Information Center
Ruiter, Selma; Nakken, Han; Janssen, Marleen; Van Der Meulen, Bieuwe; Looijestijn, Paul
2011-01-01
The aim of this study was to assess the effect of adaptations for children with low vision of the Bayley Scales, a standardized developmental instrument widely used to assess development in young children. Low vision adaptations were made to the procedures, item instructions and play material of the Dutch version of the Bayley Scales of Infant…
A wavelet packet adaptive filtering algorithm for enhancing manatee vocalizations.
Gur, M Berke; Niezrecki, Christopher
2011-04-01
Approximately a quarter of all West Indian manatee (Trichechus manatus latirostris) mortalities are attributed to collisions with watercraft. A boater warning system based on the passive acoustic detection of manatee vocalizations is one possible solution to reduce manatee-watercraft collisions. The success of such a warning system depends on effective enhancement of the vocalization signals in the presence of high levels of background noise, in particular, noise emitted from watercraft. Recent research has indicated that wavelet domain pre-processing of the noisy vocalizations is capable of significantly improving the detection ranges of passive acoustic vocalization detectors. In this paper, an adaptive denoising procedure, implemented on the wavelet packet transform coefficients obtained from the noisy vocalization signals, is investigated. The proposed denoising algorithm is shown to improve the manatee detection ranges by a factor ranging from two (minimum) to sixteen (maximum) compared to high-pass filtering alone, when evaluated using real manatee vocalization and background noise signals of varying signal-to-noise ratios (SNR). Furthermore, the proposed method is also shown to outperform a previously suggested feedback adaptive line enhancer (FALE) filter on average 3.4 dB in terms of noise suppression and 0.6 dB in terms of waveform preservation. PMID:21476661
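Wavelet-domain denoising of the kind described above can be illustrated with a minimal Haar-wavelet soft-thresholding sketch. This is a simplified illustration of the general idea (shrink small detail coefficients, keep the rest), not the authors' wavelet-packet algorithm; the signal and noise levels are invented for the demo:

```python
import numpy as np

def haar_fwd(x):
    """One level of the Haar wavelet transform (length must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, levels=4, k=3.0):
    """Soft-threshold the detail coefficients at each level; the
    threshold k*sigma uses a robust noise estimate from the
    finest-level details (median absolute deviation)."""
    details = []
    a = x
    for _ in range(levels):
        a, d = haar_fwd(a)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745
    t = k * sigma
    for d in details:
        np.copyto(d, np.sign(d) * np.maximum(np.abs(d) - t, 0.0))
    for d in reversed(details):
        a = haar_inv(a, d)
    return a

rng = np.random.default_rng(3)
n = 1024
clean = np.sin(2 * np.pi * 5 * np.arange(n) / n)   # smooth test "vocalization"
noisy = clean + rng.normal(scale=0.4, size=n)
out = denoise(noisy)
```

For a smooth signal, most of the signal energy lands in the approximation band while broadband noise spreads evenly over the details, so thresholding the details suppresses noise with little distortion of the waveform.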
Crew procedures development techniques
NASA Technical Reports Server (NTRS)
Arbet, J. D.; Benbow, R. L.; Hawk, M. L.; Mangiaracina, A. A.; Mcgavern, J. L.; Spangler, M. C.
1975-01-01
The study developed requirements, designed, developed, checked out and demonstrated the Procedures Generation Program (PGP). The PGP is a digital computer program which provides a computerized means of developing flight crew procedures based on crew action in the shuttle procedures simulator. In addition, it provides a real time display of procedures, difference procedures, performance data and performance evaluation data. Reconstruction of displays is possible post-run. Data may be copied, stored on magnetic tape and transferred to the document processor for editing and documentation distribution.
A novel model for simultaneous study of neointestinal regeneration and intestinal adaptation.
Jwo, Shyh-Chuan; Tang, Shye-Jye; Chen, Jim-Ray; Chiang, Kun-Chun; Huang, Ting-Shou; Chen, Huang-Yang
2013-01-01
The use of autologous grafts, fabricated from tissue-engineered neointestine, to enhance insufficient compensation of intestinal adaptation for severe short bowel syndrome is a compelling idea. Unfortunately, current approaches and knowledge for neointestinal regeneration, unlike intestinal adaptation, are still unsatisfactory. Thus, we have designed a novel model of intestinal adaptation with simultaneous neointestinal regeneration and evaluated its feasibility for future basic research and clinical application. Fifty male Sprague-Dawley rats weighing 250-350 g underwent this procedure and were sacrificed at 4, 8, and 12 weeks postoperatively. Spatiotemporal analyses were carried out by gross, histology, and DNA/protein quantification. Three rats died of operative complications. In early experiments, the use of a hard silicone stent as tissue scaffold in 11 rats was unsatisfactory for neointestinal regeneration. In later experiments, when a soft silastic tube was used, the success rate increased up to 90.9%. Further analyses revealed that no neointestine developed without donor intestine; regenerated lengths of mucosa and muscle were positively related to time postsurgery but independent of donor length of 0.5 or 1 cm. Other parameters of neointestinal regeneration or intestinal adaptation showed no relationship to either time postsurgery or donor length. In conclusion, this is a potentially important model for investigators searching for solutions to short bowel syndrome. PMID:23441784
Controlling chaos in a defined trajectory using adaptive fuzzy logic algorithm
NASA Astrophysics Data System (ADS)
Sadeghi, Maryam; Menhaj, Bagher
2012-09-01
Chaos is a nonlinear behavior characterized by extreme sensitivity to initial conditions. Chaos control is complicated because solutions never converge to a specific value and instead vary chaotically from one state to the next; a tiny perturbation in a chaotic system may result in chaotic, periodic, or stationary behavior. Modern controllers have been introduced for controlling such chaotic behavior. In this research, an adaptive Fuzzy Logic Controller (AFLC) is proposed to control a chaotic system with two equilibrium points. The method is formulated in an adaptive, progressive fashion with the ability to control nonlinear systems even in undertrained conditions. Using an AFLC, designers are freed from determining a precise mathematical model of the system, and the controller provides the rapid adaptation needed when the dynamics of the nonlinear system vary quickly. Rules and system parameters are generated through the AFLC, and expert knowledge is required only in the initialization stage; if that knowledge does not capture the system dynamics, it is corrected through the parameter-adaptation procedure. The AFLC methodology is an advanced control approach yielding both robustness and smooth motion in nonlinear system control.
Adaptive computing for people with disabilities.
Merrow, S L; Corbett, C D
1994-01-01
Adaptive computing is a relatively new area, and little has been written in the nursing literature on the topic. "Adaptive computing" refers to the professional services and the technology (both hardware and software) that make computing technology accessible for persons with disabilities. Nurses in many settings such as schools, industry, rehabilitation facilities, and the community, can use knowledge of adaptive computing as they counsel, advise, and advocate for people with disabilities. Nurses with an awareness and knowledge of adaptive computing will be better able to promote high-level wellness for individuals with disabilities, thus maximizing their potential for an active fulfilling life. People with different types of disabilities, including visual, mobility, hearing, learning, communication disorders and acquired brain injuries may benefit from computer adaptations. Disabled people encounter barriers to computing in six major areas: 1) the environment, 2) data entry, 3) information output, 4) technical documentation, 5) support, and 6) training. After a discussion of these barriers, the criteria for selecting appropriate adaptations and selected examples of adaptations are presented. Several case studies illustrate the evaluation process and the development of adaptive computer solutions. PMID:8082064
Adaptive structures - Test hardware and experimental results
NASA Technical Reports Server (NTRS)
Wada, Ben K.; Fanson, James L.; Chen, Gun-Shing; Kuo, Chin-Po
1990-01-01
The facilities and procedures used at JPL to test adaptive structures such as the large deployable reflector (LDR) are described and preliminary results are reported. The applications of adaptive structures in future NASA missions are outlined, and the techniques which are employed to modify damping, stiffness, and isolation characteristics, as well as geometric changes, are listed. The development of adaptive structures is shown to be effective as a result of new actuators and sensors, and examples are listed for categories such as fiber optics, shape-memory materials, piezoelectrics, and electrorheological fluids. Some ground test results are described for laboratory truss structures and truss test beds, which are shown to be efficient and easy to assemble in space. Adaptive structures are shown to be important for precision space structures such as the LDR, and can alleviate ground test requirements.
Krawczyk, Gerhard Erich; Miller, Kevin Michael
2011-07-26
There is provided a method of making a polymer solution comprising polymerizing one or more monomer in a solvent, wherein said monomer comprises one or more ethylenically unsaturated monomer that is a multi-functional Michael donor, and wherein said solvent comprises 40% or more by weight, based on the weight of said solvent, one or more multi-functional Michael donor.
ERIC Educational Resources Information Center
Starkman, Neal
2007-01-01
Poor classroom acoustics are impairing students' hearing and their ability to learn. However, technology has come up with a solution: tools that focus voices in a way that minimizes intrusive ambient noise and gets to the intended receiver--not merely amplifying the sound, but also clarifying and directing it. One provider of classroom audio…
Solution-Assisted Optical Contacting
NASA Technical Reports Server (NTRS)
Shaddock, Daniel; Abramovici, Alexander
2004-01-01
A modified version of a conventional optical-contact procedure has been found to facilitate alignment of optical components. The optical-contact procedure (called simply optical contacting in the art) is a standard means of bonding two highly polished and cleaned glass optical components without using epoxies or other adhesives. In its unmodified form, the procedure does not involve the use of any foreign substances at all: components to be optically contacted are dry. The main disadvantage of conventional optical contacting is that it is difficult or impossible to adjust the alignment of the components once they have become bonded. In the modified version of the procedure, a drop of an alcohol-based optical cleaning solution (isopropyl alcohol or similar) is placed at the interface between two components immediately before putting the components together. The solution forms a weak bond that gradually strengthens during a time interval of the order of tens of seconds as the alcohol evaporates. While the solution is present, the components can be slid, without loss of contact, to perform fine adjustments of their relative positions. After about a minute, most of the alcohol has evaporated and the optical components are rigidly attached to each other. If necessary, more solution can be added to enable resumption or repetition of the adjustment until the components are aligned to the required precision.
Higher-order numerical solutions using cubic splines
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1976-01-01
A cubic spline collocation procedure was developed for the numerical solution of partial differential equations. This spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy on a nonuniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, are presented for several model problems.
Balancing Flexible Constraints and Measurement Precision in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Moyer, Eric L.; Galindo, Jennifer L.; Dodd, Barbara G.
2012-01-01
Managing test specifications--both multiple nonstatistical constraints and flexibly defined constraints--has become an important part of designing item selection procedures for computerized adaptive tests (CATs) in achievement testing. This study compared the effectiveness of three procedures: constrained CAT, flexible modified constrained CAT,…
NASA Astrophysics Data System (ADS)
Northrup, Scott A.
A new parallel implicit adaptive mesh refinement (AMR) algorithm is developed for the prediction of unsteady behaviour of laminar flames. The scheme is applied to the solution of the system of partial-differential equations governing time-dependent, two- and three-dimensional, compressible laminar flows for reactive thermally perfect gaseous mixtures. A high-resolution finite-volume spatial discretization procedure is used to solve the conservation form of these equations on body-fitted multi-block hexahedral meshes. A local preconditioning technique is used to remove numerical stiffness and maintain solution accuracy for low-Mach-number, nearly incompressible flows. A flexible block-based octree data structure has been developed and is used to facilitate automatic solution-directed mesh adaptation according to physics-based refinement criteria. The data structure also enables an efficient and scalable parallel implementation via domain decomposition. The parallel implicit formulation makes use of a dual-time-stepping like approach with an implicit second-order backward discretization of the physical time, in which a Jacobian-free inexact Newton method with a preconditioned generalized minimal residual (GMRES) algorithm is used to solve the system of nonlinear algebraic equations arising from the temporal and spatial discretization procedures. An additive Schwarz global preconditioner is used in conjunction with block incomplete LU type local preconditioners for each sub-domain. The Schwarz preconditioning and block-based data structure readily allow efficient and scalable parallel implementations of the implicit AMR approach on distributed-memory multi-processor architectures. The scheme was applied to solutions of steady and unsteady laminar diffusion and premixed methane-air combustion and was found to accurately predict key flame characteristics. For a premixed flame under terrestrial gravity, the scheme accurately predicted the frequency of the natural
Procedure improvement enterprises
Davis, P.L.
1992-01-01
At Allied-Signal's Kansas City Division (KCD), we recognize the importance of clear, concise and timely procedures for sharing information, promoting consistency and documenting the way we do business. For these reasons, the KCD has gathered a team of employees to analyze the process we currently use to publish procedures, identify the procedure needs of KCD employees, and design a system that meets or exceeds the requirements and expectations of DOE. The name of our group is the Procedure Improvement Enterprise Critical Process Team, or PIE CPT. The mission statement of Procedure Improvement Enterprise is to develop and implement within the Kansas City Division an effective and flexible procedure system that will establish a model of excellence, will emphasize teamwork and open communication, and will ensure compliance with corporate/government requirements.
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
Pyroshock prediction procedures
NASA Astrophysics Data System (ADS)
Piersol, Allan G.
2002-05-01
Given sufficient effort, pyroshock loads can be predicted by direct analytical procedures using hydrocodes that analytically model the details of the pyrotechnic explosion and its interaction with adjacent structures, including nonlinear effects. However, it is more common to predict pyroshock environments using empirical procedures based upon extensive studies of past pyroshock data. Various empirical pyroshock prediction procedures are discussed, including those developed by the Jet Propulsion Laboratory, Lockheed-Martin, and Boeing.
Candidate CDTI procedures study
NASA Technical Reports Server (NTRS)
Ace, R. E.
1981-01-01
A concept with potential for increasing airspace capacity by involving the pilot in the separation control loop is discussed. Some candidate options are presented. Both enroute and terminal area procedures are considered and, in many cases, a technologically advanced Air Traffic Control structure is assumed. Minimum display characteristics recommended for each of the described procedures are presented. Recommended sequencing of the operational testing of each of the candidate procedures is presented.
Hill, Colin
2010-01-01
Recently we reported a role for compatible solute uptake in mediating bile tolerance and increased gastrointestinal persistence in the foodborne pathogen Listeria monocytogenes.1 Herein, we review the evolution in our understanding of how these low molecular weight molecules contribute to growth and survival of the pathogen both inside and outside the body, and how this stress survival mechanism may ultimately be used to target and kill the pathogen. PMID:21326913
Adaptive Finite-Element Computation In Fracture Mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1995-01-01
Report discusses recent progress in use of solution-adaptive finite-element computational methods to solve two-dimensional problems in linear elastic fracture mechanics. Method also shown extensible to three-dimensional problems.
Peritoneal dialysis solution and nutrition.
Verger, Christian
2012-01-01
Between 20% and 70% of peritoneal dialysis patients show some signs of malnutrition. Anorexia, protein and amino acid losses in dialysate, the advanced age of elderly patients, inflammation and cardiac failure are among the main causes. Modern dialysis solutions aim to reduce these causes, but none of them is without side effects: glucose is relatively safe and brings additional energy but induces anorexia and lipid abnormalities; amino acids compensate for dialysate losses but may increase uremia and acidosis; icodextrin helps control hyperhydration and chronic heart failure and minimizes glucose side effects, but may sometimes cause inflammation; and multi-chamber bags allow the replacement of lactate by bicarbonate, are more biocompatible, decrease GDP, induce less inflammation and have a better effect on nutritional status. However, managing nutrition with the solutions available today requires various combinations of solutions adapted to different patient profiles; there is currently no single universal solution that minimizes malnutrition in peritoneal dialysis patients. PMID:22652708
A self-correcting procedure for computational liquid metal magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Araseki, Hideo; Kotake, Shoji
1994-02-01
This paper describes a new application of the self-correcting procedure to computational liquid metal magnetohydrodynamics. In this procedure, the conservation law of the electric current density incorporated in a Poisson equation for the scalar potential plays an important role in correcting this potential. This role is similar to that of the conservation law of mass in a Poisson equation for the pressure. Some numerical results show that the proposed self-correcting procedure can provide a more accurate numerical solution of the electric current density than the existing solution procedure.
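The correction idea described above, solving a Poisson equation for the scalar potential with the divergence of the current density as source and subtracting its gradient, can be sketched as a projection step. The spectral, periodic-grid setting below is an illustrative assumption for brevity, not the paper's solver.

```python
import numpy as np

def project_divergence_free(jx, jy, h=1.0):
    """Correct a current-density field so its discrete divergence vanishes.

    Sketch of the self-correcting idea: solve lap(phi) = div(J*) for the
    scalar potential phi, then subtract grad(phi). Done spectrally on a
    periodic grid (an illustrative setting, not the paper's scheme).
    """
    n = jx.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=h)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    div_hat = 1j * kx * np.fft.fft2(jx) + 1j * ky * np.fft.fft2(jy)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # avoid division by zero at the mean mode
    phi_hat = div_hat / (-k2)            # lap(phi) = div  ->  -k^2 phi_hat = div_hat
    phi_hat[0, 0] = 0.0
    jx_c = jx - np.real(np.fft.ifft2(1j * kx * phi_hat))
    jy_c = jy - np.real(np.fft.ifft2(1j * ky * phi_hat))
    return jx_c, jy_c

# Odd grid size keeps all Fourier modes in conjugate pairs (no unpaired
# Nyquist mode), so the corrected fields stay real.
rng = np.random.default_rng(1)
jx, jy = rng.standard_normal((2, 31, 31))
jx_c, jy_c = project_divergence_free(jx, jy)
```

The analogy the abstract draws is visible here: replace the current density by velocity and the scalar potential by pressure and this is the familiar pressure-projection step.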
A computational procedure for large rotational motions in multibody dynamics
NASA Technical Reports Server (NTRS)
Park, K. C.; Chiou, J. C.
1987-01-01
A computational procedure suitable for the solution of equations of motion for multibody systems is presented. The present procedure adopts a differential partitioning of the translational motions and the rotational motions. The translational equations of motion are then treated by either a conventional explicit or an implicit direct integration method. A principal feature of this procedure is a nonlinearly implicit algorithm for updating rotations via the Euler four-parameter representation. This procedure is applied to the rolling of a sphere through a specific trajectory, which shows that it yields robust solutions.
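The Euler four-parameter (unit quaternion) rotation update at the heart of the procedure can be sketched with a simple explicit step plus renormalization; the paper's algorithm is nonlinearly implicit, so the explicit scheme below is only an illustrative stand-in.

```python
import numpy as np

def quat_step(q, omega, dt):
    """One explicit step of the Euler four-parameter (quaternion) update.

    q = (w, x, y, z) is a unit quaternion, omega the body angular rate.
    dq/dt = 0.5 * q * (0, omega); renormalisation enforces the unit
    constraint (a simple explicit stand-in for an implicit scheme).
    """
    w, x, y, z = q
    ox, oy, oz = omega
    dq = 0.5 * np.array([
        -x * ox - y * oy - z * oz,
         w * ox + y * oz - z * oy,
         w * oy + z * ox - x * oz,
         w * oz + x * oy - y * ox,
    ])
    q = q + dt * dq
    return q / np.linalg.norm(q)

# Spin about z at 1 rad/s for pi seconds: a 180-degree turn, q -> (0, 0, 0, 1).
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    q = quat_step(q, (0.0, 0.0, 1.0), np.pi / 1000)
```

The four-parameter form avoids the singularities of three-parameter angle sets, which is why it is attractive for large rotational motions like the rolling-sphere test case.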
NASA Astrophysics Data System (ADS)
Grothmann, T.; Grecksch, K.; Winges, M.; Siebenhüner, B.
2013-12-01
Several case studies show that social factors like institutions, perceptions and social capital strongly affect social capacities to adapt to climate change. Together with economic and technological development they are important for building social capacities. However, there are almost no methodologies for the systematic assessment of social factors. After reviewing existing methodologies we identify the Adaptive Capacity Wheel (ACW) by Gupta et al. (2010), developed for assessing the adaptive capacity of institutions, as the most comprehensive and operationalised framework to assess social factors. The ACW differentiates 22 criteria to assess 6 dimensions: variety, learning capacity, room for autonomous change, leadership, availability of resources, fair governance. To include important psychological factors we extended the ACW by two dimensions: "adaptation motivation" refers to actors' motivation to realise, support and/or promote adaptation to climate change; "adaptation belief" refers to actors' perceptions of realisability and effectiveness of adaptation measures. We applied the extended ACW to assess adaptive capacities of four sectors - water management, flood/coastal protection, civil protection and regional planning - in northwestern Germany. The assessments of adaptation motivation and belief provided a clear added value. The results also revealed some methodological problems in applying the ACW (e.g. overlap of dimensions), for which we propose methodological solutions.
NASA Astrophysics Data System (ADS)
Grothmann, T.; Grecksch, K.; Winges, M.; Siebenhüner, B.
2013-03-01
Several case studies show that "soft social factors" (e.g. institutions, perceptions, social capital) strongly affect social capacities to adapt to climate change. Many soft social factors can probably be changed faster than "hard social factors" (e.g. economic and technological development) and are therefore particularly important for building social capacities. However, there are almost no methodologies for the systematic assessment of soft social factors. Gupta et al. (2010) have developed the Adaptive Capacity Wheel (ACW) for assessing the adaptive capacity of institutions. The ACW differentiates 22 criteria to assess six dimensions: variety, learning capacity, room for autonomous change, leadership, availability of resources, fair governance. To include important psychological factors we extended the ACW by two dimensions: "adaptation motivation" refers to actors' motivation to realise, support and/or promote adaptation to climate change; "adaptation belief" refers to actors' perceptions of realisability and effectiveness of adaptation measures. We applied the extended ACW to assess adaptive capacities of four sectors - water management, flood/coastal protection, civil protection and regional planning - in northwestern Germany. The assessments of adaptation motivation and belief provided a clear added value. The results also revealed some methodological problems in applying the ACW (e.g. overlap of dimensions), for which we propose methodological solutions.
A Novel Approach for Adaptive Signal Processing
NASA Technical Reports Server (NTRS)
Chen, Ya-Chin; Juang, Jer-Nan
1998-01-01
Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed and considerable implementation has been made in our contributions in the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
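The constant-modulus property referenced above gives a cost that needs no cross-moments with a reference signal, which is what makes the adaptation blind. A generic constant-modulus stochastic-gradient update can be sketched as follows; the single-tap demo, step size and initialisation are illustrative choices, not the report's algorithms.

```python
import numpy as np

def cma_update(w, x, mu=1e-2, R=1.0):
    """One constant-modulus stochastic-gradient update.

    w: filter taps, x: current input vector. Descends the dispersion cost
    E[(|y|^2 - R)^2]; blind, since no reference signal or cross-moments
    between input and desired output are required.
    """
    y = w @ x
    w = w - mu * y * (np.abs(y) ** 2 - R) * np.conj(x)
    return w, y

# Toy demo: a single mis-scaled tap recovers unit output modulus on a
# binary (+/-1) source.
rng = np.random.default_rng(0)
w = np.array([0.3])
for _ in range(2000):
    s = rng.choice([-1.0, 1.0])
    w, y = cma_update(w, np.array([s]))
```

With more taps and a dispersive channel the same update performs blind equalization; the report's Lagrange-based derivations arrive at related update rules via constrained optimization.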
Spatial-multiblock procedure for radiation heat transfer
Chai, J.C.; Moder, J.P.
1996-12-31
A spatial-multiblock procedure for radiation heat transfer is presented in this article. The proposed procedure is applicable to isothermal or nonisothermal, absorbing, emitting and scattering semitransparent media with black or reflecting walls. Although not shown in this article, the procedure is also applicable to nongray conditions. The proposed procedure can be used with the discrete ordinates method and the finite volume method. The heat transfer rate, net radiation power and other full-range and half-range moments are conserved between spatial blocks by the proposed procedure. The utilities of the proposed procedure are shown using four sample problems. The solutions indicate that the multiblock procedure can reproduce the results of a single-block procedure even when very coarse spatial grids are used in the multiblock procedure.
Dynamic mesh adaption for triangular and tetrahedral grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1993-01-01
The following topics are discussed: requirements for dynamic mesh adaption; linked-list data structure; edge-based data structure; adaptive-grid data structure; three types of element subdivision; mesh refinement; mesh coarsening; additional constraints for coarsening; anisotropic error indicator for edges; unstructured-grid Euler solver; inviscid 3-D wing; and mesh quality for solution-adaptive grids. The discussion is presented in viewgraph form.
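Among the topics listed above, element subdivision is the core refinement primitive. One common variant, longest-edge bisection of a triangle, can be sketched in a few lines; the data layout and the longest-edge rule are illustrative assumptions (the viewgraphs describe three subdivision types and edge-based data structures, not this exact code).

```python
def bisect_longest_edge(tri, verts):
    """Split a triangle across its longest edge, a refinement primitive
    used in adaptive subdivision (minimal sketch; plain lists, no
    neighbour bookkeeping or mesh-conformity fixes).

    tri: (i, j, k) vertex indices; verts: list of (x, y) points, which is
    extended with the new midpoint. Returns the two child triangles.
    """
    def d2(a, b):
        ax, ay = verts[a]
        bx, by = verts[b]
        return (ax - bx) ** 2 + (ay - by) ** 2

    i, j, k = tri
    # Rotate the index triple so (i, j) is the longest edge.
    i, j, k = max([(i, j, k), (j, k, i), (k, i, j)], key=lambda e: d2(e[0], e[1]))
    # The midpoint of the longest edge becomes a new vertex.
    verts.append(((verts[i][0] + verts[j][0]) / 2, (verts[i][1] + verts[j][1]) / 2))
    m = len(verts) - 1
    return (i, m, k), (m, j, k)

verts = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
t1, t2 = bisect_longest_edge((0, 1, 2), verts)
```

In a real adaptive solver this primitive is paired with coarsening rules and the additional constraints the viewgraphs mention, so that refinement and derefinement keep the triangulation conforming.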
Protection of the main maximum in adaptive antenna arrays
NASA Astrophysics Data System (ADS)
Pistolkors, A. A.
1980-12-01
An adaptive algorithm based on the solution of the problem of minimizing the noise at the output of an array when a constraint is imposed on the main maximum direction is discussed. The suppression depth for the cases of one and two interferences and the enhancement of the direction-finding capability and resolution of an adaptive array are investigated.
Organizational Adaptation and Higher Education.
ERIC Educational Resources Information Center
Cameron, Kim S.
1984-01-01
Organizational adaptation and types of adaptation needed in academe in the future are reviewed and major conceptual approaches to organizational adaptation are presented. The probable environment that institutions will face in the future that will require adaptation is discussed. (MLW)
Hybrid Surface Mesh Adaptation for Climate Modeling
Khamayseh, Ahmed K; de Almeida, Valmor F; Hansen, Glen
2008-01-01
Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called "mesh motion" (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.
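The node-repositioning (r-adaptation) component steered by a mobility function might be pictured, in much-simplified form, as mobility-weighted Laplacian smoothing; the specific scheme below is an assumption for illustration, not the paper's rh algorithm.

```python
import numpy as np

def smooth_nodes(verts, neighbors, mobility, iters=10, relax=0.5):
    """Mesh-motion (r-adaptation) sketch: move each listed node toward the
    centroid of its neighbours, scaled by a mobility value in [0, 1]
    (0 = frozen, as on boundaries). Jacobi-style sweeps for simplicity.
    """
    verts = np.array(verts, dtype=float)
    for _ in range(iters):
        new = verts.copy()
        for n, nbrs in neighbors.items():
            target = verts[nbrs].mean(axis=0)
            new[n] = verts[n] + relax * mobility(n) * (target - verts[n])
        verts = new
    return verts

# Tiny demo: an off-centre middle node relaxes to the midpoint of its two
# frozen neighbours (topology and mobility function are illustrative).
out = smooth_nodes([(0.0, 0.0), (0.9, 0.0), (2.0, 0.0)],
                   {1: [0, 2]},
                   mobility=lambda n: 1.0 if n == 1 else 0.0,
                   iters=20)
```

A mobility function of this kind is one way to blend r adaptation with h refinement: nodes near freshly subdivided elements can be given higher mobility so the transition between refined and unrefined regions stays smooth.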
Parallel object-oriented adaptive mesh refinement
Balsara, D.; Quinlan, D.J.
1997-04-01
In this paper we study adaptive mesh refinement (AMR) for elliptic and hyperbolic systems. We use the Asynchronous Fast Adaptive Composite Grid Method (AFACX), a parallel algorithm based upon the Fast Adaptive Composite Grid Method (FAC), as a test case of an adaptive elliptic solver. For our hyperbolic system example we use TVD and ENO schemes for solving the Euler and MHD equations. We use the structured grid load balancer MLB as a tool for obtaining a load balanced distribution in a parallel environment. Parallel adaptive mesh refinement poses difficulties in expressing the basic single grid solver, whether elliptic or hyperbolic, in a fashion that parallelizes seamlessly. It also requires that these basic solvers work together within the adaptive mesh refinement algorithm, which uses the single grid solvers as one part of its adaptive solution process. We show that use of AMR++, an object-oriented library within the OVERTURE Framework, simplifies the development of AMR applications. Parallel support is provided and abstracted through the use of the P++ parallel array class.
Dynamic Adaption of Vascular Morphology
Okkels, Fridolin; Jacobsen, Jens Christian Brings
2012-01-01
The structure of vascular networks adapts continuously to meet changes in demand of the surrounding tissue. Most of the known vascular adaptation mechanisms are based on local reactions to local stimuli such as pressure and flow, which in turn reflects influence from the surrounding tissue. Here we present a simple two-dimensional model in which, as an alternative approach, the tissue is modeled as a porous medium with intervening sharply defined flow channels. Based on simple, physiologically realistic assumptions, flow-channel structure adapts so as to reach a configuration in which all parts of the tissue are supplied. A set of model parameters uniquely determine the model dynamics, and we have identified the region of the best-performing model parameters (a global optimum). This region is surrounded in parameter space by less optimal model parameter values, and this separation is characterized by steep gradients in the related fitness landscape. Hence it appears that the optimal set of parameters tends to localize close to critical transition zones. Consequently, while the optimal solution is stable for modest parameter perturbations, larger perturbations may cause a profound and permanent shift in systems characteristics. We suggest that the system is driven toward a critical state as a consequence of the ongoing parameter optimization, mimicking an evolutionary pressure on the system. PMID:23060814
Science Safety Procedure Handbook.
ERIC Educational Resources Information Center
Lynch, Mervyn A.; Offet, Lorna
This booklet outlines general safety procedures in the areas of: (1) student supervision; (2) storage safety regulations, including lists of incompatible chemicals, techniques of disposal and storage; (3) fire; and (4) first aid. Specific sections exist for elementary, junior high school, senior high school, in which special procedures are…
Handbook of radiologic procedures
Hedgcock, M.
1986-01-01
This book is organized around radiologic procedures, with each discussed from the points of view of: indications, contraindications, materials, method of procedure and complications. Covered in this book are: emergency radiology, chest radiology, bone radiology, gastrointestinal radiology, GU radiology, pediatric radiology, computerized tomography, neuroradiology, visceral and peripheral angiography, cardiovascular radiology, nuclear medicine, lymphangiography, and mammography.
Procedural Learning and Dyslexia
ERIC Educational Resources Information Center
Nicolson, R. I.; Fawcett, A. J.; Brookes, R. L.; Needle, J.
2010-01-01
Three major "neural systems", specialized for different types of information processing, are the sensory, declarative, and procedural systems. It has been proposed ("Trends Neurosci.",30(4), 135-141) that dyslexia may be attributable to impaired function in the procedural system together with intact declarative function. We provide a brief…
Coombs' Type Response Procedures.
ERIC Educational Resources Information Center
Koehler, Roger A.
This paper provides substantial evidence in favor of the continued use of conventional objective testing procedures in lieu of either the Coombs' cross-out technique or the Dressel and Schmid free-choice response procedure. From the studies presented in this paper, the tendency is for the cross-out and the free choice methods to yield a decrement…
ERIC Educational Resources Information Center
Davis, Kevin; Poston, George
This manual provides information on the enucleation procedure (removal of the eyes for organ banks). An introductory section focuses on the anatomy of the eye and defines each of the parts. Diagrams of the eye are provided. A list of enucleation materials follows. Other sections present outlines of (1) a sterile procedure; (2) preparation for eye…
ERIC Educational Resources Information Center
Nevada State Dept. of Education, Carson City.
The procedure described herein entails the use of an educational planning consultant, statements of educational and service problems to be solved by proposed construction, a site plan, and architect selection. Also included in the outline of procedures is a tentative statement of specifications, tentative cost estimates and matrices for conducting…
ERIC Educational Resources Information Center
Klein, William D.; McKenna, Bernard
1997-01-01
States that, although policies and procedure documents play an important role in developing and maintaining a consistent quality of interaction in organizations, research literature is weak in this area. Initiates further discussion by defining and describing policy/procedure documents. Identifies a third kind, work instructions. Uses a genre…
46 CFR 153.1065 - Sodium chlorate solutions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 5 2013-10-01 2013-10-01 false Sodium chlorate solutions. 153.1065 Section 153.1065... Procedures § 153.1065 Sodium chlorate solutions. (a) No person may load sodium chlorate solutions into a... solutions are immediately washed away. Approval of Surveyors and Handling of Categories A, B, C, and D...
46 CFR 153.1065 - Sodium chlorate solutions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 5 2014-10-01 2014-10-01 false Sodium chlorate solutions. 153.1065 Section 153.1065... Procedures § 153.1065 Sodium chlorate solutions. (a) No person may load sodium chlorate solutions into a... solutions are immediately washed away. Approval of Surveyors and Handling of Categories A, B, C, and D...
Application of the Flood-IMPAT procedure in the Valle d'Aosta Region, Italy
NASA Astrophysics Data System (ADS)
Minucci, Guido; Mendoza, Marina Tamara; Molinari, Daniela; Atun, Funda; Menoni, Scira; Ballio, Francesco
2016-04-01
Flood Risk Management Plans (FRMPs), established by the European "Floods" Directive (Directive 2007/60/EU) to require Member States to address all aspects of flood risk management while taking into account the costs and benefits of proposed mitigation tools, must be reviewed under the same law every six years. This review is aimed at continuously increasing the effectiveness of risk management, on the basis of the most advanced knowledge of flood risk and the most (economically) feasible solutions, also taking into consideration the achievements of the previous management cycle. Within this context, the Flood-IMPAT (Integrated Meso-scale Procedure to Assess Territorial flood risk) procedure has been developed with the aim of overcoming the limits of the risk maps produced by the Po River Basin Authority and adopted for the first version of the Po River FRMP. The procedure allows the estimation of flood risk at the meso-scale and is characterized by three main peculiarities. First is its feasibility for the entire Italian territory. Second is the possibility to express risk in monetary terms (i.e. expected damage), at least for those categories of damage for which suitable models are available. Finally, independent modules compose the procedure: each module allows the estimation of a certain type of damage (i.e. direct, indirect, intangible) in a certain sector (e.g. residential, industrial, agriculture, environment) separately, guaranteeing flexibility in the implementation. This paper shows the application of the Flood-IMPAT procedure and recent advancements aimed at increasing its reliability and usability. Through a further implementation of the procedure in the Dora Baltea River Basin (northern Italy), it was possible to test the sensitivity of the risk estimates supplied by Flood-IMPAT with respect to different damage models and different approaches for the estimation of assets at risk. Risk estimates were also compared with observed damage data in the investigated areas.
Toddler test or procedure preparation
... procedure; Test/procedure preparation - toddler; Preparing for a medical test or procedure - toddler ... A, Franz BE. Practical communication guide for paediatric procedures. Emerg ... PMID: 19588390 www.ncbi.nlm.nih.gov/pubmed/19588390 .
ERIC Educational Resources Information Center
Melaragno, Ralph J.
The two-phase study compared two methods of adapting self-instructional materials to individual differences among learners. The methods were compared with each other and with a control condition involving only minimal adaptation. The first adaptation procedure was based on subjects' performances on a learning task in Phase I of the study; the…
Liongue, Clifford; John, Liza B; Ward, Alister
2011-01-01
Adaptive immunity, involving distinctive antibody- and cell-mediated responses to specific antigens based on "memory" of previous exposure, is a hallmark of higher vertebrates. It has been argued that adaptive immunity arose rapidly, as articulated in the "big bang theory" surrounding its origins, which stresses the importance of coincident whole-genome duplications. Through a close examination of the key molecules and molecular processes underpinning adaptive immunity, this review suggests a less-extreme model, in which adaptive immunity emerged as part of a longer evolutionary journey. Clearly, whole-genome duplications provided additional raw genetic materials that were vital to the emergence of adaptive immunity, but a variety of other genetic events were also required to generate some of the key molecules, whereas others were preexisting and simply co-opted into adaptive immunity. PMID:21395512
Parallel Anisotropic Tetrahedral Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000-to-1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
Gravitational adaptation of animals
NASA Technical Reports Server (NTRS)
Smith, A. H.; Burton, R. R.
1982-01-01
The effect of gravitational adaptation is studied in a group of five Leghorn cocks which had become physiologically adapted to 2 G after 162 days of centrifugation. After this period of adaptation, they are periodically exposed to a 2 G field, accompanied by five previously unexposed hatch-mates, and the degree of retained acceleration adaptation is estimated from the decrease in lymphocyte frequency after 24 hr at 2 G. Results show that the previously adapted birds exhibit an 84% greater lymphopenia than the unexposed birds, and that the lymphocyte frequency does not decrease to a level below that found at the end of 162 days at 2 G. In addition, the capacity for adaptation to chronic acceleration is found to be highly heritable. An acceleration tolerant strain of birds shows lesser mortality during chronic acceleration, particularly in intermediate fields, although the result of acceleration selection is largely quantitative (a greater number of survivors) rather than qualitative (behavioral or physiological changes).
Technology transfer for adaptation
NASA Astrophysics Data System (ADS)
Biagini, Bonizella; Kuhl, Laura; Gallagher, Kelly Sims; Ortiz, Claudia
2014-09-01
Technology alone will not be able to solve adaptation challenges, but it is likely to play an important role. As a result of the role of technology in adaptation and the importance of international collaboration for climate change, technology transfer for adaptation is a critical but understudied issue. Through an analysis of Global Environment Facility-managed adaptation projects, we find there is significantly more technology transfer occurring in adaptation projects than might be expected given the pessimistic rhetoric surrounding technology transfer for adaptation. Most projects focused on demonstration and early deployment/niche formation for existing technologies rather than earlier stages of innovation, which is understandable considering the pilot nature of the projects. Key challenges for the transfer process, including technology selection and appropriateness under climate change, markets and access to technology, and diffusion strategies are discussed in more detail.
Gardner, Andy
2009-01-01
The problem of adaptation is to explain the apparent design of organisms. Darwin solved this problem with the theory of natural selection. However, population geneticists, whose responsibility it is to formalize evolutionary theory, have long neglected the link between natural selection and organismal design. Here, I review the major historical developments in theory of organismal adaptation, clarifying what adaptation is and what it is not, and I point out future avenues for research. PMID:19793739
Phase Adaptation and Correction by Adaptive Optics
NASA Astrophysics Data System (ADS)
Tiziani, Hans J.
2010-04-01
Adaptive optical elements and systems for imaging or laser beam propagation have been used for some time, particularly in astronomy, where image quality is degraded by atmospheric turbulence. In astronomical telescopes a deformable mirror is frequently used to compensate wavefront errors due to deformations of the large mirror, vibrations and turbulence, and hence to increase image quality. In the last few years, interesting elements such as spatial light modulators (SLMs) based on photorefractive crystals, liquid crystals, micro mirrors and membrane mirrors were introduced. The development of liquid crystals and micro mirrors was driven by data projectors as consumer products. They typically contain a matrix of individually addressable pixels of liquid crystals or flip mirrors, or more recently piston mirrors for special applications. Pixel sizes are on the order of a few microns, making these devices appropriate as active diffractive elements in digital holography or miniature masks. Although liquid crystals are mainly optimized for intensity modulation, they can be used for phase modulation. Adaptive optics is a technology for beam shaping and wavefront adaptation. The application of spatial light modulators for wavefront adaptation and correction, defect analysis and sensing is discussed. Dynamic digital holograms are generated with liquid crystal devices (LCDs) and used for wavefront correction as well as for beam shaping and phase manipulation. Furthermore, adaptive optics is very useful for extending the measuring range of wavefront sensors and for wavefront adaptation in order to measure and compare the shape of high-precision aspherical surfaces.
Colorimetric determination of tobramycin in parenteral solutions.
Das Gupta, V
1988-06-01
A colorimetric method based on a reaction between tobramycin and alkaline copper sulphate solution has been proposed to quantify tobramycin in injections. The excipients present and normal saline did not interfere with the assay procedure. A tobramycin sample which was decomposed using either sulphuric acid or sodium hydroxide solution indicated fairly good stability on both sides of the pH scale. PMID:3209627
On Browne's Solution for Oblique Procrustes Rotation
ERIC Educational Resources Information Center
Cramer, Elliot M.
1974-01-01
A form of Browne's (1967) solution of finding a least squares fit to a specified factor structure is given which does not involve solution of an eigenvalue problem. It suggests the possible existence of a singularity, and a simple modification of Browne's computational procedure is proposed. (Author/RC)
Evans, G.W. Jacobs, S.V.; Frager, N.B.
1982-10-01
This study examined the health effects of human adaptation to photochemical smog. A group of recent arrivals to the Los Angeles air basin were compared to long-term residents of the basin. Evidence for adaptation included greater irritation and respiratory problems among the recent arrivals and desensitization among the long-term residents in their judgments of the severity of the smog problem to their health. There was no evidence for biochemical adaptation as measured by hemoglobin response to oxidant challenge. The results were discussed in terms of psychological adaptation to chronic environmental stressors.
Adaptive parallel logic networks
NASA Technical Reports Server (NTRS)
Martinez, Tony R.; Vidal, Jacques J.
1988-01-01
Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.
Quantifying the Adaptive Cycle
Angeler, David G.; Allen, Craig R.; Garmestani, Ahjond S.; Gunderson, Lance H.; Hjerne, Olle; Winder, Monika
2015-01-01
The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994–2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems. PMID:26716453
Decentralized adaptive control
NASA Technical Reports Server (NTRS)
Oh, B. J.; Jamshidi, M.; Seraji, H.
1988-01-01
A decentralized adaptive control is proposed to stabilize and track nonlinear, interconnected subsystems with unknown parameters. The adaptation of the controller gain is derived using model reference adaptive control theory based on Lyapunov's direct method. The adaptive gains consist of sigma, proportional, and integral combinations of the measured and reference values of the corresponding subsystem. The proposed control is applied to the joint control of a two-link robot manipulator, and its performance in computer simulation agrees with theoretical expectations.
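The Lyapunov-based model reference adaptation summarized in this abstract can be illustrated with a minimal scalar loop (a sketch only: the plant, reference model, and adaptation gain below are hypothetical, and this is a single loop rather than the paper's decentralized multi-subsystem controller with sigma/proportional/integral gains):

```python
# Scalar MRAC sketch: plant x' = a*x + u with unknown a; reference model
# xm' = -am*xm + am*r. Control u = -theta*x + am*(r - x), with the
# Lyapunov-based adaptation theta' = gamma*e*x driving e = x - xm to zero
# (theta itself need not converge without persistent excitation).
# All numerical values are hypothetical.
a_true, am, gamma, dt, r = 1.5, 2.0, 5.0, 1e-3, 1.0
x, xm, theta = 0.0, 0.0, 0.0
for _ in range(int(20.0 / dt)):          # 20 s of forward-Euler integration
    e = x - xm                           # tracking error
    u = -theta * x + am * (r - x)        # adaptive feedback + reference term
    theta += dt * gamma * e * x          # Lyapunov-based gain adaptation
    x += dt * (a_true * x + u)           # unknown plant
    xm += dt * (-am * xm + am * r)       # reference model
tracking_error = abs(x - xm)
```

The Lyapunov function V = e²/2 + (θ − a)²/(2γ) has V̇ = −am·e² ≤ 0 along trajectories, which is the stability argument behind adaptation laws of this kind.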
ERIC Educational Resources Information Center
Geri, George A.; Hubbard, David C.
Two adaptive psychophysical procedures (tracking and "yes-no" staircase) for obtaining human visual contrast sensitivity functions (CSF) were evaluated. The procedures were chosen based on their proven validity and the desire to evaluate the practical effects of stimulus transients, since tracking procedures traditionally employ gradual stimulus…
A Comparison of Exposure Control Procedures in CATs Using the 3PL Model
ERIC Educational Resources Information Center
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G.
2013-01-01
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
Fisher, J A; Favreau, M B
1991-05-01
We have developed a novel plasmid isolation procedure and have adapted it for use on an automated nucleic acid extraction instrument. The protocol is based on the finding that phenol extraction of a 1 M guanidinium thiocyanate solution at pH 4.5 efficiently removes genomic DNA from the aqueous phase, while supercoiled plasmid DNA is retained in the aqueous phase. S1 nuclease digestion of the removed genomic DNA shows that it has been denatured, which presumably confers solubility in the organic phase. The complete automated protocol for plasmid isolation involves pretreatment of bacterial cells successively with lysozyme, RNase A, and proteinase K. Following these digestions, the solution is extracted twice with a phenol/chloroform/water mixture and once with chloroform. Purified plasmid is then collected by isopropanol precipitation. The purified plasmid is essentially free of genomic DNA, RNA, and protein and is a suitable substrate for DNA sequencing and other applications requiring highly pure supercoiled plasmid. PMID:1713749
Traveling Wave Solutions for Nonlinear Differential-Difference Equations of Rational Types
NASA Astrophysics Data System (ADS)
İsmail, Aslan
2016-01-01
Differential-difference equations are considered to be hybrid systems because the spatial variable n is discrete while the time t is usually kept continuous. Although a considerable amount of research has been carried out in the field of nonlinear differential-difference equations, the majority of the results deal with polynomial types. Limited research has been reported regarding such equations of rational type. In this paper we present an adaptation of the (G‧/G)-expansion method to solve nonlinear rational differential-difference equations. The procedure is demonstrated using two distinct equations. Our approach allows one to construct three types of exact traveling wave solutions (hyperbolic, trigonometric, and rational) by means of the simplified form of the auxiliary equation method with reduced parameters. Our analysis leads to analytic solutions in terms of topological solitons and singular periodic functions as well.
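For orientation, the auxiliary equation at the heart of the standard (G‧/G)-expansion method and its three solution families can be sketched as follows (this is the generic textbook form, not the paper's reduced-parameter adaptation to rational differential-difference equations):

```latex
% Auxiliary equation of the (G'/G)-expansion method
G''(\xi) + \lambda\,G'(\xi) + \mu\,G(\xi) = 0,
\qquad \Delta = \lambda^{2} - 4\mu,
% whose general solution gives
\frac{G'}{G} = -\frac{\lambda}{2} +
\begin{cases}
\dfrac{\sqrt{\Delta}}{2}\,
\dfrac{C_{1}\sinh\!\big(\tfrac{\sqrt{\Delta}}{2}\xi\big)
      +C_{2}\cosh\!\big(\tfrac{\sqrt{\Delta}}{2}\xi\big)}
      {C_{1}\cosh\!\big(\tfrac{\sqrt{\Delta}}{2}\xi\big)
      +C_{2}\sinh\!\big(\tfrac{\sqrt{\Delta}}{2}\xi\big)},
& \Delta > 0 \ \text{(hyperbolic)},\\[2ex]
\dfrac{\sqrt{-\Delta}}{2}\,
\dfrac{-C_{1}\sin\!\big(\tfrac{\sqrt{-\Delta}}{2}\xi\big)
      +C_{2}\cos\!\big(\tfrac{\sqrt{-\Delta}}{2}\xi\big)}
      {C_{1}\cos\!\big(\tfrac{\sqrt{-\Delta}}{2}\xi\big)
      +C_{2}\sin\!\big(\tfrac{\sqrt{-\Delta}}{2}\xi\big)},
& \Delta < 0 \ \text{(trigonometric)},\\[2ex]
\dfrac{C_{2}}{C_{1}+C_{2}\,\xi},
& \Delta = 0 \ \text{(rational)}.
\end{cases}
```

The sign of \(\Delta\) selects the family, which is why methods of this type yield the three kinds of traveling wave solutions (hyperbolic, trigonometric, rational) mentioned in the abstract.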
Baltayiannis, Nikolaos; Michail, Chandrinos; Lazaridis, George; Anagnostopoulos, Dimitrios; Baka, Sofia; Mpoukovinas, Ioannis; Karavasilis, Vasilis; Lampaki, Sofia; Papaiwannou, Antonis; Karavergou, Anastasia; Kioumis, Ioannis; Pitsiou, Georgia; Katsikogiannis, Nikolaos; Tsakiridis, Kosmas; Rapti, Aggeliki; Trakada, Georgia; Zissimopoulos, Athanasios; Zarogoulidis, Konstantinos
2015-01-01
Minimally invasive procedures, which include laparoscopic surgery, use state-of-the-art technology to reduce the damage to human tissue when performing surgery. Minimally invasive procedures require small “ports” from which the surgeon inserts thin tubes called trocars. Carbon dioxide gas may be used to inflate the area, creating a space between the internal organs and the skin. Then a miniature camera (usually a laparoscope or endoscope) is placed through one of the trocars so the surgical team can view the procedure as a magnified image on video monitors in the operating room. Specialized equipment is inserted through the trocars based on the type of surgery. There are some advanced minimally invasive surgical procedures that can be performed almost exclusively through a single point of entry—meaning only one small incision, like the “uniport” video-assisted thoracoscopic surgery (VATS). Not only do these procedures usually provide outcomes equivalent to traditional “open” surgery (which sometimes requires a large incision), but minimally invasive procedures (using small incisions) may offer significant benefits as well: (I) faster recovery; (II) a shorter hospital stay; (III) less scarring and (IV) less pain. In our current mini review we will present the minimally invasive procedures for thoracic surgery. PMID:25861610
Adaptation of Selenastrum capricornutum (Chlorophyceae) to copper
Kuwabara, J.S.; Leland, H.V.
1986-01-01
Selenastrum capricornutum Printz, growing in a chemically defined medium, was used as a model for studying adaptation of algae to a toxic metal (copper) ion. Cells exhibited lag-phase adaptation to 0.8 µM total Cu (10^-12 M free-ion concentration) after 20 generations of Cu exposure. Selenastrum adapted to the same concentration when Cu was gradually introduced over an 8-h period using a specially designed apparatus that provided a transient increase in exposure concentration. Cu adaptation was not attributable to media conditioning by algal exudates. Duration of lag phase was a more sensitive index of copper toxicity to Selenastrum than was growth rate or stationary-phase cell density under the experimental conditions used. Chemical speciation of the Cu dosing solution influenced the duration of lag phase even when media formulations were identical after dosing. Selenastrum initially exposed to Cu in a CuCl2 injection solution exhibited a lag phase of 3.9 d, but this was reduced to 1.5 d when a CuEDTA solution was used to achieve the same total Cu and EDTA concentrations. Physical and chemical processes that accelerated the rate of increase in cupric ion concentration generally increased the duration of lag phase.
Adaptive regularization of earthquake slip distribution inversion
NASA Astrophysics Data System (ADS)
Wang, Chisheng; Ding, Xiaoli; Li, Qingquan; Shan, Xinjian; Zhu, Jiasong; Guo, Bo; Liu, Peng
2016-04-01
Regularization is a routine approach used in earthquake slip distribution inversion to avoid numerically abnormal solutions. To date, most slip inversion studies have imposed uniform regularization on all the fault patches. However, adaptive regularization, where each retrieved parameter is regularized differently, has exhibited better performance in other research fields such as image restoration. In this paper, we investigate adaptive regularization for earthquake slip distribution inversion. It is found that adaptive regularization can achieve a significantly smaller mean square error (MSE) than uniform regularization if it is set properly. We propose an adaptive regularization method based on weighted total least squares (WTLS). This approach assumes that errors exist in both the regularization matrix and the observation, and an iterative algorithm is used to compute the solution. A weight coefficient is used to balance the regularization-matrix residual and the observation residual. An experiment using four slip patterns was carried out to validate the proposed method. The results show that the proposed regularization method can achieve a smaller MSE than uniform regularization and resolution-based adaptive regularization, and the improvement in MSE is more significant for slip patterns with low-resolution slip patches. Finally, we apply the proposed regularization method to study the slip distribution of the 2011 Mw 9.0 Tohoku earthquake. The retrieved slip distribution is less smooth and more detailed than the one retrieved with the uniform regularization method, and is closer to the existing slip model from joint inversion of the geodetic and seismic data.
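The contrast between uniform and adaptive (per-parameter) regularization can be sketched with a minimal damped-least-squares example (the smoothing kernel, "slip" model, and column-energy weighting rule below are hypothetical illustrations, not the paper's WTLS algorithm):

```python
import numpy as np

# Sketch of per-parameter (adaptive) vs uniform Tikhonov damping for a
# slip-inversion-like linear problem d = G m + noise.
rng = np.random.default_rng(0)
n = 40
x = np.linspace(0.0, 1.0, n)
G = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)   # smoothing kernel
m_true = np.exp(-((x - 0.3) ** 2) / 0.005)             # one "slip" patch
d = G @ m_true + 1e-3 * rng.standard_normal(n)

def damped_lsq(G, d, w):
    """Solve (G^T G + diag(w)) m = G^T d; w holds per-parameter damping."""
    return np.linalg.solve(G.T @ G + np.diag(w), G.T @ d)

m_uniform = damped_lsq(G, d, np.full(n, 1e-2))          # one damping for all
# Hypothetical adaptive rule: damp weakly resolved parameters (low column
# energy in G) more strongly; whether this helps depends on the problem.
col_energy = np.sum(G ** 2, axis=0)
m_adaptive = damped_lsq(G, d, 1e-2 * col_energy.max() / col_energy)

mse_uniform = np.mean((m_uniform - m_true) ** 2)
mse_adaptive = np.mean((m_adaptive - m_true) ** 2)
```

The paper's point is that choosing `w` adaptively, and accounting for errors in the regularization matrix itself via WTLS, can reduce the MSE relative to the single-scalar damping shown here.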
NASA Astrophysics Data System (ADS)
Chun, Tiejun; Zhu, Deqing; Pan, Jian; He, Zhen
2014-06-01
Recovery of alumina from magnetic separation tailings of red mud has been investigated by Na2CO3 solution leaching. X-ray diffraction (XRD) results show that most of the alumina is present as 12CaO·7Al2O3 and CaO·Al2O3 in the magnetic separation tailings. The shrinking core model was employed to describe the leaching kinetics. The results show that the calculated activation energy of 8.31 kJ/mol is characteristic of an internal diffusion-controlled process. The kinetic equation can be used to describe the leaching process. The effects of Na2CO3 concentration, liquid-to-solid ratio, and particle size on recovery of Al2O3 were examined.
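The Arrhenius step behind an activation energy such as the reported 8.31 kJ/mol can be sketched as follows (the temperatures, pre-exponential factor, and rate constants here are hypothetical; only the fitting procedure is illustrated):

```python
import numpy as np

R = 8.314                                    # gas constant, J/(mol*K)
Ea = 8.31e3                                  # activation energy from the abstract, J/mol
A = 2.0e-2                                   # hypothetical pre-exponential factor
T = np.array([333.0, 353.0, 373.0, 393.0])   # hypothetical leaching temperatures, K

# Shrinking-core rate constants k(T) follow Arrhenius: k = A*exp(-Ea/(R*T)),
# so ln k is linear in 1/T with slope -Ea/R.
k = A * np.exp(-Ea / (R * T))
slope, _ = np.polyfit(1.0 / T, np.log(k), 1)
Ea_fit = -slope * R                          # recovered activation energy, J/mol
```

An activation energy this low (below roughly 12 kJ/mol) is the usual diagnostic for diffusion control in shrinking-core analyses, consistent with the abstract's conclusion.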
Adaptive mesh strategies for the spectral element method
NASA Technical Reports Server (NTRS)
Mavriplis, Catherine
1992-01-01
An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
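A toy version of indicator-driven refinement of the kind described can be sketched in one dimension (a hypothetical finite-difference jump indicator on a sharp-gradient profile, not the paper's spectral-element error estimators):

```python
import numpy as np

def refine(x, u, frac=0.5):
    """Bisect cells whose solution jump exceeds frac * (largest jump)."""
    jump = np.abs(np.diff(u))
    flagged = jump > frac * jump.max()
    new_x = [x[0]]
    for i in range(len(x) - 1):
        if flagged[i]:                       # insert midpoint of flagged cell
            new_x.append(0.5 * (x[i] + x[i + 1]))
        new_x.append(x[i + 1])
    return np.array(new_x)

x = np.linspace(-1.0, 1.0, 21)
u = np.tanh(20.0 * x)                        # Burgers-like sharp gradient at x = 0
x_refined = refine(x, u)                     # extra points cluster near the gradient
```

Real adaptive solvers replace this crude jump indicator with error estimators (spectral decay, residuals) and also coarsen where resolution is excessive, as the abstract notes.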
Solution of plane cascade flow using improved surface singularity methods
NASA Technical Reports Server (NTRS)
Mcfarland, E. R.
1981-01-01
A solution method has been developed for calculating compressible inviscid flow through a linear cascade of arbitrary blade shapes. The method uses advanced surface singularity formulations which were adapted from those found in current external flow analyses. The resulting solution technique provides a fast flexible calculation for flows through turbomachinery blade rows. The solution method and some examples of the method's capabilities are presented.
Huang, Liping; Wang, Qiang; Jiang, Linjie; Zhou, Peng; Quan, Xie; Logan, Bruce E
2015-08-18
Bioelectrochemical systems (BESs) have been shown to be useful in removing individual metals from solutions, but effective treatment of electroplating and mining wastewaters requires simultaneous removal of several metals in a single system. To develop multiple-reactor BESs for metals removal, biocathodes were first individually acclimated to three different metals, using microbial fuel cells for Cr(VI) or Cu(II), as these metals have relatively high redox potentials, and microbial electrolysis cells for reducing Cd(II), as this metal has a more negative redox potential. The BESs were then acclimated to low concentrations of a mixture of metals, followed by more elevated concentrations. This procedure resulted in complete and selective metal reduction at rates of 1.24 ± 0.01 mg/L-h for Cr(VI), 1.07 ± 0.01 mg/L-h for Cu(II), and 0.98 ± 0.01 mg/L-h for Cd(II). These reduction rates were larger than those of the non-adapted controls by factors of 2.5 for Cr(VI), 2.9 for Cu(II), and 3.6 for Cd(II). This adaptive procedure produced less diverse microbial communities and changes in the microbial communities at the phylum and genus levels. These results demonstrated that bacterial communities can adaptively evolve to utilize solutions containing mixtures of metals, providing a strategy for remediating wastewaters containing Cr(VI), Cu(II), and Cd(II). PMID:26175284
Applying Adaptive Variables in Computerised Adaptive Testing
ERIC Educational Resources Information Center
Triantafillou, Evangelos; Georgiadou, Elissavet; Economides, Anastasios A.
2007-01-01
Current research in computerised adaptive testing (CAT) focuses on applications, in small and large scale, that address self assessment, training, employment, teacher professional development for schools, industry, military, assessment of non-cognitive skills, etc. Dynamic item generation tools and automated scoring of complex, constructed…
Physiologic adaptation to space - Space adaptation syndrome
NASA Technical Reports Server (NTRS)
Vanderploeg, J. M.
1985-01-01
The adaptive changes of the neurovestibular system to microgravity, which result in space motion sickness (SMS), are studied. A list of symptoms, which range from vomiting to drowsiness, is provided. The two patterns of symptom development, rapid and gradual, and the duration of the symptoms are described. The concept of sensory conflict and rearrangements to explain SMS is being investigated.
Dynamic alarm response procedures
Martin, J.; Gordon, P.; Fitch, K.
2006-07-01
The Dynamic Alarm Response Procedure (DARP) system provides a robust, Web-based alternative to existing hard-copy alarm response procedures. This paperless system improves performance by eliminating time wasted looking up paper procedures by number, looking up plant process values and equipment and component status at graphical displays or panels, and maintaining the procedures. Because it is a Web-based system, it is platform independent. DARPs can be served from any Web server that supports CGI scripting, such as Apache®, IIS®, TclHTTPD, and others. DARP pages can be viewed in any Web browser that supports JavaScript and Scalable Vector Graphics (SVG), such as Netscape®, Microsoft Internet Explorer®, Mozilla Firefox®, Opera®, and others. (authors)
Definition of "experimental procedures".
2009-11-01
This Practice Committee Opinion provides a revised definition of "experimental procedures." This version replaces the document "Definition of Experimental" that was published most recently in November 2008. PMID:19836733
Common Interventional Radiology Procedures
... of common interventional techniques is below. Angiography An X-ray exam of the ... into the vertebra.
... Accessory pathway, such as Wolff-Parkinson-White Syndrome Atrial fibrillation and atrial flutter Ventricular tachycardia ... consensus statement on catheter and surgical ablation of atrial fibrillation: ... for personnel, policy, procedures and follow-up. ...
NASA Technical Reports Server (NTRS)
Coirier, William John
1994-01-01
A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a
Bretland, P M
1988-01-01
The existing National Health Service financial system makes comprehensive costing of any service very difficult. A method of costing using modern commercial methods has been devised, classifying costs into variable, semi-variable and fixed and using the principle of overhead absorption for expenditure not readily allocated to individual procedures. It proved possible to establish a cost spectrum over the financial year 1984-85. The cheapest examinations were plain radiographs outside normal working hours, followed by plain radiographs, ultrasound, special procedures, fluoroscopy, nuclear medicine, angiography and angiographic interventional procedures in normal working hours. This differs from some published figures, particularly those in the Körner report. There was some overlap between fluoroscopic interventional and the cheaper nuclear medicine procedures, and between some of the more expensive nuclear medicine procedures and the cheaper angiographic ones. Only angiographic and the few more expensive nuclear medicine procedures exceed the cost of the inpatient day. The total cost of the imaging service to the district was about 4% of total hospital expenditure. It is shown that where more procedures are undertaken, the semi-variable and fixed (including capital) elements of the cost decrease (and vice versa) so that careful study is required to assess the value of proposed economies. The method is initially time-consuming and requires a computer system with 512 Kb of memory, but once the basic costing system is established in a department, detailed financial monitoring should become practicable. The necessity for a standard comprehensive costing procedure of this nature, based on sound cost accounting principles, appears inescapable, particularly in view of its potential application to management budgeting. PMID:3349241
Retinal Imaging: Adaptive Optics
NASA Astrophysics Data System (ADS)
Goncharov, A. S.; Iroshnikov, N. G.; Larichev, Andrey V.
This chapter describes several factors influencing the performance of ophthalmic diagnostic systems with adaptive optics compensation of human eye aberration. Particular attention is paid to speckle modulation, temporal behavior of aberrations, and anisoplanatic effects. The implementation of a fundus camera with adaptive optics is considered.
Uncertainty in adaptive capacity
NASA Astrophysics Data System (ADS)
Adger, W. Neil; Vincent, Katharine
2005-03-01
The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. To cite this article: W.N. Adger, K. Vincent, C. R. Geoscience 337 (2005).
Water Resource Adaptation Program
The Water Resource Adaptation Program (WRAP) contributes to the U.S. Environmental Protection Agency’s (U.S. EPA) efforts to provide water resource managers and decision makers with the tools needed to adapt water resources to demographic and economic development, and future clim...
Adaptive Sampling Proxy Application
Energy Science and Technology Software Center (ESTSC)
2012-10-22
ASPA is an implementation of an adaptive sampling algorithm [1-3], which is used to reduce the computational expense of computer simulations that couple disparate physical scales. The purpose of ASPA is to encapsulate the algorithms required for adaptive sampling independently from any specific application, so that alternative algorithms and programming models for exascale computers can be investigated more easily.
Szu, H.; Hsu, C.
1996-12-31
Human sensor systems (HSS) may be approximately described as an adaptive or self-learning version of the Wavelet Transform (WT), capable of learning from several input-output associative pairs of suitable mother wavelets. Such an Adaptive WT (AWT) is a redundant combination of mother wavelets used to either represent or classify inputs.
Neural Adaptation Effects in Conceptual Processing
Marino, Barbara F. M.; Borghi, Anna M.; Gemmi, Luca; Cacciari, Cristina; Riggio, Lucia
2015-01-01
We investigated the conceptual processing of nouns referring to objects characterized by a highly typical color and orientation. We used a go/no-go task in which we asked participants to categorize each noun as referring or not to natural entities (e.g., animals) after a selective adaptation of color-edge neurons in the posterior LV4 region of the visual cortex was induced by means of a McCollough effect procedure. This manipulation affected categorization: the green-vertical adaptation led to slower responses than the green-horizontal adaptation, regardless of the specific color and orientation of the to-be-categorized noun. This result suggests that the conceptual processing of natural entities may entail the activation of modality-specific neural channels with weights proportional to the reliability of the signals produced by these channels during actual perception. This finding is discussed with reference to the debate about the grounded cognition view. PMID:26264031
Adaptive management for a turbulent future.
Allen, Craig R; Fontaine, Joseph J; Pope, Kevin L; Garmestani, Ahjond S
2011-05-01
The challenges that face humanity today differ from the past because as the scale of human influence has increased, our biggest challenges have become global in nature, and formerly local problems that could be addressed by shifting populations or switching resources, now aggregate (i.e., "scale up") limiting potential management options. Adaptive management is an approach to natural resource management that emphasizes learning through management based on the philosophy that knowledge is incomplete and much of what we think we know is actually wrong. Adaptive management has explicit structure, including careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. It is evident that adaptive management has matured, but it has also reached a crossroads. Practitioners and scientists have developed adaptive management and structured decision making techniques, and mathematicians have developed methods to reduce the uncertainties encountered in resource management, yet there continues to be misapplication of the method and misunderstanding of its purpose. Ironically, the confusion over the term "adaptive management" may stem from the flexibility inherent in the approach, which has resulted in multiple interpretations of "adaptive management" that fall along a continuum of complexity and a priori design. Adaptive management is not a panacea for the navigation of 'wicked problems' as it does not produce easy answers, and is only appropriate in a subset of natural resource management problems where both uncertainty and controllability are high. Nonetheless, the conceptual underpinnings of adaptive management are simple; there will always be inherent uncertainty and unpredictability in the dynamics and behavior of complex social-ecological systems, but management decisions must still be made, and whenever possible, we should incorporate
Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature
NASA Technical Reports Server (NTRS)
Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)
2000-01-01
This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever numerical quadrature of sufficient order is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme replaces the usual exact single-state mean-value linearization of the flux divergence, typically used for the Euler equations of gasdynamics, with an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier, so that correct weak solutions are still obtained in the limit of mesh refinement. We then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and to general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities, together with multi-level mesh refinement, are provided to verify the analysis.
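The adaptive quadrature step can be illustrated generically. The sketch below is a standard recursive adaptive Simpson rule with a Richardson-style error estimate, offered only as a minimal stand-in for the (unspecified) quadrature procedure used in the paper; the tolerance and test integrand are illustrative choices.

```python
import math

def adaptive_simpson(f, a, b, tol=1e-8):
    """Recursive adaptive Simpson quadrature: panels are bisected until
    the local Richardson-style error estimate falls below the
    (distributed) tolerance."""
    def simpson(fa, fm, fb, h):
        return h / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        flm, frm = f(0.5 * (a + m)), f(0.5 * (m + b))
        left = simpson(fa, flm, fm, m - a)
        right = simpson(fm, frm, fb, b - m)
        if abs(left + right - whole) <= 15.0 * tol:   # accept this panel
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, 0.5 * tol)
                + recurse(m, b, fm, frm, fb, right, 0.5 * tol))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, b - a), tol)

val = adaptive_simpson(math.sin, 0.0, math.pi)  # exact value is 2
```

The key property, shared with any adaptive quadrature, is that function evaluations concentrate where the integrand is hardest to resolve, which is what allows the nonconservative integral form to be approximated to the accuracy the analysis requires.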
Samuel, A G; Kat, D
1998-04-01
Two experiments were used to test whether selective adaptation for speech occurs automatically or instead requires attentional resources. A control condition demonstrated the usual large identification shifts caused by repeatedly presenting an adapting sound (/wa/, with listeners identifying members of a /ba/-/wa/ test series). Two types of distractor tasks were used: (1) Subjects did a rapid series of arithmetic problems during the adaptation periods (Experiments 1 and 2), or (2) they made a series of rhyming judgments, requiring phonetic coding (Experiment 2). A control experiment (Experiment 3) demonstrated that these tasks normally impose a heavy attentional cost on phonetic processing. Despite this, for both experimental conditions, the observed adaptation effect was just as large as in the control condition. This result indicates that adaptation is automatic, operating at an early, preattentive level. The implications of these results for current models of speech perception are discussed. PMID:9599999
Adaptive Peer Sampling with Newscast
NASA Astrophysics Data System (ADS)
Tölgyesi, Norbert; Jelasity, Márk
The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate, then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates, which can result in very poor random sampling, especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters: without involving failure detectors, nodes passively monitor local protocol events and use them as feedback in a local control loop for self-tuning the protocol parameters. The proposed solution is evaluated by simulation experiments.
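The cache-exchange idea behind gossip-based peer sampling can be sketched with a toy Newscast-like protocol. This is not the authors' implementation: the cache size, the logical clocks standing in for timestamps, and the ring bootstrap are all illustrative assumptions.

```python
import random

CACHE_SIZE = 5  # illustrative cache size, not Newscast's actual parameter

class Node:
    """Toy Newscast-style node: a small cache of peer descriptors."""
    def __init__(self, node_id):
        self.id = node_id
        self.cache = {}   # peer_id -> logical timestamp (freshness)
        self.clock = 0    # stands in for Newscast's wall-clock timestamps

    def sample_peer(self):
        # The peer sampling service call: a random peer from the cache.
        return random.choice(sorted(self.cache)) if self.cache else None

    def gossip_with(self, other):
        # Exchange caches, inject fresh self-descriptors, and keep only
        # the freshest CACHE_SIZE entries on each side.
        self.clock += 1
        other.clock += 1
        merged = dict(self.cache)
        for pid, ts in other.cache.items():
            merged[pid] = max(merged.get(pid, -1), ts)
        merged[self.id] = self.clock
        merged[other.id] = other.clock
        for node in (self, other):
            entries = sorted(((ts, pid) for pid, ts in merged.items()
                              if pid != node.id), reverse=True)
            node.cache = {pid: ts for ts, pid in entries[:CACHE_SIZE]}

# Bootstrap a ring of 20 nodes, then run a few gossip rounds.
random.seed(1)
nodes = [Node(i) for i in range(20)]
for i, n in enumerate(nodes):
    n.cache[(i + 1) % 20] = 0
for _ in range(30):
    for n in nodes:
        n.gossip_with(nodes[n.sample_peer()])
```

The adaptive mechanism described in the abstract would sit on top of such a loop, tuning parameters like the gossip rate from passively observed protocol events.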
Adaptive optical interconnects: the ADDAPT project
NASA Astrophysics Data System (ADS)
Henker, Ronny; Pliva, Jan; Khafaji, Mahdi; Ellinger, Frank; Toifl, Thomas; Offrein, Bert; Cevrero, Alessandro; Oezkaya, Ilter; Seifried, Marc; Ledentsov, Nikolay; Kropp, Joerg-R.; Shchukin, Vitaly; Zoldak, Martin; Halmo, Leos; Turkiewicz, Jaroslaw; Meredith, Wyn; Eddie, Iain; Georgiades, Michael; Charalambides, Savvas; Duis, Jeroen; van Leeuwen, Pieter
2015-09-01
Existing optical networks are driven by dynamic user and application demands but operate statically at their maximum performance. Thus, optical links do not offer much adaptability and are not very energy-efficient. In this paper a novel approach is proposed that implements performance and power adaptivity from the system level down to the optical device, electrical circuit and transistor levels. Depending on the actual data load, the number of activated link paths and individual device parameters such as bandwidth, clock rate, modulation format and gain are adapted to enable lowering the components' supply power. This enables flexible, energy-efficient optical transmission links which pave the way for massive reductions of CO2 emission and operating costs in data center and high performance computing applications. Within the FP7 research project Adaptive Data and Power Aware Transceivers for Optical Communications (ADDAPT), dynamic high-speed energy-efficient transceiver subsystems are developed for short-range optical interconnects, taking up new adaptive technologies and methods. The research of eight partners from industry, research and education spanning seven European countries includes the investigation of several adaptive control types and algorithms, the development of a full transceiver system, the design and fabrication of optical components and integrated circuits, as well as the development of high-speed, low-loss packaging solutions. This paper describes and discusses the idea of ADDAPT and provides an overview of the latest research results in this field.
NASA Astrophysics Data System (ADS)
Morris, Simon Conway
2003-09-01
Life's Solution builds a persuasive case for the predictability of evolutionary outcomes. The case rests on a remarkable compilation of examples of convergent evolution, in which two or more lineages have independently evolved similar structures and functions. The examples range from the aerodynamics of hovering moths and hummingbirds to the use of silk by spiders and some insects to capture prey. Going against the grain of Darwinian orthodoxy, this book is a must read for anyone grappling with the meaning of evolution and our place in the Universe. Simon Conway Morris is the Ad Hominem Professor in the Earth Science Department at the University of Cambridge and a Fellow of St. John's College and the Royal Society. His research focuses on the study of constraints on evolution, and the historical processes that lead to the emergence of complexity, especially with respect to the construction of the major animal body plans in the Cambrian explosion. His previous books include The Crucible of Creation (Getty Center for Education in the Arts, 1999); he is also co-author of Solnhofen (Cambridge, 1990). Hb ISBN (2003) 0-521-82704-3
Alloy solution hardening with solute pairs
Mitchell, John W.
1976-08-24
Solution hardened alloys are formed by using at least two solutes which form associated solute pairs in the solvent metal lattice. Copper containing equal atomic percentages of aluminum and palladium is an example.
Vortical Flow Prediction using an Adaptive Unstructured Grid Method. Chapter 11
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2009-01-01
A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.
Mahillo-Isla, R; González-Morales, M J; Dehesa-Martínez, C
2011-06-01
The slowly varying envelope approximation is applied to the radiation problems of the Helmholtz equation with a planar single-layer and dipolar sources. The analyses of such problems provide procedures to recover solutions of the Helmholtz equation based on the evaluation of solutions of the parabolic wave equation at a given plane. Furthermore, the conditions that must be fulfilled to apply each procedure are also discussed. The relations to previous work are given as well. PMID:21643384
Vectorizable algorithms for adaptive schemes for rapid analysis of SSME flows
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley
1987-01-01
An initial study into vectorizable algorithms for use in adaptive schemes for various types of boundary value problems is described. The focus is on two key aspects of adaptive computational methods which are crucial in the use of such methods (for complex flow simulations such as those in the Space Shuttle Main Engine): the adaptive scheme itself and the applicability of element-by-element matrix computations in a vectorizable format for rapid calculations in adaptive mesh procedures.
Dynamical Adaptation in Photoreceptors
Clark, Damon A.; Benichou, Raphael; Meister, Markus; Azeredo da Silveira, Rava
2013-01-01
Adaptation is at the heart of sensation, and nowhere is it more salient than in early visual processing. Light adaptation in photoreceptors is doubly dynamical: it depends upon the temporal structure of the input, and it affects the temporal structure of the response. We introduce a non-linear dynamical adaptation model of photoreceptors. It is simple enough that it can be solved exactly and simulated with ease; analytical and numerical approaches combined provide both intuition on the behavior of dynamical adaptation and quantitative results to be compared with data. Yet the model is rich enough to capture intricate phenomenology. First, we show that it reproduces the known phenomenology of light response and short-term adaptation. Second, we present new recordings and demonstrate that the model reproduces cone response with great precision. Third, we derive a number of predictions on the response of photoreceptors to sophisticated stimuli such as periodic inputs, various forms of flickering inputs, and natural inputs. In particular, we demonstrate that photoreceptors undergo rapid adaptation of response gain and time scale, over ∼300 ms, i.e., over the time scale of the response itself, and we confirm this prediction with data. For natural inputs, this fast adaptation can modulate the response gain more than tenfold and is hence physiologically relevant. PMID:24244119
Advance crew procedures development techniques: Procedures generation program requirements document
NASA Technical Reports Server (NTRS)
Arbet, J. D.; Benbow, R. L.; Hawk, M. L.
1974-01-01
The Procedures Generation Program (PGP) is described as an automated crew procedures generation and performance monitoring system. Computer software requirements to be implemented in PGP for the Advanced Crew Procedures Development Techniques are outlined.
Mobile Energy Laboratory Procedures
Armstrong, P.R.; Batishko, C.R.; Dittmer, A.L.; Hadley, D.L.; Stoops, J.L.
1993-09-01
Pacific Northwest Laboratory (PNL) has been tasked to plan and implement a framework for measuring and analyzing the efficiency of on-site energy conversion, distribution, and end-use application on federal facilities as part of its overall technical support to the US Department of Energy (DOE) Federal Energy Management Program (FEMP). The Mobile Energy Laboratory (MEL) Procedures establish guidelines for specific activities performed by PNL staff. PNL provided sophisticated energy monitoring, auditing, and analysis equipment for on-site evaluation of energy use efficiency. Specially trained engineers and technicians were provided to conduct tests in a safe and efficient manner with the assistance of host facility staff and contractors. Reports were produced to describe test procedures, results, and suggested courses of action. These reports may be used to justify changes in operating procedures, maintenance efforts, system designs, or energy-using equipment. The MEL capabilities can subsequently be used to assess the results of energy conservation projects. These procedures recognize the need for centralized MEL administration, test procedure development, operator training, and technical oversight. This need is evidenced by increasing requests for MEL use and the economies available by having trained, full-time MEL operators and near continuous MEL operation. DOE will assign new equipment and upgrade existing equipment as new capabilities are developed. The equipment and trained technicians will be made available to federal agencies that provide funding for the direct costs associated with MEL use.
Procedural learning and dyslexia.
Nicolson, R I; Fawcett, A J; Brookes, R L; Needle, J
2010-08-01
Three major 'neural systems', specialized for different types of information processing, are the sensory, declarative, and procedural systems. It has been proposed (Trends Neurosci., 30(4), 135-141) that dyslexia may be attributable to impaired function in the procedural system together with intact declarative function. We provide a brief overview of the increasing evidence relating to the hypothesis, noting that the framework involves two main claims: first, that 'neural systems' provides a productive level of description, avoiding the underspecificity of cognitive descriptions and the overspecificity of brain structural accounts; and second, that a distinctive feature of procedural learning is its extended time course, covering from minutes to months. In this article, we focus on the second claim. Three studies (speeded single-word reading, long-term response learning, and overnight skill consolidation) are reviewed which together provide clear evidence of difficulties in procedural learning for individuals with dyslexia, even when the tasks are outside the literacy domain. The educational implications of the results are then discussed, and in particular the potential difficulties that impaired overnight procedural consolidation would entail. It is proposed that response to intervention could be better predicted if diagnostic tests on the different forms of learning were first undertaken. PMID:20680991
NASA Astrophysics Data System (ADS)
Falugi, P.; Olaru, S.; Dumur, D.
2010-08-01
This article proposes an explicit robust predictive control solution based on linear matrix inequalities (LMIs). The considered predictive control strategy uses different local descriptions of the system dynamics and uncertainties and thus allows the handling of less conservative input constraints. The computed control law guarantees constraint satisfaction and asymptotic stability. The technique is effective for a class of nonlinear systems embedded into polytopic models. A detailed discussion of the procedures which adapt the partition of the state space is presented. For practical implementation, the construction of suitable (explicit) descriptions of the control law is described by means of concrete algorithms.
Percutaneous urinary procedures - discharge
... x 4-inch gauze sponges, tape, connecting tube, hydrogen peroxide, and warm water (plus a clean container to ... cotton swab soaked with a solution of half hydrogen peroxide and half warm water. Pat it dry with ...
An adaptive pseudospectral method for discontinuous problems
NASA Technical Reports Server (NTRS)
Augenbaum, Jeffrey M.
1988-01-01
The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep-gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep-gradient solutions of hyperbolic PDEs by approximation in suitably chosen coordinate systems. Continuous, high-gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.
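The coordinate-mapping idea can be demonstrated on a small example (not the paper's method): a steep tanh profile is interpolated at Chebyshev points, once directly in the physical coordinate and once in a computational coordinate defined by a sinh mapping that clusters points near the gradient. The mapping strength `ALPHA`, the test function, and the point count are illustrative assumptions.

```python
import math

def cheb_nodes(n):
    # Chebyshev points of the second kind on [-1, 1]
    return [math.cos(math.pi * j / n) for j in range(n + 1)]

def bary_weights(n):
    # Barycentric weights for Chebyshev points of the second kind
    w = [(-1.0) ** j for j in range(n + 1)]
    w[0] *= 0.5
    w[n] *= 0.5
    return w

def bary_interp(xs, ys, w, x):
    # Standard barycentric Lagrange interpolation
    num = den = 0.0
    for xj, yj, wj in zip(xs, ys, w):
        if x == xj:
            return yj
        t = wj / (x - xj)
        num += t * yj
        den += t
    return num / den

ALPHA = 6.0                  # mapping strength (illustrative choice)
def to_phys(xi):             # sinh map: clusters points near x = 0
    return math.sinh(ALPHA * xi) / math.sinh(ALPHA)
def to_comp(x):              # inverse map, physical -> computational
    return math.asinh(x * math.sinh(ALPHA)) / ALPHA

f = lambda x: math.tanh(20.0 * x)   # steep-gradient test function
n = 32
xs, w = cheb_nodes(n), bary_weights(n)
ys_phys = [f(x) for x in xs]             # collocation directly in x
ys_map = [f(to_phys(xi)) for xi in xs]   # collocation in the mapped coordinate

pts = [i / 500.0 - 1.0 for i in range(1001)]
err_phys = max(abs(bary_interp(xs, ys_phys, w, x) - f(x)) for x in pts)
err_map = max(abs(bary_interp(xs, ys_map, w, to_comp(x)) - f(x)) for x in pts)
# err_map falls far below err_phys at the same number of collocation points
```

The mapped approximation wins because the composed function is smooth (slowly varying) in the computational coordinate, which is exactly the mechanism the abstract describes.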
Adaptive network countermeasures.
McClelland-Bane, Randy; Van Randwyk, Jamie A.; Carathimas, Anthony G.; Thomas, Eric D.
2003-10-01
This report describes the results of a two-year LDRD funded by the Differentiating Technologies investment area. The project investigated the use of countermeasures in protecting computer networks as well as how current countermeasures could be changed in order to adapt with both evolving networks and evolving attackers. The work involved collaboration between Sandia employees and students in the Sandia - California Center for Cyber Defenders (CCD) program. We include an explanation of the need for adaptive countermeasures, a description of the architecture we designed to provide adaptive countermeasures, and evaluations of the system.
[Adaptive optics for ophthalmology].
Saleh, M
2016-04-01
Adaptive optics is a technology enhancing the visual performance of an optical system by correcting its optical aberrations. Adaptive optics have already enabled several breakthroughs in the field of visual sciences, such as improvement of visual acuity in normal and diseased eyes beyond physiologic limits, and the correction of presbyopia. Adaptive optics technology also provides high-resolution, in vivo imaging of the retina that may eventually help to detect the onset of retinal conditions at an early stage and provide better assessment of treatment efficacy. PMID:27019970
Formulation of numerical procedures for dynamic analysis of spinning structures
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1986-01-01
The paper describes recently developed numerical algorithms that are useful for the solution of the free vibration problem of spinning structures. First, a generalized procedure for the computation of nodal centrifugal forces in a finite element owing to any specified spin rate is derived in detail. This is followed by a description of an improved eigenproblem solution procedure that proves to be economical for the free vibration analysis of spinning structures. Numerical results are also presented which indicate the efficacy of the currently developed procedures.
Environmental Test Screening Procedure
NASA Technical Reports Server (NTRS)
Zeidler, Janet
2000-01-01
This procedure describes the methods to be used for environmental stress screening (ESS) of the Lightning Mapper Sensor (LMS) lens assembly. Unless otherwise specified, the procedures shall be completed in the order listed, prior to performance of the Acceptance Test Procedure (ATP). The first unit, S/N 001, will be subjected to the Qualification Vibration Levels, while the remainder will be tested at the Operational Level. Prior to ESS, all units will undergo Pre-ESS Functional Testing that includes measuring the on-axis and plus or minus 0.95 full field Modulation Transfer Function and Back Focal Length. Next, all units will undergo ESS testing, and then Acceptance testing per PR 460.
Reasoning about procedural knowledge
NASA Technical Reports Server (NTRS)
Georgeff, M. P.
1985-01-01
A crucial aspect of automated reasoning about space operations is that knowledge of the problem domain is often procedural in nature - that is, the knowledge is often in the form of sequences of actions or procedures for achieving given goals or reacting to certain situations. In this paper a system is described that explicitly represents and reasons about procedural knowledge. The knowledge representation used is sufficiently rich to describe the effects of arbitrary sequences of tests and actions, and the inference mechanism provides a means for directly using this knowledge to reach desired operational goals. Furthermore, the representation has a declarative semantics that provides for incremental changes to the system, rich explanatory capabilities, and verifiability. The approach also provides a mechanism for reasoning about the use of this knowledge, thus enabling the system to choose effectively between alternative courses of action.
NASA Technical Reports Server (NTRS)
Goldsack, Stephen J.; Holzbach-Valero, A. A.; Waldrop, Raymond S.; Volz, Richard A.
1991-01-01
This paper describes how the main features of the proposed Ada language extensions intended to support distribution, offered as possible solutions for Ada9X, can be implemented by transformation into standard Ada83. We start by summarizing the features proposed in the paper (Gargaro et al., 1990) that constitutes the definition of the extensions. For convenience we have called the language in its modified form AdaPT, which might be interpreted as Ada with partitions. These features were carefully chosen to provide support for the construction of executable modules for execution in nodes of a network of loosely coupled computers, flexibly configurable for different network architectures, for recovery following failure, and for adapting to mode changes. The intention in their design was to provide extensions which would not impact adversely on the normal use of Ada and would fit well in style and feel with the existing standard. We begin by summarizing the features introduced in AdaPT.
Protocol independent adaptive route update for VANET.
Rasheed, Asim; Ajmal, Sana; Qayyum, Amir
2014-01-01
High relative node velocity and high active node density have presented challenges to existing routing approaches within highly scaled ad hoc wireless networks, such as Vehicular Ad hoc Networks (VANET). Efficient routing requires finding optimum route with minimum delay, updating it on availability of a better one, and repairing it on link breakages. Current routing protocols are generally focused on finding and maintaining an efficient route, with very less emphasis on route update. Adaptive route update usually becomes impractical for dense networks due to large routing overheads. This paper presents an adaptive route update approach which can provide solution for any baseline routing protocol. The proposed adaptation eliminates the classification of reactive and proactive by categorizing them as logical conditions to find and update the route. PMID:24723807
PROCESS OF ELIMINATING HYDROGEN PEROXIDE IN SOLUTIONS CONTAINING PLUTONIUM VALUES
Barrick, J.G.; Fries, B.A.
1960-09-27
A procedure is given for peroxide precipitation processes for separating and recovering plutonium values contained in an aqueous solution. When plutonium peroxide is precipitated from an aqueous solution, the supernatant contains appreciable quantities of plutonium and peroxide. It is desirable to process this solution further to recover the plutonium contained therein, but the presence of the peroxide introduces difficulties; the residual hydrogen peroxide in the supernatant solution is therefore eliminated by adding a nitrite or a sulfite to the solution.
Adaptive Mesh Refinement for Microelectronic Device Design
NASA Technical Reports Server (NTRS)
Cwik, Tom; Lou, John; Norton, Charles
1999-01-01
Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchal elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of
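The h-refinement strategy described above can be sketched in one dimension. The error indicator below (midpoint deviation from the cell's linear interpolant) is a crude stand-in for the a posteriori estimators the text mentions; the test function and tolerance are illustrative assumptions.

```python
import math

def refine(f, a, b, tol=1e-3, max_depth=20):
    """1-D h-refinement sketch: bisect a cell whenever the midpoint value
    of f deviates from the cell's linear interpolant by more than tol."""
    cells, mesh = [(a, b, 0)], []
    while cells:
        lo, hi, depth = cells.pop()
        mid = 0.5 * (lo + hi)
        # a posteriori indicator: interpolation error at the cell midpoint
        indicator = abs(f(mid) - 0.5 * (f(lo) + f(hi)))
        if indicator > tol and depth < max_depth:
            cells += [(lo, mid, depth + 1), (mid, hi, depth + 1)]
        else:
            mesh.append((lo, hi))
    return sorted(mesh)

# Cells concentrate around the steep front at x = 0.3 (mimicking a thin
# doped layer) and stay coarse where the field varies smoothly.
mesh = refine(lambda x: math.tanh(40.0 * (x - 0.3)), 0.0, 1.0)
widths = [hi - lo for lo, hi in mesh]
```

This mirrors the behavior the abstract calls for: fine resolution near rapid field variation, coarsening where variations smoothen, without uniformly refining the whole domain.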
Grid adaptation using chimera composite overlapping meshes
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1994-01-01
The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.
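The trilinear interpolation used to communicate data across overset-grid interfaces can be written down generically (this is the textbook formula, not the authors' implementation; the corner layout and test field are illustrative).

```python
def trilinear(corner, frac):
    """Interpolate inside a hexahedral donor cell: corner[i][j][k] holds
    the eight corner values, frac = (u, v, w) are the receiver point's
    local coordinates in [0, 1]^3 within the donor cell."""
    u, v, w = frac
    c00 = corner[0][0][0] * (1 - u) + corner[1][0][0] * u
    c10 = corner[0][1][0] * (1 - u) + corner[1][1][0] * u
    c01 = corner[0][0][1] * (1 - u) + corner[1][0][1] * u
    c11 = corner[0][1][1] * (1 - u) + corner[1][1][1] * u
    c0 = c00 * (1 - v) + c10 * v
    c1 = c01 * (1 - v) + c11 * v
    return c0 * (1 - w) + c1 * w

# Trilinear interpolation reproduces any linear field exactly, the usual
# sanity check for inter-grid communication routines.
field = lambda x, y, z: 2.0 * x + 3.0 * y - z
corner = [[[field(i, j, k) for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
value = trilinear(corner, (0.2, 0.5, 0.7))  # = field(0.2, 0.5, 0.7)
```

Because the eight weights sum to one, linear fields pass through the overset interface unchanged; smoother fields incur only the second-order interpolation error that the fine overset patch is meant to keep small.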
Grid adaptation using Chimera composite overlapping meshes
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1993-01-01
The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.
NASA Astrophysics Data System (ADS)
Nejadmalayeri, Alireza
The current work develops a wavelet-based adaptive variable fidelity approach that integrates Wavelet-based Direct Numerical Simulation (WDNS), Coherent Vortex Simulations (CVS), and Stochastic Coherent Adaptive Large Eddy Simulations (SCALES). The proposed methodology employs the notion of spatially and temporally varying wavelet thresholding combined with hierarchical wavelet-based turbulence modeling. The transition between WDNS, CVS, and SCALES regimes is achieved through two-way physics-based feedback between the modeled SGS dissipation (or other dynamically important physical quantity) and the spatial resolution. The feedback is based on spatio-temporal variation of the wavelet threshold, where the thresholding level is adjusted on the fly depending on the deviation of local significant SGS dissipation from the user-prescribed level. This strategy overcomes a major limitation of all previously existing wavelet-based multi-resolution schemes: the global thresholding criterion, which does not fully utilize the spatial/temporal intermittency of the turbulent flow. Hence, the aforementioned concept of physics-based spatially variable thresholding in the context of wavelet-based numerical techniques for solving PDEs is established. The procedure consists of tracking the wavelet thresholding factor within a Lagrangian frame by exploiting a Lagrangian Path-Line Diffusive Averaging approach based on either linear averaging along characteristics or direct solution of the evolution equation. This innovative technique represents a framework of continuously variable fidelity wavelet-based space/time/model-form adaptive multiscale methodology. This methodology has been tested and has provided very promising results on a benchmark with a time-varying user-prescribed level of SGS dissipation. In addition, a long-term effort to develop a novel parallel adaptive wavelet collocation method for numerical solution of PDEs has been completed during the course of the current work.
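The core idea of a spatially varying threshold can be illustrated in a few lines. The sketch below applies an elementwise threshold to a vector of wavelet-like coefficients; the names, signal, and threshold field are illustrative, not the paper's.

```python
import numpy as np

def variable_threshold(coeffs, eps):
    """Zero coefficients whose magnitude falls below a spatially varying
    threshold eps (same shape as coeffs); survivors are kept unchanged."""
    return np.where(np.abs(coeffs) >= eps, coeffs, 0.0)

coeffs = np.array([0.05, -0.4, 0.02, 0.9, -0.07])
# Tighter threshold (higher fidelity) in the middle, looser at the edges.
eps = np.array([0.1, 0.1, 0.01, 0.01, 0.1])
print(variable_threshold(coeffs, eps))  # keeps -0.4, 0.02, 0.9; zeros the rest
```

A global (scalar) threshold would have treated the small coefficient 0.02 the same everywhere; making eps a field is what lets the resolution adapt to local flow physics.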
Logarithmic Adaptive Quantization Projection for Audio Watermarking
NASA Astrophysics Data System (ADS)
Zhao, Xuemin; Guo, Yuhong; Liu, Jian; Yan, Yonghong; Fu, Qiang
In this paper, a logarithmic adaptive quantization projection (LAQP) algorithm for digital watermarking is proposed. Conventional quantization index modulation uses a fixed quantization step in the watermark embedding procedure, which leads to poor fidelity. Moreover, the conventional methods are sensitive to value-metric scaling attacks. The LAQP method combines the quantization projection scheme with a perceptual model. In comparison to some conventional quantization methods with a perceptual model, LAQP only needs to calculate the perceptual model in the embedding procedure, avoiding the decoding errors introduced by differences between the perceptual models used in the embedding and decoding procedures. Experimental results show that the proposed watermarking scheme maintains better fidelity and is robust against common signal processing attacks. More importantly, the proposed scheme is invariant to value-metric scaling attacks.
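For context, the fixed-step quantization index modulation baseline that LAQP improves upon can be sketched as follows; the function names and step size are illustrative, and this is the conventional scheme, not the paper's LAQP algorithm.

```python
import numpy as np

def qim_embed(x, bit, delta):
    """Embed one bit by snapping x onto one of two interleaved lattices of step delta."""
    offset = delta / 2.0 if bit else 0.0
    return np.round((x - offset) / delta) * delta + offset

def qim_decode(y, delta):
    """Decode by testing which lattice lies closer to the received value."""
    d0 = abs(y - qim_embed(y, 0, delta))
    d1 = abs(y - qim_embed(y, 1, delta))
    return 0 if d0 <= d1 else 1

x = 3.17                       # a host signal sample
for bit in (0, 1):
    y = qim_embed(x, bit, delta=0.5)
    assert qim_decode(y, delta=0.5) == bit
print("round-trip ok")
```

The weakness the abstract points out is visible here: scaling y by an unknown gain misaligns it with the fixed lattice, which is exactly what a logarithmic-domain, adaptively stepped quantizer is designed to tolerate.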
Kiely, Edward M; Spitz, Lewis
2015-10-01
The various stages of the separation are carefully planned, but despite this, variations may arise that change the schedule of the procedure. In general the operation commences on the side opposite the main procedure, and the twins are then turned for the remainder of the operation. Each type of conjoined twin is different, but broadly thoracopagus involves the hearts, omphalopagus involves the liver and small intestine, and ischiopagus involves the large intestine and genito-urinary system. Our results are presented together with interesting cases from which lessons have been learned. PMID:26382263
Asymptotic Linearity of Optimal Control Modification Adaptive Law with Analytical Stability Margins
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
Optimal control modification has been developed to improve the robustness of model-reference adaptive control. For systems with linear matched uncertainty, the optimal control modification adaptive law can be shown by a singular perturbation argument to possess an outer solution that exhibits a linear asymptotic property. Analytical expressions of phase and time delay margins for the outer solution can be obtained. Using the gradient projection operator, a free design parameter of the adaptive law can be selected to satisfy stability margins.
NASA Technical Reports Server (NTRS)
2001-01-01
REI Systems, Inc. developed a software solution that uses the Internet to eliminate the paperwork typically required to document and manage complex business processes. The data management solution, called Electronic Handbooks (EHBs), is presently used for the entire SBIR program process at NASA. The EHB-based system is ideal for programs and projects whose users are geographically distributed and are involved in complex management processes and procedures. EHBs provide flexible access control and increased communications while maintaining security for systems of all sizes. Through Internet Protocol-based access, user authentication and user-based access restrictions, role-based access control, and encryption/decryption, EHBs provide the level of security required for confidential data transfer. EHBs contain electronic forms and menus, which can be used in real time to execute the described processes. EHBs use standard word processors that generate ASCII HTML code to set up electronic forms that are viewed within a web browser. EHBs require no end-user software distribution, significantly reducing operating costs. Each interactive handbook simulates a hard-copy version containing chapters with descriptions of participants' roles in the online process.
Toddler test or procedure preparation
Preparing toddler for test/procedure; Test/procedure preparation - toddler; Preparing for a medical test or procedure - toddler ... Before the test, know that your child will probably cry. Even if you prepare, your child may feel some discomfort or ...
Preschooler test or procedure preparation
Preparing preschoolers for test/procedure; Test/procedure preparation - preschooler ... Preparing children for medical tests can reduce their distress. It can also make them less likely to cry and resist the procedure. Research shows that ...
Radwan, Jacek; Babik, Wiesław
2012-12-22
The amount and nature of genetic variation available to natural selection affect the rate, course and outcome of evolution. Consequently, the study of the genetic basis of adaptive evolutionary change has occupied biologists for decades, but progress has been hampered by the lack of resolution and the absence of a genome-level perspective. Technological advances in recent years should now allow us to answer many long-standing questions about the nature of adaptation. The data gathered so far are beginning to challenge some widespread views of the way in which natural selection operates at the genomic level. Papers in this Special Feature of Proceedings of the Royal Society B illustrate various aspects of the broad field of adaptation genomics. This introductory article sets up a context and, on the basis of a few selected examples, discusses how genomic data can advance our understanding of the process of adaptation. PMID:23097510
Adaptations, exaptations, and spandrels.
Buss, D M; Haselton, M G; Shackelford, T K; Bleske, A L; Wakefield, J C
1998-05-01
Adaptation and natural selection are central concepts in the emerging science of evolutionary psychology. Natural selection is the only known causal process capable of producing complex functional organic mechanisms. These adaptations, along with their incidental by-products and a residue of noise, comprise all forms of life. Recently, S. J. Gould (1991) proposed that exaptations and spandrels may be more important than adaptations for evolutionary psychology. These refer to features that did not originally arise for their current use but rather were co-opted for new purposes. He suggested that many important phenomena--such as art, language, commerce, and war--although evolutionary in origin, are incidental spandrels of the large human brain. The authors outline the conceptual and evidentiary standards that apply to adaptations, exaptations, and spandrels and discuss the relative utility of these concepts for psychological science. PMID:9612136
NASA Technical Reports Server (NTRS)
Wada, B.
1993-01-01
The term adaptive structures refers to a structural control approach in which sensors, actuators, electronics, materials, structures, structural concepts, and system-performance-validation strategies are integrated to achieve specific objectives.
Adaptive Management of Ecosystems
Adaptive management is an approach to natural resource management that emphasizes learning through management. As such, management may be treated as experiment, with replication, or management may be conducted in an iterative manner. Although the concept has resonated with many...
NASA Astrophysics Data System (ADS)
Allahverdyan, A. E.; Babajanyan, S. G.; Martirosyan, N. H.; Melkikh, A. V.
2016-07-01
A major limitation of many heat engines is that their functioning demands on-line control and/or an external fitting between the environmental parameters (e.g., temperatures of thermal baths) and internal parameters of the engine. We study a model for an adaptive heat engine, where—due to feedback from the functional part—the engine's structure adapts to given thermal baths. Hence, no on-line control and no external fitting are needed. The engine can employ unknown resources; it can also adapt to results of its own functioning that make the bath temperatures closer. We determine resources of adaptation and relate them to the prior information available about the environment.
NASA Astrophysics Data System (ADS)
Dow, Kirstin; Berkhout, Frans; Preston, Benjamin L.; Klein, Richard J. T.; Midgley, Guy; Shaw, M. Rebecca
2013-04-01
An actor-centered, risk-based approach to defining limits to social adaptation provides a useful analytic framing for identifying and anticipating these limits and informing debates over society's responses to climate change.
Rocketing into Adaptive Inquiry.
ERIC Educational Resources Information Center
Farenga, Stephen J.; Joyce, Beverly A.; Dowling, Thomas W.
2002-01-01
Defines adaptive inquiry and argues for employing this method which allows lessons to be shaped in response to student needs. Illustrates this idea by detailing an activity in which teams of students build rockets. (DDR)
Telescope Adaptive Optics Code
Phillion, D.
2005-07-28
The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.
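The FFT phase-screen generation mentioned above follows a standard recipe: color complex Gaussian noise by the square root of the Kolmogorov power spectrum and inverse-transform. The sketch below uses the common textbook constant 0.023 r0^(-5/3) f^(-11/3) and omits the low-order (Karhunen-Loeve) correction the code adds; the normalization convention and function name are assumptions, not the Telescope AO Code itself.

```python
import numpy as np

def kolmogorov_phase_screen(n, r0, dx, seed=0):
    """One n x n Kolmogorov phase screen (radians) via the FFT method."""
    rng = np.random.default_rng(seed)
    df = 1.0 / (n * dx)                                  # frequency grid spacing
    fx = np.fft.fftfreq(n, d=dx)
    f = np.hypot(*np.meshgrid(fx, fx, indexing="ij"))
    f[0, 0] = np.inf                                     # suppress the piston mode
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    screen = np.fft.ifft2(cn * np.sqrt(psd) * df) * n ** 2
    return np.real(screen)

screen = kolmogorov_phase_screen(n=64, r0=0.2, dx=0.05)
print(screen.shape)   # (64, 64); piston-free, so the screen mean is ~0
```

Because the FFT grid cannot represent frequencies below 1/(n*dx), pure FFT screens under-represent low-order aberrations, which is exactly why the code adds Karhunen-Loeve modes.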
Adaptive cuckoo search algorithm for unconstrained optimization.
Ong, Pauline
2014-01-01
Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of an adaptive step size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases. PMID:25298971
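The effect of an adaptive step size can be shown with a toy population search: the random-walk amplitude shrinks each generation, so early iterations explore and late iterations refine. The decay schedule, Gaussian steps (in place of CSA's Levy flights), and function names below are illustrative simplifications, not the paper's algorithm.

```python
import numpy as np

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def adaptive_step_search(f, dim=2, n_nests=15, iters=200, seed=1):
    """Population search whose random-walk step size shrinks each generation."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5.0, 5.0, size=(n_nests, dim))
    fitness = np.array([f(x) for x in nests])
    for t in range(iters):
        alpha = 0.01 ** (t / iters)              # geometric decay: 1 -> 0.01
        for i in range(n_nests):
            trial = nests[i] + alpha * rng.standard_normal(dim)
            ft = f(trial)
            if ft < fitness[i]:                  # greedy replacement, as in CSA
                nests[i], fitness[i] = trial, ft
    best = int(np.argmin(fitness))
    return nests[best], float(fitness[best])

best_x, best_f = adaptive_step_search(sphere)
print(best_f)  # small residual near the global optimum
```

With a fixed large step the search would keep overshooting near the optimum; shrinking alpha is what buys the faster final convergence the abstract reports.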
Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis
NASA Technical Reports Server (NTRS)
Padovan, J.
1981-01-01
A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic, and material effects as well as pre/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.
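The incremental Newton-Raphson backbone that the self-adaptive phases wrap around can be sketched for a single degree of freedom; the hardening-spring material law and function names here are illustrative, and the constraint-surface and energy-scaling phases are omitted.

```python
def incremental_newton_raphson(internal_force, stiffness, f_total,
                               n_inc=10, tol=1e-10):
    """Apply the load in increments; equilibrate each level with Newton-Raphson."""
    u = 0.0
    for k in range(1, n_inc + 1):
        f_ext = f_total * k / n_inc          # current load level
        for _ in range(50):                  # Newton iterations at this level
            residual = f_ext - internal_force(u)
            if abs(residual) < tol:
                break
            u += residual / stiffness(u)     # tangent-stiffness update
    return u

# Hardening spring: N(u) = 100 u + 10 u^3, tangent K(u) = 100 + 30 u^2.
u = incremental_newton_raphson(lambda u: 100.0 * u + 10.0 * u ** 3,
                               lambda u: 100.0 + 30.0 * u ** 2,
                               f_total=500.0)
print(u)  # satisfies 100 u + 10 u^3 = 500
```

Loading incrementally keeps each Newton solve inside its convergence basin; the paper's self-adaptive machinery exists to rescue exactly the cases (limit points, post-buckling) where this plain scheme would diverge.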
Leak test adapter for containers
Hallett, Brian H.; Hartley, Michael S.
1996-01-01
An adapter is provided for facilitating the charging of containers and leak testing penetration areas. The adapter comprises an adapter body and stem which are secured to the container's penetration areas. The container is then pressurized with a tracer gas. Manipulating the adapter stem installs a penetration plug allowing the adapter to be removed and the penetration to be leak tested with a mass spectrometer. Additionally, a method is provided for using the adapter.
Adaptable DC offset correction
NASA Technical Reports Server (NTRS)
Golusky, John M. (Inventor); Muldoon, Kelly P. (Inventor)
2009-01-01
Methods and systems for adaptable DC offset correction are provided. An exemplary adaptable DC offset correction system evaluates an incoming baseband signal to determine an appropriate DC offset removal scheme; removes a DC offset from the incoming baseband signal based on the appropriate DC offset scheme in response to the evaluated incoming baseband signal; and outputs a reduced DC baseband signal in response to the DC offset removed from the incoming baseband signal.
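One simple DC offset removal scheme of the kind such a system might select is a leaky running-mean estimator (equivalently, a one-pole high-pass filter). The filter coefficient and signal below are illustrative assumptions, not the patented method.

```python
import numpy as np

def remove_dc(x, alpha=0.995):
    """Adaptively track and subtract a (possibly drifting) DC offset using a
    leaky running-mean estimate of the baseband signal's mean."""
    y = np.empty(len(x), dtype=float)
    dc = 0.0
    for n, sample in enumerate(x):
        dc = alpha * dc + (1.0 - alpha) * sample   # slow DC estimate
        y[n] = sample - dc
    return y

t = np.arange(20000)
x = np.sin(2 * np.pi * 0.01 * t) + 3.0             # tone riding on a +3 offset
y = remove_dc(x)
print(np.mean(y[-2000:]))  # near zero once the estimator has converged
```

The trade-off an adaptable system would evaluate is the time constant: a larger alpha rejects less of the wanted signal but converges more slowly to a changed offset.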
Goulding, J.R.
1991-01-01
This paper details the approach and methodology used to build adaptive transfer functions in a feed-forward Back-Propagation neural network, and provides insight into the structure dependent properties of using non-scaled analog inputs. The results of using adaptive transfer functions are shown to outperform conventional architectures in the implementation of a mechanical power transmission gearbox design expert system knowledge base. 4 refs., 4 figs., 1 tab.
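The idea of an adaptive transfer function can be illustrated with a single sigmoid unit whose slope is trained by gradient descent alongside the weight and bias; the toy task, learning rates, and parameterization below are illustrative assumptions, not the paper's gearbox knowledge base.

```python
import numpy as np

def act(z, slope):
    """Sigmoid transfer function with an adaptive (trainable) slope."""
    return 1.0 / (1.0 + np.exp(-slope * z))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=200)
Y = (X > 0).astype(float)                 # learn the sign of the input

w, b, s, lr = 0.1, 0.0, 1.0, 0.5          # weight, bias, slope, learning rate
for _ in range(2000):
    z = w * X + b
    p = act(z, s)
    g = (p - Y) * p * (1.0 - p)           # MSE gradient through the sigmoid
    w -= lr * np.mean(g * s * X)
    b -= lr * np.mean(g * s)
    s -= lr * np.mean(g * z)              # the transfer function itself adapts
accuracy = float(np.mean((act(w * X + b, s) > 0.5) == (Y > 0.5)))
print(accuracy)
```

Training the slope lets the unit sharpen or soften its own nonlinearity, which is one way a network can cope with non-scaled analog inputs of very different magnitudes.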
Adaptive resistance to antibiotics in bacteria: a systems biology perspective.
Sandoval-Motta, Santiago; Aldana, Maximino
2016-05-01
Despite all the major breakthroughs in antibiotic development and treatment procedures, there is still no long-term solution to the bacterial antibiotic resistance problem. Among all the known types of resistance, adaptive resistance (AdR) is particularly inconvenient. This phenotype is known to emerge as a consequence of concentration gradients, as well as contact with subinhibitory concentrations of antibiotics, both known to occur in human patients and livestock. Moreover, AdR has been repeatedly correlated with the appearance of multidrug resistance, although the biological processes behind its emergence and evolution are not well understood. Epigenetic inheritance, population structure and heterogeneity, high mutation rates, gene amplification, efflux pumps, and biofilm formation have all been reported as possible explanations for its development. Nonetheless, these concepts taken independently have not been sufficient to prevent AdR's fast emergence or to predict its low stability. New strains of resistant pathogens continue to appear, and none of the new approaches used to kill them (mixed antibiotics, sequential treatments, and efflux inhibitors) is completely efficient. With the advent of systems biology and its toolsets, integrative models that combine experimentally known features with computational simulations have significantly improved our understanding of the emergence and evolution of the adaptive-resistant phenotype. Apart from outlining these findings, we propose that one of the main cornerstones of AdR in bacteria is the conjunction of two types of mechanisms: one rapidly responding to transient environmental challenges but not very efficient, and another much more effective and specific, but developing on longer time scales. WIREs Syst Biol Med 2016, 8:253-267. doi: 10.1002/wsbm.1335 For further resources related to this article, please visit the WIREs website. PMID:27103502
Attractor mechanism as a distillation procedure
Levay, Peter; Szalay, Szilard
2010-07-15
In a recent paper it was shown that for double-extremal static spherically symmetric BPS black hole solutions in the STU model the well-known process of moduli stabilization at the horizon can be recast in the form of a distillation procedure of a three-qubit entangled state of a Greenberger-Horne-Zeilinger type. By studying the full flow in moduli space, in this paper we investigate this distillation procedure in more detail. We introduce a three-qubit state with amplitudes depending on the conserved charges, the warp factor, and the moduli. We show that for the recently discovered non-BPS solutions it is possible to see how the distillation procedure unfolds itself as we approach the horizon. For the non-BPS seed solutions at the asymptotically Minkowski region we are starting with a three-qubit state having seven nonequal nonvanishing amplitudes and finally at the horizon we get a Greenberger-Horne-Zeilinger state with merely four nonvanishing ones with equal magnitudes. The magnitude of the surviving nonvanishing amplitudes is proportional to the macroscopic black hole entropy. A systematic study of such attractor states shows that their properties reflect the structure of the fake superpotential. We also demonstrate that when starting with the very special values for the moduli corresponding to flat directions the uniform structure at the horizon deteriorates due to errors generalizing the usual bit flips acting on the qubits of the attractor states.