Sample records for adaptive solution procedure

  1. Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.

    1993-01-01

    Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description of the enrichment and coarsening procedures is given, and comparisons with experimental data for an ONERA M6 wing and an exact solution for a shock-tube problem are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results, obtained using spatial adaptation procedures, are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.
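
    A minimal sketch of the gradient-based flagging step that drives this kind of enrichment/coarsening, assuming a generic per-cell gradient indicator and quantile thresholds (the indicator, thresholds, and data layout are illustrative choices, not taken from the paper):

      import numpy as np

      def flag_cells(grad_indicator, enrich_frac=0.9, coarsen_frac=0.1):
          """Flag cells for enrichment or coarsening from a per-cell gradient indicator.

          Cells whose indicator exceeds a high quantile are subdivided (points added);
          cells below a low quantile become candidates for point removal.
          """
          hi = np.quantile(grad_indicator, enrich_frac)
          lo = np.quantile(grad_indicator, coarsen_frac)
          enrich = np.where(grad_indicator > hi)[0]   # add points in high-gradient regions
          coarsen = np.where(grad_indicator < lo)[0]  # remove points where not needed
          return enrich, coarsen

      # Example: an indicator built from density jumps across cell faces (stand-in data).
      rho_jump = np.abs(np.random.default_rng(0).normal(size=1000))
      to_enrich, to_coarsen = flag_cells(rho_jump)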

  2. Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.

    1993-01-01

    Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. The paper gives a detailed description of the enrichment and coarsening procedures and presents comparisons with experimental data for an ONERA M6 wing and an exact solution for a shock-tube problem to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results, obtained using spatial adaptation procedures, are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.

  3. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
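
    The threshold-selection idea (flag cells where a mesh-convergence parameter exceeds a threshold chosen at the knee of the "cells flagged versus threshold" curve) can be sketched as follows; the discrete-curvature formula and the stand-in refinement parameter are assumptions for illustration, not the authors' exact construction:

      import numpy as np

      def knee_threshold(refine_param, n_candidates=100):
          """Pick a refinement threshold at the point of maximum curvature of the
          curve N(T) = number of cells whose refinement parameter exceeds T."""
          candidates = np.linspace(refine_param.min(), refine_param.max(), n_candidates)
          n_flagged = np.array([(refine_param > t).sum() for t in candidates], dtype=float)
          d1 = np.gradient(n_flagged, candidates)
          d2 = np.gradient(d1, candidates)
          curvature = np.abs(d2) / (1.0 + d1**2) ** 1.5   # curvature of N(T)
          return candidates[np.argmax(curvature)]

      param = np.random.default_rng(1).lognormal(size=5000)  # stand-in mesh-convergence measure
      threshold = knee_threshold(param)
      flagged = param > threshold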

  4. Spatial adaption procedures on unstructured meshes for accurate unsteady aerodynamic flow computation

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.

    1991-01-01

    Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description is given of the enrichment and coarsening procedures, and comparisons with alternative results and experimental data are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.

  5. Spatial adaption procedures on unstructured meshes for accurate unsteady aerodynamic flow computation

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Yang, Henry T. Y.; Batina, John T.

    1991-01-01

    Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. The paper gives a detailed description of the enrichment and coarsening procedures and presents comparisons with alternative results and experimental data to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.

  6. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

    We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to qualify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
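
    For context, a hedged one-dimensional sketch of a Richardson-extrapolation error estimate of the kind used here as a refinement indicator; the second-order assumption, tolerance, and stand-in profiles are illustrative:

      import numpy as np

      def richardson_error(u_h, u_h2, order=2):
          """Estimate the local discretization error from solutions on spacing h (u_h)
          and h/2 (u_h2, restricted to the coarse points): e_h ~ (u_h2 - u_h)/(2**p - 1)."""
          return (u_h2 - u_h) / (2.0**order - 1.0)

      x = np.linspace(0.0, 1.0, 101)
      u_coarse = np.tanh(20 * (x - 0.5))             # coarse-grid solution (stand-in)
      u_fine = np.tanh(20 * (x - 0.5)) + 1e-3 * x    # fine-grid solution sampled at coarse points
      err = richardson_error(u_coarse, u_fine)
      refine_mask = np.abs(err) > 1e-4               # cells/steps flagged for refinement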

  7. Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1996-01-01

    A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: a linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures are briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
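
    A minimal sketch (a 2D quadtree rather than the full cut-cell machinery) of recursive Cartesian subdivision driven by a refinement predicate; the tree layout and the geometry test are illustrative assumptions:

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Cell:
          """A Cartesian cell in a quadtree; children give cell-to-cell connectivity."""
          x0: float
          y0: float
          x1: float
          y1: float
          children: List["Cell"] = field(default_factory=list)

      def refine(cell, needs_refinement, depth=0, max_depth=8):
          """Recursively subdivide a cell into four children wherever the predicate fires."""
          if depth >= max_depth or not needs_refinement(cell):
              return
          xm, ym = 0.5 * (cell.x0 + cell.x1), 0.5 * (cell.y0 + cell.y1)
          cell.children = [Cell(cell.x0, cell.y0, xm, ym), Cell(xm, cell.y0, cell.x1, ym),
                           Cell(cell.x0, ym, xm, cell.y1), Cell(xm, ym, cell.x1, cell.y1)]
          for child in cell.children:
              refine(child, needs_refinement, depth + 1, max_depth)

      def crosses_body(c):
          """Example predicate: refine cells straddling a circular body of radius 0.3."""
          inside = [x * x + y * y < 0.3**2
                    for x, y in [(c.x0, c.y0), (c.x1, c.y0), (c.x0, c.y1), (c.x1, c.y1)]]
          return any(inside) and not all(inside)

      root = Cell(-1.0, -1.0, 1.0, 1.0)
      refine(root, crosses_body)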

  8. Adaptive graph-based multiple testing procedures

    PubMed Central

    Klinglmueller, Florian; Posch, Martin; Koenig, Franz

    2016-01-01

    Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of the graph-based tests. We generalize these graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong familywise error rate control. To maintain the familywise error rate, it is not required to prespecify the adaptation rule in detail. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios including trials with multiple treatment comparisons, endpoints or subgroups, or combinations thereof. Examples of adaptations are dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, in the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure. An adjusted test needs to be applied only if adaptations are actually implemented. The procedure is illustrated with a case study and its operating characteristics are investigated by simulations. PMID:25319733
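
    For orientation, a hedged sketch of the (non-adaptive) sequentially rejective graph-based test that such adaptive designs generalize, using the usual weight and transition-matrix updates; the weights, transition matrix, and p-values below are made-up illustrations:

      import numpy as np

      def graph_test(p, w, G, alpha=0.025):
          """Sequentially rejective graph-based (Bonferroni-type) multiple test.

          p: p-values; w: initial weights (sum <= 1); G: transition matrix with zero
          diagonal and row sums <= 1. Returns the set of rejected hypothesis indices.
          """
          p, w, G = np.asarray(p, float), np.asarray(w, float), np.asarray(G, float)
          active, rejected = set(range(len(p))), set()
          while True:
              cand = [j for j in active if p[j] <= w[j] * alpha]
              if not cand:
                  return rejected
              j = cand[0]
              rejected.add(j)
              active.discard(j)
              w_new, G_new = w.copy(), G.copy()
              for k in active:
                  w_new[k] = w[k] + w[j] * G[j, k]          # pass the weight of H_j along its edges
                  for l in active:
                      if k == l:
                          continue
                      denom = 1.0 - G[k, j] * G[j, k]
                      G_new[k, l] = (G[k, l] + G[k, j] * G[j, l]) / denom if denom > 0 else 0.0
              w, G = w_new, G_new

      # Two primary hypotheses passing their weight to each other (illustrative numbers).
      rejected = graph_test(p=[0.01, 0.04], w=[0.5, 0.5], G=[[0, 1], [1, 0]])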

  9. Development of a solution adaptive unstructured scheme for quasi-3D inviscid flows through advanced turbomachinery cascades

    NASA Technical Reports Server (NTRS)

    Usab, William J., Jr.; Jiang, Yi-Tsann

    1991-01-01

    The objective of the present research is to develop a general solution adaptive scheme for the accurate prediction of inviscid quasi-three-dimensional flow in advanced compressor and turbine designs. The adaptive solution scheme combines an explicit finite-volume time-marching scheme for unstructured triangular meshes and an advancing front triangular mesh scheme with a remeshing procedure for adapting the mesh as the solution evolves. The unstructured flow solver has been tested on a series of two-dimensional airfoil configurations including a three-element analytic test case presented here. Mesh adapted quasi-three-dimensional Euler solutions are presented for three spanwise stations of the NASA rotor 67 transonic fan. Computed solutions are compared with available experimental data.

  10. Effects of adaptive refinement on the inverse EEG solution

    NASA Astrophysics Data System (ADS)

    Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.

    1995-10-01

    One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.

  11. A solution-adaptive hybrid-grid method for the unsteady analysis of turbomachinery

    NASA Technical Reports Server (NTRS)

    Mathur, Sanjay R.; Madavan, Nateri K.; Rajagopalan, R. G.

    1993-01-01

    A solution-adaptive method for the time-accurate analysis of two-dimensional flows in turbomachinery is described. The method employs a hybrid structured-unstructured zonal grid topology in conjunction with appropriate modeling equations and solution techniques in each zone. The viscous flow region in the immediate vicinity of the airfoils is resolved on structured O-type grids while the rest of the domain is discretized using an unstructured mesh of triangular cells. Implicit, third-order accurate, upwind solutions of the Navier-Stokes equations are obtained in the inner regions. In the outer regions, the Euler equations are solved using an explicit upwind scheme that incorporates a second-order reconstruction procedure. An efficient and robust grid adaptation strategy, including both grid refinement and coarsening capabilities, is developed for the unstructured grid regions. Grid adaptation is also employed to facilitate information transfer at the interfaces between unstructured grids in relative motion. Results for grid adaptation to various features pertinent to turbomachinery flows are presented. Good comparisons between the present results and experimental measurements and earlier structured-grid results are obtained.

  12. Multigrid solution of internal flows using unstructured solution adaptive meshes

    NASA Technical Reports Server (NTRS)

    Smith, Wayne A.; Blake, Kenneth R.

    1992-01-01

    This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will primarily be applied, but not necessarily limited, to high speed internal flows in turbomachinery.

  13. A new procedure for dynamic adaption of three-dimensional unstructured grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger

    1993-01-01

    A new procedure is presented for the simultaneous coarsening and refinement of three-dimensional unstructured tetrahedral meshes. This algorithm allows for localized grid adaption that is used to capture aerodynamic flow features such as vortices and shock waves in helicopter flowfield simulations. The mesh-adaption algorithm is implemented in the C programming language and uses a data structure consisting of a series of dynamically-allocated linked lists. These lists allow the mesh connectivity to be rapidly reconstructed when individual mesh points are added and/or deleted. The algorithm allows the mesh to change in an anisotropic manner in order to efficiently resolve directional flow features. The procedure has been successfully implemented on a single processor of a Cray Y-MP computer. Two sample cases are presented involving three-dimensional transonic flow. Computed results show good agreement with conventional structured-grid solutions for the Euler equations.

  14. Constrained Self-adaptive Solutions Procedures for Structure Subject to High Temperature Elastic-plastic Creep Effects

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Tovichakchaikul, S.

    1983-01-01

    This paper will develop a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which will enable the solution of creep initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation will give a thorough overview of current solution schemes and their shortcomings, develop the constrained time stepping algorithms, and illustrate the results of several numerical experiments which benchmark the new procedure.

  15. A Structured Grid Based Solution-Adaptive Technique for Complex Separated Flows

    NASA Technical Reports Server (NTRS)

    Thornburg, Hugh; Soni, Bharat K.; Kishore, Boyalakuntla; Yu, Robert

    1996-01-01

    The objective of this work was to enhance the predictive capability of widely used computational fluid dynamic (CFD) codes through the use of solution adaptive gridding. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. In order to study the accuracy and efficiency improvements due to the grid adaptation, it is necessary to quantify grid size and distribution requirements as well as computational times of non-adapted solutions. Flow fields about launch vehicles of practical interest often involve supersonic freestream conditions at angle of attack exhibiting large-scale separated vortical flow, vortex-vortex and vortex-surface interactions, separated shear layers and multiple shocks of different intensity. In this work, a weight function and an associated mesh redistribution procedure are presented which detect and resolve these features without user intervention. Particular emphasis has been placed upon accurate resolution of expansion regions and boundary layers. Flow past a wedge at Mach=2.0 is used to illustrate the enhanced detection capabilities of this newly developed weight function.
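
    A hedged one-dimensional sketch of a gradient/curvature-blended weight function with an equidistribution-style redistribution step, of the general kind described; the blending coefficients and the stand-in flow variable are illustrative, not the paper's weight function:

      import numpy as np

      def redistribute(x, q, alpha=1.0, beta=0.5):
          """Move grid points so that a weight built from normalized first and second
          differences of q is (approximately) equidistributed along the line."""
          dq = np.abs(np.gradient(q, x))
          d2q = np.abs(np.gradient(np.gradient(q, x), x))
          w = 1.0 + alpha * dq / dq.max() + beta * d2q / d2q.max()
          # cumulative weight defines a monotone mapping; sample it uniformly and invert
          s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
          return np.interp(np.linspace(0.0, s[-1], len(x)), s, x)

      x = np.linspace(0.0, 1.0, 81)
      q = np.tanh(40 * (x - 0.6))       # stand-in for a shock-like profile
      x_adapted = redistribute(x, q)    # points cluster near the steep gradient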

  16. Procedures for Computing Transonic Flows for Control of Adaptive Wind Tunnels. Ph.D. Thesis - Technische Univ., Berlin, Mar. 1986

    NASA Technical Reports Server (NTRS)

    Rebstock, Rainer

    1987-01-01

    Numerical methods are developed for control of three dimensional adaptive test sections. The physical properties of the design problem occurring in the external field computation are analyzed, and a design procedure suited for solution of the problem is worked out. To do this, the desired wall shape is determined by stepwise modification of an initial contour. The necessary changes in geometry are determined with the aid of a panel procedure, or, with incident flow near the sonic range, with a transonic small perturbation (TSP) procedure. The designed wall shape, together with the wall deflections set during the tunnel run, are the input to a newly derived one-step formula which immediately yields the adapted wall contour. This is particularly important since the classical iterative adaptation scheme is shown to converge poorly for 3D flows. Experimental results obtained in the adaptive test section with eight flexible walls are presented to demonstrate the potential of the procedure. Finally, a method is described to minimize wall interference in 3D flows by adapting only the top and bottom wind tunnel walls.

  17. Adaptive EAGLE dynamic solution adaptation and grid quality enhancement

    NASA Technical Reports Server (NTRS)

    Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.

    1992-01-01

    In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.

  18. Adaptive Modeling Procedure Selection by Data Perturbation.

    PubMed

    Zhang, Yongli; Shen, Xiaotong

    2015-10-01

    Many procedures have been developed to deal with the high-dimensional problem that is emerging in various business and economics areas. To evaluate and compare these procedures, modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into a modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherited in a selection process by perturbing the data. Critical to data perturbation is the size of perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy.
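
    A rough sketch of the data-perturbation idea in a linear-regression setting: refit the selection procedure on noise-perturbed responses and measure how unstable the selected variable set is. The lasso selector, the perturbation size tau, and the instability measure below are illustrative stand-ins, not the authors' derived optimal perturbation:

      import numpy as np
      from sklearn.linear_model import LassoCV

      def selection_instability(X, y, tau, n_perturb=20, seed=0):
          """Average symmetric difference between variables selected on the original
          data and on responses perturbed by N(0, tau^2) noise."""
          rng = np.random.default_rng(seed)
          base = set(np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_))
          diffs = []
          for _ in range(n_perturb):
              y_star = y + tau * rng.standard_normal(len(y))
              sel = set(np.flatnonzero(LassoCV(cv=5).fit(X, y_star).coef_))
              diffs.append(len(base.symmetric_difference(sel)))
          return float(np.mean(diffs))

      rng = np.random.default_rng(1)
      X = rng.standard_normal((100, 10))
      y = X[:, 0] - 2 * X[:, 3] + rng.standard_normal(100)
      instability = selection_instability(X, y, tau=0.5)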

  19. An adaptive large neighborhood search procedure applied to the dynamic patient admission scheduling problem.

    PubMed

    Lusby, Richard Martin; Schwierz, Martin; Range, Troels Martin; Larsen, Jesper

    2016-11-01

    The aim of this paper is to provide an improved method for solving the so-called dynamic patient admission scheduling (DPAS) problem. This is a complex scheduling problem that involves assigning a set of patients to hospital beds over a given time horizon in such a way that several quality measures reflecting patient comfort and treatment efficiency are maximized. Consideration must be given to uncertainty in the length of stays of patients as well as the possibility of emergency patients. We develop an adaptive large neighborhood search (ALNS) procedure to solve the problem. This procedure utilizes a Simulated Annealing framework. We thoroughly test the performance of the proposed ALNS approach on a set of 450 publicly available problem instances. A comparison with the current state-of-the-art indicates that the proposed methodology provides solutions that are of comparable quality for small and medium sized instances (up to 1000 patients); the two approaches provide solutions that differ in quality by approximately 1% on average. The ALNS procedure does, however, provide solutions in a much shorter time frame. On larger instances (between 1000 and 4000 patients) the improvement in solution quality by the ALNS procedure is substantial, approximately 3-14% on average, and as much as 22% on a single instance. The time taken to find such results is, however, in the worst case, a factor of 12 longer on average than the time limit which is granted to the current state-of-the-art. The proposed ALNS procedure is an efficient and flexible method for solving the DPAS problem. Copyright © 2016 Elsevier B.V. All rights reserved.
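
    A skeletal sketch of an ALNS loop with adaptive operator weights and simulated-annealing acceptance, of the general kind used for the DPAS problem; the destroy/repair operator signatures, scoring constants, and cooling schedule are placeholders rather than the paper's configuration:

      import math
      import random

      def alns(initial, destroy_ops, repair_ops, cost, iters=10_000,
               temp=100.0, cooling=0.999, seed=0):
          """Adaptive large neighborhood search with simulated-annealing acceptance.

          Each destroy/repair operator is assumed to take (solution, rng) and return
          a solution; operators that lead to accepted or improving moves gain weight.
          """
          rng = random.Random(seed)
          weights = {op: 1.0 for op in destroy_ops + repair_ops}
          current = best = initial
          for _ in range(iters):
              d = rng.choices(destroy_ops, [weights[o] for o in destroy_ops])[0]
              r = rng.choices(repair_ops, [weights[o] for o in repair_ops])[0]
              candidate = r(d(current, rng), rng)
              delta = cost(candidate) - cost(current)
              if delta < 0 or rng.random() < math.exp(-delta / temp):
                  current = candidate
                  score = 10.0 if cost(current) < cost(best) else 2.0
                  if cost(current) < cost(best):
                      best = current
                  for op in (d, r):                       # adapt operator selection weights
                      weights[op] = 0.8 * weights[op] + 0.2 * score
              temp *= cooling
          return best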

  20. Staggered solution procedures for multibody dynamics simulation

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Chiou, J. C.; Downer, J. D.

    1990-01-01

    The numerical solution procedure for multibody dynamics (MBD) systems is termed a staggered MBD solution procedure that solves the generalized coordinates in a separate module from that for the constraint force. This requires a reformulation of the constraint conditions so that the constraint forces can also be integrated in time. A major advantage of such a partitioned solution procedure is that additional analysis capabilities such as active controller and design optimization modules can be easily interfaced without embedding them into a monolithic program. After introducing the basic equations of motion for MBD system in the second section, Section 3 briefly reviews some constraint handling techniques and introduces the staggered stabilized technique for the solution of the constraint forces as independent variables. The numerical direct time integration of the equations of motion is described in Section 4. As accurate damping treatment is important for the dynamics of space structures, we have employed the central difference method and the mid-point form of the trapezoidal rule since they engender no numerical damping. This is in contrast to the current practice in dynamic simulations of ground vehicles by employing a set of backward difference formulas. First, the equations of motion are partitioned according to the translational and the rotational coordinates. This sets the stage for an efficient treatment of the rotational motions via the singularity-free Euler parameters. The resulting partitioned equations of motion are then integrated via a two-stage explicit stabilized algorithm for updating both the translational coordinates and angular velocities. Once the angular velocities are obtained, the angular orientations are updated via the mid-point implicit formula employing the Euler parameters. When the two algorithms, namely, the two-stage explicit algorithm for the generalized coordinates and the implicit staggered procedure for the constraint Lagrange

  1. Procedure for Adapting Direct Simulation Monte Carlo Meshes

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.

    1992-01-01

    A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.

  2. Combined LAURA-UPS hypersonic solution procedure

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Thompson, Richard A.

    1993-01-01

    A combined solution procedure for hypersonic flowfields around blunted slender bodies was implemented using a thin-layer Navier-Stokes code (LAURA) in the nose region and a parabolized Navier-Stokes code (UPS) on the afterbody region. Perfect gas, equilibrium air, and non-equilibrium air solutions to sharp cones and a sharp wedge were obtained using UPS alone as a preliminary step. Surface heating rates are presented for two slender bodies with blunted noses, having used LAURA to provide a starting solution to UPS downstream of the sonic line. These are an 8 deg sphere-cone in Mach 5, perfect gas, laminar flow at 0 and 4 deg angles of attack and the Reentry F body at Mach 20, 80,000 ft equilibrium gas conditions for 0 and 0.14 deg angles of attack. The results indicate that this procedure is a timely and accurate method for obtaining aerothermodynamic predictions on slender hypersonic vehicles.

  3. A Rapid Item-Search Procedure for Bayesian Adaptive Testing.

    DTIC Science & Technology

    1977-05-01

    …properties of the procedure, they might well introduce undesirable psychological effects on test scores (e.g., Betz & Weiss, 1976a, 1976b)… adaptive ability test (Research Rep. 76-4). Minneapolis: University of Minnesota, Department of Psychology, Psychometric… A Rapid Item-Search Procedure for Bayesian Adaptive Testing. C. David Vale and David J. Weiss. Research Report 77-n.

  4. Multiple testing with discrete data: Proportion of true null hypotheses and two adaptive FDR procedures.

    PubMed

    Chen, Xiongzhi; Doerge, Rebecca W; Heyse, Joseph F

    2018-05-11

    We consider multiple testing with false discovery rate (FDR) control when p values have discrete and heterogeneous null distributions. We propose a new estimator of the proportion of true null hypotheses and demonstrate that it is less upwardly biased than Storey's estimator and two other estimators. The new estimator induces two adaptive procedures, that is, an adaptive Benjamini-Hochberg (BH) procedure and an adaptive Benjamini-Hochberg-Heyse (BHH) procedure. We prove that the adaptive BH (aBH) procedure is conservative nonasymptotically. Through simulation studies, we show that these procedures are usually more powerful than their nonadaptive counterparts and that the adaptive BHH procedure is usually more powerful than the aBH procedure and a procedure based on randomized p-values. The adaptive procedures are applied to a study of HIV vaccine efficacy, where they identify more differentially polymorphic positions than the BH procedure at the same FDR level. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
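
    For orientation, a small sketch of a generic adaptive Benjamini-Hochberg step: estimate the proportion of true nulls, then run BH at level alpha divided by that estimate. The Storey-type estimator shown is a common stand-in, not the discrete-data estimator proposed in the paper:

      import numpy as np

      def storey_pi0(p, lam=0.5):
          """Storey-type estimate of the proportion of true null hypotheses."""
          p = np.asarray(p, float)
          return min(1.0, (p > lam).mean() / (1.0 - lam))

      def adaptive_bh(p, alpha=0.05):
          """Adaptive BH: plug an estimate of pi0 into the BH step-up threshold."""
          p = np.asarray(p, float)
          m, pi0 = len(p), storey_pi0(p)
          order = np.argsort(p)
          thresh = alpha * np.arange(1, m + 1) / (m * pi0)
          passed = np.flatnonzero(p[order] <= thresh)
          k = passed.max() + 1 if passed.size else 0
          rejected = np.zeros(m, dtype=bool)
          rejected[order[:k]] = True
          return rejected

      rng = np.random.default_rng(2)
      pvals = np.concatenate([rng.uniform(size=80), rng.beta(0.1, 5.0, size=20)])
      rejected = adaptive_bh(pvals, alpha=0.05)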

  5. Mixture-based gatekeeping procedures in adaptive clinical trials.

    PubMed

    Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji

    2018-01-01

    Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.

  6. Adaptation and innovation: a grounded theory study of procedural variation in the academic surgical workplace.

    PubMed

    Apramian, Tavis; Watling, Christopher; Lingard, Lorelei; Cristancho, Sayra

    2015-10-01

    Surgical research struggles to describe the relationship between procedural variations in daily practice and traditional conceptualizations of evidence. The problem has resisted simple solutions, in part, because we lack a solid understanding of how surgeons conceptualize and interact around variation, adaptation, innovation, and evidence in daily practice. This grounded theory study aims to describe the social processes that influence how procedural variation is conceptualized in the surgical workplace. Using the constructivist grounded theory methodology, semi-structured interviews with surgeons (n = 19) from four North American academic centres were collected and analysed. Purposive sampling targeted surgeons with experiential knowledge of the role of variations in the workplace. Theoretical sampling was conducted until a theoretical framework representing key processes was conceptually saturated. Surgical procedural variation was influenced by three key processes. Seeking improvement was shaped by having unsolved procedural problems, adapting in the moment, and pursuing personal opportunities. Orienting self and others to variations consisted of sharing stories of variations with others, taking stock of how a variation promoted personal interests, and placing trust in peers. Acting under cultural and material conditions was characterized by being wary, positioning personal image, showing the logic of a variation, and making use of academic resources to do so. Our findings include social processes that influence how adaptations are incubated in surgical practice and mature into innovations. This study offers a language for conceptualizing the sociocultural influences on procedural variations in surgery. Interventions to change how surgeons interact with variations on a day-to-day basis should consider these social processes in their design. © 2015 John Wiley & Sons, Ltd.

  7. Creating Online Training for Procedures in Global Health with PEARLS (Procedural Education for Adaptation to Resource-Limited Settings).

    PubMed

    Bensman, Rachel S; Slusher, Tina M; Butteris, Sabrina M; Pitt, Michael B; On Behalf Of The Sugar Pearls Investigators; Becker, Amanda; Desai, Brinda; George, Alisha; Hagen, Scott; Kiragu, Andrew; Johannsen, Ron; Miller, Kathleen; Rule, Amy; Webber, Sarah

    2017-11-01

    The authors describe a multiinstitutional collaborative project to address a gap in global health training by creating a free online platform to share a curriculum for performing procedures in resource-limited settings. This curriculum called PEARLS (Procedural Education for Adaptation to Resource-Limited Settings) consists of peer-reviewed instructional and demonstration videos describing modifications for performing common pediatric procedures in resource-limited settings. Adaptations range from the creation of a low-cost spacer for inhaled medications to a suction chamber for continued evacuation of a chest tube. By describing the collaborative process, we provide a model for educators in other fields to collate and disseminate procedural modifications adapted for their own specialty and location, ideally expanding this crowd-sourced curriculum to reach a wide audience of trainees and providers in global health.

  8. Introducing an on-line adaptive procedure for prostate image guided intensity modulate proton therapy.

    PubMed

    Zhang, M; Westerly, D C; Mackie, T R

    2011-08-07

    With on-line image guidance (IG), prostate shifts relative to the bony anatomy can be corrected by realigning the patient with respect to the treatment fields. In image guided intensity modulated proton therapy (IG-IMPT), because the proton range is more sensitive to the material it travels through, the realignment may introduce large dose variations. This effect is studied in this work and an on-line adaptive procedure is proposed to restore the planned dose to the target. A 2D anthropomorphic phantom was constructed from a real prostate patient's CT image. Two-field laterally opposing spot 3D-modulation and 24-field full arc distal edge tracking (DET) plans were generated with a prescription of 70 Gy to the planning target volume. For the simulated delivery, we considered two types of procedures: the non-adaptive procedure and the on-line adaptive procedure. In the non-adaptive procedure, only patient realignment to match the prostate location in the planning CT was performed. In the on-line adaptive procedure, on top of the patient realignment, the kinetic energy for each individual proton pencil beam was re-determined from the on-line CT image acquired after the realignment and subsequently used for delivery. Dose distributions were re-calculated for individual fractions for different plans and different delivery procedures. The results show that, without adaptation, both the 3D-modulation and the DET plans experienced delivered dose degradation, with large cold or hot spots in the prostate. The DET plan had worse dose degradation than the 3D-modulation plan. The adaptive procedure effectively restored the planned dose distribution in the DET plan, with delivered prostate D(98%), D(50%) and D(2%) values less than 1% from the prescription. In the 3D-modulation plan, in certain cases the adaptive procedure was not effective in reducing the delivered dose degradation and yielded results similar to those of the non-adaptive procedure. In conclusion, based on this 2D phantom

  9. Introducing an on-line adaptive procedure for prostate image guided intensity modulate proton therapy

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Westerly, D. C.; Mackie, T. R.

    2011-08-01

    With on-line image guidance (IG), prostate shifts relative to the bony anatomy can be corrected by realigning the patient with respect to the treatment fields. In image guided intensity modulated proton therapy (IG-IMPT), because the proton range is more sensitive to the material it travels through, the realignment may introduce large dose variations. This effect is studied in this work and an on-line adaptive procedure is proposed to restore the planned dose to the target. A 2D anthropomorphic phantom was constructed from a real prostate patient's CT image. Two-field laterally opposing spot 3D-modulation and 24-field full arc distal edge tracking (DET) plans were generated with a prescription of 70 Gy to the planning target volume. For the simulated delivery, we considered two types of procedures: the non-adaptive procedure and the on-line adaptive procedure. In the non-adaptive procedure, only patient realignment to match the prostate location in the planning CT was performed. In the on-line adaptive procedure, on top of the patient realignment, the kinetic energy for each individual proton pencil beam was re-determined from the on-line CT image acquired after the realignment and subsequently used for delivery. Dose distributions were re-calculated for individual fractions for different plans and different delivery procedures. The results show that, without adaptation, both the 3D-modulation and the DET plans experienced delivered dose degradation, with large cold or hot spots in the prostate. The DET plan had worse dose degradation than the 3D-modulation plan. The adaptive procedure effectively restored the planned dose distribution in the DET plan, with delivered prostate D98%, D50% and D2% values less than 1% from the prescription. In the 3D-modulation plan, in certain cases the adaptive procedure was not effective in reducing the delivered dose degradation and yielded results similar to those of the non-adaptive procedure. In conclusion, based on this 2D phantom study

  10. A Procedural Solution to Model Roman Masonry Structures

    NASA Astrophysics Data System (ADS)

    Cappellini, V.; Saleri, R.; Stefani, C.; Nony, N.; De Luca, L.

    2013-07-01

    The paper will describe a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in Roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly, we have focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we have chosen an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac - PAM, developed by IGN (Paris). We have employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in a parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with an open source programming language called Processing, useful for visual, animated or static, 2D or 3D, interactive creations. Using this computer language, a Java environment has been developed. Therefore, even if the procedural modelling reveals an accuracy level inferior to the one obtained by manual modelling (brick by brick), this method can be useful when taking into account the static evaluation on buildings (requiring quantitative aspects) and metric measures for restoration purposes.

  11. Interactive grid adaption

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Everton, Eric L.

    1990-01-01

    An interactive grid adaption method is developed, discussed and applied to the unsteady flow about an oscillating airfoil. The user is allowed to have direct interaction with the adaption of the grid as well as the solution procedure. Grid points are allowed to adapt simultaneously to several variables. In addition to the theory and results, the hardware and software requirements are discussed.

  12. Procedures for Selecting Items for Computerized Adaptive Tests.

    ERIC Educational Resources Information Center

    Kingsbury, G. Gage; Zara, Anthony R.

    1989-01-01

    Several classical approaches and alternative approaches to item selection for computerized adaptive testing (CAT) are reviewed and compared. The study also describes procedures for constrained CAT that may be added to classical item selection approaches to allow them to be used for applied testing. (TJH)

  13. Stratified and Maximum Information Item Selection Procedures in Computer Adaptive Testing

    ERIC Educational Resources Information Center

    Deng, Hui; Ansley, Timothy; Chang, Hua-Hua

    2010-01-01

    In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…
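
    A hedged sketch contrasting the two selection ideas compared here: maximum Fisher information at the current ability estimate versus a-stratified selection that restricts early items to low-discrimination strata and matches difficulty to ability. The 2PL information formula is standard; the strata layout and item bank are illustrative:

      import numpy as np

      def fisher_info_2pl(theta, a, b):
          """Fisher information of 2PL items at ability theta."""
          prob = 1.0 / (1.0 + np.exp(-a * (theta - b)))
          return a**2 * prob * (1.0 - prob)

      def select_max_info(theta, a, b, used):
          """Maximum-information selection among unused items."""
          info = fisher_info_2pl(theta, a, b)
          info[list(used)] = -np.inf
          return int(np.argmax(info))

      def select_stratified(theta, a, b, used, stage, n_strata=4):
          """a-stratified selection: use the stratum assigned to the current stage
          (low-a items early, high-a late), then match difficulty b to theta."""
          strata = np.array_split(np.argsort(a), n_strata)
          pool = [i for i in strata[min(stage, n_strata - 1)] if i not in used]
          return int(pool[np.argmin(np.abs(b[pool] - theta))])

      rng = np.random.default_rng(3)
      a_params, b_params = rng.uniform(0.5, 2.0, 200), rng.normal(size=200)
      first_item = select_stratified(theta=0.0, a=a_params, b=b_params, used=set(), stage=0)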

  14. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  15. The block adaptive multigrid method applied to the solution of the Euler equations

    NASA Technical Reports Server (NTRS)

    Pantelelis, Nikos

    1993-01-01

    In the present study, a scheme capable of solving very fast and robust complex nonlinear systems of equations is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on the prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (18-fold acceleration of the solution) using one fourth of the volumes of a global grid with the same solution accuracy for two test cases.

  16. Biclustering of gene expression data using reactive greedy randomized adaptive search procedure

    PubMed Central

    Dharan, Smitha; Nair, Achuthsankar S

    2009-01-01

    Background Biclustering algorithms belong to a distinct class of clustering algorithms that perform simultaneous clustering of both rows and columns of the gene expression matrix and can be a very useful analysis tool when some genes have multiple functions and experimental conditions are diverse. Cheng and Church have introduced a measure called the mean squared residue score to evaluate the quality of a bicluster, which has become one of the most popular measures to search for biclusters. In this paper, we review basic concepts of the metaheuristic Greedy Randomized Adaptive Search Procedure (GRASP), namely its construction and local search phases, and propose a new method which is a variant of GRASP called Reactive Greedy Randomized Adaptive Search Procedure (Reactive GRASP) to detect significant biclusters from large microarray datasets. The method has two major steps. First, high quality bicluster seeds are generated by means of k-means clustering. In the second step, these seeds are grown using the Reactive GRASP, in which the basic parameter that defines the restrictiveness of the candidate list is self-adjusted, depending on the quality of the solutions found previously. Results We performed statistical and biological validations of the biclusters obtained and evaluated the method against the results of basic GRASP as well as the classic work of Cheng and Church. The experimental results indicate that the Reactive GRASP approach outperforms the basic GRASP algorithm and the Cheng and Church approach. Conclusion The Reactive GRASP approach for the detection of significant biclusters is robust and does not require calibration efforts. PMID:19208127
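
    The mean squared residue score referenced here has a standard closed form; a small sketch of scoring one candidate bicluster (the planted submatrix and indices are illustrative):

      import numpy as np

      def mean_squared_residue(expr, rows, cols):
          """Cheng-and-Church mean squared residue of the bicluster (rows, cols):
          residue(i, j) = a_ij - rowmean_i - colmean_j + overall mean; lower = more coherent."""
          sub = expr[np.ix_(rows, cols)]
          residue = sub - sub.mean(axis=1, keepdims=True) - sub.mean(axis=0, keepdims=True) + sub.mean()
          return float((residue**2).mean())

      rng = np.random.default_rng(4)
      data = rng.normal(size=(50, 30))
      data[:10, :8] += np.arange(8)          # plant an additive-pattern bicluster
      score = mean_squared_residue(data, rows=range(10), cols=range(8))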

  17. Biclustering of gene expression data using reactive greedy randomized adaptive search procedure.

    PubMed

    Dharan, Smitha; Nair, Achuthsankar S

    2009-01-30

    Biclustering algorithms belong to a distinct class of clustering algorithms that perform simultaneous clustering of both rows and columns of the gene expression matrix and can be a very useful analysis tool when some genes have multiple functions and experimental conditions are diverse. Cheng and Church have introduced a measure called the mean squared residue score to evaluate the quality of a bicluster, which has become one of the most popular measures to search for biclusters. In this paper, we review basic concepts of the metaheuristic Greedy Randomized Adaptive Search Procedure (GRASP), namely its construction and local search phases, and propose a new method which is a variant of GRASP called Reactive Greedy Randomized Adaptive Search Procedure (Reactive GRASP) to detect significant biclusters from large microarray datasets. The method has two major steps. First, high quality bicluster seeds are generated by means of k-means clustering. In the second step, these seeds are grown using the Reactive GRASP, in which the basic parameter that defines the restrictiveness of the candidate list is self-adjusted, depending on the quality of the solutions found previously. We performed statistical and biological validations of the biclusters obtained and evaluated the method against the results of basic GRASP as well as the classic work of Cheng and Church. The experimental results indicate that the Reactive GRASP approach outperforms the basic GRASP algorithm and the Cheng and Church approach. The Reactive GRASP approach for the detection of significant biclusters is robust and does not require calibration efforts.

  18. The use of solution adaptive grids in solving partial differential equations

    NASA Technical Reports Server (NTRS)

    Anderson, D. A.; Rai, M. M.

    1982-01-01

    The grid point distribution used in solving a partial differential equation using a numerical method has a substantial influence on the quality of the solution. An adaptive grid which adjusts as the solution changes provides the best results when the number of grid points available for use during the calculation is fixed. Basic concepts used in generating and applying adaptive grids are reviewed in this paper, and examples illustrating applications of these concepts are presented.

  19. Space-time mesh adaptation for solute transport in randomly heterogeneous porous media.

    PubMed

    Dell'Oca, Aronne; Porta, Giovanni Michele; Guadagnini, Alberto; Riva, Monica

    2018-05-01

    We assess the impact of an anisotropic space and time grid adaptation technique on our ability to solve numerically solute transport in heterogeneous porous media. Heterogeneity is characterized in terms of the spatial distribution of hydraulic conductivity, whose natural logarithm, Y, is treated as a second-order stationary random process. We consider nonreactive transport of dissolved chemicals to be governed by an Advection Dispersion Equation at the continuum scale. The flow field, which provides the advective component of transport, is obtained through the numerical solution of Darcy's law. A suitable recovery-based error estimator is analyzed to guide the adaptive discretization. We investigate two diverse strategies guiding the (space-time) anisotropic mesh adaptation. These are respectively grounded on the definition of the guiding error estimator through the spatial gradients of: (i) the concentration field only; (ii) both concentration and velocity components. We test the approach for two-dimensional computational scenarios with moderate and high levels of heterogeneity, the latter being expressed in terms of the variance of Y. As quantities of interest, we key our analysis towards the time evolution of section-averaged and point-wise solute breakthrough curves, second centered spatial moment of concentration, and scalar dissipation rate. As a reference against which we test our results, we consider corresponding solutions associated with uniform space-time grids whose level of refinement is established through a detailed convergence study. We find a satisfactory comparison between results for the adaptive methodologies and such reference solutions, our adaptive technique being associated with a markedly reduced computational cost. Comparison of the two adaptive strategies tested suggests that: (i) defining the error estimator relying solely on concentration fields yields some advantages in grasping the key features of solute transport taking place within

  20. Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test

    ERIC Educational Resources Information Center

    Ho, Tsung-Han; Dodd, Barbara G.

    2012-01-01

    In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…

  1. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.

  2. Adaptive correction procedure for TVL1 image deblurring under impulse noise

    NASA Astrophysics Data System (ADS)

    Bai, Minru; Zhang, Xiongjun; Shao, Qianqian

    2016-08-01

    For the problem of image restoration of observed images corrupted by blur and impulse noise, the widely used TVL1 model may deviate from both the data-acquisition model and the prior model, especially for high noise levels. In order to seek a solution of high recovery quality beyond the reach of the TVL1 model, we propose an adaptive correction procedure for TVL1 image deblurring under impulse noise. Then, a proximal alternating direction method of multipliers (ADMM) is presented to solve the corrected TVL1 model and its convergence is also established under very mild conditions. It is verified by numerical experiments that our proposed approach outperforms the TVL1 model in terms of signal-to-noise ratio (SNR) values and visual quality, especially for high noise levels: it can handle salt-and-pepper noise as high as 90% and random-valued noise as high as 70%. In addition, a comparison with a state-of-the-art method, the two-phase method, demonstrates the superiority of the proposed approach.
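
    One building block of such ADMM-type solvers can be written in closed form: the proximal (shrinkage) operator of an L1 term, used for the data-fidelity subproblem in TVL1-style models. A minimal sketch (generic, not the paper's corrected model):

      import numpy as np

      def soft_threshold(v, tau):
          """Proximal operator of tau*||.||_1 (elementwise soft thresholding):
          solves min_x 0.5*(x - v)**2 + tau*|x| for each component."""
          return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

      v = np.array([-1.5, -0.2, 0.0, 0.4, 2.0])
      print(soft_threshold(v, tau=0.5))   # -> [-1. -0.  0.  0.  1.5]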

  3. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low order time integrators while accurate solutions require high order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the used method of integration operates inside of its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
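
    A hedged sketch of the elementary stepsize decision underlying such h-adaptivity: accept or reject a step from a local error estimate and rescale h by the usual asymptotic factor. The safety factor and clipping bounds are conventional choices, and the paper's order-selection (p-adaptivity) logic is not reproduced here:

      def adapt_step(h, err_norm, order, tol=1e-4, safety=0.9, fac_min=0.2, fac_max=5.0):
          """One h-adaptivity decision for a method of the given order.

          err_norm is a local error estimate normalized so that values <= tol are
          acceptable. Returns (accepted, new_h) with
          new_h = h * safety * (tol/err)**(1/(order+1)), clipped to [fac_min, fac_max]*h.
          """
          accepted = err_norm <= tol
          factor = safety * (tol / max(err_norm, 1e-16)) ** (1.0 / (order + 1))
          factor = min(fac_max, max(fac_min, factor))
          return accepted, h * factor

      # A rejected step: h is shrunk and the step is retried from the same state.
      ok, h_new = adapt_step(h=1e-2, err_norm=5e-4, order=2, tol=1e-4)
      assert not ok and h_new < 1e-2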

  4. A Stochastic Total Least Squares Solution of Adaptive Filtering Problem

    PubMed Central

    Ahmad, Noor Atinah

    2014-01-01

    An efficient and computationally linear algorithm is derived for total least squares solution of adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of adaptive TLS problem by minimizing instantaneous value of weighted cost function. Convergence analysis of the algorithm is given to show the global convergence of the proposed algorithm, provided that the stepsize parameter is appropriately chosen. The TLMS algorithm is computationally simpler than the other TLS algorithms and demonstrates a better performance as compared with the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs. PMID:24688412
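
    For background, a short sketch of the batch total-least-squares solution that a recursive TLMS filter approximates: the weight vector comes from the right singular vector of the augmented data matrix [X | d] associated with its smallest singular value. This is the classical construction, not the paper's recursive update:

      import numpy as np

      def tls_solution(X, d):
          """Batch total least squares: allow perturbations in both the input matrix X
          and the desired signal d, and read the solution off the smallest right
          singular vector of the augmented matrix [X | d]."""
          Z = np.column_stack([X, d])
          _, _, Vt = np.linalg.svd(Z, full_matrices=False)
          v = Vt[-1]
          return -v[:-1] / v[-1]

      rng = np.random.default_rng(5)
      w_true = np.array([1.0, -0.5, 0.25])
      X_clean = rng.standard_normal((500, 3))
      X = X_clean + 0.05 * rng.standard_normal((500, 3))        # input observed with noise
      d = X_clean @ w_true + 0.05 * rng.standard_normal(500)    # output observed with noise
      w_tls = tls_solution(X, d)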

  5. Differential Effects of Two Spelling Procedures on Acquisition, Maintenance and Adaption to Reading

    ERIC Educational Resources Information Center

    Cates, Gary L.; Dunne, Megan; Erkfritz, Karyn N.; Kivisto, Aaron; Lee, Nicole; Wierzbicki, Jennifer

    2007-01-01

    An alternating treatments design was used to assess the effects of a constant time delay (CTD) procedure and a cover-copy-compare (CCC) procedure on three students' acquisition, subsequent maintenance, and adaptation (i.e., application) of acquired spelling words to reading passages. Students were randomly presented two trials of word lists from…

  6. Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
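
    As an illustrative sketch of recursive Cartesian subdivision of the kind described (stored here as a simple quadtree rather than the paper's binary tree; the class and predicate names are hypothetical):

      from dataclasses import dataclass, field

      @dataclass
      class Cell:
          x0: float
          y0: float
          x1: float
          y1: float
          children: list = field(default_factory=list)

      def refine(cell, needs_refinement, depth=0, max_depth=8):
          # Recursively split a cell into four children wherever the
          # caller-supplied predicate flags it for refinement.
          if depth >= max_depth or not needs_refinement(cell):
              return
          xm, ym = 0.5 * (cell.x0 + cell.x1), 0.5 * (cell.y0 + cell.y1)
          cell.children = [Cell(cell.x0, cell.y0, xm, ym),
                           Cell(xm, cell.y0, cell.x1, ym),
                           Cell(cell.x0, ym, xm, cell.y1),
                           Cell(xm, ym, cell.x1, cell.y1)]
          for child in cell.children:
              refine(child, needs_refinement, depth + 1, max_depth)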

  7. Inactivation of Heat Adapted and Chlorine Adapted Listeria Monocytogenes ATCC 7644 on Tomatoes Using Sodium Dodecyl Sulphate, Levulinic Acid and Sodium Hypochlorite Solution.

    PubMed

    Ijabadeniyi, Oluwatosin Ademola; Mnyandu, Elizabeth

    2017-04-13

    The effectiveness of sodium dodecyl sulphate (SDS), sodium hypochlorite solution and levulinic acid in reducing the survival of heat adapted and chlorine adapted Listeria monocytogenes ATCC 7644 was evaluated. Against heat adapted L. monocytogenes, sodium hypochlorite solution was the least effective, achieving log reductions of 2.75, 2.94 and 3.97 log colony forming units (CFU)/mL for 1, 3 and 5 minutes, respectively. SDS achieved an 8 log reduction for both heat adapted and chlorine adapted bacteria. When used against chlorine adapted L. monocytogenes, sodium hypochlorite solution achieved log reductions of 2.76, 2.93 and 3.65 log CFU/mL for 1, 3 and 5 minutes, respectively. Levulinic acid on heat adapted bacteria achieved log reductions of 3.07, 2.78 and 4.97 log CFU/mL for 1, 3 and 5 minutes, respectively, and on chlorine adapted bacteria 2.77, 3.07 and 5.21 log CFU/mL for 1, 3 and 5 minutes, respectively. A mixture of 0.05% SDS and 0.5% levulinic acid on heat adapted bacteria achieved log reductions of 3.13, 3.32 and 4.79 log CFU/mL for 1, 3 and 5 minutes, while on chlorine adapted bacteria it achieved 3.20, 3.33 and 5.66 log CFU/mL, respectively. Increasing contact time also increased the log reduction for both test pathogens. A storage period of up to 72 hours resulted in progressive log reduction for both test pathogens. Results also revealed a significant difference (P≤0.05) among contact times, storage times and sanitizers. Findings from this study can be used to select suitable sanitizers and contact times for heat and chlorine adapted L. monocytogenes in the fresh produce industry.

  8. Adaptive clustering procedure for continuous gravitational wave searches

    NASA Astrophysics Data System (ADS)

    Singh, Avneet; Papa, Maria Alessandra; Eggenstein, Heinz-Bernd; Walsh, Sinéad

    2017-10-01

    In hierarchical searches for continuous gravitational waves, clustering of candidates is an important post-processing step because it reduces the number of noise candidates that are followed up at successive stages [J. Aasi et al., Phys. Rev. D 88, 102002 (2013), 10.1103/PhysRevD.88.102002; B. Behnke, M. A. Papa, and R. Prix, Phys. Rev. D 91, 064007 (2015), 10.1103/PhysRevD.91.064007; M. A. Papa et al., Phys. Rev. D 94, 122006 (2016), 10.1103/PhysRevD.94.122006]. Previous clustering procedures bundled together nearby candidates, ascribing them to the same root cause (be it a signal or a disturbance), based on a predefined cluster volume. In this paper, we present a procedure that adapts the cluster volume to the data itself and checks for consistency of that volume with what is expected from a signal. This significantly improves the noise rejection capabilities at fixed detection threshold and, at fixed computing resources for the follow-up stages, results in an overall more sensitive search. This new procedure was employed in the first Einstein@Home search on data from the first science run of the advanced LIGO detectors (O1) [LIGO Scientific Collaboration and Virgo Collaboration, arXiv:1707.02669 [Phys. Rev. D (to be published)]].

  9. Reporting of the translation and cultural adaptation procedures of the Addenbrooke's Cognitive Examination version III (ACE-III) and its predecessors: a systematic review.

    PubMed

    Mirza, Nadine; Panagioti, Maria; Waheed, Muhammad Wali; Waheed, Waquas

    2017-09-13

    The ACE-III, a gold standard for screening cognitive impairment, is restricted by language and culture, with no uniform set of guidelines for its adaptation. To develop such guidelines, a compilation of all the adaptation procedures undertaken by adapters of the ACE-III and its predecessors is needed. We searched EMBASE, Medline and PsychINFO and screened publications from a previous review. We included publications on adapted versions of the ACE-III and its predecessors, extracting translation and cultural adaptation procedures and assessing their quality. We deemed 32 papers suitable for analysis. Seven translation steps were identified, and we determined which items of the ACE-III are culturally dependent. This review lists all adaptations of the ACE, ACE-R and ACE-III, rates the reporting of their adaptation procedures and summarises the adaptation procedures into steps that can be undertaken by adapters.

  10. Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
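
    A minimal 1-D sketch of the undivided second-difference sensor mentioned above (the function name, and the idea of thresholding its output to flag cells, are illustrative):

      import numpy as np

      def undivided_second_difference(q):
          # Undivided second difference of a 1-D solution array; large
          # values flag regions where finer off-body grids would be added.
          sensor = np.zeros_like(q, dtype=float)
          sensor[1:-1] = np.abs(q[2:] - 2.0 * q[1:-1] + q[:-2])
          return sensor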

  11. Solution adaptive grids applied to low Reynolds number flow

    NASA Astrophysics Data System (ADS)

    de With, G.; Holdø, A. E.; Huld, T. A.

    2003-08-01

    A numerical study has been undertaken to investigate the use of a solution adaptive grid for flow around a cylinder in the laminar flow regime. The purpose of this work is twofold. The first aim is to investigate the suitability of a grid adaptation algorithm and the reduction in mesh size that can be obtained. The second is to use the uniform asymmetric flow structures, which are ideal for validating the mesh structures produced by mesh refinement and, consequently, the selected refinement criteria. The refinement variable used in this work is a product of the rate of strain and the mesh cell size, and contains two parameters, Cm and Cstr, which determine the order of each term. By altering the order of either term, the refinement behaviour can be modified.
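
    Read literally, the refinement variable could be sketched as below, with the exponents playing the role of the order-setting parameters Cstr and Cm; this is an interpretation for illustration, not the authors' exact definition.

      import numpy as np

      def refinement_variable(strain_rate, cell_size, c_str=1.0, c_m=1.0):
          # Product of the local rate of strain and the mesh cell size,
          # each raised to a tunable exponent; cells where this exceeds a
          # chosen threshold would be flagged for refinement.
          return np.power(strain_rate, c_str) * np.power(cell_size, c_m)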

  12. Salt taste adaptation: the psychophysical effects of adapting solutions and residual stimuli from prior tastings on the taste of sodium chloride.

    PubMed

    O'Mahony, M

    1979-01-01

    The paper reviews how adaptation to sodium chloride, changing in concentration as a result of various experimental procedures, affects measurements of the sensitivity, intensity, and quality of the salt taste. The development of and evidence for the current model that the salt taste depends on an adaptation level (taste zero) determined by the sodium cation concentration is examined and found to be generally supported, despite great methodological complications. It would seem that lower adaptation levels elicit lower thresholds, higher intensity estimates, and altered quality descriptions with predictable effects on psychophysical measures.

  13. Solution procedure of dynamical contact problems with friction

    NASA Astrophysics Data System (ADS)

    Abdelhakim, Lotfi

    2017-07-01

    Dynamical contact is a common research topic because of its wide applications in engineering. The main goal of this work is to develop a time-stepping algorithm for dynamic contact problems. We propose a finite element approach for elastodynamic contact problems [1]. Sticking, sliding and frictional contact can be taken into account. Lagrange multipliers are used to enforce the non-penetration condition. For the time discretization, we propose a scheme equivalent to the explicit Newmark scheme. Each time step requires solving a nonlinear problem similar to a static friction problem. The nonlinearity of the system of equations requires an iterative solution procedure based on Uzawa's algorithm [2][3]. The applicability of the algorithm is illustrated by selected sample numerical solutions to static and dynamic contact problems. Results obtained with the model have been compared and verified with results from an independent numerical method.
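
    A minimal sketch of a projected Uzawa iteration for a saddle-point system with inequality constraints, of the general kind used to enforce non-penetration with Lagrange multipliers (the matrix names and the fixed penalty rho are illustrative, not the paper's formulation):

      import numpy as np

      def uzawa(A, B, f, g, rho=1.0, tol=1e-8, max_iter=500):
          # Solve A u + B.T lam = f subject to B u <= g, lam >= 0.
          lam = np.zeros(B.shape[0])
          u = np.zeros(A.shape[0])
          for _ in range(max_iter):
              u = np.linalg.solve(A, f - B.T @ lam)           # primal solve
              lam_new = np.maximum(lam + rho * (B @ u - g), 0.0)  # projected update
              if np.linalg.norm(lam_new - lam) < tol:
                  return u, lam_new
              lam = lam_new
          return u, lam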

  14. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Because errors in these output functions are generally unknown, solutions may be computed to a conservatively high accuracy. Computable error estimates offer the possibility of minimizing computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user-specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high-lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functionals that are as accurate as those calculated on uniformly refined grids with ten times as many grid points.
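
    As a schematic sketch (not the paper's estimator), the adjoint-weighted residual idea behind such error estimates can be written as eta ~ psi^T R(u_H), with cell-wise magnitudes used to drive adaptation; the names below are illustrative.

      import numpy as np

      def functional_error_estimate(residual, adjoint):
          # Adjoint-weighted residual estimate of the error in an output
          # functional: eta ~ psi^T R(u_H).
          return float(adjoint @ residual)

      def adaptation_indicators(residual, adjoint):
          # Cell-wise contributions |psi_i R_i(u_H)|; large entries flag
          # cells (or elements) for refinement.
          return np.abs(adjoint * residual)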

  15. Combined LAURA-UPS solution procedure for chemically-reacting flows. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Wood, William A.

    1994-01-01

    A new procedure seeks to combine the thin-layer Navier-Stokes solver LAURA with the parabolized Navier-Stokes solver UPS for the aerothermodynamic solution of chemically-reacting air flowfields. The interface protocol is presented and the method is applied to two slender, blunted shapes. Both axisymmetric and three-dimensional solutions are included, with surface pressure and heat transfer comparisons between the present method and previously published results. The case of Mach 25 flow over an axisymmetric six-degree sphere-cone with a noncatalytic wall is considered out to 100 nose radii. A stability bound on the marching step size was observed for this case and is attributed to chemistry effects resulting from the noncatalytic wall boundary condition. A second case, Mach 28 flow over a sphere-cone-cylinder-flare configuration, is computed at both two and five degree angles of attack with a fully-catalytic wall. Surface pressures computed with the present method are within five percent of the baseline LAURA solution, and heat transfer rates are within 10 percent. The effect of grid resolution is investigated, and the nonequilibrium results are compared with a perfect gas solution, showing that while the surface pressure is relatively unchanged by the inclusion of reacting chemistry, the nonequilibrium heating is 25 percent higher. For the three-dimensional case, the procedure demonstrates significant, order-of-magnitude reductions in solution time and required memory compared with an all thin-layer Navier-Stokes solution.

  16. Eulerian Lagrangian Adaptive Fup Collocation Method for solving the conservative solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Srzic, Veljko

    2014-05-01

    Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or more of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, together with explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines and are likewise compactly supported; they exactly reproduce algebraic polynomials and enable multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near-minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. According to our recent achievements there is no need for solving the large

  17. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  18. Transonic flow solutions using a composite velocity procedure for potential, Euler and RNS equations

    NASA Technical Reports Server (NTRS)

    Gordnier, R. E.; Rubin, S. G.

    1986-01-01

    Solutions for transonic viscous and inviscid flows using a composite velocity procedure are presented. The velocity components of the compressible flow equations are written in terms of a multiplicative composite consisting of a viscous or rotational velocity and an inviscid, irrotational, potential-like function. This provides for an efficient solution procedure that is locally representative of both asymptotic inviscid and boundary layer theories. A modified conservative form of the axial momentum equation that is required to obtain rotational solutions in the inviscid region is presented and a combined conservation/nonconservation form is applied for evaluation of the reduced Navier-Stokes (RNS), Euler and potential equations. A variety of results is presented and the effects of the approximations on entropy production, shock capturing, and viscous interaction are discussed.

  19. Numerical Simulations of STOVL Hot Gas Ingestion in Ground Proximity Using a Multigrid Solution Procedure

    NASA Technical Reports Server (NTRS)

    Wang, Gang

    2003-01-01

    A multigrid solution procedure for the numerical simulation of turbulent flows in complex geometries has been developed. A Full Multigrid-Full Approximation Scheme (FMG-FAS) is incorporated into the continuity and momentum equations, while the scalars are decoupled from the multigrid V-cycle. A standard k-epsilon turbulence model with wall functions has been used to close the governing equations. The numerical solution is accomplished by solving for the Cartesian velocity components either with a traditional grid staggering arrangement or with a multiple velocity grid staggering arrangement. The two solution methodologies are evaluated for relative computational efficiency. The solution procedure with the traditional staggering arrangement is subsequently applied to calculate the flow and temperature fields around a model Short Take-off and Vertical Landing (STOVL) aircraft hovering in ground proximity.
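
    For orientation only, the sketch below shows the shape of a generic multigrid V-cycle (a plain correction scheme, not the FMG-FAS cycle of the paper); the `ops` dictionary and its entries are hypothetical placeholders supplied by the caller.

      import numpy as np

      def v_cycle(level, u, f, ops, n_pre=2, n_post=2):
          # ops is a dict of caller-supplied callables:
          #   'smooth'(level, u, f), 'residual'(level, u, f),
          #   'restrict'(level, r), 'prolong'(level, e), 'coarse_solve'(f).
          if level == 0:
              return ops['coarse_solve'](f)
          for _ in range(n_pre):
              u = ops['smooth'](level, u, f)              # pre-smoothing
          r_c = ops['restrict'](level, ops['residual'](level, u, f))
          e_c = v_cycle(level - 1, np.zeros_like(r_c), r_c, ops, n_pre, n_post)
          u = u + ops['prolong'](level, e_c)              # coarse-grid correction
          for _ in range(n_post):
              u = ops['smooth'](level, u, f)              # post-smoothing
          return u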

  20. Comparing adaptive procedures for estimating the psychometric function for an auditory gap detection task.

    PubMed

    Shen, Yi

    2013-05-01

    A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.

  1. Digital adaptive flight controller development

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Alag, G.; Berry, P.; Kotob, S.

    1974-01-01

    A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Two designs are described for an example aircraft. Each of these designs uses a weighted least squares procedure to identify parameters defining the dynamics of the aircraft. The two designs differ in the way in which control law parameters are determined. One uses the solution of an optimal linear regulator problem to determine these parameters while the other uses a procedure called single stage optimization. Extensive simulation results and analysis leading to the designs are presented.
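
    The paper's identification scheme is not reproduced here; as a hedged sketch of weighted least-squares parameter identification of the kind described, a recursive least-squares identifier with exponential forgetting might look like this (names and the forgetting factor are illustrative):

      import numpy as np

      def rls_identify(phis, ys, lam=0.98, delta=1e3):
          # Estimate parameters theta in y ~ phi @ theta from rows of
          # regressors phis and measurements ys, with forgetting factor lam.
          n = phis.shape[1]
          theta = np.zeros(n)
          P = delta * np.eye(n)                        # initial covariance
          for phi, y in zip(phis, ys):
              k = P @ phi / (lam + phi @ P @ phi)      # gain vector
              theta = theta + k * (y - phi @ theta)    # parameter update
              P = (P - np.outer(k, phi) @ P) / lam     # covariance update
          return theta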

  2. A Comparison of Procedures for Content-Sensitive Item Selection in Computerized Adaptive Tests.

    ERIC Educational Resources Information Center

    Kingsbury, G. Gage; Zara, Anthony R.

    1991-01-01

    This simulation investigated two procedures that reduce differences between paper-and-pencil testing and computerized adaptive testing (CAT) by making CAT content sensitive. Results indicate that the price in terms of additional test items of using constrained CAT for content balancing is much smaller than that of using testlets. (SLD)

  3. A Solution Adaptive Technique Using Tetrahedral Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2000-01-01

    An adaptive unstructured grid refinement technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid. Simple functions of flow quantities are employed to detect dominant features of the flowfield. The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.

  4. A Discontinuous Petrov-Galerkin Methodology for Adaptive Solutions to the Incompressible Navier-Stokes Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, Nathan V.; Demkowicz, Leszek; Moser, Robert

    2015-11-15

    The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates: the inf-sup constants governing the convergence are mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.

  5. Near-Body Grid Adaption for Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.

  6. Numerical difficulties and computational procedures for thermo-hydro-mechanical coupled problems of saturated porous media

    NASA Astrophysics Data System (ADS)

    Simoni, L.; Secchi, S.; Schrefler, B. A.

    2008-12-01

    This paper analyses the numerical difficulties commonly encountered in solving fully coupled numerical models and proposes a numerical strategy to overcome them. The proposed procedure is based on space refinement and time adaptivity. The latter, which is mainly studied here, is based on the use of a finite element approach in the space domain and a Discontinuous Galerkin approximation within each time span. Error measures are defined for the jump of the solution at each time station; these constitute the parameters allowing for the time adaptivity. Some care is, however, needed for a useful definition of the jump measures. Numerical tests are presented, first to demonstrate the advantages and shortcomings of the method over the more traditional use of finite differences in time, and then to assess the efficiency of the proposed procedure for adapting the time step. The proposed method proves efficient and simple for adapting the time step in the solution of coupled field problems.

  7. Evaluation of solution procedures for material and/or geometrically nonlinear structural analysis by the direct stiffness method.

    NASA Technical Reports Server (NTRS)

    Stricklin, J. A.; Haisler, W. E.; Von Riesemann, W. A.

    1972-01-01

    This paper presents an assessment of the solution procedures available for the analysis of inelastic and/or large-deflection structural behavior. A literature survey is given which summarizes the contributions of other researchers to the analysis of structural problems exhibiting material nonlinearities and combined geometric-material nonlinearities. Attention is focused on evaluating the available computation and solution techniques. Each of the solution techniques is developed from a common equation of equilibrium in terms of pseudo forces. The solution procedures are applied to circular plates and shells of revolution in an attempt to compare and evaluate each with respect to computational accuracy, economy, and efficiency. Based on the numerical studies, observations and comments are made with regard to the accuracy and economy of each solution technique.

  8. An iterative transformation procedure for numerical solution of flutter and similar characteristic-value problems

    NASA Technical Reports Server (NTRS)

    Gossard, Myron L

    1952-01-01

    An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.

  9. A new solution procedure for a nonlinear infinite beam equation of motion

    NASA Astrophysics Data System (ADS)

    Jang, T. S.

    2016-10-01

    This paper addresses a purely theoretical question that is nevertheless fundamental in computational partial differential equations: can a linear solution structure for the equation of motion of an infinite nonlinear beam be manipulated directly to construct its nonlinear solution? Here, the equation of motion is modeled mathematically as a fourth-order nonlinear partial differential equation. To answer the question, a pseudo-parameter is first introduced to modify the equation of motion. An integral formalism for the modified equation is then found, which is taken as a linear solution structure. It enables us to formulate a nonlinear integral equation of the second kind that is equivalent to the original equation of motion. The fixed point approach, applied to the integral equation, leads to a new iterative solution procedure for constructing the nonlinear solution of the original beam equation of motion; its iterative process requires only simple, regular numerical integration, so it is straightforward to apply. A mathematical analysis of both the convergence and the uniqueness of the iterative procedure is carried out by proving a contractive character of a nonlinear operator. It follows, therefore, that the procedure is a useful nonlinear strategy for integrating the equation of motion of a nonlinear infinite beam, whereby the preceding question is answered. In addition, it is worth noticing that the pseudo-parameter introduced here plays a double role: first, it connects the original beam equation of motion with the integral equation; second, it is related to the convergence of the proposed iterative method.

  10. A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1994-01-01

    A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.

  11. A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.

  12. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  13. An adaptive staircase procedure for the E-Prime programming environment.

    PubMed

    Hairston, W David; Maldjian, Joseph A

    2009-01-01

    Many studies need to determine a subject's threshold for a given task. This can be achieved efficiently using an adaptive staircase procedure. While the logic and algorithms for staircases have been well established, the few pre-programmed routines currently available to researchers require at least moderate programming experience to integrate into new paradigms and experimental settings. Here, we describe a freely distributed routine developed for the E-Prime programming environment that can be easily integrated into any experimental protocol with only a basic understanding of E-Prime. An example experiment (visual temporal-order-judgment task) where subjects report the order of occurrence of two circles illustrates the behavior and consistency of the routine.
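
    A minimal sketch of a 2-down/1-up transformed staircase of the general kind such routines implement (the E-Prime routine itself is not reproduced; function and parameter names are illustrative):

      def two_down_one_up(respond, start, step, n_reversals=8, max_trials=1000):
          # Two consecutive correct responses make the task harder, one
          # error makes it easier; this rule converges near the
          # 70.7%-correct point.  respond(level) returns True if correct.
          level, streak, direction = start, 0, 0
          reversals, trials = [], 0
          while len(reversals) < n_reversals and trials < max_trials:
              trials += 1
              if respond(level):
                  streak += 1
                  if streak == 2:                  # two correct: step down
                      streak = 0
                      if direction == +1:
                          reversals.append(level)  # direction change = reversal
                      direction = -1
                      level -= step
              else:                                # one error: step up
                  streak = 0
                  if direction == -1:
                      reversals.append(level)
                  direction = +1
                  level += step
          return reversals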

  14. Procedures to develop a computerized adaptive test to assess patient-reported physical functioning.

    PubMed

    McCabe, Erin; Gross, Douglas P; Bulut, Okan

    2018-06-07

    The purpose of this paper is to demonstrate the procedures used to develop and implement a computerized adaptive patient-reported outcome (PRO) measure using secondary analysis of a dataset and items from fixed-format legacy measures. We conducted secondary analysis of a dataset of responses from 1429 persons with work-related lower extremity impairment. We calibrated three measures of physical functioning on the same metric, based on item response theory (IRT). We evaluated the efficiency and measurement precision of various computerized adaptive test (CAT) designs using computer simulations. IRT and confirmatory factor analyses support combining the items from the three scales into a CAT item bank of 31 items. The item parameters for IRT were calculated using the generalized partial credit model. CAT simulations show that reducing the test length from the full 31 items to a maximum of 8 or 20 items is possible without a significant loss of information (95% and 99% correlation with legacy measure scores, respectively). We demonstrated the feasibility and efficiency of using CAT for PRO measurement of physical functioning. The procedures we outlined are straightforward and can be applied to other PRO measures. Additionally, we have included all the information necessary to implement the CAT of physical functioning in the electronic supplementary material of this paper.

  15. Adaptive Discrete Hypergraph Matching.

    PubMed

    Yan, Junchi; Li, Changsheng; Li, Yin; Cao, Guitao

    2018-02-01

    This paper addresses the problem of hypergraph matching using higher-order affinity information. We propose a solver that iteratively updates the solution in the discrete domain by linear assignment approximation. The proposed method is guaranteed to converge to a stationary discrete solution and avoids the annealing procedure and ad-hoc post-binarization step that are required in several previous methods. Specifically, we start with a simple iterative discrete gradient assignment solver. Under moderate conditions, this solver can be trapped in a circular sequence whose length is related to the order of the graph matching problem. We then devise an adaptive relaxation mechanism to jump out of this degenerating case and show that the resulting new path will converge to a fixed solution in the discrete domain. The proposed method is tested on both synthetic and real-world benchmarks. The experimental results corroborate the efficacy of our method.

  16. Comparison of 7.2% hypertonic saline - 6% hydroxyethyl starch solution and 6% hydroxyethyl starch solution after the induction of anesthesia in patients undergoing elective neurosurgical procedures.

    PubMed

    Shao, Liujiazi; Wang, Baoguo; Wang, Shuangyan; Mu, Feng; Gu, Ke

    2013-01-01

    The ideal solution for fluid management during neurosurgical procedures remains controversial. The aim of this study was to compare the effects of a 7.2% hypertonic saline - 6% hydroxyethyl starch (HS-HES) solution and a 6% hydroxyethyl starch (HES) solution on clinical, hemodynamic and laboratory variables during elective neurosurgical procedures. Forty patients scheduled for elective neurosurgical procedures were randomly assigned to the HS-HES group or the HES group. After the induction of anesthesia, patients in the HS-HES group received 250 mL of HS-HES (500 mL/h), whereas the patients in the HES group received 1,000 mL of HES (1000 mL/h). The monitored variables included clinical, hemodynamic and laboratory parameters. Chictr.org: ChiCTR-TRC-12002357 The patients who received the HS-HES solution had a significant decrease in the intraoperative total fluid input (p<0.01), the volume of Ringer's solution required (p<0.05), the fluid balance (p<0.01) and their dural tension scores (p<0.05). The total urine output, blood loss, bleeding severity scores, operation duration and hemodynamic variables were similar in both groups (p>0.05). Moreover, compared with the HES group, the HS-HES group had significantly higher plasma concentrations of sodium and chloride, increasing the osmolality (p<0.01). Our results suggest that HS-HES reduced the volume of intraoperative fluid required to maintain the patients undergoing surgery and led to a decrease in the intraoperative fluid balance. Moreover, HS-HES improved the dural tension scores and provided satisfactory brain relaxation. Our results indicate that HS-HES may represent a new avenue for volume therapy during elective neurosurgical procedures.

  17. Approximate solution of the multiple watchman routes problem with restricted visibility range.

    PubMed

    Faigl, Jan

    2010-10-01

    In this paper, a new self-organizing map (SOM) based adaptation procedure is proposed to address the multiple watchman route problem with the restricted visibility range in the polygonal domain W. A watchman route is represented by a ring of connected neuron weights that evolves in W, while obstacles are considered by approximation of the shortest path. The adaptation procedure considers a coverage of W by the ring in order to attract nodes toward uncovered parts of W. The proposed procedure is experimentally verified in a set of environments and several visibility ranges. Performance of the procedure is compared with the decoupled approach based on solutions of the art gallery problem and the consecutive traveling salesman problem. The experimental results show the suitability of the proposed procedure based on relatively simple supporting geometrical structures, enabling application of the SOM principles to watchman route problems in W.
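
    A minimal sketch of the core SOM adaptation step for a ring of neuron weights attracted toward a goal point (a 2-D toy version; the neighbourhood width, learning rate and names are illustrative, and the shortest-path and coverage logic of the paper are omitted):

      import numpy as np

      def som_ring_update(weights, target, alpha=0.6, sigma=2.0):
          # weights: float array of shape (n, 2), node positions on a closed
          # ring; the winner (closest node) and its ring neighbours move
          # toward the target with a Gaussian neighbourhood weighting.
          n = len(weights)
          winner = int(np.argmin(np.linalg.norm(weights - target, axis=1)))
          for j in range(n):
              d = min(abs(j - winner), n - abs(j - winner))   # distance along the ring
              weights[j] += alpha * np.exp(-(d * d) / (sigma * sigma)) * (target - weights[j])
          return weights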

  18. On the dynamics of some grid adaption schemes

    NASA Technical Reports Server (NTRS)

    Sweby, Peter K.; Yee, Helen C.

    1994-01-01

    The dynamics of a one-parameter family of mesh equidistribution schemes coupled with finite difference discretisations of linear and nonlinear convection-diffusion model equations is studied numerically. It is shown that, when time marched to steady state, the grid adaption not only influences the stability and convergence rate of the overall scheme, but can also introduce spurious dynamics to the numerical solution procedure.

  19. Comparison of 7.2% hypertonic saline - 6% hydroxyethyl starch solution and 6% hydroxyethyl starch solution after the induction of anesthesia in patients undergoing elective neurosurgical procedures

    PubMed Central

    Shao, Liujiazi; Wang, Baoguo; Wang, Shuangyan; Mu, Feng; Gu, Ke

    2013-01-01

    OBJECTIVE: The ideal solution for fluid management during neurosurgical procedures remains controversial. The aim of this study was to compare the effects of a 7.2% hypertonic saline - 6% hydroxyethyl starch (HS-HES) solution and a 6% hydroxyethyl starch (HES) solution on clinical, hemodynamic and laboratory variables during elective neurosurgical procedures. METHODS: Forty patients scheduled for elective neurosurgical procedures were randomly assigned to the HS-HES group or the HES group. After the induction of anesthesia, patients in the HS-HES group received 250 mL of HS-HES (500 mL/h), whereas the patients in the HES group received 1,000 mL of HES (1000 mL/h). The monitored variables included clinical, hemodynamic and laboratory parameters. Chictr.org: ChiCTR-TRC-12002357 RESULTS: The patients who received the HS-HES solution had a significant decrease in the intraoperative total fluid input (p<0.01), the volume of Ringer's solution required (p<0.05), the fluid balance (p<0.01) and their dural tension scores (p<0.05). The total urine output, blood loss, bleeding severity scores, operation duration and hemodynamic variables were similar in both groups (p>0.05). Moreover, compared with the HES group, the HS-HES group had significantly higher plasma concentrations of sodium and chloride, increasing the osmolality (p<0.01). CONCLUSION: Our results suggest that HS-HES reduced the volume of intraoperative fluid required to maintain the patients undergoing surgery and led to a decrease in the intraoperative fluid balance. Moreover, HS-HES improved the dural tension scores and provided satisfactory brain relaxation. Our results indicate that HS-HES may represent a new avenue for volume therapy during elective neurosurgical procedures. PMID:23644851

  20. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.

  1. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann; Usab, William J., Jr.

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.

  2. Physiology driven adaptivity for the numerical solution of the bidomain equations.

    PubMed

    Whiteley, Jonathan P

    2007-09-01

    Previous work [Whiteley, J. P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006] derived a stable, semi-implicit numerical scheme for solving the bidomain equations. This scheme allows the timestep used when solving the bidomain equations numerically to be chosen by accuracy considerations rather than stability considerations. In this study we modify the scheme to allow an adaptive numerical solution in both time and space. The spatial mesh size is determined by the gradients of the transmembrane and extracellular potentials, while the timestep is determined by the values of (i) the fast sodium current and (ii) the calcium-release current from the junctional sarcoplasmic reticulum to the myoplasm. For the two-dimensional simulations presented here, combining the numerical algorithm of the paper cited above with the adaptive algorithm presented here increases computational efficiency by a factor of around 250 over previous work, with significantly less computational memory required. The speedup for three-dimensional simulations is likely to be more impressive.
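
    A schematic sketch of the timestep rule described above (the thresholds, names and two-level switch are illustrative simplifications, not the paper's exact criterion):

      def select_timestep(i_na, i_rel, dt_min, dt_max, na_thresh, rel_thresh):
          # Use the small timestep whenever the fast sodium current or the
          # junctional-SR calcium-release current is large in magnitude,
          # otherwise use the large timestep.
          if abs(i_na) > na_thresh or abs(i_rel) > rel_thresh:
              return dt_min
          return dt_max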

  3. Cooperative solutions coupling a geometry engine and adaptive solver codes

    NASA Technical Reports Server (NTRS)

    Dickens, Thomas P.

    1995-01-01

    Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.

  4. qPR: An adaptive partial-report procedure based on Bayesian inference.

    PubMed

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-08-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6-8 cue delays or 600-800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations.

  5. qPR: An adaptive partial-report procedure based on Bayesian inference

    PubMed Central

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-01-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6–8 cue delays or 600–800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations. PMID:27580045

  6. Hybrid Solution-Adaptive Unstructured Cartesian Method for Large-Eddy Simulation of Detonation in Multi-Phase Turbulent Reactive Mixtures

    DTIC Science & Technology

    2012-03-27

    Report documentation fields: Hybrid Solution-Adaptive Unstructured Cartesian Method for Large-Eddy Simulation of Detonation in Multi-Phase Turbulent Reactive Mixtures; CCL Report TR-2012-03-03; grant number FA9550... Application areas mentioned include pulse-detonation engines (PDE), stage separation, supersonic cavity oscillations, hypersonic aerodynamics, and detonation-induced structural...

  7. Multiscale computations with a wavelet-adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Rastigejev, Yevgenii Anatolyevich

    A wavelet-based adaptive multiresolution algorithm for the numerical solution of multiscale problems governed by partial differential equations is introduced. The main features of the method include fast algorithms for the calculation of wavelet coefficients and the approximation of derivatives on nonuniform stencils. The connection between the wavelet order and the size of the stencil is established. The algorithm is based on mathematically well-established wavelet theory, which allows us to provide error estimates of the solution; these are used in conjunction with an appropriate threshold criterion to adapt the collocation grid. Efficient data structures for grid representation, as well as related computational algorithms to support the grid rearrangement procedure, are developed. The algorithm is applied to the simulation of phenomena described by the Navier-Stokes equations. First, we study the ignition and subsequent viscous detonation of a H2 : O2 : Ar mixture in a one-dimensional shock tube. Subsequently, we apply the algorithm to the two- and three-dimensional benchmark problem of incompressible flow in a lid-driven cavity at large Reynolds numbers. For these cases we show that solutions of accuracy comparable to the benchmarks are obtained with more than an order of magnitude reduction in degrees of freedom. The simulations show the striking ability of the algorithm to adapt to a solution having different scales at different spatial locations, producing accurate results at a relatively low computational cost.
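
    A toy sketch of the wavelet-thresholding idea used to adapt the grid, here with a single level of Haar detail coefficients on a uniform 1-D array (the real algorithm uses higher-order wavelets and nonuniform stencils; the names and the even-length assumption are illustrative):

      import numpy as np

      def haar_details(u):
          # One level of Haar detail coefficients for an even-length array.
          return (u[0::2] - u[1::2]) / np.sqrt(2.0)

      def significant_points(u, eps):
          # Flag grid points whose local Haar detail exceeds eps; only the
          # flagged points would be kept in the adapted collocation grid.
          d = haar_details(u)
          flags = np.zeros(len(u), dtype=bool)
          idx = np.where(np.abs(d) > eps)[0]
          flags[2 * idx] = True
          flags[2 * idx + 1] = True
          return flags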

  8. Adaptive multigrid domain decomposition solutions for viscous interacting flows

    NASA Technical Reports Server (NTRS)

    Rubin, Stanley G.; Srinivasan, Kumar

    1992-01-01

    Several viscous incompressible flows with strong pressure interaction and/or axial flow reversal are considered with an adaptive multigrid domain decomposition procedure. Specific examples include the triple deck structure surrounding the trailing edge of a flat plate, the flow recirculation in a trough geometry, and the flow in a rearward facing step channel. For the latter case, there are multiple recirculation zones, of different character, for laminar and turbulent flow conditions. A pressure-based form of flux-vector splitting is applied to the Navier-Stokes equations, which are represented by an implicit lowest-order reduced Navier-Stokes (RNS) system and a purely diffusive, higher-order, deferred-corrector. A trapezoidal or box-like form of discretization insures that all mass conservation properties are satisfied at interfacial and outflow boundaries, even for this primitive-variable, non-staggered grid computation.

  9. Topology and grid adaption for high-speed flow computations

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Tiwari, Surendra N.

    1989-01-01

    This study investigates the effects of grid topology and grid adaptation on numerical solutions of the Navier-Stokes equations. In the first part of this study, a general procedure is presented for computation of high-speed flow over complex three-dimensional configurations. The flow field is simulated on the surface of a Butler wing in a uniform stream. Results are presented for Mach number 3.5 and a Reynolds number of 2,000,000. The O-type and H-type grids have been used for this study, and the results are compared with each other and with other theoretical and experimental results. The results demonstrate that while the H-type grid is suitable for the leading and trailing edges, a more accurate solution can be obtained for the middle part of the wing with an O-type grid. In the second part of this study, methods of grid adaption are reviewed and a method is developed that can adapt to several variables. The method is algebraic and based on a variational approach, and it has been formulated so that no matrix inversion is needed. This method is used in conjunction with the calculation of hypersonic flow over a blunt-nose body. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.

  10. Updating QR factorization procedure for solution of linear least squares problem with equality constraints.

    PubMed

    Zeb, Salman; Yousaf, Muhammad

    2017-01-01

    In this article, we present a QR updating procedure as a solution approach for the linear least squares problem with equality constraints. We reduce the constrained problem to an unconstrained linear least squares problem and partition it to extract a small subproblem. The QR factorization of the subproblem is calculated, and updating techniques are then applied to its upper triangular factor R to obtain the solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
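
    For context, the sketch below solves the same equality-constrained least squares problem by the classical null-space method built on a single QR factorization; it is not the updating algorithm of the paper, and the test matrices are random placeholders.

```python
import numpy as np

def lse_nullspace(A, b, B, d):
    """Solve min ||A x - b||_2 subject to B x = d (B with full row rank)
    by the classical null-space method using a full QR of B^T.
    A sketch for context; the updating algorithm in the paper differs."""
    p = B.shape[0]
    Q, R = np.linalg.qr(B.T, mode="complete")   # B^T = Q R, Q is n x n
    R1 = R[:p, :p]
    y1 = np.linalg.solve(R1.T, d)               # component that satisfies the constraint
    AQ = A @ Q
    y2, *_ = np.linalg.lstsq(AQ[:, p:], b - AQ[:, :p] @ y1, rcond=None)
    return Q @ np.concatenate([y1, y2])

rng = np.random.default_rng(1)
A, b = rng.standard_normal((20, 6)), rng.standard_normal(20)
B, d = rng.standard_normal((2, 6)), rng.standard_normal(2)
x = lse_nullspace(A, b, B, d)
print(np.allclose(B @ x, d))   # constraint satisfied to rounding error
```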

  11. Space-time adaptive solution of inverse problems with the discrete adjoint method

    NASA Astrophysics Data System (ADS)

    Alexe, Mihai; Sandu, Adrian

    2014-08-01

    This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.

  12. Breeding to adapt agriculture to climate change: affordable phenotyping solutions.

    PubMed

    Araus, José L; Kefauver, Shawn C

    2018-05-28

    Breeding is one of the central pillars of adaptation of crops to climate change. However, phenotyping is a key bottleneck that is limiting breeding efficiency. The awareness of phenotyping as a breeding limitation is not only sustained by the lack of adequate approaches, but also by the perception that phenotyping is an expensive activity. Phenotyping is not just dependent on the choice of appropriate traits and tools (e.g. sensors) but relies on how these tools are deployed on their carrying platforms, the speed and volume of data extraction and analysis (throughput), the handling of spatial variability and characterization of environmental conditions, and finally how all the information is integrated and processed. Affordable high throughput phenotyping aims to achieve reasonably priced solutions for all the components comprising the phenotyping pipeline. This mini-review will cover current and imminent solutions for all these components, from the increasing use of conventional digital RGB cameras, within the category of sensors, to open-access cloud-structured data processing and the use of smartphones. Emphasis will be placed on field phenotyping, which is really the main application for day-to-day phenotyping. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. An efficient solution procedure for the thermoelastic analysis of truss space structures

    NASA Technical Reports Server (NTRS)

    Givoli, D.; Rand, O.

    1992-01-01

    A solution procedure is proposed for the thermal and thermoelastic analysis of truss space structures in periodic motion. In this method, the spatial domain is first discretized using a consistent finite element formulation. Then the resulting semi-discrete equations in time are solved analytically by using Fourier decomposition. Full advantage is taken of geometrical symmetry. An algorithm is presented for the calculation of the heat flux distribution. The method is demonstrated via a numerical example of a cylindrically shaped space structure.
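
    The sketch below illustrates the Fourier-decomposition step on a generic periodically forced semi-discrete system M q'' + K q = F(t): each harmonic is obtained from one small linear solve. The matrices, period, and forcing are placeholder data, not the truss model of the paper, and damping is omitted.

```python
import numpy as np

def periodic_response(M, K, F_samples, T):
    """Steady periodic response of M q'' + K q = F(t), F periodic with period
    T, obtained by solving one small linear system per Fourier harmonic.
    Placeholder data; damping omitted for brevity."""
    n_t, n_dof = F_samples.shape
    F_hat = np.fft.rfft(F_samples, axis=0)           # harmonic amplitudes
    omegas = 2.0 * np.pi * np.arange(F_hat.shape[0]) / T
    q_hat = np.zeros_like(F_hat)
    for k, w in enumerate(omegas):
        q_hat[k] = np.linalg.solve(K - (w ** 2) * M, F_hat[k])
    # Back to the time domain at the original sample instants
    return np.fft.irfft(q_hat, n=n_t, axis=0)

# Tiny 2-DOF example with a single-harmonic load
M = np.diag([1.0, 2.0])
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
T, n_t = 2.0, 64
t = np.linspace(0.0, T, n_t, endpoint=False)
F = np.column_stack([np.cos(2 * np.pi * t / T), np.zeros(n_t)])
q = periodic_response(M, K, F, T)
print(q.shape)
```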

  14. AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation

    DOE PAGES

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; ...

    2016-04-19

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
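
    The generalized finite difference ingredient can be illustrated with a small weighted least-squares fit: a local quadratic is fitted to scattered neighbors and the Laplacian is read off from its second-derivative coefficients. The weight law and stencil below are illustrative assumptions, not those of AP-Cloud.

```python
import numpy as np

def gfd_laplacian(center, neighbors, values, value_center):
    """Estimate the 2D Laplacian at `center` from scattered neighbors via a
    weighted least-squares fit of a local quadratic Taylor expansion.
    Weights and stencil selection are illustrative, not AP-Cloud's."""
    d = neighbors - center                     # offsets, shape (m, 2)
    dx, dy = d[:, 0], d[:, 1]
    # Unknowns: [u_x, u_y, u_xx, u_yy, u_xy]
    A = np.column_stack([dx, dy, 0.5 * dx**2, 0.5 * dy**2, dx * dy])
    rhs = values - value_center
    w = 1.0 / (np.linalg.norm(d, axis=1) ** 2 + 1e-12)   # inverse-distance^2 weights
    W = np.sqrt(w)[:, None]
    coeff, *_ = np.linalg.lstsq(W * A, W[:, 0] * rhs, rcond=None)
    return coeff[2] + coeff[3]                 # u_xx + u_yy

# Check against u = x^2 + y^2, whose Laplacian is 4 everywhere
rng = np.random.default_rng(2)
center = np.array([0.3, 0.4])
nbrs = center + 0.05 * rng.standard_normal((12, 2))
u = lambda p: p[..., 0] ** 2 + p[..., 1] ** 2
print(gfd_laplacian(center, nbrs, u(nbrs), u(center)))   # approximately 4.0
```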

  15. AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov–Poisson equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xingyu; Samulyak, Roman, E-mail: roman.samulyak@stonybrook.edu; Computational Science Initiative, Brookhaven National Laboratory, Upton, NY 11973

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  16. AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  17. A Solution Adaptive Structured/Unstructured Overset Grid Flow Solver with Applications to Helicopter Rotor Flows

    NASA Technical Reports Server (NTRS)

    Duque, Earl P. N.; Biswas, Rupak; Strawn, Roger C.

    1995-01-01

    This paper summarizes a method that solves both the three dimensional thin-layer Navier-Stokes equations and the Euler equations using overset structured and solution adaptive unstructured grids with applications to helicopter rotor flowfields. The overset structured grids use an implicit finite-difference method to solve the thin-layer Navier-Stokes/Euler equations while the unstructured grid uses an explicit finite-volume method to solve the Euler equations. Solutions on a helicopter rotor in hover show the ability to accurately convect the rotor wake. However, isotropic subdivision of the tetrahedral mesh rapidly increases the overall problem size.

  18. Adaptive Standard Operating Procedures for Complex Disasters

    DTIC Science & Technology

    2017-03-01

    [Indexing excerpt only; the full abstract was not recovered. The recoverable fragment states that the experiment supports the argument for implementing the adaptive SOP (standard operating procedure) design proposals. Cited works include Developments in Business Simulation and Experiential Learning 33 (2014); Lagadec and Topper, "How Crises Model the Modern World…"; Kalay, "An Event-Based Model to Simulate Human Behaviour in Built Environments," Proceedings of the 30th eCAADe Conference 1 (2012); and Snowden.]

  19. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    NASA Technical Reports Server (NTRS)

    Davies, C. B.

    1994-01-01

    SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
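
    A one-dimensional sketch of the tension-spring idea is given below: spring stiffness grows with the local gradient of a flow quantity, and the equilibrium point positions along one coordinate line follow from a single tridiagonal solve. The weight law and parameters are assumptions for illustration, not the SAGE formulation.

```python
import numpy as np
from scipy.linalg import solve_banded

def adapt_line(x, q, strength=20.0):
    """One pass of tension-spring redistribution along one coordinate line.
    Spring stiffness w rises with the local gradient of q, so the equilibrium
    positions cluster points in high-gradient regions.  The weight law here
    is an illustrative assumption, not the SAGE formulation."""
    n = len(x)
    grad = np.abs(np.gradient(q, x))
    w = 1.0 + strength * grad / grad.max()           # nodal weights
    wf = 0.5 * (w[:-1] + w[1:])                      # spring stiffness per interval
    # Equilibrium: wf[i]*(x[i+1]-x[i]) = wf[i-1]*(x[i]-x[i-1]) for interior i,
    # with endpoints fixed -> tridiagonal system for the interior positions.
    m = n - 2
    ab = np.zeros((3, m))
    ab[0, 1:] = wf[1:m]                              # superdiagonal
    ab[1, :] = -(wf[:m] + wf[1:m + 1])               # diagonal
    ab[2, :-1] = wf[1:m]                             # subdiagonal
    rhs = np.zeros(m)
    rhs[0] -= wf[0] * x[0]
    rhs[-1] -= wf[m] * x[-1]
    x_new = x.copy()
    x_new[1:-1] = solve_banded((1, 1), ab, rhs)
    return x_new

x = np.linspace(0.0, 1.0, 41)
q = np.tanh(40.0 * (x - 0.6))                        # a shock-like profile
print(adapt_line(x, q)[:5])
```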

  20. “They Have to Adapt to Learn”: Surgeons’ Perspectives on the Role of Procedural Variation in Surgical Education

    PubMed Central

    Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei

    2017-01-01

    OBJECTIVE Clinical research increasingly acknowledges the existence of significant procedural variation in surgical practice. This study explored surgeons’ perspectives regarding the influence of intersurgeon procedural variation on the teaching and learning of surgical residents. DESIGN AND SETTING This qualitative study used a grounded theory-based analysis of observational and interview data. Observational data were collected in 3 tertiary care teaching hospitals in Ontario, Canada. Semistructured interviews explored potential procedural variations arising during the observations and prompts from an iteratively refined guide. Ongoing data analysis refined the theoretical framework and informed data collection strategies, as prescribed by the iterative nature of grounded theory research. PARTICIPANTS Our sample included 99 hours of observation across 45 cases with 14 surgeons. Semistructured, audio-recorded interviews (n = 14) occurred immediately following observational periods. RESULTS Surgeons endorsed the use of intersurgeon procedural variations to teach residents about adapting to the complexity of surgical practice and the norms of surgical culture. Surgeons suggested that residents’ efforts to identify thresholds of principle and preference are crucial to professional development. Principles that emerged from the study included the following: (1) knowing what comes next, (2) choosing the right plane, (3) handling tissue appropriately, (4) recognizing the abnormal, and (5) making safe progress. Surgeons suggested that learning to follow these principles while maintaining key aspects of surgical culture, like autonomy and individuality, are important social processes in surgical education. CONCLUSIONS Acknowledging intersurgeon variation has important implications for curriculum development and workplace-based assessment in surgical education. Adapting to intersurgeon procedural variations may foster versatility in surgical residents. However, the

  1. An upwind method for the solution of the 3D Euler and Navier-Stokes equations on adaptively refined meshes

    NASA Astrophysics Data System (ADS)

    Aftosmis, Michael J.

    1992-10-01

    A new node based upwind scheme for the solution of the 3D Navier-Stokes equations on adaptively refined meshes is presented. The method uses a second-order upwind TVD scheme to integrate the convective terms, and discretizes the viscous terms with a new compact central difference technique. Grid adaptation is achieved through directional division of hexahedral cells in response to evolving features as the solution converges. The method is advanced in time with a multistage Runge-Kutta time stepping scheme. Two- and three-dimensional examples establish the accuracy of the inviscid and viscous discretization. These investigations highlight the ability of the method to produce crisp shocks, while accurately and economically resolving viscous layers. The representation of these and other structures is shown to be comparable to that obtained by structured methods. Further 3D examples demonstrate the ability of the adaptive algorithm to effectively locate and resolve multiple scale features in complex 3D flows with many interacting, viscous, and inviscid structures.

  2. Genetic algorithms in adaptive fuzzy control

    NASA Technical Reports Server (NTRS)

    Karr, C. Lucas; Harper, Tony R.

    1992-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust fuzzy membership functions in response to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific computer-simulated chemical system is used to demonstrate the ideas presented.
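
    The sketch below is a bare-bones illustration of the GA-plus-FLC idea: a real-coded genetic algorithm evolves two parameters of a toy Sugeno-style fuzzy controller (the spread of the triangular membership functions on the error and the magnitude of the control actions) to track a setpoint on a first-order plant. The plant, fitness measure, and GA settings are all placeholder assumptions, not the Bureau of Mines systems described above.

```python
import numpy as np
rng = np.random.default_rng(3)

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def flc_output(error, p):
    """Toy Sugeno-style controller: three triangular sets on the error, with
    centers -p[0], 0, +p[0], mapped to actions -p[1], 0, +p[1]."""
    c, u = p
    mu = np.array([tri(error, -2*c, -c, 0), tri(error, -c, 0, c), tri(error, 0, c, 2*c)])
    acts = np.array([-u, 0.0, u])
    return float((mu * acts).sum() / (mu.sum() + 1e-12))

def fitness(p, setpoint=1.0, dt=0.05, steps=200):
    """Integrated squared tracking error for a toy first-order plant x' = -x + u."""
    x, cost = 0.0, 0.0
    for _ in range(steps):
        u = flc_output(setpoint - x, p)
        x += dt * (-x + u)
        cost += dt * (setpoint - x) ** 2
    return cost

# Bare-bones real-coded GA: tournament selection, blend crossover, Gaussian mutation
pop = rng.uniform([0.1, 0.1], [3.0, 3.0], size=(30, 2))
for gen in range(40):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[[min(rng.integers(0, 30, 2), key=lambda i: scores[i]) for _ in range(30)]]
    kids = 0.5 * (parents + parents[rng.permutation(30)])          # blend crossover
    kids += 0.1 * rng.standard_normal(kids.shape)                  # mutation
    kids = np.clip(kids, 0.05, 5.0)
    pop = np.vstack([pop[np.argmin(scores)][None, :], kids[1:]])   # elitism
print("best parameters:", pop[0], "cost:", fitness(pop[0]))
```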

  3. "They Have to Adapt to Learn": Surgeons' Perspectives on the Role of Procedural Variation in Surgical Education.

    PubMed

    Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei

    2016-01-01

    Clinical research increasingly acknowledges the existence of significant procedural variation in surgical practice. This study explored surgeons' perspectives regarding the influence of intersurgeon procedural variation on the teaching and learning of surgical residents. This qualitative study used a grounded theory-based analysis of observational and interview data. Observational data were collected in 3 tertiary care teaching hospitals in Ontario, Canada. Semistructured interviews explored potential procedural variations arising during the observations and prompts from an iteratively refined guide. Ongoing data analysis refined the theoretical framework and informed data collection strategies, as prescribed by the iterative nature of grounded theory research. Our sample included 99 hours of observation across 45 cases with 14 surgeons. Semistructured, audio-recorded interviews (n = 14) occurred immediately following observational periods. Surgeons endorsed the use of intersurgeon procedural variations to teach residents about adapting to the complexity of surgical practice and the norms of surgical culture. Surgeons suggested that residents' efforts to identify thresholds of principle and preference are crucial to professional development. Principles that emerged from the study included the following: (1) knowing what comes next, (2) choosing the right plane, (3) handling tissue appropriately, (4) recognizing the abnormal, and (5) making safe progress. Surgeons suggested that learning to follow these principles while maintaining key aspects of surgical culture, like autonomy and individuality, are important social processes in surgical education. Acknowledging intersurgeon variation has important implications for curriculum development and workplace-based assessment in surgical education. Adapting to intersurgeon procedural variations may foster versatility in surgical residents. However, the existence of procedural variations and their active use in surgeons

  4. Wavelet multiresolution analyses adapted for the fast solution of boundary value ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Jawerth, Bjoern; Sweldens, Wim

    1993-01-01

    We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.

  5. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) the computational capabilities of Fup basis functions with compact support, able to resolve all spatial and temporal scales, 2) multiresolution presentation of the heterogeneity as well as of all other input and output variables, 3) an accurate, adaptive and efficient strategy, and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multiscale nature of the methodology enables not only computational efficiency and accuracy, but also a description of subsurface processes closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since "state of the art" multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in lines where the solution changes are intensive. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across

  6. Refined numerical solution of the transonic flow past a wedge

    NASA Technical Reports Server (NTRS)

    Liang, S.-M.; Fung, K.-Y.

    1985-01-01

    A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.

  7. An Adaptively-Refined, Cartesian, Cell-Based Scheme for the Euler and Navier-Stokes Equations. Ph.D. Thesis - Michigan Univ.

    NASA Technical Reports Server (NTRS)

    Coirier, William John

    1994-01-01

    A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a

  8. Adaptive [theta]-methods for pricing American options

    NASA Astrophysics Data System (ADS)

    Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran

    2008-12-01

    We develop adaptive [theta]-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of [theta]-methods would require a Newton-type iterative procedure at each time step thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which [theta]-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
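
    To make the time-stepping concrete, the sketch below implements a θ-scheme for an American put on a fixed grid. One deliberate simplification: instead of the small continuous penalty term used in the paper to keep the scheme linearly implicit, the code applies an explicit projection onto the payoff after each step, a cruder but shorter device. All market and grid parameters are illustrative.

```python
import numpy as np
from scipy.linalg import solve_banded

def american_put_theta(K=100.0, r=0.05, sigma=0.3, T=1.0,
                       S_max=300.0, N=300, M=400, theta=0.5):
    """Theta-scheme for an American put on a fixed S-grid.  The linear
    Black-Scholes operator is handled by the theta-method (one tridiagonal
    solve per step); the early-exercise constraint is enforced here by a
    simple projection onto the payoff, whereas the paper instead adds a
    small continuous penalty term.  Parameters are illustrative."""
    dS, dt = S_max / N, T / M
    S = np.linspace(0.0, S_max, N + 1)
    payoff = np.maximum(K - S, 0.0)
    V = payoff.copy()

    i = np.arange(1, N)
    a = 0.5 * sigma**2 * S[i]**2 / dS**2
    b = 0.5 * r * S[i] / dS
    lower, diag, upper = a - b, -2.0 * a - r, a + b   # L V = lower*V[i-1] + diag*V[i] + upper*V[i+1]

    # Banded matrix for (I - dt*theta*L) acting on the interior unknowns
    ab = np.zeros((3, N - 1))
    ab[0, 1:] = -dt * theta * upper[:-1]
    ab[1, :] = 1.0 - dt * theta * diag
    ab[2, :-1] = -dt * theta * lower[1:]

    for _ in range(M):
        LV = lower * V[i - 1] + diag * V[i] + upper * V[i + 1]
        rhs = V[i] + dt * (1.0 - theta) * LV
        rhs[0] += dt * theta * lower[0] * K           # boundary V(0) = K for a put
        V_int = solve_banded((1, 1), ab, rhs)
        V[i] = np.maximum(V_int, payoff[i])           # projection (paper: penalty term)
        V[0], V[-1] = K, 0.0
    return S, V

S, V = american_put_theta()
print(float(np.interp(100.0, S, V)))   # rough at-the-money price
```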

  9. Adaptive Distributed Environment for Procedure Training (ADEPT)

    NASA Technical Reports Server (NTRS)

    Domeshek, Eric; Ong, James; Mohammed, John

    2013-01-01

    ADEPT (Adaptive Distributed Environment for Procedure Training) is designed to provide more effective, flexible, and portable training for NASA systems controllers. When creating a training scenario, an exercise author can specify a representative rationale structure using the graphical user interface, annotating the results with instructional texts where needed. The author's structure may distinguish between essential and optional parts of the rationale, and may also include "red herrings" - hypotheses that are essential to consider, until evidence and reasoning allow them to be ruled out. The system is built from pre-existing components, including Stottler Henke's SimVentive instructional simulation authoring tool and runtime. To that, a capability was added to author and exploit explicit control decision rationale representations. ADEPT uses SimVentive's Scalable Vector Graphics (SVG)-based interactive graphic display capability as the basis of the tool for quickly noting aspects of decision rationale in graph form. The ADEPT prototype is built in Java, and will run on any computer using Windows, MacOS, or Linux. No special peripheral equipment is required. The software enables a style of student/tutor interaction focused on the reasoning behind systems control behavior that better mimics proven Socratic human tutoring behaviors for highly cognitive skills. It supports fast, easy, and convenient authoring of such tutoring behaviors, allowing specification of detailed scenario-specific, but content-sensitive, high-quality tutor hints and feedback. The system places relatively light data-entry demands on the student to enable its rationale-centered discussions, and provides a support mechanism for fostering coherence in the student/tutor dialog by including focusing, sequencing, and utterance tuning mechanisms intended to better fit tutor hints and feedback into the ongoing context.

  10. Adaptive process control using fuzzy logic and genetic algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  11. An adapted isolation procedure reveals Photobacterium spp. as common spoilers on modified atmosphere packaged meats.

    PubMed

    Hilgarth, M; Fuertes-Pèrez, S; Ehrmann, M; Vogel, R F

    2018-04-01

    The genus Photobacterium comprises species of marine bacteria, commonly found in open-ocean and deep-sea environments. Some species (e.g. Photobacterium phosphoreum) are associated with fish spoilage. Recently, culture-independent studies have drawn attention to the presence of photobacteria on meat. This study employed a comparative isolation approach of Photobacterium spp. and aimed to develop an adapted isolation procedure for recovery from food samples, as demonstrated for different meats: Marine broth is used for resuspending and dilution of food samples, followed by aerobic cultivation on marine broth agar supplemented with meat extract and vancomycin at 15°C for 72 h. Identification of spoilage-associated microbiota was carried out via Matrix Assisted Laser Desorption/Ionization Time of Flight Mass Spectrometry using a database supplemented with additional mass spectrometry profiles of Photobacterium spp. This study provides evidence for the common abundance of multiple Photobacterium species in relevant quantities on various modified atmosphere packaged meats. Photobacterium carnosum was predominant on beef and chicken, while Photobacterium iliopiscarium represented the major species on pork and Photobacterium phosphoreum on salmon, respectively. This study demonstrates highly frequent isolation of multiple photobacteria (Photobacterium carnosum, Photobacterium phosphoreum, and Photobacterium iliopiscarium) from different modified-atmosphere packaged spoiled and unspoiled meats using an adapted isolation procedure. The abundance of photobacteria in high numbers provides evidence for the hitherto neglected importance and relevance of Photobacterium spp. to meat spoilage. © 2018 The Society for Applied Microbiology.

  12. Boundedness of the solutions for certain classes of fractional differential equations with application to adaptive systems.

    PubMed

    Aguila-Camacho, Norelys; Duarte-Mermoud, Manuel A

    2016-01-01

    This paper presents the analysis of three classes of fractional differential equations appearing in the field of fractional adaptive systems, for the case when the fractional order is in the interval α ∈(0,1] and the Caputo definition for fractional derivatives is used. The boundedness of the solutions is proved for all three cases, and the convergence to zero of the mean value of one of the variables is also proved. Applications of the obtained results to fractional adaptive schemes in the context of identification and control problems are presented at the end of the paper, including numerical simulations which support the analytical results. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
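
    For reference, the Caputo fractional derivative for orders α ∈ (0,1], as it is commonly defined (a standard definition, not quoted from the paper), is:

```latex
% Caputo fractional derivative of order \alpha \in (0,1] (standard definition)
{}^{C}\!D^{\alpha} x(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{x'(\tau)}{(t-\tau)^{\alpha}}\, \mathrm{d}\tau ,
\qquad 0 < \alpha < 1, \qquad {}^{C}\!D^{1} x(t) = x'(t).
```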

  13. Generic Procedure for Coupling the PHREEQC Geochemical Modeling Framework with Flow and Solute Transport Simulators

    NASA Astrophysics Data System (ADS)

    Wissmeier, L. C.; Barry, D. A.

    2009-12-01

    Computer simulations of water availability and quality play an important role in state-of-the-art water resources management. However, many of the most utilized software programs focus either on physical flow and transport phenomena (e.g., MODFLOW, MT3DMS, FEFLOW, HYDRUS) or on geochemical reactions (e.g., MINTEQ, PHREEQC, CHESS, ORCHESTRA). In recent years, several couplings between both genres of programs evolved in order to consider interactions between flow and biogeochemical reactivity (e.g., HP1, PHWAT). Software coupling procedures can be categorized as ‘close couplings’, where programs pass information via the memory stack at runtime, and ‘remote couplings’, where the information is exchanged at each time step via input/output files. The former generally involves modifications of software codes and therefore expert programming skills are required. We present a generic recipe for remotely coupling the PHREEQC geochemical modeling framework and flow and solute transport (FST) simulators. The iterative scheme relies on operator splitting with continuous re-initialization of PHREEQC and the FST of choice at each time step. Since PHREEQC calculates the geochemistry of aqueous solutions in contact with soil minerals, the procedure is primarily designed for couplings to FST’s for liquid phase flow in natural environments. It requires the accessibility of initial conditions and numerical parameters such as time and space discretization in the input text file for the FST and control of the FST via commands to the operating system (batch on Windows; bash/shell on Unix/Linux). The coupling procedure is based on PHREEQC’s capability to save the state of a simulation with all solid, liquid and gaseous species as a PHREEQC input file by making use of the dump file option in the TRANSPORT keyword. The output from one reaction calculation step is therefore reused as input for the following reaction step where changes in element amounts due to advection
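
    The structure of such a remote, file-based coupling is sketched below as a runnable toy: the external FST and PHREEQC runs are replaced by stand-in functions (simple upwind advection and first-order decay) so that only the operator-splitting loop is illustrated. Nothing in the sketch reproduces PHREEQC's input format, keywords, or command line.

```python
import numpy as np

# Structural sketch of the remote operator-splitting coupling described above.
# A real coupling would write input files and launch the FST simulator and
# PHREEQC as external programs at the marked points; here both steps are
# replaced by toy stand-ins so the loop is runnable.
def transport_step(c, dt, velocity=1.0, dx=1.0):
    """Stand-in for the FST call: simple upwind advection of concentration c."""
    c_new = c.copy()
    c_new[1:] -= velocity * dt / dx * (c[1:] - c[:-1])
    return c_new

def reaction_step(c, dt, rate=0.2):
    """Stand-in for the PHREEQC call: first-order decay of the solute."""
    return c * np.exp(-rate * dt)

def couple(c0, n_steps, dt):
    c = c0.copy()
    for _ in range(n_steps):
        # 1) transport with frozen chemistry (would re-initialize and run the FST)
        c = transport_step(c, dt)
        # 2) chemistry with frozen transport (would re-initialize and run PHREEQC,
        #    reading the dumped state back in for the next cycle)
        c = reaction_step(c, dt)
    return c

c0 = np.zeros(50); c0[:5] = 1.0          # a solute pulse entering the column
print(couple(c0, n_steps=40, dt=0.5).round(3))
```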

  14. A grid generation and flow solution method for the Euler equations on unstructured grids

    NASA Astrophysics Data System (ADS)

    Anderson, W. Kyle

    1994-01-01

    A grid generation and flow solution algorithm for the Euler equations on unstructured grids is presented. The grid generation scheme utilizes Delaunay triangulation and self-generates the field points for the mesh based on cell aspect ratios and allows for clustering near solid surfaces. The flow solution method is an implicit algorithm in which the linear set of equations arising at each time step is solved using a Gauss Seidel procedure which is completely vectorizable. In addition, a study is conducted to examine the number of subiterations required for good convergence of the overall algorithm. Grid generation results are shown in two dimensions for a National Advisory Committee for Aeronautics (NACA) 0012 airfoil as well as a two-element configuration. Flow solution results are shown for two-dimensional flow over the NACA 0012 airfoil and for a two-element configuration in which the solution has been obtained through an adaptation procedure and compared to an exact solution. Preliminary three-dimensional results are also shown in which subsonic flow over a business jet is computed.

  15. [Apheresis in children: procedures and outcome].

    PubMed

    Tummolo, Albina; Colella, Vincenzo; Bellantuono, Rosa; Giordano, Mario; Messina, Giovanni; Puteo, Flora; Sorino, Palma; De Palo, Tommaso

    2012-01-01

    Apheresis procedures are used in children to treat an increasing number of conditions by removing different types of substances from the bloodstream. In a previous study we evaluated the first results of our experience in children, emphasizing the solutions adopted to overcome technical difficulties and to adapt adult apheresis procedures to a pediatric population. The aim of the present study is to present data on a larger number of patients in whom apheresis was the main treatment. Ninety-three children (50 m, 43 f) affected by renal and/or extrarenal diseases were included. They were treated with LDL apheresis, protein A immunoadsorption, or plasma exchange. Our therapeutic protocol was the same as described in the previous study. Renal diseases and immunological disorders remained the most common conditions requiring this therapeutic approach. However, hemolytic uremic syndrome (HUS) was no longer the most frequent renal condition to be treated, as apheresis is currently the first treatment option only in cases of atypical HUS. In this series we also treated small children, showing that low weight should no longer be considered a contraindication to apheresis procedures. The low rate of complications and the overall satisfactory clinical results with increasingly advanced technical procedures make a wider use of apheresis in children realistic in the years to come.

  16. An Adaptive QoS Routing Solution for MANET Based Multimedia Communications in Emergency Cases

    NASA Astrophysics Data System (ADS)

    Ramrekha, Tipu Arvind; Politis, Christos

    A Mobile Ad hoc Network (MANET) is a wireless network without any fixed, central, authoritative routing entity. It relies entirely on collaborating nodes forwarding packets from source to destination. This paper describes the design, implementation and performance evaluation of CHAMELEON, an adaptive Quality of Service (QoS) routing solution, with improved delay and jitter performance, enabling multimedia communication for MANETs in extreme emergency situations such as forest fires and terrorist attacks as defined in the PEACE project. CHAMELEON is designed to adapt its routing behaviour according to the size of a MANET. The reactive Ad Hoc On-Demand Distance Vector (AODV) and proactive Optimized Link State Routing (OLSR) protocols are deemed appropriate for CHAMELEON through their performance evaluation in terms of delay and jitter for different MANET sizes in a building fire emergency scenario. CHAMELEON is then implemented in NS-2 and evaluated similarly. The paper concludes with a summary of findings so far and intended future work.
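
    The size-based switching idea attributed to CHAMELEON can be illustrated with a few lines of logic; the node-count thresholds and the hysteresis band below are hypothetical values, not those of the paper.

```python
# Toy illustration of size-adaptive protocol selection in the spirit of
# CHAMELEON: reactive routing (AODV) for small MANETs, proactive routing
# (OLSR) for larger ones.  Threshold and hysteresis values are hypothetical.
SMALL_TO_LARGE = 30      # switch to OLSR above this many estimated nodes
LARGE_TO_SMALL = 24      # switch back to AODV below this (hysteresis band)

def select_protocol(estimated_nodes, current="AODV"):
    """Return the routing protocol to use, with hysteresis to avoid flapping."""
    if current == "AODV" and estimated_nodes > SMALL_TO_LARGE:
        return "OLSR"
    if current == "OLSR" and estimated_nodes < LARGE_TO_SMALL:
        return "AODV"
    return current

proto = "AODV"
for n in [10, 28, 35, 33, 25, 20]:       # evolving network-size estimates
    proto = select_protocol(n, proto)
    print(n, "->", proto)
```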

  17. A model-adaptivity method for the solution of Lennard-Jones based adhesive contact problems

    NASA Astrophysics Data System (ADS)

    Ben Dhia, Hachmi; Du, Shuimiao

    2018-05-01

    The surface micro-interaction model of Lennard-Jones (LJ) is used for adhesive contact problems (ACP). To address theoretical and numerical pitfalls of this model, a sequence of partitions of contact models is adaptively constructed to both extend and approximate the LJ model. It is formed by a combination of the LJ model with a sequence of shifted-Signorini (or, alternatively, -Linearized-LJ) models, indexed by a shift parameter field. For each model of this sequence, a weak formulation of the associated local ACP is developed. To track critical localized adhesive areas, a two-step strategy is developed: firstly, a macroscopic frictionless (as first approach) linear-elastic contact problem is solved once to detect contact separation zones. Secondly, at each shift-adaptive iteration, a micro-macro ACP is re-formulated and solved within the multiscale Arlequin framework, with significant reduction of computational costs. Comparison of our results with available analytical and numerical solutions shows the effectiveness of our global strategy.

  18. Full Gradient Solution to Adaptive Hybrid Control

    NASA Technical Reports Server (NTRS)

    Bean, Jacob; Schiller, Noah H.; Fuller, Chris

    2017-01-01

    This paper focuses on the adaptation mechanisms in adaptive hybrid controllers. Most adaptive hybrid controllers update two filters individually according to the filtered reference least mean squares (FxLMS) algorithm. Because this algorithm was derived for feedforward control, it does not take into account the presence of a feedback loop in the gradient calculation. This paper provides a derivation of the proper weight vector gradient for hybrid (or feedback) controllers that takes into account the presence of feedback. In this formulation, a single weight vector is updated rather than two individually. An internal model structure is assumed for the feedback part of the controller. The full gradient is equivalent to that used in the standard FxLMS algorithm with the addition of a recursive term that is a function of the modeling error. Some simulations are provided to highlight the advantages of using the full gradient in the weight vector update rather than the approximation.
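
    As background, the sketch below runs a plain single-channel feedforward FxLMS loop with random placeholder primary and secondary paths; it uses the standard approximate gradient and only indicates, in a comment, where the recursive full-gradient correction derived in the paper would enter. The hybrid/internal-model feedback structure itself is not reproduced.

```python
import numpy as np
rng = np.random.default_rng(4)

# Minimal single-channel FxLMS sketch with placeholder primary/secondary paths.
# It applies the standard approximate gradient; the paper's full-gradient
# update adds a recursive correction term (a function of the secondary-path
# modeling error) to the weight update below, which is omitted in this sketch.
P     = rng.standard_normal(16) * 0.2          # primary path (placeholder)
S     = np.array([0.0, 0.8, 0.3, 0.1])         # secondary path (placeholder)
S_hat = S.copy()                               # secondary-path model (assumed exact)

L, mu = 16, 0.002
w       = np.zeros(L)                          # adaptive controller weights
x_hist  = np.zeros(max(L, len(P), len(S_hat))) # reference history, newest first
y_hist  = np.zeros(len(S))                     # control-output history
fx_hist = np.zeros(L)                          # filtered-reference history

errors = []
for n in range(30000):
    x_hist = np.roll(x_hist, 1); x_hist[0] = rng.standard_normal()
    d = P @ x_hist[:len(P)]                    # disturbance at the error sensor
    y_hist = np.roll(y_hist, 1); y_hist[0] = w @ x_hist[:L]
    e = d + S @ y_hist                         # residual error signal
    fx_hist = np.roll(fx_hist, 1); fx_hist[0] = S_hat @ x_hist[:len(S_hat)]
    w -= mu * e * fx_hist                      # standard FxLMS update (approximate gradient)
    errors.append(e)

print("mean |e|, first vs last 1000 samples:",
      np.mean(np.abs(errors[:1000])), np.mean(np.abs(errors[-1000:])))
```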

  19. Full Gradient Solution to Adaptive Hybrid Control

    NASA Technical Reports Server (NTRS)

    Bean, Jacob; Schiller, Noah H.; Fuller, Chris

    2016-01-01

    This paper focuses on the adaptation mechanisms in adaptive hybrid controllers. Most adaptive hybrid controllers update two filters individually according to the filtered-reference least mean squares (FxLMS) algorithm. Because this algorithm was derived for feedforward control, it does not take into account the presence of a feedback loop in the gradient calculation. This paper provides a derivation of the proper weight vector gradient for hybrid (or feedback) controllers that takes into account the presence of feedback. In this formulation, a single weight vector is updated rather than two individually. An internal model structure is assumed for the feedback part of the controller. The full gradient is equivalent to that used in the standard FxLMS algorithm with the addition of a recursive term that is a function of the modeling error. Some simulations are provided to highlight the advantages of using the full gradient in the weight vector update rather than the approximation.

  20. Adaptation of instructional materials: a commentary on the research on adaptations of Who Polluted the Potomac

    NASA Astrophysics Data System (ADS)

    Ercikan, Kadriye; Alper, Naim

    2009-03-01

    This commentary first summarizes and discusses the analysis of the two translation processes described in the Oliveira, Colak, and Akerson article and the inferences these researchers make based on their research. In the second part of the commentary, we describe procedures and criteria used in adapting tests into different languages and how they may apply to the adaptation of instructional materials. The authors provide a good theoretical analysis of what took place in two translation instances and make an important contribution by taking the first step in providing a systematic discussion of the adaptation of instructional materials. Our discussion proposes procedures for adapting instructional materials and for examining the equivalence of source and target versions of the adapted materials. We highlight that many of the procedures and criteria used in examining the comparability of educational tests are missing in this emerging area of research.

  1. An object-oriented and quadrilateral-mesh based solution adaptive algorithm for compressible multi-fluid flows

    NASA Astrophysics Data System (ADS)

    Zheng, H. W.; Shu, C.; Chew, Y. T.

    2008-07-01

    In this paper, an object-oriented and quadrilateral-mesh based solution adaptive algorithm for the simulation of compressible multi-fluid flows is presented. The HLLC scheme (Harten, Lax and van Leer approximate Riemann solver with the Contact wave restored) is extended to adaptively solve the compressible multi-fluid flows under complex geometry on unstructured mesh. It is also extended to second-order accuracy by using MUSCL extrapolation. Nodes, edges and cells are arranged in an object-oriented manner such that each of them inherits from a basic object. A custom doubly linked list is designed to manage these objects so that inserting new objects and removing existing objects (nodes, edges and cells) is independent of the number of objects, i.e., of O(1) complexity. In addition, cells at different levels are stored in separate lists, which avoids the recursive calculation of the solution on mother (non-leaf) cells. Thus, high efficiency is obtained due to these features. Moreover, compared to other cell-edge adaptive methods, the separate storage of nodes reduces the memory devoted to redundant nodes, especially when the number of levels is large or the space dimension is three. Five two-dimensional examples are used to examine its performance: a vortex evolution problem, an interface-only problem on structured and unstructured meshes, an underwater bubble explosion, bubble-shock interaction, and shock-interface interaction inside a cylindrical vessel. Numerical results indicate that there is no oscillation of pressure or velocity across the interface, and that the method can handle compressible multi-fluid flows with large density ratios (1000) and strong shock waves (pressure ratio of 10,000) interacting with the interface.
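
    The O(1) list-management argument is easy to illustrate with a generic sentinel-based doubly linked list (below); this is a language-agnostic sketch, not the authors' object hierarchy.

```python
class Node:
    """A list node holding a mesh entity (cell, edge or vertex payload)."""
    __slots__ = ("payload", "prev", "next")
    def __init__(self, payload):
        self.payload, self.prev, self.next = payload, None, None

class DoublyLinkedList:
    """Sentinel-based doubly linked list: insertion and removal are O(1)
    once a node handle is known, independent of how many entities it holds."""
    def __init__(self):
        self.head = Node(None)          # sentinel
        self.head.prev = self.head.next = self.head

    def insert(self, payload):
        node, tail = Node(payload), self.head.prev
        node.prev, node.next = tail, self.head
        tail.next = self.head.prev = node
        return node                     # handle kept by the owning cell/edge

    def remove(self, node):
        node.prev.next, node.next.prev = node.next, node.prev
        node.prev = node.next = None

    def __iter__(self):
        n = self.head.next
        while n is not self.head:
            yield n.payload
            n = n.next

cells = DoublyLinkedList()
handles = [cells.insert(f"cell-{k}") for k in range(4)]
cells.remove(handles[1])                # O(1): no search over the list
print(list(cells))
```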

  2. Rocket injector anomalies study. Volume 1: Description of the mathematical model and solution procedure

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Singhal, A. K.; Tam, L. T.

    1984-01-01

    The capability of simulating three-dimensional, two-phase reactive flows with combustion in liquid-fuelled rocket engines is demonstrated. This was accomplished by modifying an existing three-dimensional computer program (REFLAN3D) with an Eulerian-Lagrangian approach to simulate two-phase spray flow, evaporation and combustion. The modified code is referred to as REFLAN3D-SPRAY. The mathematical formulation of the fluid flow, heat transfer, combustion and two-phase flow interaction is described, together with the numerical solution procedure, the boundary conditions and their treatment.

  3. Efficient robust doubly adaptive regularized regression with applications.

    PubMed

    Karunamuni, Rohana J; Kong, Linglong; Tu, Wei

    2018-01-01

    We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.
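
    The adaptive-penalty half of the idea can be illustrated by the classical adaptive lasso, implemented via the usual column-rescaling trick around an ordinary lasso fit (scikit-learn is assumed available). The robust, adaptively weighted loss of the proposed estimator is not reproduced in this sketch, and the data are simulated placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, alpha=0.05, gamma=1.0):
    """Classical adaptive lasso via column rescaling: weight each penalty term
    by 1/|beta_init|^gamma, absorb the weights into the design matrix, fit an
    ordinary lasso, then undo the scaling.  This shows only the adaptive
    penalty; the paper's estimator also adaptively weights the loss for
    robustness to outliers, which is not reproduced here."""
    beta_init = LinearRegression().fit(X, y).coef_
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)
    X_scaled = X / w                      # column j divided by w_j
    lasso = Lasso(alpha=alpha).fit(X_scaled, y)
    return lasso.coef_ / w                # map back to the original scale

rng = np.random.default_rng(5)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, -2.0, 0, 0, 1.5, 0, 0, 0, 0, 0])
y = X @ beta_true + 0.5 * rng.standard_normal(n)
print(np.round(adaptive_lasso(X, y), 2))
```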

  4. Modified Method of Adaptive Artificial Viscosity for Solution of Gas Dynamics Problems on Parallel Computer Systems

    NASA Astrophysics Data System (ADS)

    Popov, Igor; Sukov, Sergey

    2018-02-01

    A modification of the adaptive artificial viscosity (AAV) method is considered. This modification is based on a one-stage time approximation and is adapted to the calculation of gas dynamics problems on unstructured grids with arbitrary types of grid elements. The proposed numerical method has simpler logic and better performance and parallel efficiency than the implementation of the original AAV method. Computer experiments demonstrate the robustness of the method and its convergence to the difference solution.

  5. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  6. Advancing Continence in Typically Developing Children: Adapting the Procedures of Foxx and Azrin for Primary Care.

    PubMed

    Warzak, William J; Forcino, Stacy S; Sanberg, Sela Ann; Gross, Amy C

    2016-01-01

    To (1) identify and summarize procedures of Foxx and Azrin's classic toilet training protocol that continue to be used in training typically developing children and (2) adapt recent findings with the original Foxx and Azrin procedures to inform practical suggestions for the rapid toilet training of typically developing children in the primary care setting. Literature searches of PsychINFO and MEDLINE databases used the search terms "(toilet* OR potty* AND train*)." Selection criteria were only peer-reviewed experimental articles that evaluated intensive toilet training with typically developing children. Exclusion criteria were (1) nonpeer reviewed research, (2) studies addressing encopresis and/or enuresis, (3) studies excluding typically developing children, and (4) studies evaluating toilet training during infancy. In addition to the study of Foxx and Azrin, only 4 publications met the above criteria. Toilet training procedures from each article were reviewed to determine which toilet training methods were similar to components described by Foxx and Azrin. Common training elements include increasing the frequency of learning opportunities through fluid loading and having differential consequences for being dry versus being wet and for voiding in the toilet versus elsewhere. There is little research on intensive toilet training of typically developing children. Practice sits and positive reinforcement for voids in the toilet are commonplace, consistent with the Foxx and Azrin protocol, whereas positive practice as a corrective procedure for wetting accidents often is omitted. Fluid loading and differential consequences for being dry versus being wet and for voiding in the toilet also are suggested procedures, consistent with the Foxx and Azrin protocol.

  7. Co-evolution of proteins and solutions: protein adaptation versus cytoprotective micromolecules and their roles in marine organisms.

    PubMed

    Yancey, Paul H; Siebenaller, Joseph F

    2015-06-01

    Organisms experience a wide range of environmental factors such as temperature, salinity and hydrostatic pressure, which pose challenges to biochemical processes. Studies on adaptations to such factors have largely focused on macromolecules, especially intrinsic adaptations in protein structure and function. However, micromolecular cosolutes can act as cytoprotectants in the cellular milieu to affect biochemical function and they are now recognized as important extrinsic adaptations. These solutes, both inorganic and organic, have been best characterized as osmolytes, which accumulate to reduce osmotic water loss. Singly, and in combination, many cosolutes have properties beyond simple osmotic effects, e.g. altering the stability and function of proteins in the face of numerous stressors. A key example is the marine osmolyte trimethylamine oxide (TMAO), which appears to enhance water structure and is excluded from peptide backbones, favoring protein folding and stability and counteracting destabilizers like urea and temperature. Co-evolution of intrinsic and extrinsic adaptations is illustrated with high hydrostatic pressure in deep-living organisms. Cytosolic and membrane proteins and G-protein-coupled signal transduction in fishes under pressure show inhibited function and stability, while revealing a number of intrinsic adaptations in deep species. Yet, intrinsic adaptations are often incomplete, and those fishes accumulate TMAO linearly with depth, suggesting a role for TMAO as an extrinsic 'piezolyte' or pressure cosolute. Indeed, TMAO is able to counteract the inhibitory effects of pressure on the stability and function of many proteins. Other cosolutes are cytoprotective in other ways, such as via antioxidation. Such observations highlight the importance of considering the cellular milieu in biochemical and cellular adaptation. © 2015. Published by The Company of Biologists Ltd.

  8. CTEPP STANDARD OPERATING PROCEDURE FOR PREPARATION OF SURROGATE RECOVERY STANDARD AND INTERNAL STANDARD SOLUTIONS FOR NEUTRAL TARGET ANALYTES (SOP-5.25)

    EPA Science Inventory

    This standard operating procedure describes the method used for preparing internal standard, surrogate recovery standard and calibration standard solutions for neutral analytes used for gas chromatography/mass spectrometry analysis.

  9. An evaluation of inferential procedures for adaptive clinical trial designs with pre-specified rules for modifying the sample size.

    PubMed

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2014-09-01

    Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.
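
    One reason specialized estimators matter after an adaptive test is that the naive sample mean is biased when the sampling plan depends on interim data. The simulation below is a hedged toy illustration of that bias for a two-stage design with early stopping; the stage sizes, stopping boundary, and effect size are arbitrary assumptions, not the designs evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(theta, n_stage=50, z_stop=2.0, sigma=1.0):
    """Two-stage design: stop at stage 1 if the interim z-statistic
    exceeds z_stop, otherwise continue to the full sample."""
    x1 = rng.normal(theta, sigma, n_stage)
    z1 = x1.mean() / (sigma / np.sqrt(n_stage))
    if z1 > z_stop:                      # early stopping for efficacy
        return x1.mean()                 # naive estimate uses stage-1 data only
    x2 = rng.normal(theta, sigma, n_stage)
    return np.concatenate([x1, x2]).mean()

theta_true = 0.25
estimates = np.array([simulate_trial(theta_true) for _ in range(20000)])
print(f"true effect     : {theta_true:.3f}")
print(f"mean naive est. : {estimates.mean():.3f}  (bias = {estimates.mean() - theta_true:+.3f})")
```

    The bias-adjusted and median-unbiased estimators discussed in the abstract are designed to remove exactly this kind of distortion.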

  10. The students' ability in the mathematical literacy for uncertainty problems on the PISA adaptation test

    NASA Astrophysics Data System (ADS)

    Julie, Hongki; Sanjaya, Febi; Anggoro, Ant. Yudhi

    2017-08-01

    One of the purposes of this study was to describe the solution profile of junior high school students on the PISA adaptation test. The procedures conducted by the researchers to achieve this objective were (1) adapting the PISA test, (2) validating the adapted PISA test, (3) asking junior high school students to take the adapted PISA test, and (4) making the students' solution profile. The PISA problems for mathematics could be classified into four areas, namely quantity, space and shape, change and relationship, and uncertainty. The research results presented in this paper were the test results for the uncertainty problems. In the adapted PISA test, there were fifteen questions. Subjects in this study were 18 students from 11 junior high schools in Yogyakarta, Central Java, and Banten. The type of research used by the researchers was qualitative research. For the first uncertainty problem in the adapted test, 66.67% of students reached level 3. For the second uncertainty problem in the adapted test, 44.44% of students achieved level 4, and 33.33% of students reached level 3. For the third uncertainty problem in the adapted test, 38.89% of students achieved level 5, 11.11% of students reached level 4, and 5.56% of students achieved level 3. For part a of the fourth uncertainty problem in the adapted test, 72.22% of students reached level 4, and for part b of the fourth uncertainty problem in the adapted test, 83.33% of students achieved level 4.

  11. Definition and use of Solution-focused Sustainability Assessment: A novel approach to generate, explore and decide on sustainable solutions for wicked problems.

    PubMed

    Zijp, Michiel C; Posthuma, Leo; Wintersen, Arjen; Devilee, Jeroen; Swartjes, Frank A

    2016-05-01

    This paper introduces Solution-focused Sustainability Assessment (SfSA), provides practical guidance formatted as a versatile process framework, and illustrates its utility for solving a wicked environmental management problem. Society faces complex and increasingly wicked environmental problems for which sustainable solutions are sought. Wicked problems are multi-faceted, and deriving a management solution requires an approach that is participative, iterative, innovative, and transparent in its definition of sustainability and translation to sustainability metrics. We suggest adding the use of a solution-focused approach. The SfSA framework is collated from elements from risk assessment, risk governance, adaptive management and sustainability assessment frameworks, expanded with the 'solution-focused' paradigm as recently proposed in the context of risk assessment. The main innovation of this approach is the broad exploration of solutions upfront in assessment projects. The case study concerns the sustainable management of slightly contaminated sediments continuously formed in ditches in rural, agricultural areas. This problem is wicked, as disposal of contaminated sediment on adjacent land is potentially hazardous to humans, ecosystems and agricultural products. Non-removal, however, would reduce drainage capacity and increase the risk of flooding, while contaminated sediment removal followed by offsite treatment implies high budget costs and soil subsidence. Application of the steps in the SfSA framework served to solve this problem. Important elements were early exploration of a wide 'solution-space', stakeholder involvement from the onset of the assessment, clear agreements on the risk and sustainability metrics of the problem and on the interpretation and decision procedures, and adaptive management. Application of the key elements of the SfSA approach eventually resulted in adoption of a novel sediment management policy. The stakeholder

  12. A multigrid method for steady Euler equations on unstructured adaptive grids

    NASA Technical Reports Server (NTRS)

    Riemslagh, Kris; Dick, Erik

    1993-01-01

    A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first-order accurate inner iteration and a second-order correction performed only on the finest grid is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured, a Jacobi type is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with a more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removal. The adaptation cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.
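
    The defect-correction form of the multigrid scheme described above can be illustrated with a minimal stand-alone sketch: a "high-order" target operator is solved by repeatedly inverting an easier "low-order" operator against the current residual. The 1-D matrices below are toy stand-ins under that assumption, not the paper's unstructured Euler discretization.

```python
import numpy as np

# Minimal defect-correction iteration: A1 (easy to invert, extra diagonal
# damping) drives corrections toward the solution of the target operator A2.
n = 50
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2   # "high-order" target
A1 = A2 + 5.0 * np.eye(n)                                          # robust "low-order" surrogate
f = np.ones(n)

u = np.zeros(n)
for k in range(100):
    defect = f - A2 @ u                      # residual w.r.t. the target operator
    if np.linalg.norm(defect) < 1e-10:
        break
    u = u + np.linalg.solve(A1, defect)      # correction from the surrogate operator
print(f"converged after {k} iterations, residual {np.linalg.norm(f - A2 @ u):.2e}")
```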

  13. Adaptive Assessment for Nonacademic Secondary Reading.

    ERIC Educational Resources Information Center

    Hittleman, Daniel R.

    Adaptive assessment procedures are a means of determining the quality of a reader's performance in a variety of reading situations and on a variety of written materials. Such procedures are consistent with the idea that there are functional competencies which change with the reading task. Adaptive assessment takes into account that a lack of…

  14. Time-dependent grid adaptation for meshes of triangles and tetrahedra

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.

    1993-01-01

    This paper presents in viewgraph form a method of optimizing grid generation for unsteady CFD flow calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich in regions of relatively large errors and to locally coarsen in regions of relatively small errors. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high aspect ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown, but in general requires user supervision for a more efficient solution.
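
    The enrichment/coarsening decision described above amounts to thresholding a per-cell error indicator. The snippet below is a hedged illustration of that idea using an undivided-gradient indicator on a 1-D field; the quantile-based thresholds (`enrich_frac`, `coarsen_frac`) are hypothetical choices, not the paper's criterion.

```python
import numpy as np

def adapt_flags(error_indicator, enrich_frac=0.1, coarsen_frac=0.3):
    """Flag cells for enrichment/coarsening from a per-cell error indicator.
    Enrich cells whose indicator lies in the top `enrich_frac` of values,
    coarsen those in the bottom `coarsen_frac` (illustrative thresholds)."""
    hi = np.quantile(error_indicator, 1.0 - enrich_frac)
    lo = np.quantile(error_indicator, coarsen_frac)
    return error_indicator >= hi, error_indicator <= lo

# toy indicator: gradient magnitude of a solution field with a sharp layer
x = np.linspace(0.0, 1.0, 200)
u = np.tanh(50.0 * (x - 0.5))
indicator = np.abs(np.gradient(u, x))
enrich, coarsen = adapt_flags(indicator)
print(enrich.sum(), "cells flagged for enrichment,", coarsen.sum(), "for coarsening")
```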

  15. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    PubMed Central

    Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management. PMID:25538868
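
    A minimal sketch of the greedy benefit/cost heuristic that studies like this compare against exact integer programming is shown below; the parcel utilities, costs, budget, and function name `greedy_select` are synthetic stand-ins, not the study's data or code. An exact formulation would hand the same inputs to an open-source MIP solver, which is the kind of comparison behind the reported gains of up to 12%.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
utility = rng.uniform(1.0, 10.0, n)      # expected conservation benefit per parcel (toy)
cost = rng.uniform(1.0, 5.0, n)          # acquisition/easement cost per parcel (toy)
budget = 0.2 * cost.sum()

def greedy_select(utility, cost, budget):
    """Benefit/cost greedy heuristic for a budget-constrained selection problem."""
    order = np.argsort(-utility / cost)  # best benefit per unit cost first
    chosen, spent = [], 0.0
    for i in order:
        if spent + cost[i] <= budget:
            chosen.append(i)
            spent += cost[i]
    return np.array(chosen)

sel = greedy_select(utility, cost, budget)
print(f"greedy utility: {utility[sel].sum():.1f} using {cost[sel].sum():.1f} of budget {budget:.1f}")
```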

  16. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    USGS Publications Warehouse

    Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.

  17. Adapting to life: ocean biogeochemical modelling and adaptive remeshing

    NASA Astrophysics Data System (ADS)

    Hill, J.; Popova, E. E.; Ham, D. A.; Piggott, M. D.; Srokosz, M.

    2014-05-01

    An outstanding problem in biogeochemical modelling of the ocean is that many of the key processes occur intermittently at small scales, such as the sub-mesoscale, that are not well represented in global ocean models. This is partly due to their failure to resolve sub-mesoscale phenomena, which play a significant role in vertical nutrient supply. Simply increasing the resolution of the models may be an inefficient computational solution to this problem. An approach based on recent advances in adaptive mesh computational techniques may offer an alternative. Here the first steps in such an approach are described, using the example of a simple vertical column (quasi-1-D) ocean biogeochemical model. We present a novel method of simulating ocean biogeochemical behaviour on a vertically adaptive computational mesh, where the mesh changes in response to the biogeochemical and physical state of the system throughout the simulation. We show that the model reproduces the general physical and biological behaviour at three ocean stations (India, Papa and Bermuda) as compared to a high-resolution fixed mesh simulation and to observations. The use of an adaptive mesh does not increase the computational error, but reduces the number of mesh elements by a factor of 2-3. Unlike previous work the adaptivity metric used is flexible and we show that capturing the physical behaviour of the model is paramount to achieving a reasonable solution. Adding biological quantities to the adaptivity metric further refines the solution. We then show the potential of this method in two case studies where we change the adaptivity metric used to determine the varying mesh sizes in order to capture the dynamics of chlorophyll at Bermuda and sinking detritus at Papa. We therefore demonstrate that adaptive meshes may provide a suitable numerical technique for simulating seasonal or transient biogeochemical behaviour at high vertical resolution whilst minimising the number of elements in the mesh.

  18. Investigation of the continuous flow of the sample solution on the performance of electromembrane extraction: Comparison with conventional procedure.

    PubMed

    Nojavan, Saeed; Sirani, Mahsa; Asadi, Sakine

    2017-10-01

    In this study, electromembrane extraction from a flowing sample solution, termed continuous-flow electromembrane extraction, was developed and compared with the conventional procedure for the determination of four basic drugs in real samples. Experimental parameters affecting the extraction efficiency were further studied and optimized. Under optimum conditions, the linearity of the continuous-flow procedure was within 8.0-500 ng/mL, while it was wider for the conventional procedure (2.0-500 ng/mL). Moreover, repeatability (percentage relative standard deviation) was found to range between 5.6 and 10.4% (n = 3) for the continuous-flow procedure, which was poorer than that of the conventional procedure (2.3-5.5%, n = 3). Also, for the continuous-flow procedure, the estimated detection limit (signal-to-noise ratio = 3) was less than 2.4 ng/mL and extraction recoveries were within 8-10%, while the corresponding figures for the conventional procedure were less than 0.6 ng/mL and 42-60%, respectively. Thus, the results showed that both the continuous-flow and conventional procedures were applicable for the extraction of the model compounds. However, the conventional procedure was more convenient to use, and thus it was applied to determine the sample drugs in real urine and wastewater samples. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Adaptive kernel independent component analysis and UV spectrometry applied to characterize the procedure for processing prepared rhubarb roots.

    PubMed

    Wang, Guoqing; Hou, Zhenyu; Peng, Yang; Wang, Yanjun; Sun, Xiaoli; Sun, Yu-an

    2011-11-07

    By determination of the number of absorptive chemical components (ACCs) in mixtures using median absolute deviation (MAD) analysis and extraction of spectral profiles of ACCs using kernel independent component analysis (KICA), an adaptive KICA (AKICA) algorithm was proposed. The proposed AKICA algorithm was used to characterize the procedure for processing prepared rhubarb roots by resolution of the measured mixed raw UV spectra of the rhubarb samples that were collected at different steaming intervals. The results show that the spectral features of ACCs in the mixtures can be directly estimated without chemical and physical pre-separation and other prior information. The estimated three independent components (ICs) represent different chemical components in the mixtures, which are mainly polysaccharides (IC1), tannin (IC2), and anthraquinone glycosides (IC3). The variations of the relative concentrations of the ICs can account for the chemical and physical changes during the processing procedure: IC1 increases significantly before the first 5 h, and is nearly invariant after 6 h; IC2 has no significant changes or is slightly decreased during the processing procedure; IC3 decreases significantly before the first 5 h and decreases slightly after 6 h. The changes of IC1 can explain why the colour became black and darkened during the processing procedure, and the changes of IC3 can explain why the processing procedure can reduce the bitter and dry taste of the rhubarb roots. The endpoint of the processing procedure can be determined as 5-6 h, when the increasing or decreasing trends of the estimated ICs are insignificant. The AKICA-UV method provides an alternative approach for the characterization of the processing procedure of rhubarb roots preparation, and provides a novel way for determination of the endpoint of the traditional Chinese medicine (TCM) processing procedure by inspection of the change trends of the ICs.

  20. Hybrid Self-Adaptive Evolution Strategies Guided by Neighborhood Structures for Combinatorial Optimization Problems.

    PubMed

    Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G

    2016-01-01

    This article presents an Evolution Strategy (ES)-based algorithm, designed to self-adapt its mutation operators, guiding the search into the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different NP-hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability to combine move operations from distinct neighborhood structures along the optimization. The results gathered and reported in this article represent collective evidence of the performance of the method on challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability to adapt the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework to solving other combinatorial optimization problems.
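
    To make the self-adaptation mechanism concrete, here is a minimal (1, lambda) evolution strategy with log-normal self-adaptation of the mutation strength on a continuous toy objective. It is not the authors' hybrid algorithm (which operates on combinatorial neighborhoods with RVNS moves and GRASP initialization); it only illustrates how a mutation parameter can evolve alongside the solution it produced.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    """Toy objective: minimize the squared norm."""
    return float(np.sum(x * x))

# (1, lambda)-ES with log-normal self-adaptation of the step size sigma.
dim, lam, tau = 10, 20, 1.0 / np.sqrt(10)
x, sigma = rng.normal(size=dim), 1.0
for gen in range(200):
    sigmas = sigma * np.exp(tau * rng.normal(size=lam))          # each child mutates its own sigma
    children = x + sigmas[:, None] * rng.normal(size=(lam, dim))
    fitness = np.array([sphere(c) for c in children])
    best = int(np.argmin(fitness))
    x, sigma = children[best], sigmas[best]                      # sigma is inherited with the winner
print(f"best fitness {sphere(x):.3e}, adapted sigma {sigma:.3e}")
```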

  1. Adaptive Batch Mode Active Learning.

    PubMed

    Chakraborty, Shayok; Balasubramanian, Vineeth; Panchanathan, Sethuraman

    2015-08-01

    Active learning techniques have gained popularity to reduce human effort in labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar and representative instances to be selected for manual annotation. More recently, there have been attempts toward a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. Real-world applications require adaptive approaches for batch selection in active learning, depending on the complexity of the data stream in question. However, the existing work in this field has primarily focused on static or heuristic batch size selection. In this paper, we propose two novel optimization-based frameworks for adaptive batch mode active learning (BMAL), where the batch size as well as the selection criteria are combined in a single formulation. We exploit gradient-descent-based optimization strategies as well as properties of submodular functions to derive the adaptive BMAL algorithms. The solution procedures have the same computational complexity as existing state-of-the-art static BMAL techniques. Our empirical results on the widely used VidTIMIT and the mobile biometric (MOBIO) data sets portray the efficacy of the proposed frameworks and also certify the potential of these approaches in being used for real-world biometric recognition applications.
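
    A hedged sketch of the batch-selection idea follows: score unlabeled points by a mix of predictive uncertainty and diversity, then pick a batch greedily. The mixing weight `alpha`, the entropy criterion, and the distance-based diversity term are illustrative assumptions, not the optimization-based formulation or submodular machinery of the paper.

```python
import numpy as np

def select_batch(probs, X, batch_size=5, alpha=0.5):
    """Greedy batch selection trading off uncertainty (entropy of class
    probabilities) against diversity (distance to already-selected points)."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    selected = []
    for _ in range(batch_size):
        if selected:
            d = np.min(np.linalg.norm(X[:, None, :] - X[selected][None, :, :], axis=2), axis=1)
        else:
            d = np.ones(len(X))
        score = alpha * entropy + (1.0 - alpha) * d
        score[selected] = -np.inf            # never pick the same point twice
        selected.append(int(np.argmax(score)))
    return selected

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))                # stand-in feature vectors
p = rng.dirichlet(np.ones(3), size=100)      # stand-in class probabilities from a classifier
print(select_batch(p, X))
```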

  2. Investigations in adaptive processing of multispectral data

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Horwitz, H. M.

    1973-01-01

    Adaptive data processing procedures are applied to the problem of classifying objects in a scene scanned by multispectral sensor. These procedures show a performance improvement over standard nonadaptive techniques. Some sources of error in classification are identified and those correctable by adaptive processing are discussed. Experiments in adaptation of signature means by decision-directed methods are described. Some of these methods assume correlation between the trajectories of different signature means; for others this assumption is not made.

  3. Multiple-try differential evolution adaptive Metropolis for efficient solution of highly parameterized models

    NASA Astrophysics Data System (ADS)

    Eric, L.; Vrugt, J. A.

    2010-12-01

    Spatially distributed hydrologic models potentially contain hundreds of parameters that need to be derived by calibration against a historical record of input-output data. The quality of this calibration strongly determines the predictive capability of the model and thus its usefulness for science-based decision making and forecasting. Unfortunately, high-dimensional optimization problems are typically difficult to solve. Here we present our recent developments to the Differential Evolution Adaptive Metropolis (DREAM) algorithm (Vrugt et al., 2009) to warrant efficient solution of high-dimensional parameter estimation problems. The algorithm samples from an archive of past states (Ter Braak and Vrugt, 2008), and uses multiple-try Metropolis sampling (Liu et al., 2000) to decrease the required burn-in time for each individual chain and increase the efficiency of posterior sampling. This approach is hereafter referred to as MT-DREAM. We present results for 2 synthetic mathematical case studies and 2 real-world examples involving from 10 to 240 parameters. Results for those cases show that our multiple-try sampler, MT-DREAM, can consistently find better solutions than other Bayesian MCMC methods. Moreover, MT-DREAM is admirably suited to be implemented and run on a parallel machine and is therefore a powerful method for posterior inference.
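
    The differential-evolution proposal at the core of DREAM-type samplers can be sketched in a few lines: each chain proposes a jump along the difference of two other chains and accepts it with a Metropolis rule. The toy Gaussian target and tuning constants below are assumptions for illustration; the multiple-try extension (MT-DREAM) and the archive of past states are not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_post(x):
    """Toy target: standard bivariate normal (stand-in for a real posterior)."""
    return -0.5 * np.sum(x * x)

n_chains, dim, n_iter = 8, 2, 5000
gamma = 2.38 / np.sqrt(2 * dim)                      # usual DE-MC jump scale
chains = rng.normal(size=(n_chains, dim))
logp = np.array([log_post(c) for c in chains])
samples = []
for it in range(n_iter):
    for i in range(n_chains):
        a, b = rng.choice([j for j in range(n_chains) if j != i], size=2, replace=False)
        prop = chains[i] + gamma * (chains[a] - chains[b]) + 1e-6 * rng.normal(size=dim)
        lp = log_post(prop)
        if np.log(rng.uniform()) < lp - logp[i]:     # Metropolis accept/reject
            chains[i], logp[i] = prop, lp
    samples.append(chains.copy())
samples = np.array(samples)[1000:].reshape(-1, dim)  # discard burn-in
print("posterior mean:", samples.mean(axis=0), " std:", samples.std(axis=0))
```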

  4. A "Rearrangement Procedure" for Scoring Adaptive Tests with Review Options.

    ERIC Educational Resources Information Center

    Papanastasiou, Elena C.

    Due to the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT, from an examinees point of view, is that in many…

  5. A "Rearrangement Procedure" for Scoring Adaptive Tests with Review Options

    ERIC Educational Resources Information Center

    Papanastasiou, Elena C.; Reckase, Mark D.

    2007-01-01

    Because of the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT from an examinee's point of view is that in…

  6. Adaptive sampling in research on risk-related behaviors.

    PubMed

    Thompson, Steven K; Collins, Linda M

    2002-11-01

    This article introduces adaptive sampling designs to substance use researchers. Adaptive sampling is particularly useful when the population of interest is rare, unevenly distributed, hidden, or hard to reach. Examples of such populations are injection drug users, individuals at high risk for HIV/AIDS, and young adolescents who are nicotine dependent. In conventional sampling, the sampling design is based entirely on a priori information, and is fixed before the study begins. By contrast, in adaptive sampling, the sampling design adapts based on observations made during the survey; for example, drug users may be asked to refer other drug users to the researcher. In the present article several adaptive sampling designs are discussed. Link-tracing designs such as snowball sampling, random walk methods, and network sampling are described, along with adaptive allocation and adaptive cluster sampling. It is stressed that special estimation procedures taking the sampling design into account are needed when adaptive sampling has been used. These procedures yield estimates that are considerably better than conventional estimates. For rare and clustered populations adaptive designs can give substantial gains in efficiency over conventional designs, and for hidden populations link-tracing and other adaptive procedures may provide the only practical way to obtain a sample large enough for the study objectives.
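
    As a concrete illustration of one design mentioned above, the sketch below simulates adaptive cluster sampling of a rare, clustered trait on a grid: whenever a sampled unit meets the condition, its neighbours are added to the sample and the neighbourhood is traced outward. The grid size, cluster locations, and condition are hypothetical; the special design-based estimators the abstract stresses are not shown.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(5)

# Toy population: a rare, clustered trait on a 20x20 grid of units.
grid = np.zeros((20, 20), dtype=int)
for cx, cy in [(4, 4), (14, 15)]:
    grid[cx - 1:cx + 2, cy - 1:cy + 2] = rng.poisson(5, size=(3, 3))

def adaptive_cluster_sample(grid, n_initial=30, threshold=0):
    """Start from a simple random sample of units; whenever a sampled unit
    exceeds the threshold, add its 4-neighbours and keep expanding while
    the condition holds (the core idea of adaptive cluster sampling)."""
    rows, cols = grid.shape
    flat = rng.choice(rows * cols, size=n_initial, replace=False)
    queue = deque((i // cols, i % cols) for i in flat)
    sampled = set()
    while queue:
        r, c = queue.popleft()
        if (r, c) in sampled:
            continue
        sampled.add((r, c))
        if grid[r, c] > threshold:                   # condition met: trace the neighbourhood
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in sampled:
                    queue.append((nr, nc))
    return sampled

units = adaptive_cluster_sample(grid)
hits = sum(grid[r, c] > 0 for r, c in units)
print(f"{len(units)} units sampled, {hits} with the rare trait")
```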

  7. Lattice model for water-solute mixtures.

    PubMed

    Furlan, A P; Almarza, N G; Barbosa, M C

    2016-10-14

    A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute/solvent interaction is controlled by tuning the energy interactions between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert, and hydrophobic interactions. Extensive Monte Carlo simulations were carried out, and the behavior of the pure components and the excess properties of the mixtures have been studied. The pure components, water (solvent) and solute, have quite similar phase diagrams, presenting gas, low-density liquid, and high-density liquid phases. In the case of the solute, the regions of coexistence are substantially reduced when compared with both the water and the standard ALG models. A numerical procedure has been developed in order to attain a series of results at constant pressure from simulations of the lattice gas model in the grand canonical ensemble. The excess properties of the mixtures, volume and enthalpy as a function of the solute fraction, have been studied for different interaction parameters of the model. Our model is able to reproduce qualitatively well the excess volume and enthalpy for different aqueous solutions. For the hydrophilic case, we show that the model is able to reproduce the excess volume and enthalpy of mixtures of small alcohols and amines. The inert case reproduces the behavior of large alcohols such as propanol, butanol, and pentanol. For the last case (hydrophobic), the excess properties reproduce the behavior of ionic liquids in aqueous solution.

  8. Adaptive Texture Synthesis for Large Scale City Modeling

    NASA Astrophysics Data System (ADS)

    Despine, G.; Colleu, T.

    2015-02-01

    Large scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high resolution terrestrial photos, but it requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue, which allows physical information and semantic attributes to be attached and selection requests to be executed. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We experimented with this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the facades.

  9. Impact of Metal Nanoform Colloidal Solution on the Adaptive Potential of Plants

    NASA Astrophysics Data System (ADS)

    Taran, Nataliya; Batsmanova, Ludmila; Kovalenko, Mariia; Okanenko, Alexander

    2016-02-01

    Nanoparticles are a known cause of oxidative stress and can thereby induce an anti-stress response; the latter property was the focus of our study. The effect of two concentrations (120 and 240 mg/l) of a nanoform biogenic metal (Ag, Cu, Fe, Zn, Mn) colloidal solution on the antioxidant enzymes superoxide dismutase and catalase, the level of the factor of the antioxidant state, and the content of thiobarbituric acid reactive substances (TBARSs) of soybean plants was studied in a field experiment. It was found that oxidative processes developed in the metal nanoparticle pre-sowing seed treatment variant at a concentration of 120 mg/l, as evidenced by the increase in the content of TBARS in photosynthetic tissues by 12 %. Pre-sowing treatment at double the concentration (240 mg/l) resulted in a decrease in oxidative processes (19 %), and pre-sowing treatment combined with vegetative treatment also contributed to the reduction of TBARS (10 %). Increased activity of superoxide dismutase (SOD) was observed in the variant with increased TBARS content; SOD activity was at the control level in the two other variants. Catalase activity decreased in all variants. The factor of antioxidant activity was highest (0.3) in the variant with nanoparticle double treatment (pre-sowing and vegetative) at a concentration of 120 mg/l. Thus, the studied nanometal colloidal solution, when used in small doses over a certain time interval, can be considered a low-level stress factor which, according to the hormesis principle, promotes an adaptive response.

  10. Adaptive implicit-explicit and parallel element-by-element iteration schemes

    NASA Technical Reports Server (NTRS)

    Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.

    1989-01-01

    Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.

  11. A New Method for 3D Radiative Transfer with Adaptive Grids

    NASA Astrophysics Data System (ADS)

    Folini, D.; Walder, R.; Psarros, M.; Desboeufs, A.

    2003-01-01

    We present a new method for 3D NLTE radiative transfer in moving media, including an adaptive grid, along with some test examples and first applications. We briefly outline the central features of our approach in the following. For the solution of the radiative transfer equation, we make use of a generalized mean intensity approach. In this approach, the transfer equation is solved directly, instead of using the moments of the transfer equation, thus avoiding the associated closure problem. In a first step, a system of equations for the transfer of each directed intensity is set up, using short characteristics. Next, the entirety of the systems of equations for the directed intensities is reformulated as one system of equations for the angle-integrated mean intensity. This system is then solved by a modern, fast BiCGStab iterative solver. An additional advantage of this procedure is that convergence rates barely depend on the spatial discretization. For the solution of the rate equations we use Householder transformations. Lines are treated by a 3D generalization of the well-known Sobolev approximation. The two parts, solution of the transfer equation and solution of the rate equations, are iteratively coupled. We have recently implemented an adaptive grid, which allows for recursive refinement on a cell-by-cell basis. The spatial resolution, which is always a problematic issue in 3D simulations, can thus be locally reduced or augmented, depending on the problem to be solved.
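
    The use of a BiCGStab iterative solver for the mean-intensity system can be illustrated with SciPy's implementation on a stand-in sparse system; the matrix below is a generic diagonally dominant operator chosen for the example, not the actual radiative transfer discretization.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# Stand-in linear system playing the role of the discretized mean-intensity equations.
n = 2000
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = bicgstab(A, b, maxiter=500)        # info == 0 signals convergence
residual = np.linalg.norm(A @ x - b)
print("converged" if info == 0 else f"bicgstab returned info={info}", "residual:", residual)
```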

  12. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  13. The golden 35 min of stroke intervention with ADAPT: effect of thrombectomy procedural time in acute ischemic stroke on outcome.

    PubMed

    Alawieh, Ali; Pierce, Alyssa K; Vargas, Jan; Turk, Aquilla S; Turner, Raymond D; Chaudry, M Imran; Spiotta, Alejandro M

    2018-03-01

    In acute ischemic stroke (AIS), extending mechanical thrombectomy procedural times beyond 60 min has previously been associated with an increased complication rate and poorer outcomes. Following improvements in thrombectomy methods, we sought to reassess whether this relationship holds true with a more contemporary thrombectomy approach: a direct aspiration first pass technique (ADAPT). We retrospectively studied a database of patients with AIS who underwent ADAPT thrombectomy for large vessel occlusions. Patients were dichotomized into two groups: 'early recan', in which recanalization (recan) was achieved in ≤35 min, and 'late recan', in which procedures extended beyond 35 min. 197 patients (47.7% women, mean age 66.3 years) were identified. We determined that after 35 min, a poor outcome was more likely than a good (modified Rankin Scale (mRS) score 0-2) outcome. The baseline National Institutes of Health Stroke Scale (NIHSS) score was similar between 'early recan' (n=122) (14.7±6.9) and 'late recan' patients (n=75) (15.9±7.2). Among 'early recan' patients, recanalization was achieved in 17.8±8.8 min compared with 70±39.8 min in 'late recan' patients. The likelihood of achieving a good outcome was higher in the 'early recan' group (65.2%) than in the 'late recan' group (38.2%; p<0.001). Patients in the 'late recan' group had a higher likelihood of postprocedural hemorrhage, specifically parenchymal hematoma type 2, than those in the 'early recan' group. Logistic regression analysis showed that baseline NIHSS, recanalization time, and atrial fibrillation had a significant impact on 90-day outcomes. Our findings suggest that extending ADAPT thrombectomy procedure times beyond 35 min increases the likelihood of complications such as intracerebral hemorrhage while reducing the likelihood of a good outcome. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  14. Incompressible Navier-Stokes and parabolized Navier-Stokes solution procedures and computational techniques

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.

    1982-01-01

    Recent developments with "finite-difference" techniques are emphasized. The quotation marks reflect the fact that any finite discretization procedure can be included in this category. Many so-called finite element collocation and Galerkin methods can be reproduced by appropriate forms of the differential equations and discretization formulas. Many of the difficulties encountered in early Navier-Stokes calculations were inherent not only in the choice of the different equations (accuracy), but also in the method of solution or choice of algorithm (convergence and stability), in the manner in which the dependent variables or discretized equations are related (coupling), in the manner that boundary conditions are applied, in the manner that the coordinate mesh is specified (grid generation), and finally, in recognizing that for many high Reynolds number flows not all contributions to the Navier-Stokes equations are necessarily of equal importance (parabolization, preferred direction, pressure interaction, asymptotic and mathematical character). It is these elements that are reviewed. Several Navier-Stokes and parabolized Navier-Stokes formulations are also presented.

  15. Game-Theoretical Design of an Adaptive Distributed Dissemination Protocol for VANETs.

    PubMed

    Iza-Paredes, Cristhian; Mezher, Ahmad Mohamad; Aguilar Igartua, Mónica; Forné, Jordi

    2018-01-19

    Road safety applications envisaged for Vehicular Ad Hoc Networks (VANETs) depend largely on the dissemination of warning messages to deliver information to concerned vehicles. The intended applications, as well as some inherent VANET characteristics, make data dissemination an essential service and a challenging task in this kind of networks. This work lays out a decentralized stochastic solution for the data dissemination problem through two game-theoretical mechanisms. Given the non-stationarity induced by a highly dynamic topology, diverse network densities, and intermittent connectivity, a solution for the formulated game requires an adaptive procedure able to exploit the environment changes. Extensive simulations reveal that our proposal excels in terms of number of transmissions, lower end-to-end delay and reduced overhead while maintaining high delivery ratio, compared to other proposals.

  16. Game-Theoretical Design of an Adaptive Distributed Dissemination Protocol for VANETs

    PubMed Central

    Mezher, Ahmad Mohamad; Aguilar Igartua, Mónica

    2018-01-01

    Road safety applications envisaged for Vehicular Ad Hoc Networks (VANETs) depend largely on the dissemination of warning messages to deliver information to concerned vehicles. The intended applications, as well as some inherent VANET characteristics, make data dissemination an essential service and a challenging task in this kind of networks. This work lays out a decentralized stochastic solution for the data dissemination problem through two game-theoretical mechanisms. Given the non-stationarity induced by a highly dynamic topology, diverse network densities, and intermittent connectivity, a solution for the formulated game requires an adaptive procedure able to exploit the environment changes. Extensive simulations reveal that our proposal excels in terms of number of transmissions, lower end-to-end delay and reduced overhead while maintaining high delivery ratio, compared to other proposals. PMID:29351255

  17. A mineral separation procedure using hot Clerici solution

    USGS Publications Warehouse

    Rosenblum, Sam

    1974-01-01

    Careful boiling of Clerici solution in a Pyrex test tube in an oil bath is used to float minerals with densities up to 5.0 in order to obtain purified concentrates of monazite (density 5.1) for analysis. The "sink" and "float" fractions are trapped in solidified Clerici salts on rapid chilling, and the fractions are washed into separate filter papers with warm water. The hazardous nature of Clerici solution requires unusual care in handling.

  18. Developing Competency in Payroll Procedures

    ERIC Educational Resources Information Center

    Jackson, Allen L.

    1975-01-01

    The author describes a sequence of units that provides for competency in payroll procedures. The units could be the basis for a four to six week minicourse and are adaptable, so that the student, upon completion, will be able to apply his learning to any payroll procedures system. (Author/AJ)

  19. Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.

    PubMed

    Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens

    2005-05-01

    Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.

  20. A proposal for amending administrative law to facilitate adaptive management

    USGS Publications Warehouse

    Craig, Robin K.; Ruhl, J.B.; Brown, Eleanor D.; Williams, Byron K.

    2017-01-01

    In this article we examine how federal agencies use adaptive management. In order for federal agencies to implement adaptive management more successfully, administrative law must adapt to adaptive management, and we propose changes in administrative law that will help to steer the current process out of a dead end. Adaptive management is a form of structured decision making that is widely used in natural resources management. It involves specific steps integrated in an iterative process for adjusting management actions as new information becomes available. Theoretical requirements for adaptive management notwithstanding, federal agency decision making is subject to the requirements of the federal Administrative Procedure Act, and state agencies are subject to the states' parallel statutes. We argue that conventional administrative law has unnecessarily shackled effective use of adaptive management. We show that through a specialized 'adaptive management track' of administrative procedures, the core values of administrative law—especially public participation, judicial review, and finality— can be implemented in ways that allow for more effective adaptive management. We present and explain draft model legislation (the Model Adaptive Management Procedure Act) that would create such a track for the specific types of agency decision making that could benefit from adaptive management.

  1. A shock-capturing SPH scheme based on adaptive kernel estimation

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime

    2006-02-01

    Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
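
    A minimal sketch of the adaptive-kernel idea in an SPH-like density estimate follows: each particle's smoothing length is iterated toward h_i proportional to m_i/rho_i, so low-density regions automatically receive more smoothing. The 1-D cubic spline kernel and the parameter `eta` are common SPH conventions used here as assumptions; this is a toy stand-in, not the authors' shock-capturing scheme.

```python
import numpy as np

def cubic_spline_w(q):
    """Standard 1-D cubic spline SPH kernel (normalized), with q = |r|/h."""
    sigma = 2.0 / 3.0
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                   np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

def adaptive_density(x, m, eta=1.2, n_iter=10):
    """SPH density with per-particle smoothing length h_i ~ eta * m_i / rho_i,
    iterated to self-consistency (low density -> larger h)."""
    n = len(x)
    h = np.full(n, eta * (x.max() - x.min()) / n)      # initial guess
    rho = np.ones(n)
    for _ in range(n_iter):
        for i in range(n):
            q = np.abs(x - x[i]) / h[i]
            rho[i] = np.sum(m * cubic_spline_w(q) / h[i])
        h = eta * m / rho
    return rho, h

# particles packed more densely on the left half of the domain
x = np.sort(np.concatenate([np.random.default_rng(6).uniform(0.0, 0.5, 300),
                            np.random.default_rng(7).uniform(0.5, 1.0, 100)]))
m = np.full(len(x), 1.0 / len(x))
rho, h = adaptive_density(x, m)
print(f"h ranges from {h.min():.4f} (dense region) to {h.max():.4f} (sparse region)")
```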

  2. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2001-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  3. Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure

    USGS Publications Warehouse

    Salehi, M.; Smith, D.R.

    2005-01-01

    Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.

  4. Development of a Countermeasure to Enhance Postflight Locomotor Adaptability

    NASA Technical Reports Server (NTRS)

    Bloomberg, Jacob J.

    2006-01-01

    Astronauts returning from space flight experience locomotor dysfunction following their return to Earth. Our laboratory is currently developing a gait adaptability training program that is designed to facilitate recovery of locomotor function following a return to a gravitational environment. The training program exploits the ability of the sensorimotor system to generalize from exposure to multiple adaptive challenges during training so that the gait control system essentially learns to learn and therefore can reorganize more rapidly when faced with a novel adaptive challenge. We have previously confirmed that subjects participating in adaptive generalization training programs using a variety of visuomotor distortions can enhance their ability to adapt to a novel sensorimotor environment. Importantly, this increased adaptability was retained even one month after completion of the training period. Adaptive generalization has been observed in a variety of other tasks requiring sensorimotor transformations including manual control tasks and reaching (Bock et al., 2001, Seidler, 2003) and obstacle avoidance during walking (Lam and Dietz, 2004). Taken together, the evidence suggests that a training regimen exposing crewmembers to variation in locomotor conditions, with repeated transitions among states, may enhance their ability to learn how to reassemble appropriate locomotor patterns upon return from microgravity. We believe exposure to this type of training will extend crewmembers locomotor behavioral repertoires, facilitating the return of functional mobility after long duration space flight. Our proposed training protocol will compel subjects to develop new behavioral solutions under varying sensorimotor demands. Over time subjects will learn to create appropriate locomotor solution more rapidly enabling acquisition of mobility sooner after long-duration space flight. Our laboratory is currently developing adaptive generalization training procedures and the

  5. Solving delay differential equations in S-ADAPT by method of steps.

    PubMed

    Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech

    2013-09-01

    S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement in S-ADAPT a DDE solver using the method of steps. The method of steps allows one to solve virtually any DDE system by transforming it to an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with ones obtained by the MATLAB DDE solver dde23. The estimation of parameters was tested on MATLAB-simulated population pharmacodynamics data. The S-ADAPT-generated solutions for the DDE problems agreed with the explicit solutions, as well as with the MATLAB-produced solutions, to at least 7 significant digits. The population parameter estimates obtained using importance sampling expectation-maximization in S-ADAPT agreed with the ones used to generate the data. Published by Elsevier Ireland Ltd.
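
    The method of steps itself is easy to demonstrate outside S-ADAPT: integrate the DDE interval by interval with an ordinary ODE solver, feeding the delayed term from the stored solution of an earlier interval. The sketch below does this for the toy equation dy/dt = -y(t - tau) with constant history y = 1 for t <= 0; it illustrates the principle only and is not the S-ADAPT/LSODA implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

tau, n_steps = 1.0, 5
history = [lambda t: 1.0]                    # index 0: constant pre-history, then one piece per interval

def delayed(t, k):
    """Evaluate y(t - tau) from the pre-history or a previously solved interval."""
    td = t - tau
    if td <= 0.0:
        return 1.0
    piece = min(int(np.floor(td / tau)) + 1, k - 1)   # history[piece] covers td
    return float(history[piece](td))

y0 = [1.0]
for k in range(1, n_steps + 1):
    t_span = ((k - 1) * tau, k * tau)
    sol = solve_ivp(lambda t, y: [-delayed(t, k)], t_span, y0, dense_output=True, rtol=1e-8)
    history.append(lambda t, s=sol: s.sol(t)[0])      # store this interval's dense output
    y0 = [sol.y[0, -1]]

# analytic values on the first two intervals for comparison: y(1) = 0, y(2) = -0.5
print("y(1) numeric:", history[1](1.0), " exact: 0.0")
print("y(2) numeric:", history[2](2.0), " exact: -0.5")
```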

  6. Ketamine. A solution to procedural pain in burned children.

    PubMed

    Groeneveld, A; Inkson, T

    1992-09-01

    Our experience has shown ketamine to be a safe and effective method of providing pain relief during specific procedures in burned children. It renders high doses of narcotics unnecessary and offers children the benefit of general anesthesia without the requirement of endotracheal intubation and a trip to the operating room. The response of parents and staff to the use of ketamine has been positive. Parents often experience feelings of guilt following injury to a child and are eager to employ methods that reduce their child's pain. So far, no parent has refused the administration of ketamine; some have even asked that it be used during subsequent procedures on their child. With adequate pre-procedure teaching, parents are prepared for the possible occurrence of emergent reactions and can assist in reorienting the child during recovery. Staff have found that the stress of doing painful procedures on children is reduced when ketamine is used. The procedures tend to be quicker and the predicament of working on a screaming, agitated child is eliminated. At the same time, nursing staff have had to get used to the nystagmic gaze of the children and accept that these patients are truly anesthetized even though they might move and talk. Despite the success we and others have had with ketamine, several questions about its use in burn patients remain unanswered. The literature does not answer such questions as: Which nursing measures reduce the incidence of emergent reactions? How many ketamine anesthetics can safely be administered to one individual? How does the frequency of administration relate to tolerance in a burn patient? Are there detrimental effects of frequent or long-term use? Clearly, an understanding of these questions is necessary to determine the safe boundaries of ketamine use in burn patients. Ketamine is not a panacea for the problem of pain in burned children. But it is one means of managing procedural pain, which is, after all, a significant clinical

  7. A proposal for amending administrative law to facilitate adaptive management

    NASA Astrophysics Data System (ADS)

    Craig, Robin K.; Ruhl, J. B.; Brown, Eleanor D.; Williams, Byron K.

    2017-07-01

    In this article we examine how federal agencies use adaptive management. In order for federal agencies to implement adaptive management more successfully, administrative law must adapt to adaptive management, and we propose changes in administrative law that will help to steer the current process out of a dead end. Adaptive management is a form of structured decision making that is widely used in natural resources management. It involves specific steps integrated in an iterative process for adjusting management actions as new information becomes available. Theoretical requirements for adaptive management notwithstanding, federal agency decision making is subject to the requirements of the federal Administrative Procedure Act, and state agencies are subject to the states’ parallel statutes. We argue that conventional administrative law has unnecessarily shackled effective use of adaptive management. We show that through a specialized ‘adaptive management track’ of administrative procedures, the core values of administrative law—especially public participation, judicial review, and finality— can be implemented in ways that allow for more effective adaptive management. We present and explain draft model legislation (the Model Adaptive Management Procedure Act) that would create such a track for the specific types of agency decision making that could benefit from adaptive management.

  8. Apically Extruded Debris after Retreatment Procedure with Reciproc, ProTaper Next, and Twisted File Adaptive Instruments.

    PubMed

    Yılmaz, Koray; Özyürek, Taha

    2017-04-01

    The aim of this study was to compare the amount of debris extruded from the apex during retreatment procedures with ProTaper Next (PTN; Dentsply Maillefer, Ballaigues, Switzerland), Reciproc (RCP; VDW, Munich, Germany), and Twisted File Adaptive (TFA; SybronEndo, Orange, CA) files and the duration of these retreatment procedures. Ninety upper central incisor teeth were prepared and filled with gutta-percha and AH Plus sealer (Dentsply DeTrey, Konstanz, Germany) using the vertical compaction technique. The teeth were randomly divided into 3 groups of 30 for removal of the root filling material with PTN, RCP, and TFA files. The apically extruded debris was collected in preweighed Eppendorf tubes. The time for gutta-percha removal was recorded. Data were statistically analyzed using Kruskal-Wallis and 1-way analysis of variance tests. The amount of debris extruded was RCP > TFA > PTN. Compared with the PTN group, the amount of debris extruded in the RCP group was statistically significantly higher (P < .001). There was no statistically significant difference among the RCP, TFA, and PTN groups regarding the time for retreatment (P > .05). Within the limitations of this in vitro study, all groups were associated with debris extrusion from the apex. The RCP file system led to higher levels of apical extrusion than the PTN file system. In addition, there was no significant difference among groups in the duration of the retreatment procedures. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  9. [European Portuguese EARS test battery adaptation].

    PubMed

    Alves, Marisa; Ramos, Daniela; Oliveira, Graça; Alves, Helena; Anderson, Ilona; Magalhães, Isabel; Martins, Jorge H; Simões, Margarida; Ferreira, Raquel; Fonseca, Rita; Andrade, Susana; Silva, Luís; Ribeiro, Carlos; Ferreira, Pedro Lopes

    2014-01-01

    The use of adequate assessment tools in health care is crucial for the management of care. The lack of specific tools in Portugal for assessing the performance of children who use cochlear implants motivated the translation and adaptation of the EARS (Evaluation of Auditory Responses to Speech) test battery into European Portuguese. This test battery is today one of the most commonly used by (re)habilitation teams of deaf children who use cochlear implants worldwide. The goal to be achieved with the validation of EARS was to provide (re)habilitation teams an instrument that enables: (i) monitoring the progress of individual (re)habilitation, (ii) managing a (re)habilitation program according to objective results, comparable between different (re)habilitation teams, (iii) obtaining data that can be compared with the results of international teams, and (iv) improving engagement and motivation of the family and other professionals from local teams. For the test battery translation and adaptation process, the adopted procedures were the following: (i) translation of the English version into European Portuguese by a professional translator, (ii) revision of the translation performed by an expert panel, including doctors, speech-language pathologists and audiologists, (iii) adaptation of the test stimuli by the team's speech-language pathologist, and (iv) further review by the expert panel. For each of the tests that belong to the EARS battery, the introduced adaptations and adjustments are presented, combining the characteristics and objectives of the original tests with the linguistic and cultural specificities of the Portuguese population. The difficulties that have been encountered during the translation and adaptation process and the adopted solutions are discussed. Comparisons are made with other versions of the EARS battery. We argue that the translation and the adaptation process followed for the EARS test battery into European Portuguese was correctly conducted

  10. Modified Sham Feeding of Sweet Solutions in Women with and without Bulimia Nervosa

    PubMed Central

    Klein, DA; Schebendach, JE; Brown, AJ; Smith, GP; Walsh, BT

    2009-01-01

    Although it is possible that binge eating in humans is due to increased responsiveness of orosensory excitatory controls of eating, there is no direct evidence for this because food ingested during a test meal stimulates both orosensory excitatory and postingestive inhibitory controls. To overcome this problem, we adapted the modified sham feeding technique (MSF) to measure the orosensory excitatory control of intake of a series of sweetened solutions. Previously published data showed the feasibility of a “sip-and-spit” procedure in nine healthy control women using solutions flavored with cherry Kool Aid® and sweetened with sucrose (0-20%)1. The current study extended this technique to measure the intake of artificially sweetened solutions in women with bulimia nervosa (BN) and in women with no history of eating disorders. Ten healthy women and 11 women with BN were randomly presented with cherry Kool Aid® solutions sweetened with five concentrations of aspartame (0, 0.01, 0.03, 0.08 and 0.28%) in a closed opaque container fitted with a straw. They were instructed to sip as much as they wanted of the solution during 1-minute trials and to spit the fluid out into another opaque container. Across all subjects, presence of sweetener increased intake (p<0.001). Women with BN sipped 40.5-53.1% more of all solutions than controls (p=0.03 for total intake across all solutions). Self-report ratings of liking, wanting and sweetness of solutions did not differ between groups. These results support the feasibility of a MSF procedure using artificially sweetened solutions, and the hypothesis that the orosensory stimulation of MSF provokes larger intake in women with BN than controls. PMID:18773914

  11. Adaptive multitaper time-frequency spectrum estimation

    NASA Astrophysics Data System (ADS)

    Pitton, James W.

    1999-11-01

    In earlier work, Thomson's adaptive multitaper spectrum estimation method was extended to the nonstationary case. This paper reviews the time-frequency multitaper method and the adaptive procedure, and explores some properties of the eigenvalues and eigenvectors. The variance of the adaptive estimator is used to construct an adaptive smoother, which is used to form a high resolution estimate. An F-test for detecting and removing sinusoidal components in the time-frequency spectrum is also given.
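
    As a concrete illustration of the multitaper idea referenced above, the sketch below computes a stationary power-spectrum estimate by averaging eigenspectra from DPSS (Slepian) tapers, weighting each by its concentration ratio as a fixed-weight stand-in for Thomson's iterative adaptive weighting. It is only a simplified, stationary relative of the time-frequency method in the record; the time-bandwidth product, taper count, and test signal are assumptions.

      import numpy as np
      from scipy.signal.windows import dpss

      def multitaper_psd(x, fs=1.0, NW=4.0, K=7):
          """Average K DPSS eigenspectra, weighted by their concentration ratios."""
          N = len(x)
          tapers, ratios = dpss(N, NW, Kmax=K, return_ratios=True)   # shape (K, N)
          eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
          weights = ratios / ratios.sum()          # fixed weights (non-iterative)
          psd = (weights[:, None] * eigenspectra).sum(axis=0) / fs
          return np.fft.rfftfreq(N, d=1.0 / fs), psd

      if __name__ == "__main__":
          fs = 100.0
          t = np.arange(0, 10, 1 / fs)
          rng = np.random.default_rng(0)
          x = np.sin(2 * np.pi * 12.5 * t) + 0.5 * rng.standard_normal(t.size)
          freqs, psd = multitaper_psd(x, fs=fs)
          print("spectral peak near", freqs[np.argmax(psd)], "Hz")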

  12. Grid adaption for hypersonic flow

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Tiwari, Surendra N.; Smith, Robert E.

    1987-01-01

    The methods of grid adaption are reviewed and a method is developed with the capability of adaption to several flow variables. This method is based on a variational approach and is an algebraic method which does not require the solution of partial differential equations. Also the method has been formulated in such a way that there is no need for any matrix inversion. The method is used in conjunction with the calculation of hypersonic flow over a blunt nose body. The equations of motion are the compressible Navier-Stokes equations where all viscous terms are retained. They are solved by the MacCormack time-splitting method. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.
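
    The method described above adapts the grid algebraically to flow variables without solving partial differential equations or inverting matrices. The sketch below illustrates the same general idea in one dimension by equidistributing a gradient-based weight through cumulative sums and interpolation; it is not the variational method of the paper, and the weight function, the parameter alpha, and the test profile are assumptions.

      import numpy as np

      def adapt_grid(x, f, alpha=20.0):
          """Redistribute nodes so that w = 1 + alpha*|df/dx| is equidistributed."""
          w = 1.0 + alpha * np.abs(np.gradient(f, x))
          # Cumulative weight "arc length" by the trapezoidal rule.
          s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
          # Algebraic inversion of s(x): interpolate x at equal increments of s.
          return np.interp(np.linspace(0.0, s[-1], x.size), s, x)

      if __name__ == "__main__":
          x = np.linspace(0.0, 1.0, 41)
          f = np.tanh(50.0 * (x - 0.5))            # steep layer standing in for a shock
          x_new = adapt_grid(x, f)
          print("smallest cell after adaptation:", np.diff(x_new).min())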

  13. Adaptive management

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  14. Adaptive building skin structures

    NASA Astrophysics Data System (ADS)

    Del Grosso, A. E.; Basso, P.

    2010-12-01

    The concept of adaptive and morphing structures has gained considerable attention in the recent years in many fields of engineering. In civil engineering very few practical applications are reported to date however. Non-conventional structural concepts like deployable, inflatable and morphing structures may indeed provide innovative solutions to some of the problems that the construction industry is being called to face. To give some examples, searches for low-energy consumption or even energy-harvesting green buildings are amongst such problems. This paper first presents a review of the above problems and technologies, which shows how the solution to these problems requires a multidisciplinary approach, involving the integration of architectural and engineering disciplines. The discussion continues with the presentation of a possible application of two adaptive and dynamically morphing structures which are proposed for the realization of an acoustic envelope. The core of the two applications is the use of a novel optimization process which leads the search for optimal solutions by means of an evolutionary technique while the compatibility of the resulting configurations of the adaptive envelope is ensured by the virtual force density method.

  15. Adaptive and dynamic meshing methods for numerical simulations

    NASA Astrophysics Data System (ADS)

    Acikgoz, Nazmiye

    For the numerical simulation of many problems of engineering interest, it is desirable to have an automated mesh adaption tool capable of producing high quality meshes with an affordably low number of mesh points. This is especially important for problems that are characterized by anisotropic features of the solution and require mesh clustering in the direction of high gradients. Another significant issue in meshing emerges in the area of unsteady simulations with moving boundaries or interfaces, where the motion of the boundary has to be accommodated by deforming the computational grid. Similarly, there exist problems where the current mesh needs to be adapted to get more accurate solutions because either the high gradient regions are initially predicted inaccurately or they change location throughout the simulation. To solve these problems, we propose three novel procedures. For this purpose, in the first part of this work, we present an optimization procedure for three-dimensional anisotropic tetrahedral grids based on metric-driven h-adaptation. The desired anisotropy in the grid is dictated by a metric that defines the size, shape, and orientation of the grid elements throughout the computational domain. Through the use of topological and geometrical operators, the mesh is iteratively adapted until the final mesh minimizes a given objective function. In this work, the objective function measures the distance between the metric of each simplex and a target metric, which can be either user-defined (a-priori) or the result of a-posteriori error analysis. During the adaptation process, one tries to decrease the metric-based objective function until the final mesh is compliant with the target within a given tolerance. However, in regions such as corners and complex face intersections, the compliance condition was found to be very difficult or sometimes impossible to satisfy. In order to address this issue, we propose an optimization process based on an ad

  16. Function Allocation in Complex Socio-Technical Systems: Procedure usage in nuclear power and the Context Analysis Method for Identifying Design Solutions (CAMIDS) Model

    NASA Astrophysics Data System (ADS)

    Schmitt, Kara Anne

    This research aims to prove that strict adherence to procedures and rigid compliance to process in the US Nuclear Industry may not prevent incidents or increase safety. According to the Institute of Nuclear Power Operations, the nuclear power industry has seen a recent rise in events, and this research claims that a contributing factor to this rise is organizational, cultural, and based on people's overreliance on procedures and policy. Understanding the proper balance of function allocation, automation and human decision-making is imperative to creating a nuclear power plant that is safe, efficient, and reliable. This research claims that new generations of operators are less engaged in thinking because they have been instructed to follow procedures to a fault. According to operators, they were once expected to know the plant and its interrelations, but organizationally more importance is now put on following procedure and policy. Literature reviews were performed, experts were questioned, and a model for context analysis was developed. The Context Analysis Method for Identifying Design Solutions (CAMIDS) Model was created, verified and validated through both peer review and application in real world scenarios in active nuclear power plant simulators. These experiments supported the claim that strict adherence and rigid compliance to procedures may not increase safety, by studying the industry's propensity for following incorrect procedures and the cases in which doing so directly affects the safety or security of the plant. The findings of this research indicate that the younger generations of operators rely highly on procedures, and the organizational pressures of required compliance to procedures may lead to incidents within the plant because operators feel pressured into following the rules and policy above performing the correct actions in a timely manner. The findings support computer-based procedures, efficient alarm systems, and skill of the craft matrices. The solution to

  17. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  18. QUEST - A Bayesian adaptive psychometric method

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Pelli, D. G.

    1983-01-01

    An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
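
    The sketch below illustrates the kind of Bayesian adaptive staircase the record describes: a posterior over the log threshold is maintained on a grid, each trial is placed at the current posterior mode, and the posterior is updated with a fixed-shape psychometric function. The Weibull parameters and the simulated observer are illustrative assumptions, not values from QUEST itself.

      import numpy as np

      def p_correct(log_x, log_threshold, beta=3.5, gamma=0.5, delta=0.01):
          """Weibull psychometric function on a log-intensity axis (2AFC form)."""
          p = 1.0 - (1.0 - gamma) * np.exp(-10.0 ** (beta * (log_x - log_threshold)))
          return (1.0 - delta) * p + delta * gamma

      rng = np.random.default_rng(0)
      log_c = np.linspace(-3.0, 0.0, 301)            # candidate log10 thresholds
      posterior = np.full(log_c.size, 1.0 / log_c.size)
      true_threshold = -1.2                          # hidden simulated observer

      for trial in range(60):
          level = log_c[np.argmax(posterior)]        # test at the posterior mode
          correct = rng.random() < p_correct(level, true_threshold)
          likelihood = p_correct(level, log_c)
          posterior *= likelihood if correct else (1.0 - likelihood)
          posterior /= posterior.sum()

      print("estimated log threshold:", round(log_c[np.argmax(posterior)], 2))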

  19. Grid adaption for bluff bodies

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Tiwari, Surendra N.

    1986-01-01

    Methods of grid adaptation are reviewed and a method is developed with the capability of adaptation to several flow variables. This method is based on a variational approach and is an algebraic method which does not require the solution of partial differential equations. Also the method was formulated in such a way that there is no need for any matrix inversion. The method is used in conjunction with the calculation of hypersonic flow over a blunt nose. The equations of motion are the compressible Navier-Stokes equations where all viscous terms are retained. They are solved by the MacCormack time-splitting method and a movie was produced which shows simultaneously the transient behavior of the solution and the grid adaptation. The results are compared with the experimental and other numerical results.

  20. Self-adaptive difference method for the effective solution of computationally complex problems of boundary layer theory

    NASA Technical Reports Server (NTRS)

    Schoenauer, W.; Daeubler, H. G.; Glotz, G.; Gruening, J.

    1986-01-01

    An implicit difference procedure for the solution of equations for a chemically reacting hypersonic boundary layer is described. Difference forms of arbitrary error order in the x and y coordinate plane were used to derive estimates for discretization error. Computational complexity and time were minimized by the use of this difference method and the iteration of the nonlinear boundary layer equations was regulated by discretization error. Velocity and temperature profiles are presented for Mach 20.14 and Mach 18.5; variables are velocity profiles, temperature profiles, mass flow factor, Stanton number, and friction drag coefficient; three figures include numeric data.

  1. 40 CFR 60.648 - Optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure. 1

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... hydrogen sulfide in acid gas-Tutwiler Procedure. 1 60.648 Section 60.648 Protection of Environment..., 2011 § 60.648 Optional procedure for measuring hydrogen sulfide in acid gas—Tutwiler Procedure. 1 1 Gas... dilute solutions are used. In principle, this method consists of titrating hydrogen sulfide in a gas...

  2. 40 CFR 60.648 - Optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure. 1

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... hydrogen sulfide in acid gas-Tutwiler Procedure. 1 60.648 Section 60.648 Protection of Environment..., 2011 § 60.648 Optional procedure for measuring hydrogen sulfide in acid gas—Tutwiler Procedure. 1 1 Gas... dilute solutions are used. In principle, this method consists of titrating hydrogen sulfide in a gas...

  3. Procedure for Adaptive Laboratory Evolution of Microorganisms Using a Chemostat.

    PubMed

    Jeong, Haeyoung; Lee, Sang J; Kim, Pil

    2016-09-20

    Natural evolution involves genetic diversity arising from factors such as environmental change and selection among small populations. Adaptive laboratory evolution (ALE) refers to the experimental situation in which evolution is observed using living organisms under controlled conditions and stressors; organisms are thereby artificially forced to make evolutionary changes. Microorganisms are subject to a variety of stressors in the environment and are capable of regulating certain stress-inducible proteins to increase their chances of survival. Naturally occurring spontaneous mutations bring about changes in a microorganism's genome that affect its chances of survival. Long-term exposure to chemostat culture provokes an accumulation of spontaneous mutations and renders the most adaptable strain dominant. Compared to the colony transfer and serial transfer methods, chemostat culture entails the highest number of cell divisions and, therefore, the highest number of diverse populations. Although chemostat culture for ALE requires more complicated culture devices, it is less labor intensive once the operation begins. Comparative genomic and transcriptome analyses of the adapted strain provide evolutionary clues as to how the stressors contribute to mutations that overcome the stress. The goal of the current paper is to bring about accelerated evolution of microorganisms under controlled laboratory conditions.

  4. The Hartree-Fock calculation of the magnetic properties of molecular solutes

    NASA Astrophysics Data System (ADS)

    Cammi, R.

    1998-08-01

    In this paper we set the formal bases for the calculation of the magnetic susceptibility and of the nuclear magnetic shielding tensors for molecular solutes described within the framework of the polarizable continuum model (PCM). The theory has been developed at the self-consistent field (SCF) level and adapted to be used within some of the most widely used computational procedures, i.e., the gauge invariant atomic orbital method (GIAO) and the continuous set gauge transformation method (CSGT). Numerical results for the magnetizabilities and chemical shielding of acetonitrile and nitromethane in various solvents computed with the PCM-CSGT method are also presented.

  5. Interdisciplinarity in Adapted Physical Activity

    ERIC Educational Resources Information Center

    Bouffard, Marcel; Spencer-Cavaliere, Nancy

    2016-01-01

    It is commonly accepted that inquiry in adapted physical activity involves the use of different disciplines to address questions. It is often advanced today that complex problems of the kind frequently encountered in adapted physical activity require a combination of disciplines for their solution. At the present time, individual research…

  6. The provision of aids and adaptations, risk assessments, and incident reporting and recording procedures in relation to injury prevention for adults with intellectual disabilities: cohort study.

    PubMed

    Finlayson, J; Jackson, A; Mantry, D; Morrison, J; Cooper, S-A

    2015-06-01

    Adults with intellectual disabilities (IDs) experience a higher incidence of injury, compared with the general population. The aim of this study was to investigate the provision of aids and adaptations, residential service providers' individual risk assessments and training in these, and injury incident recording and reporting procedures, in relation to injury prevention. Interviews were conducted with a community-based cohort of adults with IDs (n = 511) who live in Greater Glasgow, Scotland, UK and their key carer (n = 446). They were asked about their aids and adaptations at home, and paid carers (n = 228) were asked about individual risk assessments, their training, and incident recording and reporting procedures. Four hundred and twelve (80.6%) of the adults with IDs had at least one aid or adaptation at home to help prevent injury. However, a proportion who might benefit, were not in receipt of them, and surprisingly few had temperature controlled hot water or a bath thermometer in place to help prevent burns/scalds, or kitchen safety equipment to prevent burns/scalds from electric kettles or irons. Fifty-four (23.7%) of the paid carers were not aware of the adult they supported having had any risk assessments, and only 142 (57.9%) had received any training on risk assessments. Considerable variation in incident recording and reporting procedures was evident. More work is needed to better understand, and more fully incorporate, best practice injury prevention measures into routine support planning for adults with IDs within a positive risk-taking and risk reduction framework. © 2014 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.

  7. An interpolation-free ALE scheme for unsteady inviscid flows computations with large boundary displacements over three-dimensional adaptive grids

    NASA Astrophysics Data System (ADS)

    Re, B.; Dobrzynski, C.; Guardone, A.

    2017-07-01

    A novel strategy to solve the finite volume discretization of the unsteady Euler equations within the Arbitrary Lagrangian-Eulerian framework over tetrahedral adaptive grids is proposed. The volume changes due to local mesh adaptation are treated as continuous deformations of the finite volumes and they are taken into account by adding fictitious numerical fluxes to the governing equation. This interpretation makes it possible to avoid any explicit interpolation of the solution between different grids and to compute grid velocities so that the Geometric Conservation Law is automatically fulfilled even when the connectivity changes. The solution on the new grid is obtained through standard ALE techniques, thus preserving the underlying scheme properties, such as conservativeness, stability and monotonicity. The adaptation procedure includes node insertion, node deletion, edge swapping, and point relocation, and it is exploited both to enhance grid quality after the boundary movement and to modify the grid spacing to increase solution accuracy. The presented approach is assessed by three-dimensional simulations of steady and unsteady flow fields. The capability of dealing with large boundary displacements is demonstrated by computing the flow around the translating infinite- and finite-span NACA 0012 wing moving through the domain at the flight speed. The proposed adaptive scheme is also applied to the simulation of a pitching infinite-span wing, where the two-dimensional character of the flow is well reproduced despite the three-dimensional unstructured grid. Finally, the scheme is exploited in a piston-induced shock-tube problem to take into account simultaneously the large deformation of the domain and the shock wave. In all tests, mesh adaptation plays a crucial role.
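
    The sketch below illustrates, in one dimension, the ALE bookkeeping the record relies on: face velocities enter the numerical flux and the new cell volumes are computed from the same face motion, so a uniform free-stream state is preserved exactly, which is a discrete check of the Geometric Conservation Law. It is not the adaptive three-dimensional scheme of the paper; the advection speed, mesh motion, and boundary states are assumptions.

      import numpy as np

      A = 1.0        # advection speed
      U_INF = 1.0    # free-stream (and inflow ghost) state

      def face_velocity(x, t):
          # Prescribed mesh motion; the endpoints of the domain stay fixed.
          return 0.2 * np.sin(2.0 * np.pi * x) * np.cos(2.0 * np.pi * t)

      def ale_step(u, x, t, dt):
          w = face_velocity(x, t)
          x_new = x + dt * w
          vol_old, vol_new = np.diff(x), np.diff(x_new)
          F = np.empty(x.size)
          F[0] = (A - w[0]) * U_INF                 # inflow face uses the ghost state
          rel = A - w[1:-1]                         # speed relative to the moving faces
          F[1:-1] = np.where(rel >= 0.0, rel * u[:-1], rel * u[1:])
          F[-1] = (A - w[-1]) * u[-1]               # outflow face, upwinded from inside
          return (vol_old * u - dt * (F[1:] - F[:-1])) / vol_new, x_new

      x = np.linspace(0.0, 1.0, 51)                 # cell faces
      u = np.full(x.size - 1, U_INF)                # uniform cell averages
      t, dt = 0.0, 0.005
      for _ in range(200):
          u, x = ale_step(u, x, t, dt)
          t += dt
      print("max deviation from the uniform state:", np.abs(u - U_INF).max())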

  8. Multigrid solution strategies for adaptive meshing problems

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1995-01-01

    This paper discusses the issues which arise when combining multigrid strategies with adaptive meshing techniques for solving steady-state problems on unstructured meshes. A basic strategy is described, and demonstrated by solving several inviscid and viscous flow cases. Potential inefficiencies in this basic strategy are exposed, and various alternate approaches are discussed, some of which are demonstrated with an example. Although each particular approach exhibits certain advantages, all methods have particular drawbacks, and the formulation of a completely optimal strategy is considered to be an open problem.
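
    For readers unfamiliar with the multigrid component discussed above, the sketch below shows a bare geometric V-cycle for the one-dimensional Poisson problem with weighted Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation. It illustrates only the smooth/restrict/correct structure; the smoother, cycle depth, and right-hand side are assumptions and none of it reflects the unstructured-mesh strategies of the paper.

      import numpy as np

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
          return r

      def jacobi(u, f, h, sweeps=3, omega=2.0 / 3.0):
          for _ in range(sweeps):
              u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1] - 2.0 * u[1:-1])
          return u

      def restrict(r):                                  # full weighting to the coarse grid
          rc = np.zeros((r.size + 1) // 2)
          rc[1:-1] = 0.25 * (r[1:-2:2] + 2.0 * r[2:-1:2] + r[3::2])
          return rc

      def prolong(ec):                                  # linear interpolation to the fine grid
          ef = np.zeros(2 * ec.size - 1)
          ef[::2] = ec
          ef[1::2] = 0.5 * (ec[:-1] + ec[1:])
          return ef

      def v_cycle(u, f, h):
          if u.size == 3:                               # coarsest grid: one unknown, solve exactly
              u[1] = 0.5 * (h**2 * f[1] + u[0] + u[2])
              return u
          u = jacobi(u, f, h)                           # pre-smoothing
          rc = restrict(residual(u, f, h))
          correction = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
          u += prolong(correction)                      # coarse-grid correction
          return jacobi(u, f, h)                        # post-smoothing

      x = np.linspace(0.0, 1.0, 2**7 + 1)
      h = x[1] - x[0]
      f = np.pi**2 * np.sin(np.pi * x)                  # exact solution is sin(pi*x)
      u = np.zeros_like(x)
      for cycle in range(8):
          u = v_cycle(u, f, h)
      print("max residual after 8 V-cycles:", np.abs(residual(u, f, h)).max())
      print("max error vs exact solution  :", np.abs(u - np.sin(np.pi * x)).max())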

  9. Economic Justification Of Robust Or Adaptive Planning And Design Of Resilient Water Resources Systems Under Deep Uncertainty: A Case Study In The Iolanda Water Treatment Plant Of Lusaka, Zambia

    NASA Astrophysics Data System (ADS)

    Mendoza, G.; Tkach, M.; Kucharski, J.; Chaudhry, R.

    2017-12-01

    This discussion is focused on the application of a bottom-up vulnerability assessment procedure for climate-resilience planning at a water treatment plant for the city of Iolanda, Zambia. This project is a Millennium Challenge Corporation (MCC) initiative with technical support from the UNESCO category II International Center for Integrated Water Resources Management (ICIWaRM), whose secretariat is at the US Army Corps of Engineers Institute for Water Resources. The MCC is an innovative and independent U.S. foreign aid agency that is helping lead the fight against global poverty. The bottom-up vulnerability assessment framework identifies critical performance thresholds, examines the external drivers that would lead to failure, establishes the plausibility of failure and the associated analytical uncertainty, and provides the economic justification for robustness or adaptability. This presentation will showcase the experience of applying the bottom-up framework to a region that is very vulnerable to climate variability, has poor institutional capacities, and has very limited data. It will illustrate the technical analysis and a decision process that led to a non-obvious climate-robust solution. Most importantly, it will highlight the challenges of utilizing discounted cash flow analysis (DCFA), such as net present value, in justifying robust or adaptive solutions, i.e., comparing solutions under different future risks. We highlight a solution to manage the potential biases these DCFA procedures can incur.

  10. In vitro Evaluation of the Marginal Fit and Internal Adaptation of Zirconia and Lithium Disilicate Single Crowns: Micro-CT Comparison Between Different Manufacturing Procedures.

    PubMed

    Riccitiello, Francesco; Amato, Massimo; Leone, Renato; Spagnuolo, Gianrico; Sorrentino, Roberto

    2018-01-01

    Prosthetic precision can be affected by several variables, such as restorative materials, manufacturing procedures, framework design, cementation techniques and aging. Marginal adaptation is critical for the longevity and long-term clinical success of dental restorations. Marginal misfit may lead to cement exposure to oral fluids, resulting in microleakage and cement dissolution. As a consequence, marginal discrepancies enhance percolation of bacteria, food and oral debris, potentially causing secondary caries, endodontic inflammation and periodontal disease. The aim of the present in vitro study was to evaluate the marginal and internal adaptation of zirconia and lithium disilicate single crowns, produced with different manufacturing procedures. Forty-five intact human maxillary premolars were prepared for single crowns by means of standardized preparations. All-ceramic crowns were fabricated with either CAD-CAM or heat-pressing procedures (CAD-CAM zirconia, CAD-CAM lithium disilicate, heat-pressed lithium disilicate) and cemented onto the teeth with a universal resin cement. Non-destructive micro-CT scanning was used to measure the marginal and internal gaps in the coronal and sagittal planes; then, precision of fit measurements were calculated in dedicated software and the results were statistically analyzed. The heat-pressed lithium disilicate crowns were significantly less accurate at the prosthetic margins (p<0.05) while they performed better at the occlusal surface (p<0.05). No significant differences were noticed between CAD-CAM zirconia and lithium disilicate crowns (p>0.05); nevertheless CAD-CAM zirconia copings presented the best marginal fit among the experimental groups. As to the thickness of the cement layer, reduced amounts of luting agent were noticed at the finishing line, whereas a thicker layer was reported at the occlusal level. Within the limitations of the present in vitro investigation, the following conclusions can be drawn: the recorded

  11. Fast solution of elliptic partial differential equations using linear combinations of plane waves.

    PubMed

    Pérez-Jordá, José M

    2016-02-01

    Given an arbitrary elliptic partial differential equation (PDE), a procedure for obtaining its solution is proposed based on the method of Ritz: the solution is written as a linear combination of plane waves and the coefficients are obtained by variational minimization. The PDE to be solved is cast as a system of linear equations Ax=b, where the matrix A is not sparse, which prevents the straightforward application of standard iterative methods in order to solve it. This lack of sparsity can be circumvented by means of a recursive bisection approach based on the fast Fourier transform, which makes it possible to implement fast versions of some stationary iterative methods (such as Gauss-Seidel) consuming O(N log N) memory and executing an iteration in O(N log² N) time, N being the number of plane waves used. In a similar way, fast versions of Krylov subspace methods and multigrid methods can also be implemented. These procedures are tested on Poisson's equation expressed in adaptive coordinates. It is found that the best results are obtained with the GMRES method using a multigrid preconditioner with Gauss-Seidel relaxation steps.
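
    As context for the plane-wave expansion discussed above, the sketch below solves the periodic one-dimensional Poisson equation in a Fourier (plane-wave) basis. In this uniform-coordinate special case the coefficients decouple and the system is diagonal, which is precisely what is lost in the adaptive-coordinate setting of the paper and why the authors turn to FFT-accelerated iterative solvers; the grid size and source term are assumptions.

      import numpy as np

      N = 256
      x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
      f = np.sin(3.0 * x) + 0.5 * np.cos(5.0 * x)              # zero-mean source term

      k = 2.0 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])       # angular wavenumbers
      f_hat = np.fft.fft(f)
      u_hat = np.zeros_like(f_hat)
      nonzero = k != 0.0
      u_hat[nonzero] = f_hat[nonzero] / k[nonzero] ** 2        # -(ik)^2 u_hat = f_hat
      u = np.fft.ifft(u_hat).real

      exact = np.sin(3.0 * x) / 9.0 + 0.5 * np.cos(5.0 * x) / 25.0
      print("max error of the plane-wave solution:", np.abs(u - exact).max())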

  12. Adapting heart failure guidelines for nursing care in home health settings: challenges and solutions.

    PubMed

    Radhakrishnan, Kavita; Topaz, Maxim; Masterson Creber, Ruth

    2014-07-01

    Nurses provide most of home health services for patients with heart failure, and yet there are no evidence-based practice guidelines developed for home health nurses. The purpose of this article was to review the challenges and solutions for adapting generally available HF clinical practice guidelines to home health nursing. Appropriate HF guidelines were identified and home health nursing-relevant guidelines were extracted by the research team. In addition, a team of nursing academic and practice experts evaluated the extracted guidelines and reached consensus through Delphi rounds. We identified 172 recommendations relevant to home health nursing from the American Heart Association and Heart Failure Society of America guidelines. The recommendations were divided into 5 groups (generic, minority populations, normal ejection fraction, reduced ejection fraction, and comorbidities) and further subgroups. Experts agreed that 87% of the recommendations selected by the research team were relevant to home health nursing and rejected 6% of the selected recommendations. Experts' opinions were split on 7% of guideline recommendations. Experts mostly disagreed on recommendations related to HF medication and laboratory prescription as well as HF patient assessment. These disagreements were due to lack of patient information available to home health nurses as well as unclear understanding of scope of practice regulations for home health nursing. After 2 Delphi rounds over 8 months, we achieved 100% agreement on the recommendations. The finalized guideline included 153 recommendations. Guideline adaptation projects should include a broad scope of nursing practice recommendations from which home health agencies can customize relevant recommendations in accordance with available information and state and agency regulations.

  13. Adaptive sampling in behavioral surveys.

    PubMed

    Thompson, S K

    1997-01-01

    Studies of populations such as drug users encounter difficulties because the members of the populations are rare, hidden, or hard to reach. Conventionally designed large-scale surveys detect relatively few members of the populations so that estimates of population characteristics have high uncertainty. Ethnographic studies, on the other hand, reach suitable numbers of individuals only through the use of link-tracing, chain referral, or snowball sampling procedures that often leave the investigators unable to make inferences from their sample to the hidden population as a whole. In adaptive sampling, the procedure for selecting people or other units to be in the sample depends on variables of interest observed during the survey, so the design adapts to the population as encountered. For example, when self-reported drug use is found among members of the sample, sampling effort may be increased in nearby areas. Types of adaptive sampling designs include ordinary sequential sampling, adaptive allocation in stratified sampling, adaptive cluster sampling, and optimal model-based designs. Graph sampling refers to situations with nodes (for example, people) connected by edges (such as social links or geographic proximity). An initial sample of nodes or edges is selected and edges are subsequently followed to bring other nodes into the sample. Graph sampling designs include network sampling, snowball sampling, link-tracing, chain referral, and adaptive cluster sampling. A graph sampling design is adaptive if the decision to include linked nodes depends on variables of interest observed on nodes already in the sample. Adjustment methods for nonsampling errors such as imperfect detection of drug users in the sample apply to adaptive as well as conventional designs.
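
    The sketch below simulates the selection step of adaptive cluster sampling as described above: starting from a simple random sample of spatial units, the four neighbours of any sampled unit satisfying the condition of interest are added, and the expansion continues until no newly added unit satisfies it. The synthetic clustered population is an assumption, and the design-unbiased estimators that accompany this design are not shown.

      import numpy as np

      rng = np.random.default_rng(1)
      side = 20
      counts = np.zeros((side, side), dtype=int)
      for cx, cy in rng.integers(2, side - 2, size=(4, 2)):     # a few tight clusters
          counts[cx - 1:cx + 2, cy - 1:cy + 2] = rng.poisson(5, size=(3, 3))

      def neighbours(i, j):
          for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              if 0 <= i + di < side and 0 <= j + dj < side:
                  yield i + di, j + dj

      flat = rng.choice(side * side, size=30, replace=False)    # initial simple random sample
      initial = {(k // side, k % side) for k in flat}

      sample = set(initial)
      frontier = [u for u in initial if counts[u] > 0]          # units meeting the condition
      while frontier:
          unit = frontier.pop()
          for nb in neighbours(*unit):
              if nb not in sample:
                  sample.add(nb)
                  if counts[nb] > 0:
                      frontier.append(nb)

      print("units sampled: initial", len(initial), "-> final", len(sample))
      print("non-empty units found:", sum(counts[u] > 0 for u in sample),
            "adaptively vs", sum(counts[u] > 0 for u in initial), "in the initial sample")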

  14. Quality assessment and control of finite element solutions

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Babuska, Ivo

    1987-01-01

    Status and some recent developments in the techniques for assessing the reliability of finite element solutions are summarized. Discussion focuses on a number of aspects including: the major types of errors in the finite element solutions; techniques used for a posteriori error estimation and the reliability of these estimators; the feedback and adaptive strategies for improving the finite element solutions; and postprocessing approaches used for improving the accuracy of stresses and other important engineering data. Also, future directions for research needed to make error estimation and adaptive movement practical are identified.

  15. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  16. 1H NMR quantification in very dilute toxin solutions: application to anatoxin-a analysis.

    PubMed

    Dagnino, Denise; Schripsema, Jan

    2005-08-01

    A complete procedure is described for the extraction, detection and quantification of anatoxin-a in biological samples. Anatoxin-a is extracted from biomass by a routine acid-base extraction. The extract is analysed by GC-MS, without the need for derivatization, with a detection limit of 0.5 ng. A method was developed for the accurate quantification of anatoxin-a in the standard solution to be used for the calibration of the GC analysis. 1H NMR allowed the accurate quantification of microgram quantities of anatoxin-a. The accurate quantification of compounds in standard solutions is rarely discussed, but for compounds like anatoxin-a (toxins with prices in the range of a million dollars a gram), of which generally only milligram quantities or less are available, this factor in the quantitative analysis is certainly not trivial. The method that was developed can easily be adapted for the accurate quantification of other toxins in very dilute solutions.

  17. Transition between free-space Helmholtz equation solutions with plane sources and parabolic wave equation solutions.

    PubMed

    Mahillo-Isla, R; González-Morales, M J; Dehesa-Martínez, C

    2011-06-01

    The slowly varying envelope approximation is applied to the radiation problems of the Helmholtz equation with a planar single-layer and dipolar sources. The analyses of such problems provide procedures to recover solutions of the Helmholtz equation based on the evaluation of solutions of the parabolic wave equation at a given plane. Furthermore, the conditions that must be fulfilled to apply each procedure are also discussed. The relations to previous work are given as well.

  18. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

    An adaptive spectral method was developed for the efficient solution of time-dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high-order spectral methods.

  19. Statistical efficiency of adaptive algorithms.

    PubMed

    Widrow, Bernard; Kamenetsky, Max

    2003-01-01

    The statistical efficiency of a learning algorithm applied to the adaptation of a given set of variable weights is defined as the ratio of the quality of the converged solution to the amount of data used in training the weights. Statistical efficiency is computed by averaging over an ensemble of learning experiences. A high quality solution is very close to optimal, while a low quality solution corresponds to noisy weights and less than optimal performance. In this work, two gradient descent adaptive algorithms are compared, the LMS algorithm and the LMS/Newton algorithm. LMS is simple and practical, and is used in many applications worldwide. LMS/Newton is based on Newton's method and the LMS algorithm. LMS/Newton is optimal in the least squares sense. It maximizes the quality of its adaptive solution while minimizing the use of training data. Many least squares adaptive algorithms have been devised over the years, but no other least squares algorithm can give better performance, on average, than LMS/Newton. LMS is easily implemented, but LMS/Newton, although of great mathematical interest, cannot be implemented in most practical applications. Because of its optimality, LMS/Newton serves as a benchmark for all least squares adaptive algorithms. The performances of LMS and LMS/Newton are compared, and it is found that under many circumstances, both algorithms provide equal performance. For example, when both algorithms are tested with statistically nonstationary input signals, their average performances are equal. When adapting with stationary input signals and with random initial conditions, their respective learning times are on average equal. However, under worst-case initial conditions, the learning time of LMS can be much greater than that of LMS/Newton, and this is the principal disadvantage of the LMS algorithm. But the strong points of LMS are ease of implementation and optimal performance under important practical conditions. For these reasons, the LMS
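
    A minimal sketch of the LMS algorithm discussed above is given below, applied to identifying an unknown FIR system from noisy observations; the plant coefficients, step size, and noise level are illustrative assumptions. LMS/Newton is not shown because, as the abstract notes, it is generally not implementable in practice.

      import numpy as np

      rng = np.random.default_rng(0)
      n_taps, n_samples, mu = 4, 5000, 0.01
      plant = np.array([0.8, -0.4, 0.2, 0.1])       # unknown system to be identified
      x = rng.standard_normal(n_samples)             # stationary input signal
      d = np.convolve(x, plant)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

      w = np.zeros(n_taps)                           # adaptive weight vector
      for n in range(n_taps - 1, n_samples):
          u = x[n - n_taps + 1:n + 1][::-1]          # most recent inputs, newest first
          e = d[n] - w @ u                           # a priori error
          w += 2.0 * mu * e * u                      # LMS weight update

      print("estimated weights:", np.round(w, 3))
      print("true weights     :", plant)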

  20. 40 CFR 60.5408 - What is an optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... measuring hydrogen sulfide in acid gas-Tutwiler Procedure? 60.5408 Section 60.5408 Protection of Environment... § 60.5408 What is an optional procedure for measuring hydrogen sulfide in acid gas—Tutwiler Procedure... of titrating hydrogen sulfide in a gas sample directly with a standard solution of iodine. (b...

  1. 40 CFR 60.5408 - What is an optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... measuring hydrogen sulfide in acid gas-Tutwiler Procedure? 60.5408 Section 60.5408 Protection of Environment... § 60.5408 What is an optional procedure for measuring hydrogen sulfide in acid gas—Tutwiler Procedure... of titrating hydrogen sulfide in a gas sample directly with a standard solution of iodine. (b...

  2. An hp-adaptivity and error estimation for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1995-01-01

    This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
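
    The sketch below illustrates the estimate/mark/refine loop that underlies h-adaptive strategies such as the one described above, but with a simple linear-interpolation error indicator on a one-dimensional mesh in place of the element-residual estimate, and bisection in place of hp-enrichment. The target function, marking fraction, and tolerance are assumptions.

      import numpy as np

      def f(x):
          return np.tanh(60.0 * (x - 0.6))                       # sharp internal layer

      x = np.linspace(0.0, 1.0, 11)                              # initial coarse mesh nodes
      for sweep in range(8):
          mids = 0.5 * (x[:-1] + x[1:])
          eta = np.abs(f(mids) - 0.5 * (f(x[:-1]) + f(x[1:])))   # per-cell error indicator
          if eta.max() < 1e-3:
              break
          marked = eta > 0.3 * eta.max()                         # fixed-fraction marking
          x = np.sort(np.concatenate([x, mids[marked]]))         # bisect the marked cells
          print(f"sweep {sweep}: refined {marked.sum()} cells, "
                f"now {x.size - 1} cells, max indicator {eta.max():.2e}")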

  3. Analyzing Hedges in Verbal Communication: An Adaptation-Based Approach

    ERIC Educational Resources Information Center

    Wang, Yuling

    2010-01-01

    Based on Adaptation Theory, the article analyzes the production process of hedges. The procedure consists of the continuous making of choices in linguistic forms and communicative strategies. These choices are made just for adaptation to the contextual correlates. Besides, the adaptation process is dynamic, intentional and bidirectional.

  4. Failure of Anisotropic Unstructured Mesh Adaption Based on Multidimensional Residual Minimization

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Kleb, William L.

    2003-01-01

    An automated anisotropic unstructured mesh adaptation strategy is proposed, implemented, and assessed for the discretization of viscous flows. The adaption criterion is based upon the minimization of the residual fluctuations of a multidimensional upwind viscous flow solver. For scalar advection, this adaption strategy has been shown to use fewer grid points than gradient-based adaption, naturally aligning mesh edges with discontinuities and characteristic lines. The adaption utilizes a compact stencil and is local in scope, with four fundamental operations: point insertion, point deletion, edge swapping, and nodal displacement. Evaluation of the solution-adaptive strategy is performed for a two-dimensional blunt body laminar wind tunnel case at Mach 10. The results demonstrate that the strategy suffers from a lack of robustness, particularly with regard to alignment of the bow shock in the vicinity of the stagnation streamline. In general, constraining the adaption to such a degree as to maintain robustness results in negligible improvement to the solution. Because the present method fails to consistently or significantly improve the flow solution, it is rejected in favor of simple uniform mesh refinement.

  5. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

    The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time-step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.
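
    The sketch below illustrates error-controlled adaptive time stepping with an explicit Euler predictor and a trapezoidal corrector, where the difference between the two serves as the local error estimate that grows or shrinks the step. It shows only the time-adaptivity idea; the scalar test equation, tolerance, and step-size controller are assumptions and stand in for the elasto-viscoplastic damage model of the paper.

      import numpy as np

      def rhs(t, y):
          return -5.0 * y + 5.0 * np.sin(t)          # simple stand-in for the material model

      t, y, dt = 0.0, 1.0, 0.05
      t_end, tol, accepted = 5.0, 1e-4, 0
      while t < t_end:
          dt = min(dt, t_end - t)
          y_pred = y + dt * rhs(t, y)                                   # explicit Euler predictor
          y_corr = y + 0.5 * dt * (rhs(t, y) + rhs(t + dt, y_pred))     # trapezoidal corrector
          err = abs(y_corr - y_pred)                                    # local error estimate
          if err <= tol:                                                # accept the step
              t, y, accepted = t + dt, y_corr, accepted + 1
          # Grow or shrink the step from the error estimate (first-order controller).
          dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-14)) ** 0.5))
      print(f"{accepted} accepted steps, y({t_end}) = {y:.5f}")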

  6. Adaptive Grid Generation for Numerical Solution of Partial Differential Equations.

    DTIC Science & Technology

    1983-12-01

    numerical solution of fluid dynamics problems is presented. However, the method is applicable to the numerical evaluation of any partial differential ... emphasis is being placed on numerical solution of the governing differential equations by finite difference methods. In the past two decades, considerable ... original equations presented in that paper. The solution of the second problem is more difficult. The method of Thompson et al. provides control for

  7. Ultrasound-Guided Foot and Ankle Procedures.

    PubMed

    Henning, P Troy

    2016-08-01

    This article reviews commonly performed injections about the foot and ankle region. Although not exhaustive in its description of available techniques, general approaches to these procedures are applicable to any injection about the foot and ankle. As much as possible, the procedures described are based on commonly used or published techniques. An in-depth knowledge of the regional anatomy and understanding of different approaches when performing ultrasonography-guided procedures allows clinicians to adapt to any clinical scenario. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Modelling Detailed-Chemistry Effects on Turbulent Diffusion Flames using a Parallel Solution-Adaptive Scheme

    NASA Astrophysics Data System (ADS)

    Jha, Pradeep Kumar

    Capturing the effects of detailed chemistry on turbulent combustion processes is a central challenge faced by the numerical combustion community. However, the inherent complexity and non-linear nature of both turbulence and chemistry require that combustion models rely heavily on engineering approximations to remain computationally tractable. This thesis proposes a computationally efficient algorithm for modelling detailed-chemistry effects in turbulent diffusion flames and numerically predicting the associated flame properties. The cornerstone of this combustion modelling tool is the use of a parallel Adaptive Mesh Refinement (AMR) scheme with the recently proposed Flame Prolongation of Intrinsic low-dimensional manifold (FPI) tabulated-chemistry approach for modelling complex chemistry. The effect of turbulence on the mean chemistry is incorporated using a Presumed Conditional Moment (PCM) approach based on a beta-probability density function (PDF). The two-equation k-ω turbulence model is used for modelling the effects of the unresolved turbulence on the mean flow field. The finite-rate chemistry of methane-air combustion is represented here by the GRI-Mech 3.0 scheme. This detailed mechanism is used to build the FPI tables. A state-of-the-art numerical scheme based on a parallel block-based solution-adaptive algorithm has been developed to solve the Favre-averaged Navier-Stokes (FANS) and other governing partial-differential equations using a second-order accurate, fully-coupled finite-volume formulation on body-fitted, multi-block, quadrilateral/hexahedral meshes for two-dimensional and three-dimensional flow geometries, respectively. A standard fourth-order Runge-Kutta time-marching scheme is used for time-accurate temporal discretizations. Numerical predictions of three different diffusion flame configurations are considered in the present work: a laminar counter-flow flame; a laminar co-flow diffusion flame; and a Sydney bluff-body turbulent reacting flow
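
    The sketch below shows the presumed beta-PDF evaluation that underlies a PCM-type closure: a tabulated quantity defined over mixture fraction is averaged against a beta distribution whose two parameters are set from the local mean and variance. The table is a synthetic stand-in for an FPI entry, not GRI-Mech data, and the binning by cumulative distribution values is just one convenient way to do the integration.

      import numpy as np
      from scipy.stats import beta

      def presumed_pdf_mean(z_table, phi_table, z_mean, z_var):
          """Mean of a tabulated quantity under a beta PDF with matching moments."""
          z_var = min(z_var, 0.999 * z_mean * (1.0 - z_mean))      # keep the moments feasible
          if z_var <= 0.0:
              return float(np.interp(z_mean, z_table, phi_table))  # delta-PDF limit
          c = z_mean * (1.0 - z_mean) / z_var - 1.0
          a, b = z_mean * c, (1.0 - z_mean) * c
          edges = np.concatenate(([0.0], 0.5 * (z_table[1:] + z_table[:-1]), [1.0]))
          weights = np.diff(beta.cdf(edges, a, b))                 # PDF mass per table bin
          return float(np.dot(weights, phi_table))

      z = np.linspace(0.0, 1.0, 101)
      phi = 300.0 + 1700.0 * np.exp(-((z - 0.35) / 0.12) ** 2)     # toy "temperature" table
      for z_var in (1e-6, 0.005, 0.02):
          print(f"variance {z_var:g}: mean value {presumed_pdf_mean(z, phi, 0.35, z_var):.1f}")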

  9. An adaptive gridless methodology in one dimension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, N.T.; Hailey, C.E.

    1996-09-01

    Gridless numerical analysis offers great potential for accurately solving for flow about complex geometries or moving boundary problems. Because gridless methods do not require point connection, the mesh cannot twist or distort. The gridless method utilizes a Taylor series about each point to obtain the unknown derivative terms from the current field variable estimates. The governing equation is then numerically integrated to determine the field variables for the next iteration. Effects of point spacing and Taylor series order on accuracy are studied, and they follow trends similar to those of traditional numerical techniques. Introducing adaption by point movement using a spring analogy allows the solution method to track a moving boundary. The adaptive gridless method models linear, nonlinear, steady, and transient problems. Comparison with known analytic solutions is given for these examples. Although point movement adaption does not provide a significant increase in accuracy, it helps capture important features and provides an improved solution.
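
    The sketch below illustrates the gridless derivative evaluation mentioned above: at a chosen point, the local Taylor expansion is fitted to the surrounding cloud of field values in a least-squares sense, so no mesh connectivity is needed. One space dimension and a fixed neighbour count are assumptions for the demonstration, and the spring-analogy point movement is not shown.

      import numpy as np

      rng = np.random.default_rng(3)
      x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 80))          # scattered cloud of points
      f = np.sin(x)

      def gridless_derivatives(x, f, i, n_neighbours=6):
          """Least-squares fit of the local Taylor expansion to estimate f' and f'' at x[i]."""
          order = np.argsort(np.abs(x - x[i]))
          nbrs = order[1:n_neighbours + 1]                     # nearest points, excluding i
          dx = x[nbrs] - x[i]
          A = np.column_stack([dx, 0.5 * dx**2])               # columns multiply f' and f''
          coeffs, *_ = np.linalg.lstsq(A, f[nbrs] - f[i], rcond=None)
          return coeffs

      i = 40
      d1, d2 = gridless_derivatives(x, f, i)
      print("estimated f', f'':", round(d1, 3), round(d2, 3))
      print("exact     f', f'':", round(np.cos(x[i]), 3), round(-np.sin(x[i]), 3))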

  10. An adaptive procedure for defect identification problems in elasticity

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Sergio; Mura, J.

    2010-07-01

    In the context of inverse problems in mechanics, it is well known that the most typical situation is that neither the interior nor all the boundary is available to obtain data to detect the presence of inclusions or defects. We propose here an adaptive method that uses loads and measurements of displacements only on part of the surface of the body, to detect defects in the interior of an elastic body. The method is based on Small Amplitude Homogenization, that is, we work under the assumption that the contrast in the values of the Lamé elastic coefficients between the defect and the matrix is not very large. The idea is that given the data for one loading state and one location of the displacement sensors, we use an optimization method to obtain a guess for the location of the inclusion and then, using this guess, we adapt the position of the sensors and the loading zone, hoping to refine the current guess. Numerical results show that the method is quite efficient in some cases, using in those cases no more than three loading positions and three different positions of the sensors.

  11. Three-dimensional Cross-Platform Planning for Complex Spinal Procedures: A New Method Adaptive to Different Navigation Systems.

    PubMed

    Kosterhon, Michael; Gutenberg, Angelika; Kantelhardt, Sven R; Conrad, Jens; Nimer Amr, Amr; Gawehn, Joachim; Giese, Alf

    2017-08-01

    A feasibility study. To develop a method based on the DICOM standard which transfers complex 3-dimensional (3D) trajectories and objects from external planning software to any navigation system for planning and intraoperative guidance of complex spinal procedures. There have been many reports about navigation systems with embedded planning solutions but only a few on how to transfer planning data generated in external software. Patients' computerized tomography and/or magnetic resonance volume data sets of the affected spinal segments were imported to Amira software, reconstructed to 3D images and fused with magnetic resonance data for soft-tissue visualization, resulting in a virtual patient model. Objects needed for surgical plans or surgical procedures such as trajectories, implants or surgical instruments were either digitally constructed or computerized tomography scanned and virtually positioned within the 3D model as required. As a crucial step of this method, these objects were fused with the patient's original diagnostic image data, resulting in a single DICOM sequence, containing all preplanned information necessary for the operation. By this step it was possible to import complex surgical plans into any navigation system. We applied this method not only to intraoperatively adjustable implants and objects under experimental settings, but also planned and successfully performed surgical procedures, such as the percutaneous lateral approach to the lumbar spine following preplanned trajectories and a thoracic tumor resection including intervertebral body replacement using an optical navigation system. To demonstrate the versatility and compatibility of the method with an entirely different navigation system, virtually preplanned lumbar transpedicular screw placement was performed with a robotic guidance system. The presented method not only allows virtual planning of complex surgical procedures, but also the export of objects and surgical plans to any navigation or

  12. Investigation of the effects of color on judgments of sweetness using a taste adaptation method.

    PubMed

    Hidaka, Souta; Shimoda, Kazumasa

    2014-01-01

    It has been reported that color can affect the judgment of taste. For example, a dark red color enhances the subjective intensity of sweetness. However, the underlying mechanisms of the effect of color on taste have not been fully investigated; in particular, it remains unclear whether the effect is based on cognitive/decisional or perceptual processes. Here, we investigated the effect of color on sweetness judgments using a taste adaptation method. A sweet solution whose color was subjectively congruent with sweetness was judged as sweeter than an uncolored sweet solution both before and after adaptation to an uncolored sweet solution. In contrast, subjective judgment of sweetness for uncolored sweet solutions did not differ between the conditions following adaptation to a colored sweet solution and following adaptation to an uncolored one. Color affected sweetness judgment when the target solution was colored, but the colored sweet solution did not modulate the magnitude of taste adaptation. Therefore, it is concluded that the effect of color on the judgment of taste would occur mainly in cognitive/decisional domains.

  13. Auto-adaptive finite element meshes

    NASA Technical Reports Server (NTRS)

    Richter, Roland; Leyland, Penelope

    1995-01-01

    Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which hence enable structural optimization as well as geometrical optimization. The methods described are applied to a number of the ICASE test cases that are particularly interesting for unsteady flow simulations.

  14. Knowledge Retrieval Solutions.

    ERIC Educational Resources Information Center

    Khan, Kamran

    1998-01-01

    Excalibur RetrievalWare offers true knowledge retrieval solutions. Its fundamental technologies, Adaptive Pattern Recognition Processing and Semantic Networks, have capabilities for knowledge discovery and knowledge management of full-text, structured and visual information. The software delivers a combination of accuracy, extensibility,…

  15. Measuring working memory capacity in children using adaptive tasks: Example validation of an adaptive complex span.

    PubMed

    Gonthier, Corentin; Aubry, Alexandre; Bourdin, Béatrice

    2018-06-01

    Working memory tasks designed for children usually present trials in order of ascending difficulty, with testing discontinued when the child fails a particular level. Unfortunately, this procedure comes with a number of issues, such as decreased engagement from high-ability children, vulnerability of the scores to temporary mind-wandering, and large between-subjects variations in number of trials, testing time, and proactive interference. To circumvent these problems, the goal of the present study was to demonstrate the feasibility of assessing working memory using an adaptive testing procedure. The principle of adaptive testing is to dynamically adjust the level of difficulty as the task progresses to match the participant's ability. We used this method to develop an adaptive complex span task (the ACCES) comprising verbal and visuo-spatial subtests. The task presents a fixed number of trials to all participants, allows for partial credit scoring, and can be used with children regardless of ability level. The ACCES demonstrated satisfying psychometric properties in a sample of 268 children aged 8-13 years, confirming the feasibility of using adaptive tasks to measure working memory capacity in children. A free-to-use implementation of the ACCES is provided.
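
    The adaptive adjustment described above can be illustrated with a simple up/down rule that raises or lowers the span length after each trial. The sketch below is a generic 1-up/1-down staircase written for illustration only; it is not the actual ACCES algorithm, and the starting span, step rule, simulated success model, and trial count are arbitrary assumptions.

```python
import random

def run_adaptive_span_task(n_trials=20, start_span=3, min_span=2, max_span=9,
                           p_correct=lambda span: max(0.05, 1.0 - 0.15 * (span - 2))):
    """Generic 1-up/1-down staircase: raise the span after a correct trial,
    lower it after an error. p_correct simulates a child's success probability."""
    span, history = start_span, []
    for _ in range(n_trials):
        correct = random.random() < p_correct(span)
        history.append((span, correct))
        span = min(max_span, span + 1) if correct else max(min_span, span - 1)
    # crude ability estimate: mean span over the second half of the trials
    ability = sum(s for s, _ in history[n_trials // 2:]) / (n_trials - n_trials // 2)
    return history, ability

if __name__ == "__main__":
    history, ability = run_adaptive_span_task()
    print(history)
    print("estimated span ability:", round(ability, 2))
```

    Unlike an ascending-with-discontinue rule, every simulated child completes the same fixed number of trials, which is the property the abstract emphasizes.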

  16. Sclerotherapy with tetracycline solution for hydrocele.

    PubMed

    Hu, K N; Khan, A S; Gonder, M

    1984-12-01

    A study of sclerotherapy for hydrocele using different concentrations (10%, 5%, 2.5%) of tetracycline solution was done on 24 patients; 23 patients were cured. The effectiveness of sclerotherapy was the same for the three groups of patients, each treated with a different concentration of the solution. Pain was the only adverse effect. Nonspecific cellular foreign body reaction and fibrin strand proliferation were observed in the hydrocele fluid after this procedure. We consider sclerotherapy for hydrocele with tetracycline solution safe and the procedure of choice for patients in whom surgery or anesthesia is contraindicated, for patients who refuse surgery, and for economic reasons.

  17. Rapid, generalized adaptation to asynchronous audiovisual speech.

    PubMed

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-07

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  18. Rapid, generalized adaptation to asynchronous audiovisual speech

    PubMed Central

    Van der Burg, Erik; Goodbourn, Patrick T.

    2015-01-01

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. PMID:25716790

  19. Delivering Telecourses: Procedural Issues.

    ERIC Educational Resources Information Center

    Rothstein, Bette M.

    The logistics for college adaptation of telecourses entail certain procedures which, though they differ from one school to another, still encompass a basic minimum of steps that need to be taken: (1) the decision to investigate; (2) the ascertainment of interest within the relevant disciplines; (3) the evaluation and acceptance of an available…

  20. An adaptive grid algorithm for one-dimensional nonlinear equations

    NASA Technical Reports Server (NTRS)

    Gutierrez, William E.; Hills, Richard G.

    1990-01-01

    Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements in solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems is studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low and high frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusive and convective-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For the Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of the Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and…
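
    As a concrete illustration of the Picard linearization mentioned above, the following sketch solves the steady viscous Burgers' equation u u_x = ν u_xx on a fixed uniform grid by freezing the convective coefficient at the previous iterate; the grid size, viscosity, boundary values, and under-relaxation factor are arbitrary assumptions, and the paper's adaptive-grid and multigrid machinery is not reproduced.

```python
import numpy as np

def steady_burgers_picard(n=101, nu=0.1, tol=1e-8, max_iter=200):
    """Solve u u_x = nu u_xx on [0, 1] with u(0) = 1, u(1) = -1 by Picard iteration:
    the convective coefficient is lagged, giving a linear (tridiagonal) system per step."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = 1.0 - 2.0 * x                       # initial guess satisfying the boundary conditions
    for it in range(max_iter):
        A = np.zeros((n, n))
        b = np.zeros(n)
        A[0, 0] = A[-1, -1] = 1.0
        b[0], b[-1] = 1.0, -1.0
        for i in range(1, n - 1):
            A[i, i - 1] = -u[i] / (2 * h) - nu / h**2   # lagged coefficient u[i]
            A[i, i]     = 2 * nu / h**2
            A[i, i + 1] =  u[i] / (2 * h) - nu / h**2
        u_new = np.linalg.solve(A, b)
        u_next = 0.7 * u_new + 0.3 * u      # mild under-relaxation for robustness
        if np.max(np.abs(u_next - u)) < tol:
            return x, u_next, it + 1
        u = u_next
    return x, u, max_iter

x, u, iters = steady_burgers_picard()
print(f"Picard iterations used: {iters}, u(0.5) ≈ {u[len(u) // 2]:.4f}")
```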

  1. FOAM: the modular adaptive optics framework

    NASA Astrophysics Data System (ADS)

    van Werkhoven, T. I. M.; Homs, L.; Sliepen, G.; Rodenhuis, M.; Keller, C. U.

    2012-07-01

    Control software for adaptive optics systems is mostly custom built and very specific in nature. We have developed FOAM, a modular adaptive optics framework for controlling and simulating adaptive optics systems in various environments. Portability is provided both for different control hardware and adaptive optics setups. To achieve this, FOAM is written in C++ and runs on standard CPUs. Furthermore we use standard Unix libraries and compilation procedures and implemented a hardware abstraction layer in FOAM. We have successfully implemented FOAM on the adaptive optics system of ExPo - a high-contrast imaging polarimeter developed at our institute - in the lab and will test it on-sky late June 2012. We also plan to implement FOAM on adaptive optics systems for microscopy and solar adaptive optics. FOAM is available* under the GNU GPL license and is free to be used by anyone.

  2. Object-oriented philosophy in designing an adaptive finite-element package for 3D elliptic differential equations

    NASA Astrophysics Data System (ADS)

    Zhengyong, R.; Jingtian, T.; Changsheng, L.; Xiao, X.

    2007-12-01

    Although adaptive finite-element (AFE) analysis is receiving more and more attention in scientific and engineering fields, its efficient implementation remains an open problem because of the complexity of the adaptive procedure. In this paper, we propose a clear C++ framework implementation to show the power of object-oriented programming (OOP) in designing such complex adaptive procedures. Using the modular features of the OOP language, the whole adaptive system is divided into several separate parts, such as mesh generation or refinement, the a-posteriori error estimator, the adaptive strategy, and the final post-processing. After these separate modules are designed, they are connected into a complete adaptive framework. Based on the general elliptic differential equation, little additional effort is needed within the adaptive framework to run practical simulations. To show the favourable properties of OOP adaptive design, two numerical examples are tested. The first is a 3D direct-current resistivity problem, in which the power of the framework is demonstrated by how few additions are required. In the second, an induced polarization (IP) exploration case, a new adaptive procedure is easily added, which shows the strong extensibility and re-use afforded by the OOP language. Finally, we believe that, based on this modular adaptive framework implemented with OOP methodology, more advanced adaptive analysis systems will become available in the future.
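
    The modular decomposition the authors describe (mesh refinement, a-posteriori error estimation, adaptive strategy, post-processing) can be sketched in a language-neutral way. The toy classes below are illustrative Python, not the C++ framework of the paper; the 1D interval "mesh", the jump-based indicator, and the bisection strategy are arbitrary assumptions chosen only to show how the modules plug into one adaptive loop.

```python
class Solver:
    """Toy 'solver': evaluates a known function at the mesh nodes."""
    def solve(self, mesh):
        return [abs(x - 0.3) ** 0.5 for x in mesh]      # pretend solution with a kink at x = 0.3

class ErrorEstimator:
    """A-posteriori indicator: undivided solution jump over each element."""
    def estimate(self, mesh, u):
        return [abs(u[i + 1] - u[i]) for i in range(len(mesh) - 1)]

class RefinementStrategy:
    """Bisect every element whose indicator exceeds a fraction of the maximum."""
    def refine(self, mesh, indicators, fraction=0.5):
        threshold = fraction * max(indicators)
        new_mesh = [mesh[0]]
        for i, eta in enumerate(indicators):            # element i spans mesh[i]..mesh[i+1]
            if eta > threshold:
                new_mesh.append(0.5 * (mesh[i] + mesh[i + 1]))
            new_mesh.append(mesh[i + 1])
        return new_mesh

# The adaptive loop talks only to the module interfaces, so any part can be swapped out.
mesh = [i / 10 for i in range(11)]
solver, estimator, refiner = Solver(), ErrorEstimator(), RefinementStrategy()
for cycle in range(3):
    u = solver.solve(mesh)
    eta = estimator.estimate(mesh, u)
    mesh = refiner.refine(mesh, eta)
    print(f"cycle {cycle}: mesh now has {len(mesh)} nodes")
```

    Because the loop only calls the three module interfaces, any one of them can be replaced without touching the rest, which is the extensibility argument made in the abstract.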

  3. Adaptive Units of Learning and Educational Videogames

    ERIC Educational Resources Information Center

    Moreno-Ger, Pablo; Thomas, Pilar Sancho; Martinez-Ortiz, Ivan; Sierra, Jose Luis; Fernandez-Manjon, Baltasar

    2007-01-01

    In this paper, we propose three different ways of using IMS Learning Design to support online adaptive learning modules that include educational videogames. The first approach relies on IMS LD to support adaptation procedures where the educational games are considered as Learning Objects. These games can be included instead of traditional content…

  4. Adaptation of Selenastrum capricornutum (Chlorophyceae) to copper

    USGS Publications Warehouse

    Kuwabara, J.S.; Leland, H.V.

    1986-01-01

    Selenastrum capricornutum Printz, growing in a chemically defined medium, was used as a model for studying adaptation of algae to a toxic metal (copper) ion. Cells exhibited lag-phase adaptation to 0.8 μM total Cu (10^-12 M free-ion concentration) after 20 generations of Cu exposure. Selenastrum adapted to the same concentration when Cu was gradually introduced over an 8-h period using a specially designed apparatus that provided a transient increase in exposure concentration. Cu adaptation was not attributable to media conditioning by algal exudates. Duration of lag phase was a more sensitive index of copper toxicity to Selenastrum than was growth rate or stationary-phase cell density under the experimental conditions used. Chemical speciation of the Cu dosing solution influenced the duration of lag phase even when media formulations were identical after dosing. Selenastrum initially exposed to Cu in a CuCl2 injection solution exhibited a lag phase of 3.9 d, but this was reduced to 1.5 d when a CuEDTA solution was used to achieve the same total Cu and EDTA concentrations. Physical and chemical processes that accelerated the rate of increase in cupric ion concentration generally increased the duration of lag phase. © 1986.

  5. Using Multicriteria Analysis in Issues Concerning Adaptation of Historic Facilities for the Needs of Public Utility Buildings with a Function of a Theatre

    NASA Astrophysics Data System (ADS)

    Obracaj, Piotr; Fabianowski, Dariusz

    2017-10-01

    Implementations concerning the adaptation of historic facilities for public utility purposes are associated with the necessity of solving many complex, often conflicting expectations of future users. This mainly concerns the function, which includes construction, technology and aesthetic issues. The list of issues is completed by the proper protection of historic values, different in each case. The procedure leading to the expected solution is a multicriteria one, usually difficult to define precisely and requiring considerable design experience. An innovative approach has been used for the analysis, namely the modified EA FAHP (Extent Analysis Fuzzy Analytic Hierarchy Process) Chang's method of multicriteria analysis for the assessment of complex functional and spatial issues. Selection of the optimal spatial form of an adapted historic building intended for a multi-functional public utility facility was analysed. The assumed functional flexibility covered education, conferences, and chamber performances, such as drama and concerts, in different stage-audience layouts.

  6. A novel model for simultaneous study of neointestinal regeneration and intestinal adaptation.

    PubMed

    Jwo, Shyh-Chuan; Tang, Shye-Jye; Chen, Jim-Ray; Chiang, Kun-Chun; Huang, Ting-Shou; Chen, Huang-Yang

    2013-01-01

    The use of autologous grafts, fabricated from tissue-engineered neointestine, to enhance the insufficient compensation of intestinal adaptation in severe short bowel syndrome is a compelling idea. Unfortunately, current approaches and knowledge for neointestinal regeneration, unlike intestinal adaptation, are still unsatisfactory. Thus, we have designed a novel model of intestinal adaptation with simultaneous neointestinal regeneration and evaluated its feasibility for future basic research and clinical application. Fifty male Sprague-Dawley rats weighing 250-350 g underwent this procedure and were sacrificed at 4, 8, and 12 weeks postoperatively. Spatiotemporal analyses were carried out by gross examination, histology, and DNA/protein quantification. Three rats died of operative complications. In early experiments, the use of a hard silicone stent as a tissue scaffold in 11 rats was unsatisfactory for neointestinal regeneration. In later experiments, when a soft silastic tube was used, the success rate increased to 90.9%. Further analyses revealed that no neointestine developed without donor intestine; regenerated lengths of mucosa and muscle were positively related to time postsurgery but independent of donor length (0.5 or 1 cm). Other parameters of neointestinal regeneration or intestinal adaptation showed no relationship to either time postsurgery or donor length. In conclusion, this is a potentially important model for investigators searching for solutions to short bowel syndrome. © 2013 by the Wound Healing Society.

  7. A computational procedure for large rotational motions in multibody dynamics

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Chiou, J. C.

    1987-01-01

    A computational procedure suitable for the solution of equations of motion for multibody systems is presented. The present procedure adopts a differential partitioning of the translational motions and the rotational motions. The translational equations of motion are then treated by either a conventional explicit or an implicit direct integration method. A principal feature of this procedure is a nonlinearly implicit algorithm for updating rotations via the Euler four-parameter representation. This procedure is applied to the rolling of a sphere through a specific trajectory, which shows that it yields robust solutions.
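
    A minimal sketch of the Euler four-parameter (unit quaternion) rotation update mentioned above is given below. It uses a simple explicit Runge-Kutta step followed by renormalization rather than the paper's nonlinearly implicit algorithm, and the constant body-rate history, step size, and final-time check are arbitrary assumptions.

```python
import numpy as np

def quat_derivative(q, omega):
    """dq/dt = 0.5 * q ⊗ (0, omega) for a unit quaternion q = [w, x, y, z]
    and body angular velocity omega = [wx, wy, wz]."""
    w, x, y, z = q
    wx, wy, wz = omega
    return 0.5 * np.array([
        -x * wx - y * wy - z * wz,
         w * wx + y * wz - z * wy,
         w * wy + z * wx - x * wz,
         w * wz + x * wy - y * wx,
    ])

def integrate_rotation(omega_of_t, t_end=2.0, dt=1e-3):
    """Explicit midpoint (RK2) integration of the quaternion kinematics,
    renormalizing each step to stay on the unit sphere."""
    q = np.array([1.0, 0.0, 0.0, 0.0])
    t = 0.0
    while t < t_end:
        k1 = quat_derivative(q, omega_of_t(t))
        k2 = quat_derivative(q + 0.5 * dt * k1, omega_of_t(t + 0.5 * dt))
        q = q + dt * k2
        q /= np.linalg.norm(q)          # enforce the unit-norm constraint
        t += dt
    return q

# constant spin about the z-axis: after t seconds the rotation angle is |omega| * t
q_final = integrate_rotation(lambda t: np.array([0.0, 0.0, 1.0]))
print("half-angle check:", q_final[0], "vs", np.cos(0.5 * 2.0))
```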

  8. The Development and Assessment of Adaptation Pathways for Urban Pluvial Flooding

    NASA Astrophysics Data System (ADS)

    Babovic, F.; Mijic, A.; Madani, K.

    2017-12-01

    Around the globe, urban areas are growing in both size and importance. However, due to the prevalence of impermeable surfaces within the urban fabric of cities, these areas have a high risk of pluvial flooding. Due to the convergence of population growth and climate change, the risk of pluvial flooding is growing. When designing solutions and adaptations to pluvial flood risk, urban planners and engineers encounter a great deal of uncertainty due to model uncertainty, uncertainty within the data utilised, and uncertainty related to future climate and land use conditions. The interaction of these uncertainties leads to conditions of deep uncertainty. However, infrastructure systems must be designed and built in the face of this deep uncertainty. An Adaptation Tipping Points (ATP) methodology was used to develop a strategy to adapt an urban drainage system in the North East of London under conditions of deep uncertainty. The ATP approach was used to assess the current drainage system and potential drainage system adaptations. These adaptations were assessed against potential changes in rainfall depth and peakedness, defined as the ratio of mean to peak rainfall. The solutions encompassed both traditional and blue-green solutions that the Local Authority is known to be considering. This resulted in a set of Adaptation Pathways. However, these pathways do not convey any information regarding the relative merits and demerits of the potential adaptation options presented. To address this, a cost-benefit metric was developed to reflect the solutions' costs and benefits under uncertainty. The resulting metric combines elements of the Benefits of SuDS Tool (BeST) with real options analysis in order to reflect the potential value of ecosystem services delivered by blue-green solutions under uncertainty. Lastly, it is discussed how a local body can utilise the adaptation pathways, their relative costs and benefits, and a system of local data collection to help guide…

  9. ICASE/LaRC Workshop on Adaptive Grid Methods

    NASA Technical Reports Server (NTRS)

    South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)

    1995-01-01

    Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.

  10. Quality factors and local adaption (with applications in Eulerian hydrodynamics)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowley, W.P.

    1992-06-17

    Adapting the mesh to suit the solution is a technique commonly used for solving both ODEs and PDEs. For Lagrangian hydrodynamics, ALE and Free-Lagrange are examples of structured and unstructured adaptive methods. For Eulerian hydrodynamics the two basic approaches are the macro-unstructuring technique pioneered by Oliger and Berger and the micro-structuring technique due to Lohner and others. Here we will describe a new micro-unstructuring technique, LAM (for Local Adaptive Mesh), as applied to Eulerian hydrodynamics. The LAM technique consists of two independent parts: (1) the time advance scheme is a variation on the artificial viscosity method; (2) the adaption scheme uses a micro-unstructured mesh with quadrilateral mesh elements. The adaption scheme makes use of quality factors, and the relation between these and truncation errors is discussed. The time advance scheme, the adaption strategy, and the effect of different adaption parameters on numerical solutions are described.

  11. Quality factors and local adaption (with applications in Eulerian hydrodynamics)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowley, W.P.

    1992-06-17

    Adapting the mesh to suit the solution is a technique commonly used for solving both ODEs and PDEs. For Lagrangian hydrodynamics, ALE and Free-Lagrange are examples of structured and unstructured adaptive methods. For Eulerian hydrodynamics the two basic approaches are the macro-unstructuring technique pioneered by Oliger and Berger and the micro-structuring technique due to Lohner and others. Here we will describe a new micro-unstructuring technique, LAM (for Local Adaptive Mesh), as applied to Eulerian hydrodynamics. The LAM technique consists of two independent parts: (1) the time advance scheme is a variation on the artificial viscosity method; (2) the adaption scheme uses a micro-unstructured mesh with quadrilateral mesh elements. The adaption scheme makes use of quality factors, and the relation between these and truncation errors is discussed. The time advance scheme, the adaption strategy, and the effect of different adaption parameters on numerical solutions are described.

  12. Biological adaptive control model: a mechanical analogue of multi-factorial bone density adaptation.

    PubMed

    Davidson, Peter L; Milburn, Peter D; Wilson, Barry D

    2004-03-21

    The mechanism of how bone adapts to every day demands needs to be better understood to gain insight into situations in which the musculoskeletal system is perturbed. This paper offers a novel multi-factorial mathematical model of bone density adaptation which combines previous single-factor models in a single adaptation system as a means of gaining this insight. Unique aspects of the model include provision for interaction between factors and an estimation of the relative contribution of each factor. This interacting system is considered analogous to a Newtonian mechanical system and the governing response equation is derived as a linear version of the adaptation process. The transient solution to sudden environmental change is found to be exponential or oscillatory depending on the balance between cellular activation and deactivation frequencies.
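
    The exponential-versus-oscillatory dichotomy noted above mirrors the familiar behavior of a linear second-order system. The relations below are written for a generic damped-oscillator analogue, purely for illustration; they are not the paper's specific governing response equation, and m, c, k, F(t) are placeholder symbols.

```latex
\[
  m\,\ddot{x} + c\,\dot{x} + k\,x = F(t), \qquad
  \omega_n = \sqrt{k/m}, \qquad \zeta = \frac{c}{2\sqrt{km}}
\]
\[
  x_h(t) =
  \begin{cases}
    e^{-\zeta\omega_n t}\bigl(A\cos\omega_d t + B\sin\omega_d t\bigr),
      & \zeta < 1 \ \text{(oscillatory)}, \quad \omega_d = \omega_n\sqrt{1-\zeta^2},\\[4pt]
    A\,e^{r_1 t} + B\,e^{r_2 t},
      & \zeta > 1 \ \text{(exponential)}, \quad r_{1,2} = \omega_n\bigl(-\zeta \pm \sqrt{\zeta^2-1}\bigr).
  \end{cases}
\]
```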

  13. How Near is a Near-Optimal Solution: Confidence Limits for the Global Optimum.

    DTIC Science & Technology

    1980-05-01

    …approximate or near-optimal solutions are the only practical solutions available. This paper identifies and compares some procedures which use independent near… The objective of this paper is to indicate some relatively new statistical procedures for obtaining an upper confidence limit on G. Each of these…

  14. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR PREPARATION OF CALIBRATION AND SURROGATE RECOVERY SOLUTIONS FOR GC/MS ANALYSIS OF PESTICIDES (BCO-L-21.1)

    EPA Science Inventory

    The purpose of this SOP is to describe procedures for preparing calibration curve solutions used for gas chromatography/mass spectrometry (GC/MS) analysis of chlorpyrifos, diazinon, malathion, DDT, DDE, DDD, a-chlordane, and g-chlordane in dust, soil, air, and handwipe sample ext...

  15. Adaptive management: Chapter 1

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.; Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  16. Adaptive unstructured triangular mesh generation and flow solvers for the Navier-Stokes equations at high Reynolds number

    NASA Technical Reports Server (NTRS)

    Ashford, Gregory A.; Powell, Kenneth G.

    1995-01-01

    A method for generating high quality unstructured triangular grids for high Reynolds number Navier-Stokes calculations about complex geometries is described. Careful attention is paid in the mesh generation process to resolving efficiently the disparate length scales which arise in these flows. First the surface mesh is constructed in a way which ensures that the geometry is faithfully represented. The volume mesh generation then proceeds in two phases thus allowing the viscous and inviscid regions of the flow to be meshed optimally. A solution-adaptive remeshing procedure which allows the mesh to adapt itself to flow features is also described. The procedure for tracking wakes and refinement criteria appropriate for shock detection are described. Although at present it has only been implemented in two dimensions, the grid generation process has been designed with the extension to three dimensions in mind. An implicit, higher-order, upwind method is also presented for computing compressible turbulent flows on these meshes. Two recently developed one-equation turbulence models have been implemented to simulate the effects of the fluid turbulence. Results for flow about a RAE 2822 airfoil and a Douglas three-element airfoil are presented which clearly show the improved resolution obtainable.
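
    A common shock-detection refinement criterion of the kind referred to above flags cells where an undivided difference of a flow quantity (for example density) is large relative to the overall variation. The 1D sketch below is a generic illustration with an arbitrary threshold and synthetic data; it is not the specific wake-tracking or shock-detection criteria of the paper.

```python
import numpy as np

def flag_cells_for_refinement(rho, threshold=0.05):
    """Flag cell interfaces where the undivided jump in density exceeds a
    fixed fraction of the overall density range (a crude shock sensor)."""
    jumps = np.abs(np.diff(rho))
    return jumps > threshold * (rho.max() - rho.min())

# a smeared shock profile as test data
x = np.linspace(0.0, 1.0, 41)
rho = 1.0 + 0.5 * np.tanh((x - 0.6) / 0.02)
flags = flag_cells_for_refinement(rho)
print("cells flagged for refinement:", np.where(flags)[0])
```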

  17. Evaluating Content Alignment in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wise, Steven L.; Kingsbury, G. Gage; Webb, Norman L.

    2015-01-01

    The alignment between a test and the content domain it measures represents key evidence for the validation of test score inferences. Although procedures have been developed for evaluating the content alignment of linear tests, these procedures are not readily applicable to computerized adaptive tests (CATs), which require large item pools and do…

  18. Dynamic mesh adaption for triangular and tetrahedral grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger

    1993-01-01

    The following topics are discussed: requirements for dynamic mesh adaption; linked-list data structure; edge-based data structure; adaptive-grid data structure; three types of element subdivision; mesh refinement; mesh coarsening; additional constraints for coarsening; anisotropic error indicator for edges; unstructured-grid Euler solver; inviscid 3-D wing; and mesh quality for solution-adaptive grids. The discussion is presented in viewgraph form.

  19. An adaptive quantum mechanics/molecular mechanics method for the infrared spectrum of water: incorporation of the quantum effect between solute and solvent.

    PubMed

    Watanabe, Hiroshi C; Banno, Misa; Sakurai, Minoru

    2016-03-14

    Quantum effects in solute-solvent interactions, such as the many-body effect and the dipole-induced dipole, are known to be critical factors influencing the infrared spectra of species in the liquid phase. For accurate spectrum evaluation, the surrounding solvent molecules, in addition to the solute of interest, should be treated using a quantum mechanical method. However, conventional quantum mechanics/molecular mechanics (QM/MM) methods cannot handle free QM solvent molecules during molecular dynamics (MD) simulation because of the diffusion problem. To deal with this problem, we have previously proposed an adaptive QM/MM "size-consistent multipartitioning (SCMP) method". In the present study, as the first application of the SCMP method, we demonstrate the reproduction of the infrared spectrum of liquid-phase water, and evaluate the quantum effect in comparison with conventional QM/MM simulations.

  20. Organic compatible solutes of halotolerant and halophilic microorganisms

    PubMed Central

    Roberts, Mary F

    2005-01-01

    Microorganisms that adapt to moderate and high salt environments use a variety of solutes, organic and inorganic, to counter external osmotic pressure. The organic solutes can be zwitterionic, noncharged, or anionic (along with an inorganic cation such as K+). The range of solutes, their diverse biosynthetic pathways, and physical properties of the solutes that effect molecular stability are reviewed. PMID:16176595

  1. Study of solution procedures for nonlinear structural equations

    NASA Technical Reports Server (NTRS)

    Young, C. T., II; Jones, R. F., Jr.

    1980-01-01

    A method for the reduction of the cost of solution of large nonlinear structural equations was developed. Verification was made using the MARC-STRUC structure finite element program with test cases involving single and multiple degrees of freedom for static geometric nonlinearities. The method developed was designed to exist within the envelope of accuracy and convergence characteristic of the particular finite element methodology used.

  2. Cohomogeneity-one solutions in Einstein-Maxwell-dilaton gravity

    NASA Astrophysics Data System (ADS)

    Lim, Yen-Kheng

    2017-05-01

    The field equations for Einstein-Maxwell-dilaton gravity in D dimensions are reduced to an effective one-dimensional system under the influence of exponential potentials. Various cases where exact solutions can be found are explored. With this procedure, we present interesting solutions such as a one-parameter generalization of the dilaton-Melvin spacetime and a three-parameter solution that interpolates between the Reissner-Nordström and Bertotti-Robinson solutions. This procedure also allows simple, alternative derivations of known solutions such as the Lifshitz spacetime and the planar anti-de Sitter naked singularity. In the latter case, the metric is cast in a simpler form which reveals the presence of an additional curvature singularity.

  3. Errors in the estimation method for the rejection of vibrations in adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Kania, Dariusz

    2017-06-01

    In recent years the problem of the impact of mechanical vibrations in adaptive optics (AO) systems has received renewed attention. These signals are damped sinusoidal signals and have a deleterious effect on the system. One software solution to reject the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), in which the procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and updating the reference signal to reject/minimize the vibration. In the first step a very important problem is the estimation method. A very accurate and fast (below 10 ms) estimation method of these three parameters has been presented in several publications in recent years. The method is based on spectrum interpolation and MSD time windows and it can be used to estimate multifrequency signals. In this paper the estimation method is used in the AVC method to increase the system performance. There are several parameters that affect the accuracy of the obtained results, e.g. CiR - number of signal periods in a measurement window, N - number of samples in the FFT procedure, H - time window order, SNR, b - number of ADC bits, γ - damping ratio of the tested signal. Systematic errors increase when N, CiR, H decrease and when γ increases. The value of the systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate maximum systematic errors for given values of H, CiR and N before the start of the estimation process.

  4. The students’ ability in mathematical literacy for the quantity, and the change and relationship problems on the PISA adaptation test

    NASA Astrophysics Data System (ADS)

    Julie, Hongki; Sanjaya, Febi; Yudhi Anggoro, Ant.

    2017-09-01

    One of the purposes of this study was to describe the solution profile of junior high school students for the PISA adaptation test. The procedures conducted by the researchers to achieve this objective were (1) adapting the PISA test, (2) validating the adapted PISA test, (3) asking junior high school students to do the adapted PISA test, and (4) making the students' solution profile. The PISA problems for mathematics can be classified into four areas, namely quantity, space and shape, change and relationship, and uncertainty. The research results presented in this paper are the test results for the quantity, and the change and relationship, problems. The adapted PISA test consisted of fifteen questions: two questions for the quantity group, six questions for the space and shape group, three questions for the change and relationship group, and four questions for uncertainty. Subjects in this study were 18 students from 11 junior high schools in Yogyakarta, Central Java, and Banten. The study used a qualitative research design. For the first quantity problem, 38.89% of students achieved level 3. For the second quantity problem, 88.89% of students achieved level 2. For part a of the first change and relationship problem, 55.56% of students achieved level 5. For part b of the first change and relationship problem, 77.78% of students achieved level 2. For the second change and relationship problem, 38.89% of students achieved level 2.

  5. Element-by-element Solution Procedures for Nonlinear Structural Analysis

    NASA Technical Reports Server (NTRS)

    Hughes, T. J. R.; Winget, J. M.; Levit, I.

    1984-01-01

    Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in nonlinear structural mechanics. Architectural and data base advantages of the present algorithms over traditional direct elimination schemes are noted. Results of calculations suggest considerable potential for the methods described.

  6. Prism Adaptation in Schizophrenia

    ERIC Educational Resources Information Center

    Bigelow, Nirav O.; Turner, Beth M.; Andreasen, Nancy C.; Paulsen, Jane S.; O'Leary, Daniel S.; Ho, Beng-Choon

    2006-01-01

    The prism adaptation test examines procedural learning (PL) in which performance facilitation occurs with practice on tasks without the need for conscious awareness. Dynamic interactions between frontostriatal cortices, basal ganglia, and the cerebellum have been shown to play key roles in PL. Disruptions within these neural networks have also…

  7. A FLOW-THROUGH TESTING PROCEDURE WITH DUCKWEED (LEMNA MINOR L.)

    EPA Science Inventory

    Lemna minor is one of the smallest flowering plants. Because of its floating habit, ease of culture, and small size it is well adapted for laboratory investigations. Procedures for flow-through tests were developed. Testing procedures were developed with this apparatus. By using ...

  8. A low cost solution for post-biopsy complications using available RFA generator and coaxial core biopsy needle.

    PubMed

    Azlan, C A; Mohd Nasir, N F; Saifizul, A A; Faizul, M S; Ng, K H; Abdullah, B J J

    2007-12-01

    Percutaneous image-guided needle biopsy is typically performed in highly vascular organs or in tumours with a rich macroscopic and microscopic blood supply. The main risks related to this procedure are haemorrhage and implantation of tumour cells in the needle tract after the biopsy needle is withdrawn. Numerous studies have found that heating the needle tract using alternating current in the radiofrequency (RF) range has the potential to minimize these effects. However, this solution requires the use of specially designed needles, which would make the procedure relatively expensive and complicated. Thus, we propose a simple solution using readily available coaxial core biopsy needles connected to a radiofrequency ablation (RFA) generator. In order to do so, we have designed and developed an adapter to interface between these two devices. For evaluation purposes, we used bovine liver as a sample tissue. The experimental procedure studied the effect of different parameter settings on the size of the coagulation necrosis caused by RF current heating of the tissue. The delivery of the RF energy was varied by changing the values of delivered power, power delivery duration, and insertion depth. The results showed that the size of the coagulation necrosis is affected by all of the parameters tested. In general, the size of the region is enlarged with higher delivered RF power, longer duration of power delivery, and shallower needle insertion, and becomes relatively constant beyond a certain value. We also found that the proposed solution provides a low-cost and practical way to minimize unwanted post-biopsy effects.

  9. Asymptotic Linearity of Optimal Control Modification Adaptive Law with Analytical Stability Margins

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    Optimal control modification has been developed to improve the robustness of model-reference adaptive control. For systems with linear matched uncertainty, the optimal control modification adaptive law can be shown by a singular perturbation argument to possess an outer solution that exhibits a linear asymptotic property. Analytical expressions of phase and time delay margins for the outer solution can be obtained. Using the gradient projection operator, a free design parameter of the adaptive law can be selected to satisfy stability margins.

  10. Disentangling Complexity in Bayesian Automatic Adaptive Quadrature

    NASA Astrophysics Data System (ADS)

    Adam, Gheorghe; Adam, Sanda

    2018-02-01

    The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. Detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early problem complexity assessment - enables the non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic - involves time and resource consuming Bayesian inference resulting in radical reformulation of the problem to be solved; (iii) optimistic - asks exclusively for subrange subdivision by bisection; (3) use of the weaker accuracy target from the two possible ones (the input accuracy specifications and the intrinsic integrand properties respectively) - results in maximum possible solution accuracy under minimum possible computing time.
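
    The "optimistic" execution path above, subrange subdivision by bisection, is the backbone of classical automatic adaptive quadrature. The sketch below is the textbook adaptive Simpson scheme driven by recursive bisection, shown only to illustrate that idea; it is not the BAAQ algorithm and contains none of its Bayesian inference or complexity-assessment stages.

```python
import math

def adaptive_simpson(f, a, b, tol=1e-10):
    """Classic adaptive Simpson quadrature: bisect a subrange whenever the
    difference between one- and two-panel Simpson estimates exceeds the
    (proportionally shared) tolerance."""
    def simpson(fa, fm, fb, h):
        return h / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, m - a)
        right = simpson(fm, frm, fb, b - m)
        if abs(left + right - whole) <= 15.0 * tol:      # Richardson-style error test
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, 0.5 * tol) +
                recurse(m, b, fm, frm, fb, right, 0.5 * tol))

    fa, fb = f(a), f(b)
    m = 0.5 * (a + b)
    fm = f(m)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, b - a), tol)

print(adaptive_simpson(math.sin, 0.0, math.pi))   # exact value is 2
```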

  11. Embedded pitch adapters: A high-yield interconnection solution for strip sensors

    NASA Astrophysics Data System (ADS)

    Ullán, M.; Allport, P. P.; Baca, M.; Broughton, J.; Chisholm, A.; Nikolopoulos, K.; Pyatt, S.; Thomas, J. P.; Wilson, J. A.; Kierstead, J.; Kuczewski, P.; Lynn, D.; Hommels, L. B. A.; Fleta, C.; Fernandez-Tejero, J.; Quirion, D.; Bloch, I.; Díez, S.; Gregor, I. M.; Lohwasser, K.; Poley, L.; Tackmann, K.; Hauser, M.; Jakobs, K.; Kuehn, S.; Mahboubi, K.; Mori, R.; Parzefall, U.; Clark, A.; Ferrere, D.; Gonzalez Sevilla, S.; Ashby, J.; Blue, A.; Bates, R.; Buttar, C.; Doherty, F.; McMullen, T.; McEwan, F.; O'Shea, V.; Kamada, S.; Yamamura, K.; Ikegami, Y.; Nakamura, K.; Takubo, Y.; Unno, Y.; Takashima, R.; Chilingarov, A.; Fox, H.; Affolder, A. A.; Casse, G.; Dervan, P.; Forshaw, D.; Greenall, A.; Wonsak, S.; Wormald, M.; Cindro, V.; Kramberger, G.; Mandić, I.; Mikuž, M.; Gorelov, I.; Hoeferkamp, M.; Palni, P.; Seidel, S.; Taylor, A.; Toms, K.; Wang, R.; Hessey, N. P.; Valencic, N.; Hanagaki, K.; Dolezal, Z.; Kodys, P.; Bohm, J.; Mikestikova, M.; Bevan, A.; Beck, G.; Milke, C.; Domingo, M.; Fadeyev, V.; Galloway, Z.; Hibbard-Lubow, D.; Liang, Z.; Sadrozinski, H. F.-W.; Seiden, A.; To, K.; French, R.; Hodgson, P.; Marin-Reyes, H.; Parker, K.; Jinnouchi, O.; Hara, K.; Bernabeu, J.; Civera, J. V.; Garcia, C.; Lacasta, C.; Marti i Garcia, S.; Rodriguez, D.; Santoyo, D.; Solaz, C.; Soldevila, U.

    2016-09-01

    A proposal to fabricate large area strip sensors with integrated, or embedded, pitch adapters is presented for the End-cap part of the Inner Tracker in the ATLAS experiment. To implement the embedded pitch adapters, a second metal layer is used in the sensor fabrication, for signal routing to the ASICs. Sensors with different embedded pitch adapters have been fabricated in order to optimize the design and technology. Inter-strip capacitance, noise, pick-up, cross-talk, signal efficiency, and fabrication yield have been taken into account in their design and fabrication. Inter-strip capacitance tests taking into account all channel neighbors reveal the important differences between the various designs considered. These tests have been correlated with noise figures obtained in full assembled modules, showing that the tests performed on the bare sensors are a valid tool to estimate the final noise in the full module. The full modules have been subjected to test beam experiments in order to evaluate the incidence of cross-talk, pick-up, and signal loss. The detailed analysis shows no indication of cross-talk or pick-up as no additional hits can be observed in any channel not being hit by the beam above 170 mV threshold, and the signal in those channels is always below 1% of the signal recorded in the channel being hit, above 100 mV threshold. First results on irradiated mini-sensors with embedded pitch adapters do not show any change in the interstrip capacitance measurements with only the first neighbors connected.

  12. Removing damped sinusoidal vibrations in adaptive optics systems using a DFT-based estimation method

    NASA Astrophysics Data System (ADS)

    Kania, Dariusz

    2017-06-01

    The problem of vibration rejection in adaptive optics systems is still present in the literature. These undesirable signals emerge because of shaking of the system structure, the tracking process, etc., and they are usually damped sinusoidal signals. There are some mechanical solutions to reduce these signals, but they are not very effective. Among the software solutions, adaptive methods are very popular. An AVC (Adaptive Vibration Cancellation) method has been presented and developed in recent years. The method is based on the estimation of three vibration parameters; values of frequency, amplitude and phase are essential to produce and adjust a proper signal to reduce or eliminate the vibration signals. This paper presents a fast (below 10 ms) and accurate estimation method of frequency, amplitude and phase of a multifrequency signal that can be used in the AVC method to increase the AO system performance. The method accuracy depends on several parameters: CiR - number of signal periods in a measurement window, N - number of samples in the FFT procedure, H - time window order, SNR, THD, b - number of A/D converter bits in a real-time system, γ - the damping ratio of the tested signal, φ - the phase of the tested signal. Systematic errors increase when N, CiR, H decrease and when γ increases. The value of the systematic error for γ = 0.1%, CiR = 1.1 and N = 32 is approximately 10^-4 Hz/Hz. This paper focuses on systematic errors and on the effect of the signal phase and the value of γ on the results.
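
    To illustrate the kind of window-plus-interpolation frequency estimate discussed above, the sketch below applies a Hann window and parabolic interpolation of the log-magnitude FFT peak to a synthetic damped sinusoid. The signal parameters are arbitrary, and this is a generic textbook estimator rather than the specific interpolated-DFT method with MSD time windows used in the paper.

```python
import numpy as np

fs = 1000.0                       # sampling rate, Hz (assumed)
n = 1024
t = np.arange(n) / fs
f0, amp, phase, damping = 37.3, 1.0, 0.4, 0.002    # synthetic test signal
x = amp * np.exp(-damping * 2 * np.pi * f0 * t) * np.cos(2 * np.pi * f0 * t + phase)

w = np.hanning(n)                 # Hann time window
X = np.fft.rfft(x * w)
mag = np.abs(X)
k = int(np.argmax(mag))

# parabolic interpolation of the log-magnitude peak to refine the bin estimate
la, lb, lc = np.log(mag[k - 1]), np.log(mag[k]), np.log(mag[k + 1])
delta = 0.5 * (la - lc) / (la - 2.0 * lb + lc)
f_est = (k + delta) * fs / n

print(f"true frequency {f0} Hz, estimated {f_est:.3f} Hz")
```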

  13. Three-dimensional self-adaptive grid method for complex flows

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Deiwert, George S.

    1988-01-01

    A self-adaptive grid procedure for efficient computation of three-dimensional complex flow fields is described. The method is based on variational principles to minimize the energy of a spring system analogy which redistributes the grid points. Grid control parameters are determined by specifying maximum and minimum grid spacing. Multidirectional adaptation is achieved by splitting the procedure into a sequence of successive applications of a unidirectional adaptation. One-sided, two-directional constraints for orthogonality and smoothness are used to enhance the efficiency of the method. Feasibility of the scheme is demonstrated by application to a multinozzle, afterbody, plume flow field. Application of the algorithm for initial grid generation is illustrated by constructing a three-dimensional grid about a bump-like geometry.
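
    The spring-system analogy described above can be illustrated in one dimension: give each grid interval a spring whose stiffness grows with the local solution gradient and relax the nodes toward the spring-equilibrium positions, which clusters points where the solution varies rapidly. The sketch below is a 1D toy version with an arbitrary test function and stiffness law, not the paper's three-dimensional variational procedure with orthogonality and smoothness constraints.

```python
import numpy as np

def spring_adapt_1d(f, n=41, sweeps=200):
    """Redistribute interior nodes so that each sits at the equilibrium of two
    springs whose stiffness is a weight based on the local gradient of f."""
    x = np.linspace(0.0, 1.0, n)
    for _ in range(sweeps):
        u = f(x)
        grad = np.abs(np.diff(u) / np.diff(x))
        k = 1.0 + grad / grad.max()              # stiffer springs where the gradient is large
        for i in range(1, n - 1):                # Gauss-Seidel sweep toward spring equilibrium
            x[i] = (k[i - 1] * x[i - 1] + k[i] * x[i + 1]) / (k[i - 1] + k[i])
    return x

x_adapted = spring_adapt_1d(lambda x: np.tanh((x - 0.5) / 0.05))
print("smallest spacing:", np.diff(x_adapted).min(),
      "largest spacing:", np.diff(x_adapted).max())
```

    At spring equilibrium the product of stiffness and interval length is constant along the chain, so the stiffer intervals (steep-gradient regions) end up shorter, which is the clustering behaviour the grid-control parameters of the paper regulate.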

  14. Design of Robust Adaptive Unbalance Response Controllers for Rotors with Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Knospe, Carl R.; Tamer, Samir M.; Fedigan, Stephen J.

    1996-01-01

    Experimental results have recently demonstrated that an adaptive open loop control strategy can be highly effective in the suppression of unbalance induced vibration on rotors supported in active magnetic bearings. This algorithm, however, relies upon a predetermined gain matrix. Typically, this matrix is determined by an optimal control formulation resulting in the choice of the pseudo-inverse of the nominal influence coefficient matrix as the gain matrix. This solution may result in problems with stability and performance robustness since the estimated influence coefficient matrix is not equal to the actual influence coefficient matrix. Recently, analysis tools have been developed to examine the robustness of this control algorithm with respect to structured uncertainty. Herein, these tools are extended to produce a design procedure for determining the adaptive law's gain matrix. The resulting control algorithm has a guaranteed convergence rate and steady state performance in spite of the uncertainty in the rotor system. Several examples are presented which demonstrate the effectiveness of this approach and its advantages over the standard optimal control formulation.
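
    The baseline strategy that the paper builds on — adaptive open-loop correction using the pseudo-inverse of a nominal influence coefficient matrix as the gain — can be sketched as follows. The matrices, mismatch level, gain, and iteration count are arbitrary assumptions, used only to show how the update behaves when the nominal model differs from the true influence coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# 'True' influence coefficients (vibration response per unit correction) and a
# nominal estimate that is deliberately in error.
C_true = np.array([[1.0, 0.3], [0.2, 0.8]])
C_nom = C_true * (1.0 + 0.2 * rng.standard_normal(C_true.shape))
d = np.array([1.5, -0.7])          # unbalance-induced vibration with no correction

G = np.linalg.pinv(C_nom)          # gain matrix: pseudo-inverse of the nominal model
u = np.zeros(2)                    # open-loop correction signal
mu = 0.5                           # adaptation gain

for k in range(10):
    x = d + C_true @ u             # measured synchronous vibration
    u = u - mu * (G @ x)           # adaptive open-loop update
    print(f"iter {k}: residual vibration {np.linalg.norm(x):.3e}")
```

    The residual contracts only as long as the mismatch between C_nom and C_true keeps the iteration matrix stable, which is exactly the robustness question the paper's design procedure addresses.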

  15. Conformational energy calculations on polypeptides and proteins: use of a statistical mechanical procedure for evaluating structure and properties.

    PubMed

    Scheraga, H A; Paine, G H

    1986-01-01

    We are using a variety of theoretical and computational techniques to study protein structure, protein folding, and higher-order structures. Our earlier work involved treatments of liquid water and aqueous solutions of nonpolar and polar solutes, computations of the stabilities of the fundamental structures of proteins and their packing arrangements, conformations of small cyclic and open-chain peptides, structures of fibrous proteins (collagen), structures of homologous globular proteins, introduction of special procedures as constraints during energy minimization of globular proteins, and structures of enzyme-substrate complexes. Recently, we presented a new methodology for predicting polypeptide structure (described here); the method is based on the calculation of the probable and average conformation of a polypeptide chain by the application of equilibrium statistical mechanics in conjunction with an adaptive, importance sampling Monte Carlo algorithm. As a test, it was applied to Met-enkephalin.
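
    As a generic illustration of the Monte Carlo sampling step mentioned above, the sketch below runs a plain Metropolis walk over two dihedral angles of a toy torsional potential. The potential, temperature, and step size are arbitrary assumptions, and the adaptive importance-sampling scheme of the paper is not reproduced.

```python
import math
import random

def energy(phi, psi):
    """Toy torsional potential (arbitrary units) over two dihedral angles in radians."""
    return (1.0 + math.cos(3.0 * phi)) + (1.0 + math.cos(3.0 * psi)) \
           + 0.5 * math.cos(phi + psi)

def metropolis(n_steps=50_000, kT=0.6, step=0.3, seed=1):
    random.seed(seed)
    phi, psi = 0.0, 0.0
    e = energy(phi, psi)
    samples = []
    for _ in range(n_steps):
        phi_new = phi + random.uniform(-step, step)
        psi_new = psi + random.uniform(-step, step)
        e_new = energy(phi_new, psi_new)
        if e_new <= e or random.random() < math.exp(-(e_new - e) / kT):
            phi, psi, e = phi_new, psi_new, e_new     # accept the move
        samples.append((phi, psi))
    return samples

samples = metropolis()
avg_e = sum(energy(p, q) for p, q in samples) / len(samples)
print(f"average torsional energy ≈ {avg_e:.3f} over {len(samples)} samples")
```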

  16. Adaptive statistical pattern classifiers for remotely sensed data

    NASA Technical Reports Server (NTRS)

    Gonzalez, R. C.; Pace, M. O.; Raulston, H. S.

    1975-01-01

    A technique for the adaptive estimation of nonstationary statistics necessary for Bayesian classification is developed. The basic approach to the adaptive estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest and (2) a projection of the parameters in time or position. A divergence criterion is developed to monitor algorithm performance. Comparative results of adaptive and nonadaptive classifier tests are presented for simulated four dimensional spectral scan data.
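
    A minimal sketch of the two-step idea above — recursively update class statistics as new samples arrive, then classify with the current estimates — is given below. The drift model, forgetting gain, one-dimensional two-class setting, and the assumption that class labels are available for the update are all illustrative simplifications, not the paper's stochastic approximation or projection scheme.

```python
import numpy as np

rng = np.random.default_rng(42)

def classify(x, means):
    """Minimum-distance (equal-variance Gaussian) decision between two classes."""
    return int(abs(x - means[1]) < abs(x - means[0]))

true_means = np.array([0.0, 3.0])      # slowly drifting class means (nonstationary scene)
est_means = true_means.copy()          # start from correct initial estimates
alpha = 0.05                           # stochastic-approximation / forgetting gain

errors, n = 0, 5000
for k in range(n):
    true_means += 0.001                # the underlying statistics drift over time
    label = int(rng.integers(0, 2))
    x = rng.normal(true_means[label], 1.0)
    if classify(x, est_means) != label:
        errors += 1
    # recursive update of the estimated mean for the observed class
    est_means[label] += alpha * (x - est_means[label])

print(f"error rate with adaptive means: {errors / n:.3f}")
print("final estimated means:", est_means, "final true means:", true_means)
```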

  17. Creative ideation and adaptive reuse: a solution to sustainable urban heritage conservation

    NASA Astrophysics Data System (ADS)

    Hasnain, H.; Mohseni, F.

    2018-03-01

    The rapid urbanization of the last century has resulted in the abandonment of historic urban centers and still today attracts people to suburbs and newly developed districts. This urban expansion poses a serious threat to historic properties when people opt to move away, leaving behind their heritage. The adaptive reuse of heritage is considered a dominant strategy for handling this issue while bypassing demolition of heritage properties. However, adaptive reuse cannot consider only the preservation of heritage through structural retrofit or functional revitalization; it also needs to articulate the image and the creative ideation that reflect the future of the heritage. Consequently, this paper aims to examine the role of branding as an innovative source of ideation in the application of adaptive reuse to heritage. In this regard, the case study of the Zalando outlet store in Berlin is selected: an old building situated in a commercial district of the city among heritage buildings in a wide range of styles dating from the Middle Ages. This research uses semi-structured interviews to examine the significance of brand making in a successful adaptive reuse process. The findings indicate that the importance of the outlet building lies not only in its physical fabric or commercial aspects, but in the spirit of the place that lies in the magical essence of big labels as emblems. This underlying essence of place stimulated an adaptation of the building that surpasses its physical and functional aspects and allows it to merge into new times and new sustainable development.

  18. Numerical Procedures for Inlet/Diffuser/Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Rubin, Stanley G.

    1998-01-01

    Two primitive-variable, pressure-based, flux-split RNS/NS solution procedures for viscous flows are presented. Both methods are uniformly valid across the full Mach number range, i.e., from the incompressible limit to high supersonic speeds. The first method is an 'optimized' version of a previously developed global pressure relaxation RNS procedure. A considerable reduction in the number of relatively expensive matrix inversions, and thereby in the computational time, has been achieved with this procedure. CPU times are reduced by a factor of 15 for predominantly elliptic flows (incompressible and low subsonic). The second method is a time-marching, 'linearized' convection RNS/NS procedure. The key to the efficiency of this procedure is the reduction to a single LU inversion at the inflow cross-plane. The remainder of the algorithm simply requires back-substitution with this LU and the corresponding residual vector at any cross-plane location. This method is not time-consistent, but has a convective-type CFL stability limitation. Both formulations are robust and provide accurate solutions for a variety of internal viscous flows, as presented herein.

  19. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  20. Recent developments in the Dorfman-Berbaum-Metz procedure for multireader ROC study analysis.

    PubMed

    Hillis, Stephen L; Berbaum, Kevin S; Metz, Charles E

    2008-05-01

    The Dorfman-Berbaum-Metz (DBM) method has been one of the most popular methods for analyzing multireader receiver-operating characteristic (ROC) studies since it was proposed in 1992. Despite its popularity, the original procedure has several drawbacks: it is limited to jackknife accuracy estimates, it is substantially conservative, and it is not based on a satisfactory conceptual or theoretical model. Recently, solutions to these problems have been presented in three papers. Our purpose is to summarize and provide an overview of these recent developments. We present and discuss the recently proposed solutions for the various drawbacks of the original DBM method. We compare the solutions in a simulation study and find that they result in improved performance for the DBM procedure. We also compare the solutions using two real data studies and find that the modified DBM procedure that incorporates these solutions yields more significant results and clearer interpretations of the variance component parameters than the original DBM procedure. We recommend using the modified DBM procedure that incorporates the recent developments.

  1. Basis adaptation and domain decomposition for steady partial differential equations with random coefficients

    DOE PAGES

    Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.

    2017-09-04

    In this paper, we present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Lastly, our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
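
    The low-dimensional-representation idea above can be illustrated, in a much simpler setting, with a truncated Karhunen-Loève expansion of a one-dimensional random coefficient field. The exponential covariance, correlation length, grid, and truncation level below are arbitrary assumptions, and the sketch does not reproduce the paper's basis adaptation or domain decomposition.

```python
import numpy as np

rng = np.random.default_rng(3)

n, ell = 200, 0.2                           # grid points and correlation length
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # exponential covariance matrix

# Karhunen-Loeve: eigen-decomposition of the covariance, truncated to r modes
vals, vecs = np.linalg.eigh(C)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
r = 10
captured = vals[:r].sum() / vals.sum()
print(f"{r} modes capture {100 * captured:.1f}% of the variance")

# sample the reduced random field: a(x, xi) = sum_k sqrt(lambda_k) * v_k(x) * xi_k
xi = rng.standard_normal(r)
field = vecs[:, :r] @ (np.sqrt(vals[:r]) * xi)
print("sampled field range:", field.min(), field.max())
```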

  2. Finding the Genomic Basis of Local Adaptation: Pitfalls, Practical Solutions, and Future Directions.

    PubMed

    Hoban, Sean; Kelley, Joanna L; Lotterhos, Katie E; Antolin, Michael F; Bradburd, Gideon; Lowry, David B; Poss, Mary L; Reed, Laura K; Storfer, Andrew; Whitlock, Michael C

    2016-10-01

    Uncovering the genetic and evolutionary basis of local adaptation is a major focus of evolutionary biology. The recent development of cost-effective methods for obtaining high-quality genome-scale data makes it possible to identify some of the loci responsible for adaptive differences among populations. Two basic approaches for identifying putatively locally adaptive loci have been developed and are broadly used: one that identifies loci with unusually high genetic differentiation among populations (differentiation outlier methods) and one that searches for correlations between local population allele frequencies and local environments (genetic-environment association methods). Here, we review the promises and challenges of these genome scan methods, including correcting for the confounding influence of a species' demographic history, biases caused by missing aspects of the genome, matching scales of environmental data with population structure, and other statistical considerations. In each case, we make suggestions for best practices for maximizing the accuracy and efficiency of genome scans to detect the underlying genetic basis of local adaptation. With attention to their current limitations, genome scan methods can be an important tool in finding the genetic basis of adaptive evolutionary change.

  3. Assessing speech perception in children with cochlear implants using a modified hybrid visual habituation procedure.

    PubMed

    Core, Cynthia; Brown, Janean W; Larsen, Michael D; Mahshie, James

    2014-01-01

    The objectives of this research were to determine whether an adapted version of a Hybrid Visual Habituation procedure could be used to assess speech perception of phonetic and prosodic features of speech (vowel height, lexical stress, and intonation) in individual pre-school-age children who use cochlear implants. Nine children ranging in age from 3;4 to 5;5 participated in this study. Children were prelingually deaf, used cochlear implants, and had no other known disabilities. Children received two speech feature tests using an adaptation of a Hybrid Visual Habituation procedure. Seven of the nine children demonstrated perception of at least one speech feature with this procedure, based on results from a Bayesian linear regression analysis. At least one child demonstrated perception of each speech feature using this assessment procedure. An adapted version of the Hybrid Visual Habituation Procedure with an appropriate statistical analysis provides a way to assess phonetic and prosodic aspects of speech in pre-school-age children who use cochlear implants.

  4. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bo, Wurigen; Shashkov, Mikhail

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  5. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE PAGES

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  6. Reduced rank regression via adaptive nuclear norm penalization

    PubMed Central

    Chen, Kun; Dong, Hongbo; Chan, Kung-Sik

    2014-01-01

    Summary We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. The rank consistency of and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
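
    A minimal sketch of an adaptively soft-thresholded singular value decomposition of the kind summarized above is given below; the inverse-power weighting rule and the tuning constants are illustrative assumptions, not the authors' exact estimator.

      import numpy as np

      def adaptive_svt(Y, lam, gamma=2.0, eps=1e-8):
          """Soft-threshold singular values with weights that decrease as the
          singular value grows (larger components are penalized less)."""
          U, s, Vt = np.linalg.svd(Y, full_matrices=False)
          w = 1.0 / (s + eps) ** gamma          # assumed adaptive weights
          s_new = np.maximum(s - lam * w, 0.0)  # weighted soft-thresholding
          return (U * s_new) @ Vt               # low-rank estimate

      # Example: recover an approximately rank-2 matrix from a noisy observation.
      rng = np.random.default_rng(1)
      L = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
      C_hat = adaptive_svt(L + 0.1 * rng.standard_normal((30, 20)), lam=0.5)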

  7. Awareness of Sensorimotor Adaptation to Visual Rotations of Different Size

    PubMed Central

    Werner, Susen; van Aken, Bernice C.; Hulst, Thomas; Frens, Maarten A.; van der Geest, Jos N.; Strüder, Heiko K.; Donchin, Opher

    2015-01-01

    Previous studies on sensorimotor adaptation revealed no awareness of the nature of the perturbation after adaptation to an abrupt 30° rotation of visual feedback or after adaptation to gradually introduced perturbations. Whether the degree of awareness depends on the magnitude of the perturbation, though, has as yet not been tested. Instead of using questionnaires, as was often done in previous work, the present study used a process dissociation procedure to measure awareness and unawareness. A naïve, implicit group and a group of subjects using explicit strategies adapted to 20°, 40° and 60° cursor rotations in different adaptation blocks that were each followed by determination of awareness and unawareness indices. The awareness index differed between groups and increased from 20° to 60° adaptation. In contrast, there was no group difference for the unawareness index, but it also depended on the size of the rotation. Early adaptation varied between groups and correlated with awareness: The more awareness a participant had developed the more the person adapted in the beginning of the adaptation block. In addition, there was a significant group difference for savings but it did not correlate with awareness. Our findings suggest that awareness depends on perturbation size and that aware and strategic processes are differentially involved during adaptation and savings. Moreover, the use of the process dissociation procedure opens the opportunity to determine awareness and unawareness indices in future sensorimotor adaptation research. PMID:25894396

  8. The Spanish national health care-associated infection surveillance network (INCLIMECC): data summary January 1997 through December 2006 adapted to the new National Healthcare Safety Network Procedure-associated module codes.

    PubMed

    Pérez, Cristina Díaz-Agero; Rodela, Ana Robustillo; Monge Jodrá, Vincente

    2009-12-01

    In 1997, a national standardized surveillance system (designated INCLIMECC [Indicadores Clínicos de Mejora Continua de la Calidad]) was established in Spain for health care-associated infection (HAI) in surgery patients, based on the National Nosocomial Infection Surveillance (NNIS) system. In 2005, in its procedure-associated module, the National Healthcare Safety Network (NHSN) inherited the NNIS program for surveillance of HAI in surgery patients and reorganized all surgical procedures. INCLIMECC actively monitors all patients referred to the surgical ward of each participating hospital. We present a summary of the data collected from January 1997 to December 2006 adapted to the new NHSN procedures. Surgical site infection (SSI) rates are provided by operative procedure and NNIS risk index category. Further quality indicators reported are surgical complications, length of stay, antimicrobial prophylaxis, mortality, readmission because of infection or other complication, and revision surgery. Because the ICD-9-CM surgery procedure code is included in each patient's record, we were able to reorganize our database avoiding the loss of extensive information, as has occurred with other systems.

  9. Evaluation Plan for the Computerized Adaptive Vocational Aptitude Battery.

    ERIC Educational Resources Information Center

    Green, Bert F.; And Others

    The United States Armed Services are planning to introduce computerized adaptive testing (CAT) into the Armed Services Vocational Aptitude Battery (ASVAB), which is a major part of the present personnel assessment procedures. Adaptive testing will improve efficiency greatly by assessing each candidate's answers as the test progresses and posing…

  10. Adaptive Assessment of Young Children with Visual Impairment

    ERIC Educational Resources Information Center

    Ruiter, Selma; Nakken, Han; Janssen, Marleen; Van Der Meulen, Bieuwe; Looijestijn, Paul

    2011-01-01

    The aim of this study was to assess the effect of adaptations for children with low vision of the Bayley Scales, a standardized developmental instrument widely used to assess development in young children. Low vision adaptations were made to the procedures, item instructions and play material of the Dutch version of the Bayley Scales of Infant…

  11. Construction of a Computerized Adaptive Testing Version of the Quebec Adaptive Behavior Scale.

    ERIC Educational Resources Information Center

    Tasse, Marc J.; And Others

    Multilog (Thissen, 1991) was used to estimate parameters of 225 items from the Quebec Adaptive Behavior Scale (QABS). A database containing actual data from 2,439 subjects was used for the parameterization procedures. The two-parameter-logistic model was used in estimating item parameters and in the testing strategy. MicroCAT (Assessment Systems…
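
    For reference, the two-parameter logistic model mentioned here gives the probability of a keyed response as a function of ability; the sketch below is the generic form with illustrative item parameters, not the QABS calibration itself.

      import numpy as np

      def p_2pl(theta, a, b):
          """Two-parameter logistic IRT model: discrimination a, difficulty b."""
          return 1.0 / (1.0 + np.exp(-a * (theta - b)))

      # Illustrative item: a = 1.2, b = 0.5, evaluated over a range of abilities.
      p = p_2pl(theta=np.linspace(-3, 3, 7), a=1.2, b=0.5)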

  12. Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1981-01-01

    A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pre/post buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.
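
    The incremental Newton-Raphson core that such an algorithm builds on can be sketched in a few lines; the constraint surface, energy scaling, and self-adaptive restructuring phases of the actual algorithm are omitted, and the toy residual below is a placeholder.

      import numpy as np

      def incremental_newton(residual, jacobian, u0, load_steps, tol=1e-10, max_iter=25):
          """March the load factor in increments; at each increment iterate
          Newton-Raphson corrections until the residual norm converges."""
          u = np.array(u0, dtype=float)
          for lam in load_steps:                          # incremental (predictor) phase
              for _ in range(max_iter):                   # corrector iterations
                  r = residual(u, lam)
                  if np.linalg.norm(r) < tol:
                      break
                  u -= np.linalg.solve(jacobian(u, lam), r)
          return u

      # Toy nonlinear system: stiffening spring  2*u + 0.5*u**3 = lam * f
      residual = lambda u, lam: np.array([2.0 * u[0] + 0.5 * u[0] ** 3 - lam * 1.0])
      jacobian = lambda u, lam: np.array([[2.0 + 1.5 * u[0] ** 2]])
      u_final = incremental_newton(residual, jacobian, [0.0], np.linspace(0.1, 1.0, 10))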

  13. Evaluation of the CATSIB DIF Procedure in a Pretest Setting

    ERIC Educational Resources Information Center

    Nandakumar, Ratna; Roussos, Louis

    2004-01-01

    A new procedure, CATSIB, for assessing differential item functioning (DIF) on computerized adaptive tests (CATs) is proposed. CATSIB, a modified SIBTEST procedure, matches test takers on estimated ability and controls for impact-induced Type 1 error inflation by employing a CAT version of the IBTEST "regression correction." The…

  14. 49 CFR 572.142 - Head assembly and test procedure.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 7 2013-10-01 2013-10-01 false Head assembly and test procedure. 572.142 Section...-year-Old Child Crash Test Dummy, Alpha Version § 572.142 Head assembly and test procedure. (a) The head assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter plate...

  15. 49 CFR 572.142 - Head assembly and test procedure.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 7 2011-10-01 2011-10-01 false Head assembly and test procedure. 572.142 Section...-year-Old Child Crash Test Dummy, Alpha Version § 572.142 Head assembly and test procedure. (a) The head assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter plate...

  16. 49 CFR 572.142 - Head assembly and test procedure.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 7 2014-10-01 2014-10-01 false Head assembly and test procedure. 572.142 Section...-year-Old Child Crash Test Dummy, Alpha Version § 572.142 Head assembly and test procedure. (a) The head assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter plate...

  17. 49 CFR 572.142 - Head assembly and test procedure.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 7 2012-10-01 2012-10-01 false Head assembly and test procedure. 572.142 Section...-year-Old Child Crash Test Dummy, Alpha Version § 572.142 Head assembly and test procedure. (a) The head assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter plate...

  18. Research in digital adaptive flight controllers

    NASA Technical Reports Server (NTRS)

    Kaufman, H.

    1976-01-01

    A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Both explicit controllers which directly utilize parameter identification and implicit controllers which do not require identification were considered. Extensive analytical and simulation efforts resulted in the recommendation of two explicit digital adaptive flight controllers. Weighted least squares estimation procedures were interfaced with control logic developed using either optimal regulator theory or single-stage performance indices.

  19. Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    1999-01-01

    The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.

  20. Basis adaptation and domain decomposition for steady-state partial differential equations with random coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.

    We present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.

  1. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    PubMed

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

    Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.
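
    The nested procedure that the authors contrast with their hybrid approach can be sketched schematically: every upper-level candidate triggers a complete lower-level optimization. The single-objective toy functions below are illustrative assumptions only.

      import numpy as np
      from scipy.optimize import minimize

      # Toy bilevel problem (single-objective for brevity): the lower level returns
      # its optimal response y*(x); the upper level is evaluated at (x, y*(x)).
      def lower_level_response(x):
          res = minimize(lambda y: (y[0] - x[0]) ** 2 + y[0] ** 2, x0=[0.0])
          return res.x

      def upper_level_objective(x):
          y_star = lower_level_response(x)          # nested (expensive) inner solve
          return (x[0] - 1.0) ** 2 + (y_star[0] - 0.5) ** 2

      res = minimize(upper_level_objective, x0=[0.0], method="Nelder-Mead")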

  2. 46 CFR 153.1065 - Sodium chlorate solutions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Sodium chlorate solutions. 153.1065 Section 153.1065... Procedures § 153.1065 Sodium chlorate solutions. (a) No person may load sodium chlorate solutions into a... before loading. (b) The person in charge of cargo transfer shall make sure that spills of sodium chlorate...

  3. 46 CFR 153.1065 - Sodium chlorate solutions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 5 2011-10-01 2011-10-01 false Sodium chlorate solutions. 153.1065 Section 153.1065... Procedures § 153.1065 Sodium chlorate solutions. (a) No person may load sodium chlorate solutions into a... before loading. (b) The person in charge of cargo transfer shall make sure that spills of sodium chlorate...

  4. 46 CFR 153.1065 - Sodium chlorate solutions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 5 2012-10-01 2012-10-01 false Sodium chlorate solutions. 153.1065 Section 153.1065... Procedures § 153.1065 Sodium chlorate solutions. (a) No person may load sodium chlorate solutions into a... before loading. (b) The person in charge of cargo transfer shall make sure that spills of sodium chlorate...

  5. Adaptive Shape Functions and Internal Mesh Adaptation for Modelling Progressive Failure in Adhesively Bonded Joints

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott; Gries, Thomas; Waas, Anthony M.; Pineda, Evan J.

    2014-01-01

    Enhanced finite elements are elements with an embedded analytical solution that can capture detailed local fields, enabling more efficient, mesh independent finite element analysis. The shape functions are determined based on the analytical model rather than prescribed. This method was applied to adhesively bonded joints to model joint behavior with one element through the thickness. This study demonstrates two methods of maintaining the fidelity of such elements during adhesive non-linearity and cracking without increasing the mesh needed for an accurate solution. The first method uses adaptive shape functions, where the shape functions are recalculated at each load step based on the softening of the adhesive. The second method is internal mesh adaption, where cracking of the adhesive within an element is captured by further discretizing the element internally to represent the partially cracked geometry. By keeping mesh adaptations within an element, a finer mesh can be used during the analysis without affecting the global finite element model mesh. Examples are shown which highlight when each method is most effective in reducing the number of elements needed to capture adhesive nonlinearity and cracking. These methods are validated against analogous finite element models utilizing cohesive zone elements.

  6. Catheter for Cleaning Surgical Optics During Surgical Procedures: A Possible Solution for Residue Buildup and Fogging in Video Surgery.

    PubMed

    de Abreu, Igor Renato Louro Bruno; Abrão, Fernando Conrado; Silva, Alessandra Rodrigues; Corrêa, Larissa Teresa Cirera; Younes, Riad Nain

    2015-05-01

    Currently, there is a tendency to perform surgical procedures via laparoscopic or thoracoscopic access. However, even with the impressive technological advancement in surgical materials, such as improvement in quality of monitors, light sources, and optical fibers, surgeons have to face simple problems that can greatly hinder surgery by video. One is the formation of "fog" or residue buildup on the lens, causing decreased visibility. Intracavitary techniques for cleaning surgical optics and preventing fog formation have been described; however, some of these techniques employ the use of expensive and complex devices designed solely for this purpose. Moreover, these techniques allow the cleaning of surgical optics when they become dirty, which does not prevent the accumulation of residue in the optics. To solve this problem we have designed a device that allows cleaning the optics with no surgical stops and prevents fogging and residue accumulation. The objective of this study is to evaluate through experimental testing the effectiveness of a simple device that prevents the accumulation of residue and fogging of optics used in surgical procedures performed through thoracoscopic or laparoscopic access. Ex-vivo experiments were performed simulating the conditions of residue presence in surgical optics during a video surgery. The experiment consists of immersing the optics and catheter set, connected to the IV line with crystalloid solution, in three types of materials: blood, blood plus fat solution, and 200 mL of distilled water with 1 vial of methylene blue. The optics coupled to the device were immersed in 200 mL of each type of residue, repeating each immersion 10 times for each distinct residue for both thirty-degree and zero-degree optics, totaling 420 experiments. A success rate of 98.1% was observed after the experiments; in these cases the device was able to clean the optics and prevent residue accumulation.

  7. Evolutionary online behaviour learning and adaptation in real robots

    PubMed Central

    Correia, Luís; Christensen, Anders Lyhne

    2017-01-01

    Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm. PMID:28791130

  8. Evolutionary online behaviour learning and adaptation in real robots.

    PubMed

    Silva, Fernando; Correia, Luís; Christensen, Anders Lyhne

    2017-07-01

    Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm.

  9. Real-time adaptive finite element solution of time-dependent Kohn-Sham equation

    NASA Astrophysics Data System (ADS)

    Bao, Gang; Hu, Guanghui; Liu, Di

    2015-01-01

    In our previous paper (Bao et al., 2012 [1]), a general framework of using adaptive finite element methods to solve the Kohn-Sham equation has been presented. This work is concerned with solving the time-dependent Kohn-Sham equations. The numerical methods are studied in the time domain, which can be employed to explain both the linear and the nonlinear effects. A Crank-Nicolson scheme and linear finite element space are employed for the temporal and spatial discretizations, respectively. To resolve the trouble regions in the time-dependent simulations, a heuristic error indicator is introduced for the mesh adaptive methods. An algebraic multigrid solver is developed to efficiently solve the complex-valued system derived from the semi-implicit scheme. A mask function is employed to remove or reduce the boundary reflection of the wavefunction. The effectiveness of our method is verified by numerical simulations for both linear and nonlinear phenomena, in which the effectiveness of the mesh adaptive methods is clearly demonstrated.
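
    A minimal Crank-Nicolson step for a one-dimensional model Hamiltonian illustrates the semi-implicit time discretization mentioned above; the finite-difference Hamiltonian, uniform grid, and dense solve stand in for the adaptive finite-element machinery and algebraic multigrid solver of the paper.

      import numpy as np

      # 1D free-particle Hamiltonian on a uniform grid (finite differences), atomic units.
      n, dx, dt = 200, 0.1, 0.01
      main = np.full(n, 1.0 / dx**2)
      off = np.full(n - 1, -0.5 / dx**2)
      H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

      # Crank-Nicolson: (I + i*dt/2*H) psi^{n+1} = (I - i*dt/2*H) psi^n
      I = np.eye(n)
      A = I + 0.5j * dt * H
      B = I - 0.5j * dt * H

      x = dx * (np.arange(n) - n // 2)
      psi = np.exp(-x**2 + 1j * 2.0 * x)          # Gaussian wave packet
      psi /= np.linalg.norm(psi)

      for _ in range(100):
          psi = np.linalg.solve(A, B @ psi)       # unconditionally stable, norm-preserving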

  10. Survey of adaptive control using Liapunov design

    NASA Technical Reports Server (NTRS)

    Lindorff, D. P.; Carroll, R. L.

    1972-01-01

    A survey was made of the literature devoted to the synthesis of model-tracking adaptive systems based on application of Liapunov's second method. The basic synthesis procedure is introduced, and extensions made to the theory since 1966 are critically reviewed. The extensions relate to design for relative stability, reduction of order techniques, design with disturbance, design with time variable parameters, multivariable systems, identification, and an adaptive observer.

  11. Spatial Data Quality Control Procedure applied to the Okavango Basin Information System

    NASA Astrophysics Data System (ADS)

    Butchart-Kuhlmann, Daniel

    2014-05-01

    Spatial data is a powerful form of information, capable of providing information of great interest and tremendous use to a variety of users. However, much like other data representing the 'real world', precision and accuracy must be high for the results of data analysis to be deemed reliable and thus applicable to real world projects and undertakings. The spatial data quality control (QC) procedure presented here was developed as the topic of a Master's thesis, within the sphere of, and using data from, the Okavango Basin Information System (OBIS), itself a part of The Future Okavango (TFO) project. The aim of the QC procedure was to form the basis of a method through which to determine the quality of spatial data relevant for application to hydrological, solute, and erosion transport modelling using the Jena Adaptable Modelling System (JAMS). As such, the quality of all data present in OBIS classified under the topics of elevation, geoscientific information, or inland waters was evaluated. Now that the initial data quality has been evaluated, efforts are underway to correct the errors found, thus improving the quality of the dataset.

  12. Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations

    NASA Technical Reports Server (NTRS)

    Chrisochoides, Nikos

    1995-01-01

    We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploring concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data parallel adaptive PDE computations. Unfortunately, multithreading does not always simplify programs, often makes code reuse difficult, and increases software complexity.

  13. New multigrid approach for three-dimensional unstructured, adaptive grids

    NASA Technical Reports Server (NTRS)

    Parthasarathy, Vijayan; Kallinderis, Y.

    1994-01-01

    A new multigrid method with adaptive unstructured grids is presented. The three-dimensional Euler equations are solved on tetrahedral grids that are adaptively refined or coarsened locally. The multigrid method is employed to propagate the fine grid corrections more rapidly by redistributing the changes-in-time of the solution from the fine grid to the coarser grids to accelerate convergence. A new approach is employed that uses the parent cells of the fine grid cells in an adapted mesh to generate successively coarser levels of multigrid. This obviates the need for the generation of a sequence of independent, nonoverlapping grids as well as the relatively complicated operations that need to be performed to interpolate the solution and the residuals between the independent grids. The solver is an explicit, vertex-based, finite volume scheme that employs edge-based data structures and operations. Spatial discretization is of central-differencing type combined with special upwind-like smoothing operators. Application cases include adaptive solutions obtained with multigrid acceleration for supersonic and subsonic flow over a bump in a channel, as well as transonic flow around the ONERA M6 wing. Two levels of multigrid resulted in a reduction in the number of iterations by a factor of 5.

  14. A Comparison of Exposure Control Procedures in CATs Using the 3PL Model

    ERIC Educational Resources Information Center

    Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G.

    2013-01-01

    This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…

  15. Higher-order adaptive finite-element methods for Kohn–Sham density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motamarri, P.; Nowak, M.R.; Leiter, K.

    2013-11-15

    We present an efficient computational approach to perform real-space electronic structure calculations using an adaptive higher-order finite-element discretization of Kohn–Sham density-functional theory (DFT). To this end, we develop an a priori mesh-adaption technique to construct a close to optimal finite-element discretization of the problem. We further propose an efficient solution strategy for solving the discrete eigenvalue problem by using spectral finite-elements in conjunction with Gauss–Lobatto quadrature, and a Chebyshev acceleration technique for computing the occupied eigenspace. The proposed approach has been observed to provide a staggering 100–200-fold computational advantage over the solution of a generalized eigenvalue problem. Using the proposed solution procedure, we investigate the computational efficiency afforded by higher-order finite-element discretizations of the Kohn–Sham DFT problem. Our studies suggest that staggering computational savings, of the order of 1000-fold relative to linear finite-elements, can be realized, for both all-electron and local pseudopotential calculations, by using higher-order finite-element discretizations. On all the benchmark systems studied, we observe diminishing returns in computational savings beyond the sixth-order for accuracies commensurate with chemical accuracy, suggesting that the hexic spectral-element may be an optimal choice for the finite-element discretization of the Kohn–Sham DFT problem. A comparative study of the computational efficiency of the proposed higher-order finite-element discretizations suggests that the performance of the finite-element basis is competitive with the plane-wave discretization for non-periodic local pseudopotential calculations, and compares to the Gaussian basis for all-electron calculations to within an order of magnitude. Further, we demonstrate the capability of the proposed approach to compute the electronic structure of a metallic system
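
    The Chebyshev acceleration mentioned above can be sketched as a three-term polynomial filter that damps the unwanted part of the spectrum; this is a generic textbook form under assumed spectral bounds, not the code used in the paper.

      import numpy as np

      def chebyshev_filter(H, X, degree, lb, ub):
          """Apply a degree-m Chebyshev polynomial of H to the block X, damping the
          spectral components of H lying in [lb, ub]."""
          e = (ub - lb) / 2.0                   # half-width of the damped interval
          c = (ub + lb) / 2.0                   # its center
          Y = (H @ X - c * X) / e
          for _ in range(2, degree + 1):
              Y_new = 2.0 * (H @ Y - c * Y) / e - X
              X, Y = Y, Y_new
          return Y

      # Toy usage: enhance the lowest eigenvectors of a random symmetric matrix.
      rng = np.random.default_rng(2)
      A = rng.standard_normal((100, 100)); A = 0.5 * (A + A.T)
      evals = np.linalg.eigvalsh(A)
      V = chebyshev_filter(A, rng.standard_normal((100, 6)), degree=10,
                           lb=evals[6], ub=evals[-1])
      Q, _ = np.linalg.qr(V)                    # orthonormal basis approximating the occupied space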

  16. Favorite Demonstrations: Exothermic Crystallization from a Supersaturated Solution.

    ERIC Educational Resources Information Center

    Kauffman, George B.; And Others

    1986-01-01

    The use of sodium acetate solution to show supersaturation is a favorite among lecture demonstrations. However, careful adjustment of the solute-to-water ratio must be made to attain the most spectacular effect--complete solidification of the solution. Procedures to accomplish this are provided and discussed. (JN)

  17. Salt adaptation in Bufo bufo

    PubMed Central

    Ferreira, H. G.; Jesus, C. H.

    1973-01-01

    1. The capacity of adaptation of toads (Bufo bufo) to environments of high salinity was studied and the relative importance of skin, kidney and urinary bladder in controlling the balance of water and salt was assessed. 2. Toads were kept in NaCl solutions of 20, 50, 110, 150 and 220 mM and studied in their fourth week of adaptation. A group of animals considered as `control' was kept in wet soil with free access to water. Plasma, ureter urine, and bladder and colon contents were analysed for sodium, potassium, chloride and osmolality, and total body sodium and water were determined. Absorption of water and 22Na through the skin, and water flow and sodium excretion through the ureter, of intact animals was studied. Hydrosmotic water transport through the isolated urinary bladder of `control' and adapted animals was determined. The effects of pitressin and aldosterone on the water and sodium balance are described. 3. The survival rates of toads kept in saline concentrations up to 150 mM were identical to that of `control' animals, but half of the animals kept in 220 mM died within 4 weeks. 4. There is a linear correlation between the sodium concentrations and osmolality of plasma and of the external media. 5. The sodium concentration in colon contents rose with rising external concentrations, up to values higher than the values in plasma. 6. Sodium concentrations and osmolalities of ureter and bladder urine increased in adapted animals, the values for bladder urine becoming much higher than those for ureter urine in animals adapted to 110, 150 and 220 mM. 7. Total body water, as a percentage of total weight was kept within very narrow limits, although the total body sodium increased with adaptation. 8. Absorption of water through the skin for the same osmotic gradients was smaller in adapted than in `control' animals. 9. The ureteral output of water of toads adapted to 110 and 150 mM-NaCl was larger than the water absorption through the skin. 10. Skin absorption of

  18. A roadmap to effective urban climate change adaptation

    NASA Astrophysics Data System (ADS)

    Setiadi, R.

    2018-03-01

    This paper outlines a roadmap to effective urban climate change adaptation built from our practical understanding of the evidence and effects of climate change and the preparation of climate change adaptation strategies and plans. This roadmap aims to drive research toward fruitful knowledge and achievable, solution-based recommendations for adapting to climate change in urban areas in an effective and systematic manner. This paper underscores the importance of the interplay between local government initiatives and national government for effective adaptation to climate change, taking into account the policy process and politics. This paper argues that effective urban climate change adaptation contributes to building urban resilience and helps achieve national government goals and targets in climate change adaptation.

  19. Adaptive neuro-heuristic hybrid model for fruit peel defects detection.

    PubMed

    Woźniak, Marcin; Połap, Dawid

    2018-02-01

    The fusion of machine learning methods is beneficial in decision support systems: a composition of approaches makes it possible to combine the most effective features into one solution. In this article we present an approach to the development of an adaptive method based on the fusion of a proposed novel neural architecture and heuristic search into one co-working solution. We propose a neural network architecture that adapts to the processed input, co-working with a heuristic method used to precisely detect areas of interest. Input images are first decomposed into segments. This makes processing easier, since in the smaller images (decomposed segments) the developed Adaptive Artificial Neural Network (AANN) processes less information, which makes numerical calculations more precise. For each segment a descriptor vector is composed and presented to the proposed AANN architecture. Evaluation is run adaptively, with the developed AANN adapting its composed architecture to the inputs and their features. After evaluation, selected segments are forwarded to the heuristic search, which detects areas of interest. As a result the system returns the image with pixels located over peel damages. Experimental results on the developed solution are discussed and compared with other commonly used methods to validate the efficacy and the impact of the proposed fusion in the system structure and training process on classification results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Evaluation of truncation error and adaptive grid generation for the transonic full potential flow calculations

    NASA Technical Reports Server (NTRS)

    Nakamura, S.

    1983-01-01

    The effects of truncation error on the numerical solution of transonic flows using the full potential equation are studied. The effects of adapting grid point distributions to various solution aspects, including shock waves, are also discussed. A conclusion is that a rapid change of grid spacing is damaging to the accuracy of the flow solution. Therefore, in a solution-adaptive grid application, an optimal grid is obtained as a tradeoff between the amount of grid refinement and the rate of grid stretching.
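
    The conclusion about rapid changes in grid spacing can be illustrated with a simple stretching-ratio check on a one-dimensional point distribution; the threshold used here is an arbitrary illustrative value.

      import numpy as np

      def flag_rapid_stretching(x, max_ratio=1.2):
          """Return indices where adjacent grid spacings change faster than max_ratio."""
          h = np.diff(np.asarray(x, dtype=float))          # local spacings
          ratio = np.maximum(h[1:] / h[:-1], h[:-1] / h[1:])
          return np.nonzero(ratio > max_ratio)[0] + 1      # offending interior points

      x = np.concatenate([np.linspace(0.0, 1.0, 21), np.linspace(1.1, 3.0, 6)])
      bad = flag_rapid_stretching(x)                       # flags the abrupt coarsening near x = 1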

  1. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed and considerable implementation work has been carried out in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
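
    The constant modulus property referred to above can be illustrated with the standard constant-modulus-algorithm (CMA) update for a linear filter; this generic blind-equalizer sketch is only loosely analogous to the prediction algorithms of the report, and the signal model is a placeholder.

      import numpy as np

      def cma_equalizer(x, n_taps=8, mu=1e-3, R2=1.0):
          """Constant Modulus Algorithm: adapt filter weights so the output
          magnitude stays close to sqrt(R2), without a training sequence."""
          w = np.zeros(n_taps, dtype=complex)
          w[0] = 1.0                                    # center-spike initialization
          y = np.zeros(len(x), dtype=complex)
          for n in range(n_taps, len(x)):
              u = x[n - n_taps:n][::-1]                 # most recent samples first
              y[n] = w @ u
              e = y[n] * (np.abs(y[n]) ** 2 - R2)       # CMA error term
              w -= mu * e * np.conj(u)                  # stochastic-gradient update
          return w, y

      # Toy usage: QPSK symbols through a mild channel, recovered blindly.
      rng = np.random.default_rng(3)
      s = (rng.choice([-1, 1], 4000) + 1j * rng.choice([-1, 1], 4000)) / np.sqrt(2)
      x = np.convolve(s, [1.0, 0.3 + 0.2j], mode="same")
      w, y = cma_equalizer(x)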

  2. Balancing Flexible Constraints and Measurement Precision in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Moyer, Eric L.; Galindo, Jennifer L.; Dodd, Barbara G.

    2012-01-01

    Managing test specifications--both multiple nonstatistical constraints and flexibly defined constraints--has become an important part of designing item selection procedures for computerized adaptive tests (CATs) in achievement testing. This study compared the effectiveness of three procedures: constrained CAT, flexible modified constrained CAT,…

  3. A Pilot Program in Adapted Physical Education: Hillsborough High School.

    ERIC Educational Resources Information Center

    Thompson, Vince

    The instructor of an adapted physical education program describes his experiences and suggests guidelines for implementing other programs. Reviewed are such aspects as program orientation, class procedures, identification of student participants, and grading procedures. Objectives, lesson plans and evaluations are presented for the following units…

  4. Sampling procedures for inventory of commercial volume tree species in Amazon Forest.

    PubMed

    Netto, Sylvio P; Pelissari, Allan L; Cysneiros, Vinicius C; Bonazza, Marcelo; Sanquetta, Carlos R

    2017-01-01

    The spatial distribution of tropical tree species can affect the consistency of the estimators in commercial forest inventories, therefore, appropriate sampling procedures are required to survey species with different spatial patterns in the Amazon Forest. For this, the present study aims to evaluate the conventional sampling procedures and introduce the adaptive cluster sampling for volumetric inventories of Amazonian tree species, considering the hypotheses that the density, the spatial distribution and the zero-plots affect the consistency of the estimators, and that the adaptive cluster sampling allows to obtain more accurate volumetric estimation. We use data from a census carried out in Jamari National Forest, Brazil, where trees with diameters equal to or higher than 40 cm were measured in 1,355 plots. Species with different spatial patterns were selected and sampled with simple random sampling, systematic sampling, linear cluster sampling and adaptive cluster sampling, whereby the accuracy of the volumetric estimation and presence of zero-plots were evaluated. The sampling procedures applied to species were affected by the low density of trees and the large number of zero-plots, wherein the adaptive clusters allowed concentrating the sampling effort in plots with trees and, thus, agglutinating more representative samples to estimate the commercial volume.
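
    A minimal sketch of adaptive cluster sampling on a gridded forest is given below; the four-neighbour rule and the trigger criterion are illustrative assumptions rather than the design used in the study.

      import random

      def adaptive_cluster_sample(counts, n_initial, criterion=1, seed=0):
          """counts: dict mapping (row, col) plots to tree counts.
          Plots meeting the criterion pull their 4 neighbours into the sample."""
          random.seed(seed)
          sampled = set(random.sample(sorted(counts), n_initial))
          frontier = [p for p in sampled if counts[p] >= criterion]
          while frontier:
              r, c = frontier.pop()
              for q in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                  if q in counts and q not in sampled:
                      sampled.add(q)
                      if counts[q] >= criterion:
                          frontier.append(q)            # the cluster keeps growing
          return sampled

      # Toy 20 x 20 forest with a few clustered trees.
      counts = {(r, c): 0 for r in range(20) for c in range(20)}
      for p in [(3, 3), (3, 4), (4, 4), (15, 10)]:
          counts[p] = 2
      plots = adaptive_cluster_sample(counts, n_initial=30)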

  5. Post-processing procedure for industrial quantum key distribution systems

    NASA Astrophysics Data System (ADS)

    Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey

    2016-08-01

    We present algorithmic solutions aimed at the post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
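
    The three post-processing stages can be sketched schematically as below; the error-correction step is left as a placeholder and the privacy amplification uses a plain random binary matrix as a stand-in for Toeplitz hashing, so none of this reflects the industrial implementation described in the abstract.

      import numpy as np
      rng = np.random.default_rng(4)

      def estimate_qber(alice_bits, bob_bits, sample_frac=0.1):
          """Parameter estimation: disclose a random sample of positions and
          estimate the quantum bit error rate from it."""
          idx = rng.choice(len(alice_bits), int(sample_frac * len(alice_bits)), replace=False)
          qber = np.mean(alice_bits[idx] != bob_bits[idx])
          keep = np.setdiff1d(np.arange(len(alice_bits)), idx)   # disclosed bits are discarded
          return qber, alice_bits[keep], bob_bits[keep]

      def privacy_amplification(key_bits, out_len):
          """Compress the reconciled key with a random binary matrix (2-universal hashing)."""
          M = rng.integers(0, 2, size=(out_len, len(key_bits)))
          return (M @ key_bits) % 2

      alice = rng.integers(0, 2, 10_000)
      bob = np.where(rng.random(10_000) < 0.03, 1 - alice, alice)  # 3% channel errors
      qber, a, b = estimate_qber(alice, bob)
      # (error correction would reconcile a and b here; omitted as a placeholder)
      final_key = privacy_amplification(a, out_len=int(0.5 * len(a)))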

  6. Adaptive optics ophthalmoscopy: results and applications.

    PubMed

    Pallikaris, A

    2005-01-01

    The living human eye's optical aberrations set a limit to retinal imaging in the clinical setting. Progress in the field of adaptive optics has offered unique solutions to this problem. The purpose of this review is to summarize the most recent advances in adaptive optics ophthalmoscopy. Adaptive optics technology has been combined with flood illumination imaging, confocal scanning laser ophthalmoscopy, and optical coherence tomography for the high resolution imaging of the retina. The advent of adaptive optics technology has provided the technical platform for the compensation of the eye's aberration and made possible the observation of single cones, small capillaries, nerve fibers, and leukocyte dynamics as well as the ultrastructure of the optic nerve head lamina cribrosa in vivo. Detailed imaging of retinal infrastructure provides valuable information for the study of retinal physiology and pathology.

  7. Adaptive triangular mesh generation

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Eiseman, P. R.

    1984-01-01

    A general adaptive grid algorithm is developed on triangular grids. The adaptivity is provided by a combination of node addition, dynamic node connectivity and a simple node movement strategy. While the local restructuring process and the node addition mechanism take place in the physical plane, the nodes are displaced on a monitor surface, constructed from the salient features of the physical problem. An approximation to mean curvature detects changes in the direction of the monitor surface, and provides the pulling force on the nodes. Solutions to the axisymmetric Grad-Shafranov equation demonstrate the capturing, by triangles, of the plasma-vacuum interface in a free-boundary equilibrium configuration.

  8. Reflections on the Adaptive Designs Accelerating Promising Trials Into Treatments (ADAPT-IT) Process—Findings from a Qualitative Study

    PubMed Central

    Guetterman, Timothy C.; Fetters, Michael D.; Legocki, Laurie J.; Mawocha, Samkeliso; Barsan, William G.; Lewis, Roger J.; Berry, Donald A.; Meurer, William J.

    2015-01-01

    Context The context for this study was the Adaptive Designs Accelerating Promising Trials Into Treatments (ADAPT-IT) project, which aimed to incorporate flexible adaptive designs into pivotal clinical trials and to conduct an assessment of the trial development process. Little research provides guidance to academic institutions in planning adaptive trials. Objectives The purpose of this qualitative study was to explore the perspectives and experiences of stakeholders as they reflected back on the interactive ADAPT-IT adaptive design development process, and to understand their perspectives regarding lessons learned about the design of the trials and trial development. Materials and methods We conducted semi-structured interviews with ten key stakeholders and observations of the process. We employed qualitative thematic text data analysis to reduce the data into themes about the ADAPT-IT project and adaptive clinical trials. Results The qualitative analysis revealed four themes: education of the project participants, how the process evolved with participant feedback, procedures that could enhance the development of other trials, and education of the broader research community. Discussion and conclusions While participants became more likely to consider flexible adaptive designs, additional education is needed to both understand the adaptive methodology and articulate it when planning trials. PMID:26622163

  9. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaptation on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalances among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35 percent of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives an almost sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3 percent off the optimal solutions, but requires only 1 percent of the computational time.

  10. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.

  11. Topical Hazard Evaluation Program Procedural Guide.

    DTIC Science & Technology

    1982-01-01

    ... conditions and are not expected to cause a photochemical irritation reaction under test conditions ... percent (w/v) Oil of Bergamot solution ... photochemical skin irritant (Bergamot oil). d. All compounds are handled with caution. Current test procedures cannot eliminate the possibility of individual ... percent ethyl alcohol. One additional compound applied along with the test compounds is a 10 percent solution (w/v) of Bergamot oil in 95 percent ethyl

  12. Organization of Distributed Adaptive Learning

    ERIC Educational Resources Information Center

    Vengerov, Alexander

    2009-01-01

    The growing sensitivity of various systems and parts of industry, society, and even everyday individual life leads to the increased volume of changes and needs for adaptation and learning. This creates a new situation where learning from being purely academic knowledge transfer procedure is becoming a ubiquitous always-on essential part of all…

  13. QUEST+: A general multidimensional Bayesian adaptive psychometric method.

    PubMed

    Watson, Andrew B

    2017-03-01

    QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes. It is a generalization and extension of the original QUEST procedure and incorporates many subsequent developments in the area of parametric adaptive testing. With a single procedure, it is possible to implement a wide variety of experimental designs, including conventional threshold measurement; measurement of psychometric function parameters, such as slope and lapse; estimation of the contrast sensitivity function; measurement of increment threshold functions; measurement of noise-masking functions; Thurstone scale estimation using pair comparisons; and categorical ratings on linear and circular stimulus dimensions. QUEST+ provides a general method to accelerate data collection in many areas of cognitive and perceptual science.
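
    The core of such a Bayesian adaptive procedure can be sketched for a one-dimensional threshold estimate; the logistic psychometric function, the stimulus grid, and the entropy-minimization rule below are generic textbook choices, not the QUEST+ implementation itself.

      import numpy as np

      stims = np.linspace(-2, 2, 41)                 # candidate stimulus intensities
      thetas = np.linspace(-2, 2, 81)                # candidate threshold parameters
      prior = np.ones_like(thetas) / len(thetas)     # flat prior over the threshold

      def p_correct(stim, theta, slope=3.0, guess=0.5, lapse=0.02):
          p = 1.0 / (1.0 + np.exp(-slope * (stim - theta)))
          return guess + (1.0 - guess - lapse) * p

      def entropy(p):
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      def next_stimulus(prior):
          """Pick the stimulus whose outcome minimizes the expected posterior entropy."""
          best, best_h = None, np.inf
          for s in stims:
              like1 = p_correct(s, thetas)                       # P(correct | theta)
              p1 = np.sum(prior * like1)                         # predictive P(correct)
              post1 = prior * like1 / p1
              post0 = prior * (1 - like1) / (1 - p1)
              h = p1 * entropy(post1) + (1 - p1) * entropy(post0)
              if h < best_h:
                  best, best_h = s, h
          return best

      def update(prior, stim, correct):
          like = p_correct(stim, thetas) if correct else 1 - p_correct(stim, thetas)
          post = prior * like
          return post / post.sum()

      # Simulated observer with true threshold 0.4
      rng = np.random.default_rng(5)
      for _ in range(40):
          s = next_stimulus(prior)
          resp = rng.random() < p_correct(s, 0.4)
          prior = update(prior, s, resp)
      theta_hat = thetas[np.argmax(prior)]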

  14. A computational procedure for multibody systems including flexible beam dynamics

    NASA Technical Reports Server (NTRS)

    Downer, J. D.; Park, K. C.; Chiou, J. C.

    1990-01-01

    A computational procedure suitable for the solution of equations of motion for flexible multibody systems has been developed. A fully nonlinear continuum approach capable of accounting for both finite rotations and large deformations has been used to model a flexible beam component. The beam kinematics are referred directly to an inertial reference frame such that the degrees of freedom embody both the rigid and flexible deformation motions. As such, the beam inertia expression is identical to that of rigid body dynamics. The nonlinear coupling between gross body motion and elastic deformation is contained in the internal force expression. Numerical solution procedures for the integration of spatial kinematic systems can be directly applied to the generalized coordinates of both the rigid and flexible components. An accurate computation of the internal force term which is invariant to rigid motions is incorporated into the general solution procedure.

  15. Broadcasting satellite service synthesis using gradient and cyclic coordinate search procedures

    NASA Technical Reports Server (NTRS)

    Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J.; Martin, C. H.; Levis, C. A.

    1986-01-01

    Two search techniques are considered for solving satellite synthesis problems. Neither is likely to find a globally optimal solution. In order to determine which method performs better and what factors affect their performance, an experiment is designed and the same problem is solved under a variety of starting solution configuration-algorithm combinations. Since there is no randomization in the experiment, results of practical, rather than statistical, significance are presented. Implementation of a cyclic coordinate search procedure clearly finds better synthesis solutions than implementation of a gradient search procedure does with the objective of maximizing the minimum C/I ratio computed at test points on the perimeters of the intended service areas. The length of the available orbital arc and the configuration of the starting solution are shown to affect the quality of the solutions found.
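
    A generic cyclic coordinate search for a max-min objective, of the flavour used in such synthesis problems, can be sketched as below: each coordinate in turn is set to the best value on a candidate grid while the others are held fixed. The toy objective and grids are placeholders, not the satellite-synthesis model of the paper.

        # Sketch of a cyclic coordinate search for a max-min objective (e.g. maximising
        # the minimum C/I over test points).  The objective and candidate grids here are
        # placeholders only.
        import numpy as np

        def cyclic_coordinate_search(objective, x0, lower, upper, n_candidates=41, sweeps=20):
            """Improve x one coordinate at a time; each coordinate is set to the best
            value found on a uniform grid of candidates while the others stay fixed."""
            x = np.array(x0, dtype=float)
            best = objective(x)
            for _ in range(sweeps):
                improved = False
                for i in range(x.size):
                    for c in np.linspace(lower[i], upper[i], n_candidates):
                        trial = x.copy()
                        trial[i] = c
                        val = objective(trial)
                        if val > best + 1e-12:
                            best, x, improved = val, trial, True
                if not improved:          # coordinate-wise optimum reached
                    break
            return x, best

        if __name__ == "__main__":
            # Toy max-min objective: worst of three concave "quality" functions.
            def quality(x):
                return min(10 - (x[0] - 1) ** 2,
                           10 - (x[1] + 2) ** 2,
                           9 - 0.5 * (x[0] - x[1]) ** 2)
            x, val = cyclic_coordinate_search(quality, [0.0, 0.0], [-5, -5], [5, 5])
            print(x, val)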

  16. Micro-scale NMR Experiments for Monitoring the Optimization of Membrane Protein Solutions for Structural Biology.

    PubMed

    Horst, Reto; Wüthrich, Kurt

    2015-07-20

    Reconstitution of integral membrane proteins (IMP) in aqueous solutions of detergent micelles has been extensively used in structural biology, using either X-ray crystallography or NMR in solution. Further progress could be achieved by establishing a rational basis for the selection of detergent and buffer conditions, since the stringent bottleneck that slows down the structural biology of IMPs is the preparation of diffracting crystals or concentrated solutions of stable isotope labeled IMPs. Here, we describe procedures to monitor the quality of aqueous solutions of [ 2 H, 15 N]-labeled IMPs reconstituted in detergent micelles. This approach has been developed for studies of β-barrel IMPs, where it was successfully applied for numerous NMR structure determinations, and it has also been adapted for use with α-helical IMPs, in particular GPCRs, in guiding crystallization trials and optimizing samples for NMR studies (Horst et al ., 2013). 2D [ 15 N, 1 H]-correlation maps are used as "fingerprints" to assess the foldedness of the IMP in solution. For promising samples, these "inexpensive" data are then supplemented with measurements of the translational and rotational diffusion coefficients, which give information on the shape and size of the IMP/detergent mixed micelles. Using microcoil equipment for these NMR experiments enables data collection with only micrograms of protein and detergent. This makes serial screens of variable solution conditions viable, enabling the optimization of parameters such as the detergent concentration, sample temperature, pH and the composition of the buffer.

  17. Structured programming: Principles, notation, procedure

    NASA Technical Reports Server (NTRS)

    JOST

    1978-01-01

    Structured programs are best represented using a notation which gives a clear representation of the block encapsulation. In this report, a set of symbols which can be used until binding directives are republished is suggested. Structured programming also allows a new method of procedure for design and testing. Programs can be designed top down, that is, they can start at the highest program plane and can penetrate to the lowest plane by step-wise refinements. The testing methodology also is adapted to this procedure. First, the highest program plane is tested, and the programs which are not yet finished in the next lower plane are represented by so-called dummies. They are gradually replaced by the real programs.

  18. Self-adaptive multi-objective harmony search for optimal design of water distribution networks

    NASA Astrophysics Data System (ADS)

    Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon

    2017-11-01

    In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm attempts to remove some of the inconvenience from parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.
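
    For orientation, a basic single-objective harmony search improvisation loop is sketched below, with a crude self-adaptive pitch-adjustment bandwidth taken from the spread of the harmony memory. The SaMOHS algorithm in the paper is multi-objective and parameter-setting-free; this sketch only illustrates the underlying harmony-memory mechanics, and all parameter values are assumptions.

        # Sketch of a basic harmony search with a crude self-adaptive pitch-adjustment
        # bandwidth taken from the spread of the harmony memory.  Single-objective
        # illustration only; not the SaMOHS algorithm of the paper.
        import numpy as np

        def harmony_search(cost, lower, upper, hms=20, hmcr=0.9, par=0.3, iters=2000, seed=0):
            rng = np.random.default_rng(seed)
            dim = len(lower)
            memory = rng.uniform(lower, upper, size=(hms, dim))     # harmony memory
            costs = np.array([cost(h) for h in memory])
            for _ in range(iters):
                new = np.empty(dim)
                bw = memory.std(axis=0) + 1e-12                     # self-adaptive bandwidth
                for j in range(dim):
                    if rng.random() < hmcr:                         # memory consideration
                        new[j] = memory[rng.integers(hms), j]
                        if rng.random() < par:                      # pitch adjustment
                            new[j] += rng.normal(0.0, bw[j])
                    else:                                           # random selection
                        new[j] = rng.uniform(lower[j], upper[j])
                    new[j] = np.clip(new[j], lower[j], upper[j])
                c = cost(new)
                worst = np.argmax(costs)
                if c < costs[worst]:                                # replace worst harmony
                    memory[worst], costs[worst] = new, c
            best = np.argmin(costs)
            return memory[best], costs[best]

        if __name__ == "__main__":
            sphere = lambda x: float(np.sum(x ** 2))
            print(harmony_search(sphere, lower=np.array([-5.0] * 4), upper=np.array([5.0] * 4)))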

  19. Finite element methods for the biomechanics of soft hydrated tissues: nonlinear analysis and adaptive control of meshes.

    PubMed

    Spilker, R L; de Almeida, E S; Donzelli, P S

    1992-01-01

    This chapter addresses computationally demanding numerical formulations in the biomechanics of soft tissues. The theory of mixtures can be used to represent soft hydrated tissues in the human musculoskeletal system as a two-phase continuum consisting of an incompressible solid phase (collagen and proteoglycan) and an incompressible fluid phase (interstitial water). We first consider the finite deformation of soft hydrated tissues in which the solid phase is represented as hyperelastic. A finite element formulation of the governing nonlinear biphasic equations is presented based on a mixed-penalty approach and derived using the weighted residual method. Fluid and solid phase deformation, velocity, and pressure are interpolated within each element, and the pressure variables within each element are eliminated at the element level. A system of nonlinear, first-order differential equations in the fluid and solid phase deformation and velocity is obtained. In order to solve these equations, the contributions of the hyperelastic solid phase are incrementally linearized, a finite difference rule is introduced for temporal discretization, and an iterative scheme is adopted to achieve equilibrium at the end of each time increment. We demonstrate the accuracy and adequacy of the procedure using a six-node, isoparametric axisymmetric element, and we present an example problem for which an independent numerical solution is available. Next, we present an automated, adaptive environment for the simulation of soft tissue continua in which the finite element analysis is coupled with automatic mesh generation, error indicators, and projection methods. Mesh generation and updating, including both refinement and coarsening, for the two-dimensional examples examined in this study are performed using the finite quadtree approach. The adaptive analysis is based on an error indicator which is the L2 norm of the difference between the finite element solution and a projected finite element solution.

  20. Algebraic and adaptive learning in neural control systems

    NASA Astrophysics Data System (ADS)

    Ferrari, Silvia

    A systematic approach is developed for designing adaptive and reconfigurable nonlinear control systems that are applicable to plants modeled by ordinary differential equations. The nonlinear controller comprising a network of neural networks is taught using a two-phase learning procedure realized through novel techniques for initialization, on-line training, and adaptive critic design. A critical observation is that the gradients of the functions defined by the neural networks must equal corresponding linear gain matrices at chosen operating points. On-line training is based on a dual heuristic adaptive critic architecture that improves control for large, coupled motions by accounting for actual plant dynamics and nonlinear effects. An action network computes the optimal control law; a critic network predicts the derivative of the cost-to-go with respect to the state. Both networks are algebraically initialized based on prior knowledge of satisfactory pointwise linear controllers and continue to adapt on line during full-scale simulations of the plant. On-line training takes place sequentially over discrete periods of time and involves several numerical procedures. A backpropagating algorithm called Resilient Backpropagation is modified and successfully implemented to meet these objectives, without excessive computational expense. This adaptive controller is as conservative as the linear designs and as effective as a global nonlinear controller. The method is successfully implemented for the full-envelope control of a six-degree-of-freedom aircraft simulation. The results show that the on-line adaptation brings about improved performance with respect to the initialization phase during aircraft maneuvers that involve large-angle and coupled dynamics, and parameter variations.
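
    The standard Resilient Backpropagation (Rprop) update, which adapts a per-weight step size from the sign of successive gradients and ignores the gradient magnitude, can be sketched as follows. The hyper-parameters are the usual textbook values, and the code is not the modified variant described in the thesis.

        # Sketch of the Resilient Backpropagation (Rprop-) parameter update.
        import numpy as np

        class Rprop:
            def __init__(self, shape, eta_plus=1.2, eta_minus=0.5,
                         delta0=0.1, delta_min=1e-6, delta_max=50.0):
                self.delta = np.full(shape, delta0)       # per-weight step size
                self.prev_grad = np.zeros(shape)
                self.eta_plus, self.eta_minus = eta_plus, eta_minus
                self.delta_min, self.delta_max = delta_min, delta_max

            def step(self, weights, grad):
                sign_change = np.sign(self.prev_grad) * np.sign(grad)
                # grow the step where the gradient sign is stable ...
                self.delta = np.where(sign_change > 0,
                                      np.minimum(self.delta * self.eta_plus, self.delta_max),
                                      self.delta)
                # ... and shrink it where the sign has flipped
                self.delta = np.where(sign_change < 0,
                                      np.maximum(self.delta * self.eta_minus, self.delta_min),
                                      self.delta)
                grad = np.where(sign_change < 0, 0.0, grad)   # suppress update after a flip
                weights = weights - np.sign(grad) * self.delta
                self.prev_grad = grad
                return weights

        if __name__ == "__main__":
            # Minimise f(w) = sum((w - 3)^2) with gradient 2*(w - 3).
            w = np.zeros(5)
            opt = Rprop(w.shape)
            for _ in range(100):
                w = opt.step(w, 2.0 * (w - 3.0))
            print(w)   # should be close to 3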

  1. Adaptive control of periodic systems

    NASA Astrophysics Data System (ADS)

    Tian, Zhiling

    2009-12-01

    Adaptive control is needed to cope with parametric uncertainty in dynamical systems. The adaptive control of LTI systems in both discrete and continuous time has been studied for four decades and the results are currently used widely in many different fields. In recent years, interest has shifted to the adaptive control of time-varying systems. It is known that the adaptive control of arbitrarily rapidly time-varying systems is in general intractable, but systems with periodically time-varying parameters (LTP systems) which have much more structure, are amenable to mathematical analysis. Further, there is also a need for such control in practical problems which have arisen in industry during the past twenty years. This thesis is the first attempt to deal with the adaptive control of LTP systems. Adaptive Control involves estimation of unknown parameters, adjusting the control parameters based on the estimates, and demonstrating that the overall system is stable. System theoretic properties such as stability, controllability, and observability play an important role both in formulating of the problems, as well as in generating solutions for them. For LTI systems, these properties have been studied since 1960s, and algebraic conditions that have to be satisfied to assure these properties are now well established. In the case of LTP systems, these properties can be expressed only in terms of transition matrices that are much more involved than those for LTI systems. Since adaptive control problems can be formulated only when these properties are well understood, it is not surprising that systematic efforts have not been made thus far for formulating and solving adaptive control problems that arise in LTP systems. Even in the case of LTI systems, it is well recognized that problems related to adaptive discrete-time system are not as difficult as those that arise in the continuous-time systems. This is amply evident in the solutions that were derived in the 1980s and

  2. Visual Contrast Sensitivity Functions Obtained from Untrained Observers Using Tracking and Staircase Procedures. Final Report.

    ERIC Educational Resources Information Center

    Geri, George A.; Hubbard, David C.

    Two adaptive psychophysical procedures (tracking and "yes-no" staircase) for obtaining human visual contrast sensitivity functions (CSF) were evaluated. The procedures were chosen based on their proven validity and the desire to evaluate the practical effects of stimulus transients, since tracking procedures traditionally employ gradual…

  3. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

    Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen based on the approximation and stability conditions, rather than on the optimization of iteration convergence to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.
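
    The explicit iterative scheme mentioned above is based on Chebyshev polynomials; the textbook Chebyshev iteration for a symmetric positive-definite system with known spectral bounds is sketched below purely to illustrate the idea. The test problem and parameter choices are assumptions, not the authors' scheme for parabolic equations.

        # Sketch of the explicit Chebyshev iteration for a symmetric positive-definite
        # system A x = b, given lower/upper bounds on the spectrum of A.
        import numpy as np

        def chebyshev_iteration(A, b, lam_min, lam_max, x0=None, iters=100):
            x = np.zeros_like(b) if x0 is None else x0.copy()
            r = b - A @ x
            theta = 0.5 * (lam_max + lam_min)          # centre of the spectrum
            delta = 0.5 * (lam_max - lam_min)          # half-width of the spectrum
            sigma1 = theta / delta
            rho = 1.0 / sigma1
            d = r / theta
            for _ in range(iters):
                x = x + d
                r = r - A @ d
                rho_new = 1.0 / (2.0 * sigma1 - rho)
                d = rho_new * rho * d + (2.0 * rho_new / delta) * r
                rho = rho_new
            return x

        if __name__ == "__main__":
            # 1D Laplacian (tridiagonal) test problem with known eigenvalue bounds.
            n = 50
            A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
            h = 1.0 / (n + 1)
            lam_min = 4 * np.sin(np.pi * h / 2) ** 2
            lam_max = 4 * np.cos(np.pi * h / 2) ** 2
            b = np.ones(n)
            x = chebyshev_iteration(A, b, lam_min, lam_max, iters=300)
            print("residual:", np.linalg.norm(b - A @ x))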

  4. Lessons from a Space Analog on Adaptation for Long-Duration Exploration Missions.

    PubMed

    Anglin, Katlin M; Kring, Jason P

    2016-04-01

    Exploration missions to asteroids and Mars will bring new challenges associated with communication delays and more autonomy for crews. Mission safety and success will rely on how well the entire system, from technology to the human elements, is adaptable and resilient to disruptive, novel, or potentially catastrophic events. The recent NASA Extreme Environment Missions Operations (NEEMO) 20 mission highlighted this need and produced valuable "lessons learned" that will inform future research on team adaptation and resilience. A team of NASA, industry, and academic members used an iterative process to design a tripod shaped structure, called the CORAL Tower, for two astronauts to assemble underwater with minimal tools. The team also developed assembly procedures, administered training to the crew, and provided support during the mission. During the design, training, and assembly of the Tower, the team learned first-hand how adaptation in extreme environments depends on incremental testing, thorough procedures and contingency plans that predict possible failure scenarios, and effective team adaptation and resiliency for the crew and support personnel. Findings from NEEMO 20 provide direction on the design and testing process for future space systems and crews to maximize adaptation. This experience also underscored the need for more research on team adaptation, particularly how input and process factors affect adaption outcomes, the team adaptation iterative process, and new ways to measure the adaptation process.

  5. SAGE: The Self-Adaptive Grid Code. 3

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1999-01-01

    The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
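
    The equal-error-distribution idea can be illustrated in one dimension: mesh points are moved so that a gradient-based weight has the same integral over every cell. The weight function and iteration below are illustrative choices, not the SAGE implementation.

        # Sketch of 1D equidistribution: move nodes so that w = 1 + alpha*|du/dx|
        # has equal integral over every cell.
        import numpy as np

        def redistribute(x, u, alpha=5.0):
            """Return new node locations equidistributing the weight function."""
            dudx = np.gradient(u, x)
            w = 1.0 + alpha * np.abs(dudx)
            # cumulative "error content" along the grid (trapezoidal rule)
            W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
            targets = np.linspace(0.0, W[-1], x.size)       # equal shares of the total
            return np.interp(targets, W, x)                 # invert W(x) -> x(W)

        if __name__ == "__main__":
            x = np.linspace(0.0, 1.0, 41)
            u = np.tanh(40 * (x - 0.5))                     # sharp internal layer at x = 0.5
            for _ in range(5):                              # a few redistribution passes
                x = redistribute(x, u)
                u = np.tanh(40 * (x - 0.5))                 # re-evaluate the solution
            print("smallest cell near the layer:", np.round(np.diff(x).min(), 4))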

  6. Evaluation and application of the ROMS 1-way embedding procedure to the central california upwelling system

    NASA Astrophysics Data System (ADS)

    Penven, Pierrick; Debreu, Laurent; Marchesiello, Patrick; McWilliams, James C.

    What most clearly distinguishes near-shore and off-shore currents is their dominant spatial scale, O (1-30) km near-shore and O (30-1000) km off-shore. In practice, these phenomena are usually both measured and modeled with separate methods. In particular, it is infeasible for any regular computational grid to be large enough to simultaneously resolve well both types of currents. In order to obtain local solutions at high resolution while preserving the regional-scale circulation at an affordable computational cost, a 1-way grid embedding capability has been integrated into the Regional Oceanic Modeling System (ROMS). It takes advantage of the AGRIF (Adaptive Grid Refinement in Fortran) Fortran 90 package based on the use of pointers. After a first evaluation in a baroclinic vortex test case, the embedding procedure has been applied to a domain that covers the central upwelling region off California, around Monterey Bay, embedded in a domain that spans the continental U.S. Pacific Coast. Long-term simulations (10 years) have been conducted to obtain mean-seasonal statistical equilibria. The final solution shows few discontinuities at the parent-child domain boundary and a valid representation of the local upwelling structure, at a CPU cost only slightly greater than for the inner region alone. The solution is assessed by comparison with solutions for the whole US Pacific Coast at both low and high resolutions and to solutions for only the inner region at high resolution with mean-seasonal boundary conditions.

  7. Adaptive near-field beamforming techniques for sound source imaging.

    PubMed

    Cho, Yong Thung; Roan, Michael J

    2009-02-01

    Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing; these are to minimize contributions from directions other than the look direction and minimize the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both techniques use near-field beamforming weightings focusing at source locations estimated based on spherical wave array manifold vectors with spatial windows). Sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations both in anechoic and reverberant environments with random measurement noise. Also, experimental results are given for near-field sound pressure measurements of an enclosed loudspeaker.
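
    A minimal sketch of the minimum variance distortionless response weighting, w = R^{-1} a / (a^H R^{-1} a), with diagonal loading is given below; the array geometry and source model are toy assumptions rather than the near-field focusing setup of the paper.

        # Sketch of MVDR weight computation for a steering vector a and sample covariance R.
        import numpy as np

        def mvdr_weights(R, a, loading=1e-3):
            """MVDR weights with diagonal loading for numerical robustness."""
            n = R.shape[0]
            Rl = R + loading * np.trace(R).real / n * np.eye(n)
            Ri_a = np.linalg.solve(Rl, a)
            return Ri_a / (a.conj() @ Ri_a)

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            n_mics, n_snap = 8, 2000
            positions = np.arange(n_mics) * 0.5             # half-wavelength spacing
            steer = lambda ang: np.exp(2j * np.pi * positions * np.sin(ang))
            # One desired source at 0 rad, one interferer at 0.6 rad, plus noise.
            s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
            i = 3 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
            x = (np.outer(steer(0.0), s) + np.outer(steer(0.6), i)
                 + 0.1 * (rng.standard_normal((n_mics, n_snap))
                          + 1j * rng.standard_normal((n_mics, n_snap))))
            R = x @ x.conj().T / n_snap
            w = mvdr_weights(R, steer(0.0))
            print("gain toward source:", abs(w.conj() @ steer(0.0)))       # ~1 (distortionless)
            print("gain toward interferer:", abs(w.conj() @ steer(0.6)))   # << 1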

  8. Adaptive Search through Constraint Violations

    DTIC Science & Technology

    1990-01-01

    procedural) knowledge? Different methodologies are used to investigate these questions: Psychological experiments, computer simulations, historical studies... learns control knowledge through adaptive search. Unlike most other psychological models of skill acquisition, HS is a model of analytical, or... Newell, 1986; VanLehn, in press). Psychological models of skill acquisition employ different problem solving mechanisms (forward search, backward

  9. Assessment of Three “WHO” Patient Safety Solutions: Where Do We Stand and What Can We Do?

    PubMed Central

    Banihashemi, Sheida; Hatam, Nahid; Zand, Farid; Kharazmi, Erfan; Nasimi, Soheila; Askarian, Mehrdad

    2015-01-01

    Background: Most medical errors are preventable. The aim of this study was to compare the current execution of the 3 patient safety solutions with WHO suggested actions and standards. Methods: Data collection forms and direct observation were used to determine the status of implementation of existing protocols, resources, and tools. Results: In the field of patient hand-over, there was no standardized approach. In the field of the performance of correct procedure at the correct body site, there were no safety checklists, guideline, and educational content for informing the patients and their families about the procedure. In the field of hand hygiene (HH), although availability of necessary resources was acceptable, availability of promotional HH posters and reminders was substandard. Conclusions: There are some limitations of resources, protocols, and standard checklists in all three areas. We designed some tools that will help both wards to improve patient safety by the implementation of adapted WHO suggested actions. PMID:26900434

  10. On Browne's Solution for Oblique Procrustes Rotation

    ERIC Educational Resources Information Center

    Cramer, Elliot M.

    1974-01-01

    A form of Browne's (1967) solution of finding a least squares fit to a specified factor structure is given which does not involve solution of an eigenvalue problem. It suggests the possible existence of a singularity, and a simple modification of Browne's computational procedure is proposed. (Author/RC)

  11. Attractor mechanism as a distillation procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levay, Peter; Szalay, Szilard

    2010-07-15

    In a recent paper it was shown that for double extremal static spherical symmetric BPS black hole solutions in the STU model the well-known process of moduli stabilization at the horizon can be recast in a form of a distillation procedure of a three-qubit entangled state of a Greenberger-Horne-Zeilinger type. By studying the full flow in moduli space in this paper we investigate this distillation procedure in more detail. We introduce a three-qubit state with amplitudes depending on the conserved charges, the warp factor, and the moduli. We show that for the recently discovered non-BPS solutions it is possible to see how the distillation procedure unfolds itself as we approach the horizon. For the non-BPS seed solutions at the asymptotically Minkowski region we are starting with a three-qubit state having seven nonequal nonvanishing amplitudes and finally at the horizon we get a Greenberger-Horne-Zeilinger state with merely four nonvanishing ones with equal magnitudes. The magnitude of the surviving nonvanishing amplitudes is proportional to the macroscopic black hole entropy. A systematic study of such attractor states shows that their properties reflect the structure of the fake superpotential. We also demonstrate that when starting with the very special values for the moduli corresponding to flat directions the uniform structure at the horizon deteriorates due to errors generalizing the usual bit flips acting on the qubits of the attractor states.

  12. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy especially near thin filaments, we suggest an artificial sharpening method, which is similar in form to the conventional re-initialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  13. Development of a pressure based multigrid solution method for complex fluid flows

    NASA Technical Reports Server (NTRS)

    Shyy, Wei

    1991-01-01

    In order to reduce the computational difficulty associated with a single grid (SG) solution procedure, the multigrid (MG) technique was identified as a useful means for improving the convergence rate of iterative methods. A full MG full approximation storage (FMG/FAS) algorithm is used to solve the incompressible recirculating flow problems in complex geometries. The algorithm is implemented in conjunction with a pressure correction staggered grid type of technique using the curvilinear coordinates. In order to show the performance of the method, two flow configurations, one a square cavity and the other a channel, are used as test problems. Comparisons are made between the iterations, equivalent work units, and CPU time. Besides showing that the MG method can yield substantial speed-up with wide variations in Reynolds number, grid distributions, and geometry, issues such as the convergence characteristics of different grid levels, the choice of convection schemes, and the effectiveness of the basic iteration smoothers are studied. An adaptive grid scheme is also combined with the MG procedure to explore the effects of grid resolution on the MG convergence rate as well as the numerical accuracy.
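
    The single-grid versus multigrid idea can be illustrated with the textbook geometric V-cycle for the 1D Poisson equation sketched below (damped Jacobi smoothing, full-weighting restriction, linear prolongation). The paper's FMG/FAS pressure-correction scheme on curvilinear grids is considerably more involved; this is only a conceptual sketch.

        # Sketch of a geometric multigrid V-cycle for -u'' = f on (0, 1) with
        # homogeneous Dirichlet boundaries.
        import numpy as np

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
            return r

        def jacobi(u, f, h, sweeps=3, omega=2 / 3):
            for _ in range(sweeps):
                u_new = u.copy()
                u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
                u = u_new
            return u

        def restrict(r):                       # full weighting onto the coarse grid
            return np.concatenate(([0.0], 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2], [0.0]))

        def prolong(e):                        # linear interpolation to the fine grid
            fine = np.zeros(2 * (e.size - 1) + 1)
            fine[2::2] = e[1:]
            fine[1::2] = 0.5 * (e[:-1] + e[1:])
            return fine

        def v_cycle(u, f, h):
            if u.size <= 3:                               # coarsest grid: one interior unknown
                u[1] = 0.5 * h**2 * f[1]
                return u
            u = jacobi(u, f, h)                           # pre-smoothing
            r_c = restrict(residual(u, f, h))             # restrict the residual
            e_c = v_cycle(np.zeros_like(r_c), r_c, 2 * h) # coarse-grid correction
            u = u + prolong(e_c)
            return jacobi(u, f, h)                        # post-smoothing

        if __name__ == "__main__":
            N = 128
            x = np.linspace(0.0, 1.0, N + 1)
            h = x[1] - x[0]
            f = np.pi**2 * np.sin(np.pi * x)              # exact solution u = sin(pi x)
            u = np.zeros(N + 1)
            for _ in range(10):
                u = v_cycle(u, f, h)
            print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))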

  14. Broadcasting satellite service synthesis using gradient and cyclic coordinate search procedures

    NASA Technical Reports Server (NTRS)

    Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J.; Martin, C. H.; Levis, C. A.; Wang, C. W.

    1986-01-01

    Two search techniques are considered for solving satellite synthesis problems. Neither is likely to find a globally optimal solution. In order to determine which method performs better and what factors affect their performance, we design an experiment and solve the same problem under a variety of starting solution configuration-algorithm combinations. Since there is no randomization in the experiment, we present results of practical, rather than statistical, significance. Our implementation of a cyclic coordinate search procedure clearly finds better synthesis solutions than our implementation of a gradient search procedure does with our objective of maximizing the minimum C/I ratio computed at test points on the perimeters of the intended service areas. The length of the available orbital arc and the configuration of the starting solution are shown to affect the quality of the solutions found.

  15. Implementing Culture Change in Nursing Homes: An Adaptive Leadership Framework

    PubMed Central

    Corazzini, Kirsten; Twersky, Jack; White, Heidi K.; Buhr, Gwendolen T.; McConnell, Eleanor S.; Weiner, Madeline; Colón-Emeric, Cathleen S.

    2015-01-01

    Purpose of the Study: To describe key adaptive challenges and leadership behaviors to implement culture change for person-directed care. Design and Methods: The study design was a qualitative, observational study of nursing home staff perceptions of the implementation of culture change in each of 3 nursing homes. We conducted 7 focus groups of licensed and unlicensed nursing staff, medical care providers, and administrators. Questions explored perceptions of facilitators and barriers to culture change. Using a template organizing style of analysis with immersion/crystallization, themes of barriers and facilitators were coded for adaptive challenges and leadership. Results: Six key themes emerged, including relationships, standards and expectations, motivation and vision, workload, respect of personhood, and physical environment. Within each theme, participants identified barriers that were adaptive challenges and facilitators that were examples of adaptive leadership. Commonly identified challenges were how to provide person-directed care in the context of extant rules or policies or how to develop staff motivated to provide person-directed care. Implications: Implementing culture change requires the recognition of adaptive challenges for which there are no technical solutions, but which require reframing of norms and expectations, and the development of novel and flexible solutions. Managers and administrators seeking to implement person-directed care will need to consider the role of adaptive leadership to address these adaptive challenges. PMID:24451896

  16. 46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is stabilized...

  17. 46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 5 2013-10-01 2013-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is stabilized...

  18. 46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 5 2012-10-01 2012-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is stabilized...

  19. 46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 5 2014-10-01 2014-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is stabilized...

  20. 46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 5 2011-10-01 2011-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is stabilized...

  1. Thermal Adaptation Methods of Urban Plaza Users in Asia's Hot-Humid Regions: A Taiwan Case Study.

    PubMed

    Wu, Chen-Fa; Hsieh, Yen-Fen; Ou, Sheng-Jung

    2015-10-27

    Thermal adaptation studies provide researchers great insight to help understand how people respond to thermal discomfort. This research aims to assess outdoor urban plaza conditions in hot and humid regions of Asia by conducting an evaluation of thermal adaptation. We also propose that questionnaire items are appropriate for determining thermal adaptation strategies adopted by urban plaza users. A literature review was conducted and first hand data collected by field observations and interviews used to collect information on thermal adaptation strategies. Item analysis--Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA)--were applied to refine the questionnaire items and determine the reliability of the questionnaire evaluation procedure. The reliability and validity of items and constructing process were also analyzed. Then, researchers facilitated an evaluation procedure for assessing the thermal adaptation strategies of urban plaza users in hot and humid regions of Asia and formulated a questionnaire survey that was distributed in Taichung's Municipal Plaza in Taiwan. Results showed that most users responded with behavioral adaptation when experiencing thermal discomfort. However, if the thermal discomfort could not be alleviated, they then adopted psychological strategies. In conclusion, the evaluation procedure for assessing thermal adaptation strategies and the questionnaire developed in this study can be applied to future research on thermal adaptation strategies adopted by urban plaza users in hot and humid regions of Asia.

  2. Adaptation of Instructional Materials: A Commentary on the Research on Adaptations of "Who Polluted the Potomac"

    ERIC Educational Resources Information Center

    Ercikan, Kadriye; Alper, Naim

    2009-01-01

    This commentary first summarizes and discusses the analysis of the two translation processes described in the Oliveira, Colak, and Akerson article and the inferences these researchers make based on their research. In the second part of the commentary, we describe procedures and criteria used in adapting tests into different languages and how they…

  3. Adaptive Identification by Systolic Arrays.

    DTIC Science & Technology

    1987-12-01

    BIBLIOGRAPHY: Anton, Howard, Elementary Linear Algebra, John Wiley & Sons, 1984. Cristi, Roberto, A Parallel Structure for Adaptive Pole Placement... II. SYSTEM IDENTIFICATION METHODS: A. LINEAR SYSTEM MODELING; B. SOLUTION OF SYSTEMS OF LINEAR EQUATIONS; C. QR DECOMPOSITION; D. RECURSIVE LEAST SQUARES; E. BLOCK

  4. Adaptive Mesh Refinement for Microelectronic Device Design

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Lou, John; Norton, Charles

    1999-01-01

    Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchal elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of
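
    A very small h-refinement loop in one dimension, driven by a jump-based error indicator, is sketched below to illustrate the selective-refinement idea; the indicator, thresholds, and test function are assumptions and not the package described in the poster.

        # Sketch of a simple 1D h-refinement loop: cells next to nodes whose
        # slope-jump indicator exceeds a fraction of the largest indicator are split.
        import numpy as np

        def refine(nodes, u_exact, frac=0.3, max_passes=6, tol=1e-3):
            x = np.array(nodes, dtype=float)
            for _ in range(max_passes):
                u = u_exact(x)
                slopes = np.diff(u) / np.diff(x)
                eta = np.abs(np.diff(slopes))               # one indicator per interior node
                if eta.max() < tol:
                    break
                flagged = np.where(eta > frac * eta.max())[0] + 1   # interior node indices
                new_pts = []
                for i in flagged:                           # split the two adjacent cells
                    new_pts.append(0.5 * (x[i - 1] + x[i]))
                    new_pts.append(0.5 * (x[i] + x[i + 1]))
                x = np.unique(np.concatenate([x, new_pts]))
            return x

        if __name__ == "__main__":
            f = lambda x: np.tanh(30 * (x - 0.6))           # steep layer at x = 0.6
            grid = refine(np.linspace(0, 1, 11), f)
            print(len(grid), "nodes; smallest cell:", np.diff(grid).min())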

  5. Adaption of unstructured meshes using node movement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, J.G.; McRae, V.D.S.

    1996-12-31

    The adaption algorithm of Benson and McRae is modified for application to unstructured grids. The weight function generation was modified for application to unstructured grids and movement was limited to prevent cross over. A NACA 0012 airfoil is used as a test case to evaluate the modified algorithm when applied to unstructured grids and compared to results obtained by Warren. An adaptive mesh solution for the Sudhoo and Hall four element airfoil is included as a demonstration case.

  6. Dissociating proportion congruent and conflict adaptation effects in a Simon-Stroop procedure.

    PubMed

    Torres-Quesada, Maryem; Funes, Maria Jesús; Lupiáñez, Juan

    2013-02-01

    Proportion congruent and conflict adaptation are two well known effects associated with cognitive control. A critical open question is whether they reflect the same or separate cognitive control mechanisms. In this experiment, in a training phase we introduced a proportion congruency manipulation for one conflict type (i.e. Simon), whereas in pre-training and post-training phases two conflict types (e.g. Simon and Spatial Stroop) were displayed with the same incongruent-to-congruent ratio. The results supported the sustained nature of the proportion congruent effect, as it transferred from the training to the post-training phase. Furthermore, this transfer generalized to both conflict types. By contrast, the conflict adaptation effect was specific to conflict type, as it was only observed when the same conflict type (either Simon or Stroop) was presented on two consecutive trials (no effect was observed on conflict type alternation trials). Results are interpreted as supporting the reactive and proactive control mechanisms distinction. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Design of a Model Reference Adaptive Controller for an Unmanned Air Vehicle

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Matsutani, Megumi; Annaswamy, Anuradha M.

    2010-01-01

    This paper presents the "Adaptive Control Technology for Safe Flight (ACTS)" architecture, which consists of a non-adaptive controller that provides satisfactory performance under nominal flying conditions, and an adaptive controller that provides robustness under off-nominal ones. The design and implementation procedures of both controllers are presented. The aim of these procedures, which encompass both theoretical and practical considerations, is to develop a controller suitable for flight. The ACTS architecture is applied to the Generic Transport Model developed by NASA-Langley Research Center. The GTM is a dynamically scaled test model of a transport aircraft for which a flight-test article and a high-fidelity simulation are available. The nominal controller at the core of the ACTS architecture has a multivariable LQR-PI structure while the adaptive one has a direct, model reference structure. The main control surfaces as well as the throttles are used as control inputs. The inclusion of the latter alleviates the pilot's workload by eliminating the need for cancelling the pitch coupling generated by changes in thrust. Furthermore, the independent usage of the throttles by the adaptive controller enables their use for attitude control. Advantages and potential drawbacks of adaptation are demonstrated by performing high fidelity simulations of a flight-validated controller and of its adaptive augmentation.
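
    The "direct, model reference" structure can be illustrated with the classic scalar MRAC sketched below, using Lyapunov-rule gain updates and a forward-Euler simulation. The plant, gains, and reference signal are toy assumptions; the ACTS adaptive augmentation for the GTM is multivariable and far more elaborate.

        # Sketch of a scalar direct model-reference adaptive controller (MRAC).
        import numpy as np

        def simulate_mrac(a=1.0, b=3.0, a_m=-4.0, b_m=4.0, gamma=2.0, dt=1e-3, t_end=20.0):
            n = int(t_end / dt)
            x = x_m = 0.0
            k_x = k_r = 0.0                       # adaptive feedback / feedforward gains
            sgn_b = np.sign(b)                    # only the sign of b is assumed known
            log = np.zeros((n, 2))
            for i in range(n):
                t = i * dt
                r = 1.0 if (t % 10.0) < 5.0 else -1.0      # square-wave reference
                u = k_x * x + k_r * r
                e = x - x_m                                 # tracking error
                # plant, reference model, and adaptation laws (forward Euler)
                x += dt * (a * x + b * u)
                x_m += dt * (a_m * x_m + b_m * r)
                k_x += dt * (-gamma * x * e * sgn_b)
                k_r += dt * (-gamma * r * e * sgn_b)
                log[i] = (x, x_m)
            return log

        if __name__ == "__main__":
            log = simulate_mrac()
            print("final tracking error:", abs(log[-1, 0] - log[-1, 1]))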

  8. Anisotropic mesh adaptation for marine ice-sheet modelling

    NASA Astrophysics Data System (ADS)

    Gillet-Chaulet, Fabien; Tavard, Laure; Merino, Nacho; Peyaud, Vincent; Brondex, Julien; Durand, Gael; Gagliardini, Olivier

    2017-04-01

    Improving forecasts of the ice-sheet contribution to sea-level rise requires, among other things, correctly modelling the dynamics of the grounding line (GL), i.e. the line where the ice detaches from its underlying bed and goes afloat on the ocean. Many numerical studies, including the intercomparison exercises MISMIP and MISMIP3D, have shown that grid refinement in the GL vicinity is a key component to obtain reliable results. Improving model accuracy while keeping the computational cost affordable has therefore been an important target for the development of marine ice-sheet models. Adaptive mesh refinement (AMR) is a method where the accuracy of the solution is controlled by spatially adapting the mesh size. It has become popular in models using the finite element method as they naturally deal with unstructured meshes, but block-structured AMR has also been successfully applied to model GL dynamics. The main difficulty with AMR is to find efficient and reliable estimators of the numerical error to control the mesh size. Here, we use the estimator proposed by Frey and Alauzet (2015). Based on the interpolation error, it has been found effective in practice to control the numerical error, and has some flexibility, such as its ability to combine metrics for different variables, that makes it attractive. Routines to compute the anisotropic metric defining the mesh size have been implemented in the finite element ice flow model Elmer/Ice (Gagliardini et al., 2013). The mesh adaptation is performed using the freely available library MMG (Dapogny et al., 2014) called from Elmer/Ice. Using a setup based on the inter-comparison exercise MISMIP+ (Asay-Davis et al., 2016), we study the accuracy of the solution when the mesh is adapted using various variables (ice thickness, velocity, basal drag, …). We show that combining these variables allows the number of mesh nodes to be reduced by more than one order of magnitude, for the same numerical accuracy, when compared to a uniform mesh.
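
    One common way to build an interpolation-error based anisotropic metric from a recovered Hessian of the solution is sketched below: take absolute eigenvalues, scale by the target error, and clamp to prescribed size bounds. The constants are illustrative, and this is not the Elmer/Ice or MMG implementation.

        # Sketch of a Hessian-based anisotropic metric: edges measured in the metric M
        # have length ~1 when the linear interpolation error is about eps.
        import numpy as np

        def hessian_metric(H, eps=1e-2, h_min=1e-3, h_max=1.0):
            eigval, eigvec = np.linalg.eigh(0.5 * (H + H.T))        # symmetrise first
            lam = np.abs(eigval) / eps                              # curvature / target error
            lam = np.clip(lam, 1.0 / h_max**2, 1.0 / h_min**2)      # respect size bounds
            return eigvec @ np.diag(lam) @ eigvec.T

        def metric_edge_length(M, edge):
            """Length of an edge vector measured in the metric M (should be ~1)."""
            return float(np.sqrt(edge @ M @ edge))

        if __name__ == "__main__":
            # Strong curvature in x, weak in y -> short edges in x, long edges in y.
            H = np.array([[200.0, 0.0],
                          [0.0,     2.0]])
            M = hessian_metric(H)
            hx = 1.0 / np.sqrt(M[0, 0])      # admissible edge length along x
            hy = 1.0 / np.sqrt(M[1, 1])      # admissible edge length along y
            print(hx, hy)                    # anisotropic: hx << hy
            print(metric_edge_length(M, np.array([hx, 0.0])))   # ~1 by construction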

  9. Solution phase synthesis of aluminum-doped silicon nanoparticles via room-temperature, solvent based chemical reduction of silicon tetrachloride

    NASA Astrophysics Data System (ADS)

    Mowbray, Andrew James

    We present a method of wet chemical synthesis of aluminum-doped silicon nanoparticles (Al-doped Si NPs), encompassing the solution-phase co-reduction of silicon tetrachloride (SiCl4) and aluminum chloride (AlCl3) by sodium naphthalide (Na[NAP]) in 1,2-dimethoxyethane (DME). The development of this method was inspired by the work of Baldwin et al. at the University of California, Davis, and was adapted for our research through some noteworthy procedural modifications. Centrifugation and solvent-based extraction techniques were used throughout various stages of the synthesis procedure to achieve efficient and well-controlled separation of the Si NP product from the reaction media. In addition, the development of a non-aqueous, formamide-based wash solution facilitated simultaneous removal of the NaCl byproduct and Si NP surface passivation via attachment of 1-octanol to the particle surface. As synthesized, the Si NPs were typically 3-15 nm in diameter, and were mainly amorphous, as opposed to crystalline, as concluded from SAED and XRD diffraction pattern analysis. Aluminum doping at various concentrations was accomplished via the inclusion of aluminum chloride (AlCl3), which was dissolved in small quantities into the synthesis solution to be reduced alongside the SiCl4 precursor. The introduction of Al into the chemically-reduced Si NP precipitate was not found to adversely affect the formation of the Si NPs, but was found to influence aspects such as particle stability and dispersibility throughout various stages of the procedure. Analytical techniques including transmission electron microscopy (TEM), FTIR spectroscopy, and ICP-optical emission spectroscopy were used to comprehensively characterize the product NPs. These methods confirm both the presence of Al and surface-bound 1-octanol in the newly formed Si NPs.

  10. 2-dimensional implicit hydrodynamics on adaptive grids

    NASA Astrophysics Data System (ADS)

    Stökl, A.; Dorfi, E. A.

    2007-12-01

    We present a numerical scheme for two-dimensional hydrodynamics computations using a 2D adaptive grid together with an implicit discretization. The combination of these techniques has offered favorable numerical properties applicable to a variety of one-dimensional astrophysical problems which motivated us to generalize this approach for two-dimensional applications. Due to the different topological nature of 2D grids compared to 1D problems, grid adaptivity has to avoid severe grid distortions which necessitates additional smoothing parameters to be included into the formulation of a 2D adaptive grid. The concept of adaptivity is described in detail and several test computations demonstrate the effectivity of smoothing. The coupled solution of this grid equation together with the equations of hydrodynamics is illustrated by computation of a 2D shock tube problem.

  11. Grid adaptation using chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1994-01-01

    The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well-resolved.
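
    The trilinear interpolation used to transfer data across overset-grid interfaces blends the donor cell's eight corner values with weights given by the receptor point's local coordinates, as in the sketch below (grid layout and names are illustrative).

        # Sketch of trilinear interpolation inside a donor cell of an overset grid.
        import numpy as np

        def trilinear(corner_values, xi, eta, zeta):
            """corner_values[i, j, k] are the donor-cell corner values (i, j, k in {0, 1});
            (xi, eta, zeta) in [0, 1]^3 is the receptor point's local position."""
            c = corner_values
            c00 = c[0, 0, 0] * (1 - xi) + c[1, 0, 0] * xi
            c01 = c[0, 0, 1] * (1 - xi) + c[1, 0, 1] * xi
            c10 = c[0, 1, 0] * (1 - xi) + c[1, 1, 0] * xi
            c11 = c[0, 1, 1] * (1 - xi) + c[1, 1, 1] * xi
            c0 = c00 * (1 - eta) + c10 * eta
            c1 = c01 * (1 - eta) + c11 * eta
            return c0 * (1 - zeta) + c1 * zeta

        if __name__ == "__main__":
            # A linear field f(x, y, z) = 2x + 3y + 4z is reproduced exactly.
            corners = np.empty((2, 2, 2))
            for i in range(2):
                for j in range(2):
                    for k in range(2):
                        corners[i, j, k] = 2 * i + 3 * j + 4 * k
            print(trilinear(corners, 0.25, 0.5, 0.75))   # 2*0.25 + 3*0.5 + 4*0.75 = 5.0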

  12. 76 FR 78015 - Revised Analysis and Mapping Procedures for Non-Accredited Levees

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-15

    ...] Revised Analysis and Mapping Procedures for Non-Accredited Levees AGENCY: Federal Emergency Management... comments on the proposed solution for Revised Analysis and Mapping Procedures for Non-Accredited Levees. This document proposes a revised procedure for the analysis and mapping of non-accredited levees on...

  13. A direct method of solution for the Fokas-Lenells derivative nonlinear Schrödinger equation: I. Bright soliton solutions

    NASA Astrophysics Data System (ADS)

    Matsuno, Yoshimasa

    2012-06-01

    We develop a direct method of solution for finding the bright N-soliton solution of the Fokas-Lenells derivative nonlinear Schrödinger equation. The construction of the solution is performed by means of a purely algebraic procedure using an elementary theory of determinants and does not rely on the inverse scattering transform method. We present two different expressions of the solution both of which are expressed as a ratio of determinants. We then investigate the properties of the solutions and find several new features. Specifically, we derive the formula for the phase shift caused by the collisions of bright solitons.

  14. Cross-cultural adaptation of the VISA-P questionnaire for Greek-speaking patients with patellar tendinopathy.

    PubMed

    Korakakis, Vasileios; Patsiaouras, Asterios; Malliaropoulos, Nikos

    2014-12-01

    To cross-culturally adapt the VISA-P questionnaire for Greek-speaking patients and evaluate its psychometric properties. The VISA-P was developed in the English language to evaluate patients with patellar tendinopathy. The validity and use of self-administered questionnaires in different language and cultural populations require a specific procedure in order to maintain their content validity. The VISA-P questionnaire was translated and cross-culturally adapted according to specific guidelines. The validity and reliability were tested in 61 healthy recreational athletes, 64 athletes at risk from different sports, 32 patellar tendinopathy patients and 30 patients with other knee injuries. Participants completed the questionnaire at baseline and after 15-17 days. The questionnaire's face and content validity were judged as good by the expert committee and the participants. Concurrent validity was almost perfect (ρ=-0.839, p<0.001). Also, factorial validity testing revealed a two-factor solution, which explained 85.6% of the total variance. A one-factor solution explained 80.8% of the variance when the other knee injury group was excluded. Known group validity was demonstrated by significant differences between patients compared with the asymptomatic groups (p<0.001). The VISA-P-GR exhibited very good test-retest reliability (ICC=0.818, p<0.001; 95% CI 0.758 to 0.864) and internal consistency since Cronbach's α analysis ranged from α=0.785 to 0.784 following a 15-17 day interval. The translated VISA-P-GR is a valid and reliable questionnaire and its psychometric properties are comparable with the original and adapted versions. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  15. Actuator placement in prestressed adaptive trusses for vibration control

    NASA Technical Reports Server (NTRS)

    Jalihal, P.; Utku, Senol; Wada, Ben K.

    1993-01-01

    This paper describes the optimal location selection of actuators for vibration control in prestressed adaptive trusses. Since prestressed adaptive trusses are statically indeterminate, the actuators to be used for vibration control purposes must work against (1) existing static axial prestressing forces, (2) static axial forces caused by the actuation, and (3) dynamic axial forces caused by the motion of the mass. In statically determinate adaptive trusses, (1) and (2) are nonexistent. The actuator placement problem in statically indeterminate trusses is therefore governed by the actuation energy and the actuator strength requirements. Assuming output feedback type control of selected vibration modes in autonomous systems, a procedure is given for the placement of vibration controlling actuators in prestressed adaptive trusses.

  16. Solution of plane cascade flow using improved surface singularity methods

    NASA Technical Reports Server (NTRS)

    Mcfarland, E. R.

    1981-01-01

    A solution method has been developed for calculating compressible inviscid flow through a linear cascade of arbitrary blade shapes. The method uses advanced surface singularity formulations which were adapted from those found in current external flow analyses. The resulting solution technique provides a fast flexible calculation for flows through turbomachinery blade rows. The solution method and some examples of the method's capabilities are presented.

  17. Multi-model predictive control based on LMI: from the adaptation of the state-space model to the analytic description of the control law

    NASA Astrophysics Data System (ADS)

    Falugi, P.; Olaru, S.; Dumur, D.

    2010-08-01

    This article proposes an explicit robust predictive control solution based on linear matrix inequalities (LMIs). The considered predictive control strategy uses different local descriptions of the system dynamics and uncertainties and thus allows the handling of less conservative input constraints. The computed control law guarantees constraint satisfaction and asymptotic stability. The technique is effective for a class of nonlinear systems embedded into polytopic models. A detailed discussion of the procedures which adapt the partition of the state space is presented. For the practical implementation, the construction of suitable (explicit) descriptions of the control law is described through concrete algorithms.
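
    As a minimal illustration of the LMI machinery underlying such designs, the sketch below poses a common quadratic Lyapunov (discrete-time quadratic stability) feasibility problem for a polytopic set of vertex models using CVXPY. It is a generic stability LMI, not the paper's control law; the matrices are assumptions, and an SDP-capable solver is assumed to be installed.

        # Sketch of a generic LMI feasibility problem: find P > 0 with
        # A_i' P A_i - P < 0 for every vertex A_i of a polytopic model.
        import numpy as np
        import cvxpy as cp

        def common_lyapunov(vertices, margin=1e-6):
            n = vertices[0].shape[0]
            P = cp.Variable((n, n), symmetric=True)
            constraints = [P >> margin * np.eye(n)]
            for A in vertices:
                constraints.append(A.T @ P @ A - P << -margin * np.eye(n))
            prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
            prob.solve()
            return P.value if prob.status in ("optimal", "optimal_inaccurate") else None

        if __name__ == "__main__":
            A1 = np.array([[0.8, 0.2], [0.0, 0.7]])
            A2 = np.array([[0.7, -0.1], [0.1, 0.8]])
            P = common_lyapunov([A1, A2])
            print("feasible" if P is not None else "infeasible", P)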

  18. Deconvolution of post-adaptive optics images of faint circumstellar environments by means of the inexact Bregman procedure

    NASA Astrophysics Data System (ADS)

    Benfenati, A.; La Camera, A.; Carbillet, M.

    2016-02-01

    Aims: High-dynamic range images of astrophysical objects present some difficulties in their restoration because of the presence of very bright point-wise sources surrounded by faint and smooth structures. We propose a method that enables the restoration of this kind of image by taking such sources into account and, at the same time, improving the contrast enhancement in the final image. Moreover, the proposed approach can help to detect the position of the bright sources. Methods: The classical variational scheme in the presence of Poisson noise aims to find the minimum of a functional composed of the generalized Kullback-Leibler function and a regularization functional: the latter is employed to preserve some characteristic of the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. The proposed scheme allows us to control the level of inexactness arising in the computed solution and permits us to employ an overestimate of the regularization parameter (which balances the trade-off between the Kullback-Leibler function and the Bregman distance). This aspect is fundamental, since the estimation of this kind of parameter is very difficult in the presence of Poisson noise. Results: The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources' position is exactly known, this scheme provides us with very satisfactory results. In case of inexact knowledge of the sources' position, it can in addition give some useful information on the true positions. Finally, the inexact Bregman scheme can also be used when information about the binary star's position concerns a connected region instead of isolated pixels.
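
    For context, the classical Richardson-Lucy multiplicative update, which minimizes the same generalized Kullback-Leibler data term but without the Bregman-regularized treatment, is sketched below as a simple baseline. It is not the inexact Bregman procedure of the paper, and the point-source test image is an assumption.

        # Sketch of Richardson-Lucy deconvolution for Poisson-noise imaging.
        import numpy as np

        def convolve2d_same(img, kernel):
            """Tiny 'same'-size 2D correlation with zero padding (no SciPy needed)."""
            kh, kw = kernel.shape
            ph, pw = kh // 2, kw // 2
            padded = np.pad(img, ((ph, ph), (pw, pw)))
            out = np.zeros_like(img)
            for i in range(kh):
                for j in range(kw):
                    out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
            return out

        def richardson_lucy(data, psf, iters=50, eps=1e-12):
            """data: observed image, psf: point spread function (sums to 1)."""
            psf_flip = psf[::-1, ::-1]                      # adjoint of the blur operator
            estimate = np.full_like(data, data.mean())
            for _ in range(iters):
                blurred = convolve2d_same(estimate, psf)
                ratio = data / (blurred + eps)
                estimate = estimate * convolve2d_same(ratio, psf_flip)
            return estimate

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            truth = np.zeros((32, 32))
            truth[16, 16] = 200.0                            # bright point source
            truth += 2.0                                     # faint smooth background
            k1 = np.array([0.25, 0.5, 0.25])
            psf = np.outer(k1, k1)                           # separable 3x3 blur, sums to 1
            noisy = rng.poisson(convolve2d_same(truth, psf)).astype(float)
            restored = richardson_lucy(noisy, psf, iters=100)
            print("peak location:", np.unravel_index(np.argmax(restored), restored.shape))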

  19. Modelling atmospheric flows with adaptive moving meshes

    NASA Astrophysics Data System (ADS)

    Kühnlein, Christian; Smolarkiewicz, Piotr K.; Dörnbrack, Andreas

    2012-04-01

    An anelastic atmospheric flow solver has been developed that combines semi-implicit non-oscillatory forward-in-time numerics with a solution-adaptive mesh capability. A key feature of the solver is the unification of a mesh adaptation apparatus, based on moving mesh partial differential equations (PDEs), with the rigorous formulation of the governing anelastic PDEs in generalised time-dependent curvilinear coordinates. The solver development includes an enhancement of the flux-form multidimensional positive definite advection transport algorithm (MPDATA) - employed in the integration of the underlying anelastic PDEs - that ensures full compatibility with mass continuity under moving meshes. In addition, to satisfy the geometric conservation law (GCL) tensor identity under general moving meshes, a diagnostic approach is proposed based on the treatment of the GCL as an elliptic problem. The benefits of the solution-adaptive moving mesh technique for the simulation of multiscale atmospheric flows are demonstrated. The developed solver is verified for two idealised flow problems with distinct levels of complexity: passive scalar advection in a prescribed deformational flow, and the life cycle of a large-scale atmospheric baroclinic wave instability showing fine-scale phenomena of fronts and internal gravity waves.

  20. Thermal Adaptation Methods of Urban Plaza Users in Asia’s Hot-Humid Regions: A Taiwan Case Study

    PubMed Central

    Wu, Chen-Fa; Hsieh, Yen-Fen; Ou, Sheng-Jung

    2015-01-01

    Thermal adaptation studies provide researchers great insight to help understand how people respond to thermal discomfort. This research aims to assess outdoor urban plaza conditions in hot and humid regions of Asia by conducting an evaluation of thermal adaptation. We also propose that questionnaire items are appropriate for determining thermal adaptation strategies adopted by urban plaza users. A literature review was conducted and first hand data collected by field observations and interviews used to collect information on thermal adaptation strategies. Item analysis—Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA)—were applied to refine the questionnaire items and determine the reliability of the questionnaire evaluation procedure. The reliability and validity of items and constructing process were also analyzed. Then, researchers facilitated an evaluation procedure for assessing the thermal adaptation strategies of urban plaza users in hot and humid regions of Asia and formulated a questionnaire survey that was distributed in Taichung’s Municipal Plaza in Taiwan. Results showed that most users responded with behavioral adaptation when experiencing thermal discomfort. However, if the thermal discomfort could not be alleviated, they then adopted psychological strategies. In conclusion, the evaluation procedure for assessing thermal adaptation strategies and the questionnaire developed in this study can be applied to future research on thermal adaptation strategies adopted by urban plaza users in hot and humid regions of Asia. PMID:26516881

  1. Country, climate change adaptation and colonisation: insights from an Indigenous adaptation planning process, Australia.

    PubMed

    Nursey-Bray, Melissa; Palmer, Robert

    2018-03-01

    Indigenous peoples are going to be disproportionately affected by climate change. Developing tailored, place-based, and culturally appropriate solutions will be necessary. Yet finding cultural and institutional 'fit' within and between competing values-based climate and environmental management governance regimes remains an ongoing challenge. This paper reports on a collaborative research project with the Arabana people of central Australia that resulted in the production of the first Indigenous community-based climate change adaptation strategy in Australia. We aimed to understand what conditions are needed to support Indigenous-driven adaptation initiatives, whether there are any cultural differences that need accounting for, and how, once developed, such initiatives can be integrated into existing governance arrangements. Our analysis found that climate change adaptation is based on the centrality of the connection to 'country' (traditional land), that it needs to be aligned with cultural values, and that it must focus on the building of adaptive capacity. We find that the development of climate change adaptation initiatives cannot be divorced from the historical context of how the Arabana experienced and collectively remember colonisation. We argue that, in developing culturally responsive climate governance for and with Indigenous peoples, the history of colonisation and the ongoing dominance of entrenched Western governance regimes need to be acknowledged and redressed in contemporary environmental/climate management.

  2. Electronic excitation spectra of molecules in solution calculated using the symmetry-adapted cluster-configuration interaction method in the polarizable continuum model with perturbative approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fukuda, Ryoichi, E-mail: fukuda@ims.ac.jp; Ehara, Masahiro; Elements Strategy Initiative for Catalysts and Batteries

    A perturbative approximation of the state-specific polarizable continuum model (PCM) symmetry-adapted cluster-configuration interaction (SAC-CI) method is proposed for efficient calculations of the electronic excitations and absorption spectra of molecules in solution. This first-order PCM SAC-CI method considers the solvent effects on the energies of excited states up to first order using the zeroth-order wavefunctions. The method avoids the costly iterative procedure of self-consistent reaction field calculations. The first-order PCM SAC-CI calculations reproduce well the results obtained by the iterative method for various types of excitations of molecules in polar and nonpolar solvents. The first-order contribution is significant for the excitation energies. The results obtained by the zeroth-order PCM SAC-CI, which considers a fixed ground-state reaction field for the excited-state calculations, deviate from the iterative results by about 0.1 eV, and the zeroth-order PCM SAC-CI cannot predict even the direction of solvent shifts in n-hexane in many cases. The first-order PCM SAC-CI is applied to the solvatochromism of (2,2′-bipyridine)tetracarbonyltungsten [W(CO)4(bpy), bpy = 2,2′-bipyridine] and bis(pentacarbonyltungsten)pyrazine [(OC)5W(pyz)W(CO)5, pyz = pyrazine]. The SAC-CI calculations reveal the detailed character of the excited states and the mechanisms of the solvent shifts. The energies of the metal-to-ligand charge transfer states are significantly sensitive to the solvent. The first-order PCM SAC-CI reproduces well the observed absorption spectra of the tungsten carbonyl complexes in several solvents.

  3. Star adaptation for two algorithms used on serial computers

    NASA Technical Reports Server (NTRS)

    Howser, L. M.; Lambiotte, J. J., Jr.

    1974-01-01

    Two representative algorithms used on a serial computer and presently executed on the Control Data Corporation 6000 computer were adapted to execute efficiently on the Control Data STAR-100 computer. Gaussian elimination for the solution of simultaneous linear equations and the Gauss-Legendre quadrature formula for the approximation of an integral are the two algorithms discussed. A description is given of how the programs were adapted for STAR and why these adaptations were necessary to obtain an efficient STAR program. Some points to consider when adapting an algorithm for STAR are discussed. Program listings of the 6000 version coded in 6000 FORTRAN, the adapted STAR version coded in 6000 FORTRAN, and the STAR version coded in STAR FORTRAN are presented in the appendices.
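
    For orientation, the second of the two algorithms (Gauss-Legendre quadrature) is shown below as a short vectorised Python sketch; this is a modern stand-in for illustration, not the 6000 or STAR FORTRAN listings of the report, and the integrand is an arbitrary example.

      import numpy as np

      def gauss_legendre(f, a, b, n):
          # n-point Gauss-Legendre approximation of the integral of f over [a, b].
          nodes, weights = np.polynomial.legendre.leggauss(n)
          x = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # map nodes from [-1, 1] to [a, b]
          return 0.5 * (b - a) * np.dot(weights, f(x))

      # Example: integral of exp(-x^2) over [0, 1].
      print(gauss_legendre(lambda x: np.exp(-x ** 2), 0.0, 1.0, 8))   # about 0.746824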

  4. Implementing Culture Change in Nursing Homes: An Adaptive Leadership Framework.

    PubMed

    Corazzini, Kirsten; Twersky, Jack; White, Heidi K; Buhr, Gwendolen T; McConnell, Eleanor S; Weiner, Madeline; Colón-Emeric, Cathleen S

    2015-08-01

    To describe key adaptive challenges and leadership behaviors to implement culture change for person-directed care. The study design was a qualitative, observational study of nursing home staff perceptions of the implementation of culture change in each of 3 nursing homes. We conducted 7 focus groups of licensed and unlicensed nursing staff, medical care providers, and administrators. Questions explored perceptions of facilitators and barriers to culture change. Using a template organizing style of analysis with immersion/crystallization, themes of barriers and facilitators were coded for adaptive challenges and leadership. Six key themes emerged, including relationships, standards and expectations, motivation and vision, workload, respect of personhood, and physical environment. Within each theme, participants identified barriers that were adaptive challenges and facilitators that were examples of adaptive leadership. Commonly identified challenges were how to provide person-directed care in the context of extant rules or policies or how to develop staff motivated to provide person-directed care. Implementing culture change requires the recognition of adaptive challenges for which there are no technical solutions, but which require reframing of norms and expectations, and the development of novel and flexible solutions. Managers and administrators seeking to implement person-directed care will need to consider the role of adaptive leadership to address these adaptive challenges. © The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. A Comparison of Error-Correction Procedures on Skill Acquisition during Discrete-Trial Instruction

    ERIC Educational Resources Information Center

    Carroll, Regina A.; Joachim, Brad T.; St. Peter, Claire C.; Robinson, Nicole

    2015-01-01

    Previous research supports the use of a variety of error-correction procedures to facilitate skill acquisition during discrete-trial instruction. We used an adapted alternating treatments design to compare the effects of 4 commonly used error-correction procedures on skill acquisition for 2 children with attention deficit hyperactivity disorder…

  6. Studying the neural bases of prism adaptation using fMRI: A technical and design challenge.

    PubMed

    Bultitude, Janet H; Farnè, Alessandro; Salemme, Romeo; Ibarrola, Danielle; Urquizar, Christian; O'Shea, Jacinta; Luauté, Jacques

    2017-12-01

    Prism adaptation induces rapid recalibration of visuomotor coordination. The neural mechanisms of prism adaptation have come under scrutiny since the observations that the technique can alleviate hemispatial neglect following stroke, and can alter spatial cognition in healthy controls. Relative to non-imaging behavioral studies, fMRI investigations of prism adaptation face several challenges arising from the confined physical environment of the scanner and the supine position of the participants. Any researcher who wishes to administer prism adaptation in an fMRI environment must adjust their procedures enough to enable the experiment to be performed, but not so much that the behavioral task departs too much from true prism adaptation. Furthermore, the specific temporal dynamics of behavioral components of prism adaptation present additional challenges for measuring their neural correlates. We developed a system for measuring the key features of prism adaptation behavior within an fMRI environment. To validate our configuration, we present behavioral (pointing) and head movement data from 11 right-hemisphere lesioned patients and 17 older controls who underwent sham and real prism adaptation in an MRI scanner. Most participants could adapt to prismatic displacement with minimal head movements, and the procedure was well tolerated. We propose recommendations for fMRI studies of prism adaptation based on the design-specific constraints and our results.

  7. Traffic-Adaptive, Flow-Specific Medium Access for Wireless Networks

    DTIC Science & Technology

    2009-09-01

    hybrid, contention and non-contention schemes are shown to be special cases. This work also compares the energy efficiency of centralized and distributed...solutions and proposes an energy efficient version of traffic-adaptive CWS-MAC that includes an adaptive sleep cycle coordinated through the use of...preamble sampling. A preamble sampling probability parameter is introduced to manage the trade-off between energy efficiency and throughput and delay

  8. Cultural adaptation in translational research: field experiences.

    PubMed

    Dévieux, Jessy G; Malow, Robert M; Rosenberg, Rhonda; Jean-Gilles, Michèle; Samuels, Deanne; Ergon-Pérez, Emma; Jacobs, Robin

    2005-06-01

    The increase in the incidence of HIV/AIDS among minorities in the United States and in certain developing nations has prompted new intervention priorities, stressing the adaptation of efficacious interventions for diverse and marginalized groups. The experiences of Florida International University's AIDS Prevention Program in translating HIV primary and secondary prevention interventions among these multicultural populations provide insight into the process of cultural adaptations and address the new scientific emphasis on ecological validity. An iterative process involving forward and backward translation, a cultural linguistic committee, focus group discussions, documentation of project procedures, and consultations with other researchers in the field was used to modify interventions. This article presents strategies used to ensure fidelity in implementing the efficacious core components of evidence-based interventions for reducing HIV transmission and drug use behaviors and the challenges posed by making cultural adaptation for participants with low literacy. This experience demonstrates the importance of integrating culturally relevant material in the translation process with intense focus on language and nuance. The process must ensure that the level of intervention is appropriate for the educational level of participants. Furthermore, the rights of participants must be protected during consenting procedures by instituting policies that recognize the socioeconomic, educational, and systemic pressures to participate in research.

  9. Code Development of Three-Dimensional General Relativistic Hydrodynamics with AMR (Adaptive-Mesh Refinement) and Results from Special and General Relativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dönmez, Orhan

    2004-09-01

    In this paper, the general procedure to solve the general relativistic hydrodynamic (GRH) equations with adaptive-mesh refinement (AMR) is presented. To achieve this, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the GRH equations are obtained by high-resolution shock-capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two and three dimensions. Results from uniform and AMR grids are compared. It is found that the adaptive grid performs better as the resolution is increased. Second, the GRH equations are tested using two different test problems, geodesic flow and circular motion of a particle. To do this, the flux part of the GRH equations is coupled with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment that gives second-order accurate solutions in space and time.
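
    The Strang splitting used to couple the flux and source parts can be sketched generically as follows; the operators here are toy placeholders chosen so the result is easy to check, not the GRH flux and source terms.

      import numpy as np

      def strang_step(u, dt, flux_step, source_step):
          # One Strang-split step: half source, full flux, half source
          # (second-order accurate in time for smooth problems).
          u = source_step(u, 0.5 * dt)
          u = flux_step(u, dt)
          return source_step(u, 0.5 * dt)

      decay = lambda u, dt: u * np.exp(-dt)   # exact integrator of du/dt = -u (source)
      identity_flux = lambda u, dt: u         # placeholder for the hyperbolic update

      u = np.ones(10)
      for _ in range(100):
          u = strang_step(u, 0.01, identity_flux, decay)
      print(u[0])   # close to exp(-1) after integrating to t = 1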

  10. A Bayesian Hybrid Adaptive Randomisation Design for Clinical Trials with Survival Outcomes.

    PubMed

    Moatti, M; Chevret, S; Zohar, S; Rosenberger, W F

    2016-01-01

    Response-adaptive randomisation designs have been proposed to improve the efficiency of phase III randomised clinical trials and improve the outcomes of the clinical trial population. In the setting of failure-time outcomes, Zhang and Rosenberger (2007) developed a response-adaptive randomisation approach that targets an optimal allocation, based on a fixed sample size. The aim of this research is to propose a response-adaptive randomisation procedure for survival trials with an interim monitoring plan, based on the following optimality criterion: for fixed variance of the estimated log hazard ratio, what allocation minimizes the expected hazard of failure? We demonstrate the utility of the design by redesigning a clinical trial on multiple myeloma. To handle continuous monitoring of data, we propose a Bayesian response-adaptive randomisation procedure, where the log hazard ratio is the effect measure of interest. Combining the prior with the normal likelihood, the mean posterior estimate of the log hazard ratio allows derivation of the optimal target allocation. We perform a simulation study to assess and compare the performance of this proposed Bayesian hybrid adaptive design to those of fixed, sequential or adaptive - either frequentist or fully Bayesian - designs. Non-informative normal priors of the log hazard ratio were used, as well as mixtures of enthusiastic and skeptical priors. Stopping rules based on the posterior distribution of the log hazard ratio were computed. The method is then illustrated by redesigning a phase III randomised clinical trial of chemotherapy in patients with multiple myeloma, with a mixture of normal priors elicited from experts. As expected, there was a reduction in the proportion of observed deaths in the adaptive vs. non-adaptive designs; this reduction was maximized using a Bayes mixture prior, with no clear-cut improvement by using a fully Bayesian procedure. The use of stopping rules allows a slight decrease in the observed
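
    The conjugate normal update at the heart of such a procedure (a normal prior on the log hazard ratio combined with an approximately normal likelihood) is simple to write down; the sketch below uses illustrative prior values, interim data, and an assumed allocation rule, not the optimal allocation derived in the paper.

      import numpy as np

      def posterior_log_hr(prior_mean, prior_var, est_log_hr, est_var):
          # Conjugate normal-normal update of the log hazard ratio.
          post_var = 1.0 / (1.0 / prior_var + 1.0 / est_var)
          post_mean = post_var * (prior_mean / prior_var + est_log_hr / est_var)
          return post_mean, post_var

      def allocation_probability(post_mean):
          # Illustrative rule: allocate to the experimental arm with probability
          # sqrt(HR) / (1 + sqrt(HR)), where HR = exp(posterior mean log hazard ratio).
          hr = np.exp(post_mean)
          return np.sqrt(hr) / (1.0 + np.sqrt(hr))

      # Skeptical prior centred on no effect; interim estimate favouring the new arm.
      mean, var = posterior_log_hr(prior_mean=0.0, prior_var=0.5,
                                   est_log_hr=-0.4, est_var=0.1)
      print(mean, var, allocation_probability(mean))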

  11. Adaptive grid generation in a patient-specific cerebral aneurysm

    NASA Astrophysics Data System (ADS)

    Hodis, Simona; Kallmes, David F.; Dragomir-Daescu, Dan

    2013-11-01

    Adapting grid density to flow behavior provides the advantage of increasing solution accuracy while decreasing the number of grid elements in the simulation domain, thereby reducing the computational time. One method for grid adaptation requires successive refinement of grid density based on observed solution behavior until the numerical errors between successive grids are negligible. However, such an approach is time-consuming and often neglected by researchers. We present a technique to calculate the grid size distribution of an adaptive grid for computational fluid dynamics (CFD) simulations in a complex cerebral aneurysm geometry, based on the kinematic curvature and torsion calculated from the velocity field. The relationship between the kinematic characteristics of the flow and the element size of the adaptive grid leads to a mathematical equation for the grid size in different regions of the flow. The adaptive grid density is obtained such that it captures the more complex details of the flow with a locally smaller grid size, while less complex flow characteristics are computed on a locally larger grid size. The current study shows that the kinematic curvature and torsion calculated from the velocity field in a cerebral aneurysm can be used to find the locations of complex flow where the computational grid needs to be refined in order to obtain an accurate solution. We found that the complexity of the flow can be adequately described by the velocity, the vorticity, and the angle between the two vectors. For example, inside the aneurysm bleb, at the bifurcation, and at the major arterial turns the element size in the lumen needs to be less than 10% of the artery radius, while in the boundary layer the element size should be smaller than 1% of the artery radius, for accurate results within a 0.5% relative approximation error. This technique of quantifying flow complexity and adaptive remeshing has the potential to improve results accuracy and reduce
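
    The kinematic quantities mentioned above can be evaluated pointwise from the velocity and acceleration vectors; the sketch below is a generic Python illustration, and the mapping from curvature to target element size is an assumed placeholder rather than the relation derived in the study.

      import numpy as np

      def kinematic_curvature(v, a, eps=1e-12):
          # Pathline curvature kappa = |v x a| / |v|^3, evaluated per sample point.
          cross = np.cross(v, a)
          return np.linalg.norm(cross, axis=-1) / (np.linalg.norm(v, axis=-1) ** 3 + eps)

      def target_size(kappa, h_max, h_min):
          # Placeholder sizing rule: smaller elements where curvature is larger.
          return np.clip(h_max / (1.0 + kappa), h_min, h_max)

      v = np.array([[1.0, 0.0, 0.0], [0.5, 0.5, 0.0]])   # sample velocity vectors
      a = np.array([[0.0, 2.0, 0.0], [0.0, 0.0, 1.0]])   # sample acceleration vectors
      kappa = kinematic_curvature(v, a)
      print(kappa, target_size(kappa, h_max=0.1, h_min=0.001))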

  12. An adaptive time-stepping strategy for solving the phase field crystal model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk

    2013-09-15

    In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can not only resolve the steady-state solution but also capture the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly reduced for long-time simulations.
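
    One common form of an energy-based step-size rule in related adaptive time-stepping work is shown below; the formula and constants are assumptions for illustration and may differ from those used in the paper.

      import numpy as np

      def adaptive_dt(dE_dt, dt_min=1e-3, dt_max=1.0, alpha=1e3):
          # dt = max(dt_min, dt_max / sqrt(1 + alpha * |dE/dt|^2)):
          # small steps while the energy changes rapidly, large steps near steady state.
          return max(dt_min, dt_max / np.sqrt(1.0 + alpha * dE_dt ** 2))

      print(adaptive_dt(dE_dt=1.0))    # fast dynamics: a small step is selected
      print(adaptive_dt(dE_dt=1e-6))   # near steady state: step approaches dt_max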

  13. Adapting construction staking to modern technology : final report.

    DOT National Transportation Integrated Search

    2017-08-01

    This report summarizes the tasks and findings of the ICT Project R27-163, Adapting Construction Staking to Modern Technology, which aims to develop written procedures for the use of modern technologies (such as GPS and civil information modeling) in ...

  14. Clean Energy Solutions Center: Assisting Countries with Clean Energy Policy

    Science.gov Websites

    Clean Energy Solutions Center: Assisting Countries with Clean Energy Policy. NREL helps developing countries with clean energy policy. In adapting to climate change impacts, developing countries are looking for clean energy solutions; the Clean Energy Solutions Center supports clean energy scale-up in the developing world, where the key barriers are knowledge, capacity, and cost.

  15. A self-adaptive-grid method with application to airfoil flow

    NASA Technical Reports Server (NTRS)

    Nakahashi, K.; Deiwert, G. S.

    1985-01-01

    A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
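
    A one-dimensional caricature of the spring analogy is given below; it is a simplified illustration under assumed weights (no orthogonality or smoothness controls), not the method of the paper. Each interval acts as a spring whose stiffness follows a weight function, so points drift toward high-weight regions.

      import numpy as np

      def spring_relax(x, weight, iters=200, relax=0.5):
          # Jacobi-style relaxation of a 1D spring system: each interior point settles
          # at the stiffness-weighted average of its neighbours, so spacing shrinks
          # where the weight (spring stiffness) is large.  Endpoints stay fixed.
          for _ in range(iters):
              k = weight(0.5 * (x[:-1] + x[1:]))        # spring stiffness per interval
              x_new = (k[:-1] * x[:-2] + k[1:] * x[2:]) / (k[:-1] + k[1:])
              x[1:-1] += relax * (x_new - x[1:-1])
          return x

      w = lambda s: 1.0 + 20.0 * np.exp(-200.0 * (s - 0.7) ** 2)   # peak near x = 0.7
      x = spring_relax(np.linspace(0.0, 1.0, 31), w)
      print(round(np.diff(x).min(), 4), round(np.diff(x).max(), 4))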

  16. Grid adaption using Chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradients to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communication through the boundary interfaces between the separated grids is carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are presented. With the present method, the salient features are well resolved.
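
    The inter-grid communication step relies on tri-linear interpolation; the routine below is a generic stand-alone sketch of that operation on a unit cell, not code from the Chimera scheme itself.

      import numpy as np

      def trilinear(corner_vals, xi, eta, zeta):
          # Interpolate inside a unit cell from its 8 corner values, indexed
          # corner_vals[i, j, k] with i, j, k in {0, 1}; (xi, eta, zeta) in [0, 1]^3.
          c = np.asarray(corner_vals, dtype=float)
          c00 = c[0, 0, 0] * (1 - xi) + c[1, 0, 0] * xi
          c10 = c[0, 1, 0] * (1 - xi) + c[1, 1, 0] * xi
          c01 = c[0, 0, 1] * (1 - xi) + c[1, 0, 1] * xi
          c11 = c[0, 1, 1] * (1 - xi) + c[1, 1, 1] * xi
          c0 = c00 * (1 - eta) + c10 * eta
          c1 = c01 * (1 - eta) + c11 * eta
          return c0 * (1 - zeta) + c1 * zeta

      # Corner values of f(x, y, z) = x + 2y + 3z: tri-linear interpolation is exact.
      f = lambda x, y, z: x + 2 * y + 3 * z
      corners = np.fromfunction(f, (2, 2, 2))
      print(trilinear(corners, 0.25, 0.5, 0.75))   # equals f(0.25, 0.5, 0.75) = 3.5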

  18. Electronic excitation of molecules in solution calculated using the symmetry-adapted cluster–configuration interaction method in the polarizable continuum model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fukuda, Ryoichi, E-mail: fukuda@ims.ac.jp; Ehara, Masahiro; Elements Strategy Initiative for Catalysts and Batteries

    2015-12-31

    The effects of the solvent environment are specific to the electronic states; therefore, a computational scheme for solvent effects that is consistent with the electronic states is necessary to discuss the electronic excitation of molecules in solution. The PCM (polarizable continuum model) SAC (symmetry-adapted cluster) and SAC-CI (configuration interaction) methods are developed for such purposes. The PCM SAC-CI adopts the state-specific (SS) solvation scheme, where solvent effects are considered self-consistently for every ground and excited state. For efficient computations of many excited states, we develop a perturbative approximation for the PCM SAC-CI method, called the corrected linear response (cLR) scheme. Our test calculations show that the cLR PCM SAC-CI is a very good approximation of the SS PCM SAC-CI method for polar and nonpolar solvents.

  19. Adapting Aquatic Circuit Training for Special Populations.

    ERIC Educational Resources Information Center

    Thome, Kathleen

    1980-01-01

    The author discusses how land activities can be adapted to water so that individuals with handicapping conditions can participate in circuit training activities. An initial section lists such organizational procedures as providing vocal and/or visual cues for activities, having assistants accompany the performers throughout the circuit, and…

  20. Ranking procedure for partial discriminant analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, R.J.; Johnson, M.E.

    1981-09-01

    A rank procedure developed by Broffitt, Randles, and Hogg (1976) is modified to control the conditional probability of misclassification given that classification has been attempted. This modification leads to a useful solution to the two-population partial discriminant analysis problem for even moderately sized training sets.

  1. An Eulerian/Lagrangian coupling procedure for three-dimensional vortical flows

    NASA Technical Reports Server (NTRS)

    Felici, Helene M.; Drela, Mark

    1993-01-01

    A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of 3D vortical flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method, added to the Eulerian time-marching procedure, provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state vector along the particle trajectories. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers accurately describe the convection properties and enhance the vorticity- and entropy-capturing capabilities of the Eulerian solver. The Eulerian/Lagrangian coupling strategies are discussed, and the combined scheme is tested on a constant-stagnation-pressure flow in a 90 deg bend and on a swirling pipe flow. As the numerical diffusion is reduced by the Lagrangian correction, a vorticity gradient augmentation is identified as a basic problem of this inviscid calculation.

  2. Flight control with adaptive critic neural network

    NASA Astrophysics Data System (ADS)

    Han, Dongchen

    2001-10-01

    In this dissertation, the adaptive critic neural network technique is applied to solve complex nonlinear system control problems. Based on dynamic programming, the adaptive critic neural network can embed the optimal solution into a neural network. Though trained off-line, the neural network forms a real-time feedback controller. Because of its general interpolation properties, the neurocontroller has inherent robustness. The problems solved here are an agile missile control problem for the U.S. Air Force and a midcourse guidance law for the U.S. Navy. In the first three papers, the neural network was used to control an air-to-air agile missile to implement a minimum-time heading reversal in a vertical plane under the following conditions: a system without constraints, a system with a control inequality constraint, and a system with a state inequality constraint. While the agile missile is a one-dimensional problem, the midcourse guidance law is the first test bed for a multi-dimensional problem. In the fourth paper, the neurocontroller is synthesized to guide a surface-to-air missile to a fixed final condition, and to a flexible final condition from a variable initial condition. In order to evaluate the adaptive critic neural network approach, the numerical solutions for these cases are also obtained by solving the two-point boundary-value problem with a shooting method. All of the results showed that the adaptive critic neural network can solve complex nonlinear system control problems.

  3. 4D laser camera for accurate patient positioning, collision avoidance, image fusion and adaptive approaches during diagnostic and therapeutic procedures.

    PubMed

    Brahme, Anders; Nyman, Peter; Skatt, Björn

    2008-05-01

    A four-dimensional (4D) laser camera (LC) has been developed for accurate patient imaging in diagnostic and therapeutic radiology. A complementary metal-oxide semiconductor camera images the intersection of a scanned fan shaped laser beam with the surface of the patient and allows real time recording of movements in a three-dimensional (3D) or four-dimensional (4D) format (3D + time). The LC system was first designed as an accurate patient setup tool during diagnostic and therapeutic applications but was found to be of much wider applicability as a general 4D photon "tag" for the surface of the patient in different clinical procedures. It is presently used as a 3D or 4D optical benchmark or tag for accurate delineation of the patient surface as demonstrated for patient auto setup, breathing and heart motion detection. Furthermore, its future potential applications in gating, adaptive therapy, 3D or 4D image fusion between most imaging modalities and image processing are discussed. It is shown that the LC system has a geometrical resolution of about 0.1 mm and that the rigid body repositioning accuracy is about 0.5 mm below 20 mm displacements, 1 mm below 40 mm and better than 2 mm at 70 mm. This indicates a slight need for repeated repositioning when the initial error is larger than about 50 mm. The positioning accuracy with standard patient setup procedures for prostate cancer at Karolinska was found to be about 5-6 mm when independently measured using the LC system. The system was found valuable for positron emission tomography-computed tomography (PET-CT) in vivo tumor and dose delivery imaging where it potentially may allow effective correction for breathing artifacts in 4D PET-CT and image fusion with lymph node atlases for accurate target volume definition in oncology. With a LC system in all imaging and radiation therapy rooms, auto setup during repeated diagnostic and therapeutic procedures may save around 5 min per session, increase accuracy and allow

  4. Acute skin lesions after surgical procedures: a clinical approach.

    PubMed

    Borrego, L

    2013-11-01

    In the hospital setting, dermatologists are often required to evaluate inflammatory skin lesions arising during surgical procedures performed in other departments. These lesions can be of physical or chemical origin. Povidone iodine is the most common reported cause of such lesions. If this antiseptic solution remains in contact with the skin in liquid form for a long period of time, it can give rise to serious irritant contact dermatitis in dependent or occluded areas. Less common causes of skin lesions after surgery include allergic contact dermatitis and burns under the dispersive electrode of the electrosurgical device. Most skin lesions that arise during surgical procedures are due to an incorrect application of antiseptic solutions. Special care must therefore be taken during the use of these solutions and, in particular, they should be allowed to dry. Copyright © 2012 Elsevier España, S.L. and AEDV. All rights reserved.

  5. Solution-Focused Therapy as a Culturally Acknowledging Approach with American Indians

    ERIC Educational Resources Information Center

    Meyer, Dixie D.; Cottone, R. Rocco

    2013-01-01

    Limited literature is available applying specific theoretical orientations with American Indians. Solution-focused therapy may be appropriate, given the client-identified solutions, the egalitarian counselor/client relationship, the use of relationships, and the view that change is inevitable. However, adaption of scaling questions and the miracle…

  6. The role of interactions in a world implementing adaptation and mitigation solutions to climate change.

    PubMed

    Warren, Rachel

    2011-01-13

    The papers in this volume discuss projections of climate change impacts upon humans and ecosystems under a global mean temperature rise of 4°C above preindustrial levels. Like most studies, they are mainly single-sector or single-region-based assessments. Even the multi-sector or multi-region approaches generally consider impacts in sectors and regions independently, ignoring interactions. Extreme weather and adaptation processes are often poorly represented and losses of ecosystem services induced by climate change or human adaptation are generally omitted. This paper addresses this gap by reviewing some potential interactions in a 4°C world, and also makes a comparison with a 2°C world. In a 4°C world, major shifts in agricultural land use and increased drought are projected, and an increased human population might increasingly be concentrated in areas remaining wet enough for economic prosperity. Ecosystem services that enable prosperity would be declining, with carbon cycle feedbacks and fire causing forest losses. There is an urgent need for integrated assessments considering the synergy of impacts and limits to adaptation in multiple sectors and regions in a 4°C world. By contrast, a 2°C world is projected to experience about one-half of the climate change impacts, with concomitantly smaller challenges for adaptation. Ecosystem services, including the carbon sink provided by the Earth's forests, would be expected to be largely preserved, with much less potential for interaction processes to increase challenges to adaptation. However, demands for land and water for biofuel cropping could reduce the availability of these resources for agricultural and natural systems. Hence, a whole system approach to mitigation and adaptation, considering interactions, potential human and species migration, allocation of land and water resources and ecosystem services, will be important in either a 2°C or a 4°C world.

  7. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
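
    The inexact Newton/Krylov combination described above can be exercised on a small model problem with SciPy's generic Newton-Krylov solver; this toy 1D discretisation is an assumed stand-in for the full-potential residual, not TranAir itself.

      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(u):
          # Model problem -u'' + u**3 = 1 on a uniform grid with homogeneous
          # Dirichlet boundary conditions, standing in for the discrete residual.
          n = u.size
          h = 1.0 / (n + 1)
          upad = np.concatenate(([0.0], u, [0.0]))
          lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h ** 2
          return -lap + u ** 3 - 1.0

      # Each Newton step solves the linearised system only approximately with a
      # Krylov method, which is the essence of an inexact Newton iteration.
      u = newton_krylov(residual, np.zeros(50), method='lgmres', f_tol=1e-8)
      print(u.max())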

  8. Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions

    PubMed Central

    Liu, Weidong; Luo, Xi

    2014-01-01

    This paper proposes a new method for estimating sparse precision matrices in the high dimensional setting. It has been popular to study fast computation and adaptive procedures for this problem. We propose a novel approach, called Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross validation, and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. This method also enjoys the advantage of fast computation for large-scale problems, via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463

  9. Refractive Changes Induced by Spherical Aberration in Laser Correction Procedures: An Adaptive Optics Study.

    PubMed

    Amigó, Alfredo; Martinez-Sorribes, Paula; Recuerda, Margarita

    2017-07-01

    To study the effect on vision of induced negative and positive spherical aberration within the range of laser vision correction procedures. In 10 eyes (mean age: 35.8 years) under cycloplegic conditions, spherical aberration values from -0.75 to +0.75 µm in 0.25-µm steps were induced by an adaptive optics system. Astigmatism and spherical refraction were corrected, whereas the other natural aberrations remained untouched. Visual acuity, depth of focus defined as the interval of vision for which the target was still perceived as acceptable, contrast sensitivity, and change in spherical refraction associated with the variation in pupil diameter from 6 to 2.5 mm were measured. A refractive change of 1.60 D/µm of induced spherical aberration was obtained. Emmetropic eyes became myopic when positive spherical aberration was induced and hyperopic when negative spherical aberration was induced (R² = 81%). There were weak correlations between spherical aberration and visual acuity or depth of focus (R² = 2% and 3%, respectively). Contrast sensitivity worsened with increasing spherical aberration (R² = 59%). When pupil size decreased, emmetropic eyes became hyperopic when preexisting spherical aberration was positive and myopic when spherical aberration was negative, with an average refractive change of 0.60 D/µm of spherical aberration (R² = 54%). An inverse linear correlation exists between the refractive state of the eye and spherical aberration induced within the range of laser vision correction. Small values of spherical aberration do not worsen visual acuity or depth of focus, but positive spherical aberration may induce night myopia. In addition, the changes in spherical refraction when the pupil constricts may worsen near vision when positive spherical aberration is induced or improve it when spherical aberration is negative. [J Refract Surg. 2017;33(7):470-474.]. Copyright 2017, SLACK Incorporated.

  10. Emergent Neutrality in Adaptive Asexual Evolution

    PubMed Central

    Schiffels, Stephan; Szöllősi, Gergely J.; Mustonen, Ville; Lässig, Michael

    2011-01-01

    In nonrecombining genomes, genetic linkage can be an important evolutionary force. Linkage generates interference interactions, by which simultaneously occurring mutations affect each other’s chance of fixation. Here, we develop a comprehensive model of adaptive evolution in linked genomes, which integrates interference interactions between multiple beneficial and deleterious mutations into a unified framework. By an approximate analytical solution, we predict the fixation rates of these mutations, as well as the probabilities of beneficial and deleterious alleles at fixed genomic sites. We find that interference interactions generate a regime of emergent neutrality: all genomic sites with selection coefficients smaller in magnitude than a characteristic threshold have nearly random fixed alleles, and both beneficial and deleterious mutations at these sites have nearly neutral fixation rates. We show that this dynamic limits not only the speed of adaptation, but also a population’s degree of adaptation in its current environment. We apply the model to different scenarios: stationary adaptation in a time-dependent environment and approach to equilibrium in a fixed environment. In both cases, the analytical predictions are in good agreement with numerical simulations. Our results suggest that interference can severely compromise biological functions in an adapting population, which sets viability limits on adaptive evolution under linkage. PMID:21926305

  11. Incorporating Natural Capital into Climate Adaptation Planning: Exploring the Role of Habitat in Increasing Coastal Resilience

    NASA Astrophysics Data System (ADS)

    Wedding, L.; Hartge, E. H.; Guannel, G.; Melius, M.; Reiter, S. M.; Ruckelshaus, M.; Guerry, A.; Caldwell, M.

    2014-12-01

    To support decision-makers in their efforts to manage coastal resources in a changing climate the Natural Capital Project and the Center for Ocean Solutions are engaging in, informing, and helping to shape climate adaptation planning at various scales throughout coastal California. Our team is building collaborations with regional planners and local scientific and legal experts to inform local climate adaptation decisions that might minimize the economic and social losses associated with rising seas and more damaging storms. Decision-makers are considering engineered solutions (e.g. seawalls), natural solutions (e.g. dune or marsh restoration), and combinations of the two. To inform decisions about what kinds of solutions might best work in specific locations, we are comparing alternate climate and adaptation scenarios. We will present results from our use of the InVEST ecosystem service models in Sonoma County, with an initial focus on protection from coastal hazards due to erosion and inundation. By strategically choosing adaptation alternatives, communities and agencies can work to protect people and property while also protecting or restoring dwindling critical habitat and the full suite of benefits those habitats provide to people.

  12. E-Learning Barriers and Solutions to Knowledge Management and Transfer

    ERIC Educational Resources Information Center

    Oye, Nathaniel David; Salleh, Mazleena

    2013-01-01

    This paper present a systematic overview of barriers and solutions of e-learning in knowledge management (KM) and knowledge transfer (KT) with more focus on organizations. The paper also discusses KT in organizational settings and KT in the field of e-learning. Here, an e-learning initiative shows adaptive solutions to overcome knowledge transfer…

  13. On the Solution of the Three-Dimensional Flowfield About a Flow-Through Nacelle. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Compton, William Bernard

    1985-01-01

    The solution of the three-dimensional flow field about a flow-through nacelle was studied. Both inviscid and viscous-inviscid interacting solutions were examined. Inviscid solutions were obtained with two different computational procedures for solving the three-dimensional Euler equations. The first procedure employs an alternating-direction implicit numerical algorithm and required the development of a complete computational model for the nacelle problem. The second computational technique employs a fourth-order Runge-Kutta numerical algorithm which was modified to fit the nacelle problem. Viscous effects on the flow field were evaluated with a viscous-inviscid interacting computational model. This model was constructed by coupling the explicit Euler solution procedure with a lag-entrainment boundary-layer solution procedure in a global iteration scheme. The computational techniques were used to compute the flow field for a long-duct turbofan engine nacelle at free-stream Mach numbers of 0.80 and 0.94 and angles of attack of 0 and 4 deg.

  14. Biosorption of gold from computer microprocessor leachate solutions using chitin.

    PubMed

    Côrtes, Letícia N; Tanabe, Eduardo H; Bertuol, Daniel A; Dotto, Guilherme L

    2015-11-01

    The biosorption of gold from discarded computer microprocessor (DCM) leachate solutions was studied using chitin as a biosorbent. The DCM components were leached with thiourea solutions, and two procedures were tested for recovery of gold from the leachates: (1) biosorption and (2) precipitation followed by biosorption. For each procedure, the biosorption was evaluated considering kinetic, equilibrium, and thermodynamic aspects. The general order model was able to represent the kinetic behavior, and the equilibrium was well represented by the BET model. The maximum biosorption capacities were around 35 mg/g for both procedures. The biosorption of gold on chitin was a spontaneous, favorable, and exothermic process. It was found that precipitation followed by biosorption resulted in the best gold recovery, because other species were removed from the leachate solution in the precipitation step. This method enabled about 80% of the gold to be recovered, using 20 g/L of chitin at 298 K for 4 h. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Completing and Adapting Models of Biological Processes

    NASA Technical Reports Server (NTRS)

    Margaria, Tiziana; Hinchey, Michael G.; Raffelt, Harald; Rash, James L.; Rouff, Christopher A.; Steffen, Bernhard

    2006-01-01

    We present a learning-based method for model completion and adaptation, which is based on the combination of two approaches: 1) R2D2C, a technique for mechanically transforming system requirements via provably equivalent models to running code, and 2) automata learning-based model extrapolation. The intended impact of this new combination is to make model completion and adaptation accessible to experts of the field, like biologists or engineers. The principle is briefly illustrated by generating models of biological procedures concerning gene activities in the production of proteins, although the main application is going to concern autonomic systems for space exploration.

  16. Research and constructive solutions on the reduction of slosh noise

    NASA Astrophysics Data System (ADS)

    Manta (Balas), M.; Balas, R.; Doicin, C. V.

    2016-11-01

    The paper presents the product design process for a “delicate issue” in the automotive industry: the slosh noise phenomenon. Even though current market trends show great achievements regarding this phenomenon, the main idea of this study is to design slosh-noise baffle concepts adapted to fuel tanks already in serial production in the automotive industry. Moreover, starting from internal and external research, proceeding through reverse engineering, and applying our own baffle technical solutions from conceptual sketches to 3D design, the paper shows the technical solutions identified as an alternative to the development of a new fuel tank. Based on personal and academic experience, several problems and their possible answers were identified by functional analysis in order to avoid blocking points. The idea of developing baffles adapted to existing fuel tanks led to equivalent solutions analyzed from a functional point of view. Once this stage is finished, a methodology is used to choose the optimum solution so as to obtain a functional design.

  17. Physiological Self-Regulation and Adaptive Automation

    NASA Technical Reports Server (NTRS)

    Prinzell, Lawrence J.; Pope, Alan T.; Freeman, Frederick G.

    2007-01-01

    Adaptive automation has been proposed as a solution to current problems of human-automation interaction. Past research has shown the potential of this advanced form of automation to enhance pilot engagement and lower cognitive workload. However, there have been concerns voiced regarding issues, such as automation surprises, associated with the use of adaptive automation. This study examined the use of psychophysiological self-regulation training with adaptive automation that may help pilots deal with these problems through the enhancement of cognitive resource management skills. Eighteen participants were assigned to 3 groups (self-regulation training, false feedback, and control) and performed resource management, monitoring, and tracking tasks from the Multiple Attribute Task Battery. The tracking task was cycled between 3 levels of task difficulty (automatic, adaptive aiding, manual) on the basis of the electroencephalogram-derived engagement index. The other two tasks remained in automatic mode that had a single automation failure. Those participants who had received self-regulation training performed significantly better and reported lower National Aeronautics and Space Administration Task Load Index scores than participants in the false feedback and control groups. The theoretical and practical implications of these results for adaptive automation are discussed.

  18. Separation and Recovery of Cobalt from Copper Leach Solutions

    NASA Astrophysics Data System (ADS)

    Jeffers, T. H.

    1985-01-01

    Significant amounts of cobalt, a strategic and critical metal, are present in readily accessible copper recycling leach solutions. However, cost-effective technology is not available to separate and recover the cobalt from this low-grade domestic source. The Bureau of Mines has developed a procedure using a chelating ion-exchange resin from Dow Chemical Co. to successfully extract cobalt from a pH 3.0 copper recycling solution containing only 30 mg/l cobalt. Cyclic tests with the commercial resin XFS-4195 in 4-ft-high by 1-in.-diameter columns gave an average cobalt extraction of 95% when 65 bed volumes of solution were processed at a flow rate of 4 gpm/ft². Elution of the cobalt using a 50 g/l H2SO4 solution yielded an eluate containing 0.5 g/l Co. Selective elution of the loaded resin and solvent extraction procedures using di-2-ethylhexyl phosphoric acid (D2EHPA) and Cyanex 272 removed the impurities and produced a cobalt sulfate solution containing 25 g/l Co.

  19. Transforming AdaPT to Ada

    NASA Technical Reports Server (NTRS)

    Goldsack, Stephen J.; Holzbach-Valero, A. A.; Waldrop, Raymond S.; Volz, Richard A.

    1991-01-01

    This paper describes how the main features of the proposed Ada language extensions intended to support distribution, and offered as possible solutions for Ada9X can be implemented by transformation into standard Ada83. We start by summarizing the features proposed in a paper (Gargaro et al, 1990) which constitutes the definition of the extensions. For convenience we have called the language in its modified form AdaPT which might be interpreted as Ada with partitions. These features were carefully chosen to provide support for the construction of executable modules for execution in nodes of a network of loosely coupled computers, but flexibly configurable for different network architectures and for recovery following failure, or adapting to mode changes. The intention in their design was to provide extensions which would not impact adversely on the normal use of Ada, and would fit well in style and feel with the existing standard. We begin by summarizing the features introduced in AdaPT.

  20. Systematic evaluation of three different commercial software solutions for automatic segmentation for adaptive therapy in head-and-neck, prostate and pleural cancer.

    PubMed

    La Macchia, Mariangela; Fellin, Francesco; Amichetti, Maurizio; Cianchetti, Marco; Gianolini, Stefano; Paola, Vitali; Lomax, Antony J; Widesott, Lamberto

    2012-09-18

    To validate, in the context of adaptive radiotherapy, three commercial software solutions for atlas-based segmentation. Fifteen patients, five for each group, with cancer of the Head&Neck, pleura, and prostate were enrolled in the study. In addition to the treatment planning CT (pCT) images, one replanning CT (rCT) image set was acquired for each patient during the RT course. Three experienced physicians outlined on the pCT and rCT all the volumes of interest (VOIs). We used three software solutions (VelocityAI 2.6.2 (V), MIM 5.1.1 (M) by MIMVista and ABAS 2.0 (A) by CMS-Elekta) to generate the automatic contouring on the repeated CT. All the VOIs obtained with automatic contouring (AC) were subsequently corrected manually. We recorded the time needed for: 1) ex novo definition of the ROIs on the rCT; 2) generation of the AC by the three software solutions; 3) manual correction of the AC. To compare the quality of the volumes obtained automatically by the software and manually corrected with those drawn from scratch on the rCT, we used the following indexes: overlap coefficient (DICE), sensitivity, inclusiveness index, difference in volume, and displacement differences on three axes (x, y, z) from the isocenter. The time saved by the three software solutions for all the sites, compared to manual contouring from scratch, is statistically significant and similar for all three software solutions. The time saved for each site is as follows: about an hour for Head&Neck, about 40 minutes for prostate, and about 20 minutes for mesothelioma. The best DICE similarity coefficient index was obtained with the manual correction for: A (contours for prostate), A and M (contours for H&N), and M (contours for mesothelioma). From a clinical point of view, the automated contouring workflow was shown to be significantly shorter than the manual contouring process, even though manual correction of the VOIs is always needed.
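
    The overlap index used above (DICE) is straightforward to compute from two binary masks; the sketch below is generic code, not taken from any of the three products.

      import numpy as np

      def dice(mask_a, mask_b, eps=1e-12):
          # Dice similarity coefficient: 2 * |A intersect B| / (|A| + |B|); 1 = perfect overlap.
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

      # Two toy segmentations of a 100x100 slice that partially overlap.
      a = np.zeros((100, 100), dtype=bool); a[20:60, 20:60] = True
      b = np.zeros((100, 100), dtype=bool); b[30:70, 30:70] = True
      print(dice(a, b))   # about 0.5625: 900 overlapping pixels out of 1600 in each mask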

  1. A conjugate heat transfer procedure for gas turbine blades.

    PubMed

    Croce, G

    2001-05-01

    A conjugate heat transfer procedure, allowing for the use of different solvers on the solid and fluid domain(s), is presented. Information exchange between the solid and fluid solutions is limited to boundary-condition values, and this exchange is carried out at every pseudo-time step. The global convergence rate of the procedure is thus of the same order of magnitude as that of stand-alone computations.
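
    The boundary-condition exchange can be mimicked with a deliberately simple example: two one-dimensional conducting layers stand in for the fluid and solid domains, and at every outer iteration one side passes an interface temperature while the other returns a heat flux. All material data below are illustrative assumptions.

      # Partitioned conjugate-heat-transfer loop on two 1D conducting layers:
      # the "solid" receives a Dirichlet interface temperature and returns a flux,
      # the "fluid" receives that flux and returns an updated interface temperature.
      k_fluid, L_fluid, T_hot = 100.0, 0.01, 400.0
      k_solid, L_solid, T_cold = 1.0, 0.10, 300.0

      T_int = 350.0                                      # interface temperature guess
      for it in range(20):
          q = k_solid * (T_int - T_cold) / L_solid       # "solid" solve -> interface flux
          T_new = T_hot - q * L_fluid / k_fluid          # "fluid" solve -> interface temp
          if abs(T_new - T_int) < 1e-10:
              break
          T_int = T_new

      q_exact = (T_hot - T_cold) / (L_fluid / k_fluid + L_solid / k_solid)
      print(it, T_int, T_hot - q_exact * L_fluid / k_fluid)   # converged vs. exact value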

  2. Identification of robust adaptation gene regulatory network parameters using an improved particle swarm optimization algorithm.

    PubMed

    Huang, X N; Ren, H P

    2016-05-13

    Robust adaptation is a critical ability of a gene regulatory network (GRN) to survive in a fluctuating environment; it means that the system responds to an input stimulus rapidly and then returns to its pre-stimulus steady state in a timely manner. In this paper, the GRN is modeled using the Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. Robust adaptation is quantitatively described by two conflicting indices. Identifying the parameter sets that confer robust adaptation on the GRN is a multi-variable, multi-objective, and multi-peak optimization problem, for which it is difficult to acquire satisfactory solutions, especially high-quality ones. A new best-neighbor particle swarm optimization algorithm is proposed to implement this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population. A particle crossover operation and an elitist preservation strategy are also used in the proposed algorithm. The simulation results revealed that the proposed algorithm could identify multiple solutions in a single run. Moreover, it demonstrated superior performance compared to previous methods in the sense of detecting more high-quality solutions within an acceptable time. The proposed methodology, owing to its universality and simplicity, is useful for providing guidance in designing GRNs with superior robust adaptation.
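
    A bare-bones particle swarm with Latin hypercube initialisation is sketched below for orientation; it is a generic global-best PSO on a toy objective, not the best-neighbor variant or the Michaelis-Menten model of the paper.

      import numpy as np
      from scipy.stats import qmc

      def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          lo, hi = np.array(bounds, dtype=float).T
          rng = np.random.default_rng(0)
          # Latin hypercube sampling spreads the initial swarm over the search box.
          x = qmc.scale(qmc.LatinHypercube(d=lo.size, seed=0).random(n_particles), lo, hi)
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
          gbest = pbest[pbest_f.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.apply_along_axis(objective, 1, x)
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[pbest_f.argmin()].copy()
          return gbest, pbest_f.min()

      # Toy multi-peak objective standing in for the robust-adaptation indices.
      objective = lambda p: np.sum((p - 0.3) ** 2) + 0.1 * np.sin(10 * p).sum() ** 2
      print(pso(objective, bounds=[(-1.0, 1.0)] * 4))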

  3. RARtool: A MATLAB Software Package for Designing Response-Adaptive Randomized Clinical Trials with Time-to-Event Outcomes.

    PubMed

    Ryeznik, Yevgen; Sverdlov, Oleksandr; Wong, Weng Kee

    2015-08-01

    Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool, a user-interface software package developed in MATLAB for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.

  4. Nonrelativistic grey Sn-transport radiative-shock solutions

    DOE PAGES

    Ferguson, J. M.; Morel, J. E.; Lowrie, R. B.

    2017-06-01

    We present semi-analytic radiative-shock solutions in which grey Sn-transport is used to model the radiation, and we include both constant cross sections and cross sections that depend on temperature and density. These new solutions solve for a variable Eddington factor (VEF) across the shock domain, which allows for interesting physics not seen before in radiative-shock solutions. Comparisons are made with the grey nonequilibrium-diffusion radiative-shock solutions of Lowrie and Edwards [1], which assumed that the Eddington factor is constant across the shock domain. It is our experience that the local Mach number is monotonic when producing nonequilibrium-diffusion solutions, but that this monotonicity may disappear while integrating the precursor region to produce Sn-transport solutions. For temperature- and density-dependent cross sections we show evidence of a spike in the VEF in the far upstream portion of the radiative-shock precursor. We show evidence of an adaptation zone in the precursor region, adjacent to the embedded hydrodynamic shock, as conjectured by Drake [2, 3], and also confirm his expectation that the precursor temperatures adjacent to the Zel’dovich spike take values that are greater than the downstream post-shock equilibrium temperature. We also show evidence that the radiation energy density can be nonmonotonic under the Zel’dovich spike, which is indicative of anti-diffusive radiation flow as predicted by McClarren and Drake [4]. We compare the angle dependence of the radiation flow for the Sn-transport and nonequilibrium-diffusion radiation solutions, and show that there are considerable differences in the radiation flow between these models across the shock structure. Lastly, we analyze the radiation flow to understand the cause of the adaptation zone, as well as the structure of the Sn-transport radiation-intensity solutions across the shock structure.

  5. A flexible architecture for advanced process control solutions

    NASA Astrophysics Data System (ADS)

    Faron, Kamyar; Iourovitski, Ilia

    2005-05-01

    Advanced Process Control (APC) is now mainstream practice in the semiconductor manufacturing industry. Over the past decade and a half, APC has evolved from a "good idea" and "wouldn't it be great" concept to mandatory manufacturing practice. APC developments have primarily dealt with two major thrusts, algorithms and infrastructure, and often the line between them has been blurred. The algorithms have evolved from very simple single-variable solutions to sophisticated and cutting-edge adaptive multivariable (input and output) solutions. Spending patterns in recent times have demanded that the economics of a comprehensive APC infrastructure be completely justified for any and all cost-conscious manufacturers. There are studies suggesting integration costs as high as 60% of the total APC solution costs. Such cost-prohibitive figures clearly diminish the return on APC investments. This has limited the acceptance and development of pure APC infrastructure solutions for many fabs. Modern APC solution architectures must satisfy a wide array of requirements, from very manual R&D environments to very advanced and automated "lights out" manufacturing facilities. A majority of commercially available control solutions, and most in-house developed solutions, lack important attributes of scalability, flexibility, and adaptability and hence require significant resources for integration, deployment, and maintenance. Many APC improvement efforts have been abandoned or delayed due to legacy systems and inadequate architectural design. Recent advancements (Service Oriented Architectures) in the software industry have delivered ideal technologies for delivering scalable, flexible, and reliable solutions that can seamlessly integrate into any fab's existing systems and business practices. In this publication we shall evaluate the various attributes of the architectures required by fabs and illustrate the benefits of a Service Oriented Architecture to satisfy these requirements.

  6. Nonrelativistic grey Sn-transport radiative-shock solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, J. M.; Morel, J. E.; Lowrie, R. B.

    We present semi-analytic radiative-shock solutions in which grey Sn-transport is used to model the radiation, and we include both constant cross sections and cross sections that depend on temperature and density. These new solutions solve for a variable Eddington factor (VEF) across the shock domain, which allows for interesting physics not seen before in radiative-shock solutions. Comparisons are made with the grey nonequilibrium-diffusion radiative-shock solutions of Lowrie and Edwards [1], which assumed that the Eddington factor is constant across the shock domain. It is our experience that the local Mach number is monotonic when producing nonequilibrium-diffusion solutions, but that this monotonicity may disappear while integrating the precursor region to produce Sn-transport solutions. For temperature- and density-dependent cross sections we show evidence of a spike in the VEF in the far upstream portion of the radiative-shock precursor. We show evidence of an adaptation zone in the precursor region, adjacent to the embedded hydrodynamic shock, as conjectured by Drake [2, 3], and also confirm his expectation that the precursor temperatures adjacent to the Zel’dovich spike take values that are greater than the downstream post-shock equilibrium temperature. We also show evidence that the radiation energy density can be nonmonotonic under the Zel’dovich spike, which is indicative of anti-diffusive radiation flow as predicted by McClarren and Drake [4]. We compare the angle dependence of the radiation flow for the Sn-transport and nonequilibrium-diffusion radiation solutions, and show that there are considerable differences in the radiation flow between these models across the shock structure. Lastly, we analyze the radiation flow to understand the cause of the adaptation zone, as well as the structure of the Sn-transport radiation-intensity solutions across the shock structure.

  7. Improvement of In Vitro Date Palm Plantlet Acclimatization Rate with Kinetin and Hoagland Solution.

    PubMed

    Hassan, Mona M

    2017-01-01

    In vitro propagation of date palm Phoenix dactylifera L. is an ideal method to produce large numbers of healthy plants with specific characteristics and allows plantlets to be transferred to ex vitro conditions at low cost and with a high survival rate. This chapter describes optimized acclimatization procedures for in vitro date palm plantlets. Primarily, the protocol presents the use of kinetin and Hoagland solution to enhance the growth of Barhee cv. plantlets in the greenhouse at two stages of acclimatization, and the appropriate planting medium under shade and sunlight in the nursery. Foliar application of kinetin (20 mg/L) is recommended at the first stage. A combination of soil and foliar application of 50% Hoagland solution is favorable to plant growth and developmental parameters including plant height, leaf width, stem base diameter, chlorophyll A and B, carotenoids, and indoles. The optimum values of vegetative growth parameters during the adaptation stage in a shaded nursery are achieved using a planting medium containing peat moss/perlite 2:1 (v/v), while in a sunlight nursery, clay/perlite/compost in equal ratios is best. This protocol is suitable for large-scale production of micropropagated date palm plantlets.

  8. Lesson 7: From Requirements to Specific Solutions

    EPA Pesticide Factsheets

    CROMERR requirements set performance goals; they do not dictate specific system functions, operating procedures, system architecture, or technology. The task is to decide on a solution to meet the goals.

  9. Adapting agriculture to climate change.

    PubMed

    Howden, S Mark; Soussana, Jean-François; Tubiello, Francesco N; Chhetri, Netra; Dunlop, Michael; Meinke, Holger

    2007-12-11

    The strong trends in climate change already evident, the likelihood of further changes occurring, and the increasing scale of potential climate impacts give urgency to addressing agricultural adaptation more coherently. There are many potential adaptation options available for marginal change of existing agricultural systems, often variations of existing climate risk management. We show that implementation of these options is likely to have substantial benefits under moderate climate change for some cropping systems. However, there are limits to their effectiveness under more severe climate changes. Hence, more systemic changes in resource allocation need to be considered, such as targeted diversification of production systems and livelihoods. We argue that achieving increased adaptation action will necessitate integration of climate change-related issues with other risk factors, such as climate variability and market risk, and with other policy domains, such as sustainable development. Dealing with the many barriers to effective adaptation will require a comprehensive and dynamic policy approach covering a range of scales and issues, for example, from the understanding by farmers of change in risk profiles to the establishment of efficient markets that facilitate response strategies. Science, too, has to adapt. Multidisciplinary problems require multidisciplinary solutions, i.e., a focus on integrated rather than disciplinary science and a strengthening of the interface with decision makers. A crucial component of this approach is the implementation of adaptation assessment frameworks that are relevant, robust, and easily operated by all stakeholders, practitioners, policymakers, and scientists.

  10. Need for adaptation: transformation of temporary houses.

    PubMed

    Wagemann, Elizabeth

    2017-10-01

    Building permanent accommodation after a disaster takes time for reasons including the removal of debris, the lack of available land, and the procurement of resources. In the period in-between, affected communities find shelter in different ways. Temporary houses or transitional shelters are used when families cannot return to their pre-disaster homes and no other alternative can be provided. In practice, families stay in a standard interim solution for months or even years while trying to return to their routines. Consequently, they adapt their houses to meet their midterm needs. This study analysed temporary houses in Chile and Peru to illustrate how families modify them with or without external support. The paper underlines that guidance must be given on how to alter them safely and on how to incorporate the temporary solution into the permanent structure, because families adapt their houses whether or not they are so designed. © 2017 The Author(s). Disasters © Overseas Development Institute, 2017.

  11. PROCESS OF ELIMINATING HYDROGEN PEROXIDE IN SOLUTIONS CONTAINING PLUTONIUM VALUES

    DOEpatents

    Barrick, J.G.; Fries, B.A.

    1960-09-27

    A procedure is given for peroxide precipitation processes for separating and recovering plutonium values contained in an aqueous solution. When plutonium peroxide is precipitated from an aqueous solution, the supernatant contains appreciable quantities of plutonium and peroxide. It is desirable to process this solution further to recover plutonium contained therein, but the presence of the peroxide introduces difficulties; residual hydrogen peroxide contained in the supernatant solution is eliminated by adding a nitrite or a sulfite to this solution.

  12. Space motion sickness preflight adaptation training: preliminary studies with prototype trainers

    NASA Technical Reports Server (NTRS)

    Parker, D. E.; Rock, J. C.; von Gierke, H. E.; Ouyang, L.; Reschke, M. F.; Arrott, A. P.

    1987-01-01

    Preflight training frequently has been proposed as a potential solution to the problem of space motion sickness. The paper considers, in turn, otolith reinterpretation, the concept for a preflight adaptation trainer, and the research with the Miami University Seesaw, the Wright-Patterson Air Force Base Dynamic Environment Simulator, and the Visually Coupled Airborne Systems Simulator prototype adaptation trainers.

  13. Solution of nonlinear flow equations for complex aerodynamic shapes

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed

    1992-01-01

    Solution-adaptive CFD codes based on unstructured methods for 3-D complex geometries in subsonic to supersonic regimes were investigated, and the computed solution data were analyzed in conjunction with experimental data obtained from wind tunnel measurements in order to assess and validate the predictability of the code. Specifically, the FELISA code was assessed and improved in cooperation with NASA Langley and Imperial College, Swansea, U.K.

  14. Exact least squares adaptive beamforming using an orthogonalization network

    NASA Astrophysics Data System (ADS)

    Yuen, Stanley M.

    1991-03-01

    The pros and cons of various classical and state-of-the-art methods in adaptive array processing are discussed, and the relevant concepts and historical developments are pointed out. A set of easy-to-understand equations for facilitating derivation of any least-squares-based algorithm is derived. Using this set of equations and incorporating all of the useful properties associated with various techniques, an efficient solution to the real-time adaptive beamforming problem is developed.
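
    Since the abstract does not reproduce its equations, the following Python sketch shows only the generic least-squares idea behind such beamformers: a sample-matrix-inversion (MVDR-style) weight computation in which a QR orthogonalization of the snapshots plays the role of the orthogonalization network. The array geometry, signal model and normalization are illustrative assumptions, not the paper's algorithm.

      # Hedged sketch: least-squares adaptive beamforming via QR orthogonalization.
      import numpy as np

      def mvdr_weights(snapshots, steering):
          # snapshots: (n_sensors, n_snapshots) complex data; steering: (n_sensors,) look-direction vector.
          q, r = np.linalg.qr(snapshots.conj().T)          # orthogonalize the snapshot matrix
          w = np.linalg.solve(r.conj().T @ r, steering)    # solves (X X^H) w = s
          return w / (steering.conj() @ w)                 # distortionless response in the look direction

      # Toy usage: 8-element array, one strong interferer plus noise.
      rng = np.random.default_rng(0)
      n, k = 8, 512
      s = np.exp(1j * np.pi * np.arange(n) * np.sin(0.0))      # look direction
      jam = np.exp(1j * np.pi * np.arange(n) * np.sin(0.5))    # interferer direction
      x = np.outer(jam, rng.standard_normal(k)) + 0.1 * (rng.standard_normal((n, k))
                                                         + 1j * rng.standard_normal((n, k)))
      w = mvdr_weights(x, s)
      print(abs(w.conj() @ s), abs(w.conj() @ jam))   # unit gain on target, interferer suppressed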

  15. On-line upgrade of program modules using AdaPT

    NASA Technical Reports Server (NTRS)

    Waldrop, Raymond S.; Volz, Richard A.; Smith, Gary W.; Goldsack, Stephen J.; Holzbach-Valero, A. A.

    1993-01-01

    One purpose of our research is the investigation of the effectiveness and expressiveness of AdaPT, a set of language extensions to Ada 83, for distributed systems. As a part of that effort, we are now investigating the subject of replacing, e.g. upgrading, software modules while the software system remains in operation. The AdaPT language extensions provide a good basis for this investigation for several reasons: they include the concept of specific, self-contained program modules which can be manipulated; support for program configuration is included in the language; and although the discussion will be in terms of the AdaPT language, the AdaPT to Ada 83 conversion methodology being developed as another part of this project will provide a basis for the application of our findings to Ada 83 and Ada 9X systems. The purpose of this investigation is to explore the basic mechanisms of the replacement process. With this purpose in mind, we will avoid including issues whose presence would obscure these basic mechanisms by introducing additional, unrelated concerns. Thus, while replacement in the presence of real-time deadlines, heterogeneous systems, and unreliable networks is certainly a topic of interest, we will first gain an understanding of the basic processes in the absence of such concerns. The extension of the replacement process to more complex situations can be made later. A previous report established an overview of the module replacement problem, a taxonomy of the various aspects of the replacement process, and a solution to one case in the replacement taxonomy. This report provides solutions to additional cases in the replacement process taxonomy: replacement of partitions with state and replacement of nodes. The solutions presented here establish the basic principles for module replacement. Extension of these solutions to other more complicated cases in the replacement taxonomy is direct, though requiring substantial work beyond the available funding.

  16. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

  17. Adapting to blur produced by ocular high-order aberrations

    PubMed Central

    Sawides, Lucie; de Gracia, Pablo; Dorronsoro, Carlos; Webster, Michael; Marcos, Susana

    2011-01-01

    The perceived focus of an image can be strongly biased by prior adaptation to a blurred or sharpened image. We examined whether these adaptation effects can occur for the natural patterns of retinal image blur produced by high-order aberrations (HOAs) in the optics of the eye. Focus judgments were measured for 4 subjects to estimate in a forced choice procedure (sharp/blurred) their neutral point after adaptation to different levels of blur produced by scaled increases or decreases in their HOAs. The optical blur was simulated by convolution of the PSFs from the 4 different HOA patterns, with Zernike coefficients (excluding tilt, defocus, and astigmatism) multiplied by a factor between 0 (diffraction limited) and 2 (double amount of natural blur). Observers viewed the images through an Adaptive Optics system that corrected their aberrations and made settings under neutral adaptation to a gray field or after adapting to 5 different blur levels. All subjects adapted to changes in the level of blur imposed by HOA regardless of which observer’s HOA was used to generate the stimuli, with the perceived neutral point proportional to the amount of blur in the adapting image. PMID:21712375
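
    A greatly simplified Python sketch of the blur-simulation step is given below: a high-order wavefront term is scaled by a factor between 0 (diffraction limited) and 2 (double the natural level), turned into a PSF through the pupil function, and convolved with an image. The single coma-like term, pupil sampling and RMS level are illustrative assumptions; the study used each observer's full set of measured Zernike HOA coefficients.

      # Hedged sketch: scale a toy high-order aberration, build the PSF, blur an image.
      import numpy as np

      N = 128
      y, x = np.mgrid[-1:1:1j * N, -1:1:1j * N]
      r, theta = np.hypot(x, y), np.arctan2(y, x)
      pupil = (r <= 1.0).astype(float)
      coma = (3 * r ** 3 - 2 * r) * np.cos(theta)        # single Zernike-like coma term (toy HOA)

      def psf_from_scaled_hoa(scale, wavefront=coma, waves=0.15):
          phase = 2 * np.pi * scale * waves * wavefront
          field = pupil * np.exp(1j * phase)
          psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
          return psf / psf.sum()

      def blur_image(img, scale):
          otf = np.fft.fft2(np.fft.ifftshift(psf_from_scaled_hoa(scale)))
          return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

      img = np.zeros((N, N)); img[40:90, 40:90] = 1.0    # toy test image
      sharp, natural, doubled = (blur_image(img, s) for s in (0.0, 1.0, 2.0))
      print([round(psf_from_scaled_hoa(s).max(), 4) for s in (0.0, 1.0, 2.0)])  # PSF peak falls as blur grows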

  18. Adapting to blur produced by ocular high-order aberrations.

    PubMed

    Sawides, Lucie; de Gracia, Pablo; Dorronsoro, Carlos; Webster, Michael; Marcos, Susana

    2011-06-28

    The perceived focus of an image can be strongly biased by prior adaptation to a blurred or sharpened image. We examined whether these adaptation effects can occur for the natural patterns of retinal image blur produced by high-order aberrations (HOAs) in the optics of the eye. Focus judgments were measured for 4 subjects to estimate in a forced choice procedure (sharp/blurred) their neutral point after adaptation to different levels of blur produced by scaled increases or decreases in their HOAs. The optical blur was simulated by convolution of the PSFs from the 4 different HOA patterns, with Zernike coefficients (excluding tilt, defocus, and astigmatism) multiplied by a factor between 0 (diffraction limited) and 2 (double amount of natural blur). Observers viewed the images through an Adaptive Optics system that corrected their aberrations and made settings under neutral adaptation to a gray field or after adapting to 5 different blur levels. All subjects adapted to changes in the level of blur imposed by HOA regardless of which observer's HOA was used to generate the stimuli, with the perceived neutral point proportional to the amount of blur in the adapting image.

  19. A more radical solution.

    PubMed

    Lachmann, Peter J

    2015-01-01

    The current modifications to licensing procedures still leave a basically flawed system in place. A more radical solution is proposed that involves dispensing with Phase 3 trials and making medicines available at the end of Phase 2 to those who are fully informed of the potential risks and benefits and wish to take part in this novel procedure. The advantages include a shorter development time, lower development costs and allowing smaller companies to take medicines to the clinic. The principal obstacle is that medicines are subject to strict liability rather than the tort of negligence - and this will have to be amended in due course.

  20. An investigation of several numerical procedures for time-asymptotic compressible Navier-Stokes solutions

    NASA Technical Reports Server (NTRS)

    Rudy, D. H.; Morris, D. J.; Blanchard, D. K.; Cooke, C. H.; Rubin, S. G.

    1975-01-01

    The status of an investigation of four numerical techniques for the time-dependent compressible Navier-Stokes equations is presented. Results for free shear layer calculations in the Reynolds number range from 1000 to 81000 indicate that a sequential alternating-direction implicit (ADI) finite-difference procedure requires longer computing times to reach steady state than a low-storage hopscotch finite-difference procedure. A finite-element method with cubic approximating functions was found to require excessive computer storage and computation times. A fourth method, an alternating-direction cubic spline technique which is still being tested, is also described.

  1. A Generalized Deduction of the Ideal-Solution Model

    ERIC Educational Resources Information Center

    Leo, Teresa J.; Perez-del-Notario, Pedro; Raso, Miguel A.

    2006-01-01

    A new general procedure for deriving the Gibbs energy of mixing is developed through general thermodynamic considerations, and the ideal-solution model is obtained as a particular case of the general one. The deduction of the Gibbs energy of mixing for the ideal-solution model is a rational one and is viewed as suitable for advanced students who…
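
    For readers who want the end point of that derivation in explicit form, the standard textbook ideal-solution expressions (not quoted from the article itself) are:

      \Delta G_{\mathrm{mix}}^{\mathrm{ideal}} = nRT \sum_i x_i \ln x_i, \qquad
      \Delta S_{\mathrm{mix}}^{\mathrm{ideal}} = -nR \sum_i x_i \ln x_i, \qquad
      \Delta H_{\mathrm{mix}}^{\mathrm{ideal}} = 0,

    where n is the total amount of substance and x_i is the mole fraction of component i.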

  2. When procedures discourage insight: epistemological consequences of prompting novice physics students to construct force diagrams

    NASA Astrophysics Data System (ADS)

    Kuo, Eric; Hallinen, Nicole R.; Conlin, Luke D.

    2017-05-01

    One aim of school science instruction is to help students become adaptive problem solvers. Though successful at structuring novice problem solving, step-by-step problem-solving frameworks may also constrain students' thinking. This study utilises a paradigm established by Heckler [(2010). Some consequences of prompting novice physics students to construct force diagrams. International Journal of Science Education, 32(14), 1829-1851] to test how cuing the first step in a standard framework affects undergraduate students' approaches and evaluation of solutions in physics problem solving. Specifically, prompting the construction of a standard diagram before problem solving increases the use of standard procedures, decreasing the use of a conceptual shortcut. Providing a diagram prompt also lowers students' ratings of informal approaches to similar problems. These results suggest that reminding students to follow typical problem-solving frameworks limits their views of what counts as good problem solving.

  3. On valuing information in adaptive-management models.

    PubMed

    Moore, Alana L; McCarthy, Michael A

    2010-08-01

    Active adaptive management looks at the benefit of using strategies that may be suboptimal in the near term but may provide additional information that will facilitate better management in the future. In many adaptive-management problems that have been studied, the optimal active and passive policies (accounting for learning when designing policies and designing policy on the basis of current best information, respectively) are very similar. This seems paradoxical; when faced with uncertainty about the best course of action, managers should spend very little effort on actively designing programs to learn about the system they are managing. We considered two possible reasons why active and passive adaptive solutions are often similar. First, the benefits of learning are often confined to the particular case study in the modeled scenario, whereas in reality information gained from local studies is often applied more broadly. Second, management objectives that incorporate the variance of an estimate may place greater emphasis on learning than more commonly used objectives that aim to maximize an expected value. We explored these issues in a case study of Merri Creek, Melbourne, Australia, in which the aim was to choose between two options for revegetation. We explicitly incorporated monitoring costs in the model. The value of the terminal rewards and the choice of objective both influenced the difference between active and passive adaptive solutions. Explicitly considering the cost of monitoring provided a different perspective on how the terminal reward and management objective affected learning. The states for which it was optimal to monitor did not always coincide with the states in which active and passive adaptive management differed. Our results emphasize that spending resources on monitoring is only optimal when the expected benefits of the options being considered are similar and when the pay-off for learning about their benefits is large.

  4. User-Centered Indexing for Adaptive Information Access

    NASA Technical Reports Server (NTRS)

    Chen, James R.; Mathe, Nathalie

    1996-01-01

    We are focusing on information access tasks characterized by a large volume of hypermedia-connected technical documents, a need for rapid and effective access to familiar information, and long-term interaction with evolving information. The problem for technical users is to build and maintain a personalized task-oriented model of the information to quickly access relevant information. We propose a solution which provides user-centered adaptive information retrieval and navigation. This solution supports users in customizing information access over time. It is complementary to information discovery methods which provide access to new information, since it lets users customize future access to previously found information. It relies on a technique, called the Adaptive Relevance Network, which creates and maintains a complex indexing structure to represent personal users' information access maps organized by concepts. This technique is integrated within the Adaptive HyperMan system, which helps NASA Space Shuttle flight controllers organize and access large amounts of information. It allows users to select and mark any part of a document as interesting, and to index that part with user-defined concepts. Users can then do subsequent retrieval of marked portions of documents. This functionality allows users to define and access personal collections of information, which are dynamically computed. The system also supports collaborative review by letting users share group access maps. The adaptive relevance network provides long-term adaptation based both on usage and on explicit user input. The indexing structure is dynamic and evolves over time. Learning and generalization support flexible retrieval of information under similar concepts. The network is geared towards more recent information access, and automatically manages its size in order to maintain rapid access when scaling up to a large hypermedia space. We present results of simulated learning experiments.
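
    The following Python toy illustrates the flavor of such a user-centered concept index: fragments are marked under user-defined concepts and their relevance weights drift with usage feedback. The class, method names and update rule are invented for illustration and do not reproduce the Adaptive Relevance Network itself.

      # Hedged toy: a concept-to-fragment index whose relevance weights adapt to usage.
      from collections import defaultdict

      class ConceptIndex:
          def __init__(self, learning_rate=0.1):
              self.weights = defaultdict(dict)       # concept -> {fragment_id: relevance}
              self.lr = learning_rate

          def mark(self, concept, fragment_id, initial_relevance=1.0):
              self.weights[concept][fragment_id] = initial_relevance

          def retrieve(self, concept, top_k=5):
              ranked = sorted(self.weights[concept].items(), key=lambda kv: -kv[1])
              return [frag for frag, _ in ranked[:top_k]]

          def feedback(self, concept, fragment_id, used):
              # Reinforce fragments the user actually opened; decay the ones ignored.
              w = self.weights[concept]
              w[fragment_id] = max(w[fragment_id] + (self.lr if used else -self.lr), 0.0)

      index = ConceptIndex()
      index.mark("ascent checklist", "doc12:sec3")     # hypothetical document fragments
      index.mark("ascent checklist", "doc40:sec1")
      index.feedback("ascent checklist", "doc40:sec1", used=True)
      print(index.retrieve("ascent checklist"))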

  5. Parameter learning for performance adaptation

    NASA Technical Reports Server (NTRS)

    Peek, Mark D.; Antsaklis, Panos J.

    1990-01-01

    A parameter learning method is introduced and used to broaden the region of operability of the adaptive control system of a flexible space antenna. The learning system guides the selection of control parameters in a process leading to optimal system performance. A grid search procedure is used to estimate an initial set of parameter values. The optimization search procedure uses a variation of the Hooke and Jeeves multidimensional search algorithm. The method is applicable to any system where performance depends on a number of adjustable parameters. A mathematical model is not necessary, as the learning system can be used whenever the performance can be measured via simulation or experiment. The results of two experiments, the transient regulation and the command following experiment, are presented.
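
    For concreteness, a plain textbook form of the Hooke and Jeeves pattern search mentioned above is sketched in Python below; the authors used their own variation, and the real performance index came from simulation or experiment rather than the toy quadratic used here.

      # Hedged sketch of Hooke-Jeeves pattern search (simplified textbook form).
      import numpy as np

      def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
          x_base = np.asarray(x0, dtype=float)
          f_base = f(x_base)
          for _ in range(max_iter):
              # Exploratory move: probe +/- step along each coordinate.
              x_new, f_new = x_base.copy(), f_base
              for i in range(len(x_base)):
                  for delta in (step, -step):
                      trial = x_new.copy()
                      trial[i] += delta
                      if f(trial) < f_new:
                          x_new, f_new = trial, f(trial)
                          break
              if f_new < f_base:
                  # Pattern move: try jumping further along the successful direction.
                  x_pattern = 2 * x_new - x_base
                  x_base, f_base = x_new, f_new
                  if f(x_pattern) < f_base:
                      x_base, f_base = x_pattern, f(x_pattern)
              else:
                  step *= shrink          # no improvement: refine the step size
                  if step < tol:
                      break
          return x_base, f_base

      # Toy usage: two controller-like parameters tuned on a quadratic performance index.
      print(hooke_jeeves(lambda p: (p[0] - 1.0) ** 2 + 10 * (p[1] + 2.0) ** 2, [0.0, 0.0]))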

  6. Quality based approach for adaptive face recognition

    NASA Astrophysics Data System (ADS)

    Abboud, Ali J.; Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

    Recent advances in biometric technology have pushed towards more robust and reliable systems. We aim to build systems that have low recognition errors and are less affected by variation in recording conditions. Recognition errors are often attributed to the usage of low-quality biometric samples. Hence, there is a need to develop new intelligent techniques and strategies to automatically measure/quantify the quality of biometric image samples and, if necessary, restore image quality according to the needs of the intended application. In this paper, we present no-reference image quality measures in the spatial domain that have an impact on face recognition. The first is called the symmetrical adaptive local quality index (SALQI) and the second is called middle halve (MH). Also, an adaptive strategy, called symmetrical adaptive histogram equalization (SAHE), has been developed to select the best way to restore the image quality. The main benefits of using quality measures for an adaptive strategy are: (1) avoidance of excessive unnecessary enhancement procedures that may cause undesired artifacts, and (2) reduced computational complexity, which is essential for real-time applications. We test the success of the proposed measures and adaptive approach for a wavelet-based face recognition system that uses the nearest-neighbor classifier. We demonstrate noticeable improvements in the performance of the adaptive face recognition system over the corresponding non-adaptive scheme.
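
    The quality-gated enhancement idea can be illustrated with a very small Python sketch: compute a cheap no-reference quality score and enhance only when it falls below a threshold. The RMS-contrast score and plain histogram equalization used here are stand-ins; they are not the paper's SALQI/MH measures or its SAHE method.

      # Hedged sketch: enhance a face image only if a simple quality score says it is needed.
      import numpy as np

      def rms_contrast(img):
          return (img.astype(float) / 255.0).std()

      def hist_equalize(img):
          hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
          cdf = hist.cumsum().astype(float)
          cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
          return np.interp(img.ravel(), np.arange(256), 255 * cdf).reshape(img.shape).astype(np.uint8)

      def adaptive_enhance(img, quality_threshold=0.15):
          return hist_equalize(img) if rms_contrast(img) < quality_threshold else img

      # Usage: a low-contrast image gets equalized; a well-spread one would be left untouched.
      rng = np.random.default_rng(0)
      low = rng.integers(100, 140, size=(64, 64)).astype(np.uint8)
      print(rms_contrast(low), rms_contrast(adaptive_enhance(low)))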

  7. Application of the Flood-IMPAT procedure in the Valle d'Aosta Region, Italy

    NASA Astrophysics Data System (ADS)

    Minucci, Guido; Mendoza, Marina Tamara; Molinari, Daniela; Atun, Funda; Menoni, Scira; Ballio, Francesco

    2016-04-01

    Flood Risk Management Plans (FRMPs), established by the European "Floods" Directive (Directive 2007/60/EU) and required of Member States in order to address all aspects of flood risk management while taking into account the costs and benefits of proposed mitigation tools, must be reviewed under the same law every six years. This is aimed at continuously increasing the effectiveness of risk management, on the basis of the most advanced knowledge of flood risk and the most (economically) feasible solutions, also taking into consideration achievements of the previous management cycle. Within this context, the Flood-IMPAT (i.e., Integrated Meso-scale Procedure to Assess Territorial flood risk) procedure has been developed with the aim of overcoming the limits of the risk maps produced by the Po River Basin Authority and adopted for the first version of the Po River FRMP. The procedure allows the estimation of flood risk at the meso-scale and is characterized by three main peculiarities. First is its feasibility for the entire Italian territory. Second is the possibility to express risk in monetary terms (i.e., expected damage), at least for those categories of damage for which suitable models are available. Finally, independent modules compose the procedure: each module allows the estimation of a certain type of damage (i.e., direct, indirect, intangible) in a certain sector (e.g., residential, industrial, agriculture, environment, etc.) separately, guaranteeing flexibility in the implementation. This paper shows the application of the Flood-IMPAT procedure and recent advancements in the procedure, aimed at increasing its reliability and usability. Through a further implementation of the procedure in the Dora Baltea River Basin (north of Italy), it was possible to test the sensitivity of the risk estimates supplied by Flood-IMPAT with respect to different damage models and different approaches for the estimation of assets at risk. Risk estimates were also compared with observed damage data in the investigated areas.

  8. Adaptive control of large space structures using recursive lattice filters

    NASA Technical Reports Server (NTRS)

    Sundararajan, N.; Goglia, G. L.

    1985-01-01

    The use of recursive lattice filters for identification and adaptive control of large space structures is studied. Lattice filters were used to identify the structural dynamics model of the flexible structures. This identification model is then used for adaptive control. Before the identified model and control laws are integrated, the identified model is passed through a series of validation procedures, and only when the model passes these validation procedures is control engaged. This type of validation scheme prevents instability when the overall loop is closed. Another important area of research, namely that of robust controller synthesis, was investigated using frequency-domain multivariable controller synthesis methods. The method uses the Linear Quadratic Gaussian/Loop Transfer Recovery (LQG/LTR) approach to ensure stability against unmodeled higher-frequency modes and achieves the desired performance.

  9. Natural language generation of surgical procedures.

    PubMed

    Wagner, J C; Rogers, J E; Baud, R H; Scherrer, J R

    1999-01-01

    A number of compositional Medical Concept Representation systems are being developed. Although these provide for a detailed conceptual representation of the underlying information, they have to be translated back to natural language for use by end-users and applications. The GALEN programme has been developing one such representation, and we report here on a tool developed to generate natural language phrases from the GALEN conceptual representations. This tool can be adapted to different source modelling schemes and to different destination languages or sublanguages of a domain. It is based on a multilingual approach to natural language generation, realised through a clean separation of the domain model from the linguistic model and their link by well-defined structures. Specific knowledge structures and operations have been developed for bridging between the modelling 'style' of the conceptual representation and natural language. Using the example of the scheme developed for modelling surgical operative procedures within the GALEN-IN-USE project, we show how the generator is adapted to such a scheme. The basic characteristics of the surgical procedures scheme are presented together with the basic principles of the generation tool. Using worked examples, we discuss the transformation operations which change the initial source representation into a form which can more directly be translated to a given natural language. In particular, the linguistic knowledge which has to be introduced, such as definitions of concepts and relationships, is described. We explain the overall generator strategy and how particular transformation operations are triggered by language-dependent and conceptual parameters. Results are shown for generated French phrases corresponding to surgical procedures from the urology domain.

  10. Space-time adaptive ADER-DG schemes for dissipative flows: Compressible Navier-Stokes and resistive MHD equations

    NASA Astrophysics Data System (ADS)

    Fambri, Francesco; Dumbser, Michael; Zanotti, Olindo

    2017-11-01

    This paper presents an arbitrary high-order accurate ADER Discontinuous Galerkin (DG) method on space-time adaptive meshes (AMR) for the solution of two important families of non-linear time-dependent partial differential equations for compressible dissipative flows: the compressible Navier-Stokes equations and the equations of viscous and resistive magnetohydrodynamics in two and three space dimensions. The work continues a recent series of papers concerning the development and application of a proper a posteriori subcell finite volume limiting procedure suitable for discontinuous Galerkin methods (Dumbser et al., 2014, Zanotti et al., 2015 [40,41]). It is a well-known fact that a major weakness of high-order DG methods lies in the difficulty of limiting discontinuous solutions, which generate spurious oscillations, namely the so-called 'Gibbs phenomenon'. In the present work, a nonlinear stabilization of the scheme is sequentially and locally introduced only for troubled cells on the basis of a novel a posteriori detection criterion, i.e. the MOOD approach. The main benefits of the MOOD paradigm, i.e. the computational robustness even in the presence of strong shocks, are preserved and the numerical diffusion is considerably reduced also for the limited cells by resorting to a proper sub-grid. In practice the method first produces a so-called candidate solution by using a high-order accurate unlimited DG scheme. Then, a set of numerical and physical detection criteria is applied to the candidate solution, namely: positivity of pressure and density, absence of floating point errors, and satisfaction of a discrete maximum principle in the sense of polynomials. Furthermore, in those cells where at least one of these criteria is violated the computed candidate solution is detected as troubled and is locally rejected. Subsequently, a more reliable numerical solution is recomputed a posteriori by employing a more robust but still very accurate ADER-WENO finite volume
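
    A minimal one-dimensional Python sketch of the a posteriori detection step described above: each cell of the candidate solution is accepted only if its values are finite, density and pressure stay positive, and a relaxed discrete maximum principle holds with respect to the previous time level's neighborhood; flagged cells would then be recomputed with the robust subcell finite volume scheme. The thresholds and the 1-D setting are illustrative assumptions, not the paper's implementation.

      # Hedged sketch of MOOD-style troubled-cell detection (1-D, illustrative only).
      import numpy as np

      def troubled_cells(rho_new, p_new, rho_old, eps=1e-12, relax=1e-4):
          n = len(rho_new)
          flags = np.zeros(n, dtype=bool)
          for i in range(n):
              nbh = rho_old[max(0, i - 1):min(n, i + 2)]          # cell i and its neighbors
              lo, hi = nbh.min() - relax, nbh.max() + relax
              flags[i] = (not np.isfinite(rho_new[i]) or not np.isfinite(p_new[i])
                          or rho_new[i] <= eps or p_new[i] <= eps
                          or not (lo <= rho_new[i] <= hi))
          return flags   # True -> recompute this cell with the robust subcell FV scheme

      # Example: a Gibbs-type overshoot and a negative pressure are both flagged.
      rho_old = np.ones(6)
      rho_new = np.array([1.0, 1.0, 1.4, 1.0, 1.0, 1.0])
      p_new = np.array([1.0, 1.0, 1.0, -0.1, 1.0, 1.0])
      print(troubled_cells(rho_new, p_new, rho_old))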

  11. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  12. Protocol Independent Adaptive Route Update for VANET

    PubMed Central

    Rasheed, Asim; Qayyum, Amir

    2014-01-01

    High relative node velocity and high active node density have presented challenges to existing routing approaches within highly scaled ad hoc wireless networks, such as Vehicular Ad hoc Networks (VANET). Efficient routing requires finding the optimum route with minimum delay, updating it on availability of a better one, and repairing it on link breakages. Current routing protocols are generally focused on finding and maintaining an efficient route, with much less emphasis on route update. Adaptive route update usually becomes impractical for dense networks due to large routing overheads. This paper presents an adaptive route update approach which can provide a solution for any baseline routing protocol. The proposed adaptation eliminates the classification of reactive and proactive by categorizing them as logical conditions to find and update the route. PMID:24723807

  13. Countermeasures to Enhance Sensorimotor Adaptability

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Brady, R. A.; Batson, C. C.; Miller, C. A.; Cohen, H. S.

    2011-01-01

    adaptability. These results indicate that SA training techniques can be added to existing treadmill exercise equipment and procedures to produce a single integrated countermeasure system to improve performance of astro/cosmonauts during prolonged exploratory space missions.

  14. Adaptive Modeling of Details for Physically-Based Sound Synthesis and Propagation

    DTIC Science & Technology

    2015-03-21

    … the interface that ensures the consistency and validity of the solution given by the two methods. Transfer functions are used to model two-way … Approved for public release; distribution is unlimited. Keywords: applied sciences, adaptive modeling, physically-based, sound synthesis, propagation, virtual world.

  15. An adaptive SVSF-SLAM algorithm to improve the success and solving the UGVs cooperation problem

    NASA Astrophysics Data System (ADS)

    Demim, Fethi; Nemra, Abdelkrim; Louadj, Kahina; Hamerlain, Mustapha; Bazoula, Abdelouahab

    2018-05-01

    This paper aims to present a Decentralised Cooperative Simultaneous Localization and Mapping (DCSLAM) solution based on 2D laser data using an Adaptive Covariance Intersection (ACI). The ACI-DCSLAM algorithm will be validated on a swarm of Unmanned Ground Vehicles (UGVs) receiving features to estimate the position and covariance of shared features before adding them to the global map. With the proposed solution, a group of UGVs will be able to construct a large reliable map and localise themselves within this map without any user intervention. The most popular solutions to this problem are EKF-SLAM, nonlinear H-infinity SLAM and FAST-SLAM. The former suffers from two important problems, which are the poor consistency caused by the linearization problem and the calculation of the Jacobian. The second solution, the H-infinity filter, is very promising because it does not make any assumption about noise characteristics, while the latter is not suitable for real-time implementation. Therefore, a new alternative solution based on the smooth variable structure filter (SVSF) is adopted. A cooperative adaptive SVSF-SLAM algorithm is proposed in this paper to solve the UGV SLAM problem. Our main contribution consists in adapting the SVSF filter to solve the decentralised cooperative SLAM problem for multiple UGVs. The algorithms developed in this paper were implemented using two Pioneer mobile robots equipped with 2D laser telemetry sensors. Good results are obtained by the cooperative adaptive SVSF-SLAM algorithm compared to the cooperative EKF/H-infinity SLAM algorithms, especially when the noise is colored or affected by a variable bias. Simulation results confirm and show the efficiency of the proposed algorithm, which is more robust, stable and adapted to real-time applications.
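
    The covariance intersection rule at the heart of the ACI fusion step can be written in a few lines; the Python sketch below is the plain textbook form (a convex combination of information matrices with the weight chosen to minimize the fused covariance trace), not the adaptive variant or the SVSF machinery of the paper, and the landmark numbers are invented.

      # Hedged sketch: covariance intersection for fusing two estimates with unknown correlation.
      import numpy as np

      def covariance_intersection(x1, P1, x2, P2, n_grid=99):
          I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
          best = None
          for w in np.linspace(0.01, 0.99, n_grid):        # brute-force search for the CI weight
              P = np.linalg.inv(w * I1 + (1 - w) * I2)
              if best is None or np.trace(P) < best[0]:
                  best = (np.trace(P), w, P)
          _, w, P = best
          x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
          return x, P, w

      # Example: two UGVs share the same 2-D landmark with different uncertainty ellipses.
      x1, P1 = np.array([2.0, 1.0]), np.diag([0.5, 2.0])
      x2, P2 = np.array([2.2, 0.9]), np.diag([2.0, 0.4])
      xf, Pf, w = covariance_intersection(x1, P1, x2, P2)
      print(xf, np.diag(Pf), round(w, 2))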

  16. Global adaptive control for uncertain nonaffine nonlinear hysteretic systems.

    PubMed

    Liu, Yong-Hua; Huang, Liangpei; Xiao, Dongming; Guo, Yong

    2015-09-01

    In this paper, the global output tracking is investigated for a class of uncertain nonlinear hysteretic systems with nonaffine structures. By combining the solution properties of the hysteresis model with the novel backstepping approach, a robust adaptive control algorithm is developed without constructing a hysteresis inverse. The proposed control scheme is further modified to tackle the bounded disturbances by adaptively estimating their bounds. It is rigorously proven that the designed adaptive controllers can guarantee global stability of the closed-loop system. Two numerical examples are provided to show the effectiveness of the proposed control schemes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Natural language generation of surgical procedures.

    PubMed

    Wagner, J C; Rogers, J E; Baud, R H; Scherrer, J R

    1998-01-01

    The GALEN-IN-USE project has developed a compositional scheme for the conceptual representation of surgical operative procedure rubrics. The complex representations which result are translated back to surface language by a tool for multilingual natural language generation. This generator can be adapted to the specific characteristics of the scheme by introducing particular definitions of concepts and relationships. We discuss how the generator uses such definitions to bridge between the modelling 'style' of the GALEN scheme and natural language.

  18. "Low-field" intraoperative MRI: a new scenario, a new adaptation.

    PubMed

    Iturri-Clavero, F; Galbarriatu-Gutierrez, L; Gonzalez-Uriarte, A; Tamayo-Medel, G; de Orte, K; Martinez-Ruiz, A; Castellon-Larios, K; Bergese, S D

    2016-11-01

    To describe the adaptation of Cruces University Hospital to the use of intraoperative magnetic resonance imaging (ioMRI), and how the acquisition and use of this technology would impact the day-to-day running of the neurosurgical suite. With the approval of the ethics committee, an observational, prospective study was performed from June 2012 to April 2014, which included 109 neurosurgical procedures with the assistance of ioMRI. These were performed using the Polestar N-30 system (PSN30; Medtronic Navigation, Louisville, CO), which was integrated into the operating room. A total of 159 procedures were included: 109 cranial surgeries assisted with ioMRI and 50 control cases (no ioMRI use). There were no statistically significant differences when anaesthetic time (p=0.587) and surgical time (p=0.792) were compared; however, an important difference was shown in the duration of patient positioning (p<0.0009) and the total duration of the procedure (p<0.0009) between the two groups. The introduction of ioMRI is necessary for most neurosurgical suites; however, a few things need to be taken into consideration when adapting to it. Increased procedure time, the use of specific MRI-safe devices, and a checklist for each patient to minimise risks should all be taken into consideration. Published by Elsevier Ltd.

  19. Balancing stability and flexibility in adaptive governance: an ...

    EPA Pesticide Factsheets

    Adaptive governance must work “on the ground,” that is, it must operate through structures and procedures that the people it governs perceive to be legitimate and fair, as well as incorporating processes and substantive goals that are effective in allowing social-ecological systems (SESs) to adapt to climate change and other impacts. To address the continuing and accelerating alterations that climate change is bringing to SESs, adaptive governance generally will require more flexibility than prior governance institutions have often allowed. However, to function as good governance, adaptive governance must pay real attention to the problem of how to balance this increased need for flexibility with continuing governance stability so that it can foster adaptation to change without being perceived or experienced as perpetually destabilizing, disruptive, and unfair. Flexibility and stability serve different purposes in governance, and a variety of tools exist to strike different balances between them while still preserving the governance institution’s legitimacy among the people governed. After reviewing those purposes and the implications of climate change for environmental governance, we examine psychological insights into the structuring of adaptive governance and the variety of legal tools available to incorporate those insights into adaptive governance regimes. Because the substantive goals of governance systems will differ among specific systems, we do no

  20. High-order adaptive secondary mirrors: where are we?

    NASA Astrophysics Data System (ADS)

    Salinari, Piero; Sandler, David G.

    1998-09-01

    We discuss the current developments and the prospective performance of adaptive secondary mirrors for high-order adaptive correction on large ground-based telescopes. The development of the basic techniques, which involved a large collaborative effort of public research institutes and private companies, is now essentially complete. The next crucial step will be the construction of an adaptive secondary mirror for the 6.5 m MMT. Problems such as the fabrication of very thin mirrors, the low-cost implementation of fast position sensors, of efficient and compact electromagnetic actuators, of the control and communication electronics, of the actuator control system, of the thermal control and of the mechanical layout can be considered as solved, in some cases with more than one viable solution. To verify performance at the system level, two complete prototypes have been built and tested, one at ThermoTrex and the other at Arcetri. The two prototypes adopt the same basic approach concerning actuators, sensor and support of the thin mirror, but differ in a number of aspects such as the material of the rigid back plate used as reference for the thin mirror, the number and surface density of the actuators, the solution adopted for the removal of the heat, and the design of the electronics. We discuss how the results obtained from the two prototypes and from numerical simulations will guide the design of full-size adaptive secondary units.

  1. QUANTIFYING LOAD-INDUCED SOLUTE TRANSPORT AND SOLUTE-MATRIX INTERACTIONS WITHIN THE OSTEOCYTE LACUNAR-CANALICULAR SYSTEM

    PubMed Central

    Wang, Bin; Zhou, Xiaozhou; Price, Christopher; Li, Wen; Pan, Jun; Wang, Liyun

    2012-01-01

    Osteocytes, the most abundant cells in bone, are critical in maintaining tissue homeostasis and orchestrating bone’s mechanical adaptation. Osteocytes depend upon load-induced convection within the lacunar-canalicular system (LCS) to maintain viability and to sense their mechanical environment. Using the fluorescence recovery after photobleaching (FRAP) imaging approach, we previously quantified the convection of a small tracer (sodium fluorescein, 376Da) in the murine tibial LCS for an intermittent cyclic loading (Price et al., 2011. JBMR 26:277-85). In the present study we first expanded the investigation of solute transport using a larger tracer (parvalbumin, 12.3kDa), which is comparable in size to some signaling proteins secreted by osteocytes. Murine tibiae were subjected to sequential FRAP tests under rest-inserted cyclic loading while the loading magnitude (0, 2.8, or 4.8N) and frequency (0.5, 1, or 2 Hz) were varied. The characteristic transport rate k and the transport enhancement relative to diffusion (k/k0) were measured under each loading condition, from which the peak solute velocity in the LCS was derived using our LCS transport model. Both the transport enhancement and solute velocity increased with loading magnitude and decreased with loading frequency. Furthermore, the solute-matrix interactions, quantified in terms of the reflection coefficient through the osteocytic pericellular matrix (PCM), were measured and theoretically modeled. The reflection coefficient of parvalbumin (σ=0.084) was derived from the differential fluid and solute velocities within loaded bone. Using a newly developed PCM sieving model, the PCM’s fiber configurations accounting for the measured interactions were obtained for the first time. The present study provided not only new data on the micro-fluidic environment experienced by osteocytes in situ, but also a powerful quantitative tool for future study of the PCM, the critical interface that controls both outside

  2. A self-adaptive memeplexes robust search scheme for solving stochastic demands vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Chen, Xianshun; Feng, Liang; Ong, Yew Soon

    2012-07-01

    In this article, we propose a self-adaptive memeplex robust search (SAMRS) for finding robust and reliable solutions, that is, solutions that are less sensitive to the stochastic behaviour of customer demands and that have a low probability of route failure, respectively, in the vehicle routing problem with stochastic demands (VRPSD). In particular, the contribution of this article is three-fold. First, the proposed SAMRS employs the robust solution search scheme (RS3) as an approximation of the computationally intensive Monte Carlo simulation, thus reducing the computational cost of fitness evaluation in VRPSD, while directing the search towards robust and reliable solutions. Furthermore, self-adaptive individual learning based on the conceptual modelling of the memeplex is introduced in SAMRS. Finally, SAMRS incorporates a gene-meme co-evolution model with genetic and memetic representation to effectively manage the search for solutions in VRPSD. Extensive experimental results are then presented for benchmark problems to demonstrate that the proposed SAMRS serves as an effective means of generating high-quality robust and reliable solutions in VRPSD.
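
    The expense that RS3 is meant to sidestep is easy to see in a toy Monte Carlo reliability check: the Python sketch below scores a fixed route by the sampled probability that cumulative customer demand exceeds the vehicle capacity. The demand model, route and capacity are invented for illustration, and none of this reproduces SAMRS itself.

      # Hedged toy: Monte Carlo estimate of the route-failure probability in a VRPSD setting.
      import numpy as np

      def route_failure_probability(route, mean_demand, std_demand, capacity,
                                    n_samples=2000, seed=0):
          rng = np.random.default_rng(seed)
          failures = 0
          for _ in range(n_samples):
              load = 0.0
              for customer in route:
                  load += max(0.0, rng.normal(mean_demand[customer], std_demand[customer]))
                  if load > capacity:
                      failures += 1          # route fails: a return trip to the depot is needed
                      break
          return failures / n_samples

      print(route_failure_probability(route=[0, 1, 2, 3],
                                      mean_demand=[5, 8, 6, 7],
                                      std_demand=[1, 2, 1, 2],
                                      capacity=30))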

  3. Adaptive neural network motion control for aircraft under uncertainty conditions

    NASA Astrophysics Data System (ADS)

    Efremov, A. V.; Tiaglik, M. S.; Tiumentsev, Yu V.

    2018-02-01

    Motion control of modern and advanced aircraft must be provided under diverse uncertainty conditions. This problem can be solved by using adaptive control laws. We carry out an analysis of the capabilities of these laws for such adaptive systems as MRAC (Model Reference Adaptive Control) and MPC (Model Predictive Control). In the case of a nonlinear control object, the most efficient solution to the adaptive control problem is the use of artificial neural network (ANN) technologies. These technologies are suitable for the development of both a control object model and a control law for the object. The approximate nature of the ANN model was taken into account by introducing additional compensating feedback into the control system. The capabilities of adaptive control laws under uncertainty in the source data are considered. We also conduct simulations to assess the contribution of adaptivity to the behavior of the system.
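
    As a reminder of what the simplest such adaptive law looks like, the Python sketch below runs the classical scalar MRAC example with the MIT-rule gradient update for an unknown plant gain; it is a textbook illustration of the MRAC idea mentioned above, not the authors' neural-network scheme, and all numerical values are arbitrary.

      # Hedged sketch: scalar MRAC with the MIT rule for an unknown plant gain.
      import numpy as np

      def simulate_mrac(a=-3.0, b_plant=2.0, b_ref=3.0, gamma=2.0, dt=0.001, t_end=20.0):
          n = int(t_end / dt)
          y = ym = theta = 0.0              # theta: adaptive feedforward gain
          log = np.zeros((n, 2))
          for k in range(n):
              r = 1.0 if (k * dt) % 4.0 < 2.0 else -1.0    # square-wave reference
              u = theta * r
              y += dt * (a * y + b_plant * u)              # plant with unknown input gain
              ym += dt * (a * ym + b_ref * r)              # reference model
              e = y - ym
              theta += dt * (-gamma * e * ym)              # MIT rule: dtheta/dt = -gamma * e * ym
              log[k] = (y, ym)
          return theta, log

      theta, log = simulate_mrac()
      # theta drifts toward b_ref / b_plant = 1.5 and the tracking error shrinks over time.
      print(round(theta, 3), np.abs(log[-2000:, 0] - log[-2000:, 1]).max())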

  4. [Adaptation of the picture exchange communication system in a school context].

    PubMed

    Almeida, Maria Amélia; Piza, Maria Helena Machado; Lamônica, Dionísa Aparecida Cusin

    2005-01-01

    Alternative communication. To evaluate the efficacy of the adapted PECS and Picture Communication Symbols (PCS) in the communication of a child with cerebral palsy. The participant in this study was a girl aged 9 years and 10 months with athetoid quadriplegia. All stages of the adapted PECS were applied (Walter, 2000), using the PCS pictures (Johnson, 1998), associated with the functional curriculum proposed by LeBlanc (1991). An experimental AB design was used in order to test the procedures. The subject was able to pass through all of the adapted PECS phases and to use her communication board in school activities. The adapted PECS proved to be effective in improving the subject's communication abilities.

  5. Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)

    2000-01-01

    This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.

  6. A Robust Adaptive Autonomous Approach to Optimal Experimental Design

    NASA Astrophysics Data System (ADS)

    Gu, Hairong

    Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly in quest of prediction accuracy, often encounter difficulties in conducting experiments using an existing experimental procedure, for the following two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for those experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize the experimental length and cost. Facing the two challenges in those experimental scenarios, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and performs optimization of experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges in those experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus exempting the requirement of a parametric model at the beginning of an experiment; design optimization is

  7. In-flight results of adaptive attitude control law for a microsatellite

    NASA Astrophysics Data System (ADS)

    Pittet, C.; Luzi, A. R.; Peaucelle, D.; Biannic, J.-M.; Mignot, J.

    2015-06-01

    Because satellites usually do not experience large changes of mass, center of gravity or inertia in orbit, linear time invariant (LTI) controllers have been widely used to control their attitude. But, as the pointing requirements become more stringent and the satellite's structure more complex, with large steerable and/or deployable appendages and flexible modes occurring in the control bandwidth, one unique LTI controller is no longer sufficient. One solution consists of designing several LTI controllers, one for each set point, but the switching between them is difficult to tune and validate. Another interesting solution is to use adaptive controllers, which could present at least two advantages: first, as the controller automatically and continuously adapts to the set point without changing the structure, no switching logic is needed in the software; second, performance and stability of the closed-loop system can be assessed directly on the whole flight domain. To evaluate the real benefits of adaptive control for satellites, in terms of design, validation and performance, CNES selected it as an end-of-life experiment on the PICARD microsatellite. This paper describes the design, validation and in-flight results of the new adaptive attitude control law, compared to the nominal control law.

  8. Use of prism adaptation in children with unilateral brain lesion: Is it feasible?

    PubMed

    Riquelme, Inmaculada; Henne, Camille; Flament, Benoit; Legrain, Valéry; Bleyenheuft, Yannick; Hatem, Samar M

    2015-01-01

    Unilateral visuospatial deficits have been observed in children with brain damage. While the effectiveness of prism adaptation for treating unilateral neglect in adult stroke patients has been demonstrated previously, the usefulness of prism adaptation in a pediatric population is still unknown. The present study aims at evaluating the feasibility of prism adaptation in children with unilateral brain lesion and comparing the validity of a game procedure, designed as a child-friendly pediatric intervention, with the ecological task used for prism adaptation in adult patients. Twenty-one children with unilateral brain lesion were randomly assigned to a prism group wearing prismatic glasses, or a control group wearing neutral glasses during a bimanual task intervention. All children performed two different bimanual tasks on randomly assigned consecutive days: ecological tasks or game tasks. The efficacy of prism adaptation was measured by assessing its after-effects with visual open loop pointing (visuoproprioceptive test) and subjective straight-ahead pointing (proprioceptive test). Game tasks and ecological tasks produced similar after-effects. Prismatic glasses elicited a significant shift of visuospatial coordinates which was not observed in the control group. Prism adaptation performed with game tasks seems to be an effective procedure for obtaining after-effects in children with unilateral brain lesion. The usefulness of repetitive prism adaptation sessions as a therapeutic intervention in children with visuospatial deficits and/or neglect should be investigated in future studies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Cohabitants' perspective on housing adaptations: a piece of the puzzle.

    PubMed

    Granbom, Marianne; Taei, Afsaneh; Ekstam, Lisa

    2017-12-01

    As part of the Swedish state-funded healthcare system, housing adaptations are used to promote safe and independent living for disabled people in ordinary housing through the elimination of physical environmental barriers in the home. The aim of this study was to describe the cohabitants' expectations and experiences of how a housing adaptation, intended for the partner, would impact their everyday life. In-depth interviews were conducted with cohabitants of nine people applying for a housing adaptation, initially at the time of the application and then again 3 months after the housing adaptation was installed. A longitudinal analysis was performed including analysis procedures from Grounded Theory. The findings revealed the expectations and experiences in four categories: partners' activities and independence; cohabitants' everyday activities and caregiving; couples' shared recreational/leisure activities; and housing decisions. A core category putting the intervention into perspective was called 'Housing adaptations - A piece of the puzzle'. From the cohabitants' perspective, new insights on housing adaptations emerged, which are important to consider when planning and carrying out successful housing adaptations. © 2017 Nordic College of Caring Science.

  10. Human Health and Climate Change: Leverage Points for Adaptation in Urban Environments

    PubMed Central

    Proust, Katrina; Newell, Barry; Brown, Helen; Capon, Anthony; Browne, Chris; Burton, Anthony; Dixon, Jane; Mu, Lisa; Zarafu, Monica

    2012-01-01

    The design of adaptation strategies that promote urban health and well-being in the face of climate change requires an understanding of the feedback interactions that take place between the dynamical state of a city, the health of its people, and the state of the planet. Complexity, contingency and uncertainty combine to impede the growth of such systemic understandings. In this paper we suggest that the collaborative development of conceptual models can help a group to identify potential leverage points for effective adaptation. We describe a three-step procedure that leads from the development of a high-level system template, through the selection of a problem space that contains one or more of the group’s adaptive challenges, to a specific conceptual model of a sub-system of importance to the group. This procedure is illustrated by a case study of urban dwellers’ maladaptive dependence on private motor vehicles. We conclude that a system dynamics approach, revolving around the collaborative construction of a set of conceptual models, can help communities to improve their adaptive capacity, and so better meet the challenge of maintaining, and even improving, urban health in the face of climate change. PMID:22829795

  11. Toxicity Minimized Cryoprotectant Addition and Removal Procedures for Adherent Endothelial Cells

    PubMed Central

    Davidson, Allyson Fry; Glasscock, Cameron; McClanahan, Danielle R.; Benson, James D.; Higgins, Adam Z.

    2015-01-01

    Ice-free cryopreservation, known as vitrification, is an appealing approach for banking of adherent cells and tissues because it prevents dissociation and morphological damage that may result from ice crystal formation. However, current vitrification methods are often limited by the cytotoxicity of the concentrated cryoprotective agent (CPA) solutions that are required to suppress ice formation. Recently, we described a mathematical strategy for identifying minimally toxic CPA equilibration procedures based on the minimization of a toxicity cost function. Here we provide direct experimental support for the feasibility of these methods when applied to adherent endothelial cells. We first developed a concentration- and temperature-dependent toxicity cost function by exposing the cells to a range of glycerol concentrations at 21°C and 37°C, and fitting the resulting viability data to a first order cell death model. This cost function was then numerically minimized in our state constrained optimization routine to determine addition and removal procedures for 17 molal (mol/kg water) glycerol solutions. Using these predicted optimal procedures, we obtained 81% recovery after exposure to vitrification solutions, as well as successful vitrification with the relatively slow cooling and warming rates of 50°C/min and 130°C/min. In comparison, conventional multistep CPA equilibration procedures resulted in much lower cell yields of about 10%. Our results demonstrate the potential for rational design of minimally toxic vitrification procedures and pave the way for extension of our optimization approach to other adherent cell types as well as more complex systems such as tissues and organs. PMID:26605546
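
    The workflow described above (fit a first-order cell death model to viability data, then score CPA equilibration protocols with the resulting toxicity cost) can be sketched as follows. The viability numbers, the power-law rate form k(C) = a*C**b, and the candidate protocols are illustrative assumptions, not the authors' data or code:

    ```python
    # Illustrative sketch (not the authors' code): fit a first-order cell-death
    # model to viability data and score CPA addition protocols by accumulated
    # toxicity. All numbers below are made up for demonstration.
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical viability measurements: rows = glycerol molality, columns = hours.
    conc = np.array([2.0, 6.0, 10.0])              # mol/kg water
    t = np.array([0.0, 0.5, 1.0, 2.0])             # h
    viability = np.array([[1.00, 0.98, 0.96, 0.93],
                          [1.00, 0.90, 0.82, 0.68],
                          [1.00, 0.70, 0.50, 0.26]])

    # First-order death: V(t) = exp(-k(C) * t). Estimate k for each concentration.
    k_fit = np.array([curve_fit(lambda tt, k: np.exp(-k * tt), t, v)[0][0]
                      for v in viability])

    # Toxicity rate law k(C) = a * C**b (a common modeling assumption).
    (a, b), _ = curve_fit(lambda c, a, b: a * c**b, conc, k_fit, p0=(1e-3, 2.0))

    def accumulated_toxicity(concentration_steps, hold_times):
        """Cost J = sum k(C_i) * dt_i for a stepwise CPA addition protocol."""
        return sum(a * c**b * dt for c, dt in zip(concentration_steps, hold_times))

    # Compare a single-step jump with a gentler two-step protocol (same end point).
    print(accumulated_toxicity([10.0], [1.0]))
    print(accumulated_toxicity([5.0, 10.0], [0.5, 0.5]))
    ```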

  12. Anisotropic norm-oriented mesh adaptation for a Poisson problem

    NASA Astrophysics Data System (ADS)

    Brèthes, Gautier; Dervieux, Alain

    2016-10-01

    We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output with the consequence that, as mesh size is increased (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem under the form of the optimization, in the well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, the minimum of a norm of the approximation error. The norm is prescribed by the user, and the method allows addressing the case of multi-objective adaptation, for example adapting the mesh for drag, lift and moment in one shot in aerodynamics. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.

  13. General MoM Solutions for Large Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasenfest, B; Capolino, F; Wilton, D R

    2003-07-22

    This paper focuses on a numerical procedure that addresses the difficulties of dealing with large, finite arrays while preserving the generality and robustness of full-wave methods. We present a fast method based on approximating interactions between sufficiently separated array elements via a relatively coarse interpolation of the Green's function on a uniform grid commensurate with the array's periodicity. The interaction between the basis and testing functions is reduced to a three-stage process. The first stage is a projection of standard (e.g., RWG) subdomain bases onto a set of interpolation functions that interpolate the Green's function on the array face. This projection, which is used in a matrix/vector product for each array cell in an iterative solution process, need only be carried out once for a single cell and results in a low-rank matrix. An intermediate stage matrix/vector product computation involving the uniformly sampled Green's function is of convolutional form in the lateral (transverse) directions so that a 2D FFT may be used. The final stage is a third matrix/vector product computation involving a matrix resulting from projecting testing functions onto the Green's function interpolation functions; the low-rank matrix is either identical to (using Galerkin's method) or similar to that for the bases projection. An effective MoM solution scheme is developed for large arrays using a modification of the AIM (Adaptive Integral Method) method. The method permits the analysis of arrays with arbitrary contours and nonplanar elements. Both fill and solve times within the MoM method are improved with respect to more standard MoM solvers.
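
    The three-stage matrix/vector product can be illustrated with a minimal sketch. The fragment below assumes scalar Green's-function samples on the cell lattice and placeholder random data for the one-cell projection matrix; it is meant only to show how the lateral convolution stage maps onto a 2D FFT, not to reproduce the actual AIM-style implementation:

    ```python
    # Minimal sketch of the three-stage product under simplifying assumptions:
    # P, G, and all dimensions are placeholders, not the paper's data.
    import numpy as np

    rng = np.random.default_rng(0)
    Nx, Ny = 8, 8          # array cells in the two lateral directions
    n_basis = 12           # subdomain (e.g., RWG) bases per cell
    n_interp = 4           # Green's-function interpolation samples per cell

    P = rng.standard_normal((n_interp, n_basis))      # one-cell projection (low rank)
    G = rng.standard_normal((2 * Nx - 1, 2 * Ny - 1,  # sampled Green's function,
                             n_interp, n_interp))     # indexed by cell offset
    x = rng.standard_normal((Nx, Ny, n_basis))        # basis coefficients per cell

    # Stage 1: project bases onto the interpolation functions, cell by cell.
    q = np.einsum('pb,xyb->xyp', P, x)

    # Stage 2: lateral convolution with the sampled Green's function via 2D FFT.
    shape = (2 * Nx - 1, 2 * Ny - 1)
    Qh = np.fft.fft2(q, s=shape, axes=(0, 1))
    Gh = np.fft.fft2(G, s=shape, axes=(0, 1))
    Vh = np.einsum('xypq,xyq->xyp', Gh, Qh)
    v = np.fft.ifft2(Vh, axes=(0, 1)).real[Nx - 1:2 * Nx - 1, Ny - 1:2 * Ny - 1]

    # Stage 3: project back onto the testing functions (Galerkin: same P).
    y = np.einsum('pb,xyp->xyb', P, v)
    print(y.shape)   # (Nx, Ny, n_basis): far-interaction contribution per cell
    ```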

  14. Simultaneous contrast: evidence from licking microstructure and cross-solution comparisons.

    PubMed

    Dwyer, Dominic M; Lydall, Emma S; Hayward, Andrew J

    2011-04-01

    The microstructure of rats' licking responses was analyzed to investigate both "classic" simultaneous contrast (e.g., Flaherty & Largen, 1975) and a novel discrete-trial contrast procedure where access to an 8% test solution of sucrose was preceded by a sample of either 2%, 8%, or 32% sucrose (Experiments 1 and 2, respectively). Consumption of a given concentration of sucrose was higher when consumed alongside a low rather than high concentration comparison solution (positive contrast) and consumption of a given concentration of sucrose was lower when consumed alongside a high rather than a low concentration comparison solution (negative contrast). Furthermore, positive contrast increased the size of lick clusters while negative contrast decreased the size of lick clusters. Lick cluster size has a positive monotonic relationship with the concentration of palatable solutions and so positive and negative contrasts produced changes in lick cluster size that were analogous to raising or lowering the concentration of the test solution respectively. Experiment 3 utilized the discrete-trial procedure and compared contrast between two solutions of the same type (sucrose-sucrose or maltodextrin-maltodextrin) or contrast across solutions (sucrose-maltodextrin or maltodextrin-sucrose). Contrast effects on consumption were present, but reduced in size, in the cross-solution conditions. Moreover, lick cluster sizes were not affected at all by cross-solution contrasts as they were by same-solution contrasts. These results are consistent with the idea that simultaneous contrast effects depend, at least partially, on sensory mechanisms.

  15. Coupling internal cerebellar models enhances online adaptation and supports offline consolidation in sensorimotor tasks

    PubMed Central

    Passot, Jean-Baptiste; Luque, Niceto R.; Arleo, Angelo

    2013-01-01

    The cerebellum is thought to mediate sensorimotor adaptation through the acquisition of internal models of the body-environment interaction. These representations can be of two types, identified as forward and inverse models. The first predicts the sensory consequences of actions, while the second provides the correct commands to achieve desired state transitions. In this paper, we propose a composite architecture consisting of multiple cerebellar internal models to account for the adaptation performance of humans during sensorimotor learning. The proposed model takes inspiration from the cerebellar microcomplex circuit, and employs spiking neurons to process information. We investigate the intrinsic properties of the cerebellar circuitry subserving efficient adaptation properties, and we assess the complementary contributions of internal representations by simulating our model in a procedural adaptation task. Our simulation results suggest that the coupling of internal models enhances learning performance significantly (compared with independent forward and inverse models), and it allows for the reproduction of human adaptation capabilities. Furthermore, we provide a computational explanation for the performance improvement observed after one night of sleep in a wide range of sensorimotor tasks. We predict that internal model coupling is a necessary condition for the offline consolidation of procedural memories. PMID:23874289

  16. Entropy-based adaptive attitude estimation

    NASA Astrophysics Data System (ADS)

    Kiani, Maryam; Barzegar, Aylin; Pourtakdoust, Seid H.

    2018-03-01

    Gaussian approximation filters have increasingly been developed to enhance the accuracy of attitude estimation in space missions. The effective employment of these algorithms demands accurate knowledge of system dynamics and measurement models, as well as their noise characteristics, which are usually unavailable or unreliable. An innovation-based adaptive filtering approach has been adopted as a solution to this problem; however, it exhibits two major challenges, namely appropriate window size selection and guaranteed positive definiteness of the estimated noise covariance matrices. The current work presents two novel techniques based on relative entropy and confidence level concepts in order to address the abovementioned drawbacks. The proposed adaptation techniques are applied to two nonlinear state estimation algorithms, the extended Kalman filter and the cubature Kalman filter, for attitude estimation of a low Earth orbit satellite equipped with three-axis magnetometers and Sun sensors. The effectiveness of the proposed adaptation scheme is demonstrated by means of a comprehensive sensitivity analysis of the system and environmental parameters, using extensive independent Monte Carlo simulations.
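
    As context, the innovation-based adaptation that this entry takes as its starting point can be sketched with a simple linear filter: the measurement-noise covariance is re-estimated from a sliding window of innovations. The 1-D constant-velocity model, window length, and flooring constant below are assumptions for illustration; the paper's entropy- and confidence-level-based window selection is not reproduced here:

    ```python
    # Hedged sketch of innovation-based measurement-noise adaptation inside a
    # (here, linear) Kalman filter. Model and constants are illustrative.
    import numpy as np
    from collections import deque

    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])               # position measurement
    Q = 1e-4 * np.eye(2)

    def run_adaptive_kf(zs, window=20, R0=1.0):
        x, P = np.zeros(2), np.eye(2)
        R = np.array([[R0]])
        innovations = deque(maxlen=window)
        for z in zs:
            # Predict
            x, P = F @ x, F @ P @ F.T + Q
            # Innovation, collected in a sliding window for covariance adaptation
            nu = z - H @ x
            innovations.append(nu)
            if len(innovations) == window:
                C = np.mean([n[:, None] * n[None, :] for n in innovations], axis=0)
                R = np.maximum(C - H @ P @ H.T, 1e-6)   # keep R positive
            # Update
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ nu
            P = (np.eye(2) - K @ H) @ P
        return x, R

    rng = np.random.default_rng(1)
    truth = np.cumsum(0.5 * np.ones(200))
    zs = truth + rng.normal(0.0, 2.0, size=200)   # true R = 4, initial guess 1
    print(run_adaptive_kf(zs))
    ```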

  17. Buffered lidocaine and bupivacaine mixture - the ideal local anesthetic solution?

    PubMed

    Best, Corliss A; Best, Alyssa A; Best, Timothy J; Hamilton, Danielle A

    2015-01-01

    The use of injectable local anesthetic solutions to facilitate pain-free surgery is an integral component of many procedures performed by the plastic surgeon. In many instances, a solution that has both rapid onset and prolonged duration of analgesia is optimal. A combination of lidocaine and bupivacaine, plain or with epinephrine, is readily available in most Canadian health care settings where such procedures are performed, and fulfills these criteria. However, commercially available solutions of both medications are acidic and cause a burning sensation on injection. Buffering to neutral pH with sodium bicarbonate is a practical method to mitigate the burning sensation, and has the added benefit of increasing the fraction of nonionized lipid soluble drug available. The authors report on the proportions of the three drugs to yield a neutral pH, and the results of an initial survey regarding the use of the combined solution with epinephrine in hand surgery.

  18. Assessing Children's Implicit Attitudes Using the Affect Misattribution Procedure

    ERIC Educational Resources Information Center

    Williams, Amanda; Steele, Jennifer R.; Lipman, Corey

    2016-01-01

    In the current research, we examined whether the Affect Misattribution Procedure (AMP) could be successfully adapted as an implicit measure of children's attitudes. We tested this possibility in 3 studies with 5- to 10-year-old children. In Study 1, we found evidence that children misattribute affect elicited by attitudinally positive (e.g., cute…

  19. A self-organizing Lagrangian particle method for adaptive-resolution advection-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Reboux, Sylvain; Schrader, Birte; Sbalzarini, Ivo F.

    2012-05-01

    We present a novel adaptive-resolution particle method for continuous parabolic problems. In this method, particles self-organize in order to adapt to local resolution requirements. This is achieved by pseudo forces that are designed so as to guarantee that the solution is always well sampled and that no holes or clusters develop in the particle distribution. The particle sizes are locally adapted to the length scale of the solution. Differential operators are consistently evaluated on the evolving set of irregularly distributed particles of varying sizes using discretization-corrected operators. The method does not rely on any global transforms or mapping functions. After presenting the method and its error analysis, we demonstrate its capabilities and limitations on a set of two- and three-dimensional benchmark problems. These include advection-diffusion, the Burgers equation, the Buckley-Leverett five-spot problem, and curvature-driven level-set surface refinement.
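
    The self-organization idea can be illustrated in one dimension: particles exchange pseudo-forces with their neighbours so that the local spacing relaxes toward a prescribed resolution function. The force law, constants, and resolution profile below are assumptions chosen for clarity, not the authors' scheme:

    ```python
    # Illustrative 1-D sketch of particles self-organizing via pairwise
    # pseudo-forces so that local spacing follows a target resolution h(x).
    import numpy as np

    def target_spacing(x):
        # Finer resolution near x = 0.5, coarser elsewhere.
        return 0.02 + 0.08 * np.abs(x - 0.5)

    def relax_particles(x, steps=2000, dt=0.2):
        for _ in range(steps):
            force = np.zeros_like(x)
            order = np.argsort(x)
            xs = x[order]
            # Pseudo-force between consecutive neighbours, driving their gap
            # toward the local target spacing (positive f pushes them apart).
            gap = np.diff(xs)
            h = target_spacing(0.5 * (xs[:-1] + xs[1:]))
            f = h - gap
            force[order[:-1]] -= f
            force[order[1:]] += f
            x = np.clip(x + dt * force, 0.0, 1.0)
        return np.sort(x)

    x = relax_particles(np.random.default_rng(2).uniform(0, 1, 60))
    print(np.round(np.diff(x), 3))   # spacing should roughly track target_spacing
    ```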

  20. Support of surgical process modeling by using adaptable software user interfaces

    NASA Astrophysics Data System (ADS)

    Neumuth, T.; Kaschek, B.; Czygan, M.; Goldstein, D.; Strauß, G.; Meixensberger, J.; Burgert, O.

    2010-03-01

    Surgical Process Modeling (SPM) is a powerful method for acquiring data about the evolution of surgical procedures. Surgical Process Models are used in a variety of use cases including evaluation studies, requirements analysis and procedure optimization, surgical education, and workflow management scheme design. This work proposes the use of adaptive, situation-aware user interfaces for observation support software for SPM. We developed a method to support the modeling of the observer by using an ontological knowledge base. This is used to drive the graphical user interface for the observer to restrict the search space of terminology depending on the current situation. The evaluation study shows that the workload of the observer was decreased significantly by using adaptive user interfaces. A total of 54 SPM observation protocols were analyzed using the NASA Task Load Index, and it was shown that the adaptive user interface significantly reduces the observer's workload in the criteria of effort, mental demand and temporal demand, helping the observer to concentrate on the essential task of modeling the surgical process.

  1. Adaptive Meshing Techniques for Viscous Flow Calculations on Mixed Element Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1997-01-01

    An adaptive refinement strategy based on hierarchical element subdivision is formulated and implemented for meshes containing arbitrary mixtures of tetrahedra, hexahedra, prisms and pyramids. Special attention is given to keeping memory overheads as low as possible. This procedure is coupled with an algebraic multigrid flow solver which operates on mixed-element meshes. Inviscid flows as well as viscous flows are computed on adaptively refined tetrahedral, hexahedral, and hybrid meshes. The efficiency of the method is demonstrated by generating an adapted hexahedral mesh containing 3 million vertices on a relatively inexpensive workstation.

  2. Media-fill simulation tests in manual and robotic aseptic preparation of injection solutions in syringes.

    PubMed

    Krämer, Irene; Federici, Matteo; Kaiser, Vanessa; Thiesen, Judith

    2016-04-01

    -amber solutions. In addition, the reliability of the nutrient medium and the process was demonstrated by positive growth promotion tests with S. epidermidis. During automated preparation the recommended limits < 1 cfu per settle/contact plate set for cleanroom Grade A zones were not exceeded in the carousel and working area, but were exceeded in the loading area of the robot. During manual preparation, the number of cfus detected on settle/contact plates inside the workbenches lay far below the limits. The number of cfus detected on fingertips exceeded the limit several times during manual preparation but not during automated preparation. There was no difference in the microbial contamination rate depending on the extent of cleaning and disinfection of the robot. Extensive media-fill tests simulating manual and automated preparation of ready-to-use cytotoxic injection solutions revealed the same level of sterility for both procedures. The results of supplemental environmental controls confirmed that the aseptic procedures are well controlled. As there was no difference in the microbial contamination rates of the media preparations depending on the extent of cleaning and disinfection of the robot, the results were used to adapt the respective standard operating procedures. © The Author(s) 2014.

  3. Robust Adaptive Thresholder For Document Scanning Applications

    NASA Astrophysics Data System (ADS)

    Hsing, To R.

    1982-12-01

    In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to: (1) a wide range of different color backgrounds; (2) density variations of printed text information; and (3) the shading effect caused by the optical systems, the use of adaptive thresholding to enhance the useful information is highly desired. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm which can dynamically update the black and white reference levels to optimize a local adaptive threshold function. High image quality can be obtained with this algorithm for different types of simulated test patterns. The software algorithm is described, and experimental results are presented to illustrate the procedures. Results also show that the techniques described here can be used for real-time signal processing in a variety of applications.
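
    A minimal sketch of such a memory-type thresholder is shown below: running black and white reference levels are updated with an exponential memory, and the local threshold is taken between them. The decay factor, bias, and synthetic scanline are assumptions for illustration:

    ```python
    # Illustrative memory-type adaptive thresholder for one scanline.
    import numpy as np

    def adaptive_threshold(scanline, alpha=0.05, bias=0.5):
        """Binarize one scanline of grey values (0..255)."""
        white_ref, black_ref = 255.0, 0.0
        out = np.empty(len(scanline), dtype=np.uint8)
        for i, g in enumerate(scanline):
            threshold = black_ref + bias * (white_ref - black_ref)
            out[i] = 1 if g > threshold else 0
            # Memory update: pull the matching reference level toward the sample.
            if out[i]:
                white_ref = (1 - alpha) * white_ref + alpha * g
            else:
                black_ref = (1 - alpha) * black_ref + alpha * g
        return out

    # Synthetic scanline: dark text strokes on a background with slow shading.
    x = np.arange(500)
    line = 200 - 0.1 * x                        # shading effect across the page
    line[100:110] -= 120                        # dark strokes
    line[300:310] -= 120
    print(adaptive_threshold(line)[95:115])
    ```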

  4. Retinal adaptation abnormalities in primary open-angle glaucoma.

    PubMed

    Dul, Mitchell; Ennis, Robert; Radner, Shira; Lee, Barry; Zaidi, Qasim

    2015-01-22

    Dynamic color and brightness adaptation are crucial for visual functioning. The effects of glaucoma on retinal ganglion cells (RGCs) could compromise these functions. We have previously used slow dynamic changes of light at moderate intensities to measure the speed and magnitude of subtractive adaptation in RGCs. We used the same procedure to test if RGC abnormalities cause slower and weaker adaptation for patients with glaucoma when compared to age-similar controls. We assessed adaptation deficits in specific classes of RGCs by testing along the three cardinal color axes that isolate konio, parvo, and magno RGCs. For one eye each of 10 primary open-angle glaucoma patients and their age-similar controls, we measured the speed and magnitude of adapting to 1/32 Hz color modulations along the three cardinal axes, at central fixation and 8° superior, inferior, nasal, and temporal to fixation. In all 15 comparisons (5 locations × 3 color axes), average adaptation was slower and weaker for glaucoma patients than for controls. Adaptation developed slower at central targets than at 8° eccentricities for controls, but not for patients. Adaptation speed and magnitude differed between affected and control eyes even at retinal locations showing no visual field loss with clinical perimetry. Neural adaptation is weaker in glaucoma patients for all three classes of RGCs. Since adaptation abnormalities are manifested even at retinal locations not exhibiting a visual field loss, this novel form of assessment may offer a functional insight into glaucoma and an early diagnosis tool. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.

  5. Retinal Adaptation Abnormalities in Primary Open-Angle Glaucoma

    PubMed Central

    Dul, Mitchell; Ennis, Robert; Radner, Shira; Lee, Barry; Zaidi, Qasim

    2015-01-01

    Purpose. Dynamic color and brightness adaptation are crucial for visual functioning. The effects of glaucoma on retinal ganglion cells (RGCs) could compromise these functions. We have previously used slow dynamic changes of light at moderate intensities to measure the speed and magnitude of subtractive adaptation in RGCs. We used the same procedure to test if RGC abnormalities cause slower and weaker adaptation for patients with glaucoma when compared to age-similar controls. We assessed adaptation deficits in specific classes of RGCs by testing along the three cardinal color axes that isolate konio, parvo, and magno RGCs. Methods. For one eye each of 10 primary open-angle glaucoma patients and their age-similar controls, we measured the speed and magnitude of adapting to 1/32 Hz color modulations along the three cardinal axes, at central fixation and 8° superior, inferior, nasal, and temporal to fixation. Results. In all 15 comparisons (5 locations × 3 color axes), average adaptation was slower and weaker for glaucoma patients than for controls. Adaptation developed slower at central targets than at 8° eccentricities for controls, but not for patients. Adaptation speed and magnitude differed between affected and control eyes even at retinal locations showing no visual field loss with clinical perimetry. Conclusions. Neural adaptation is weaker in glaucoma patients for all three classes of RGCs. Since adaptation abnormalities are manifested even at retinal locations not exhibiting a visual field loss, this novel form of assessment may offer a functional insight into glaucoma and an early diagnosis tool. PMID:25613950

  6. A generalized procedure for the prediction of multicomponent adsorption equilibria

    DOE PAGES

    Ladshaw, Austin; Yiacoumi, Sotira; Tsouris, Costas

    2015-04-07

    Prediction of multicomponent adsorption equilibria has been investigated for several decades. While there are theories available to predict the adsorption behavior of ideal mixtures, there are few purely predictive theories to account for nonidealities in real systems. Most models available for dealing with nonidealities contain interaction parameters that must be obtained through correlation with binary-mixture data. However, as the number of components in a system grows, the number of parameters needed to be obtained increases exponentially. Here, a generalized procedure is proposed, as an extension of the predictive real adsorbed solution theory, for determining the parameters of any activity model, for any number of components, without correlation. This procedure is then combined with the adsorbed solution theory to predict the adsorption behavior of mixtures. As this method can be applied to any isotherm model and any activity model, it is referred to as the generalized predictive adsorbed solution theory.

  7. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    NASA Astrophysics Data System (ADS)

    Sekhar, S. Chandra; Sreenivas, TV

    2004-12-01

    We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).
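
    The zero-crossing ingredient of the method can be sketched as follows: consecutive zero-crossings of a constant-amplitude sinusoid yield local half-period estimates, and a low-order polynomial fitted over a window of them approximates the IF. The window length here is fixed rather than chosen by the intersection-of-confidence-intervals rule proposed in the paper, and all signal parameters are illustrative:

    ```python
    # Hedged sketch of zero-crossing-based IF estimation with a fixed window.
    import numpy as np

    fs = 8000.0
    t = np.arange(0, 1.0, 1 / fs)
    inst_freq = 200 + 150 * t + 80 * np.sin(2 * np.pi * 2 * t)   # true IF (Hz)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    x = np.cos(phase) + 0.01 * np.random.default_rng(3).standard_normal(t.size)

    # Zero-crossing instants by linear interpolation between sign changes.
    idx = np.where(np.diff(np.sign(x)) != 0)[0]
    tz = t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])

    # Raw IF samples: half a period elapses between consecutive crossings.
    t_mid = 0.5 * (tz[:-1] + tz[1:])
    f_raw = 0.5 / np.diff(tz)

    def if_estimate(t_query, window=15, order=2):
        """Local polynomial fit to the raw IF samples around t_query."""
        k = np.argmin(np.abs(t_mid - t_query))
        lo, hi = max(0, k - window), min(len(t_mid), k + window + 1)
        coeffs = np.polyfit(t_mid[lo:hi], f_raw[lo:hi], order)
        return np.polyval(coeffs, t_query)

    print(if_estimate(0.5), 200 + 150 * 0.5 + 80 * np.sin(2 * np.pi * 2 * 0.5))
    ```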

  8. Procedures for the salvage and necropsy of the dugong (Dugong dugon)

    USGS Publications Warehouse

    Eros, Carole; Marsh, Helene; Bonde, Robert K.; O'Shea, Thomas A.; Beck, Cathy A.; Recchia, Cheri; Dobbs, Kirstin; Turner, Malcolm; Lemm, Stephanie; Pears, Rachel; Bowater, Rachel

    2007-01-01

    Data and specimens collected from dugong carcasses and live stranded individuals provide vital information for research and management agencies. The ability to assign a cause of death (natural and/or human induced) to a carcass assists managers to identify major threats to a population in certain areas and to evaluate and adapt management measures. Data collected from dugong carcasses have contributed to research in areas such as life history, feeding biology, investigating the stock structure/genetics of dugongs, contaminants studies, heavy metal analyses, parasitology, and the effects of habitat change. Adapted from the 'Manual of Procedures for the Salvage and Necropsy of Carcasses of the West Indian Manatee (Trichechus manatus),' this manual provides a detailed guide for dugong (Dugong dugon) carcass handling and necropsy procedures. It is intended to be used as a resource and training guide for anyone involved in dugong incidents who may lack dugong expertise.

  9. Adaptive management for a turbulent future

    USGS Publications Warehouse

    Allen, Craig R.; Fontaine, J.J.; Pope, K.L.; Garmestani, A.S.

    2011-01-01

    The challenges that face humanity today differ from the past because as the scale of human influence has increased, our biggest challenges have become global in nature, and formerly local problems that could be addressed by shifting populations or switching resources, now aggregate (i.e., "scale up") limiting potential management options. Adaptive management is an approach to natural resource management that emphasizes learning through management based on the philosophy that knowledge is incomplete and much of what we think we know is actually wrong. Adaptive management has explicit structure, including careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. It is evident that adaptive management has matured, but it has also reached a crossroads. Practitioners and scientists have developed adaptive management and structured decision making techniques, and mathematicians have developed methods to reduce the uncertainties encountered in resource management, yet there continues to be misapplication of the method and misunderstanding of its purpose. Ironically, the confusion over the term "adaptive management" may stem from the flexibility inherent in the approach, which has resulted in multiple interpretations of "adaptive management" that fall along a continuum of complexity and a priori design. Adaptive management is not a panacea for the navigation of 'wicked problems' as it does not produce easy answers, and is only appropriate in a subset of natural resource management problems where both uncertainty and controllability are high. Nonetheless, the conceptual underpinnings of adaptive management are simple; there will always be inherent uncertainty and unpredictability in the dynamics and behavior of complex social-ecological systems, but management decisions must still be made, and whenever possible, we should incorporate

  10. Adaptive Management for a Turbulent Future

    USGS Publications Warehouse

    Allen, Craig R.; Fontaine, Joseph J.; Pope, Kevin L.; Garmestani, Ahjond S.

    2011-01-01

    The challenges that face humanity today differ from the past because as the scale of human influence has increased, our biggest challenges have become global in nature, and formerly local problems that could be addressed by shifting populations or switching resources, now aggregate (i.e., "scale up") limiting potential management options. Adaptive management is an approach to natural resource management that emphasizes learning through management based on the philosophy that knowledge is incomplete and much of what we think we know is actually wrong. Adaptive management has explicit structure, including careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. It is evident that adaptive management has matured, but it has also reached a crossroads. Practitioners and scientists have developed adaptive management and structured decision making techniques, and mathematicians have developed methods to reduce the uncertainties encountered in resource management, yet there continues to be misapplication of the method and misunderstanding of its purpose. Ironically, the confusion over the term "adaptive management" may stem from the flexibility inherent in the approach, which has resulted in multiple interpretations of "adaptive management" that fall along a continuum of complexity and a priori design. Adaptive management is not a panacea for the navigation of 'wicked problems' as it does not produce easy answers, and is only appropriate in a subset of natural resource management problems where both uncertainty and controllability are high. Nonetheless, the conceptual underpinnings of adaptive management are simple; there will always be inherent uncertainty and unpredictability in the dynamics and behavior of complex social-ecological systems, but management decisions must still be made, and whenever possible, we should incorporate

  11. Proposal for an Evaluation Method for the Performance of Work Procedures.

    PubMed

    Mohammed, Mouda; Mébarek, Djebabra; Wafa, Boulagouas; Makhlouf, Chati

    2016-12-01

    Noncompliance of operators with work procedures is a recurrent problem. This human behavior has been said to be situational and has been studied by many different approaches (ergonomic and others), which take noncompliance with work procedures as a given and seek to analyze its causes as well as its consequences. The objective of the proposed method is to solve this problem by focusing on the performance of work procedures and ensuring improved performance on a continuous basis. This study has multiple results: (1) assessment of the work procedures' performance by a multicriteria approach; (2) the use of a continuous improvement approach as a framework for the sustainability of the assessment method of work procedures' performance; and (3) adaptation of the Stop-Card as a facilitator support for continuous improvement of work procedures. The proposed method emphasizes the value of continuously improving work procedures, in contrast with conventional approaches, which take noncompliance with work procedures as an established fact and seek to analyze the cause-effect relationships related to this unacceptable phenomenon, especially in strategic industries.

  12. Solution-mediated cladding doping of commercial polymer optical fibers

    NASA Astrophysics Data System (ADS)

    Stajanca, Pavol; Topolniak, Ievgeniia; Pötschke, Samuel; Krebber, Katerina

    2018-03-01

    Solution doping of commercial polymethyl methacrylate (PMMA) polymer optical fibers (POFs) is presented as a novel approach for the preparation of custom cladding-doped POFs (CD-POFs). The presented method is based on a solution-mediated diffusion of dopant molecules into the fiber cladding upon soaking of POFs in a methanol-dopant solution. The method was tested on three different commercial POFs using Rhodamine B as a fluorescent dopant. The dynamics of the diffusion process were studied in order to optimize the doping procedure in terms of the selection of the most suitable POF, doping time and conditions. Using the optimized procedure, a longer segment of fluorescent CD-POF was prepared and its performance was characterized. The fiber's potential for sensing and illumination applications was demonstrated and discussed. The proposed method represents a simple and inexpensive way to fabricate custom, short- to medium-length CD-POFs with various dopants.

  13. Adaptive optics image restoration algorithm based on wavefront reconstruction and adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen

    2016-11-01

    To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method in this paper. First, wavefront reconstruction using Zernike polynomials is used to obtain an initial estimate of the point spread function (PSF). Then, we develop our proposed iterative solutions for AO image restoration, addressing the joint deconvolution issue. Image restoration experiments are performed to verify the restoration effect of our proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measures (for a real AO image) from our algorithm are increased by 36.92% and 27.44%, respectively, the computation times are decreased by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.

  14. Bursting endemic bubbles in an adaptive network

    NASA Astrophysics Data System (ADS)

    Sherborne, N.; Blyuss, K. B.; Kiss, I. Z.

    2018-04-01

    The spread of an infectious disease is known to change people's behavior, which in turn affects the spread of disease. Adaptive network models that account for both epidemic and behavioral change have found oscillations, but in an extremely narrow region of the parameter space, which contrasts with intuition and available data. In this paper we propose a simple susceptible-infected-susceptible epidemic model on an adaptive network with time-delayed rewiring, and show that oscillatory solutions are now present in a wide region of the parameter space. Altering the transmission or rewiring rates reveals the presence of an endemic bubble—an enclosed region of the parameter space where oscillations are observed.
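
    A crude discrete-time stand-in for such a model is sketched below: susceptible nodes either catch the infection from infected neighbours or request a rewiring of the connecting edge, and requests are executed only after a fixed delay. The rates, delay, and network size are illustrative assumptions, not the parameter values analysed in the paper:

    ```python
    # Sketch of an adaptive SIS simulation with delayed rewiring (illustrative).
    import random
    import networkx as nx

    def adaptive_sis(n=500, k=6, beta=0.08, gamma=0.05, w=0.05, delay=5, steps=400):
        g = nx.erdos_renyi_graph(n, k / (n - 1), seed=0)
        infected = set(random.sample(list(g.nodes), 25))
        pending = []                      # (due_step, susceptible, infected_nbr)
        prevalence = []
        for step in range(steps):
            new_inf, new_rec = set(), set()
            for u, v in list(g.edges):
                s, i = (u, v) if v in infected and u not in infected else (v, u)
                if i in infected and s not in infected:
                    if random.random() < beta:
                        new_inf.add(s)
                    elif random.random() < w:
                        pending.append((step + delay, s, i))   # rewire later
            for i in infected:
                if random.random() < gamma:
                    new_rec.add(i)
            # Execute rewiring requests whose delay has elapsed.
            for due, s, i in [p for p in pending if p[0] <= step]:
                if g.has_edge(s, i):
                    g.remove_edge(s, i)
                    candidates = [x for x in g.nodes
                                  if x not in infected and x != s]
                    if candidates:
                        g.add_edge(s, random.choice(candidates))
            pending = [p for p in pending if p[0] > step]
            infected = (infected | new_inf) - new_rec
            prevalence.append(len(infected) / n)
        return prevalence

    print(adaptive_sis()[-5:])
    ```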

  15. Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Key, K.; Ovall, J.; Holst, M.

    2014-12-01

    We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal oriented

  16. Adaptive surrogate model based multiobjective optimization for coastal aquifer management

    NASA Astrophysics Data System (ADS)

    Song, Jian; Yang, Yun; Wu, Jianfeng; Wu, Jichun; Sun, Xiaomin; Lin, Jin

    2018-06-01

    In this study, a novel surrogate model assisted multiobjective memetic algorithm (SMOMA) is developed for optimal pumping strategies of large-scale coastal groundwater problems. The proposed SMOMA integrates an efficient data-driven surrogate model with an improved non-dominated sorting genetic algorithm-II (NSGAII) that employs a local search operator to accelerate its convergence in optimization. The surrogate model based on Kernel Extreme Learning Machine (KELM) is developed and evaluated as an approximate simulator to generate the patterns of regional groundwater flow and salinity levels in coastal aquifers, thereby reducing a huge computational burden. The KELM model is adaptively trained during the evolutionary search to satisfy the desired fidelity level of the surrogate, so that it inhibits error accumulation in forecasting and converges correctly to the true Pareto-optimal front. The proposed methodology is then applied to a large-scale coastal aquifer management problem in Baldwin County, Alabama. Objectives of minimizing the saltwater mass increase and maximizing the total pumping rate in the coastal aquifers are considered. The optimal solutions achieved by the proposed adaptive surrogate model are compared against those obtained from a one-shot surrogate model and the original simulation model. The adaptive surrogate model not only improves the prediction accuracy of Pareto-optimal solutions compared with the one-shot surrogate model, but also maintains the quality of Pareto-optimal solutions obtained by NSGAII coupled with the original simulation model, while retaining the advantage of surrogate models in reducing the computational burden, with time savings of up to 94%. This study shows that the proposed methodology is a computationally efficient and promising tool for multiobjective optimization of coastal aquifer management.
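
    The KELM surrogate at the core of this approach admits a compact sketch: with an RBF kernel, the output weights follow from a single regularized linear solve. The toy two-input response standing in for the groundwater simulation model, and the kernel and regularization parameters, are assumptions for illustration:

    ```python
    # Minimal Kernel Extreme Learning Machine (KELM) surrogate sketch.
    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    class KELM:
        def __init__(self, C=100.0, gamma=1.0):
            self.C, self.gamma = C, gamma

        def fit(self, X, y):
            self.X = X
            K = rbf_kernel(X, X, self.gamma)
            # Output weights from a ridge-like closed-form solve.
            self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, y)
            return self

        def predict(self, Xq):
            return rbf_kernel(Xq, self.X, self.gamma) @ self.beta

    # Placeholder response: "saltwater mass increase" vs. two normalized pumping rates.
    def simulator(x):
        return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 1, size=(80, 2))
    model = KELM(C=1e3, gamma=5.0).fit(X, simulator(X))
    Xq = rng.uniform(0, 1, size=(5, 2))
    print(np.c_[model.predict(Xq), simulator(Xq)])
    ```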

  17. Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockard, David P.

    2013-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids were generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for sound pressure levels obtained by using these CFD solutions as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine grid solutions and experimental data. In addition, the grid adaptation strategy discussed here simplifies the grid generation process, and results in improved computational efficiency of CFD simulations.

  18. A coupled Eulerian/Lagrangian method for the solution of three-dimensional vortical flows

    NASA Technical Reports Server (NTRS)

    Felici, Helene Marie

    1992-01-01

    A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of three-dimensional rotational flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method using particle markers is added to the Eulerian time-marching procedure and provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state vector along the particle trajectories. The Lagrangian correction technique does not require any a priori information on the structure or position of the vortical regions. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers, used as 'accuracy boosters,' take advantage of the accurate convection description of the Lagrangian solution and enhance the vorticity and entropy capturing capabilities of standard Eulerian finite-volume methods. The combined solution procedure is tested in several applications. The convection of a Lamb vortex in a straight channel is used as an unsteady compressible flow preservation test case. The other test cases concern steady incompressible flow calculations and include the preservation of a turbulent inlet velocity profile, the swirling flow in a pipe, and the constant stagnation pressure flow and secondary flow calculations in bends. The last application deals with the external flow past a wing, with emphasis on the trailing vortex solution. The improvement due to the addition of the Lagrangian correction technique is measured by comparison with analytical solutions when available or with Eulerian solutions on finer grids. The use of the combined Eulerian/Lagrangian scheme results in substantially lower grid resolution requirements than the standard Eulerian scheme for a given solution accuracy.
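
    The Lagrangian ingredient, integrating particle markers through an interpolated Eulerian velocity field, can be sketched as follows. The steady analytic vortex sampled on a grid and the second-order Runge-Kutta step are illustrative assumptions, not the paper's flow solver:

    ```python
    # Sketch: advect a particle marker through an interpolated Eulerian field.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Eulerian velocity samples on a uniform grid (solid-body-like vortex).
    x = y = np.linspace(-1.0, 1.0, 65)
    X, Y = np.meshgrid(x, y, indexing='ij')
    U, V = -Y, X
    u_interp = RegularGridInterpolator((x, y), U)
    v_interp = RegularGridInterpolator((x, y), V)

    def velocity(p):
        return np.array([u_interp([p])[0], v_interp([p])[0]])

    def advect(p, dt=0.01, steps=628):
        """RK2 (midpoint) integration of a marker along the Eulerian field."""
        for _ in range(steps):
            k1 = velocity(p)
            p = p + dt * velocity(p + 0.5 * dt * k1)
        return p

    start = np.array([0.5, 0.0])
    end = advect(start)           # after ~2*pi time units, back near the start
    print(start, end)
    ```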

  19. Rheology of Self-Assembling Silk Fibroin Solutions

    NASA Astrophysics Data System (ADS)

    Zhou, Rui; Chen, Song-Bi; Yuan, Xue-Feng

    2008-07-01

    A robust procedure for preparation of aqueous silk fibroin solutions with a range of concentration up to 25 wt% from domestic Bombyx mori cocoon shells has been established. We have carried out molecular and rheometric characterizations of silk fibroin solutions, and constructed an equilibrium phase diagram. The sharp sol-gel transition can be exploited for rapid solidification of micro-morphological structure. We will discuss the correlations between fluid formulation, rheological properties and processibility of silk fibroin in the talk.

  20. Parallel, Gradient-Based Anisotropic Mesh Adaptation for Re-entry Vehicle Configurations

    NASA Technical Reports Server (NTRS)

    Bibb, Karen L.; Gnoffo, Peter A.; Park, Michael A.; Jones, William T.

    2006-01-01

    Two gradient-based adaptation methodologies have been implemented into the Fun3d refine GridEx infrastructure. A spring-analogy adaptation, which provides for nodal movement to cluster mesh nodes in the vicinity of strong shocks, has been extended for general use within Fun3d, and is demonstrated for a 70° sphere cone at Mach 2. A more general feature-based adaptation metric has been developed for use with the adaptation mechanics available in Fun3d, and is applicable to any unstructured, tetrahedral, flow solver. The basic functionality of general adaptation is explored through a case of flow over the forebody of a 70° sphere cone at Mach 6. A practical application of Mach 10 flow over an Apollo capsule, computed with the Felisa flow solver, is given to compare the adaptive mesh refinement with uniform mesh refinement. The examples of the paper demonstrate that the gradient-based adaptation capability as implemented can give an improvement in solution quality.

  1. Adaptive time stepping for fluid-structure interaction solvers

    DOE PAGES

    Mayr, M.; Wall, W. A.; Gee, M. W.

    2017-12-22

    In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.
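
    The time step size controller idea can be sketched with a scalar model problem: an embedded pair of integrators yields a local error estimate, and the next step size is scaled by (tol/error)^(1/(p+1)) within safety bounds. The test equation, integrator pair, and controller constants below are assumptions for illustration, not the FSI solver's implementation:

    ```python
    # Hedged sketch of error-based adaptive time step selection on a scalar ODE.
    import numpy as np

    def f(t, y):                    # stiff-ish scalar test problem
        return -10.0 * (y - np.sin(t)) + np.cos(t)

    def step_pair(t, y, dt):
        """Heun (2nd order) step plus explicit Euler (1st order) for error estimation."""
        k1 = f(t, y)
        k2 = f(t + dt, y + dt * k1)
        y_low = y + dt * k1                     # order 1
        y_high = y + 0.5 * dt * (k1 + k2)       # order 2
        return y_high, abs(y_high - y_low)

    def integrate(t_end=5.0, tol=1e-5, dt=0.1):
        t, y, n_steps = 0.0, 1.0, 0
        while t < t_end:
            dt = min(dt, t_end - t)
            y_new, err = step_pair(t, y, dt)
            if err <= tol:                      # accept the step
                t, y, n_steps = t + dt, y_new, n_steps + 1
            # Adapt: p = 1 is the order of the lower (error-estimating) scheme.
            dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** (1 / 2)))
        return t, y, n_steps

    print(integrate())
    ```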

  2. Adaptive time stepping for fluid-structure interaction solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayr, M.; Wall, W. A.; Gee, M. W.

    In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.

  3. Using adaptive grid in modeling rocket nozzle flow

    NASA Technical Reports Server (NTRS)

    Chow, Alan S.; Jin, Kang-Ren

    1992-01-01

    The mechanical behavior of a rocket motor internal flow field results in a system of nonlinear partial differential equations which cannot be solved analytically. However, this system of equations, called the Navier-Stokes equations, can be solved numerically. The accuracy and the convergence of the solution of the system of equations will depend largely on how precisely the sharp gradients in the domain of interest can be resolved. With the advances in computer technology, more sophisticated algorithms are available to improve the accuracy and convergence of the solutions. Adaptive grid generation is one of the schemes which can be incorporated into the algorithm to enhance the capability of numerical modeling. It is equivalent to putting intelligence into the algorithm to optimize the use of computer memory. With this scheme, the finite difference domain of the flow field, called the grid, neither has to be very fine nor strategically placed at the locations of sharp gradients. The grid is self-adapting as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by taking the refinement part of grid generation out of the hands of computational fluid dynamics (CFD) specialists and placing it into the computer algorithm itself.
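
    The self-adapting grid idea can be illustrated in one dimension by equidistributing a gradient-based monitor function, which clusters points where the solution is steep. The model profile (a tanh layer standing in for a nozzle shock) and the monitor function are illustrative assumptions:

    ```python
    # Illustrative 1-D adaptive grid by equidistribution of a monitor function.
    import numpy as np

    def solution(x):
        return np.tanh(50.0 * (x - 0.6))        # sharp gradient near x = 0.6

    def adapt_grid(n=101, iters=20):
        x = np.linspace(0.0, 1.0, n)
        for _ in range(iters):
            u = solution(x)
            # Monitor function w = sqrt(1 + |du/dx|^2) on cell midpoints.
            w = np.sqrt(1.0 + (np.diff(u) / np.diff(x)) ** 2)
            # Equidistribute: place points so the cumulative integral of w is uniform.
            cum = np.concatenate([[0.0], np.cumsum(w * np.diff(x))])
            targets = np.linspace(0.0, cum[-1], n)
            x = np.interp(targets, cum, x)
        return x

    x = adapt_grid()
    print(np.min(np.diff(x)), np.max(np.diff(x)))   # much finer spacing near x = 0.6
    ```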

  4. Fuzzy Adaptive Control for Intelligent Autonomous Space Exploration Problems

    NASA Technical Reports Server (NTRS)

    Esogbue, Augustine O.

    1998-01-01

    The principal objective of the research reported here is the re-design, analysis and optimization of our newly developed neural network fuzzy adaptive controller model for complex processes capable of learning fuzzy control rules using process data and improving its control through on-line adaptation. The learned improvement is according to a performance objective function that provides evaluative feedback; this performance objective is broadly defined to meet long-range goals over time. Although fuzzy control had proven effective for complex, nonlinear, imprecisely-defined processes for which standard models and controls are either inefficient, impractical or cannot be derived, the state of the art prior to our work showed that procedures for deriving fuzzy control were mostly ad hoc heuristics. The learning ability of neural networks was exploited to systematically derive fuzzy control, permit on-line adaptation, and in the process optimize control. The operation of neural networks integrates very naturally with fuzzy logic. The neural networks, which were designed and tested using simulation software and simulated data, followed by realistic industrial data, were reconfigured for application on several platforms as well as for the employment of improved algorithms. The statistical procedures of the learning process were investigated and evaluated with standard statistical procedures (such as ANOVA, graphical analysis of residuals, etc.). The computational advantage of dynamic programming-like methods of optimal control was used to permit on-line fuzzy adaptive control. Tests for the consistency, completeness and interaction of the control rules were applied. Comparisons to other methods and controllers were made so as to identify the major advantages of the resulting controller model. Several specific modifications and extensions were made to the original controller. Additional modifications and explorations have been proposed for further study. Some of

  5. Towards Verification of Operational Procedures Using Auto-Generated Diagnostic Trees

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Lutz, Robyn; Patterson-Hine, Ann

    2009-01-01

    The design, development, and operation of complex space, lunar, and planetary exploration systems require general procedures that provide a detailed set of instructions capturing how mission tasks are performed. For both crewed and uncrewed NASA systems, mission safety and the accomplishment of the scientific mission objectives are highly dependent on the correctness of procedures. In this paper, we describe how auto-generated diagnostic trees derived from existing diagnostic models can be used to improve the verification of standard operating procedures. Specifically, we introduce a systematic method, the Diagnostic Tree for Verification (DTV), developed to leverage the information contained in auto-generated diagnostic trees in order to check the correctness of procedures, to streamline procedures by reducing the number of steps or the resources they use, and to propose alternative procedural steps that adapt to changing operational conditions. The application of the DTV method to a spacecraft electrical power system shows the feasibility of the approach and its range of capabilities.
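
    The DTV method itself is described only at a high level here. As a purely hypothetical illustration of the kind of cross-check such a method can perform, the sketch below collects every test reachable in a small, invented diagnostic tree and compares it against an invented procedure, flagging steps the tree never justifies (candidates for streamlining) and tree tests the procedure omits (possible correctness gaps); the data structures and component names are not from the paper.

    ```python
    # Hypothetical diagnostic tree: each node tests a component and branches on pass/fail.
    diag_tree = {
        "test": "battery_voltage",
        "pass": {"test": "bus_relay", "pass": None, "fail": None},
        "fail": {"test": "charger_output", "pass": None, "fail": None},
    }

    # Hypothetical operating procedure: an ordered list of component checks.
    procedure = ["battery_voltage", "solar_array_current", "bus_relay"]

    def tests_in_tree(node, acc=None):
        """Collect every component test reachable in the diagnostic tree."""
        acc = set() if acc is None else acc
        if node:
            acc.add(node["test"])
            tests_in_tree(node.get("pass"), acc)
            tests_in_tree(node.get("fail"), acc)
        return acc

    tree_tests = tests_in_tree(diag_tree)
    extra_steps = [s for s in procedure if s not in tree_tests]   # candidates to streamline
    missing_checks = sorted(tree_tests - set(procedure))          # possible correctness gaps

    print("steps not justified by the tree:", extra_steps)
    print("tree tests absent from the procedure:", missing_checks)
    ```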

  6. Stop Disease: Diapering Procedures = Alto a las Enfermedades: Procedimientos para Cambiar Panales.

    ERIC Educational Resources Information Center

    California Child Care Health Program, Oakland.

    In order to prevent the occurrence and spread of disease in California child care programs, this set of laminated procedure pages, in English and Spanish versions, details infant and child care procedures for safe diapering. The document delineates important rules about diapering, gives directions for making a disinfecting solution, and provides…

  7. Bayesian Adaptive Lasso for Ordinal Regression with Latent Variables

    ERIC Educational Resources Information Center

    Feng, Xiang-Nan; Wu, Hao-Tian; Song, Xin-Yuan

    2017-01-01

    We consider an ordinal regression model with latent variables to investigate the effects of observable and latent explanatory variables on the ordinal responses of interest. Each latent variable is characterized by correlated observed variables through a confirmatory factor analysis model. We develop a Bayesian adaptive lasso procedure to conduct…
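
    The ERIC record is truncated before the procedure is described. As background only, the sketch below implements the core adaptive-lasso idea (per-coefficient penalty weights inversely proportional to a pilot estimate) with plain coordinate descent for an ordinary Gaussian regression; it illustrates the penalty, not the authors' Bayesian treatment or their ordinal latent-variable model, and all tuning values are illustrative.

    ```python
    import numpy as np

    def adaptive_lasso(X, y, lam=0.1, gamma=1.0, n_iter=200):
        """Coordinate descent for (1/2n)||y - Xb||^2 + lam * sum_j w_j*|b_j|,
        with adaptive weights w_j = 1 / (|b_pilot_j| + eps)^gamma."""
        n, p = X.shape
        b_pilot = np.linalg.lstsq(X, y, rcond=None)[0]      # pilot (OLS) estimate
        w = 1.0 / (np.abs(b_pilot) + 1e-6) ** gamma         # adaptive penalty weights
        b = np.zeros(p)
        col_norm = (X ** 2).sum(axis=0) / n
        for _ in range(n_iter):
            for j in range(p):
                r_j = y - X @ b + X[:, j] * b[j]            # partial residual
                rho = X[:, j] @ r_j / n
                b[j] = np.sign(rho) * max(abs(rho) - lam * w[j], 0.0) / col_norm[j]
        return b

    # Usage on synthetic data: only the first two coefficients are truly nonzero.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 6))
    y = X @ np.array([2.0, -1.5, 0, 0, 0, 0]) + 0.1 * rng.standard_normal(200)
    print(np.round(adaptive_lasso(X, y), 2))   # approximately [2, -1.5, 0, 0, 0, 0]; noise coefficients shrink to exactly zero
    ```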

  8. Surfing Global Change: Negotiating Sustainable Solutions

    ERIC Educational Resources Information Center

    Ahamer, Gilbert

    2006-01-01

    SURFING GLOBAL CHANGE (SGC) serves as a procedural shell for attaining sustainable solutions for any interdisciplinary issue and is intended for use in advanced university courses. The participants' activities evolve through five levels from individual argumentation to molding one's own views for the "common good." The paradigm of…

  9. Teaching older adults by adapting for aging changes.

    PubMed

    Weinrich, S P; Weinrich, M C; Boyd, M D; Atwood, J; Cervenka, B

    1994-12-01

    Few teaching programs are geared to meet the special learning needs of the elderly. This pilot study used a quasi-experimental pretest-posttest design to measure the effect of the Adaptation for Aging Changes (AAC) Method on fecal occult blood screening (FOBS) at meal sites for the elderly in the South. The AAC Method uses techniques that adjust the presentation to accommodate normal aging changes and includes a demonstration of the procedure for collecting the stool blood test, memory reminders of the date to return the stool blood test, and written materials adapted to the 5th-grade reading level. In addition, actual practice of the FOBS with the use of peanut butter was added to the AAC Method, making it the AAC with Practice (AACP) Method in two sites. The American Cancer Society's colorectal cancer educational slide-tape show served as the basis for all of the methods. Hemoccult II kits were distributed at no cost to the participants. Descriptive statistics, chi-square tests, and logistic regressions were used to analyze data from 135 Council on Aging meal site participants. The average age of the participants was 72 years; the average educational level was 8th grade; over half the sample was African-American; and half of the participants had incomes below the poverty level. Results support a significant increase in FOBS participation among participants taught by the AACP Method [χ²(1, n = 56) = 5.34, p = 0.02; odds ratio = 6.2]. This research provides support for teaching that makes adaptations for aging changes, especially adaptations that include actual practice of the procedure.

  10. A hierarchical Bayesian approach to adaptive vision testing: A case study with the contrast sensitivity function.

    PubMed

    Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A; Lu, Zhong-Lin; Myung, Jay I

    2016-01-01

    Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures addressed the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the method of quick contrast sensitivity function (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. The results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias.
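
    HADO itself builds an informative prior hierarchically from previous observers, which is not reproduced here. The sketch below shows only the underlying standard entropy-based adaptive step for a toy one-parameter psychometric model: on each trial, choose the stimulus that minimizes the expected posterior entropy of the threshold. The logistic model, grids, and trial count are illustrative assumptions.

    ```python
    import numpy as np

    # Toy psychometric model: P(correct | stimulus x, threshold t) = logistic(4*(x - t)).
    thresholds = np.linspace(-3.0, 3.0, 121)                       # parameter grid
    posterior = np.full(thresholds.size, 1.0 / thresholds.size)    # diffuse starting prior
    stimuli = np.linspace(-3.0, 3.0, 25)

    def p_correct(x, t):
        return 1.0 / (1.0 + np.exp(-4.0 * (x - t)))                # slope fixed at 4 for the toy model

    def entropy(p):
        p = np.clip(p, 1e-12, 1.0)
        return float(-(p * np.log(p)).sum())

    def next_stimulus(post):
        """Pick the stimulus minimizing expected posterior entropy (maximum information gain)."""
        best_x, best_h = stimuli[0], np.inf
        for x in stimuli:
            like_c = p_correct(x, thresholds)          # likelihood of a correct response
            h = 0.0
            for like in (like_c, 1.0 - like_c):        # the two possible outcomes
                p_resp = float((like * post).sum())    # predictive probability of this outcome
                h += p_resp * entropy(like * post / p_resp)
            if h < best_h:
                best_x, best_h = x, h
        return best_x

    # Usage: simulate 30 adaptive trials on an observer with true threshold 0.7.
    rng = np.random.default_rng(1)
    true_t = 0.7
    for _ in range(30):
        x = next_stimulus(posterior)
        correct = rng.random() < p_correct(x, true_t)
        like = p_correct(x, thresholds) if correct else 1.0 - p_correct(x, thresholds)
        posterior = like * posterior
        posterior /= posterior.sum()
    print(round(float(thresholds[np.argmax(posterior)]), 2))   # posterior mode ends up near the true threshold (0.7)
    ```

    HADO would replace the diffuse starting prior above with one estimated hierarchically from earlier observers, leaving the per-trial entropy step unchanged.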

  11. A hierarchical Bayesian approach to adaptive vision testing: A case study with the contrast sensitivity function

    PubMed Central

    Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A.; Lu, Zhong-Lin; Myung, Jay I.

    2016-01-01

    Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures addressed the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the method of quick contrast sensitivity function (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. The results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias. PMID:27105061

  12. [Project to enhance bone bank tissue storage and distribution procedures].

    PubMed

    Huang, Jui-Chen; Wu, Chiung-Lan; Chen, Chun-Chuan; Chen, Shu-Hua

    2011-10-01

    Organ and tissue transplantation are now commonly performed procedures. Improper organ bank handling procedures may increase infection risks. Execution accuracy in tissue storage and distribution at our bone bank was 80%. We therefore proposed an improvement project to enhance these procedures in order to fulfill the intent of donors and ensure recipient safety. The project was designed to raise nursing professionalism and ensure patient safety through enhanced tissue storage and distribution procedures. Education programs developed for the project focus on teaching standard operating procedures for bone and ligament storage and distribution, bone bank facility maintenance, troubleshooting and solutions, and periodic inspection systems. Knowledge of proper storage and distribution procedures rose from 81% to 100%, and execution accuracy rose from 80% to 100%. The project successfully conveyed the concepts essential to the correct execution of organ storage and distribution procedures and proper organ bank facility management. Achieving and maintaining these procedural and management standards is crucial to continued organ donation and recipient safety.

  13. Adaptive box filters for removal of random noise from digital images

    USGS Publications Warehouse

    Eliason, E.M.; McEwen, A.S.

    1990-01-01

    We have developed adaptive box-filtering algorithms to (1) remove random bit errors (pixel values with no relation to the image scene) and (2) smooth noisy data (pixels related to the image scene but with an additive or multiplicative component of noise). For both procedures we use the standard deviation (σ) of the pixels within a local box surrounding each pixel; hence, they are adaptive filters. This technique effectively reduces speckle in radar images without eliminating fine details. -from Authors
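
    Consistent with the description above (local box statistics deciding where to act), the sketch below implements one plausible version of the bit-error-removal filter: a pixel is replaced by its local-box mean when it deviates from that mean by more than k local standard deviations. The window size, threshold k, and test image are illustrative choices, not the authors' settings.

    ```python
    import numpy as np

    def adaptive_box_despike(img, box=5, k=3.0):
        """Replace pixels deviating from their local-box mean by more than
        k local standard deviations (treated as random bit errors)."""
        pad = box // 2
        padded = np.pad(img.astype(float), pad, mode="reflect")
        # Stack the box x box shifted views to get per-pixel local mean and sigma.
        views = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                          for i in range(box) for j in range(box)])
        mean, sigma = views.mean(axis=0), views.std(axis=0)
        spikes = np.abs(img - mean) > k * sigma
        out = img.astype(float).copy()
        out[spikes] = mean[spikes]
        return out

    # Usage: a smooth ramp corrupted by isolated 0/255 bit errors.
    rng = np.random.default_rng(0)
    truth = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
    img = truth.copy()
    rows, cols = rng.integers(0, 64, 40), rng.integers(0, 64, 40)
    img[rows, cols] = 255.0 * rng.integers(0, 2, 40)
    clean = adaptive_box_despike(img)
    print(np.abs(clean - truth).mean() < np.abs(img - truth).mean())   # despiking lowers the mean error -> True
    ```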

  14. The control of flexible structure vibrations using a cantilevered adaptive truss

    NASA Technical Reports Server (NTRS)

    Wynn, Robert H., Jr.; Robertshaw, Harry H.

    1991-01-01

    Analytical and experimental procedures and design tools are presented for the control of flexible structure vibrations using a cantilevered adaptive truss. Simulated and experimental data are examined for three types of structures: a slender beam, a single curved beam, and two curved beams. The adaptive truss is shown to produce a 6,000-percent increase in damping, demonstrating its potential in vibration control. Good agreement is obtained between the simulated and experimental data, thus validating the modeling methods.

  15. L(sub 1) Adaptive Flight Control System: Flight Evaluation and Technology Transition

    NASA Technical Reports Server (NTRS)

    Xargay, Enric; Hovakimyan, Naira; Dobrokhodov, Vladimir; Kaminer, Isaac; Gregory, Irene M.; Cao, Chengyu

    2010-01-01

    Certification of adaptive control technologies for both manned and unmanned aircraft represents a major challenge for current Verification and Validation techniques. A key missing step toward flight certification of adaptive flight control systems is the definition and development of analysis tools and methods to support Verification and Validation of nonlinear systems, similar to the procedures currently used for linear systems. In this paper, we describe and demonstrate the advantages of L(sub 1) adaptive control architectures for closing some of the gaps in certification of adaptive flight control systems, which may facilitate the transition of adaptive control into military and commercial aerospace applications. As illustrative examples, we present the results of a piloted simulation evaluation on the NASA AirSTAR flight test vehicle and the results of an extensive flight test program conducted by the Naval Postgraduate School to demonstrate the advantages of L(sub 1) adaptive control as a verifiable, robust adaptive flight control system.
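
    The abstract does not state the control law. For background, the sketch below simulates a textbook-style scalar L(sub 1) architecture (state predictor, prediction-error-driven adaptation with a crude projection bound, and a low-pass-filtered cancellation term) on a first-order plant with an unknown matched parameter; the plant, gains, and filter bandwidth are illustrative choices, and this is not the flight controller evaluated in the paper.

    ```python
    import numpy as np

    # Scalar plant x' = a_m*x + b*(u + theta*x) with unknown matched parameter theta.
    a_m, b, theta_true = -1.0, 1.0, 0.5
    dt, gamma, omega_c = 1e-3, 100.0, 30.0      # step size, adaptation gain, filter bandwidth
    k_g = -a_m / b                              # feedforward gain for unit DC tracking
    r = 1.0                                     # constant reference command
    x = x_hat = theta_hat = u_filt = 0.0

    for _ in range(int(10.0 / dt)):
        eta_hat = theta_hat * x                         # estimated matched uncertainty
        u_filt += dt * omega_c * (eta_hat - u_filt)     # low-pass filter C(s) = omega_c/(s + omega_c)
        u = k_g * r - u_filt                            # control: filtered cancellation + feedforward
        x_hat += dt * (a_m * x_hat + b * (u + theta_hat * x))   # state predictor
        x += dt * (a_m * x + b * (u + theta_true * x))          # true plant (Euler integration)
        x_tilde = x_hat - x                             # prediction error drives adaptation
        theta_hat = float(np.clip(theta_hat - dt * gamma * x_tilde * b * x, -10.0, 10.0))

    print(round(x, 3), round(theta_hat, 2))     # x tracks r = 1; theta_hat converges toward theta_true
    ```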

  16. Flame atomic absorption spectrometric determination of heavy metals in aqueous solution and surface water preceded by co-precipitation procedure with copper(II) 8-hydroxyquinoline

    NASA Astrophysics Data System (ADS)

    Ipeaiyeda, Ayodele Rotimi; Ayoade, Abisayo Ruth

    2017-12-01

    The co-precipitation procedure has been widely employed for preconcentration and separation of metal ions from the matrices of environmental samples, owing to its simplicity, low consumption of separating solvent, and short analysis time. Various organic ligands have been used for this purpose; however, there is a dearth of information on the application of 8-hydroxyquinoline (8-HQ) as the ligand with Cu(II) as the carrier element. The use of Cu(II) is desirable because it introduces no contamination or background adsorption interference. Therefore, the objective of this study was to use 8-HQ in the presence of Cu(II) for co-precipitation of Cd(II), Co(II), Cr(III), Ni(II), and Pb(II) from standard solutions and surface water prior to their determination by flame atomic absorption spectrometry (FAAS). The effects of pH, sample volume, amounts of 8-HQ and Cu(II), and interfering ions on the recoveries of the metal ions from standard solutions were monitored by FAAS. The water samples were treated with 8-HQ under the optimum experimental conditions and the metal concentrations were determined by FAAS; concentrations in water samples not treated with 8-HQ were also determined. The optimum recovery values for the metal ions were higher than 85.0%. The concentrations (mg/L) of Co(II), Ni(II), Cr(III), and Pb(II) in water samples treated with 8-HQ were 0.014 ± 0.002, 0.03 ± 0.01, 0.04 ± 0.02, and 0.05 ± 0.02, respectively. These concentrations differed significantly from those obtained without the co-precipitation technique. The co-precipitation procedure using 8-HQ as the ligand and Cu(II) as the carrier element enhanced the preconcentration and separation of metal ions from the water sample matrix.

  17. Adaptation of cardiovascular system stent implants.

    PubMed

    Ostasevicius, Vytautas; Tretsyakou-Savich, Yahor; Venslauskas, Mantas; Bertasiene, Agne; Minchenya, Vladimir; Chernoglaz, Pavel

    2018-06-27

    Time-consuming design and manufacturing processes are a serious disadvantage when adapting human cardiovascular implants, as they cause unacceptable delays after the decision to intervene surgically has been made. An ideal cardiovascular implant should have a broad range of characteristics, such as strength, viscoelasticity, and blood compatibility. The present research proposes a sequence of geometrical adaptation procedures and presents their results. The adaptation starts with identification of the patient's current health status through abdominal aortic aneurysm (AAA) imaging, which is the point of departure for the mathematical model of the cardiovascular implant. The computed tomography scan provides the patient-specific geometric parameters of the AAA and is used to create a model in COMSOL Multiphysics software. The initial parameters for the flow simulation are taken from the results of a patient survey. The simulation results allow selection of an available implant shape that ensures non-turbulent flow. These parameters are essential for the design and manufacture of an implant prototype, which should be tested experimentally to confirm that the mathematical model adequately represents the physical one. The article gives a focused description of the competences and means necessary to prepare the adapted cardiovascular implant for surgery in the shortest possible time.

  18. Arbitrary Steady-State Solutions with the K-epsilon Model

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Pettersson Reif, B. A.; Gatski, Thomas B.

    2006-01-01

    Widely used forms of the K-epsilon turbulence model are shown to yield arbitrary steady-state converged solutions that are highly dependent on numerical considerations such as initial conditions and solution procedure. These solutions contain pseudo-laminar regions of varying size. By applying a nullcline analysis to the equation set, it is possible to clearly demonstrate the reasons for the anomalous behavior. In summary, the degenerate solution acts as a stable fixed point under certain conditions, causing the numerical method to converge there. The analysis also suggests a methodology for preventing the anomalous behavior in steady-state computations.
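
    The paper's nullcline analysis is not reproduced in the abstract. As a rough illustration of the technique, the sketch below works with an assumed homogeneous-shear reduction of the standard K-epsilon model, whose nullclines are straight lines through the origin: their only intersection is the degenerate state k = epsilon = 0, and a trajectory started away from it locks onto a fixed ratio epsilon/(S*k) while k and epsilon grow. The paper's conclusions about when the degenerate state becomes attracting rest on the full equation set and the numerics, which this fragment does not attempt to reproduce.

    ```python
    import numpy as np

    # Standard K-epsilon model constants; S is a constant mean shear rate.
    C_mu, C_e1, C_e2, S = 0.09, 1.44, 1.92, 1.0

    def rhs(k, e):
        """Homogeneous-shear reduction: dk/dt = P - e, de/dt = (C_e1*P - C_e2*e)*e/k,
        with production P = C_mu * k**2 * S**2 / e."""
        P = C_mu * k**2 * S**2 / e
        return P - e, (C_e1 * P - C_e2 * e) * e / k

    # Nullclines are straight lines e = m*k through the origin, so the only
    # fixed point of this reduced system is the degenerate state k = e = 0.
    m_k = np.sqrt(C_mu) * S                   # slope of the dk/dt = 0 nullcline
    m_e = np.sqrt(C_e1 * C_mu / C_e2) * S     # slope of the de/dt = 0 nullcline
    print("nullcline slopes:", round(m_k, 3), round(m_e, 3))

    # Integrate from an arbitrary initial state; the trajectory approaches a
    # fixed ratio e/(S*k) while k and e grow (no nontrivial fixed point exists).
    k, e, dt = 1.0, 1.0, 1e-3
    for _ in range(20000):
        dk, de = rhs(k, e)
        k, e = k + dt * dk, e + dt * de
    print("e/(S*k) ->", round(e / (S * k), 3))   # equilibrium ratio, approximately 0.21
    ```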

  19. Adaptive grid methods for RLV environment assessment and nozzle analysis

    NASA Technical Reports Server (NTRS)

    Thornburg, Hugh J.

    1996-01-01

    Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. In order to meet these requirements efficiently, a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surfaces, temporally varying geometries, and fluid-structure interaction, the need is unavoidable. In other cases the need is to rapidly generate and modify high-quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect; thus, excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaption. The most widely used involve grid point redistribution, local grid point enrichment/derefinement, or local modification of the actual flow solver. However, the success of any of these methods ultimately depends on the feature detection algorithm used to determine which solution domain regions require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different types as well as differing intensities, and to adequately address scaling and normalization across blocks. These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation
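
    As a concrete but simplified example of the weight functions discussed above, the sketch below builds a gradient-plus-curvature weight for a flow variable and normalizes it by global maxima so that a single threshold is meaningful across blocks of very different scales; the blending fractions, threshold, and test profiles are illustrative assumptions rather than a recommended recipe.

    ```python
    import numpy as np

    def adaptation_weight(blocks, g_frac=0.5, c_frac=0.5):
        """Gradient/curvature weight function, normalized across all blocks.

        blocks: list of 1-D arrays of a flow variable (one array per grid block).
        Returns per-cell weights in [0, 1] that are comparable across blocks."""
        grads, curvs = [], []
        for q in blocks:
            grads.append(np.abs(np.gradient(q)))                 # first-difference measure
            curvs.append(np.abs(np.gradient(np.gradient(q))))    # second-difference measure
        # Normalize by the global maxima so one threshold works for every block.
        gmax = max(g.max() for g in grads) or 1.0
        cmax = max(c.max() for c in curvs) or 1.0
        return [g_frac * g / gmax + c_frac * c / cmax for g, c in zip(grads, curvs)]

    # Usage: two blocks with widely different scales; a shock-like jump in block 0.
    x0, x1 = np.linspace(0, 1, 50), np.linspace(0, 1, 50)
    block0 = np.tanh(40 * (x0 - 0.5))          # sharp feature
    block1 = 1e-3 * np.sin(2 * np.pi * x1)     # weak, smooth feature
    w0, w1 = adaptation_weight([block0, block1])
    flag0, flag1 = w0 > 0.2, w1 > 0.2          # one threshold applied across blocks
    print(flag0.sum(), flag1.sum())            # a few cells flagged in block 0, none in block 1
    ```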

  20. Predicting mesh density for adaptive modelling of the global atmosphere.

    PubMed

    Weller, Hilary

    2009-11-28

    The shallow water equations are solved using a mesh of polygons on the sphere that adapts infrequently to the predicted future solution. Infrequent mesh adaptation reduces the cost of adaptation and load balancing and thus allows more accurate mapping on adaptation. We simulate the growth of a barotropically unstable jet, adapting the mesh every 12 h. Using an adaptation criterion based largely on the gradient of the vorticity leads to a mesh with around 20 per cent of the cells of a uniform mesh that gives equivalent results. This is a similar proportion to previous studies of the same test case with mesh adaptation every 1-20 min. The prediction of the mesh density involves solving the shallow water equations on a coarse mesh in advance of the locally refined mesh, in order to estimate where features requiring higher resolution will grow, decay, or move to. The adaptation criterion consists of two parts: that which is resolved on the coarse mesh, and that which is not resolved and so is passively advected on the coarse mesh. This combination leads to a balance between resolving features controlled by the large-scale dynamics and maintaining fine-scale features.
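
    The adaptation criterion is described only qualitatively above. The sketch below shows one plausible planar stand-in: compute the magnitude of the vorticity gradient on a coarse grid and flag the top fraction of cells for refinement. The uniform Cartesian grid, the shear-layer test flow, and the 20 per cent target are simplified assumptions and do not reproduce the paper's spherical polygonal mesh or its two-part criterion.

    ```python
    import numpy as np

    def refinement_flags(u, v, dx, dy, target_frac=0.2):
        """Flag cells for refinement where |grad(vorticity)| is largest.

        u, v: 2-D velocity components on a uniform coarse grid (a planar
        stand-in for the spherical polygonal mesh in the paper)."""
        dvdx = np.gradient(v, dx, axis=1)
        dudy = np.gradient(u, dy, axis=0)
        vort = dvdx - dudy                              # relative vorticity
        gx = np.gradient(vort, dx, axis=1)
        gy = np.gradient(vort, dy, axis=0)
        crit = np.hypot(gx, gy)                         # adaptation criterion
        cutoff = np.quantile(crit, 1.0 - target_frac)   # refine the top fraction of cells
        return crit > cutoff

    # Usage: a shear layer as a crude stand-in for a barotropically unstable jet.
    ny, nx = 64, 128
    y = np.linspace(-1, 1, ny)[:, None]
    u = np.tanh(10 * y) * np.ones((ny, nx))
    v = np.zeros((ny, nx))
    flags = refinement_flags(u, v, dx=2 * np.pi / nx, dy=2.0 / ny)
    print(flags.mean())   # roughly 0.2 of the coarse cells are flagged, clustered around the shear layer
    ```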