Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.
1993-01-01
Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description of the enrichment and coarsening procedures are presented and comparisons with experimental data for an ONERA M6 wing and an exact solution for a shock-tube problem are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results, obtained using spatial adaptation procedures, are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.
Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.
1993-01-01
Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. The paper gives a detailed description of the enrichment and coarsening procedures and presents comparisons with experimental data for an ONERA M6 wing and an exact solution for a shock-tube problem to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results, obtained using spatial adaptation procedures, are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spacially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.
hp-Adaptive time integration based on the BDF for viscous flows
NASA Astrophysics Data System (ADS)
Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.
2015-06-01
This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low order time integrators while accurate solutions require high order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the used method of integration operates inside of its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features a time-step rejection and quarantine mechanisms, a modified Newton method with a predictor and dense output techniques to compute solution at off-step points.
An adaptive embedded mesh procedure for leading-edge vortex flows
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.
1989-01-01
A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.
1991-01-01
Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in a high gradient region or the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational costs. A detailed description is given of the enrichment and coarsening procedures and comparisons with alternative results and experimental data are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Yang, Henry T. Y.; Batina, John T.
1991-01-01
Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. The paper gives a detailed description of the enrichment and coarsening procedures and presents comparisons with alternative results and experimental data to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.
NASA Technical Reports Server (NTRS)
Usab, William J., Jr.; Jiang, Yi-Tsann
1991-01-01
The objective of the present research is to develop a general solution adaptive scheme for the accurate prediction of inviscid quasi-three-dimensional flow in advanced compressor and turbine designs. The adaptive solution scheme combines an explicit finite-volume time-marching scheme for unstructured triangular meshes and an advancing front triangular mesh scheme with a remeshing procedure for adapting the mesh as the solution evolves. The unstructured flow solver has been tested on a series of two-dimensional airfoil configurations including a three-element analytic test case presented here. Mesh adapted quasi-three-dimensional Euler solutions are presented for three spanwise stations of the NASA rotor 67 transonic fan. Computed solutions are compared with available experimental data.
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and /or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme wih Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to qualify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
NASA Astrophysics Data System (ADS)
Simoni, L.; Secchi, S.; Schrefler, B. A.
2008-12-01
This paper analyses the numerical difficulties commonly encountered in solving fully coupled numerical models and proposes a numerical strategy apt to overcome them. The proposed procedure is based on space refinement and time adaptivity. The latter, which in mainly studied here, is based on the use of a finite element approach in the space domain and a Discontinuous Galerkin approximation within each time span. Error measures are defined for the jump of the solution at each time station. These constitute the parameters allowing for the time adaptivity. Some care is however, needed for a useful definition of the jump measures. Numerical tests are presented firstly to demonstrate the advantages and shortcomings of the method over the more traditional use of finite differences in time, then to assess the efficiency of the proposed procedure for adapting the time step. The proposed method reveals its efficiency and simplicity to adapt the time step in the solution of coupled field problems.
Approximate solution of the multiple watchman routes problem with restricted visibility range.
Faigl, Jan
2010-10-01
In this paper, a new self-organizing map (SOM) based adaptation procedure is proposed to address the multiple watchman route problem with the restricted visibility range in the polygonal domain W. A watchman route is represented by a ring of connected neuron weights that evolves in W, while obstacles are considered by approximation of the shortest path. The adaptation procedure considers a coverage of W by the ring in order to attract nodes toward uncovered parts of W. The proposed procedure is experimentally verified in a set of environments and several visibility ranges. Performance of the procedure is compared with the decoupled approach based on solutions of the art gallery problem and the consecutive traveling salesman problem. The experimental results show the suitability of the proposed procedure based on relatively simple supporting geometrical structures, enabling application of the SOM principles to watchman route problems in W.
Aerodynamics of Engine-Airframe Interaction
NASA Technical Reports Server (NTRS)
Caughey, D. A.
1986-01-01
The report describes progress in research directed towards the efficient solution of the inviscid Euler and Reynolds-averaged Navier-Stokes equations for transonic flows through engine inlets, and past complete aircraft configurations, with emphasis on the flowfields in the vicinity of engine inlets. The research focusses upon the development of solution-adaptive grid procedures for these problems, and the development of multi-grid algorithms in conjunction with both, implicit and explicit time-stepping schemes for the solution of three-dimensional problems. The work includes further development of mesh systems suitable for inlet and wing-fuselage-inlet geometries using a variational approach. Work during this reporting period concentrated upon two-dimensional problems, and has been in two general areas: (1) the development of solution-adaptive procedures to cluster the grid cells in regions of high (truncation) error;and (2) the development of a multigrid scheme for solution of the two-dimensional Euler equations using a diagonalized alternating direction implicit (ADI) smoothing algorithm.
Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1996-01-01
A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: A linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures is briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Everton, Eric L.
1990-01-01
An interactive grid adaption method is developed, discussed and applied to the unsteady flow about an oscillating airfoil. The user is allowed to have direct interaction with the adaption of the grid as well as the solution procedure. Grid points are allowed to adapt simultaneously to several variables. In addition to the theory and results, the hardware and software requirements are discussed.
Cartesian Off-Body Grid Adaption for Viscous Time- Accurate Flow Simulation
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2011-01-01
An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an inmemory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
NASA Technical Reports Server (NTRS)
Rebstock, Rainer
1987-01-01
Numerical methods are developed for control of three dimensional adaptive test sections. The physical properties of the design problem occurring in the external field computation are analyzed, and a design procedure suited for solution of the problem is worked out. To do this, the desired wall shape is determined by stepwise modification of an initial contour. The necessary changes in geometry are determined with the aid of a panel procedure, or, with incident flow near the sonic range, with a transonic small perturbation (TSP) procedure. The designed wall shape, together with the wall deflections set during the tunnel run, are the input to a newly derived one-step formula which immediately yields the adapted wall contour. This is particularly important since the classical iterative adaptation scheme is shown to converge poorly for 3D flows. Experimental results obtained in the adaptive test section with eight flexible walls are presented to demonstrate the potential of the procedure. Finally, a method is described to minimize wall interference in 3D flows by adapting only the top and bottom wind tunnel walls.
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
NASA Technical Reports Server (NTRS)
Coirier, William John
1994-01-01
A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds numbers flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a different formulation of the viscous terms are shown to be necessary. A hybrid Cartesian/body-fitted grid generation approach is demonstrated. In addition, a grid-generation procedure based on body-aligned cell cutting coupled with a viscous stensil-construction procedure based on quadratic programming is presented.
Procedure for Adapting Direct Simulation Monte Carlo Meshes
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.
1992-01-01
A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1983-01-01
This paper will develop a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which will enable the solution of creep initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation will give a thorough overview of current solution schemes and their short comings, the development of constrained time stepping algorithms as well as illustrate the results of several numerical experiments which benchmark the new procedure.
A solution-adaptive hybrid-grid method for the unsteady analysis of turbomachinery
NASA Technical Reports Server (NTRS)
Mathur, Sanjay R.; Madavan, Nateri K.; Rajagopalan, R. G.
1993-01-01
A solution-adaptive method for the time-accurate analysis of two-dimensional flows in turbomachinery is described. The method employs a hybrid structured-unstructured zonal grid topology in conjunction with appropriate modeling equations and solution techniques in each zone. The viscous flow region in the immediate vicinity of the airfoils is resolved on structured O-type grids while the rest of the domain is discretized using an unstructured mesh of triangular cells. Implicit, third-order accurate, upwind solutions of the Navier-Stokes equations are obtained in the inner regions. In the outer regions, the Euler equations are solved using an explicit upwind scheme that incorporates a second-order reconstruction procedure. An efficient and robust grid adaptation strategy, including both grid refinement and coarsening capabilities, is developed for the unstructured grid regions. Grid adaptation is also employed to facilitate information transfer at the interfaces between unstructured grids in relative motion. Results for grid adaptation to various features pertinent to turbomachinery flows are presented. Good comparisons between the present results and experimental measurements and earlier structured-grid results are obtained.
Lusby, Richard Martin; Schwierz, Martin; Range, Troels Martin; Larsen, Jesper
2016-11-01
The aim of this paper is to provide an improved method for solving the so-called dynamic patient admission scheduling (DPAS) problem. This is a complex scheduling problem that involves assigning a set of patients to hospital beds over a given time horizon in such a way that several quality measures reflecting patient comfort and treatment efficiency are maximized. Consideration must be given to uncertainty in the length of stays of patients as well as the possibility of emergency patients. We develop an adaptive large neighborhood search (ALNS) procedure to solve the problem. This procedure utilizes a Simulated Annealing framework. We thoroughly test the performance of the proposed ALNS approach on a set of 450 publicly available problem instances. A comparison with the current state-of-the-art indicates that the proposed methodology provides solutions that are of comparable quality for small and medium sized instances (up to 1000 patients); the two approaches provide solutions that differ in quality by approximately 1% on average. The ALNS procedure does, however, provide solutions in a much shorter time frame. On larger instances (between 1000-4000 patients) the improvement in solution quality by the ALNS procedure is substantial, approximately 3-14% on average, and as much as 22% on a single instance. The time taken to find such results is, however, in the worst case, a factor 12 longer on average than the time limit which is granted to the current state-of-the-art. The proposed ALNS procedure is an efficient and flexible method for solving the DPAS problem. Copyright © 2016 Elsevier B.V. All rights reserved.
Digital adaptive flight controller development
NASA Technical Reports Server (NTRS)
Kaufman, H.; Alag, G.; Berry, P.; Kotob, S.
1974-01-01
A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Two designs are described for an example aircraft. Each of these designs uses a weighted least squares procedure to identify parameters defining the dynamics of the aircraft. The two designs differ in the way in which control law parameters are determined. One uses the solution of an optimal linear regulator problem to determine these parameters while the other uses a procedure called single stage optimization. Extensive simulation results and analysis leading to the designs are presented.
On the dynamics of some grid adaption schemes
NASA Technical Reports Server (NTRS)
Sweby, Peter K.; Yee, Helen C.
1994-01-01
The dynamics of a one-parameter family of mesh equidistribution schemes coupled with finite difference discretisations of linear and nonlinear convection-diffusion model equations is studied numerically. It is shown that, when time marched to steady state, the grid adaption not only influences the stability and convergence rate of the overall scheme, but can also introduce spurious dynamics to the numerical solution procedure.
A new procedure for dynamic adaption of three-dimensional unstructured grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1993-01-01
A new procedure is presented for the simultaneous coarsening and refinement of three-dimensional unstructured tetrahedral meshes. This algorithm allows for localized grid adaption that is used to capture aerodynamic flow features such as vortices and shock waves in helicopter flowfield simulations. The mesh-adaption algorithm is implemented in the C programming language and uses a data structure consisting of a series of dynamically-allocated linked lists. These lists allow the mesh connectivity to be rapidly reconstructed when individual mesh points are added and/or deleted. The algorithm allows the mesh to change in an anisotropic manner in order to efficiently resolve directional flow features. The procedure has been successfully implemented on a single processor of a Cray Y-MP computer. Two sample cases are presented involving three-dimensional transonic flow. Computed results show good agreement with conventional structured-grid solutions for the Euler equations.
Near-Body Grid Adaption for Overset Grids
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2016-01-01
A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
NASA Astrophysics Data System (ADS)
Mendoza, G.; Tkach, M.; Kucharski, J.; Chaudhry, R.
2017-12-01
This discussion is focused around the application of a bottom-up vulnerability assessment procedure for planning of climate resilience to a water treament plant for teh city of Iolanda, Zambia. This project is a Millennium Challenge Corporation (MCC) innitiaive with technical support by the UNESCO category II, International Center for Integrated Water Resources Management (ICIWaRM) with secretariat at the US Army Corps of Engineers Institute for Water Resources. The MCC is an innovative and independent U.S. foreign aid agency that is helping lead the fight against global poverty. The bottom-up vulnerability assessmentt framework examines critical performance thresholds, examines the external drivers that would lead to failure, establishes plausibility and analytical uncertainty that would lead to failure, and provides the economic justification for robustness or adaptability. This presentation will showcase the experiences in the application of the bottom-up framework to a region that is very vulnerable to climate variability, has poor instituional capacities, and has very limited data. It will illustrate the technical analysis and a decision process that led to a non-obvious climate robust solution. Most importantly it will highlight the challenges of utilizing discounted cash flow analysis (DCFA), such as net present value, in justifying robust or adaptive solutions, i.e. comparing solution under different future risks. We highlight a solution to manage the potential biases these DCFA procedures can incur.
O'Mahony, M
1979-01-01
The paper reviews how adaptation to sodium chloride, changing in concentration as a result of various experimental procedures, affects measurements of the sensitivity, intensity, and quality of the salt taste. The development of and evidence for the current model that the salt taste depends on an adaptation level (taste zero) determined by the sodium cation concentration is examined and found to be generally supported, despite great methodological complications. It would seem that lower adaptation levels elicit lower thresholds, higher intensity estimates, and altered quality descriptions with predictable effects on psychophysical measures.
A Structured Grid Based Solution-Adaptive Technique for Complex Separated Flows
NASA Technical Reports Server (NTRS)
Thornburg, Hugh; Soni, Bharat K.; Kishore, Boyalakuntla; Yu, Robert
1996-01-01
The objective of this work was to enhance the predictive capability of widely used computational fluid dynamic (CFD) codes through the use of solution adaptive gridding. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. In order to study the accuracy and efficiency improvements due to the grid adaptation, it is necessary to quantify grid size and distribution requirements as well as computational times of non-adapted solutions. Flow fields about launch vehicles of practical interest often involve supersonic freestream conditions at angle of attack exhibiting large scale separate vortical flow, vortex-vortex and vortex-surface interactions, separated shear layers and multiple shocks of different intensity. In this work, a weight function and an associated mesh redistribution procedure is presented which detects and resolves these features without user intervention. Particular emphasis has been placed upon accurate resolution of expansion regions and boundary layers. Flow past a wedge at Mach=2.0 is used to illustrate the enhanced detection capabilities of this newly developed weight function.
Effects of adaptive refinement on the inverse EEG solution
NASA Astrophysics Data System (ADS)
Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.
1995-10-01
One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive accesss to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allow for significantly better estimates of electric and potential fileds within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magneteic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
One-dimensional swarm algorithm packaging
NASA Astrophysics Data System (ADS)
Lebedev, Boris K.; Lebedev, Oleg B.; Lebedeva, Ekaterina O.
2018-05-01
The paper considers an algorithm for solving the problem of onedimensional packaging based on the adaptive behavior model of an ant colony. The key role in the development of the ant algorithm is the choice of representation (interpretation) of the solution. The structure of the solution search graph, the procedure for finding solutions on the graph, the methods of deposition and evaporation of pheromone are described. Unlike the canonical paradigm of an ant algorithm, an ant on the solution search graph generates sets of elements distributed across blocks. Experimental studies were conducted on IBM PC. Compared with the existing algorithms, the results are improved.
Adaptive Discrete Hypergraph Matching.
Yan, Junchi; Li, Changsheng; Li, Yin; Cao, Guitao
2018-02-01
This paper addresses the problem of hypergraph matching using higher-order affinity information. We propose a solver that iteratively updates the solution in the discrete domain by linear assignment approximation. The proposed method is guaranteed to converge to a stationary discrete solution and avoids the annealing procedure and ad-hoc post binarization step that are required in several previous methods. Specifically, we start with a simple iterative discrete gradient assignment solver. This solver can be trapped in an -circle sequence under moderate conditions, where is the order of the graph matching problem. We then devise an adaptive relaxation mechanism to jump out this degenerating case and show that the resulting new path will converge to a fixed solution in the discrete domain. The proposed method is tested on both synthetic and real-world benchmarks. The experimental results corroborate the efficacy of our method.
Adaptive EAGLE dynamic solution adaptation and grid quality enhancement
NASA Technical Reports Server (NTRS)
Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.
1992-01-01
In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.
Multiscale computations with a wavelet-adaptive algorithm
NASA Astrophysics Data System (ADS)
Rastigejev, Yevgenii Anatolyevich
A wavelet-based adaptive multiresolution algorithm for the numerical solution of multiscale problems governed by partial differential equations is introduced. The main features of the method include fast algorithms for the calculation of wavelet coefficients and approximation of derivatives on nonuniform stencils. The connection between the wavelet order and the size of the stencil is established. The algorithm is based on the mathematically well established wavelet theory. This allows us to provide error estimates of the solution which are used in conjunction with an appropriate threshold criteria to adapt the collocation grid. The efficient data structures for grid representation as well as related computational algorithms to support grid rearrangement procedure are developed. The algorithm is applied to the simulation of phenomena described by Navier-Stokes equations. First, we undertake the study of the ignition and subsequent viscous detonation of a H2 : O2 : Ar mixture in a one-dimensional shock tube. Subsequently, we apply the algorithm to solve the two- and three-dimensional benchmark problem of incompressible flow in a lid-driven cavity at large Reynolds numbers. For these cases we show that solutions of comparable accuracy as the benchmarks are obtained with more than an order of magnitude reduction in degrees of freedom. The simulations show the striking ability of the algorithm to adapt to a solution having different scales at different spatial locations so as to produce accurate results at a relatively low computational cost.
A grid generation and flow solution method for the Euler equations on unstructured grids
NASA Astrophysics Data System (ADS)
Anderson, W. Kyle
1994-01-01
A grid generation and flow solution algorithm for the Euler equations on unstructured grids is presented. The grid generation scheme utilizes Delaunay triangulation and self-generates the field points for the mesh based on cell aspect ratios and allows for clustering near solid surfaces. The flow solution method is an implicit algorithm in which the linear set of equations arising at each time step is solved using a Gauss Seidel procedure which is completely vectorizable. In addition, a study is conducted to examine the number of subiterations required for good convergence of the overall algorithm. Grid generation results are shown in two dimensions for a National Advisory Committee for Aeronautics (NACA) 0012 airfoil as well as a two-element configuration. Flow solution results are shown for two-dimensional flow over the NACA 0012 airfoil and for a two-element configuration in which the solution has been obtained through an adaptation procedure and compared to an exact solution. Preliminary three-dimensional results are also shown in which subsonic flow over a business jet is computed.
Automated smoother for the numerical decoupling of dynamics models.
Vilela, Marco; Borges, Carlos C H; Vinga, Susana; Vasconcelos, Ana Tereza R; Santos, Helena; Voit, Eberhard O; Almeida, Jonas S
2007-08-21
Structure identification of dynamic models for complex biological systems is the cornerstone of their reverse engineering. Biochemical Systems Theory (BST) offers a particularly convenient solution because its parameters are kinetic-order coefficients which directly identify the topology of the underlying network of processes. We have previously proposed a numerical decoupling procedure that allows the identification of multivariate dynamic models of complex biological processes. While described here within the context of BST, this procedure has a general applicability to signal extraction. Our original implementation relied on artificial neural networks (ANN), which caused slight, undesirable bias during the smoothing of the time courses. As an alternative, we propose here an adaptation of the Whittaker's smoother and demonstrate its role within a robust, fully automated structure identification procedure. In this report we propose a robust, fully automated solution for signal extraction from time series, which is the prerequisite for the efficient reverse engineering of biological systems models. The Whittaker's smoother is reformulated within the context of information theory and extended by the development of adaptive signal segmentation to account for heterogeneous noise structures. The resulting procedure can be used on arbitrary time series with a nonstationary noise process; it is illustrated here with metabolic profiles obtained from in-vivo NMR experiments. The smoothed solution that is free of parametric bias permits differentiation, which is crucial for the numerical decoupling of systems of differential equations. The method is applicable in signal extraction from time series with nonstationary noise structure and can be applied in the numerical decoupling of system of differential equations into algebraic equations, and thus constitutes a rather general tool for the reverse engineering of mechanistic model descriptions from multivariate experimental time series.
Topology and grid adaption for high-speed flow computations
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Tiwari, Surendra N.
1989-01-01
This study investigates the effects of grid topology and grid adaptation on numerical solutions of the Navier-Stokes equations. In the first part of this study, a general procedure is presented for computation of high-speed flow over complex three-dimensional configurations. The flow field is simulated on the surface of a Butler wing in a uniform stream. Results are presented for Mach number 3.5 and a Reynolds number of 2,000,000. The O-type and H-type grids have been used for this study, and the results are compared together and with other theoretical and experimental results. The results demonstrate that while the H-type grid is suitable for the leading and trailing edges, a more accurate solution can be obtained for the middle part of the wing with an O-type grid. In the second part of this study, methods of grid adaption are reviewed and a method is developed with the capability of adapting to several variables. This method is based on a variational approach and is an algebraic method. Also, the method has been formulated in such a way that there is no need for any matrix inversion. This method is used in conjunction with the calculation of hypersonic flow over a blunt-nose body. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.
This method provides a procedure for the determination of low-level orthophosphate concentrations normally found in estuarine and/or coastal waters. It is based upon the method of Murphy and Riley1 adapted for automated segmented flow analysis2 in which the two reagent solutions ...
Teacher Efficacy: A Study of Construct Dimensions.
ERIC Educational Resources Information Center
Guskey, Thomas R.; Passaro, Perry
The structure of a concept generally labeled "teacher efficacy" is examined. A sample of 342 prospective and experienced teachers was administered an efficacy questionnaire adapted from the research of S. Gibson and M. H. Dembo (1984). Factor analytic procedures with varimax rotation were used to generate a 2-factor solution that accounted for 32…
Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo
2016-04-01
Proposed methodology was originally developed by our scientific team in Split who designed multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) computational capabilities of Fup basis functions with compact support capable to resolve all spatial and temporal scales, 2) multi-resolution presentation of heterogeneity as well as all other input and output variables, 3) accurate, adaptive and efficient strategy and 4) semi-analytical properties which increase our understanding of usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is separately analyzed, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but it also describes subsurface processes closely related to their understood physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding the classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we will show recent improvements within the proposed methodology. Since "state of the art" multiresolution approach usually uses method of lines and only spatial adaptive procedure, temporal approximation was rarely considered as a multiscale. Therefore, novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. It means that algorithm uses smaller time steps only in lines where solution changes are intensive. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across different temporal lines and local time stepping control. Critical aspect of time integration accuracy is construction of spatial stencil due to accurate calculation of spatial derivatives. Since common approach applied for wavelets and splines uses a finite difference operator, we developed here collocation one including solution values and differential operator. In this way, new improved algorithm is adaptive in space and time enabling accurate solution for groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between collocation and finite volume approaches are discussed. Finally, results show application of methodology to the groundwater flow problems in highly heterogeneous confined and unconfined aquifers.
An edge-based solution-adaptive method applied to the AIRPLANE code
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.
1995-01-01
Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.
Apramian, Tavis; Watling, Christopher; Lingard, Lorelei; Cristancho, Sayra
2015-10-01
Surgical research struggles to describe the relationship between procedural variations in daily practice and traditional conceptualizations of evidence. The problem has resisted simple solutions, in part, because we lack a solid understanding of how surgeons conceptualize and interact around variation, adaptation, innovation, and evidence in daily practice. This grounded theory study aims to describe the social processes that influence how procedural variation is conceptualized in the surgical workplace. Using the constructivist grounded theory methodology, semi-structured interviews with surgeons (n = 19) from four North American academic centres were collected and analysed. Purposive sampling targeted surgeons with experiential knowledge of the role of variations in the workplace. Theoretical sampling was conducted until a theoretical framework representing key processes was conceptually saturated. Surgical procedural variation was influenced by three key processes. Seeking improvement was shaped by having unsolved procedural problems, adapting in the moment, and pursuing personal opportunities. Orienting self and others to variations consisted of sharing stories of variations with others, taking stock of how a variation promoted personal interests, and placing trust in peers. Acting under cultural and material conditions was characterized by being wary, positioning personal image, showing the logic of a variation, and making use of academic resources to do so. Our findings include social processes that influence how adaptations are incubated in surgical practice and mature into innovations. This study offers a language for conceptualizing the sociocultural influences on procedural variations in surgery. Interventions to change how surgeons interact with variations on a day-to-day basis should consider these social processes in their design. © 2015 John Wiley & Sons, Ltd.
A Multi-Start Evolutionary Local Search for the Two-Echelon Location Routing Problem
NASA Astrophysics Data System (ADS)
Nguyen, Viet-Phuong; Prins, Christian; Prodhon, Caroline
This paper presents a new hybrid metaheuristic between a greedy randomized adaptive search procedure (GRASP) and an evolutionary/iterated local search (ELS/ILS), using Tabu list to solve the two-echelon location routing problem (LRP-2E). The GRASP uses in turn three constructive heuristics followed by local search to generate the initial solutions. From a solution of GRASP, an intensification strategy is carried out by a dynamic alternation between ELS and ILS. In this phase, each child is obtained by mutation and evaluated through a splitting procedure of giant tour followed by a local search. The tabu list, defined by two characteristics of solution (total cost and number of trips), is used to avoid searching a space already explored. The results show that our metaheuristic clearly outperforms all previously published methods on LRP-2E benchmark instances. Furthermore, it is competitive with the best meta-heuristic published for the single-echelon LRP.
Game-Theoretical Design of an Adaptive Distributed Dissemination Protocol for VANETs.
Iza-Paredes, Cristhian; Mezher, Ahmad Mohamad; Aguilar Igartua, Mónica; Forné, Jordi
2018-01-19
Road safety applications envisaged for Vehicular Ad Hoc Networks (VANETs) depend largely on the dissemination of warning messages to deliver information to concerned vehicles. The intended applications, as well as some inherent VANET characteristics, make data dissemination an essential service and a challenging task in this kind of networks. This work lays out a decentralized stochastic solution for the data dissemination problem through two game-theoretical mechanisms. Given the non-stationarity induced by a highly dynamic topology, diverse network densities, and intermittent connectivity, a solution for the formulated game requires an adaptive procedure able to exploit the environment changes. Extensive simulations reveal that our proposal excels in terms of number of transmissions, lower end-to-end delay and reduced overhead while maintaining high delivery ratio, compared to other proposals.
Game-Theoretical Design of an Adaptive Distributed Dissemination Protocol for VANETs
Mezher, Ahmad Mohamad; Aguilar Igartua, Mónica
2018-01-01
Road safety applications envisaged for Vehicular Ad Hoc Networks (VANETs) depend largely on the dissemination of warning messages to deliver information to concerned vehicles. The intended applications, as well as some inherent VANET characteristics, make data dissemination an essential service and a challenging task in this kind of networks. This work lays out a decentralized stochastic solution for the data dissemination problem through two game-theoretical mechanisms. Given the non-stationarity induced by a highly dynamic topology, diverse network densities, and intermittent connectivity, a solution for the formulated game requires an adaptive procedure able to exploit the environment changes. Extensive simulations reveal that our proposal excels in terms of number of transmissions, lower end-to-end delay and reduced overhead while maintaining high delivery ratio, compared to other proposals. PMID:29351255
NASA Astrophysics Data System (ADS)
Obracaj, Piotr; Fabianowski, Dariusz
2017-10-01
Implementations concerning adaptation of historic facilities for public utility objects are associated with the necessity of solving many complex, often conflicting expectations of future users. This mainly concerns the function that includes construction, technology and aesthetic issues. The list of issues is completed with proper protection of historic values, different in each case. The procedure leading to obtaining the expected solution is a multicriteria procedure, usually difficult to accurately define and requiring designer’s large experience. An innovative approach has been used for the analysis, namely - the modified EA FAHP (Extent Analysis Fuzzy Analytic Hierarchy Process) Chang’s method of a multicriteria analysis for the assessment of complex functional and spatial issues. Selection of optimal spatial form of an adapted historic building intended for the multi-functional public utility facility was analysed. The assumed functional flexibility was determined in the scope of: education, conference, and chamber spectacles, such as drama, concerts, in different stage-audience layouts.
Efficient robust doubly adaptive regularized regression with applications.
Karunamuni, Rohana J; Kong, Linglong; Tu, Wei
2018-01-01
We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.
Refined numerical solution of the transonic flow past a wedge
NASA Technical Reports Server (NTRS)
Liang, S.-M.; Fung, K.-Y.
1985-01-01
A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.
Determining gold in water by anion-exchange batch extraction
McHugh, J.B.
1986-01-01
This paper describes a batch procedure for determining gold in natural waters. It is completely adaptable to field operations. The water samples are filtered and acidified before they are equilibrated with an anion-exchange resin by shaking. The gold is then eluted with acetone-nitric acid solution, and the eluate evaporated to dryness. The residue is taken up in hydrobromic acid-bromine solution and the gold is extracted with methyl isobutyl ketone. The extract is electrothermally atomized in an atomic-absorption spectrophotometer. The limit of determination is 1 ng 1. ?? 1986.
NASA Astrophysics Data System (ADS)
Julie, Hongki; Sanjaya, Febi; Anggoro, Ant. Yudhi
2017-08-01
One of purposes of this study was to describe the solution profile of the junior high school students for the PISA adaptation test. The procedures conducted by researchers to achieve this objective were (1) adapting the PISA test, (2) validating the adapting PISA test, (3) asking junior high school students to do the adapting PISA test, and (4) making the students' solution profile. The PISA problems for mathematics could be classified into four areas, namely quantity, space and shape, change and relationship, and uncertainty. The research results that would be presented in this paper were the result test for uncertainty problems. In the adapting PISA test, there were fifteen questions. Subjects in this study were 18 students from 11 junior high schools in Yogyakarta, Central Java, and Banten. The type of research that used by the researchers was a qualitative research. For the first uncertainty problem in the adapting test, 66.67% of students reached level 3. For the second uncertainty problem in the adapting test, 44.44% of students achieved level 4, and 33.33% of students reached level 3. For the third uncertainty problem in the adapting test n, 38.89% of students achieved level 5, 11.11% of students reached level 4, and 5.56% of students achieved level 3. For the part a of the fourth uncertainty problem in the adapting test, 72.22% of students reached level 4 and for the part b of the fourth uncertainty problem in the adapting test, 83.33% students achieved level 4.
Kampf, Günter; Degenhardt, Stina; Lackner, Sibylle; Ostermeyer, Christiane
2014-01-01
Background: It has recently been reported that reusable dispensers for surface disinfection tissues may be contaminated, especially with adapted Achromobacter species 3, when products based on surface-active ingredients are used. Fresh solution may quickly become recontaminated if dispensers are not processed adequately. Methods: We evaluated the abilities of six manual and three automatic processes for processing contaminated dispensers to prevent recolonisation of a freshly-prepared disinfectant solution (Mikrobac forte 0.5%). Dispensers were left at room temperature for 28 days. Samples of the disinfectant solution were taken every 7 days and assessed quantitatively for bacterial contamination. Results: All automatic procedures prevented recolonisation of the disinfectant solution when a temperature of 60–70°C was ensured for at least 5 min, with or without the addition of chemical cleaning agents. Manual procedures prevented recontamination of the disinfectant solution when rinsing with hot water or a thorough cleaning step was performed before treating all surfaces with an alcohol-based disinfectant or an oxygen-releaser. Other cleaning and disinfection procedures, including the use of an alcohol-based disinfectant, did not prevent recolonisation. Conclusions: These results indicate that not all processes are effective for processing reusable dispensers for surface-disinfectant tissues, and that a high temperature during the cleaning step or use of a biofilm-active cleaning agent are essential. PMID:24653973
Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.
Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens
2005-05-01
Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.
An accuracy assessment of Cartesian-mesh approaches for the Euler equations
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A critical assessment of the accuracy of Cartesian-mesh approaches for steady, transonic solutions of the Euler equations of gas dynamics is made. An exact solution of the Euler equations (Ringleb's flow) is used not only to infer the order of the truncation error of the Cartesian-mesh approaches, but also to compare the magnitude of the discrete error directly to that obtained with a structured mesh approach. Uniformly and adaptively refined solutions using a Cartesian-mesh approach are obtained and compared to each other and to uniformly refined structured mesh results. The effect of cell merging is investigated as well as the use of two different K-exact reconstruction procedures. The solution methodology of the schemes is explained and tabulated results are presented to compare the solution accuracies.
Time-dependent grid adaptation for meshes of triangles and tetrahedra
NASA Technical Reports Server (NTRS)
Rausch, Russ D.
1993-01-01
This paper presents in viewgraph form a method of optimizing grid generation for unsteady CFD flow calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich in regions of relatively large errors and to locally coarsen in regions of relatively small errors. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high aspect ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown, but in general requires user supervision for a more efficient solution.
Adaptive implicit-explicit and parallel element-by-element iteration schemes
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.
1989-01-01
Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.
A full potential flow analysis with realistic wake influence for helicopter rotor airload prediction
NASA Technical Reports Server (NTRS)
Egolf, T. Alan; Sparks, S. Patrick
1987-01-01
A 3-D, quasi-steady, full potential flow solver was adapted to include realistic wake influence for the aerodynamic analysis of helicopter rotors. The method is based on a finite difference solution of the full potential equation, using an inner and outer domain procedure for the blade flowfield to accommodate wake effects. The nonlinear flow is computed in the inner domain region using a finite difference solution method. The wake is modeled by a vortex lattice using prescribed geometry techniques to allow for the inclusion of realistic rotor wakes. The key feature of the analysis is that vortices contained within the finite difference mesh (inner domain) were treated with a vortex embedding technique while the influence of the remaining portion of the wake (in the outer domain) is impressed as a boundary condition on the outer surface of the finite difference mesh. The solution procedure couples the wake influence with the inner domain solution in a consistent and efficient solution process. The method has been applied to both hover and forward flight conditions. Correlation with subsonic and transonic hover airload data is shown which demonstrates the merits of the approach.
NASA Astrophysics Data System (ADS)
Re, B.; Dobrzynski, C.; Guardone, A.
2017-07-01
A novel strategy to solve the finite volume discretization of the unsteady Euler equations within the Arbitrary Lagrangian-Eulerian framework over tetrahedral adaptive grids is proposed. The volume changes due to local mesh adaptation are treated as continuous deformations of the finite volumes and they are taken into account by adding fictitious numerical fluxes to the governing equation. This peculiar interpretation enables to avoid any explicit interpolation of the solution between different grids and to compute grid velocities so that the Geometric Conservation Law is automatically fulfilled also for connectivity changes. The solution on the new grid is obtained through standard ALE techniques, thus preserving the underlying scheme properties, such as conservativeness, stability and monotonicity. The adaptation procedure includes node insertion, node deletion, edge swapping and points relocation and it is exploited both to enhance grid quality after the boundary movement and to modify the grid spacing to increase solution accuracy. The presented approach is assessed by three-dimensional simulations of steady and unsteady flow fields. The capability of dealing with large boundary displacements is demonstrated by computing the flow around the translating infinite- and finite-span NACA 0012 wing moving through the domain at the flight speed. The proposed adaptive scheme is applied also to the simulation of a pitching infinite-span wing, where the bi-dimensional character of the flow is well reproduced despite the three-dimensional unstructured grid. Finally, the scheme is exploited in a piston-induced shock-tube problem to take into account simultaneously the large deformation of the domain and the shock wave. In all tests, mesh adaptation plays a crucial role.
NASA Astrophysics Data System (ADS)
Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.
2017-12-01
The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watershed, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by the adaptive surrogate-based multi-objective optimization procedure, using MARS model for approximating the parameter-response relationship and SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance with about 40-85% reduction in 1-NSE, and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, the results of which provide useful information that helps to understand the model behaviors and improve the model simulations.
Adaptive Texture Synthesis for Large Scale City Modeling
NASA Astrophysics Data System (ADS)
Despine, G.; Colleu, T.
2015-02-01
Large scale city models textured with aerial images are well suited for bird-eye navigation but generally the image resolution does not allow pedestrian navigation. One solution to face this problem is to use high resolution terrestrial photos but it requires huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance to organize the knowledge about elementary pattern in a texture catalogue allowing attaching physical information, semantic attributes and to execute selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We experimented this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the facades.
A New Method for 3D Radiative Transfer with Adaptive Grids
NASA Astrophysics Data System (ADS)
Folini, D.; Walder, R.; Psarros, M.; Desboeufs, A.
2003-01-01
We present a new method for 3D NLTE radiative transfer in moving media, including an adaptive grid, along with some test examples and first applications. The central features of our approach we briefly outline in the following. For the solution of the radiative transfer equation, we make use of a generalized mean intensity approach. In this approach, the transfer eqation is solved directly, instead of using the moments of the transfer equation, thus avoiding the associated closure problem. In a first step, a system of equations for the transfer of each directed intensity is set up, using short characteristics. Next, the entity of systems of equations for each directed intensity is re-formulated in the form of one system of equations for the angle-integrated mean intensity. This system then is solved by a modern, fast BiCGStab iterative solver. An additional advantage of this procedure is that convergence rates barely depend on the spatial discretization. For the solution of the rate equations we use Housholder transformations. Lines are treated by a 3D generalization of the well-known Sobolev-approximation. The two parts, solution of the transfer equation and solution of the rate equations, are iteratively coupled. We recently have implemented an adaptive grid, which allows for recursive refinement on a cell-by-cell basis. The spatial resolution, which is always a problematic issue in 3D simulations, we can thus locally reduce or augment, depending on the problem to be solved.
Scheraga, H A; Paine, G H
1986-01-01
We are using a variety of theoretical and computational techniques to study protein structure, protein folding, and higher-order structures. Our earlier work involved treatments of liquid water and aqueous solutions of nonpolar and polar solutes, computations of the stabilities of the fundamental structures of proteins and their packing arrangements, conformations of small cyclic and open-chain peptides, structures of fibrous proteins (collagen), structures of homologous globular proteins, introduction of special procedures as constraints during energy minimization of globular proteins, and structures of enzyme-substrate complexes. Recently, we presented a new methodology for predicting polypeptide structure (described here); the method is based on the calculation of the probable and average conformation of a polypeptide chain by the application of equilibrium statistical mechanics in conjunction with an adaptive, importance sampling Monte Carlo algorithm. As a test, it was applied to Met-enkephalin.
NASA Astrophysics Data System (ADS)
Lin, Geng; Guan, Jian; Feng, Huibin
2018-06-01
The positive influence dominating set problem is a variant of the minimum dominating set problem, and has lots of applications in social networks. It is NP-hard, and receives more and more attention. Various methods have been proposed to solve the positive influence dominating set problem. However, most of the existing work focused on greedy algorithms, and the solution quality needs to be improved. In this paper, we formulate the minimum positive influence dominating set problem as an integer linear programming (ILP), and propose an ILP based memetic algorithm (ILPMA) for solving the problem. The ILPMA integrates a greedy randomized adaptive construction procedure, a crossover operator, a repair operator, and a tabu search procedure. The performance of ILPMA is validated on nine real-world social networks with nodes up to 36,692. The results show that ILPMA significantly improves the solution quality, and is robust.
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1997-01-01
In these lecture notes we describe the construction, analysis, and application of ENO (Essentially Non-Oscillatory) and WENO (Weighted Essentially Non-Oscillatory) schemes for hyperbolic conservation laws and related Hamilton- Jacobi equations. ENO and WENO schemes are high order accurate finite difference schemes designed for problems with piecewise smooth solutions containing discontinuities. The key idea lies at the approximation level, where a nonlinear adaptive procedure is used to automatically choose the locally smoothest stencil, hence avoiding crossing discontinuities in the interpolation procedure as much as possible. ENO and WENO schemes have been quite successful in applications, especially for problems containing both shocks and complicated smooth solution structures, such as compressible turbulence simulations and aeroacoustics. These lecture notes are basically self-contained. It is our hope that with these notes and with the help of the quoted references, the reader can understand the algorithms and code them up for applications.
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive- grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
Adaptive process control using fuzzy logic and genetic algorithms
NASA Technical Reports Server (NTRS)
Karr, C. L.
1993-01-01
Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
Adaptive Process Control with Fuzzy Logic and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Karr, C. L.
1993-01-01
Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
Genetic algorithms in adaptive fuzzy control
NASA Technical Reports Server (NTRS)
Karr, C. Lucas; Harper, Tony R.
1992-01-01
Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust fuzzy membership functions in response to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific computer-simulated chemical system is used to demonstrate the ideas presented.
Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G
2016-01-01
This article presents an Evolution Strategy (ES)--based algorithm, designed to self-adapt its mutation operators, guiding the search into the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of the Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different [Formula: see text]-Hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability in combining the application of move operations from distinct neighborhood structures along the optimization. The results gathered and reported in this article represent a collective evidence of the performance of the method in challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability of adapting the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework for solving other combinatorial optimization problems.
Chen, Xiongzhi; Doerge, Rebecca W; Heyse, Joseph F
2018-05-11
We consider multiple testing with false discovery rate (FDR) control when p values have discrete and heterogeneous null distributions. We propose a new estimator of the proportion of true null hypotheses and demonstrate that it is less upwardly biased than Storey's estimator and two other estimators. The new estimator induces two adaptive procedures, that is, an adaptive Benjamini-Hochberg (BH) procedure and an adaptive Benjamini-Hochberg-Heyse (BHH) procedure. We prove that the adaptive BH (aBH) procedure is conservative nonasymptotically. Through simulation studies, we show that these procedures are usually more powerful than their nonadaptive counterparts and that the adaptive BHH procedure is usually more powerful than the aBH procedure and a procedure based on randomized p-value. The adaptive procedures are applied to a study of HIV vaccine efficacy, where they identify more differentially polymorphic positions than the BH procedure at the same FDR level. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A multigrid method for steady Euler equations on unstructured adaptive grids
NASA Technical Reports Server (NTRS)
Riemslagh, Kris; Dick, Erik
1993-01-01
A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first order accurate inner iteration and a second-order correction performed only on the finest grid, is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured a Jacobi type is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removement. The adaption cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.
Autoadaptivity and optimization in distributed ECG interpretation.
Augustyniak, Piotr
2010-03-01
This paper addresses principal issues of the ECG interpretation adaptivity in a distributed surveillance network. In the age of pervasive access to wireless digital communication, distributed biosignal interpretation networks may not only optimally solve difficult medical cases, but also adapt the data acquisition, interpretation, and transmission to the variable patient's status and availability of technical resources. The background of such adaptivity is the innovative use of results from the automatic ECG analysis to the seamless remote modification of the interpreting software. Since the medical relevance of issued diagnostic data depends on the patient's status, the interpretation adaptivity implies the flexibility of report content and frequency. Proposed solutions are based on the research on human experts behavior, procedures reliability, and usage statistics. Despite the limited scale of our prototype client-server application, the tests yielded very promising results: the transmission channel occupation was reduced by 2.6 to 5.6 times comparing to the rigid reporting mode and the improvement of the remotely computed diagnostic outcome was achieved in case of over 80% of software adaptation attempts.
Fast solution of elliptic partial differential equations using linear combinations of plane waves.
Pérez-Jordá, José M
2016-02-01
Given an arbitrary elliptic partial differential equation (PDE), a procedure for obtaining its solution is proposed based on the method of Ritz: the solution is written as a linear combination of plane waves and the coefficients are obtained by variational minimization. The PDE to be solved is cast as a system of linear equations Ax=b, where the matrix A is not sparse, which prevents the straightforward application of standard iterative methods in order to solve it. This sparseness problem can be circumvented by means of a recursive bisection approach based on the fast Fourier transform, which makes it possible to implement fast versions of some stationary iterative methods (such as Gauss-Seidel) consuming O(NlogN) memory and executing an iteration in O(Nlog(2)N) time, N being the number of plane waves used. In a similar way, fast versions of Krylov subspace methods and multigrid methods can also be implemented. These procedures are tested on Poisson's equation expressed in adaptive coordinates. It is found that the best results are obtained with the GMRES method using a multigrid preconditioner with Gauss-Seidel relaxation steps.
NASA Technical Reports Server (NTRS)
Schoenauer, W.; Daeubler, H. G.; Glotz, G.; Gruening, J.
1986-01-01
An implicit difference procedure for the solution of equations for a chemically reacting hypersonic boundary layer is described. Difference forms of arbitrary error order in the x and y coordinate plane were used to derive estimates for discretization error. Computational complexity and time were minimized by the use of this difference method and the iteration of the nonlinear boundary layer equations was regulated by discretization error. Velocity and temperature profiles are presented for Mach 20.14 and Mach 18.5; variables are velocity profiles, temperature profiles, mass flow factor, Stanton number, and friction drag coefficient; three figures include numeric data.
A-posteriori error estimation for the finite point method with applications to compressible flow
NASA Astrophysics Data System (ADS)
Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio
2017-08-01
An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.
The Hartree-Fock calculation of the magnetic properties of molecular solutes
NASA Astrophysics Data System (ADS)
Cammi, R.
1998-08-01
In this paper we set the formal bases for the calculation of the magnetic susceptibility and of the nuclear magnetic shielding tensors for molecular solutes described within the framework of the polarizable continuum model (PCM). The theory has been developed at self-consistent field (SCF) level and adapted to be used within the framework of some of the computational procedures of larger use, i.e., the gauge invariant atomic orbital method (GIAO) and the continuous set gauge transformation method (CSGT). The numerical results relative to the magnetizabilities and chemical shielding of acetonitrile and nitrometane in various solvents computed with the PCM-CSGT method are also presented.
Numerical Hydrodynamics in General Relativity.
Font, José A
2000-01-01
The current status of numerical solutions for the equations of ideal general relativistic hydrodynamics is reviewed. Different formulations of the equations are presented, with special mention of conservative and hyperbolic formulations well-adapted to advanced numerical methods. A representative sample of available numerical schemes is discussed and particular emphasis is paid to solution procedures based on schemes exploiting the characteristic structure of the equations through linearized Riemann solvers. A comprehensive summary of relevant astrophysical simulations in strong gravitational fields, including gravitational collapse, accretion onto black holes and evolution of neutron stars, is also presented. Supplementary material is available for this article at 10.12942/lrr-2000-2.
Transformation of two and three-dimensional regions by elliptic systems
NASA Technical Reports Server (NTRS)
Mastin, C. Wayne
1991-01-01
A reliable linear system is presented for grid generation in 2-D and 3-D. The method is robust in the sense that convergence is guaranteed but is not as reliable as other nonlinear elliptic methods in generating nonfolding grids. The construction of nonfolding grids depends on having reasonable approximations of cell aspect ratios and an appropriate distribution of grid points on the boundary of the region. Some guidelines are included on approximating the aspect ratios, but little help is offered on setting up the boundary grid other than to say that in 2-D the boundary correspondence should be close to that generated by a conformal mapping. It is assumed that the functions which control the grid distribution depend only on the computational variables and not on the physical variables. Whether this is actually the case depends on how the grid is constructed. In a dynamic adaptive procedure where the grid is constructed in the process of solving a fluid flow problem, the grid is usually updated at fixed iteration counts using the current value of the control function. Since the control function is not being updated during the iteration of the grid equations, the grid construction is a linear procedure. However, in the case of a static adaptive procedure where a trial solution is computed and used to construct an adaptive grid, the control functions may be recomputed at every step of the grid iteration.
Practical advantages of evolutionary computation
NASA Astrophysics Data System (ADS)
Fogel, David B.
1997-10-01
Evolutionary computation is becoming a common technique for solving difficult, real-world problems in industry, medicine, and defense. This paper reviews some of the practical advantages to using evolutionary algorithms as compared with classic methods of optimization or artificial intelligence. Specific advantages include the flexibility of the procedures, as well as their ability to self-adapt the search for optimum solutions on the fly. As desktop computers increase in speed, the application of evolutionary algorithms will become routine.
Measure Guideline. Deep Energy Enclosure Retrofit for Zero Energy Ready House Flat Roofs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loomis, H.; Pettit, B.
2015-05-29
This Measure Guideline provides design and construction information for a deep energy enclosure retrofit solution of a flat roof assembly. It describes the strategies and procedures for an exterior retrofit of a flat wood-framed roof with brick masonry exterior walls using exterior and interior (framing cavity) insulation. The approach supported in this guide could also be adapted for use with flat wood-framed roofs with wood-framed exterior walls.
NASA Technical Reports Server (NTRS)
Ashford, Gregory A.; Powell, Kenneth G.
1995-01-01
A method for generating high quality unstructured triangular grids for high Reynolds number Navier-Stokes calculations about complex geometries is described. Careful attention is paid in the mesh generation process to resolving efficiently the disparate length scales which arise in these flows. First the surface mesh is constructed in a way which ensures that the geometry is faithfully represented. The volume mesh generation then proceeds in two phases thus allowing the viscous and inviscid regions of the flow to be meshed optimally. A solution-adaptive remeshing procedure which allows the mesh to adapt itself to flow features is also described. The procedure for tracking wakes and refinement criteria appropriate for shock detection are described. Although at present it has only been implemented in two dimensions, the grid generation process has been designed with the extension to three dimensions in mind. An implicit, higher-order, upwind method is also presented for computing compressible turbulent flows on these meshes. Two recently developed one-equation turbulence models have been implemented to simulate the effects of the fluid turbulence. Results for flow about a RAE 2822 airfoil and a Douglas three-element airfoil are presented which clearly show the improved resolution obtainable.
NASA Astrophysics Data System (ADS)
Falugi, P.; Olaru, S.; Dumur, D.
2010-08-01
This article proposes an explicit robust predictive control solution based on linear matrix inequalities (LMIs). The considered predictive control strategy uses different local descriptions of the system dynamics and uncertainties and thus allows the handling of less conservative input constraints. The computed control law guarantees constraint satisfaction and asymptotic stability. The technique is effective for a class of nonlinear systems embedded into polytopic models. A detailed discussion of the procedures which adapt the partition of the state space is presented. For the practical implementation the construction of suitable (explicit) descriptions of the control law are described upon concrete algorithms.
Adaptive correction procedure for TVL1 image deblurring under impulse noise
NASA Astrophysics Data System (ADS)
Bai, Minru; Zhang, Xiongjun; Shao, Qianqian
2016-08-01
For the problem of image restoration of observed images corrupted by blur and impulse noise, the widely used TVL1 model may deviate from both the data-acquisition model and the prior model, especially for high noise levels. In order to seek a solution of high recovery quality beyond the reach of the TVL1 model, we propose an adaptive correction procedure for TVL1 image deblurring under impulse noise. Then, a proximal alternating direction method of multipliers (ADMM) is presented to solve the corrected TVL1 model and its convergence is also established under very mild conditions. It is verified by numerical experiments that our proposed approach outperforms the TVL1 model in terms of signal-to-noise ratio (SNR) values and visual quality, especially for high noise levels: it can handle salt-and-pepper noise as high as 90% and random-valued noise as high as 70%. In addition, a comparison with a state-of-the-art method, the two-phase method, demonstrates the superiority of the proposed approach.
Mirza, Nadine; Panagioti, Maria; Waheed, Muhammad Wali; Waheed, Waquas
2017-09-13
The ACE-III, a gold standard for screening cognitive impairment, is restricted by language and culture, with no uniform set of guidelines for its adaptation. To develop guidelines a compilation of all the adaptation procedures undertaken by adapters of the ACE-III and its predecessors is needed. We searched EMBASE, Medline and PsychINFO and screened publications from a previous review. We included publications on adapted versions of the ACE-III and its predecessors, extracting translation and cultural adaptation procedures and assessing their quality. We deemed 32 papers suitable for analysis. 7 translation steps were identified and we determined which items of the ACE-III are culturally dependent. This review lists all adaptations of the ACE, ACE-R and ACE-III, rates the reporting of their adaptation procedures and summarises adaptation procedures into steps that can be undertaken by adapters.
Context-driven Salt Seeking Test (Rats)
Chang, Stephen E.; Smith, Kyle S.
2018-01-01
Changes in reward seeking behavior often occur through incremental learning based on the difference between what is expected and what actually happens. Behavioral flexibility of this sort requires experience with rewards as better or worse than expected. However, there are some instances in which behavior can change through non-incremental learning, which requires no further experience with an outcome. Such an example of non-incremental learning is the salt appetite phenomenon. In this case, animals such as rats will immediately seek out a highly-concentrated salt solution that was previously undesired when they are put in a novel state of sodium deprivation. Importantly, this adaptive salt-seeking behavior occurs despite the fact that the rats never tasted salt in the depleted state, and therefore never tasted it as a highly desirable reward. The following protocol is a method to investigate the neural circuitry mediating adaptive salt seeking using a conditioned place preference (CPP) procedure. The procedure is designed to provide an opportunity to discover possible dissociations between the neural circuitry mediating salt seeking and salt consumption to replenish the bodily deficit after sodium depletion. Additionally, this procedure is amenable to incorporating a number of neurobiological techniques for studying the brain basis of this behavior.
NASA Astrophysics Data System (ADS)
Julie, Hongki; Sanjaya, Febi; Yudhi Anggoro, Ant.
2017-09-01
One of purposes of this study was to describe the solution profile of the junior high school students for the PISA adaptation test. The procedures conducted by researchers to achieve this objective were (1) adapting the PISA test, (2) validating the adapting PISA test, (3) asking junior high school students to do the adapting PISA test, and (4) making the students’ solution profile. The PISA problems for mathematics could be classified into four areas, namely quantity, space and shape, change and relationship, and uncertainty. The research results that would be presented in this paper were the result test for quantity, and change and relationship problems. In the adapting PISA test, there were fifteen questions that consist of two questions for the quantity group, six questions for space and shape group, three questions for the change and relationship group, and four questions for uncertainty. Subjects in this study were 18 students from 11 junior high schools in Yogyakarta, Central Java, and Banten. The type of research that used by the researchers was a qualitative research. For the first quantity problem, there were 38.89 % students who achieved level 3. For the second quantity problem, there were 88.89 % students who achieved level 2. For part a of the first change and relationship problem, there were 55.56 % students who achieved level 5. For part b of the first change and relationship problem, there were 77.78 % students who achieved level 2. For the second change and relationship problem, there were 38.89 % students who achieved level 2.
Modified Sham Feeding of Sweet Solutions in Women with and without Bulimia Nervosa
Klein, DA; Schebendach, JE; Brown, AJ; Smith, GP; Walsh, BT
2009-01-01
Although it is possible that binge eating in humans is due to increased responsiveness of orosensory excitatory controls of eating, there is no direct evidence for this because food ingested during a test meal stimulates both orosensory excitatory and postingestive inhibitory controls. To overcome this problem, we adapted the modified sham feeding technique (MSF) to measure the orosensory excitatory control of intake of a series of sweetened solutions. Previously published data showed the feasibility of a “sip-and-spit” procedure in nine healthy control women using solutions flavored with cherry Kool Aid® and sweetened with sucrose (0-20%)1. The current study extended this technique to measure the intake of artificially sweetened solutions in women with bulimia nervosa (BN) and in women with no history of eating disorders. Ten healthy women and 11 women with BN were randomly presented with cherry Kool Aid® solutions sweetened with five concentrations of aspartame (0, 0.01, 0.03, 0.08 and 0.28%) in a closed opaque container fitted with a straw. They were instructed to sip as much as they wanted of the solution during 1-minute trials and to spit the fluid out into another opaque container. Across all subjects, presence of sweetener increased intake (p<0.001). Women with BN sipped 40.5-53.1% more of all solutions than controls (p=0.03 for total intake across all solutions). Self-report ratings of liking, wanting and sweetness of solutions did not differ between groups. These results support the feasibility of a MSF procedure using artificially sweetened solutions, and the hypothesis that the orosensory stimulation of MSF provokes larger intake in women with BN than controls. PMID:18773914
Measure Guideline: Deep Energy Enclosure Retrofit for Zero Energy Ready House Flat Roofs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loomis, H.; Pettit, B.
2015-05-01
This Measure Guideline provides design and construction information for a deep energy enclosure retrofit (DEER) solution of a flat roof assembly. It describes the strategies and procedures for an exterior retrofit of a flat, wood-framed roof with brick masonry exterior walls, using exterior and interior (framing cavity) insulation. The approach supported in this guide could also be adapted for use with flat, wood-framed roofs with wood-framed exterior walls.
SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE
NASA Technical Reports Server (NTRS)
Davies, C. B.
1994-01-01
SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is acceptable since it makes possible an overall and local error reduction through grid redistribution. SAGE includes the ability to modify the adaption techniques in boundary regions, which substantially improves the flexibility of the adaptive scheme. The vectorial approach used in the analysis also provides flexibility. The user has complete choice of adaption direction and order of sequential adaptions without concern for the computational data structure. Multiple passes are available with no restraint on stepping directions; for each adaptive pass the user can choose a completely new set of adaptive parameters. This facility, combined with the capability of edge boundary control, enables the code to individually adapt multi-dimensional multiple grids. Zonal grids can be adapted while maintaining continuity along the common boundaries. For patched grids, the multiple-pass capability enables complete adaption. SAGE is written in FORTRAN 77 and is intended to be machine independent; however, it requires a FORTRAN compiler which supports NAMELIST input. It has been successfully implemented on Sun series computers, SGI IRIS's, DEC MicroVAX computers, HP series computers, the Cray YMP, and IBM PC compatibles. Source code is provided, but no sample input and output files are provided. The code reads three datafiles: one that contains the initial grid coordinates (x,y,z), one that contains corresponding flow-field variables, and one that contains the user control parameters. It is assumed that the first two datasets are formatted as defined in the plotting software package PLOT3D. Several machine versions of PLOT3D are available from COSMIC. The amount of main memory is dependent on the size of the matrix. The standard distribution medium for SAGE is a 5.25 inch 360K MS-DOS format diskette. It is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format or on a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. SAGE was developed in 1989, first released as a 2D version in 1991 and updated to 3D in 1993.
Non-parametric diffeomorphic image registration with the demons algorithm.
Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas
2007-01-01
We propose a non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. The demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. The main idea of our algorithm is to adapt this procedure to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of free form deformations by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the true ones in terms of Jacobians.
A shock-capturing SPH scheme based on adaptive kernel estimation
NASA Astrophysics Data System (ADS)
Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime
2006-02-01
Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
Adaptive [theta]-methods for pricing American options
NASA Astrophysics Data System (ADS)
Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran
2008-12-01
We develop adaptive [theta]-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of [theta]-methods would require a Newton-type iterative procedure at each time step thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which [theta]-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
Assessment of Three “WHO” Patient Safety Solutions: Where Do We Stand and What Can We Do?
Banihashemi, Sheida; Hatam, Nahid; Zand, Farid; Kharazmi, Erfan; Nasimi, Soheila; Askarian, Mehrdad
2015-01-01
Background: Most medical errors are preventable. The aim of this study was to compare the current execution of the 3 patient safety solutions with WHO suggested actions and standards. Methods: Data collection forms and direct observation were used to determine the status of implementation of existing protocols, resources, and tools. Results: In the field of patient hand-over, there was no standardized approach. In the field of the performance of correct procedure at the correct body site, there were no safety checklists, guideline, and educational content for informing the patients and their families about the procedure. In the field of hand hygiene (HH), although availability of necessary resources was acceptable, availability of promotional HH posters and reminders was substandard. Conclusions: There are some limitations of resources, protocols, and standard checklists in all three areas. We designed some tools that will help both wards to improve patient safety by the implementation of adapted WHO suggested actions. PMID:26900434
Biclustering of gene expression data using reactive greedy randomized adaptive search procedure.
Dharan, Smitha; Nair, Achuthsankar S
2009-01-30
Biclustering algorithms belong to a distinct class of clustering algorithms that perform simultaneous clustering of both rows and columns of the gene expression matrix and can be a very useful analysis tool when some genes have multiple functions and experimental conditions are diverse. Cheng and Church have introduced a measure called mean squared residue score to evaluate the quality of a bicluster and has become one of the most popular measures to search for biclusters. In this paper, we review basic concepts of the metaheuristics Greedy Randomized Adaptive Search Procedure (GRASP)-construction and local search phases and propose a new method which is a variant of GRASP called Reactive Greedy Randomized Adaptive Search Procedure (Reactive GRASP) to detect significant biclusters from large microarray datasets. The method has two major steps. First, high quality bicluster seeds are generated by means of k-means clustering. In the second step, these seeds are grown using the Reactive GRASP, in which the basic parameter that defines the restrictiveness of the candidate list is self-adjusted, depending on the quality of the solutions found previously. We performed statistical and biological validations of the biclusters obtained and evaluated the method against the results of basic GRASP and as well as with the classic work of Cheng and Church. The experimental results indicate that the Reactive GRASP approach outperforms the basic GRASP algorithm and Cheng and Church approach. The Reactive GRASP approach for the detection of significant biclusters is robust and does not require calibration efforts.
Azlan, C A; Mohd Nasir, N F; Saifizul, A A; Faizul, M S; Ng, K H; Abdullah, B J J
2007-12-01
Percutaneous image-guided needle biopsy is typically performed in highly vascular organs or in tumours with rich macroscopic and microscopic blood supply. The main risks related to this procedure are haemorrhage and implantation of tumour cells in the needle tract after the biopsy needle is withdrawn. From numerous conducted studies, it was found that heating the needle tract using alternating current in radiofrequency (RF) range has a potential to minimize these effects. However, this solution requires the use of specially designed needles, which would make the procedure relatively expensive and complicated. Thus, we propose a simple solution by using readily available coaxial core biopsy needles connected to a radiofrequency ablation (RFA) generator. In order to do so, we have designed and developed an adapter to interface between these two devices. For evaluation purpose, we used a bovine liver as a sample tissue. The experimental procedure was done to study the effect of different parameter settings on the size of coagulation necrosis caused by the RF current heating on the subject. The delivery of the RF energy was varied by changing the values for delivered power, power delivery duration, and insertion depth. The results showed that the size of the coagulation necrosis is affected by all of the parameters tested. In general, the size of the region is enlarged with higher delivery of RF power, longer duration of power delivery, and shallower needle insertion and become relatively constant after a certain value. We also found that the solution proposed provides a low cost and practical way to minimizes unwanted post-biopsy effects.
NASA Astrophysics Data System (ADS)
Dönmez, Orhan
2004-09-01
In this paper, the general procedure to solve the general relativistic hydrodynamical (GRH) equations with adaptive-mesh refinement (AMR) is presented. In order to achieve, the GRH equations are written in the conservation form to exploit their hyperbolic character. The numerical solutions of GRH equations are obtained by high resolution shock Capturing schemes (HRSC), specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two and three dimensions. Results from uniform and AMR grid are compared. It is found that adaptive grid does a better job when the number of resolution is increased. Second, the GRH equations are tested using two different test problems which are Geodesic flow and Circular motion of particle In order to do this, the flux part of GRH equations is coupled with source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second order accurate solutions in space and time.
Fast, adaptive summation of point forces in the two-dimensional Poisson equation
NASA Technical Reports Server (NTRS)
Van Dommelen, Leon; Rundensteiner, Elke A.
1989-01-01
A comparatively simple procedure is presented for the direct summation of the velocity field introduced by point vortices which significantly reduces the required number of operations by replacing selected partial sums by asymptotic series. Tables are presented which demonstrate the speed of this algorithm in terms of the mere doubling of computational time in dealing with a doubling of the number of vortices; current methods involve a computational time extension by a factor of 4. This procedure need not be restricted to the solution of the Poisson equation, and may be applied to other problems involving groups of points in which the interaction between elements of different groups can be simplified when the distance between groups is sufficiently great.
NASA Astrophysics Data System (ADS)
Penven, Pierrick; Debreu, Laurent; Marchesiello, Patrick; McWilliams, James C.
What most clearly distinguishes near-shore and off-shore currents is their dominant spatial scale, O (1-30) km near-shore and O (30-1000) km off-shore. In practice, these phenomena are usually both measured and modeled with separate methods. In particular, it is infeasible for any regular computational grid to be large enough to simultaneously resolve well both types of currents. In order to obtain local solutions at high resolution while preserving the regional-scale circulation at an affordable computational cost, a 1-way grid embedding capability has been integrated into the Regional Oceanic Modeling System (ROMS). It takes advantage of the AGRIF (Adaptive Grid Refinement in Fortran) Fortran 90 package based on the use of pointers. After a first evaluation in a baroclinic vortex test case, the embedding procedure has been applied to a domain that covers the central upwelling region off California, around Monterey Bay, embedded in a domain that spans the continental U.S. Pacific Coast. Long-term simulations (10 years) have been conducted to obtain mean-seasonal statistical equilibria. The final solution shows few discontinuities at the parent-child domain boundary and a valid representation of the local upwelling structure, at a CPU cost only slightly greater than for the inner region alone. The solution is assessed by comparison with solutions for the whole US Pacific Coast at both low and high resolutions and to solutions for only the inner region at high resolution with mean-seasonal boundary conditions.
Deb, Kalyanmoy; Sinha, Ankur
2010-01-01
Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.
1H NMR quantification in very dilute toxin solutions: application to anatoxin-a analysis.
Dagnino, Denise; Schripsema, Jan
2005-08-01
A complete procedure is described for the extraction, detection and quantification of anatoxin-a in biological samples. Anatoxin-a is extracted from biomass by a routine acid base extraction. The extract is analysed by GC-MS, without the need of derivatization, with a detection limit of 0.5 ng. A method was developed for the accurate quantification of anatoxin-a in the standard solution to be used for the calibration of the GC analysis. 1H NMR allowed the accurate quantification of microgram quantities of anatoxin-a. The accurate quantification of compounds in standard solutions is rarely discussed, but for compounds like anatoxin-a (toxins with prices in the range of a million dollar a gram), of which generally only milligram quantities or less are available, this factor in the quantitative analysis is certainly not trivial. The method that was developed can easily be adapted for the accurate quantification of other toxins in very dilute solutions.
Three dimensional unstructured multigrid for the Euler equations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1991-01-01
The three dimensional Euler equations are solved on unstructured tetrahedral meshes using a multigrid strategy. The driving algorithm consists of an explicit vertex-based finite element scheme, which employs an edge-based data structure to assemble the residuals. The multigrid approach employs a sequence of independently generated coarse and fine meshes to accelerate the convergence to steady-state of the fine grid solution. Variables, residuals and corrections are passed back and forth between the various grids of the sequence using linear interpolation. The addresses and weights for interpolation are determined in a preprocessing stage using linear interpolation. The addresses and weights for interpolation are determined in a preprocessing stage using an efficient graph traversal algorithm. The preprocessing operation is shown to require a negligible fraction of the CPU time required by the overall solution procedure, while gains in overall solution efficiencies greater than an order of magnitude are demonstrated on meshes containing up to 350,000 vertices. Solutions using globally regenerated fine meshes as well as adaptively refined meshes are given.
Zhang, M; Westerly, D C; Mackie, T R
2011-08-07
With on-line image guidance (IG), prostate shifts relative to the bony anatomy can be corrected by realigning the patient with respect to the treatment fields. In image guided intensity modulated proton therapy (IG-IMPT), because the proton range is more sensitive to the material it travels through, the realignment may introduce large dose variations. This effect is studied in this work and an on-line adaptive procedure is proposed to restore the planned dose to the target. A 2D anthropomorphic phantom was constructed from a real prostate patient's CT image. Two-field laterally opposing spot 3D-modulation and 24-field full arc distal edge tracking (DET) plans were generated with a prescription of 70 Gy to the planning target volume. For the simulated delivery, we considered two types of procedures: the non-adaptive procedure and the on-line adaptive procedure. In the non-adaptive procedure, only patient realignment to match the prostate location in the planning CT was performed. In the on-line adaptive procedure, on top of the patient realignment, the kinetic energy for each individual proton pencil beam was re-determined from the on-line CT image acquired after the realignment and subsequently used for delivery. Dose distributions were re-calculated for individual fractions for different plans and different delivery procedures. The results show, without adaptive, that both the 3D-modulation and the DET plans experienced delivered dose degradation by having large cold or hot spots in the prostate. The DET plan had worse dose degradation than the 3D-modulation plan. The adaptive procedure effectively restored the planned dose distribution in the DET plan, with delivered prostate D(98%), D(50%) and D(2%) values less than 1% from the prescription. In the 3D-modulation plan, in certain cases the adaptive procedure was not effective to reduce the delivered dose degradation and yield similar results as the non-adaptive procedure. In conclusion, based on this 2D phantom study, by updating the proton pencil beam energy from the on-line image after realignment, this on-line adaptive procedure is necessary and effective for the DET-based IG-IMPT. Without dose re-calculation and re-optimization, it could be easily incorporated into the clinical workflow.
NASA Astrophysics Data System (ADS)
Zhang, M.; Westerly, D. C.; Mackie, T. R.
2011-08-01
With on-line image guidance (IG), prostate shifts relative to the bony anatomy can be corrected by realigning the patient with respect to the treatment fields. In image guided intensity modulated proton therapy (IG-IMPT), because the proton range is more sensitive to the material it travels through, the realignment may introduce large dose variations. This effect is studied in this work and an on-line adaptive procedure is proposed to restore the planned dose to the target. A 2D anthropomorphic phantom was constructed from a real prostate patient's CT image. Two-field laterally opposing spot 3D-modulation and 24-field full arc distal edge tracking (DET) plans were generated with a prescription of 70 Gy to the planning target volume. For the simulated delivery, we considered two types of procedures: the non-adaptive procedure and the on-line adaptive procedure. In the non-adaptive procedure, only patient realignment to match the prostate location in the planning CT was performed. In the on-line adaptive procedure, on top of the patient realignment, the kinetic energy for each individual proton pencil beam was re-determined from the on-line CT image acquired after the realignment and subsequently used for delivery. Dose distributions were re-calculated for individual fractions for different plans and different delivery procedures. The results show, without adaptive, that both the 3D-modulation and the DET plans experienced delivered dose degradation by having large cold or hot spots in the prostate. The DET plan had worse dose degradation than the 3D-modulation plan. The adaptive procedure effectively restored the planned dose distribution in the DET plan, with delivered prostate D98%, D50% and D2% values less than 1% from the prescription. In the 3D-modulation plan, in certain cases the adaptive procedure was not effective to reduce the delivered dose degradation and yield similar results as the non-adaptive procedure. In conclusion, based on this 2D phantom study, by updating the proton pencil beam energy from the on-line image after realignment, this on-line adaptive procedure is necessary and effective for the DET-based IG-IMPT. Without dose re-calculation and re-optimization, it could be easily incorporated into the clinical workflow.
Biclustering of gene expression data using reactive greedy randomized adaptive search procedure
Dharan, Smitha; Nair, Achuthsankar S
2009-01-01
Background Biclustering algorithms belong to a distinct class of clustering algorithms that perform simultaneous clustering of both rows and columns of the gene expression matrix and can be a very useful analysis tool when some genes have multiple functions and experimental conditions are diverse. Cheng and Church have introduced a measure called mean squared residue score to evaluate the quality of a bicluster and has become one of the most popular measures to search for biclusters. In this paper, we review basic concepts of the metaheuristics Greedy Randomized Adaptive Search Procedure (GRASP)-construction and local search phases and propose a new method which is a variant of GRASP called Reactive Greedy Randomized Adaptive Search Procedure (Reactive GRASP) to detect significant biclusters from large microarray datasets. The method has two major steps. First, high quality bicluster seeds are generated by means of k-means clustering. In the second step, these seeds are grown using the Reactive GRASP, in which the basic parameter that defines the restrictiveness of the candidate list is self-adjusted, depending on the quality of the solutions found previously. Results We performed statistical and biological validations of the biclusters obtained and evaluated the method against the results of basic GRASP and as well as with the classic work of Cheng and Church. The experimental results indicate that the Reactive GRASP approach outperforms the basic GRASP algorithm and Cheng and Church approach. Conclusion The Reactive GRASP approach for the detection of significant biclusters is robust and does not require calibration efforts. PMID:19208127
Self-organization in neural networks - Applications in structural optimization
NASA Technical Reports Server (NTRS)
Hajela, Prabhat; Fu, B.; Berke, Laszlo
1993-01-01
The present paper discusses the applicability of ART (Adaptive Resonance Theory) networks, and the Hopfield and Elastic networks, in problems of structural analysis and design. A characteristic of these network architectures is the ability to classify patterns presented as inputs into specific categories. The categories may themselves represent distinct procedural solution strategies. The paper shows how this property can be adapted in the structural analysis and design problem. A second application is the use of Hopfield and Elastic networks in optimization problems. Of particular interest are problems characterized by the presence of discrete and integer design variables. The parallel computing architecture that is typical of neural networks is shown to be effective in such problems. Results of preliminary implementations in structural design problems are also included in the paper.
NASA Astrophysics Data System (ADS)
Mowbray, Andrew James
We present a method of wet chemical synthesis of aluminum-doped silicon nanoparticles (Al-doped Si NPs), encompassing the solution-phase co-reduction of silicon tetrachloride (SiCl4) and aluminum chloride (AlCl 3) by sodium naphthalide (Na[NAP]) in 1,2-dimethoxyethane (DME). The development of this method was inspired by the work of Baldwin et al. at the University of California, Davis, and was adapted for our research through some noteworthy procedural modifications. Centrifugation and solvent-based extraction techniques were used throughout various stages of the synthesis procedure to achieve efficient and well-controlled separation of the Si NP product from the reaction media. In addition, the development of a non-aqueous, formamide-based wash solution facilitated simultaneous removal of the NaCl byproduct and Si NP surface passivation via attachment of 1-octanol to the particle surface. As synthesized, the Si NPs were typically 3-15 nm in diameter, and were mainly amorphous, as opposed to crystalline, as concluded from SAED and XRD diffraction pattern analysis. Aluminum doping at various concentrations was accomplished via the inclusion of aluminum chloride (AlCl3); which was in small quantities dissolved into the synthesis solution to be reduced alongside the SiCl4 precursor. The introduction of Al into the chemically-reduced Si NP precipitate was not found to adversely affect the formation of the Si NPs, but was found to influence aspects such as particle stability and dispersibility throughout various stages of the procedure. Analytical techniques including transmission electron microscopy (TEM), FTIR spectroscopy, and ICP-optical emission spectroscopy were used to comprehensively characterize the product NPs. These methods confirm both the presence of Al and surface-bound 1-octanol in the newly formed Si NPs.
Zijp, Michiel C; Posthuma, Leo; Wintersen, Arjen; Devilee, Jeroen; Swartjes, Frank A
2016-05-01
This paper introduces Solution-focused Sustainability Assessment (SfSA), provides practical guidance formatted as a versatile process framework, and illustrates its utility for solving a wicked environmental management problem. Society faces complex and increasingly wicked environmental problems for which sustainable solutions are sought. Wicked problems are multi-faceted, and deriving of a management solution requires an approach that is participative, iterative, innovative, and transparent in its definition of sustainability and translation to sustainability metrics. We suggest to add the use of a solution-focused approach. The SfSA framework is collated from elements from risk assessment, risk governance, adaptive management and sustainability assessment frameworks, expanded with the 'solution-focused' paradigm as recently proposed in the context of risk assessment. The main innovation of this approach is the broad exploration of solutions upfront in assessment projects. The case study concerns the sustainable management of slightly contaminated sediments continuously formed in ditches in rural, agricultural areas. This problem is wicked, as disposal of contaminated sediment on adjacent land is potentially hazardous to humans, ecosystems and agricultural products. Non-removal would however reduce drainage capacity followed by increased risks of flooding, while contaminated sediment removal followed by offsite treatment implies high budget costs and soil subsidence. Application of the steps in the SfSA-framework served in solving this problem. Important elements were early exploration of a wide 'solution-space', stakeholder involvement from the onset of the assessment, clear agreements on the risk and sustainability metrics of the problem and on the interpretation and decision procedures, and adaptive management. Application of the key elements of the SfSA approach eventually resulted in adoption of a novel sediment management policy. The stakeholder participation and the intensive communication throughout the project resulted in broad support for both the scientific approaches and results, as well as for policy implementation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Vortical Flow Prediction Using an Adaptive Unstructured Grid Method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2001-01-01
A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65deg delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the windtunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.
Investigation of the effects of color on judgments of sweetness using a taste adaptation method.
Hidaka, Souta; Shimoda, Kazumasa
2014-01-01
It has been reported that color can affect the judgment of taste. For example, a dark red color enhances the subjective intensity of sweetness. However, the underlying mechanisms of the effect of color on taste have not been fully investigated; in particular, it remains unclear whether the effect is based on cognitive/decisional or perceptual processes. Here, we investigated the effect of color on sweetness judgments using a taste adaptation method. A sweet solution whose color was subjectively congruent with sweetness was judged as sweeter than an uncolored sweet solution both before and after adaptation to an uncolored sweet solution. In contrast, subjective judgment of sweetness for uncolored sweet solutions did not differ between the conditions following adaptation to a colored sweet solution and following adaptation to an uncolored one. Color affected sweetness judgment when the target solution was colored, but the colored sweet solution did not modulate the magnitude of taste adaptation. Therefore, it is concluded that the effect of color on the judgment of taste would occur mainly in cognitive/decisional domains.
Development of a pressure based multigrid solution method for complex fluid flows
NASA Technical Reports Server (NTRS)
Shyy, Wei
1991-01-01
In order to reduce the computational difficulty associated with a single grid (SG) solution procedure, the multigrid (MG) technique was identified as a useful means for improving the convergence rate of iterative methods. A full MG full approximation storage (FMG/FAS) algorithm is used to solve the incompressible recirculating flow problems in complex geometries. The algorithm is implemented in conjunction with a pressure correction staggered grid type of technique using the curvilinear coordinates. In order to show the performance of the method, two flow configurations, one a square cavity and the other a channel, are used as test problems. Comparisons are made between the iterations, equivalent work units, and CPU time. Besides showing that the MG method can yield substantial speed-up with wide variations in Reynolds number, grid distributions, and geometry, issues such as the convergence characteristics of different grid levels, the choice of convection schemes, and the effectiveness of the basic iteration smoothers are studied. An adaptive grid scheme is also combined with the MG procedure to explore the effects of grid resolution on the MG convergence rate as well as the numerical accuracy.
MAG3D and its application to internal flowfield analysis
NASA Technical Reports Server (NTRS)
Lee, K. D.; Henderson, T. L.; Choo, Y. K.
1992-01-01
MAG3D (multiblock adaptive grid, 3D) is a 3D solution-adaptive grid generation code which redistributes grid points to improve the accuracy of a flow solution without increasing the number of grid points. The code is applicable to structured grids with a multiblock topology. It is independent of the original grid generator and the flow solver. The code uses the coordinates of an initial grid and the flow solution interpolated onto the new grid. MAG3D uses a numerical mapping and potential theory to modify the grid distribution based on properties of the flow solution on the initial grid. The adaptation technique is discussed, and the capability of MAG3D is demonstrated with several internal flow examples. Advantages of using solution-adaptive grids are also shown by comparing flow solutions on adaptive grids with those on initial grids.
The block adaptive multigrid method applied to the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Pantelelis, Nikos
1993-01-01
In the present study, a scheme capable of solving very fast and robust complex nonlinear systems of equations is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on the prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (18-fold acceleration of the solution) using one fourth of the volumes of a global grid with the same solution accuracy for two test cases.
Solving delay differential equations in S-ADAPT by method of steps.
Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech
2013-09-01
S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement in S-ADAPT a DDE solver using the methods of steps. The method of steps allows one to solve virtually any DDE system by transforming it to an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with ones obtained by the MATLAB DDE solver dde23. The estimation of parameters was tested on the MATLB simulated population pharmacodynamics data. The comparison of S-ADAPT generated solutions for DDE problems with the explicit solutions as well as MATLAB produced solutions which agreed to at least 7 significant digits. The population parameter estimates from using importance sampling expectation-maximization in S-ADAPT agreed with ones used to generate the data. Published by Elsevier Ireland Ltd.
Design of Robust Adaptive Unbalance Response Controllers for Rotors with Magnetic Bearings
NASA Technical Reports Server (NTRS)
Knospe, Carl R.; Tamer, Samir M.; Fedigan, Stephen J.
1996-01-01
Experimental results have recently demonstrated that an adaptive open loop control strategy can be highly effective in the suppression of unbalance induced vibration on rotors supported in active magnetic bearings. This algorithm, however, relies upon a predetermined gain matrix. Typically, this matrix is determined by an optimal control formulation resulting in the choice of the pseudo-inverse of the nominal influence coefficient matrix as the gain matrix. This solution may result in problems with stability and performance robustness since the estimated influence coefficient matrix is not equal to the actual influence coefficient matrix. Recently, analysis tools have been developed to examine the robustness of this control algorithm with respect to structured uncertainty. Herein, these tools are extended to produce a design procedure for determining the adaptive law's gain matrix. The resulting control algorithm has a guaranteed convergence rate and steady state performance in spite of the uncertainty in the rotor system. Several examples are presented which demonstrate the effectiveness of this approach and its advantages over the standard optimal control formulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bleck, Daniela, E-mail: bleck.daniela@baua.bund.de; Wettberg, Wieland, E-mail: wettberg.wieland@baua.bund.de
2012-11-15
Waste management procedures in developing countries are associated with occupational safety and health risks. Gastro-intestinal infections, respiratory and skin diseases as well as muscular-skeletal problems and cutting injuries are commonly found among waste workers around the globe. In order to find efficient, sustainable solutions to reduce occupational risks of waste workers, a methodological risk assessment has to be performed and counteractive measures have to be developed according to an internationally acknowledged hierarchy. From a case study in Addis Ababa, Ethiopia suggestions for the transferral of collected household waste into roadside containers are given. With construction of ramps to dump collectedmore » household waste straight into roadside containers and an adaptation of pushcarts and collection procedures, the risk is tackled at the source.« less
Multigrid solution of internal flows using unstructured solution adaptive meshes
NASA Technical Reports Server (NTRS)
Smith, Wayne A.; Blake, Kenneth R.
1992-01-01
This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will primarily be applied, but not necessarily limited, to high speed internal flows in turbomachinery.
Errors in the estimation method for the rejection of vibrations in adaptive optics systems
NASA Astrophysics Data System (ADS)
Kania, Dariusz
2017-06-01
In recent years the problem of the mechanical vibrations impact in adaptive optics (AO) systems has been renewed. These signals are damped sinusoidal signals and have deleterious effect on the system. One of software solutions to reject the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation) where the procedure has three steps: estimation of perturbation parameters, estimation of the frequency response of the plant, update the reference signal to reject/minimalize the vibration. In the first step a very important problem is the estimation method. A very accurate and fast (below 10 ms) estimation method of these three parameters has been presented in several publications in recent years. The method is based on using the spectrum interpolation and MSD time windows and it can be used to estimate multifrequency signals. In this paper the estimation method is used in the AVC method to increase the system performance. There are several parameters that affect the accuracy of obtained results, e.g. CiR - number of signal periods in a measurement window, N - number of samples in the FFT procedure, H - time window order, SNR, b - number of ADC bits, γ - damping ratio of the tested signal. Systematic errors increase when N, CiR, H decrease and when γ increases. The value for systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can used to estimate maximum systematic errors for given values of H, CiR and N before the start of the estimation process.
Adaptive graph-based multiple testing procedures
Klinglmueller, Florian; Posch, Martin; Koenig, Franz
2016-01-01
Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures are special cases of the graph based tests. We generalize these graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong family wise error rate control. To maintain the familywise error rate, it is not required to prespecify the adaption rule in detail. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios including trials with multiple treatment comparisons, endpoints or subgroups, or combinations thereof. Examples of adaptations are dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, in the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure. Only if adaptations are actually implemented, an adjusted test needs to be applied. The procedure is illustrated with a case study and its operating characteristics are investigated by simulations. PMID:25319733
Removing damped sinusoidal vibrations in adaptive optics systems using a DFT-based estimation method
NASA Astrophysics Data System (ADS)
Kania, Dariusz
2017-06-01
The problem of a vibrations rejection in adaptive optics systems is still present in publications. These undesirable signals emerge because of shaking the system structure, the tracking process, etc., and they usually are damped sinusoidal signals. There are some mechanical solutions to reduce the signals but they are not very effective. One of software solutions are very popular adaptive methods. An AVC (Adaptive Vibration Cancellation) method has been presented and developed in recent years. The method is based on the estimation of three vibrations parameters and values of frequency, amplitude and phase are essential to produce and adjust a proper signal to reduce or eliminate vibrations signals. This paper presents a fast (below 10 ms) and accurate estimation method of frequency, amplitude and phase of a multifrequency signal that can be used in the AVC method to increase the AO system performance. The method accuracy depends on several parameters: CiR - number of signal periods in a measurement window, N - number of samples in the FFT procedure, H - time window order, SNR, THD, b - number of A/D converter bits in a real time system, γ - the damping ratio of the tested signal, φ - the phase of the tested signal. Systematic errors increase when N, CiR, H decrease and when γ increases. The value of systematic error for γ = 0.1%, CiR = 1.1 and N = 32 is approximately 10^-4 Hz/Hz. This paper focuses on systematic errors of and effect of the signal phase and values of γ on the results.
Martínez, T; Cordero, B; Medín, S; Sánchez Salmón, A
2011-01-01
To establish an automated procedure for the preparation of sodium fluoride (18)F injection using the resources available in our laboratory for the preparation of (18)FDG and to analyze the repercussion of the conditioning column of the fluoride ion entrapment on the characteristics of the final product. The sequence of an (18)FDG synthesis module prepared so that it traps the fluoride ion from the cyclotron in ion-exchange resin diluted with 0.9% sodium chloride. The final solution was dosified and sterilized in a final vial in an automatized dispensing module. Three different column conditioning protocols within the process were tested. Quality controls were run according to USP 32 and EurPh 6, adding control of ethanol levels of residual solvent and quality controls of the solution at 8 h post-preparation. Activation of the resin cartridges with ethanol and water was the chosen procedure, with fluoride ion trapping > 95% and pH around 7. Ethanol levels were < 5.000 ppm. Quality controls at 8 h indicated that the solution was in compliance with the USP 32 and EurPh 6 specifications. This is an easy, low-cost, reliable automated method for sodium fluoride preparation in PET facilities with existing equipment for (18)FDG synthesis and quality control. Copyright © 2010 Elsevier España, S.L. y SEMNIM. All rights reserved.
Adaptive multigrid domain decomposition solutions for viscous interacting flows
NASA Technical Reports Server (NTRS)
Rubin, Stanley G.; Srinivasan, Kumar
1992-01-01
Several viscous incompressible flows with strong pressure interaction and/or axial flow reversal are considered with an adaptive multigrid domain decomposition procedure. Specific examples include the triple deck structure surrounding the trailing edge of a flat plate, the flow recirculation in a trough geometry, and the flow in a rearward facing step channel. For the latter case, there are multiple recirculation zones, of different character, for laminar and turbulent flow conditions. A pressure-based form of flux-vector splitting is applied to the Navier-Stokes equations, which are represented by an implicit lowest-order reduced Navier-Stokes (RNS) system and a purely diffusive, higher-order, deferred-corrector. A trapezoidal or box-like form of discretization insures that all mass conservation properties are satisfied at interfacial and outflow boundaries, even for this primitive-variable, non-staggered grid computation.
Bensman, Rachel S; Slusher, Tina M; Butteris, Sabrina M; Pitt, Michael B; On Behalf Of The Sugar Pearls Investigators; Becker, Amanda; Desai, Brinda; George, Alisha; Hagen, Scott; Kiragu, Andrew; Johannsen, Ron; Miller, Kathleen; Rule, Amy; Webber, Sarah
2017-11-01
The authors describe a multiinstitutional collaborative project to address a gap in global health training by creating a free online platform to share a curriculum for performing procedures in resource-limited settings. This curriculum called PEARLS (Procedural Education for Adaptation to Resource-Limited Settings) consists of peer-reviewed instructional and demonstration videos describing modifications for performing common pediatric procedures in resource-limited settings. Adaptations range from the creation of a low-cost spacer for inhaled medications to a suction chamber for continued evacuation of a chest tube. By describing the collaborative process, we provide a model for educators in other fields to collate and disseminate procedural modifications adapted for their own specialty and location, ideally expanding this crowd-sourced curriculum to reach a wide audience of trainees and providers in global health.
Spilker, R L; de Almeida, E S; Donzelli, P S
1992-01-01
This chapter addresses computationally demanding numerical formulations in the biomechanics of soft tissues. The theory of mixtures can be used to represent soft hydrated tissues in the human musculoskeletal system as a two-phase continuum consisting of an incompressible solid phase (collagen and proteoglycan) and an incompressible fluid phase (interstitial water). We first consider the finite deformation of soft hydrated tissues in which the solid phase is represented as hyperelastic. A finite element formulation of the governing nonlinear biphasic equations is presented based on a mixed-penalty approach and derived using the weighted residual method. Fluid and solid phase deformation, velocity, and pressure are interpolated within each element, and the pressure variables within each element are eliminated at the element level. A system of nonlinear, first-order differential equations in the fluid and solid phase deformation and velocity is obtained. In order to solve these equations, the contributions of the hyperelastic solid phase are incrementally linearized, a finite difference rule is introduced for temporal discretization, and an iterative scheme is adopted to achieve equilibrium at the end of each time increment. We demonstrate the accuracy and adequacy of the procedure using a six-node, isoparametric axisymmetric element, and we present an example problem for which independent numerical solution is available. Next, we present an automated, adaptive environment for the simulation of soft tissue continua in which the finite element analysis is coupled with automatic mesh generation, error indicators, and projection methods. Mesh generation and updating, including both refinement and coarsening, for the two-dimensional examples examined in this study are performed using the finite quadtree approach. The adaptive analysis is based on an error indicator which is the L2 norm of the difference between the finite element solution and a projected finite element solution. Total stress, calculated as the sum of the solid and fluid phase stresses, is used in the error indicator. To allow the finite difference algorithm to proceed in time using an updated mesh, solution values must be transferred to the new nodal locations. This rezoning is accomplished using a projected field for the primary variables. The accuracy and effectiveness of this adaptive finite element analysis is demonstrated using a linear, two-dimensional, axisymmetric problem corresponding to the indentation of a thin sheet of soft tissue. The method is shown to effectively capture the steep gradients and to produce solutions in good agreement with independent, converged, numerical solutions.
Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.
Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong
2014-09-01
A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and the conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and it generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure and the estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. Performance of the MICCG algorithm and the SICCG algorithm are compared with the state-of-the-art approaches.
A novel model for simultaneous study of neointestinal regeneration and intestinal adaptation.
Jwo, Shyh-Chuan; Tang, Shye-Jye; Chen, Jim-Ray; Chiang, Kun-Chun; Huang, Ting-Shou; Chen, Huang-Yang
2013-01-01
The use of autologous grafts, fabricated from tissue-engineered neointestine, to enhance insufficient compensation of intestinal adaptation for severe short bowel syndrome is a compelling idea. Unfortunately, current approaches and knowledge for neointestinal regeneration, unlike intestinal adaptation, are still unsatisfactory. Thus, we have designed a novel model of intestinal adaptation with simultaneous neointestinal regeneration and evaluated its feasibility for future basic research and clinical application. Fifty male Sprague-Dawley rats weighing 250-350 g underwent this procedure and sacrificed at 4, 8, and 12 weeks postoperatively. Spatiotemporal analyses were carried out by gross, histology, and DNA/protein quantification. Three rats died of operative complications. In early experiments, the use of hard silicone stent as tissue scaffold in 11 rats was unsatisfactory for neointestinal regeneration. In later experiments, when a soft silastic tube was used, the success rate increased up to 90.9%. Further analyses revealed that no neointestine developed without donor intestine; regenerated lengths of mucosa and muscle were positively related to time postsurgery but independent of donor length with 0.5 or 1 cm. Other parameters of neointestinal regeneration or intestinal adaptation showed no relationship to both time postsurgery and donor length. In conclusion, this is a potentially important model for investigators searching for solutions to short bowel syndrome. © 2013 by the Wound Healing Society.
Fuzzy logic control of telerobot manipulators
NASA Technical Reports Server (NTRS)
Franke, Ernest A.; Nedungadi, Ashok
1992-01-01
Telerobot systems for advanced applications will require manipulators with redundant 'degrees of freedom' (DOF) that are capable of adapting manipulator configurations to avoid obstacles while achieving the user specified goal. Conventional methods for control of manipulators (based on solution of the inverse kinematics) cannot be easily extended to these situations. Fuzzy logic control offers a possible solution to these needs. A current research program at SRI developed a fuzzy logic controller for a redundant, 4 DOF, planar manipulator. The manipulator end point trajectory can be specified by either a computer program (robot mode) or by manual input (teleoperator). The approach used expresses end-point error and the location of manipulator joints as fuzzy variables. Joint motions are determined by a fuzzy rule set without requiring solution of the inverse kinematics. Additional rules for sensor data, obstacle avoidance and preferred manipulator configuration, e.g., 'righty' or 'lefty', are easily accommodated. The procedure used to generate the fuzzy rules can be extended to higher DOF systems.
Laser hardening techniques on steam turbine blade and application
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Zhang, Qunli; Kong, Fanzhi; Ding, Qingming
Different laser surface hardening techniques, such as laser alloying and laser solution strengthening were adopted to perform modification treatment on the local region of inset edge for 2Cr13 and 17-4PH steam turbine blades to prolong the life of the blades. The microstructures, microhardness and anti-cavitation properties were investigated on the blades after laser treatment. The hardening mechanism and technique adaptability were researched. Large scale installation practices confirmed that the laser surface modification techniques are safe and reliable, which can improve the properties of blades greatly with advantages of high automation, high quality, little distortion and simple procedure.
a Procedural Solution to Model Roman Masonry Structures
NASA Astrophysics Data System (ADS)
Cappellini, V.; Saleri, R.; Stefani, C.; Nony, N.; De Luca, L.
2013-07-01
The paper will describe a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly we have focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we have chosen an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac - PAM, developed by IGN (Paris). We have employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with an open source programming language called Processing, useful for visual, animated or static, 2D or 3D, interactive creations. Using this computer language, a Java environment has been developed. Therefore, even if the procedural modelling reveals an accuracy level inferior to the one obtained by manual modelling (brick by brick), this method can be useful when taking into account the static evaluation on buildings (requiring quantitative aspects) and metric measures for restoration purposes.
NASA Technical Reports Server (NTRS)
Kiris, Cetin
1995-01-01
Development of an incompressible Navier-Stokes solution procedure was performed for the analysis of a liquid rocket engine pump components and for the mechanical heart assist devices. The solution procedure for the propulsion systems is applicable to incompressible Navier-Stokes flows in a steadily rotating frame of reference for any general complex configurations. The computer codes were tested on different complex configurations such as liquid rocket engine inducer and impellers. As a spin-off technology from the turbopump component simulations, the flow analysis for an axial heart pump was conducted. The baseline Left Ventricular Assist Device (LVAD) design was improved by adding an inducer geometry by adapting from the liquid rocket engine pump. The time-accurate mode of the incompressible Navier-Stokes code was validated with flapping foil experiment by using different domain decomposition methods. In the flapping foil experiment, two upstream NACA 0025 foils perform high-frequency synchronized motion and generate unsteady flow conditions for a downstream larger stationary foil. Fairly good agreement was obtained between unsteady experimental data and numerical results from two different moving boundary procedures. Incompressible Navier-Stokes code (INS3D) has been extended for heat transfer applications. The temperature equation was written for both forced and natural convection phenomena. Flow in a square duct case was used for the validation of the code in both natural and forced convection.
On the solution of evolution equations based on multigrid and explicit iterative methods
NASA Astrophysics Data System (ADS)
Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.
2015-08-01
Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen as based on the approximation and stability conditions, rather than on the optimization of iteration convergence to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.
Development of a Countermeasure to Enhance Postflight Locomotor Adaptability
NASA Technical Reports Server (NTRS)
Bloomberg, Jacob J.
2006-01-01
Astronauts returning from space flight experience locomotor dysfunction following their return to Earth. Our laboratory is currently developing a gait adaptability training program that is designed to facilitate recovery of locomotor function following a return to a gravitational environment. The training program exploits the ability of the sensorimotor system to generalize from exposure to multiple adaptive challenges during training so that the gait control system essentially learns to learn and therefore can reorganize more rapidly when faced with a novel adaptive challenge. We have previously confirmed that subjects participating in adaptive generalization training programs using a variety of visuomotor distortions can enhance their ability to adapt to a novel sensorimotor environment. Importantly, this increased adaptability was retained even one month after completion of the training period. Adaptive generalization has been observed in a variety of other tasks requiring sensorimotor transformations including manual control tasks and reaching (Bock et al., 2001, Seidler, 2003) and obstacle avoidance during walking (Lam and Dietz, 2004). Taken together, the evidence suggests that a training regimen exposing crewmembers to variation in locomotor conditions, with repeated transitions among states, may enhance their ability to learn how to reassemble appropriate locomotor patterns upon return from microgravity. We believe exposure to this type of training will extend crewmembers locomotor behavioral repertoires, facilitating the return of functional mobility after long duration space flight. Our proposed training protocol will compel subjects to develop new behavioral solutions under varying sensorimotor demands. Over time subjects will learn to create appropriate locomotor solution more rapidly enabling acquisition of mobility sooner after long-duration space flight. Our laboratory is currently developing adaptive generalization training procedures and the associated flight hardware to implement such a training program during regular inflight treadmill operations. A visual display system will provide variation in visual flow patterns during treadmill exercise. Crewmembers will be exposed to a virtual scene that can translate and rotate in six-degrees-of freedom during their regular treadmill exercise period. Associated ground based studies are focused on determining optimal combinations of sensory manipulations (visual flow, body loading and support surface variation) and training schedules that will produce the greatest potential for adaptive flexibility in gait function during exposure to challenging and novel environments. An overview of our progress in these areas will be discussed during the presentation.
NASA Astrophysics Data System (ADS)
Zhengyong, R.; Jingtian, T.; Changsheng, L.; Xiao, X.
2007-12-01
Although adaptive finite-element (AFE) analysis is becoming more and more focused in scientific and engineering fields, its efficient implementations are remain to be a discussed problem as its more complex procedures. In this paper, we propose a clear C++ framework implementation to show the powerful properties of Object-oriented philosophy (OOP) in designing such complex adaptive procedure. In terms of the modal functions of OOP language, the whole adaptive system is divided into several separate parts such as the mesh generation or refinement, a-posterior error estimator, adaptive strategy and the final post processing. After proper designs are locally performed on these separate modals, a connected framework of adaptive procedure is formed finally. Based on the general elliptic deferential equation, little efforts should be added in the adaptive framework to do practical simulations. To show the preferable properties of OOP adaptive designing, two numerical examples are tested. The first one is the 3D direct current resistivity problem in which the powerful framework is efficiently shown as only little divisions are added. And then, in the second induced polarization£¨IP£©exploration case, new adaptive procedure is easily added which adequately shows the strong extendibility and re-usage of OOP language. Finally we believe based on the modal framework adaptive implementation by OOP methodology, more advanced adaptive analysis system will be available in future.
Regularized two-step brain activity reconstruction from spatiotemporal EEG data
NASA Astrophysics Data System (ADS)
Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry
2004-10-01
We are aiming at using EEG source localization in the framework of a Brain Computer Interface project. We propose here a new reconstruction procedure, targeting source (or equivalently mental task) differentiation. EEG data can be thought of as a collection of time continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources in all the brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only on the retained regions and makes use of a fine discretization of the space, aiming at detailing the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum energy and directional consistency constraints.
Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis
NASA Technical Reports Server (NTRS)
Padovan, J.
1981-01-01
A multiphase selfadaptive predictor corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pro/post buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upperbound dependent iterate excursions during successive incremental Newton Ramphson (INR) type iterations; (20 uses an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality of convergence checks which enable various self adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switch to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.
Stratified and Maximum Information Item Selection Procedures in Computer Adaptive Testing
ERIC Educational Resources Information Center
Deng, Hui; Ansley, Timothy; Chang, Hua-Hua
2010-01-01
In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…
Korakakis, Vasileios; Patsiaouras, Asterios; Malliaropoulos, Nikos
2014-12-01
To cross-culturally adapt the VISA-P questionnaire for Greek-speaking patients and evaluate its psychometric properties. The VISA-P was developed in the English language to evaluate patients with patellar tendinopathy. The validity and use of self-administered questionnaires in different language and cultural populations require a specific procedure in order to maintain their content validity. The VISA-P questionnaire was translated and cross-culturally adapted according to specific guidelines. The validity and reliability were tested in 61 healthy recreational athletes, 64 athletes at risk from different sports, 32 patellar tendinopathy patients and 30 patients with other knee injuries. Participants completed the questionnaire at baseline and after 15-17 days. The questionnaire's face and content validity were judged as good by the expert committee, and the participants. Concurrent validity was almost perfect (ρ=-0.839, p<0.001). Also, factorial validity testing revealed a two-factor solution, which explained 85.6% of the total variance. A one-factor solution explained 80.8% of the variance when the other knee injury group was excluded. Known group validity was demonstrated by significant differences between patients compared with the asymptomatic groups (p<0.001). The VISA-P-GR exhibited very good test-retest reliability (ICC=0.818, p<0.001; 95% CI 0.758 to 0.864) and internal consistency since Cronbach's α analysis ranged from α=0.785 to 0.784 following a 15-17 days interval. The translated VISA-P-GR is a valid and reliable questionnaire and its psychometric properties are comparable with the original and adapted versions. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Regulation of operant oral ethanol self-administration: a dose-response curve study in rats.
Carnicella, Sebastien; Yowell, Quinn V; Ron, Dorit
2011-01-01
Oral ethanol self-administration procedures in rats are useful preclinical tools for the evaluation of potential new pharmacotherapies as well as for the investigation into the etiology of alcohol abuse disorders and addiction. Determination of the effects of a potential treatment on a full ethanol dose-response curve should be essential to predict its clinical efficacy. Unfortunately, this approach has not been fully explored because of the aversive taste reaction to moderate to high doses of ethanol, which may interfere with consumption. In this study, we set out to determine whether a meaningful dose-response curve for oral ethanol self-administration can be obtained in rats. Long-Evans rats were trained to self-administer a 20% ethanol solution in an operant procedure following a history of excessive voluntary ethanol intake. After stabilization of ethanol self-administration, the concentration of the solution was varied from 2.5 to 60% (v/v), and operant and drinking behaviors, as well as blood ethanol concentration (BEC), were evaluated following the self-administration of a 20, 40, and 60% ethanol solution. Varying the concentration of ethanol from 2.5 to 60% after the development of excessive ethanol consumption led to a typical inverted U-shaped dose-response curve. Importantly, rats adapted their level and pattern of responding to changes in ethanol concentration to obtain a constant level of intake and BEC, suggesting that their operant behavior is mainly driven by the motivation to obtain a specific pharmacological effect of ethanol. This procedure can be a useful and straightforward tool for the evaluation of the effects of new potential pharmacotherapies for the treatment of alcohol abuse disorders. Copyright © 2010 by the Research Society on Alcoholism.
NASA Technical Reports Server (NTRS)
Nakamura, S.
1983-01-01
The effects of truncation error on the numerical solution of transonic flows using the full potential equation are studied. The effects of adapting grid point distributions to various solution aspects including shock waves is also discussed. A conclusion is that a rapid change of grid spacing is damaging to the accuracy of the flow solution. Therefore, in a solution adaptive grid application an optimal grid is obtained as a tradeoff between the amount of grid refinement and the rate of grid stretching.
NASA Astrophysics Data System (ADS)
Ercikan, Kadriye; Alper, Naim
2009-03-01
This commentary first summarizes and discusses the analysis of the two translation processes described in the Oliveira, Colak, and Akerson article and the inferences these researchers make based on their research. In the second part of the commentary, we describe procedures and criteria used in adapting tests into different languages and how they may apply to adaptation of instructional materials. The authors provide a good theoretical analysis of what took place in two translation instances and make an important contribution by taking the first step in providing a systematic discussion of adaptation of instructional materials. Our discussion proposes procedures for adapting instructional materials for examining equivalence of source and target versions of adapted instructional materials. We highlight that many of the procedures and criteria used in examining comparability of educational tests is missing in this emerging research of area.
A proposal for amending administrative law to facilitate adaptive management
NASA Astrophysics Data System (ADS)
Craig, Robin K.; Ruhl, J. B.; Brown, Eleanor D.; Williams, Byron K.
2017-07-01
In this article we examine how federal agencies use adaptive management. In order for federal agencies to implement adaptive management more successfully, administrative law must adapt to adaptive management, and we propose changes in administrative law that will help to steer the current process out of a dead end. Adaptive management is a form of structured decision making that is widely used in natural resources management. It involves specific steps integrated in an iterative process for adjusting management actions as new information becomes available. Theoretical requirements for adaptive management notwithstanding, federal agency decision making is subject to the requirements of the federal Administrative Procedure Act, and state agencies are subject to the states’ parallel statutes. We argue that conventional administrative law has unnecessarily shackled effective use of adaptive management. We show that through a specialized ‘adaptive management track’ of administrative procedures, the core values of administrative law—especially public participation, judicial review, and finality— can be implemented in ways that allow for more effective adaptive management. We present and explain draft model legislation (the Model Adaptive Management Procedure Act) that would create such a track for the specific types of agency decision making that could benefit from adaptive management.
A proposal for amending administrative law to facilitate adaptive management
Craig, Robin K.; Ruhl, J.B.; Brown, Eleanor D.; Williams, Byron K.
2017-01-01
In this article we examine how federal agencies use adaptive management. In order for federal agencies to implement adaptive management more successfully, administrative law must adapt to adaptive management, and we propose changes in administrative law that will help to steer the current process out of a dead end. Adaptive management is a form of structured decision making that is widely used in natural resources management. It involves specific steps integrated in an iterative process for adjusting management actions as new information becomes available. Theoretical requirements for adaptive management notwithstanding, federal agency decision making is subject to the requirements of the federal Administrative Procedure Act, and state agencies are subject to the states' parallel statutes. We argue that conventional administrative law has unnecessarily shackled effective use of adaptive management. We show that through a specialized 'adaptive management track' of administrative procedures, the core values of administrative law—especially public participation, judicial review, and finality— can be implemented in ways that allow for more effective adaptive management. We present and explain draft model legislation (the Model Adaptive Management Procedure Act) that would create such a track for the specific types of agency decision making that could benefit from adaptive management.
Distant Operational Care Centre: Design Project Report
NASA Technical Reports Server (NTRS)
1996-01-01
The goal of this project is to outline the design of the Distant Operational Care Centre (DOCC), a modular medical facility to maintain human health and performance in space, that is adaptable to a range of remote human habitats. The purpose of this project is to outline a design, not to go into a complete technical specification of a medical facility for space. This project involves a process to produce a concise set of requirements, addressing the fundamental problems and issues regarding all aspects of a space medical facility for the future. The ideas presented here are at a high level, based on existing, researched, and hypothetical technologies. Given the long development times for space exploration, the outlined concepts from this project embodies a collection of identified problems, and corresponding proposed solutions and ideas, ready to contribute to future space exploration efforts. In order to provide a solid extrapolation and speculation in the context of the future of space medicine, the extent of this project's vision is roughly within the next two decades. The Distant Operational Care Centre (DOCC) is a modular medical facility for space. That is, its function is to maintain human health and performance in space environments, through prevention, diagnosis, and treatment. Furthermore, the DOCC must be adaptable to meet the environmental requirements of different remote human habitats, and support a high quality of human performance. To meet a diverse range of remote human habitats, the DOCC concentrates on a core medical capability that can then be adapted. Adaptation would make use of the DOCC's functional modularity, providing the ability to replace, add, and modify core functions of the DOCC by updating hardware, operations, and procedures. Some of the challenges to be addressed by this project include what constitutes the core medical capability in terms of hardware, operations, and procedures, and how DOCC can be adapted to different remote habitats.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-28
... procedure for determining the energy consumption of electric refrigerators and refrigerator-freezers. The... condensation. The existing test procedure does not take humidity or adaptive control technology into account. Therefore, Electrolux has suggested an alternate test procedure that takes adaptive control technology into...
NASA Astrophysics Data System (ADS)
Kuo, Eric; Hallinen, Nicole R.; Conlin, Luke D.
2017-05-01
One aim of school science instruction is to help students become adaptive problem solvers. Though successful at structuring novice problem solving, step-by-step problem-solving frameworks may also constrain students' thinking. This study utilises a paradigm established by Heckler [(2010). Some consequences of prompting novice physics students to construct force diagrams. International Journal of Science Education, 32(14), 1829-1851] to test how cuing the first step in a standard framework affects undergraduate students' approaches and evaluation of solutions in physics problem solving. Specifically, prompting the construction of a standard diagram before problem solving increases the use of standard procedures, decreasing the use of a conceptual shortcut. Providing a diagram prompt also lowers students' ratings of informal approaches to similar problems. These results suggest that reminding students to follow typical problem-solving frameworks limits their views of what counts as good problem solving.
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
2017-09-04
In this paper, we present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support ourmore » construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Lastly, our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
We present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numericalmore » experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.« less
An hp-adaptivity and error estimation for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1995-01-01
This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
Gravitational instability of slowly rotating isothermal spheres
NASA Astrophysics Data System (ADS)
Chavanis, P. H.
2002-12-01
We discuss the statistical mechanics of rotating self-gravitating systems by allowing properly for the conservation of angular momentum. We study analytically the case of slowly rotating isothermal spheres by expanding the solutions of the Boltzmann-Poisson equation in a series of Legendre polynomials, adapting the procedure introduced by Chandrasekhar (1933) for distorted polytropes. We show how the classical spiral of Lynden-Bell & Wood (1967) in the temperature-energy plane is deformed by rotation. We find that gravitational instability occurs sooner in the microcanonical ensemble and later in the canonical ensemble. According to standard turning point arguments, the onset of the collapse coincides with the minimum energy or minimum temperature state in the series of equilibria. Interestingly, it happens to be close to the point of maximum flattening. We generalize the singular isothermal solution to the case of a slowly rotating configuration. We also consider slowly rotating configurations of the self-gravitating Fermi gas at non-zero temperature.
A Rapid Item-Search Procedure for Bayesian Adaptive Testing.
1977-05-01
properties of the • procedure , they migh t well introduce undesirable psychological effects on test scores (e.g., Betz & Weiss , 1976r.’ , 1976b...ge of results and adaptive ability test .~~~~ (Research Rep . 76—4). Minneapolis: University of Minnesota , Departmen t of Psychology , Psychometric...t~~[AH ~~~ ~~~~ r _ _ _ _ A RAPID ITEM -SEARC H PROCEDURE FOR BAYESIAN ADAPTIVE TESTING C. David Vale d D D Can David J . Weiss RESEARCH REPORT 77-n
NASA Astrophysics Data System (ADS)
Fambri, Francesco; Dumbser, Michael; Zanotti, Olindo
2017-11-01
This paper presents an arbitrary high-order accurate ADER Discontinuous Galerkin (DG) method on space-time adaptive meshes (AMR) for the solution of two important families of non-linear time dependent partial differential equations for compressible dissipative flows : the compressible Navier-Stokes equations and the equations of viscous and resistive magnetohydrodynamics in two and three space-dimensions. The work continues a recent series of papers concerning the development and application of a proper a posteriori subcell finite volume limiting procedure suitable for discontinuous Galerkin methods (Dumbser et al., 2014, Zanotti et al., 2015 [40,41]). It is a well known fact that a major weakness of high order DG methods lies in the difficulty of limiting discontinuous solutions, which generate spurious oscillations, namely the so-called 'Gibbs phenomenon'. In the present work, a nonlinear stabilization of the scheme is sequentially and locally introduced only for troubled cells on the basis of a novel a posteriori detection criterion, i.e. the MOOD approach. The main benefits of the MOOD paradigm, i.e. the computational robustness even in the presence of strong shocks, are preserved and the numerical diffusion is considerably reduced also for the limited cells by resorting to a proper sub-grid. In practice the method first produces a so-called candidate solution by using a high order accurate unlimited DG scheme. Then, a set of numerical and physical detection criteria is applied to the candidate solution, namely: positivity of pressure and density, absence of floating point errors and satisfaction of a discrete maximum principle in the sense of polynomials. Furthermore, in those cells where at least one of these criteria is violated the computed candidate solution is detected as troubled and is locally rejected. Subsequently, a more reliable numerical solution is recomputed a posteriori by employing a more robust but still very accurate ADER-WENO finite volume scheme on the subgrid averages within that troubled cell. Finally, a high order DG polynomial is reconstructed back from the evolved subcell averages. We apply the whole approach for the first time to the equations of compressible gas dynamics and magnetohydrodynamics in the presence of viscosity, thermal conductivity and magnetic resistivity, therefore extending our family of adaptive ADER-DG schemes to cases for which the numerical fluxes also depend on the gradient of the state vector. The distinguished high-resolution properties of the presented numerical scheme standout against a wide number of non-trivial test cases both for the compressible Navier-Stokes and the viscous and resistive magnetohydrodynamics equations. The present results show clearly that the shock-capturing capability of the news schemes is significantly enhanced within a cell-by-cell Adaptive Mesh Refinement (AMR) implementation together with time accurate local time stepping (LTS).
Adaptive Assessment for Nonacademic Secondary Reading.
ERIC Educational Resources Information Center
Hittleman, Daniel R.
Adaptive assessment procedures are a means of determining the quality of a reader's performance in a variety of reading situations and on a variety of written materials. Such procedures are consistent with the idea that there are functional competencies which change with the reading task. Adaptive assessment takes into account that a lack of…
The use of solution adaptive grids in solving partial differential equations
NASA Technical Reports Server (NTRS)
Anderson, D. A.; Rai, M. M.
1982-01-01
The grid point distribution used in solving a partial differential equation using a numerical method has a substantial influence on the quality of the solution. An adaptive grid which adjusts as the solution changes provides the best results when the number of grid points available for use during the calculation is fixed. Basic concepts used in generating and applying adaptive grids are reviewed in this paper, and examples illustrating applications of these concepts are presented.
[Apheresis in children: procedures and outcome].
Tummolo, Albina; Colella, Vincenzo; Bellantuono, Rosa; Giordano, Mario; Messina, Giovanni; Puteo, Flora; Sorino, Palma; De Palo, Tommaso
2012-01-01
Apheresis procedures are used in children to treat an increasing number of conditions by removing different types of substances from the bloodstream. In a previous study we evaluated the first results of our experience in children, emphasizing the solutions adopted to overcome technical difficulties and to adapt adult apheresis procedures to a pediatric population. The aim of the present study is to present data on a larger number of patients in whom apheresis was the main treatment. Ninety-three children (50 m, 43 f) affected by renal and/or extrarenal diseases were included. They were treated with LDL apheresis, protein A immunoadsorption, or plasma exchange. Our therapeutic protocol was the same as described in the previous study. Renal diseases and immunological disorders remained the most common conditions requiring this therapeutic approach. However, hemolytic uremic syndrome (HUS) was no longer the most frequent renal condition to be treated, as apheresis is currently the first treatment option only in cases of atypical HUS. In this series we also treated small children, showing that low weight should no longer be considered a contraindication to apheresis procedures. The low rate of complications and the overall satisfactory clinical results with increasingly advanced technical procedures make a wider use of apheresis in children realistic in the years to come.
Quasi-2D Unsteady Flow Solver Module for Rocket Engine and Propulsion System Simulations
2006-06-14
Conference, Sacramento, CA, 9-12 July 2006. 14. ABSTRACT A new quasi-two-dimensional procedure is presented for the transient solution of real-fluid flows...solution procedures is being developed in parallel to provide verification test cases. The solution procedure for both codes is coupled with a state-of...Davis, Davis, CA, 95616 A new quasi-two-dimensional procedure is presented for the transient solution of real- fluid flows in lines and volumes
A robust, efficient equidistribution 2D grid generation method
NASA Astrophysics Data System (ADS)
Chacon, Luis; Delzanno, Gian Luca; Finn, John; Chung, Jeojin; Lapenta, Giovanni
2007-11-01
We present a new cell-area equidistribution method for two- dimensional grid adaptation [1]. The method is able to satisfy the equidistribution constraint to arbitrary precision while optimizing desired grid properties (such as isotropy and smoothness). The method is based on the minimization of the grid smoothness integral, constrained to producing a given positive-definite cell volume distribution. The procedure gives rise to a single, non-linear scalar equation with no free-parameters. We solve this equation numerically with the Newton-Krylov technique. The ellipticity property of the linearized scalar equation allows multigrid preconditioning techniques to be effectively used. We demonstrate a solution exists and is unique. Therefore, once the solution is found, the adapted grid cannot be folded due to the positivity of the constraint on the cell volumes. We present several challenging tests to show that our new method produces optimal grids in which the constraint is satisfied numerically to arbitrary precision. We also compare the new method to the deformation method [2] and show that our new method produces better quality grids. [1] G.L. Delzanno, L. Chac'on, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, in preparation. [2] G. Liao and D. Anderson, A new approach to grid generation, Appl. Anal. 44, 285--297 (1992).
NASA Astrophysics Data System (ADS)
Qian, Xiaoshan
2018-01-01
The traditional model of evaporation process parameters have continuity and cumulative characteristics of the prediction error larger issues, based on the basis of the process proposed an adaptive particle swarm neural network forecasting method parameters established on the autoregressive moving average (ARMA) error correction procedure compensated prediction model to predict the results of the neural network to improve prediction accuracy. Taking a alumina plant evaporation process to analyze production data validation, and compared with the traditional model, the new model prediction accuracy greatly improved, can be used to predict the dynamic process of evaporation of sodium aluminate solution components.
A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint
NASA Technical Reports Server (NTRS)
Barth, Timothy
2004-01-01
This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems such as the Einstein equations of gravitational physics are then considered. Finally, future directions and open problems are discussed.
Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test
ERIC Educational Resources Information Center
Ho, Tsung-Han; Dodd, Barbara G.
2012-01-01
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
Copper-encapsulated vertically aligned carbon nanotube arrays.
Stano, Kelly L; Chapla, Rachel; Carroll, Murphy; Nowak, Joshua; McCord, Marian; Bradford, Philip D
2013-11-13
A new procedure is described for the fabrication of vertically aligned carbon nanotubes (VACNTs) that are decorated, and even completely encapsulated, by a dense network of copper nanoparticles. The process involves the conformal deposition of pyrolytic carbon (Py-C) to stabilize the aligned carbon-nanotube structure during processing. The stabilized arrays are mildly functionalized using oxygen plasma treatment to improve wettability, and they are then infiltrated with an aqueous, supersaturated Cu salt solution. Once dried, the salt forms a stabilizing crystal network throughout the array. After calcination and H2 reduction, Cu nanoparticles are left decorating the CNT surfaces. Studies were carried out to determine the optimal processing parameters to maximize Cu content in the composite. These included the duration of Py-C deposition and system process pressure as well as the implementation of subsequent and multiple Cu salt solution infiltrations. The optimized procedure yielded a nanoscale hybrid material where the anisotropic alignment from the VACNT array was preserved, and the mass of the stabilized arrays was increased by over 24-fold because of the addition of Cu. The procedure has been adapted for other Cu salts and can also be used for other metal salts altogether, including Ni, Co, Fe, and Ag. The resulting composite is ideally suited for application in thermal management devices because of its low density, mechanical integrity, and potentially high thermal conductivity. Additionally, further processing of the material via pressing and sintering can yield consolidated, dense bulk composites.
André, A; Crouzet, C; De Boissezon, X; Grolleau, J-L
2015-06-01
Surgical treatment of perineal pressure sores could be done with various fascio-cutaneous or musculo-cutaneous flaps, which provide cover and filling of most of pressure sores after spinal cord injuries. In rare cases, classical solutions are overtaken, then it is necessary to use more complex techniques. We report a case of a made-to-measure lower limb flap for coverage of confluent perineal pressure sores. A 49-year-old paraplegic patient developed multiple pressure sores on left and right ischial tuberosity, inferior pubic bone and bilateral trochanters with hips dislocation. Surgical treatment involved a whole right thigh flap to cover and fill right side lesions, associated to a posterior right leg musculo-cutaneous island flap to cover and fill the left trochanteric pressure sore. The surgical procedure lasted 6.5 hours and required massive blood transfusion. Antibiotics were adapted to bacteriological samples. There were no postoperative complications; complete wound healing occurred after three weeks. A lower limb sacrifice for coverage of a giant perineal pressure sores is an extreme surgical solution, reserved to patients understanding the issues of this last chance procedure. A good knowledge of vascular anatomy is an essential prerequisite, and allows to shape made-to-measure flaps. The success of such a procedure is closely linked to the collaboration with the rehabilitation team (appropriate therapeutic education concerning transfers and positioning). Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Adaptive Batch Mode Active Learning.
Chakraborty, Shayok; Balasubramanian, Vineeth; Panchanathan, Sethuraman
2015-08-01
Active learning techniques have gained popularity to reduce human effort in labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar and representative instances to be selected for manual annotation. More recently, there have been attempts toward a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. Real-world applications require adaptive approaches for batch selection in active learning, depending on the complexity of the data stream in question. However, the existing work in this field has primarily focused on static or heuristic batch size selection. In this paper, we propose two novel optimization-based frameworks for adaptive batch mode active learning (BMAL), where the batch size as well as the selection criteria are combined in a single formulation. We exploit gradient-descent-based optimization strategies as well as properties of submodular functions to derive the adaptive BMAL algorithms. The solution procedures have the same computational complexity as existing state-of-the-art static BMAL techniques. Our empirical results on the widely used VidTIMIT and the mobile biometric (MOBIO) data sets portray the efficacy of the proposed frameworks and also certify the potential of these approaches in being used for real-world biometric recognition applications.
Lattice model for water-solute mixtures.
Furlan, A P; Almarza, N G; Barbosa, M C
2016-10-14
A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of interaction of solute/solvent is controlled by tuning the energy interactions between the patches of ALG model. We have studied three set of parameters, resulting in, hydrophilic, inert, and hydrophobic interactions. Extensive Monte Carlo simulations were carried out, and the behavior of pure components and the excess properties of the mixtures have been studied. The pure components, water (solvent) and solute, have quite similar phase diagrams, presenting gas, low density liquid, and high density liquid phases. In the case of solute, the regions of coexistence are substantially reduced when compared with both the water and the standard ALG models. A numerical procedure has been developed in order to attain series of results at constant pressure from simulations of the lattice gas model in the grand canonical ensemble. The excess properties of the mixtures, volume and enthalpy as the function of the solute fraction, have been studied for different interaction parameters of the model. Our model is able to reproduce qualitatively well the excess volume and enthalpy for different aqueous solutions. For the hydrophilic case, we show that the model is able to reproduce the excess volume and enthalpy of mixtures of small alcohols and amines. The inert case reproduces the behavior of large alcohols such as propanol, butanol, and pentanol. For the last case (hydrophobic), the excess properties reproduce the behavior of ionic liquids in aqueous solution.
How Near is a Near-Optimal Solution: Confidence Limits for the Global Optimum.
1980-05-01
or near-optimal solutions are the only practical solutions available. This paper identifies and compares some procedures which use independent near...approximate or near-optimal solutions are the only practical solutions available. This paper identifies and compares some procedures which use inde- pendent...The objective of this paper is to indicate some relatively new statistical procedures for obtaining an upper confidence limit on G Each of these
Spatial Data Quality Control Procedure applied to the Okavango Basin Information System
NASA Astrophysics Data System (ADS)
Butchart-Kuhlmann, Daniel
2014-05-01
Spatial data is a powerful form of information, capable of providing information of great interest and tremendous use to a variety of users. However, much like other data representing the 'real world', precision and accuracy must be high for the results of data analysis to be deemed reliable and thus applicable to real world projects and undertakings. The spatial data quality control (QC) procedure presented here was developed as the topic of a Master's thesis, in the sphere of and using data from the Okavango Basin Information System (OBIS), itself a part of The Future Okavango (TFO) project. The aim of the QC procedure was to form the basis of a method through which to determine the quality of spatial data relevant for application to hydrological, solute, and erosion transport modelling using the Jena Adaptable Modelling System (JAMS). As such, the quality of all data present in OBIS classified under the topics of elevation, geoscientific information, or inland waters, was evaluated. Since the initial data quality has been evaluated, efforts are underway to correct the errors found, thus improving the quality of the dataset.
8s, a numerical simulator of the challenging optical calibration of the E-ELT adaptive mirror M4
NASA Astrophysics Data System (ADS)
Briguglio, Runa; Pariani, Giorgio; Xompero, Marco; Riccardi, Armando; Tintori, Matteo; Lazzarini, Paolo; Spanò, Paolo
2016-07-01
8s stands for Optical Test TOwer Simulator (with 8 read as in italian 'otto'): it is a simulation tool for the optical calibration of the E-ELT deformable mirror M4 on its test facility. It has been developed to identify possible criticalities in the procedure, evaluate the solutions and estimate the sensitivity to environmental noise. The simulation system is composed by the finite elements model of the tower, the analytic influence functions of the actuators, the ray tracing propagation of the laser beam through the optical surfaces. The tool delivers simulated phasemaps of M4, associated with the current system status: actuator commands, optics alignment and position, beam vignetting, bench temperature and vibrations. It is possible to simulate a single step of the optical test of M4 by changing the system parameters according to a calibration procedure and collect the associated phasemap for performance evaluation. In this paper we will describe the simulation package and outline the proposed calibration procedure of M4.
Investigations in adaptive processing of multispectral data
NASA Technical Reports Server (NTRS)
Kriegler, F. J.; Horwitz, H. M.
1973-01-01
Adaptive data processing procedures are applied to the problem of classifying objects in a scene scanned by multispectral sensor. These procedures show a performance improvement over standard nonadaptive techniques. Some sources of error in classification are identified and those correctable by adaptive processing are discussed. Experiments in adaptation of signature means by decision-directed methods are described. Some of these methods assume correlation between the trajectories of different signature means; for others this assumption is not made.
QUEST - A Bayesian adaptive psychometric method
NASA Technical Reports Server (NTRS)
Watson, A. B.; Pelli, D. G.
1983-01-01
An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
Differential Effects of Two Spelling Procedures on Acquisition, Maintenance and Adaption to Reading
ERIC Educational Resources Information Center
Cates, Gary L.; Dunne, Megan; Erkfritz, Karyn N.; Kivisto, Aaron; Lee, Nicole; Wierzbicki, Jennifer
2007-01-01
An alternating treatments design was used to assess the effects of a constant time delay (CTD) procedure and a cover-copy-compare (CCC) procedure on three students' acquisition, subsequent maintenance, and adaptation (i.e., application) of acquired spelling words to reading passages. Students were randomly presented two trials of word lists from…
Continuing challenges for computer-based neuropsychological tests.
Letz, Richard
2003-08-01
A number of issues critical to the development of computer-based neuropsychological testing systems that remain continuing challenges to their widespread use in occupational and environmental health are reviewed. Several computer-based neuropsychological testing systems have been developed over the last 20 years, and they have contributed substantially to the study of neurologic effects of a number of environmental exposures. However, many are no longer supported and do not run on contemporary personal computer operating systems. Issues that are continuing challenges for development of computer-based neuropsychological tests in environmental and occupational health are discussed: (1) some current technological trends that generally make test development more difficult; (2) lack of availability of usable speech recognition of the type required for computer-based testing systems; (3) implementing computer-based procedures and tasks that are improvements over, not just adaptations of, their manually-administered predecessors; (4) implementing tests of a wider range of memory functions than the limited range now available; (5) paying more attention to motivational influences that affect the reliability and validity of computer-based measurements; and (6) increasing the usability of and audience for computer-based systems. Partial solutions to some of these challenges are offered. The challenges posed by current technological trends are substantial and generally beyond the control of testing system developers. Widespread acceptance of the "tablet PC" and implementation of accurate small vocabulary, discrete, speaker-independent speech recognition would enable revolutionary improvements to computer-based testing systems, particularly for testing memory functions not covered in existing systems. Dynamic, adaptive procedures, particularly ones based on item-response theory (IRT) and computerized-adaptive testing (CAT) methods, will be implemented in new tests that will be more efficient, reliable, and valid than existing test procedures. These additional developments, along with implementation of innovative reporting formats, are necessary for more widespread acceptance of the testing systems.
Adaptive sampling in research on risk-related behaviors.
Thompson, Steven K; Collins, Linda M
2002-11-01
This article introduces adaptive sampling designs to substance use researchers. Adaptive sampling is particularly useful when the population of interest is rare, unevenly distributed, hidden, or hard to reach. Examples of such populations are injection drug users, individuals at high risk for HIV/AIDS, and young adolescents who are nicotine dependent. In conventional sampling, the sampling design is based entirely on a priori information, and is fixed before the study begins. By contrast, in adaptive sampling, the sampling design adapts based on observations made during the survey; for example, drug users may be asked to refer other drug users to the researcher. In the present article several adaptive sampling designs are discussed. Link-tracing designs such as snowball sampling, random walk methods, and network sampling are described, along with adaptive allocation and adaptive cluster sampling. It is stressed that special estimation procedures taking the sampling design into account are needed when adaptive sampling has been used. These procedures yield estimates that are considerably better than conventional estimates. For rare and clustered populations adaptive designs can give substantial gains in efficiency over conventional designs, and for hidden populations link-tracing and other adaptive procedures may provide the only practical way to obtain a sample large enough for the study objectives.
Solution-adaptive finite element method in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1993-01-01
Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.
Higher-order adaptive finite-element methods for Kohn–Sham density functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motamarri, P.; Nowak, M.R.; Leiter, K.
2013-11-15
We present an efficient computational approach to perform real-space electronic structure calculations using an adaptive higher-order finite-element discretization of Kohn–Sham density-functional theory (DFT). To this end, we develop an a priori mesh-adaption technique to construct a close to optimal finite-element discretization of the problem. We further propose an efficient solution strategy for solving the discrete eigenvalue problem by using spectral finite-elements in conjunction with Gauss–Lobatto quadrature, and a Chebyshev acceleration technique for computing the occupied eigenspace. The proposed approach has been observed to provide a staggering 100–200-fold computational advantage over the solution of a generalized eigenvalue problem. Using the proposedmore » solution procedure, we investigate the computational efficiency afforded by higher-order finite-element discretizations of the Kohn–Sham DFT problem. Our studies suggest that staggering computational savings—of the order of 1000-fold—relative to linear finite-elements can be realized, for both all-electron and local pseudopotential calculations, by using higher-order finite-element discretizations. On all the benchmark systems studied, we observe diminishing returns in computational savings beyond the sixth-order for accuracies commensurate with chemical accuracy, suggesting that the hexic spectral-element may be an optimal choice for the finite-element discretization of the Kohn–Sham DFT problem. A comparative study of the computational efficiency of the proposed higher-order finite-element discretizations suggests that the performance of finite-element basis is competing with the plane-wave discretization for non-periodic local pseudopotential calculations, and compares to the Gaussian basis for all-electron calculations to within an order of magnitude. Further, we demonstrate the capability of the proposed approach to compute the electronic structure of a metallic system containing 1688 atoms using modest computational resources, and good scalability of the present implementation up to 192 processors.« less
Development of Ecological Toxicity and Biomagnification Data for Explosives Contaminants in Soil
2003-07-01
explosive contaminated soil leachates to Daphnia magna using an adapted toxicity characteristic leaching procedure. U.S. Army Chemical and Biological...1993) Toxicity determination of explosive contaminated soil leachates to Daphnia magna using an adapted toxicity characteristic leaching procedure...Sadusky, M. (1993). Toxicity determination of explosive contaminated soil leachates to Daphnia magna using C-46 an adapted toxicity
Recovery of Peripheral Nerve with Massive Loss Defect by Tissue Engineered Guiding Regenerative Gel
Nevo, Zvi
2014-01-01
Objective. Guiding Regeneration Gel (GRG) was developed in response to the clinical need of improving treatment for peripheral nerve injuries and helping patients regenerate massive regional losses in peripheral nerves. The efficacy of GRG based on tissue engineering technology for the treatment of complete peripheral nerve injury with significant loss defect was investigated. Background. Many severe peripheral nerve injuries can only be treated through surgical reconstructive procedures. Such procedures are challenging, since functional recovery is slow and can be unsatisfactory. One of the most promising solutions already in clinical practice is synthetic nerve conduits connecting the ends of damaged nerve supporting nerve regeneration. However, this solution still does not enable recovery of massive nerve loss defect. The proposed technology is a biocompatible and biodegradable gel enhancing axonal growth and nerve regeneration. It is composed of a complex of substances comprising transparent, highly viscous gel resembling the extracellular matrix that is almost impermeable to liquids and gasses, flexible, elastic, malleable, and adaptable to various shapes and formats. Preclinical study on rat model of peripheral nerve injury showed that GRG enhanced nerve regeneration when placed in nerve conduits, enabling recovery of massive nerve loss, previously unbridgeable, and enabled nerve regeneration at least as good as with autologous nerve graft “gold standard” treatment. PMID:25105121
Failure of Anisotropic Unstructured Mesh Adaption Based on Multidimensional Residual Minimization
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
2003-01-01
An automated anisotropic unstructured mesh adaptation strategy is proposed, implemented, and assessed for the discretization of viscous flows. The adaption criteria is based upon the minimization of the residual fluctuations of a multidimensional upwind viscous flow solver. For scalar advection, this adaption strategy has been shown to use fewer grid points than gradient based adaption, naturally aligning mesh edges with discontinuities and characteristic lines. The adaption utilizes a compact stencil and is local in scope, with four fundamental operations: point insertion, point deletion, edge swapping, and nodal displacement. Evaluation of the solution-adaptive strategy is performed for a two-dimensional blunt body laminar wind tunnel case at Mach 10. The results demonstrate that the strategy suffers from a lack of robustness, particularly with regard to alignment of the bow shock in the vicinity of the stagnation streamline. In general, constraining the adaption to such a degree as to maintain robustness results in negligible improvement to the solution. Because the present method fails to consistently or significantly improve the flow solution, it is rejected in favor of simple uniform mesh refinement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
In this paper, we present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support ourmore » construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Lastly, our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.« less
Horst, Reto; Wüthrich, Kurt
2015-07-20
Reconstitution of integral membrane proteins (IMP) in aqueous solutions of detergent micelles has been extensively used in structural biology, using either X-ray crystallography or NMR in solution. Further progress could be achieved by establishing a rational basis for the selection of detergent and buffer conditions, since the stringent bottleneck that slows down the structural biology of IMPs is the preparation of diffracting crystals or concentrated solutions of stable isotope labeled IMPs. Here, we describe procedures to monitor the quality of aqueous solutions of [ 2 H, 15 N]-labeled IMPs reconstituted in detergent micelles. This approach has been developed for studies of β-barrel IMPs, where it was successfully applied for numerous NMR structure determinations, and it has also been adapted for use with α-helical IMPs, in particular GPCRs, in guiding crystallization trials and optimizing samples for NMR studies (Horst et al ., 2013). 2D [ 15 N, 1 H]-correlation maps are used as "fingerprints" to assess the foldedness of the IMP in solution. For promising samples, these "inexpensive" data are then supplemented with measurements of the translational and rotational diffusion coefficients, which give information on the shape and size of the IMP/detergent mixed micelles. Using microcoil equipment for these NMR experiments enables data collection with only micrograms of protein and detergent. This makes serial screens of variable solution conditions viable, enabling the optimization of parameters such as the detergent concentration, sample temperature, pH and the composition of the buffer.
Key properties of expert movement systems in sport : an ecological dynamics perspective.
Seifert, Ludovic; Button, Chris; Davids, Keith
2013-03-01
This paper identifies key properties of expertise in sport predicated on the performer-environment relationship. Weaknesses of traditional approaches to expert performance, which uniquely focus on the performer and the environment separately, are highlighted by an ecological dynamics perspective. Key properties of expert movement systems include 'multi- and meta-stability', 'adaptive variability', 'redundancy', 'degeneracy' and the 'attunement to affordances'. Empirical research on these expert system properties indicates that skill acquisition does not emerge from the internal representation of declarative and procedural knowledge, or the imitation of expert behaviours to linearly reduce a perceived 'gap' separating movements of beginners and a putative expert model. Rather, expert performance corresponds with the ongoing co-adaptation of an individual's behaviours to dynamically changing, interacting constraints, individually perceived and encountered. The functional role of adaptive movement variability is essential to expert performance in many different sports (involving individuals and teams; ball games and outdoor activities; land and aquatic environments). These key properties signify that, in sport performance, although basic movement patterns need to be acquired by developing athletes, there exists no ideal movement template towards which all learners should aspire, since relatively unique functional movement solutions emerge from the interaction of key constraints.
NASA Technical Reports Server (NTRS)
Kim, Hyoungin; Liou, Meng-Sing
2011-01-01
In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage between the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in literature for standard test problems. In order to further improve accuracy especially near thin filaments, we suggest an artificial sharpening method, which is in a similar form with the conventional re-initialization method but utilizes sign of curvature instead of sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems
The Development and Assesment of Adaptation Pathways for Urban Pluvial Flooding
NASA Astrophysics Data System (ADS)
Babovic, F.; Mijic, A.; Madani, K.
2017-12-01
Around the globe, urban areas are growing in both size and importance. However, due to the prevalence of impermeable surfaces within the urban fabric of cities these areas have a high risk of pluvial flooding. Due to the convergence of population growth and climate change the risk of pluvial flooding is growing. When designing solutions and adaptations to pluvial flood risk urban planners and engineers encounter a great deal of uncertainty due to model uncertainty, uncertainty within the data utilised, and uncertainty related to future climate and land use conditions. The interaction of these uncertainties leads to conditions of deep uncertainty. However, infrastructure systems must be designed and built in the face of this deep uncertainty. An Adaptation Tipping Points (ATP) methodology was used to develop a strategy to adapt an urban drainage system in the North East of London under conditions of deep uncertainty. The ATP approach was used to assess the current drainage system and potential drainage system adaptations. These adaptations were assessed against potential changes in rainfall depth and peakedness-defined as the ratio of mean to peak rainfall. These solutions encompassed both traditional and blue-green solutions that the Local Authority are known to be considering. This resulted in a set of Adaptation Pathways. However, theses pathways do not convey any information regarding the relative merits and demerits of the potential adaptation options presented. To address this a cost-benefit metric was developed that would reflect the solutions' costs and benefits under uncertainty. The resulting metric combines elements of the Benefits of SuDS Tool (BeST) with real options analysis in order to reflect the potential value of ecosystem services delivered by blue-green solutions under uncertainty. Lastly, it is discussed how a local body can utilise the adaptation pathways; their relative costs and benefits; and a system of local data collection to help guide better decision making with respect to urban flood adaptation.
Borrego-Jaraba, Francisco; Garrido, Pilar Castro; García, Gonzalo Cerruela; Ruiz, Irene Luque; Gómez-Nieto, Miguel Ángel
2013-01-01
Because of the global economic turmoil, nowadays a lot of companies are adopting a “deal of the day” business model, some of them with great success. Generally, they try to attract and retain customers through discount coupons and gift cards, using, generally, traditional distribution media. This paper describes a framework, which integrates intelligent environments by using NFC, oriented to the full management of this kind of businesses. The system is responsible for diffusion, distribution, sourcing, validation, redemption and managing of vouchers, loyalty cards and all kind of mobile coupons using NFC, as well as QR codes. WingBonus can be fully adapted to the requirements of marketing campaigns, voucher providers, shop or retailer infrastructures and mobile devices and purchasing habits. Security of the voucher is granted by the system by synchronizing procedures using secure encriptation algorithms. The WingBonus website and mobile applications can be adapted to any requirement of the system actors. PMID:23673675
Direct simulation Monte Carlo prediction of on-orbit contaminant deposit levels for HALOE
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Rault, Didier F. G.
1994-01-01
A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flow field and surface conditions and geometric orientations for the satellite in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment. A detailed description of the adaptation of this solution method to the study of the satellite's environment is also presented. Results pertaining to the satellite's environment are presented regarding contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface, along with data related to code performance. Using procedures developed in standard contamination analyses, along with many worst-case assumptions, the cumulative upper-limit level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated at about 13,350 A.
Borrego-Jaraba, Francisco; Garrido, Pilar Castro; García, Gonzalo Cerruela; Ruiz, Irene Luque; Gómez-Nieto, Miguel Angel
2013-05-14
Because of the global economic turmoil, nowadays a lot of companies are adopting a "deal of the day" business model, some of them with great success. Generally, they try to attract and retain customers through discount coupons and gift cards, using, generally, traditional distribution media. This paper describes a framework, which integrates intelligent environments by using NFC, oriented to the full management of this kind of businesses. The system is responsible for diffusion, distribution, sourcing, validation, redemption and managing of vouchers, loyalty cards and all kind of mobile coupons using NFC, as well as QR codes. WingBonus can be fully adapted to the requirements of marketing campaigns, voucher providers, shop or retailer infrastructures and mobile devices and purchasing habits. Security of the voucher is granted by the system by synchronizing procedures using secure encriptation algorithms. The WingBonus website and mobile applications can be adapted to any requirement of the system actors.
A Method for Extracting Pigments from Squid Doryteuthis pealeii.
DiBona, Christopher W; Williams, Thomas L; Dinneen, Sean R; Jones Labadie, Stephanie F; Deravi, Leila F
2016-11-09
Cephalopods can undergo rapid and adaptive changes in dermal coloration for sensing, communication, defense, and reproduction purposes. These capabilities are supported in part by the areal expansion and retraction of pigmented organs known as chromatophores. While it is known that the chromatophores contain a tethered network of pigmented granules, their structure-function properties have not been fully detailed. We describe a method for isolating the nanostructured granules in squid Doryteuthis pealeii chromatophores and demonstrate how their associated pigments can be extracted in acidic solvents. To accomplish this, the chromatophore containing dermal layer is first manually isolated using a superficial dissection, and the pigment granules are removed using sonication, centrifugation, and washing cycles. Pigments confined within the purified granules are then extracted via acidic methanol solutions, leaving nanostructures with smaller diameters that are void of visible color. This extraction procedure produces a 58% yield of soluble pigments isolated from granules. Using this method, the composition of the chromatophore pigments can be determined and used to provide insight into the mechanism of adaptive coloration in cephalopods.
Rehabilitation for bilateral amputation of fingers
Stapanian, Martin A.; Stapanian, Adrienne M.P.; Staley, Keith E.
2010-01-01
We describe reconstructive surgeries, therapy, prostheses, and adaptations for a patient who experienced bilateral amputation of all five fingers of both hands through the proximal phalanges in January 1992. The patient made considerable progress in the use of his hands in the 10 mo after amputation, including nearly a 120% increase in the active range of flexion of metacarpophalangeal joints. In late 1992 and early 1993, the patient had "on-top plasty" surgeries, in which the index finger remnants were transferred onto the thumb stumps, performed on both hands. The increased web space and functional pinch resulting from these procedures made many tasks much easier. The patient and occupational therapists set challenging goals at all times. Moreover, the patient was actively involved in the design and fabrication of all prostheses and adaptations or he developed them himself. Although he was discharged from occupational therapy in 1997, the patient continues to actively find new solutions for prehension and grip strength 18 yr after amputation.
Asymptotic Linearity of Optimal Control Modification Adaptive Law with Analytical Stability Margins
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
Optimal control modification has been developed to improve robustness to model-reference adaptive control. For systems with linear matched uncertainty, optimal control modification adaptive law can be shown by a singular perturbation argument to possess an outer solution that exhibits a linear asymptotic property. Analytical expressions of phase and time delay margins for the outer solution can be obtained. Using the gradient projection operator, a free design parameter of the adaptive law can be selected to satisfy stability margins.
Space-time mesh adaptation for solute transport in randomly heterogeneous porous media.
Dell'Oca, Aronne; Porta, Giovanni Michele; Guadagnini, Alberto; Riva, Monica
2018-05-01
We assess the impact of an anisotropic space and time grid adaptation technique on our ability to solve numerically solute transport in heterogeneous porous media. Heterogeneity is characterized in terms of the spatial distribution of hydraulic conductivity, whose natural logarithm, Y, is treated as a second-order stationary random process. We consider nonreactive transport of dissolved chemicals to be governed by an Advection Dispersion Equation at the continuum scale. The flow field, which provides the advective component of transport, is obtained through the numerical solution of Darcy's law. A suitable recovery-based error estimator is analyzed to guide the adaptive discretization. We investigate two diverse strategies guiding the (space-time) anisotropic mesh adaptation. These are respectively grounded on the definition of the guiding error estimator through the spatial gradients of: (i) the concentration field only; (ii) both concentration and velocity components. We test the approach for two-dimensional computational scenarios with moderate and high levels of heterogeneity, the latter being expressed in terms of the variance of Y. As quantities of interest, we key our analysis towards the time evolution of section-averaged and point-wise solute breakthrough curves, second centered spatial moment of concentration, and scalar dissipation rate. As a reference against which we test our results, we consider corresponding solutions associated with uniform space-time grids whose level of refinement is established through a detailed convergence study. We find a satisfactory comparison between results for the adaptive methodologies and such reference solutions, our adaptive technique being associated with a markedly reduced computational cost. Comparison of the two adaptive strategies tested suggests that: (i) defining the error estimator relying solely on concentration fields yields some advantages in grasping the key features of solute transport taking place within low velocity regions, where diffusion-dispersion mechanisms are dominant; and (ii) embedding the velocity field in the error estimator guiding strategy yields an improved characterization of the forward fringe of solute fronts which propagate through high velocity regions. Copyright © 2017 Elsevier B.V. All rights reserved.
Stimulus-Dependent Effects of Temperature on Bitter Taste in Humans
Andrew, Kendra
2017-01-01
This study investigated the effects of temperature on bitter taste in humans. The experiments were conducted within the context of current understanding of the neurobiology of bitter taste and recent evidence of stimulus-dependent effects of temperature on sweet taste. In the first experiment, the bitterness of caffeine and quinine sampled with the tongue tip was assessed at 4 different temperatures (10°, 21°, 30°, and 37 °C) following pre-exposure to the same solution or to water for 0, 3, or 10 s. The results showed that initial bitterness (0-s pre-exposure) followed an inverted U-shaped function of temperature for both stimuli, but the differences across temperature were statistically significant only for quinine. Conversely, temperature significantly affected adaptation to the bitterness of quinine but not caffeine. A second experiment used the same procedure to test 2 additional stimuli, naringin and denatonium benzoate. Temperature significantly affected the initial bitterness of both stimuli but had no effect on adaptation to either stimulus. These results confirm that like sweet taste, temperature affects bitter taste sensitivity and adaptation in stimulus-dependent ways. However, the thermal effect on quinine adaptation, which increased with warming, was opposite to what had been found previously for adaptation to sweetness. The implications of these results are discussed in relation to findings from prior studies of temperature and bitter taste in humans and the possible neurobiological mechanisms of gustatory thermal sensitivity. PMID:28119357
Design and preparation of polymeric scaffolds for tissue engineering.
Weigel, Thomas; Schinkel, Gregor; Lendlein, Andreas
2006-11-01
Polymeric scaffolds for tissue engineering can be prepared with a multitude of different techniques. Many diverse approaches have recently been under development. The adaptation of conventional preparation methods, such as electrospinning, induced phase separation of polymer solutions or porogen leaching, which were developed originally for other research areas, are described. In addition, the utilization of novel fabrication techniques, such as rapid prototyping or solid free-form procedures, with their many different methods to generate or to embody scaffold structures or the usage of self-assembly systems that mimic the properties of the extracellular matrix are also described. These methods are reviewed and evaluated with specific regard to their utility in the area of tissue engineering.
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, ad- joint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
Ijabadeniyi, Oluwatosin Ademola; Mnyandu, Elizabeth
2017-04-13
The effectiveness of sodium dodecyl sulphate (SDS), sodium hypochlorite solution and levulinic acid in reducing the survival of heat adapted and chlorine adapted Listeria monocytogenes ATCC 7644 was evaluated. The results against heat adapted L. monocytognes revealed that sodium hypochlorite solution was the least effective, achieving log reduction of 2.75, 2.94 and 3.97 log colony forming unit (CFU)/mL for 1, 3 and 5 minutes, respectively. SDS was able to achieve 8 log reduction for both heat adapted and chlorine adapted bacteria. When used against chlorine adapted L. monocytogenes sodium hypochlorite solution achieved log reduction of 2.76, 2.93 and 3.65 log CFU/mL for 1, 3 and 5 minutes, respectively. Using levulinic acid on heat adapted bacteria achieved log reduction of 3.07, 2.78 and 4.97 log CFU/mL for 1, 3, 5 minutes, respectively. On chlorine adapted bacteria levulinic acid achieved log reduction of 2.77, 3.07 and 5.21 log CFU/mL for 1, 3 and 5 minutes, respectively. Using a mixture of 0.05% SDS and 0.5% levulinic acid on heat adapted bacteria achieved log reduction of 3.13, 3.32 and 4.79 log CFU/mL for 1, 3 and 5 minutes while on chlorine adapted bacteria it achieved 3.20, 3.33 and 5.66 log CFU/mL, respectively. Increasing contact time also increased log reduction for both test pathogens. A storage period of up to 72 hours resulted in progressive log reduction for both test pathogens. Results also revealed that there was a significant difference (P≤0.05) among contact times, storage times and sanitizers. Findings from this study can be used to select suitable sanitizers and contact times for heat and chlorine adapted L. monocytogenes in the fresh produce industry.
A selective-update affine projection algorithm with selective input vectors
NASA Astrophysics Data System (ADS)
Kong, NamWoong; Shin, JaeWook; Park, PooGyeon
2011-10-01
This paper proposes an affine projection algorithm (APA) with selective input vectors, which based on the concept of selective-update in order to reduce estimation errors and computations. The algorithm consists of two procedures: input- vector-selection and state-decision. The input-vector-selection procedure determines the number of input vectors by checking with mean square error (MSE) whether the input vectors have enough information for update. The state-decision procedure determines the current state of the adaptive filter by using the state-decision criterion. As the adaptive filter is in transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
Core, Cynthia; Brown, Janean W; Larsen, Michael D; Mahshie, James
2014-01-01
The objectives of this research were to determine whether an adapted version of a Hybrid Visual Habituation procedure could be used to assess speech perception of phonetic and prosodic features of speech (vowel height, lexical stress, and intonation) in individual pre-school-age children who use cochlear implants. Nine children ranging in age from 3;4 to 5;5 participated in this study. Children were prelingually deaf and used cochlear implants and had no other known disabilities. Children received two speech feature tests using an adaptation of a Hybrid Visual Habituation procedure. Seven of the nine children demonstrated perception of at least one speech feature using this procedure using results from a Bayesian linear regression analysis. At least one child demonstrated perception of each speech feature using this assessment procedure. An adapted version of the Hybrid Visual Habituation Procedure with an appropriate statistical analysis provides a way to assess phonetic and prosodicaspects of speech in pre-school-age children who use cochlear implants.
Procedures for Selecting Items for Computerized Adaptive Tests.
ERIC Educational Resources Information Center
Kingsbury, G. Gage; Zara, Anthony R.
1989-01-01
Several classical approaches and alternative approaches to item selection for computerized adaptive testing (CAT) are reviewed and compared. The study also describes procedures for constrained CAT that may be added to classical item selection approaches to allow them to be used for applied testing. (TJH)
The Biopsychosocial-Digital Approach to Health and Disease: Call for a Paradigm Expansion
2018-01-01
Digital health is an advancing phenomenon in modern health care systems. Currently, numerous stakeholders in various countries are evaluating the potential benefits of digital health solutions at the individual, population, and/or organizational levels. Additionally, driving factors are being created from the customer-side of the health care systems to push health care providers, policymakers, or researchers to embrace digital health solutions. However, health care providers may differ in their approach to adopt these solutions. Health care providers are not assumed to be appropriately trained to address the requirements of integrating digital health solutions into daily everyday practices and procedures. To adapt to the changing demands of health care systems, it is necessary to expand relevant paradigms and to train human resources as required. In this article, a more comprehensive paradigm will be proposed, based on the ‘biopsychosocial model’ of assessing health and disease, originally introduced by George L Engel. The “biopsychosocial model” must be leveraged to include a “digital” component, thus suggesting a ‘biopsychosocial-digital’ approach to health and disease. Modifications to the “biopsychosocial” model and transition to the “biopsychosocial-digital” model are explained. Furthermore, the emerging implications of understanding health and disease are clarified pertaining to their relevance in training human resources for health care provision and research. PMID:29776900
Towards a framework for agent-based image analysis of remote-sensing data
Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera
2015-01-01
Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects’ properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA). PMID:27721916
Towards a framework for agent-based image analysis of remote-sensing data.
Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera
2015-04-03
Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).
FAST SIMULATION OF SOLID TUMORS THERMAL ABLATION TREATMENTS WITH A 3D REACTION DIFFUSION MODEL *
BERTACCINI, DANIELE; CALVETTI, DANIELA
2007-01-01
An efficient computational method for near real-time simulation of thermal ablation of tumors via radio frequencies is proposed. Model simulations of the temperature field in a 3D portion of tissue containing the tumoral mass for different patterns of source heating can be used to design the ablation procedure. The availability of a very efficient computational scheme makes it possible update the predicted outcome of the procedure in real time. In the algorithms proposed here a discretization in space of the governing equations is followed by an adaptive time integration based on implicit multistep formulas. A modification of the ode15s MATLAB function which uses Krylov space iterative methods for the solution of for the linear systems arising at each integration step makes it possible to perform the simulations on standard desktop for much finer grids than using the built-in ode15s. The proposed algorithm can be applied to a wide class of nonlinear parabolic differential equations. PMID:17173888
2012-03-27
pulse- detonation engines ( PDE ), stage separation, supersonic cav- ity oscillations, hypersonic aerodynamics, detonation induced structural...ADAPTIVE UNSTRUCTURED CARTESIAN METHOD FOR LARGE-EDDY SIMULATION OF DETONATION IN MULTI-PHASE TURBULENT REACTIVE MIXTURES 5b. GRANT NUMBER FA9550...CCL Report TR-2012-03-03 Hybrid Solution-Adaptive Unstructured Cartesian Method for Large-Eddy Simulation of Detonation in Multi-Phase Turbulent
Mesh refinement in finite element analysis by minimization of the stiffness matrix trace
NASA Technical Reports Server (NTRS)
Kittur, Madan G.; Huston, Ronald L.
1989-01-01
Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the mesh generated is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value so that the error is below a predetermined tolerance. A-posteriori methods use error indicators, developed by using the theory of interpolation and approximation theory, for mesh refinements. Some use other criterions, such as strain energy density variation and stress contours for example, to obtain near optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, a priori methods, until now available, use geometrical parameters, for example, element aspect ratio. Therefore, they are not adaptive by nature. An adaptive a-priori method is developed. The criterion is that the minimization of the trace of the stiffness matrix with respect to the nodal coordinates, leads to a minimization of the potential energy, and as a consequence provide a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to development of computer algorithms. When the procedure is used in conjunction with a-posteriori methods of grid refinement, it is shown that fewer refinement iterations and fewer degrees of freedom are required for convergence as opposed to when the procedure is not used. The mesh obtained is shown to have uniform distribution of stiffness among the nodes and elements which, as a consequence, leads to uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.
Balancing Flexible Constraints and Measurement Precision in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Moyer, Eric L.; Galindo, Jennifer L.; Dodd, Barbara G.
2012-01-01
Managing test specifications--both multiple nonstatistical constraints and flexibly defined constraints--has become an important part of designing item selection procedures for computerized adaptive tests (CATs) in achievement testing. This study compared the effectiveness of three procedures: constrained CAT, flexible modified constrained CAT,…
A Pilot Program in Adapted Physical Education: Hillsborough High School.
ERIC Educational Resources Information Center
Thompson, Vince
The instructor of an adapted physical education program describes his experiences and suggests guidelines for implementing other programs. Reviewed are such aspects as program orientation, class procedures, identification of student participants, and grading procedures. Objectives, lesson plans and evaluations are presented for the following units…
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1985-01-01
The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general purpose software system is emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis porcedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction technbiques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling gives economical yet accurate analyses of complex structural systems.
Supply network configuration—A benchmarking problem
NASA Astrophysics Data System (ADS)
Brandenburg, Marcus
2018-03-01
Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.
Self-adaptive multi-objective harmony search for optimal design of water distribution networks
NASA Astrophysics Data System (ADS)
Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon
2017-11-01
In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm attempts to remove some of the inconvenience from parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.
Pickl, Karin E; Adamek, Viktor; Gorges, Roland; Sinner, Frank M
2011-07-15
Due to increased regulatory requirements, the interaction of active pharmaceutical ingredients with various surfaces and solutions during production and storage is gaining interest in the pharmaceutical research field, in particular with respect to development of new formulations, new packaging material and the evaluation of cleaning processes. Experimental adsorption/absorption studies as well as the study of cleaning processes require sophisticated analytical methods with high sensitivity for the drug of interest. In the case of 2,6-diisopropylphenol - a small lipophilic drug which is typically formulated as lipid emulsion for intravenous injection - a highly sensitive method in the concentration range of μg/l suitable to be applied to a variety of different sample matrices including lipid emulsions is needed. We hereby present a headspace-solid phase microextraction (HS-SPME) approach as a simple cleanup procedure for sensitive 2,6-diisopropylphenol quantification from diverse matrices choosing a lipid emulsion as the most challenging matrix with regard to complexity. By combining the simple and straight forward HS-SPME sample pretreatment with an optimized GC-MS quantification method a robust and sensitive method for 2,6-diisopropylphenol was developed. This method shows excellent sensitivity in the low μg/l concentration range (5-200μg/l), good accuracy (94.8-98.8%) and precision (intraday-precision 0.1-9.2%, inter-day precision 2.0-7.7%). The method can be easily adapted to other, less complex, matrices such as water or swab extracts. Hence, the presented method holds the potential to serve as a single and simple analytical procedure for 2,6-diisopropylphenol analysis in various types of samples such as required in, e.g. adsorption/absorption studies which typically deal with a variety of different surfaces (steel, plastic, glass, etc.) and solutions/matrices including lipid emulsions. Copyright © 2011 Elsevier B.V. All rights reserved.
Efficient runner safety assessment during early design phase and root cause analysis
NASA Astrophysics Data System (ADS)
Liang, Q. W.; Lais, S.; Gentner, C.; Braun, O.
2012-11-01
Fatigue related problems in Francis turbines, especially high head Francis turbines, have been published several times in the last years. During operation the runner is exposed to various steady and unsteady hydraulic loads. Therefore the analysis of forced response of the runner structure requires a combined approach of fluid dynamics and structural dynamics. Due to the high complexity of the phenomena and due to the limitation of computer power, the numerical prediction was in the past too expensive and not feasible for the use as standard design tool. However, due to continuous improvement of the knowledge and the simulation tools such complex analysis has become part of the design procedure in ANDRITZ HYDRO. This article describes the application of most advanced analysis techniques in runner safety check (RSC), including steady state CFD analysis, transient CFD analysis considering rotor stator interaction (RSI), static FE analysis and modal analysis in water considering the added mass effect, in the early design phase. This procedure allows a very efficient interaction between the hydraulic designer and the mechanical designer during the design phase, such that a risk of failure can be detected and avoided in an early design stage.The RSC procedure can also be applied to a root cause analysis (RCA) both to find out the cause of failure and to quickly define a technical solution to meet the safety criteria. An efficient application to a RCA of cracks in a Francis runner is quoted in this article as an example. The results of the RCA are presented together with an efficient and inexpensive solution whose effectiveness could be proven again by applying the described RSC technics. It is shown that, with the RSC procedure developed and applied as standard procedure in ANDRITZ HYDRO such a failure is excluded in an early design phase. Moreover, the RSC procedure is compatible with different commercial and open source codes and can be easily adapted to apply for other types of turbines, such as pump turbines and Pelton runners.
Evaluating Content Alignment in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Wise, Steven L.; Kingsbury, G. Gage; Webb, Norman L.
2015-01-01
The alignment between a test and the content domain it measures represents key evidence for the validation of test score inferences. Although procedures have been developed for evaluating the content alignment of linear tests, these procedures are not readily applicable to computerized adaptive tests (CATs), which require large item pools and do…
Adaptive and dynamic meshing methods for numerical simulations
NASA Astrophysics Data System (ADS)
Acikgoz, Nazmiye
For the numerical simulation of many problems of engineering interest, it is desirable to have an automated mesh adaption tool capable of producing high quality meshes with an affordably low number of mesh points. This is important especially for problems, which are characterized by anisotropic features of the solution and require mesh clustering in the direction of high gradients. Another significant issue in meshing emerges in the area of unsteady simulations with moving boundaries or interfaces, where the motion of the boundary has to be accommodated by deforming the computational grid. Similarly, there exist problems where current mesh needs to be adapted to get more accurate solutions because either the high gradient regions are initially predicted inaccurately or they change location throughout the simulation. To solve these problems, we propose three novel procedures. For this purpose, in the first part of this work, we present an optimization procedure for three-dimensional anisotropic tetrahedral grids based on metric-driven h-adaptation. The desired anisotropy in the grid is dictated by a metric that defines the size, shape, and orientation of the grid elements throughout the computational domain. Through the use of topological and geometrical operators, the mesh is iteratively adapted until the final mesh minimizes a given objective function. In this work, the objective function measures the distance between the metric of each simplex and a target metric, which can be either user-defined (a-priori) or the result of a-posteriori error analysis. During the adaptation process, one tries to decrease the metric-based objective function until the final mesh is compliant with the target within a given tolerance. However, in regions such as corners and complex face intersections, the compliance condition was found to be very difficult or sometimes impossible to satisfy. In order to address this issue, we propose an optimization process based on an ad-hoc application of the simulated annealing technique, which improves the likelihood of removing poor elements from the grid. Moreover, a local implementation of the simulated annealing is proposed to reduce the computational cost. Many challenging multi-physics and multi-field problems that are unsteady in nature are characterized by moving boundaries and/or interfaces. When the boundary displacements are large, which typically occurs when implicit time marching procedures are used, degenerate elements are easily formed in the grid such that frequent remeshing is required. To deal with this problem, in the second part of this work, we propose a new r-adaptation methodology. The new technique is valid for both simplicial (e.g., triangular, tet) and non-simplicial (e.g., quadrilateral, hex) deforming grids that undergo large imposed displacements at their boundaries. A two- or three-dimensional grid is deformed using a network of linear springs composed of edge springs and a set of virtual springs. The virtual springs are constructed in such a way as to oppose element collapsing. This is accomplished by confining each vertex to its ball through springs that are attached to the vertex and its projection on the ball entities. The resulting linear problem is solved using a preconditioned conjugate gradient method. The new method is compared with the classical spring analogy technique in two- and three-dimensional examples, highlighting the performance improvements achieved by the new method. Meshes are an important part of numerical simulations. Depending on the geometry and flow conditions, the most suitable mesh for each particular problem is different. Meshes are usually generated by either using a suitable software package or solving a PDE. In both cases, engineering intuition plays a significant role in deciding where clusterings should take place. In addition, for unsteady problems, the gradients vary for each time step, which requires frequent remeshing during simulations. Therefore, in order to minimize user intervention and prevent frequent remeshings, we conclude this work by defining a novel mesh adaptation technique that integrates metric based target mesh definitions with the ball-vertex mesh deformation method. In this new approach, the entire mesh is deformed based on either an a-priori or an a-posteriori error estimator. In other words, nodal points are repositioned upon application of a force field in order to comply with the target mesh or to get more accurate solutions. The method has been tested for two-dimensional problems of a-priori metric definitions as well as for oblique shock clusterings.
Recent developments in the Dorfman-Berbaum-Metz procedure for multireader ROC study analysis.
Hillis, Stephen L; Berbaum, Kevin S; Metz, Charles E
2008-05-01
The Dorfman-Berbaum-Metz (DBM) method has been one of the most popular methods for analyzing multireader receiver-operating characteristic (ROC) studies since it was proposed in 1992. Despite its popularity, the original procedure has several drawbacks: it is limited to jackknife accuracy estimates, it is substantially conservative, and it is not based on a satisfactory conceptual or theoretical model. Recently, solutions to these problems have been presented in three papers. Our purpose is to summarize and provide an overview of these recent developments. We present and discuss the recently proposed solutions for the various drawbacks of the original DBM method. We compare the solutions in a simulation study and find that they result in improved performance for the DBM procedure. We also compare the solutions using two real data studies and find that the modified DBM procedure that incorporates these solutions yields more significant results and clearer interpretations of the variance component parameters than the original DBM procedure. We recommend using the modified DBM procedure that incorporates the recent developments.
Thermal Adaptation Methods of Urban Plaza Users in Asia's Hot-Humid Regions: A Taiwan Case Study.
Wu, Chen-Fa; Hsieh, Yen-Fen; Ou, Sheng-Jung
2015-10-27
Thermal adaptation studies provide researchers great insight to help understand how people respond to thermal discomfort. This research aims to assess outdoor urban plaza conditions in hot and humid regions of Asia by conducting an evaluation of thermal adaptation. We also propose that questionnaire items are appropriate for determining thermal adaptation strategies adopted by urban plaza users. A literature review was conducted and first hand data collected by field observations and interviews used to collect information on thermal adaptation strategies. Item analysis--Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA)--were applied to refine the questionnaire items and determine the reliability of the questionnaire evaluation procedure. The reliability and validity of items and constructing process were also analyzed. Then, researchers facilitated an evaluation procedure for assessing the thermal adaptation strategies of urban plaza users in hot and humid regions of Asia and formulated a questionnaire survey that was distributed in Taichung's Municipal Plaza in Taiwan. Results showed that most users responded with behavioral adaptation when experiencing thermal discomfort. However, if the thermal discomfort could not be alleviated, they then adopted psychological strategies. In conclusion, the evaluation procedure for assessing thermal adaptation strategies and the questionnaire developed in this study can be applied to future research on thermal adaptation strategies adopted by urban plaza users in hot and humid regions of Asia.
Thermal Adaptation Methods of Urban Plaza Users in Asia’s Hot-Humid Regions: A Taiwan Case Study
Wu, Chen-Fa; Hsieh, Yen-Fen; Ou, Sheng-Jung
2015-01-01
Thermal adaptation studies provide researchers great insight to help understand how people respond to thermal discomfort. This research aims to assess outdoor urban plaza conditions in hot and humid regions of Asia by conducting an evaluation of thermal adaptation. We also propose that questionnaire items are appropriate for determining thermal adaptation strategies adopted by urban plaza users. A literature review was conducted and first hand data collected by field observations and interviews used to collect information on thermal adaptation strategies. Item analysis—Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA)—were applied to refine the questionnaire items and determine the reliability of the questionnaire evaluation procedure. The reliability and validity of items and constructing process were also analyzed. Then, researchers facilitated an evaluation procedure for assessing the thermal adaptation strategies of urban plaza users in hot and humid regions of Asia and formulated a questionnaire survey that was distributed in Taichung’s Municipal Plaza in Taiwan. Results showed that most users responded with behavioral adaptation when experiencing thermal discomfort. However, if the thermal discomfort could not be alleviated, they then adopted psychological strategies. In conclusion, the evaluation procedure for assessing thermal adaptation strategies and the questionnaire developed in this study can be applied to future research on thermal adaptation strategies adopted by urban plaza users in hot and humid regions of Asia. PMID:26516881
Random element method for numerical modeling of diffusional processes
NASA Technical Reports Server (NTRS)
Ghoniem, A. F.; Oppenheim, A. K.
1982-01-01
The random element method is a generalization of the random vortex method that was developed for the numerical modeling of momentum transport processes as expressed in terms of the Navier-Stokes equations. The method is based on the concept that random walk, as exemplified by Brownian motion, is the stochastic manifestation of diffusional processes. The algorithm based on this method is grid-free and does not require the diffusion equation to be discritized over a mesh, it is thus devoid of numerical diffusion associated with finite difference methods. Moreover, the algorithm is self-adaptive in space and explicit in time, resulting in an improved numerical resolution of gradients as well as a simple and efficient computational procedure. The method is applied here to an assortment of problems of diffusion of momentum and energy in one-dimension as well as heat conduction in two-dimensions in order to assess its validity and accuracy. The numerical solutions obtained are found to be in good agreement with exact solution except for a statistical error introduced by using a finite number of elements, the error can be reduced by increasing the number of elements or by using ensemble averaging over a number of solutions.
Improvement of In Vitro Date Palm Plantlet Acclimatization Rate with Kinetin and Hoagland Solution.
Hassan, Mona M
2017-01-01
In vitro propagation of date palm Phoenix dactylifera L. is an ideal method to produce large numbers of healthy plants with specific characteristics and has the ability to transfer plantlets to ex vitro conditions at low cost and with a high survival rate. This chapter describes optimized acclimatization procedures for in vitro date palm plantlets. Primarily, the protocol presents the use of kinetin and Hoagland solution to enhance the growth of Barhee cv. plantlets in the greenhouse at two stages of acclimatization and the appropriate planting medium under shade and sunlight in the nursery. Foliar application of kinetin (20 mg/L) is recommended at the first stage. A combination between soil and foliar application of 50% Hoagland solution is favorable to plant growth and developmental parameters including plant height, leaf width, stem base diameter, chlorophyll A and B, carotenoids, and indoles. The optimum values of vegetative growth parameters during the adaptation stage in a shaded nursery are achieved using planting medium containing peat moss/perlite 2:1 (v/v), while in a sunlight nursery, clay/perlite/compost at equal ratio is the best. This protocol is suitable for large-scale production of micropropagated date palm plantlets.
NASA Astrophysics Data System (ADS)
Forghani, Ali; Peralta, Richard C.
2017-10-01
The study presents a procedure using solute transport and statistical models to evaluate the performance of aquifer storage and recovery (ASR) systems designed to earn additional water rights in freshwater aquifers. The recovery effectiveness (REN) index quantifies the performance of these ASR systems. REN is the proportion of the injected water that the same ASR well can recapture during subsequent extraction periods. To estimate REN for individual ASR wells, the presented procedure uses finely discretized groundwater flow and contaminant transport modeling. Then, the procedure uses multivariate adaptive regression splines (MARS) analysis to identify the significant variables affecting REN, and to identify the most recovery-effective wells. Achieving REN values close to 100% is the desire of the studied 14-well ASR system operator. This recovery is feasible for most of the ASR wells by extracting three times the injectate volume during the same year as injection. Most of the wells would achieve RENs below 75% if extracting merely the same volume as they injected. In other words, recovering almost all the same water molecules that are injected requires having a pre-existing water right to extract groundwater annually. MARS shows that REN most significantly correlates with groundwater flow velocity, or hydraulic conductivity and hydraulic gradient. MARS results also demonstrate that maximizing REN requires utilizing the wells located in areas with background Darcian groundwater velocities less than 0.03 m/d. The study also highlights the superiority of MARS over regular multiple linear regressions to identify the wells that can provide the maximum REN. This is the first reported application of MARS for evaluating performance of an ASR system in fresh water aquifers.
NASA Astrophysics Data System (ADS)
Wedding, L.; Hartge, E. H.; Guannel, G.; Melius, M.; Reiter, S. M.; Ruckelshaus, M.; Guerry, A.; Caldwell, M.
2014-12-01
To support decision-makers in their efforts to manage coastal resources in a changing climate the Natural Capital Project and the Center for Ocean Solutions are engaging in, informing, and helping to shape climate adaptation planning at various scales throughout coastal California. Our team is building collaborations with regional planners and local scientific and legal experts to inform local climate adaptation decisions that might minimize the economic and social losses associated with rising seas and more damaging storms. Decision-makers are considering engineered solutions (e.g. seawalls), natural solutions (e.g. dune or marsh restoration), and combinations of the two. To inform decisions about what kinds of solutions might best work in specific locations, we are comparing alternate climate and adaptation scenarios. We will present results from our use of the InVEST ecosystem service models in Sonoma County, with an initial focus on protection from coastal hazards due to erosion and inundation. By strategically choosing adaptation alternatives, communities and agencies can work to protect people and property while also protecting or restoring dwindling critical habitat and the full suite of benefits those habitats provide to people.
Krämer, Irene; Federici, Matteo; Kaiser, Vanessa; Thiesen, Judith
2016-04-01
The purpose of this study was to evaluate the contamination rate of media-fill products either prepared automated with a robotic system (APOTECAchemo™) or prepared manually at cytotoxic workbenches in the same cleanroom environment and by experienced operators. Media fills were completed by microbiological environmental control in the critical zones and used to validate the cleaning and disinfection procedures of the robotic system. The aseptic preparation of patient individual ready-to-use injection solutions was simulated by using double concentrated tryptic soy broth as growth medium, water for injection and plastic syringes as primary packaging materials. Media fills were either prepared automated (500 units) in the robot or manually (500 units) in cytotoxic workbenches in the same cleanroom over a period of 18 working days. The test solutions were incubated at room temperature (22℃) over 4 weeks. Products were visually inspected for turbidity after a 2-week and 4-week period. Following incubation, growth promotion tests were performed with Staphylococcus epidermidis. During the media-fill procedures, passive air monitoring was performed with settle plates and surface monitoring with contact plates on predefined locations as well as fingerprints. The plates got incubated for 5-7 days at room temperature, followed by 2-3 days at 30-35℃ and the colony forming units (cfu) counted after both periods. The robot was cleaned and disinfected according to the established standard operating procedure on two working days prior to the media-fill session, while on six other working days only six critical components were sanitized at the end of the media-fill sessions. Every day UV irradiation was operated for 4 h after finishing work. None of the 1000 media-fill products prepared in the two different settings showed turbidity after the incubation period thereby indicating no contamination with microorganisms. All products remained uniform, clear, and light-amber solutions. In addition, the reliability of the nutrient medium and the process was demonstrated by positive growth promotion tests with S. epidermidis. During automated preparation the recommended limits < 1 cfu per settle/contact plate set for cleanroom Grade A zones were not succeeded in the carousel and working area, but in the loading area of the robot. During manual preparation, the number of cfus detected on settle/contact plates inside the workbenches lay far below the limits. The number of cfus detected on fingertips succeeded several times the limit during manual preparation but not during automated preparation. There was no difference in the microbial contamination rate depending on the extent of cleaning and disinfection of the robot. Extensive media-fill tests simulating manual and automated preparation of ready-to-use cytotoxic injection solutions revealed the same level of sterility for both procedures. The results of supplemental environmental controls confirmed that the aseptic procedures are well controlled. As there was no difference in the microbial contamination rates of the media preparations depending on the extent of cleaning and disinfection of the robot, the results were used to adapt the respective standard operating procedures. © The Author(s) 2014.
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
NASA Astrophysics Data System (ADS)
Donmez, Orhan
We present a general procedure to solve the General Relativistic Hydrodynamical (GRH) equations with Adaptive-Mesh Refinement (AMR) and model of an accretion disk around a black hole. To do this, the GRH equations are written in a conservative form to exploit their hyperbolic character. The numerical solutions of the general relativistic hydrodynamic equations is done by High Resolution Shock Capturing schemes (HRSC), specifically designed to solve non-linear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. We use Marquina fluxes with MUSCL left and right states to solve GRH equations. First, we carry out different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations to verify the second order convergence of the code in 1D, 2 D and 3D. Second, we solve the GRH equations and use the general relativistic test problems to compare the numerical solutions with analytic ones. In order to this, we couple the flux part of general relativistic hydrodynamic equation with a source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second order accurate solutions in space and time. The test problems examined include shock tubes, geodesic flows, and circular motion of particle around the black hole. Finally, we apply this code to the accretion disk problems around the black hole using the Schwarzschild metric at the background of the computational domain. We find spiral shocks on the accretion disk. They are observationally expected results. We also examine the star-disk interaction near a massive black hole. We find that when stars are grounded down or a hole is punched on the accretion disk, they create shock waves which destroy the accretion disk.
Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature
NASA Technical Reports Server (NTRS)
Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)
2000-01-01
This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.
Quality factors and local adaption (with applications in Eulerian hydrodynamics)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowley, W.P.
1992-06-17
Adapting the mesh to suit the solution is a technique commonly used for solving both ode`s and pde`s. For Lagrangian hydrodynamics, ALE and Free-Lagrange are examples of structured and unstructured adaptive methods. For Eulerian hydrodynamics the two basic approaches are the macro-unstructuring technique pioneered by Oliger and Berger and the micro-structuring technique due to Lohner and others. Here we will describe a new micro-unstructuring technique, LAM, (for Local Adaptive Mesh) as applied to Eulerian hydrodynamics. The LAM technique consists of two independent parts: (1) the time advance scheme is a variation on the artificial viscosity method; (2) the adaption schememore » uses a micro-unstructured mesh with quadrilateral mesh elements. The adaption scheme makes use of quality factors and the relation between these and truncation errors is discussed. The time advance scheme; the adaption strategy; and the effect of different adaption parameters on numerical solutions are described.« less
Quality factors and local adaption (with applications in Eulerian hydrodynamics)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowley, W.P.
1992-06-17
Adapting the mesh to suit the solution is a technique commonly used for solving both ode's and pde's. For Lagrangian hydrodynamics, ALE and Free-Lagrange are examples of structured and unstructured adaptive methods. For Eulerian hydrodynamics the two basic approaches are the macro-unstructuring technique pioneered by Oliger and Berger and the micro-structuring technique due to Lohner and others. Here we will describe a new micro-unstructuring technique, LAM, (for Local Adaptive Mesh) as applied to Eulerian hydrodynamics. The LAM technique consists of two independent parts: (1) the time advance scheme is a variation on the artificial viscosity method; (2) the adaption schememore » uses a micro-unstructured mesh with quadrilateral mesh elements. The adaption scheme makes use of quality factors and the relation between these and truncation errors is discussed. The time advance scheme; the adaption strategy; and the effect of different adaption parameters on numerical solutions are described.« less
Quality assessment and control of finite element solutions
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Babuska, Ivo
1987-01-01
Status and some recent developments in the techniques for assessing the reliability of finite element solutions are summarized. Discussion focuses on a number of aspects including: the major types of errors in the finite element solutions; techniques used for a posteriori error estimation and the reliability of these estimators; the feedback and adaptive strategies for improving the finite element solutions; and postprocessing approaches used for improving the accuracy of stresses and other important engineering data. Also, future directions for research needed to make error estimation and adaptive movement practical are identified.
NASA Astrophysics Data System (ADS)
Gotovac, Hrvoje; Srzic, Veljko
2014-05-01
Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of available numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales and unphysical numerical oscillations (e.g., Herrera et al, 2009; Bosso et al., 2012). In this work we will present Eulerian Lagrangian Adaptive Fup Collocation Method (ELAFCM) based on Fup basis functions and collocation approach for spatial approximation and explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2) which is especially well suited for stiff parabolic problems. Spatial adaptive strategy is based on Fup basis functions which are closely related to the wavelets and splines so that they are also compactly supported basis functions; they exactly describe algebraic polynomials and enable a multiresolution adaptive analysis (MRA). MRA is here performed via Fup Collocation Transform (FCT) so that at each time step concentration solution is decomposed using only a few significant Fup basis functions on adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near minimum computational cost. FCT adds more collocations points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. According to the our recent achievements there is no need for solving the large linear system on adaptive grid because each Fup coefficient is obtained by predefined formulas equalizing Fup expansion around corresponding collocation point and particular collocation operator based on few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently which is perfectly suited for parallel processing. Adaptive grid in each time step is obtained from solution of the last time step or initial conditions and advective Lagrangian step in the current time step according to the velocity field and continuous streamlines. On the other side, we implement explicit stabilized routine SERK2 for dispersive Eulerian part of solution in the current time step on obtained spatial adaptive grid. Overall adaptive concept does not require the solving of large linear systems for the spatial and temporal approximation of conservative transport. Also, this new Eulerian-Lagrangian-Collocation scheme resolves all mentioned numerical problems due to its adaptive nature and ability to control numerical errors in space and time. Proposed method solves advection in Lagrangian way eliminating problems in Eulerian methods, while optimal collocation grid efficiently describes solution and boundary conditions eliminating usage of large number of particles and other problems in Lagrangian methods. Finally, numerical tests show that this approach enables not only accurate velocity field, but also conservative transport even in highly heterogeneous porous media resolving all spatial and temporal scales of concentration field.
Girard, R; Amazian, K; Fabry, J
2001-02-01
The aim of the study was to demonstrate that the introduction of rub-in hand disinfection (RHD) in hospital units, with the implementation of suitable equipment, drafting of specific protocols, and training users, improved compliance of hand disinfection and tolerance of user's hands. In four hospital units not previously using RHD an external investigator conducted two identical studies in order to measure the rate of compliance with, and the quality of, disinfection practices, [rate of adapted (i.e., appropriate) procedures, rate of correct (i.e., properly performed) procedures, rate of adapted and correct procedures carried out] and to assess the state of hands (clinical scores of dryness and irritation, measuring hydration with a corneometer). Between the two studies, the units were equipped with dispensers for RHD products and staff were trained. Compliance improved from 62.2 to 66.5%, quality was improved (rate of adapted procedures from 66.8% to 84.3%, P > or = 10(-6), rate of correct procedures from 11.1% to 28.9%, P > or = 10(-8), rate of adapted and correct procedures from 6.0 to 17.8%, P > or = 10(-8)). The tolerance was improved significantly (P > or = 10(-2)) for clinical dryness and irritation scores, although not significantly for measurements using a corneometer. This study shows the benefit of introducing RHD with a technical and educational accompaniment. Copyright 2001 The Hospital Infection Society.
Mixture-based gatekeeping procedures in adaptive clinical trials.
Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji
2018-01-01
Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.
Adaptive mesh strategies for the spectral element method
NASA Technical Reports Server (NTRS)
Mavriplis, Catherine
1992-01-01
An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burger equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
A Comparison of Procedures for Content-Sensitive Item Selection in Computerized Adaptive Tests.
ERIC Educational Resources Information Center
Kingsbury, G. Gage; Zara, Anthony R.
1991-01-01
This simulation investigated two procedures that reduce differences between paper-and-pencil testing and computerized adaptive testing (CAT) by making CAT content sensitive. Results indicate that the price in terms of additional test items of using constrained CAT for content balancing is much smaller than that of using testlets. (SLD)
ERIC Educational Resources Information Center
Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.
2006-01-01
The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…
Dynamic grid refinement for partial differential equations on parallel computers
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.
A simple anaesthetic and monitoring system for magnetic resonance imaging.
Rejger, V S; Cohn, B F; Vielvoye, G J; de Raadt, F B
1989-09-01
Clinical magnetic resonance imaging (MRI) is a digital tomographic technique which utilizes radio waves emitted by hydrogen protons in a powerful magnetic field to form an image of soft-tissue structures and abnormalities within the body. Unfortunately, because of the relatively long scanning time required and the narrow deep confines of the MRI tunnel and Faraday cage, some patients cannot be examined without the use of heavy sedation or general anaesthesia. Due to poor access to the patient and the strong magnetic field, several problems arise in monitoring and administering anaesthesia during this procedure. In this presentation these problems and their solutions, as resolved by our institution, are discussed. Of particular interest is the anaesthesia circuit specifically adapted for use during MRI scanning.
Extension of transonic flow computational concepts in the analysis of cavitated bearings
NASA Technical Reports Server (NTRS)
Vijayaraghavan, D.; Keith, T. G., Jr.; Brewe, D. E.
1990-01-01
An analogy between the mathematical modeling of transonic potential flow and the flow in a cavitating bearing is described. Based on the similarities, characteristics of the cavitated region and jump conditions across the film reformation and rupture fronts are developed using the method of weak solutions. The mathematical analogy is extended by utilizing a few computational concepts of transonic flow to numerically model the cavitating bearing. Methods of shock fitting and shock capturing are discussed. Various procedures used in transonic flow computations are adapted to bearing cavitation applications, for example, type differencing, grid transformation, an approximate factorization technique, and Newton's iteration method. These concepts have proved to be successful and have vastly improved the efficiency of numerical modeling of cavitated bearings.
On the Solution of the Three-Dimensional Flowfield About a Flow-Through Nacelle. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Compton, William Bernard
1985-01-01
The solution of the three dimensional flow field for a flow through nacelle was studied. Both inviscid and viscous inviscid interacting solutions were examined. Inviscid solutions were obtained with two different computational procedures for solving the three dimensional Euler equations. The first procedure employs an alternating direction implicit numerical algorithm, and required the development of a complete computational model for the nacelle problem. The second computational technique employs a fourth order Runge-Kutta numerical algorithm which was modified to fit the nacelle problem. Viscous effects on the flow field were evaluated with a viscous inviscid interacting computational model. This model was constructed by coupling the explicit Euler solution procedure with a flag entrainment boundary layer solution procedure in a global iteration scheme. The computational techniques were used to compute the flow field for a long duct turbofan engine nacelle at free stream Mach numbers of 0.80 and 0.94 and angles of attack of 0 and 4 deg.
An adaptive grid algorithm for one-dimensional nonlinear equations
NASA Technical Reports Server (NTRS)
Gutierrez, William E.; Hills, Richard G.
1990-01-01
Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Step gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton interation. These difficulties result in large CPU requirements in solving Richards equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems are studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low and high frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm is used to solve one-dimensional transient partial differential equations, such as the diffusive and convective-diffusion equations. The performance of these programs are compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For the Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of the Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and less computation time than required by the tridiagonal method. The performance of the adaptive grid method tends to degrade as the solution process proceeds in time, but still remains faster than the tridiagonal scheme.
Data forwarding mechanism for supporting real-time services during relocations in UMTS systems
NASA Astrophysics Data System (ADS)
Cai, Wei; Liao, Xianglong; Zheng, Liang; Liu, Zehong
2004-04-01
To minimize the interruption during the handovers or relocations invoked by subscribers moving is a very critical factor to enhance the performance of the UMTS systems. We know that the 2G systems have been optimized to minimize the interruption of speech during handovers by two main technologies: one is the bi-casting for the DL traffic and the other is the fast radio resynchronization by the UE for the UL traffic. In the UMTS systems, we have also implemented lossless relocations for non real-time services with high reliability by data buffering in the source RNC and target RNC for the UE. However, the UMTS systems support four QoS classes traffic flow: conversational class, streaming class, interactive class and background class. The main distinguishing factor between these QoS classes is how delay sensitive the traffic is: Conversational and Streaming classes are mainly used to carry real-time traffic flows, like video telephony, interactive and background classes are mainly used by traditional Internet applications like WWW, E-mail and FTP. It"s essential to provide the solutions for supporting real-time services to meet the requirement for QoS in UMTS systems. Apparently, the Data buffering mechanism is not adapted to real-time services because of it"s delay may exceed the basic requirement for real-time services. Under this background, the paper discussed two data forwarding solutions for real-time services from the PS domain in the UMTS systems: packet duplication and Core Network bi-casting. The former mechanism does not require any new procedures, messages nor information elements. The later mechanism requires that the GGSN or SGSN is able to bi-cast the DL traffic to the target RNC according to the relocations involving two SGSNs or just involving one SGSN. It also implicitly shows that we need change procedures at the nodes SGSN, GGSN and RNC which are involved in the relocation procedure based on existing procedures that we have already designed if adopt the later solution. In a detail way, the paper analyzed the characteristic for these two solutions respectively, concentrated on the packet flows and the message flows in those nodes involved in relocations. Additionally, also gave out the impact on present transport technologies in the wireless communication systems. However we shall minimize the impact of evolution of transport mechanism and utilize the resource efficiently according to the general requirements for QoS in UMTS systems.
[European Portuguese EARS test battery adaptation].
Alves, Marisa; Ramos, Daniela; Oliveira, Graça; Alves, Helena; Anderson, Ilona; Magalhães, Isabel; Martins, Jorge H; Simões, Margarida; Ferreira, Raquel; Fonseca, Rita; Andrade, Susana; Silva, Luís; Ribeiro, Carlos; Ferreira, Pedro Lopes
2014-01-01
The use of adequate assessment tools in health care is crucial for the management of care. The lack of specific tools in Portugal for assessing the performance of children who use cochlear implants motivated the translation and adaptation of the EARS (Evaluation of Auditory Responses to Speech) test battery into European Portuguese. This test battery is today one of the most commonly used by (re)habilitation teams of deaf children who use cochlear implants worldwide. The goal to be achieved with the validation of EARS was to provide (re)habilitation teams an instrument that enables: (i) monitoring the progress of individual (re)habilitation, (ii) managing a (re)habilitation program according to objective results, comparable between different (re)habilitation teams, (iii) obtaining data that can be compared with the results of international teams, and (iv) improving engagement and motivation of the family and other professionals from local teams. For the test battery translation and adaptation process, the adopted procedures were the following: (i) translation of the English version into European Portuguese by a professional translator, (ii) revision of the translation performed by an expert panel, including doctors, speech-language pathologists and audiologists, (iii) adaptation of the test stimuli by the team's speechlanguage pathologist, and (iv) further review by the expert panel. For each of the tests that belong to the EARS battery, the introduced adaptations and adjustments are presented, combining the characteristics and objectives of the original tests with the linguistic and cultural specificities of the Portuguese population. The difficulties that have been encountered during the translation and adaptation process and the adopted solutions are discussed. Comparisons are made with other versions of the EARS battery. We defend that the translation and the adaptation process followed for the EARS test battery into European Portuguese was correctly conducted, respecting the characteristics of the original instruments and adapting the test stimuli to the linguistic and cultural reality of the Portuguese population, thus meeting the goals that have been set.
Numerical simulation of aerothermal loads in hypersonic engine inlets due to shock impingement
NASA Technical Reports Server (NTRS)
Ramakrishnan, R.
1992-01-01
The effect of shock impingement on an axial corner simulating the inlet of a hypersonic vehicle engine is modeled using a finite-difference procedure. A three-dimensional dynamic grid adaptation procedure is utilized to move the grids to regions with strong flow gradients. The adaptation procedure uses a grid relocation stencil that is valid at both the interior and boundary points of the finite-difference grid. A linear combination of spatial derivatives of specific flow variables, calculated with finite-element interpolation functions, are used as adaptation measures. This computational procedure is used to study laminar and turbulent Mach 6 flows in the axial corner. The description of flow physics and qualitative measures of heat transfer distributions on cowl and strut surfaces obtained from the analysis are compared with experimental observations. Conclusions are drawn regarding the capability of the numerical scheme for enhanced modeling of high-speed compressible flows.
Ryeznik, Yevgen; Sverdlov, Oleksandr; Wong, Weng Kee
2015-08-01
Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool , a user interface software developed in MATLAB for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.
Adaptive Modeling Procedure Selection by Data Perturbation.
Zhang, Yongli; Shen, Xiaotong
2015-10-01
Many procedures have been developed to deal with the high-dimensional problem that is emerging in various business and economics areas. To evaluate and compare these procedures, modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into a modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherited in a selection process by perturbing the data. Critical to data perturbation is the size of perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy.
NASA Astrophysics Data System (ADS)
Chen, Xianshun; Feng, Liang; Ong, Yew Soon
2012-07-01
In this article, we proposed a self-adaptive memeplex robust search (SAMRS) for finding robust and reliable solutions that are less sensitive to stochastic behaviours of customer demands and have low probability of route failures, respectively, in vehicle routing problem with stochastic demands (VRPSD). In particular, the contribution of this article is three-fold. First, the proposed SAMRS employs the robust solution search scheme (RS 3) as an approximation of the computationally intensive Monte Carlo simulation, thus reducing the computation cost of fitness evaluation in VRPSD, while directing the search towards robust and reliable solutions. Furthermore, a self-adaptive individual learning based on the conceptual modelling of memeplex is introduced in the SAMRS. Finally, SAMRS incorporates a gene-meme co-evolution model with genetic and memetic representation to effectively manage the search for solutions in VRPSD. Extensive experimental results are then presented for benchmark problems to demonstrate that the proposed SAMRS serves as an efficable means of generating high-quality robust and reliable solutions in VRPSD.
An adaptive gridless methodology in one dimension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, N.T.; Hailey, C.E.
1996-09-01
Gridless numerical analysis offers great potential for accurately solving for flow about complex geometries or moving boundary problems. Because gridless methods do not require point connection, the mesh cannot twist or distort. The gridless method utilizes a Taylor series about each point to obtain the unknown derivative terms from the current field variable estimates. The governing equation is then numerically integrated to determine the field variables for the next iteration. Effects of point spacing and Taylor series order on accuracy are studied, and they follow similar trends of traditional numerical techniques. Introducing adaption by point movement using a spring analogymore » allows the solution method to track a moving boundary. The adaptive gridless method models linear, nonlinear, steady, and transient problems. Comparison with known analytic solutions is given for these examples. Although point movement adaption does not provide a significant increase in accuracy, it helps capture important features and provides an improved solution.« less
Space-Wise approach for airborne gravity data modelling
NASA Astrophysics Data System (ADS)
Sampietro, D.; Capponi, M.; Mansi, A. H.; Gatti, A.; Marchetti, P.; Sansò, F.
2017-05-01
Regional gravity field modelling by means of remove-compute-restore procedure is nowadays widely applied in different contexts: it is the most used technique for regional gravimetric geoid determination, and it is also used in exploration geophysics to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.), which are useful to understand and map geological structures in a specific region. Considering this last application, due to the required accuracy and resolution, airborne gravity observations are usually adopted. However, due to the relatively high acquisition velocity, presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are usually contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, a software to filter and grid raw airborne observations is presented: the proposed solution consists in a combination of an along-track Wiener filter and a classical Least Squares Collocation technique. Basically, the proposed procedure is an adaptation to airborne gravimetry of the Space-Wise approach, developed by Politecnico di Milano to process data coming from the ESA satellite mission GOCE. Among the main differences with respect to the satellite application of this approach, there is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered a-priori well known, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and should be retrieved from the dataset itself. The presented solution is suited for airborne data analysis in order to be able to quickly filter and grid gravity observations in an easy way. Some innovative theoretical aspects focusing in particular on the theoretical covariance modelling are presented too. In the end, the goodness of the procedure is evaluated by means of a test on real data retrieving the gravitational signal with a predicted accuracy of about 0.4 mGal.
Mueller, Dirk; Breeman, Wouter A P; Klette, Ingo; Gottschaldt, Michael; Odparlik, Andreas; Baehre, Manfred; Tworowska, Izabela; Schultz, Michael K
2017-01-01
Gallium-68 (68Ga) is a generator-produced radionuclide with a short half-life (t½ = 68 min) that is particularly well suited for molecular imaging by positron emission tomography (PET). Methods have been developed to synthesize 68Ga-labeled imaging agents possessing certain drawbacks, such as longer synthesis time because of a required final purification step, the use of organic solvents or concentrated hydrochloric acid (HCl). In our manuscript, we provide a detailed protocol for the use of an advantageous sodium chloride (NaCl)-based method for radiolabeling of chelator-modified peptides for molecular imaging. By working in a lead-shielded hot-cell system, 68Ga3+ of the generator eluate is trapped on a cation exchanger cartridge (100 mg, ∼8 mm long and 5 mm diameter) and then eluted with acidified 5 M NaCl solution directly into a sodium acetate-buffered solution containing a DOTA (1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid) or DOTA-like chelator-modified peptide. The main advantages of this procedure are the high efficiency and the absence of organic solvents. It can be applied to a variety of peptides, which are stable in 1 M NaCl solution at a pH value of 3–4 during reaction. After labeling, neutralization, sterile filtration and quality control (instant thin-layer chromatography (iTLC), HPLC and pH), the radiopharmaceutical can be directly administered to patients, without determination of organic solvents, which reduces the overall synthesis-to-release time. This procedure has been adapted easily to automated synthesis modules, which leads to a rapid preparation of 68Ga radiopharmaceuticals (12–16 min). PMID:27172166
NASA Astrophysics Data System (ADS)
Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei
2013-08-01
develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posterior probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of a posterior PDF ("mode" in statistics) in the first step. The second step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from slip models in the third step using the same procedure of the second step, with fixed fault geometry parameters. We first design a fault model with 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments with most of the slip occurring within 15 km depth and the maximum slip reaches 1.38 m at the surface. The seismic moment released is estimated to be 2.32e+19 Nm, consistent with the seismic estimate of 2.50e+19 Nm.
Adaptiveness in monotone pseudo-Boolean optimization and stochastic neural computation.
Grossi, Giuliano
2009-08-01
Hopfield neural network (HNN) is a nonlinear computational model successfully applied in finding near-optimal solutions of several difficult combinatorial problems. In many cases, the network energy function is obtained through a learning procedure so that its minima are states falling into a proper subspace (feasible region) of the search space. However, because of the network nonlinearity, a number of undesirable local energy minima emerge from the learning procedure, significantly effecting the network performance. In the neural model analyzed here, we combine both a penalty and a stochastic process in order to enhance the performance of a binary HNN. The penalty strategy allows us to gradually lead the search towards states representing feasible solutions, so avoiding oscillatory behaviors or asymptotically instable convergence. Presence of stochastic dynamics potentially prevents the network to fall into shallow local minima of the energy function, i.e., quite far from global optimum. Hence, for a given fixed network topology, the desired final distribution on the states can be reached by carefully modulating such process. The model uses pseudo-Boolean functions both to express problem constraints and cost function; a combination of these two functions is then interpreted as energy of the neural network. A wide variety of NP-hard problems fall in the class of problems that can be solved by the model at hand, particularly those having a monotonic quadratic pseudo-Boolean function as constraint function. That is, functions easily derived by closed algebraic expressions representing the constraint structure and easy (polynomial time) to maximize. We show the asymptotic convergence properties of this model characterizing its state space distribution at thermal equilibrium in terms of Markov chain and give evidence of its ability to find high quality solutions on benchmarks and randomly generated instances of two specific problems taken from the computational graph theory.
Mueller, Dirk; Breeman, Wouter A P; Klette, Ingo; Gottschaldt, Michael; Odparlik, Andreas; Baehre, Manfred; Tworowska, Izabela; Schultz, Michael K
2016-06-01
Gallium-68 ((68)Ga) is a generator-produced radionuclide with a short half-life (t½ = 68 min) that is particularly well suited for molecular imaging by positron emission tomography (PET). Methods have been developed to synthesize (68)Ga-labeled imaging agents possessing certain drawbacks, such as longer synthesis time because of a required final purification step, the use of organic solvents or concentrated hydrochloric acid (HCl). In our manuscript, we provide a detailed protocol for the use of an advantageous sodium chloride (NaCl)-based method for radiolabeling of chelator-modified peptides for molecular imaging. By working in a lead-shielded hot-cell system,(68)Ga(3+) of the generator eluate is trapped on a cation exchanger cartridge (100 mg, ∼8 mm long and 5 mm diameter) and then eluted with acidified 5 M NaCl solution directly into a sodium acetate-buffered solution containing a DOTA (1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid) or DOTA-like chelator-modified peptide. The main advantages of this procedure are the high efficiency and the absence of organic solvents. It can be applied to a variety of peptides, which are stable in 1 M NaCl solution at a pH value of 3-4 during reaction. After labeling, neutralization, sterile filtration and quality control (instant thin-layer chromatography (iTLC), HPLC and pH), the radiopharmaceutical can be directly administered to patients, without determination of organic solvents, which reduces the overall synthesis-to-release time. This procedure has been adapted easily to automated synthesis modules, which leads to a rapid preparation of (68)Ga radiopharmaceuticals (12-16 min).
Taborri, Juri; Scalona, Emilia; Palermo, Eduardo; Rossi, Stefano; Cappa, Paolo
2015-09-23
Gait-phase recognition is a necessary functionality to drive robotic rehabilitation devices for lower limbs. Hidden Markov Models (HMMs) represent a viable solution, but they need subject-specific training, making data processing very time-consuming. Here, we validated an inter-subject procedure to avoid the intra-subject one in two, four and six gait-phase models in pediatric subjects. The inter-subject procedure consists in the identification of a standardized parameter set to adapt the model to measurements. We tested the inter-subject procedure both on scalar and distributed classifiers. Ten healthy children and ten hemiplegic children, each equipped with two Inertial Measurement Units placed on shank and foot, were recruited. The sagittal component of angular velocity was recorded by gyroscopes while subjects performed four walking trials on a treadmill. The goodness of classifiers was evaluated with the Receiver Operating Characteristic. The results provided a goodness from good to optimum for all examined classifiers (0 < G < 0.6), with the best performance for the distributed classifier in two-phase recognition (G = 0.02). Differences were found among gait partitioning models, while no differences were found between training procedures with the exception of the shank classifier. Our results raise the possibility of avoiding subject-specific training in HMM for gait-phase recognition and its implementation to control exoskeletons for the pediatric population.
Taborri, Juri; Scalona, Emilia; Palermo, Eduardo; Rossi, Stefano; Cappa, Paolo
2015-01-01
Gait-phase recognition is a necessary functionality to drive robotic rehabilitation devices for lower limbs. Hidden Markov Models (HMMs) represent a viable solution, but they need subject-specific training, making data processing very time-consuming. Here, we validated an inter-subject procedure to avoid the intra-subject one in two, four and six gait-phase models in pediatric subjects. The inter-subject procedure consists in the identification of a standardized parameter set to adapt the model to measurements. We tested the inter-subject procedure both on scalar and distributed classifiers. Ten healthy children and ten hemiplegic children, each equipped with two Inertial Measurement Units placed on shank and foot, were recruited. The sagittal component of angular velocity was recorded by gyroscopes while subjects performed four walking trials on a treadmill. The goodness of classifiers was evaluated with the Receiver Operating Characteristic. The results provided a goodness from good to optimum for all examined classifiers (0 < G < 0.6), with the best performance for the distributed classifier in two-phase recognition (G = 0.02). Differences were found among gait partitioning models, while no differences were found between training procedures with the exception of the shank classifier. Our results raise the possibility of avoiding subject-specific training in HMM for gait-phase recognition and its implementation to control exoskeletons for the pediatric population. PMID:26404309
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
Huang, X N; Ren, H P
2016-05-13
Robust adaptation is a critical ability of gene regulatory network (GRN) to survive in a fluctuating environment, which represents the system responding to an input stimulus rapidly and then returning to its pre-stimulus steady state timely. In this paper, the GRN is modeled using the Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. The robust adaption is quantitatively described by two conflicting indices. To identify the parameter sets in order to confer the GRNs with robust adaptation is a multi-variable, multi-objective, and multi-peak optimization problem, which is difficult to acquire satisfactory solutions especially high-quality solutions. A new best-neighbor particle swarm optimization algorithm is proposed to implement this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population. The particle crossover operation and elitist preservation strategy are also used in the proposed algorithm. The simulation results revealed that the proposed algorithm could identify multiple solutions in one time running. Moreover, it demonstrated a superior performance as compared to the previous methods in the sense of detecting more high-quality solutions within an acceptable time. The proposed methodology, owing to its universality and simplicity, is useful for providing the guidance to design GRN with superior robust adaptation.
NASA Technical Reports Server (NTRS)
Stricklin, J. A.; Haisler, W. E.; Von Riesemann, W. A.
1972-01-01
This paper presents an assessment of the solution procedures available for the analysis of inelastic and/or large deflection structural behavior. A literature survey is given which summarized the contribution of other researchers in the analysis of structural problems exhibiting material nonlinearities and combined geometric-material nonlinearities. Attention is focused at evaluating the available computation and solution techniques. Each of the solution techniques is developed from a common equation of equilibrium in terms of pseudo forces. The solution procedures are applied to circular plates and shells of revolution in an attempt to compare and evaluate each with respect to computational accuracy, economy, and efficiency. Based on the numerical studies, observations and comments are made with regard to the accuracy and economy of each solution technique.
An adaptive SVSF-SLAM algorithm to improve the success and solving the UGVs cooperation problem
NASA Astrophysics Data System (ADS)
Demim, Fethi; Nemra, Abdelkrim; Louadj, Kahina; Hamerlain, Mustapha; Bazoula, Abdelouahab
2018-05-01
This paper aims to present a Decentralised Cooperative Simultaneous Localization and Mapping (DCSLAM) solution based on 2D laser data using an Adaptive Covariance Intersection (ACI). The ACI-DCSLAM algorithm will be validated on a swarm of Unmanned Ground Vehicles (UGVs) receiving features to estimate the position and covariance of shared features before adding them to the global map. With the proposed solution, a group of (UGVs) will be able to construct a large reliable map and localise themselves within this map without any user intervention. The most popular solutions to this problem are the EKF-SLAM, Nonlinear H-infinity ? SLAM and the FAST-SLAM. The former suffers from two important problems which are the poor consistency caused by the linearization problem and the calculation of Jacobian. The second solution is the ? which is a very promising filter because it doesn't make any assumption about noise characteristics, while the latter is not suitable for real time implementation. Therefore, a new alternative solution based on the smooth variable structure filter (SVSF) is adopted. Cooperative adaptive SVSF-SLAM algorithm is proposed in this paper to solve the UGVs SLAM problem. Our main contribution consists in adapting the SVSF filter to solve the Decentralised Cooperative SLAM problem for multiple UGVs. The algorithms developed in this paper were implemented using two mobile robots Pioneer ?, equiped with 2D laser telemetry sensors. Good results are obtained by the Cooperative adaptive SVSF-SLAM algorithm compared to the Cooperative EKF/?-SLAM algorithms, especially when the noise is colored or affected by a variable bias. Simulation results confirm and show the efficiency of the proposed algorithm which is more robust, stable and adapted to real time applications.
A Robust Adaptive Autonomous Approach to Optimal Experimental Design
NASA Astrophysics Data System (ADS)
Gu, Hairong
Experimentation is the fundamental tool of scientific inquiries to understand the laws governing the nature and human behaviors. Many complex real-world experimental scenarios, particularly in quest of prediction accuracy, often encounter difficulties to conduct experiments using an existing experimental procedure for the following two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for those experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Incompetently, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize the experimental length and cost. Facing the two challenges in those experimental scenarios, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and performs optimization of experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges in those experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus exempting the requirement of a parametric model at the beginning of an experiment; design optimization is performed to select experimental designs on the fly of an experiment based on their usefulness so that fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection is realized by Bayesian spike-and-slab prior, reverse prediction is realized by grid-search and design optimization is realized by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without the assumption of a parametric model serving as the proxy of latent data structure while the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by taking fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution to real-world experimental scenarios pursuing robust prediction and efficient experimentation.
Evolutionary online behaviour learning and adaptation in real robots.
Silva, Fernando; Correia, Luís; Christensen, Anders Lyhne
2017-07-01
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm.
Analyzing Hedges in Verbal Communication: An Adaptation-Based Approach
ERIC Educational Resources Information Center
Wang, Yuling
2010-01-01
Based on Adaptation Theory, the article analyzes the production process of hedges. The procedure consists of the continuous making of choices in linguistic forms and communicative strategies. These choices are made just for adaptation to the contextual correlates. Besides, the adaptation process is dynamic, intentional and bidirectional.
Sampling procedures for inventory of commercial volume tree species in Amazon Forest.
Netto, Sylvio P; Pelissari, Allan L; Cysneiros, Vinicius C; Bonazza, Marcelo; Sanquetta, Carlos R
2017-01-01
The spatial distribution of tropical tree species can affect the consistency of the estimators in commercial forest inventories, therefore, appropriate sampling procedures are required to survey species with different spatial patterns in the Amazon Forest. For this, the present study aims to evaluate the conventional sampling procedures and introduce the adaptive cluster sampling for volumetric inventories of Amazonian tree species, considering the hypotheses that the density, the spatial distribution and the zero-plots affect the consistency of the estimators, and that the adaptive cluster sampling allows to obtain more accurate volumetric estimation. We use data from a census carried out in Jamari National Forest, Brazil, where trees with diameters equal to or higher than 40 cm were measured in 1,355 plots. Species with different spatial patterns were selected and sampled with simple random sampling, systematic sampling, linear cluster sampling and adaptive cluster sampling, whereby the accuracy of the volumetric estimation and presence of zero-plots were evaluated. The sampling procedures applied to species were affected by the low density of trees and the large number of zero-plots, wherein the adaptive clusters allowed concentrating the sampling effort in plots with trees and, thus, agglutinating more representative samples to estimate the commercial volume.
Shen, Yi
2013-05-01
A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.
Improving patient care by making small sustainable changes: a cardiac telemetry unit's experience.
Braaten, Jane S; Bellhouse, Dorothy E
2007-01-01
With the introduction of each new drug, technology, and regulation, the processes of care become more complicated, creating an elaborate set of procedures connecting various hospital units and departments. Using methods of Adaptive Design and the Toyota Production System, a nursing unit redesigned work systems to achieve sustainable improvements in productivity, staff and patient satisfaction, and quality outcomes. The first hurdle of redesign was identifying problems, to which staff had become so accustomed with various work arounds that they had trouble seeing the process bottlenecks. Once the staff identified problems, they assumed they could solve the problem because they assumed they knew the causes. Utilizing root cause analysis, asking, "why, why, why," was essential to unearthing the true cause of a problem. Similarly, identifying solutions that were simple and low cost was an essential step in problem solving. Adopting new procedures and sustaining the commitment to identify and signal problems was a last and critical step toward realizing improvement, requiring a manager to function as "teacher/coach" rather than "fixer/firefighter".
Diffeomorphic demons: efficient non-parametric image registration.
Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas
2009-03-01
We propose an efficient non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. In the first part of this paper, we show that Thirion's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. We provide strong theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm. We show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence. In the second part of this paper, we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard, available in controlled experiments, in terms of Jacobians.
A Framework for Reproducible Latent Fingerprint Enhancements.
Carasso, Alfred S
2014-01-01
Photoshop processing of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that appproach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems, (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information, in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology.
Investigation of Variations in the Equivalent Number of Looks for Polarimetric Channels
NASA Astrophysics Data System (ADS)
Hu, Dingsheng; Anfinsen, Stian Normann; Tao, Ding; Qiu, Xiaolan
2015-04-01
Current estimators of equivalent number of looks (ENL) have already been able to adapt the full-polarimetric SAR data and work in an unsupervised way. However, for some complex SAR scenes, the existing unsupervised estimation procedure would underestimate the ENL value, as the influence of inhomogeneous factor surpasses the allowance. Before determining further solution, this paper first investigates deviations in the estimated ENL that are observed when processing polarimetric synthetic aperture radar images of ocean surfaces. Even for surface that appears to be homogeneous, the estimated ENL is significantly different in cross-polarimetric (cross-pol) and co-polarimetric (co-pol) channels. We have formulated two hypotheses for the cause of this. Both hypotheses reflect that the mixtures are different in each channel, which leads us to question the validity of using the polarimetric information as a whole to eliminate mixture influence, in terms of accuracy and rationality. In the paper, we proposes a new unsupervised estimation procedure to avoid the mixture influence and with robust capability to obtain accurate ENL estimation even for some complex SAR scene.
A flexible motif search technique based on generalized profiles.
Bucher, P; Karplus, K; Moeri, N; Hofmann, K
1996-03-01
A flexible motif search technique is presented which has two major components: (1) a generalized profile syntax serving as a motif definition language; and (2) a motif search method specifically adapted to the problem of finding multiple instances of a motif in the same sequence. The new profile structure, which is the core of the generalized profile syntax, combines the functions of a variety of motif descriptors implemented in other methods, including regular expression-like patterns, weight matrices, previously used profiles, and certain types of hidden Markov models (HMMs). The relationship between generalized profiles and other biomolecular motif descriptors is analyzed in detail, with special attention to HMMs. Generalized profiles are shown to be equivalent to a particular class of HMMs, and conversion procedures in both directions are given. The conversion procedures provide an interpretation for local alignment in the framework of stochastic models, allowing for clear, simple significance tests. A mathematical statement of the motif search problem defines the new method exactly without linking it to a specific algorithmic solution. Part of the definition includes a new definition of disjointness of alignments.
Emerging Issues and Future Developments in Capsule Endoscopy
Slawinski, Piotr R.; Obstein, Keith L.; Valdastri, Pietro
2015-01-01
Capsule endoscopy (CE) has transformed from a research venture into a widely used clinical tool and the primary means for diagnosing small bowel pathology. These orally administered capsules traverse passively through the gastrointestinal tract via peristalsis and are used in the esophagus, stomach, small bowel, and colon. The primary focus of CE research in recent years has been enabling active CE manipulation and extension of the technology to therapeutic functionality; thus, widening the scope of the procedure. This review outlines clinical standards of the technology as well as recent advances in CE research. Clinical capsule applications are discussed with respect to each portion of the gastrointestinal tract. Promising research efforts are presented with an emphasis on enabling active capsule locomotion. The presented studies suggest, in particular, that the most viable solution for active capsule manipulation is actuation of a capsule via exterior permanent magnet held by a robot. Developing capsule procedures adhering to current healthcare standards, such as enabling a tool channel or irrigation in a therapeutic device, is a vital phase in the adaptation of CE in the clinical setting. PMID:26028956
A Framework for Reproducible Latent Fingerprint Enhancements
Carasso, Alfred S.
2014-01-01
Photoshop processing1 of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that appproach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems, (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information, in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology. PMID:26601028
ICASE/LaRC Workshop on Adaptive Grid Methods
NASA Technical Reports Server (NTRS)
South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)
1995-01-01
Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.
Mesh refinement strategy for optimal control problems
NASA Astrophysics Data System (ADS)
Paiva, L. T.; Fontes, F. A. C. C.
2013-10-01
Direct methods are becoming the most used technique to solve nonlinear optimal control problems. Regular time meshes having equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way, involves adaptive mesh refinement. In this case, the mesh nodes have non equidistant spacing which allow a non uniform nodes collocation. In the method presented in this paper, a time mesh refinement strategy based on the local error is developed. After computing a solution in a coarse mesh, the local error is evaluated, which gives information about the subintervals of time domain where refinement is needed. This procedure is repeated until the local error reaches a user-specified threshold. The technique is applied to solve the car-like vehicle problem aiming minimum consumption. The approach developed in this paper leads to results with greater accuracy and yet with lower overall computational time as compared to using a time meshes having equidistant spacing.
Numerical Hydrodynamics in General Relativity.
Font, José A
2003-01-01
The current status of numerical solutions for the equations of ideal general relativistic hydrodynamics is reviewed. With respect to an earlier version of the article, the present update provides additional information on numerical schemes, and extends the discussion of astrophysical simulations in general relativistic hydrodynamics. Different formulations of the equations are presented, with special mention of conservative and hyperbolic formulations well-adapted to advanced numerical methods. A large sample of available numerical schemes is discussed, paying particular attention to solution procedures based on schemes exploiting the characteristic structure of the equations through linearized Riemann solvers. A comprehensive summary of astrophysical simulations in strong gravitational fields is presented. These include gravitational collapse, accretion onto black holes, and hydrodynamical evolutions of neutron stars. The material contained in these sections highlights the numerical challenges of various representative simulations. It also follows, to some extent, the chronological development of the field, concerning advances on the formulation of the gravitational field and hydrodynamic equations and the numerical methodology designed to solve them. Supplementary material is available for this article at 10.12942/lrr-2003-4.
Rubin, Jacob
1992-01-01
The feed forward (FF) method derives efficient operational equations for simulating transport of reacting solutes. It has been shown to be applicable in the presence of networks with any number of homogeneous and/or heterogeneous, classical reaction segments that consist of three, at most binary participants. Using a sequential (network type after network type) exploration approach and, independently, theoretical explanations, it is demonstrated for networks with classical reaction segments containing more than three, at most binary participants that if any one of such networks leads to a solvable transport problem then the FF method is applicable. Ways of helping to avoid networks that produce problem insolvability are developed and demonstrated. A previously suggested algebraic, matrix rank procedure has been adapted and augmented to serve as the main, easy-to-apply solvability test for already postulated networks. Four network conditions that often generate insolvability have been identified and studied. Their early detection during network formulation may help to avoid postulation of insolvable networks.
NASA Astrophysics Data System (ADS)
Tian, Yu-Kun; Zhou, Hui; Chen, Han-Ming; Zou, Ya-Ming; Guan, Shou-Jun
2013-12-01
Seismic inversion is a highly ill-posed problem, due to many factors such as the limited seismic frequency bandwidth and inappropriate forward modeling. To obtain a unique solution, some smoothing constraints, e.g., the Tikhonov regularization are usually applied. The Tikhonov method can maintain a global smooth solution, but cause a fuzzy structure edge. In this paper we use Huber-Markov random-field edge protection method in the procedure of inverting three parameters, P-velocity, S-velocity and density. The method can avoid blurring the structure edge and resist noise. For the parameter to be inverted, the Huber-Markov random-field constructs a neighborhood system, which further acts as the vertical and lateral constraints. We use a quadratic Huber edge penalty function within the layer to suppress noise and a linear one on the edges to avoid a fuzzy result. The effectiveness of our method is proved by inverting the synthetic data without and with noises. The relationship between the adopted constraints and the inversion results is analyzed as well.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1994-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.
Hofmann, Melanie; Winzer, Matthias; Weber, Christian; Gieseler, Henning
2016-06-01
The development of highly concentrated protein formulations is more demanding than for conventional concentrations due to an elevated protein aggregation tendency. Predictive protein-protein interaction parameters, such as the second virial coefficient B22 or the interaction parameter kD, have already been used to predict aggregation tendency and optimize protein formulations. However, these parameters can only be determined in diluted solutions, up to 20 mg/mL. And their validity at high concentrations is currently controversially discussed. This work presents a μ-scale screening approach which has been adapted to early industrial project needs. The procedure is based on static light scattering to directly determine protein-protein interactions at concentrations up to 100 mg/mL. Three different therapeutic molecules were formulated, varying in pH, salt content, and addition of excipients (e.g., sugars, amino acids, polysorbates, or other macromolecules). Validity of the predicted aggregation tendency was confirmed by stability data of selected formulations. Based on the results obtained, the new prediction method is a promising screening tool for fast and easy formulation development of highly concentrated protein solutions, consuming only microliter of sample volumes. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Multi-level adaptive finite element methods. 1: Variation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1979-01-01
A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. It is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed. Including the local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.
Frugal innovation in medicine for low resource settings.
Tran, Viet-Thi; Ravaud, Philippe
2016-07-07
Whilst it is clear that technology is crucial to advance healthcare: innovation in medicine is not just about high-tech tools, new procedures or genome discoveries. In constrained environments, healthcare providers often create unexpected solutions to provide adequate healthcare to patients. These inexpensive but effective frugal innovations may be imperfect, but they have the power to ensure that health is within reach of everyone. Frugal innovations are not limited to low-resource settings: ingenuous ideas can be adapted to offer simpler and disruptive alternatives to usual care all around the world, representing the concept of "reverse innovation". In this article, we discuss the different types of frugal innovations, illustrated with examples from the literature, and argue for the need to give voice to this neglected type of innovation in medicine.
Endothelial protection: avoiding air bubble formation at the phacoemulsification tip.
Kim, Eung Kweon; Cristol, Stephen M; Kang, Shin J; Edelhauser, Henry F; Yeon, Dong-Soo; Lee, Jae Bum
2002-03-01
To investigate the conditions under which bubbles form during phacoemulsification. Department of Ophthalmology, Yonsei University College of Medicine, Seoul, Korea. In the first part of the study, the partial pressure of oxygen (pO(2)) was used as a surrogate measure for the partial pressure of air. Irrigation solutions packaged in glass and plastic containers were studied. A directly vented glass bottle was also tested. The pO(2) of the various irrigation solutions was measured as the containers were emptied. In the second part, phacoemulsification procedures were performed in rabbit eyes with different power settings and different irrigation solutions. Intracameral bubble formation during the procedure was recorded. Following the phacoemulsification procedures, the corneas were stained for F-actin and examined for endothelial injury. The initial pO(2) in irrigation solutions packaged in glass bottles was about half that at atmospheric levels; in solutions packaged in plastic, it was at atmospheric levels. As irrigation solutions were drained from the container, the pO(2) of the solution tended to rise toward atmospheric levels. The rate of pO(2) increase was markedly reduced by using a directly vented glass bottle. In the phacoemulsification procedures, bubble formation was most likely to occur with higher pO(2) and higher power settings. Observation of bubbles by the surgeon was highly correlated with endothelial damage. Keeping the pO(2) low reduced the risk of endothelial damage, especially at higher phacoemulsification powers. The packaging of irrigation solutions was the most important factor in controlling the initial pO(2) of the solution. The pO(2) can be minimized throughout a phacoemulsification procedure by using a directly vented glass bottle.
NASA Technical Reports Server (NTRS)
Crook, Andrew J.; Delaney, Robert A.
1992-01-01
The purpose of this study is the development of a three-dimensional Euler/Navier-Stokes flow analysis for fan section/engine geometries containing multiple blade rows and multiple spanwise flow splitters. An existing procedure developed by Dr. J. J. Adamczyk and associates and the NASA Lewis Research Center was modified to accept multiple spanwise splitter geometries and simulate engine core conditions. The procedure was also modified to allow coarse parallelization of the solution algorithm. This document is a final report outlining the development and techniques used in the procedure. The numerical solution is based upon a finite volume technique with a four stage Runge-Kutta time marching procedure. Numerical dissipation is used to gain solution stability but is reduced in viscous dominated flow regions. Local time stepping and implicit residual smoothing are used to increase the rate of convergence. Multiple blade row solutions are based upon the average-passage system of equations. The numerical solutions are performed on an H-type grid system, with meshes being generated by the system (TIGG3D) developed earlier under this contract. The grid generation scheme meets the average-passage requirement of maintaining a common axisymmetric mesh for each blade row grid. The analysis was run on several geometry configurations ranging from one to five blade rows and from one to four radial flow splitters. Pure internal flow solutions were obtained as well as solutions with flow about the cowl/nacelle and various engine core flow conditions. The efficiency of the solution procedure was shown to be the same as the original analysis.
A Stochastic Total Least Squares Solution of Adaptive Filtering Problem
Ahmad, Noor Atinah
2014-01-01
An efficient and computationally linear algorithm is derived for total least squares solution of adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of adaptive TLS problem by minimizing instantaneous value of weighted cost function. Convergence analysis of the algorithm is given to show the global convergence of the proposed algorithm, provided that the stepsize parameter is appropriately chosen. The TLMS algorithm is computationally simpler than the other TLS algorithms and demonstrates a better performance as compared with the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs. PMID:24688412
Shape reconstruction of irregular bodies with multiple complementary data sources
NASA Astrophysics Data System (ADS)
Kaasalainen, M.; Viikinkoski, M.; Carry, B.; Durech, J.; Lamy, P.; Jorda, L.; Marchis, F.; Hestroffer, D.
2011-10-01
Irregularly shaped bodies with at most partial in situ data are a particular challenge for shape reconstruction and mapping. We have created an inversion algorithm and software package for complementary data sources, with which it is possible to create shape and spin models with feature details even when only groundbased data are available. The procedure uses photometry, adaptive optics or other images, occultation timings, and interferometry as main data sources, and we are extending it to include range-Doppler radar and thermal infrared data as well. The data sources are described as generalized projections in various observable spaces [2], which allows their uniform handling with essentially the same techniques, making the addition of new data sources inexpensive in terms of computation time or software development. We present a generally applicable shape support that can be automatically used for all surface types, including strongly nonconvex or non-starlike shapes. New models of Kleopatra (from photometry, adaptive optics, and interferometry) and Hermione are examples of this approach. When using adaptive optics images, the main information from these is extracted from the limb and terminator contours that can be determined much more accurately than the image pixel brightnesses that inevitably contain large errors for most targets. We have shown that the contours yield a wealth of information independent of the scattering properties of the surface [3]. Their use also facilitates a very fast and robustly converging algorithm. An important concept in the inversion is the optimal weighting of the various data modes. We have developed a mathematicallly rigorous scheme for this purpose. The resulting maximum compatibility estimate [3], a multimodal generalization of the maximum likelihood estimate, ensures that the actual information content of each source is properly taken into account, and that the resolution scale of the ensuing model can be reliably estimated. We have applied our procedure to several asteroids, and the ground truth from the Rosetta/Lutetia flyby confirmed the ability of the approach to recover shape details [1] (see also Carry et al., this meeting). We have created a general flyby-version of the procedure to construct full models of planetary targets for which probe images are only available of a part of the surface (a typical setup for many planetary missions). We have successfully combined flyby images with photometry (Steins [4]) and adaptive optics images (Lutetia); the portion of the surface accurately determined by the flyby constrains the shape solution of the "dark side" efficiently.
An Adaptive Evolutionary Algorithm for Traveling Salesman Problem with Precedence Constraints
Sung, Jinmo; Jeong, Bongju
2014-01-01
Traveling sales man problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approach, even though it has very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of population, which significantly improves the computational efficiency even when it is combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments. PMID:24701158
An adaptive evolutionary algorithm for traveling salesman problem with precedence constraints.
Sung, Jinmo; Jeong, Bongju
2014-01-01
Traveling sales man problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approach, even though it has very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of population, which significantly improves the computational efficiency even when it is combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments.
NASA Astrophysics Data System (ADS)
Aftosmis, Michael J.
1992-10-01
A new node based upwind scheme for the solution of the 3D Navier-Stokes equations on adaptively refined meshes is presented. The method uses a second-order upwind TVD scheme to integrate the convective terms, and discretizes the viscous terms with a new compact central difference technique. Grid adaptation is achieved through directional division of hexahedral cells in response to evolving features as the solution converges. The method is advanced in time with a multistage Runge-Kutta time stepping scheme. Two- and three-dimensional examples establish the accuracy of the inviscid and viscous discretization. These investigations highlight the ability of the method to produce crisp shocks, while accurately and economically resolving viscous layers. The representation of these and other structures is shown to be comparable to that obtained by structured methods. Further 3D examples demonstrate the ability of the adaptive algorithm to effectively locate and resolve multiple scale features in complex 3D flows with many interacting, viscous, and inviscid structures.
Evolutionary online behaviour learning and adaptation in real robots
Correia, Luís; Christensen, Anders Lyhne
2017-01-01
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm. PMID:28791130
Száková, J; Tlustos, P; Goessler, W; Frková, Z; Najmanová, J
2009-12-30
The effect of soil extraction procedures and/or sample pretreatment (drying, freezing of the soil sample) on the extractability of arsenic and its compounds was tested. In the first part, five extraction procedures were compared with following order of extractable arsenic portions: 2M HNO(3)>0.43 M CH(3)COOH>or=0.05 M EDTA>or=Mehlich III (0.2M CH(3)COOH+0.25 M NH(4)NO(3)+0.013 M HNO(3)+0.015 M NH(4)F+0.001 M EDTA) extraction>water). Additionally, two methods of soil solution sampling were compared, centrifugation of saturated soil and the use of suction cups. The results showed that different sample pretreatments including soil solution sampling could lead to different absolute values of mobile arsenic content in soils. However, the interpretation of the data can lead to similar conclusions as apparent from the comparison of the soil solution sampling methods (r=0.79). For determination of arsenic compounds mild extraction procedures (0.05 M (NH(4))(2)SO(4), 0.01 M CaCl(2), and water) and soil solution sampling using suction cups were compared. Regarding the real soil conditions the extraction of fresh samples and/or in situ collection of soil solution are preferred among the sample pretreatments and/or soil extraction procedures. However, chemical stabilization of the solutions should be allowed and included in the analytical procedures for determination of individual arsenic compounds.
Three-dimensional self-adaptive grid method for complex flows
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Deiwert, George S.
1988-01-01
A self-adaptive grid procedure for efficient computation of three-dimensional complex flow fields is described. The method is based on variational principles to minimize the energy of a spring system analogy which redistributes the grid points. Grid control parameters are determined by specifying maximum and minimum grid spacing. Multidirectional adaptation is achieved by splitting the procedure into a sequence of successive applications of a unidirectional adaptation. One-sided, two-directional constraints for orthogonality and smoothness are used to enhance the efficiency of the method. Feasibility of the scheme is demonstrated by application to a multinozzle, afterbody, plume flow field. Application of the algorithm for initial grid generation is illustrated by constructing a three-dimensional grid about a bump-like geometry.
Studying the neural bases of prism adaptation using fMRI: A technical and design challenge.
Bultitude, Janet H; Farnè, Alessandro; Salemme, Romeo; Ibarrola, Danielle; Urquizar, Christian; O'Shea, Jacinta; Luauté, Jacques
2017-12-01
Prism adaptation induces rapid recalibration of visuomotor coordination. The neural mechanisms of prism adaptation have come under scrutiny since the observations that the technique can alleviate hemispatial neglect following stroke, and can alter spatial cognition in healthy controls. Relative to non-imaging behavioral studies, fMRI investigations of prism adaptation face several challenges arising from the confined physical environment of the scanner and the supine position of the participants. Any researcher who wishes to administer prism adaptation in an fMRI environment must adjust their procedures enough to enable the experiment to be performed, but not so much that the behavioral task departs too much from true prism adaptation. Furthermore, the specific temporal dynamics of behavioral components of prism adaptation present additional challenges for measuring their neural correlates. We developed a system for measuring the key features of prism adaptation behavior within an fMRI environment. To validate our configuration, we present behavioral (pointing) and head movement data from 11 right-hemisphere lesioned patients and 17 older controls who underwent sham and real prism adaptation in an MRI scanner. Most participants could adapt to prismatic displacement with minimal head movements, and the procedure was well tolerated. We propose recommendations for fMRI studies of prism adaptation based on the design-specific constraints and our results.
A computational procedure for large rotational motions in multibody dynamics
NASA Technical Reports Server (NTRS)
Park, K. C.; Chiou, J. C.
1987-01-01
A computational procedure suitable for the solution of equations of motion for multibody systems is presented. The present procedure adopts a differential partitioning of the translational motions and the rotational motions. The translational equations of motion are then treated by either a conventional explicit or an implicit direct integration method. A principle feature of this procedure is a nonlinearly implicit algorithm for updating rotations via the Euler four-parameter representation. This procedure is applied to the rolling of a sphere through a specific trajectory, which shows that it yields robust solutions.
Translational Advances of Hydrofection by Hydrodynamic Injection
Herrero, María José; Aliño, Salvador F.
2018-01-01
Hydrodynamic gene delivery has proven to be a safe and efficient procedure for gene transfer, able to mediate, in murine model, therapeutic levels of proteins encoded by the transfected gene. In different disease models and targeting distinct organs, it has been demonstrated to revert the pathologic symptoms and signs. The therapeutic potential of hydrofection led different groups to work on the clinical translation of the procedure. In order to prevent the hemodynamic side effects derived from the rapid injection of a large volume, the conditions had to be moderated to make them compatible with its use in mid-size animal models such as rat, hamster and rabbit and large animals as dog, pig and primates. Despite the different approaches performed to adapt the conditions of gene delivery, the results obtained in any of these mid-size and large animals have been poorer than those obtained in murine model. Among these different strategies to reduce the volume employed, the most effective one has been to exclude the vasculature of the target organ and inject the solution directly. This procedure has permitted, by catheterization and surgical procedures in large animals, achieving protein expression levels in tissue close to those achieved in gold standard models. These promising results and the possibility of employing these strategies to transfer gene constructs able to edit genes, such as CRISPR, have renewed the clinical interest of this procedure of gene transfer. In order to translate the hydrodynamic gene delivery to human use, it is demanding the standardization of the procedure conditions and the molecular parameters of evaluation in order to be able to compare the results and establish a homogeneous manner of expressing the data obtained, as ‘classic’ drugs. PMID:29494564
Davis, Laurie Laughlin; Dodd, Barbara G
2008-01-01
Exposure control research with polytomous item pools has determined that randomization procedures can be very effective for controlling test security in computerized adaptive testing (CAT). The current study investigated the performance of four procedures for controlling item exposure in a CAT under the partial credit model. In addition to a no exposure control baseline condition, the Kingsbury-Zara, modified-within-.10-logits, Sympson-Hetter, and conditional Sympson-Hetter procedures were implemented to control exposure rates. The Kingsbury-Zara and the modified-within-.10-logits procedures were implemented with 3 and 6 item candidate conditions. The results show that the Kingsbury-Zara and modified-within-.10-logits procedures with 6 item candidates performed as well as the conditional Sympson-Hetter in terms of exposure rates, overlap rates, and pool utilization. These two procedures are strongly recommended for use with partial credit CATs due to their simplicity and strength of their results.
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in an usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time selection is controlled by required time accuracy. In order to take into account strong temperature dependency of material parameters, the nonlinear structural solution a coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.
Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations
NASA Technical Reports Server (NTRS)
Chrisochoides, Nikos
1995-01-01
We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploring concurrency in the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used an a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data parallel adaptive PDE computations. Unfortunately, multithreading does not always simplify program complexity, often makes code re-usability not an easy task, and increases software complexity.
NASA Astrophysics Data System (ADS)
Chai, Runqi; Savvaris, Al; Tsourdos, Antonios
2016-06-01
In this paper, a fuzzy physical programming (FPP) method has been introduced for solving multi-objective Space Manoeuvre Vehicles (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem has been transformed to nonlinear programming (NLP) problem. According to the mission requirements, the solutions were calculated for each single-objective scenario. To get a compromised solution for each target, the fuzzy physical programming (FPP) model is proposed. The preference function is established with considering the fuzzy factor of the system such that a proper compromised trajectory can be acquired. In addition, the NSGA-II is tested to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible in terms of dealing with the multi-objective skip trajectory optimization for the SMV.
NASA Technical Reports Server (NTRS)
Gossard, Myron L
1952-01-01
An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.
Schwanke, J; Rienhoff, O; Schulze, T G; Nussbeck, S Y
2013-01-01
Longitudinal biomedical research projects study patients or participants over a course of time. No IT solution is known that can manage study participants, enhance quality of data, support re-contacting of participants, plan study visits, and keep track of informed consent procedures and recruitments that may be subject to change over time. In business settings management of personal is one of the major aspects of customer relationship management systems (CRMS). To evaluate whether CRMS are suitable IT solutions for study participant management in biomedical research. Three boards of experts in the field of biomedical research were consulted to get an insight into recent IT developments regarding study participant management systems (SPMS). Subsequently, a requirements analysis was performed with stakeholders of a major biomedical research project. The successive suitability evaluation was based on the comparison of the identified requirements with the features of six CRMS. Independently of each other, the interviewed expert boards confirmed that there is no generic IT solution for the management of participants. Sixty-four requirements were identified and prioritized in a requirements analysis. The best CRMS was able to fulfill forty-two of these requirements. The non-fulfilled requirements demand an adaption of the CRMS, consuming time and resources, reducing the update compatibility, the system's suitability, and the security of the CRMS. A specific solution for the SPMS is favored instead of a generic and commercially-oriented CRMS. Therefore, the development of a small and specific SPMS solution was commenced and is currently on the way to completion.
ERIC Educational Resources Information Center
Meijer, Rob R.; van Krimpen-Stoop, Edith M. L. A.
In this study a cumulative-sum (CUSUM) procedure from the theory of Statistical Process Control was modified and applied in the context of person-fit analysis in a computerized adaptive testing (CAT) environment. Six person-fit statistics were proposed using the CUSUM procedure, and three of them could be used to investigate the CAT in online test…
New multigrid approach for three-dimensional unstructured, adaptive grids
NASA Technical Reports Server (NTRS)
Parthasarathy, Vijayan; Kallinderis, Y.
1994-01-01
A new multigrid method with adaptive unstructured grids is presented. The three-dimensional Euler equations are solved on tetrahedral grids that are adaptively refined or coarsened locally. The multigrid method is employed to propagate the fine grid corrections more rapidly by redistributing the changes-in-time of the solution from the fine grid to the coarser grids to accelerate convergence. A new approach is employed that uses the parent cells of the fine grid cells in an adapted mesh to generate successively coaser levels of multigrid. This obviates the need for the generation of a sequence of independent, nonoverlapping grids as well as the relatively complicated operations that need to be performed to interpolate the solution and the residuals between the independent grids. The solver is an explicit, vertex-based, finite volume scheme that employs edge-based data structures and operations. Spatial discretization is of central-differencing type combined with a special upwind-like smoothing operators. Application cases include adaptive solutions obtained with multigrid acceleration for supersonic and subsonic flow over a bump in a channel, as well as transonic flow around the ONERA M6 wing. Two levels of multigrid resulted in reduction in the number of iterations by a factor of 5.
Adaptation of Selenastrum capricornutum (Chlorophyceae) to copper
Kuwabara, J.S.; Leland, H.V.
1986-01-01
Selenastrum capricornutum Printz, growing in a chemically defined medium, was used as a model for studying adaptation of algae to a toxic metal (copper) ion. Cells exhibited lag-phase adaptation to 0.8 ??M total Cu (10-12 M free ion concentration) after 20 generations of Cu exposure. Selenastrum adapted to the same concentration when Cu was gradually introduced over an 8-h period using a specially designed apparatus that provided a transient increase in exposure concentration. Cu adaptation was not attributable to media conditioning by algal exudates. Duration of lag phase was a more sensitive index of copper toxicity to Selenastrum that was growth rate or stationary-phase cell density under the experimental conditions used. Chemical speciation of the Cu dosing solution influenced the duration of lag phase even when media formulations were identical after dosing. Selenastrum initially exposed to Cu in a CuCl2 injection solution exhibited a lag phase of 3.9 d, but this was reduced to 1.5 d when a CuEDTA solution was used to achieve the same total Cu and EDTA concentrations. Physical and chemical processes that accelerated the rate of increase in cupric ion concentration generally increased the duration of lag phase. ?? 1986.
Broadcasting satellite service synthesis using gradient and cyclic coordinate search procedures
NASA Technical Reports Server (NTRS)
Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J.; Martin, C. H.; Levis, C. A.; Wang, C. W.
1986-01-01
Two search techniques are considered for solving satellite synthesis problems. Neither is likely to find a globally optimal solution. In order to determine which method performs better and what factors affect their performance, we design an experiment and solve the same problem under a variety of starting solution configuration-algorithm combinations. Since there is no randomization in the experiment, we present results of practical, rather than statistical, significance. Our implementation of a cyclic coordinate search procedure clearly finds better synthesis solutions than our implementation of a gradient search procedure does with our objective of maximizing the minimum C/I ratio computed at test points on the perimeters of the intended service areas. The length of the available orbital arc and the configuration of the starting solution are shown to affect the quality of the solutions found.
NASA Technical Reports Server (NTRS)
Wang, Gang
2003-01-01
A multi grid solution procedure for the numerical simulation of turbulent flows in complex geometries has been developed. A Full Multigrid-Full Approximation Scheme (FMG-FAS) is incorporated into the continuity and momentum equations, while the scalars are decoupled from the multi grid V-cycle. A standard kappa-Epsilon turbulence model with wall functions has been used to close the governing equations. The numerical solution is accomplished by solving for the Cartesian velocity components either with a traditional grid staggering arrangement or with a multiple velocity grid staggering arrangement. The two solution methodologies are evaluated for relative computational efficiency. The solution procedure with traditional staggering arrangement is subsequently applied to calculate the flow and temperature fields around a model Short Take-off and Vertical Landing (STOVL) aircraft hovering in ground proximity.
Broadcasting satellite service synthesis using gradient and cyclic coordinate search procedures
NASA Technical Reports Server (NTRS)
Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J.; Martin, C. H.; Levis, C. A.
1986-01-01
Two search techniques are considered for solving satellite synthesis problems. Neither is likely to find a globally optimal solution. In order to determine which method performs better and what factors affect their performance, an experiment is designed and the same problem is solved under a variety of starting solution configuration-algorithm combinations. Since there is no randomization in the experiment, results of practical, rather than statistical, significance are presented. Implementation of a cyclic coordinate search procedure clearly finds better synthesis solutions than implementation of a gradient search procedure does with the objective of maximizing the minimum C/I ratio computed at test points on the perimeters of the intended service areas. The length of the available orbital arc and the configuration of the starting solution are shown to affect the quality of the solutions found.
An adaptive time-stepping strategy for solving the phase field crystal model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk
2013-09-15
In this work, we will propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs long time to reach steady state, and then large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the use of the proposed time step adaptivity cannot only resolve the steady state solution, but also the dynamical development of the solution efficiently and accurately. Themore » numerical experiments demonstrate that the CPU time is significantly saved for long time simulations.« less
Grid adaption for hypersonic flow
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Tiwari, Surendra N.; Smith, Robert E.
1987-01-01
The methods of grid adaption are reviewed and a method is developed with the capability of adaption to several flow variables. This method is based on a variational approach and is an algebraic method which does not require the solution of partial differential equations. Also the method has been formulated in such a way that there is no need for any matrix inversion. The method is used in conjunction with the calculation of hypersonic flow over a blunt nose body. The equations of motion are the compressible Navier-Stokes equations where all viscous terms are retained. They are solved by the MacCormack time-splitting method. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.
2013-01-01
Background Optimization procedures to identify gene knockouts for targeted biochemical overproduction have been widely in use in modern metabolic engineering. Flux balance analysis (FBA) framework has provided conceptual simplifications for genome-scale dynamic analysis at steady states. Based on FBA, many current optimization methods for targeted bio-productions have been developed under the maximum cell growth assumption. The optimization problem to derive gene knockout strategies recently has been formulated as a bi-level programming problem in OptKnock for maximum targeted bio-productions with maximum growth rates. However, it has been shown that knockout mutants in fact reach the steady states with the minimization of metabolic adjustment (MOMA) from the corresponding wild-type strains instead of having maximal growth rates after genetic or metabolic intervention. In this work, we propose a new bi-level computational framework--MOMAKnock--which can derive robust knockout strategies under the MOMA flux distribution approximation. Methods In this new bi-level optimization framework, we aim to maximize the production of targeted chemicals by identifying candidate knockout genes or reactions under phenotypic constraints approximated by the MOMA assumption. Hence, the targeted chemical production is the primary objective of MOMAKnock while the MOMA assumption is formulated as the inner problem of constraining the knockout metabolic flux to be as close as possible to the steady-state phenotypes of wide-type strains. As this new inner problem becomes a quadratic programming problem, a novel adaptive piecewise linearization algorithm is developed in this paper to obtain the exact optimal solution to this new bi-level integer quadratic programming problem for MOMAKnock. Results Our new MOMAKnock model and the adaptive piecewise linearization solution algorithm are tested with a small E. coli core metabolic network and a large-scale iAF1260 E. coli metabolic network. The derived knockout strategies are compared with those from OptKnock. Our preliminary experimental results show that MOMAKnock can provide improved targeted productions with more robust knockout strategies. PMID:23368729
Ren, Shaogang; Zeng, Bo; Qian, Xiaoning
2013-01-01
Optimization procedures to identify gene knockouts for targeted biochemical overproduction have been widely in use in modern metabolic engineering. Flux balance analysis (FBA) framework has provided conceptual simplifications for genome-scale dynamic analysis at steady states. Based on FBA, many current optimization methods for targeted bio-productions have been developed under the maximum cell growth assumption. The optimization problem to derive gene knockout strategies recently has been formulated as a bi-level programming problem in OptKnock for maximum targeted bio-productions with maximum growth rates. However, it has been shown that knockout mutants in fact reach the steady states with the minimization of metabolic adjustment (MOMA) from the corresponding wild-type strains instead of having maximal growth rates after genetic or metabolic intervention. In this work, we propose a new bi-level computational framework--MOMAKnock--which can derive robust knockout strategies under the MOMA flux distribution approximation. In this new bi-level optimization framework, we aim to maximize the production of targeted chemicals by identifying candidate knockout genes or reactions under phenotypic constraints approximated by the MOMA assumption. Hence, the targeted chemical production is the primary objective of MOMAKnock while the MOMA assumption is formulated as the inner problem of constraining the knockout metabolic flux to be as close as possible to the steady-state phenotypes of wide-type strains. As this new inner problem becomes a quadratic programming problem, a novel adaptive piecewise linearization algorithm is developed in this paper to obtain the exact optimal solution to this new bi-level integer quadratic programming problem for MOMAKnock. Our new MOMAKnock model and the adaptive piecewise linearization solution algorithm are tested with a small E. coli core metabolic network and a large-scale iAF1260 E. coli metabolic network. The derived knockout strategies are compared with those from OptKnock. Our preliminary experimental results show that MOMAKnock can provide improved targeted productions with more robust knockout strategies.
Therapeutic apheresis in the Republic of Macedonia - our five years experience (2000-2004).
Milovanceva-Popovska, M; Stojkovski, Lj; Grcevska, L; Dzikova, S; Ristovska, V; Gogovska, L; Polenakovic, M
2006-07-01
Membrane plasma exchange (PE) is a mode of extracorporeal blood purification. Since 1985 membrane PE has been in regular use at the Department of Nephrology, Medical Faculty of Skopje, R.Macedonia. In this paper we report on five years (2000-2004) of single centre plasma exchange activity. We performed 540 PE treatments (108 PE/per year) on 99 patients. The M/F ratio was 40/48. The patients underwent a median of 5.45 procedures (range, 1-16). The treated patients were from different Departments. Protocols for PE depend on the disease and its severity. PE were performed 2-4 times weekly using Gambro PF 2000 N filters with an adaptation of the Gambro AK10 dialysis machine or with the Gambro Prizma machine (2 cases). Blood access was achieved through femoral vein. Substitution was made with fresh frozen plasma and/or with 20% human albumin combined with Ringer's solution. An average amount of 2150 ml plasmafiltrate per treatment (respectively 30 to 40 ml plasmafiltrate/kg body weight) was eliminated. Most therapeutic procedures were performed on patients from the Department of Neurology. 63.6% of all patients were referred for Myasthenia gravis and the Guillian Barre syndrome. The total number of procedures per year has remained fairly stable, corresponding to a median of 5.4 treatments/100 000 inhabitants. We observed hypocalcaemia in 8% of the patients, urticarial reactions in 7.3%, pruritic reactions in 12%, and hypotension/headache in 6.8%. No major procedural complications were seen.
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications namely, gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulation such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
NASA Technical Reports Server (NTRS)
Chang, S. C.
1986-01-01
A two-step semidirect procedure is developed to accelerate the one-step procedure described in NASA TP-2529. For a set of constant coefficient model problems, the acceleration factor increases from 1 to 2 as the one-step procedure convergence rate decreases from + infinity to 0. It is also shown numerically that the two-step procedure can substantially accelerate the convergence of the numerical solution of many partial differential equations (PDE's) with variable coefficients.
Adaptive reconnection-based arbitrary Lagrangian Eulerian method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bo, Wurigen; Shashkov, Mikhail
We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALEmore » method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.« less
Adaptive reconnection-based arbitrary Lagrangian Eulerian method
Bo, Wurigen; Shashkov, Mikhail
2015-07-21
We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALEmore » method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.« less
Yang, S; Wang, D
2000-01-01
This paper presents a constraint satisfaction adaptive neural network, together with several heuristics, to solve the generalized job-shop scheduling problem, one of NP-complete constraint satisfaction problems. The proposed neural network can be easily constructed and can adaptively adjust its weights of connections and biases of units based on the sequence and resource constraints of the job-shop scheduling problem during its processing. Several heuristics that can be combined with the neural network are also presented. In the combined approaches, the neural network is used to obtain feasible solutions, the heuristic algorithms are used to improve the performance of the neural network and the quality of the obtained solutions. Simulations have shown that the proposed neural network and its combined approaches are efficient with respect to the quality of solutions and the solving speed.
Valdivia, Maria Pia; Stutman, Dan; Stoeckl, Christian; Mileham, Chad; Begishev, Ildar A; Bromage, Jake; Regan, Sean P
2018-01-10
Talbot-Lau x-ray interferometry uses incoherent x-ray sources to measure refraction index changes in matter. These measurements can provide accurate electron density mapping through phase retrieval. An adaptation of the interferometer has been developed in order to meet the specific requirements of high-energy density experiments. This adaptation is known as a moiré deflectometer, which allows for single-shot capabilities in the form of interferometric fringe patterns. The moiré x-ray deflectometry technique requires a set of object and reference images in order to provide electron density maps, which can be costly in the high-energy density environment. In particular, synthetic reference phase images obtained ex situ through a phase-scan procedure, can provide a feasible solution. To test this procedure, an object phase map was retrieved from a single-shot moiré image obtained from a plasma-produced x-ray source. A reference phase map was then obtained from phase-stepping measurements using a continuous x-ray tube source in a small laboratory setting. The two phase maps were used to retrieve an electron density map. A comparison of the moiré and phase-stepping phase-retrieval methods was performed to evaluate single-exposure plasma electron density mapping for high-energy density and other transient plasma experiments. It was found that a combination of phase-retrieval methods can deliver accurate refraction angle mapping. Once x-ray backlighter quality is optimized, the ex situ method is expected to deliver electron density mapping with improved resolution. The steps necessary for improved diagnostic performance are discussed.
[Developing team reflexivity as a learning and working tool for medical teams].
Riskin, Arieh; Bamberger, Peter
2014-01-01
Team reflexivity is a collective activity in which team members review their previous work, and develop ideas on how to modify their work behavior in order to achieve better future results. It is an important learning tool and a key factor in explaining the varying effectiveness of teams. Team reflexivity encompasses both self-awareness and agency, and includes three main activities: reflection, planning, and adaptation. The model of briefing-debriefing cycles promotes team reflexivity. Its key elements include: Pre-action briefing--setting objectives, roles, and strategies the mission, as well as proposing adaptations based on what was previously learnt from similar procedures; Post-action debriefing--reflecting on the procedure performed and reviewing the extent to which objectives were met, and what can be learnt for future tasks. Given the widespread attention to team-based work systems and organizational learning, efforts should be made toward ntroducing team reflexivity in health administration systems. Implementation could be difficult because most teams in hospitals are short-lived action teams formed for a particular event, with limited time and opportunity to consciously reflect upon their actions. But it is precisely in these contexts that reflexive processes have the most to offer instead of the natural impulsive collective logics. Team reflexivity suggests a potential solution to the major problems of iatorgenesis--avoidable medical errors, as it forces all team members to participate in a reflexive process together. Briefing-debriefing technology was studied mainly in surgical teams and was shown to enhance team-based learning and to improve quality-related outcomes and safety.
NASA Astrophysics Data System (ADS)
Kim, Gi Young
The problem we investigate deals with an Image Intelligence (IMINT) sensor allocation schedule for High Altitude Long Endurance UAVs in a dynamic and Anti-Access Area Denial (A2AD) environment. The objective is to maximize the Situational Awareness (SA) of decision makers. The value of SA can be improved in two different ways. First, if a sensor allocated to an Areas of Interest (AOI) detects target activity, then the SA value will be increased. Second, the SA value increases if an AOI is monitored for a certain period of time, regardless of target detections. These values are functions of the sensor allocation time, sensor type and mode. Relatively few studies in the archival literature have been devoted to an analytic, detailed explanation of the target detection process, and AOI monitoring value dynamics. These two values are the fundamental criteria used to choose the most judicious sensor allocation schedule. This research presents mathematical expressions for target detection processes, and shows the monitoring value dynamics. Furthermore, the dynamics of target detection is the result of combined processes between belligerent behavior (target activity) and friendly behavior (sensor allocation). We investigate these combined processes and derive mathematical expressions for simplified cases. These closed form mathematical models can be used for Measures of Effectiveness (MOEs), i.e., target activity detection to evaluate sensor allocation schedules. We also verify these models with discrete event simulations which can also be used to describe more complex systems. We introduce several methodologies to achieve a judicious sensor allocation schedule focusing on the AOI monitoring value. The first methodology is a discrete time integer programming model which provides an optimal solution but is impractical for real world scenarios due to its computation time. Thus, it is necessary to trade off the quality of solution with computation time. The Myopic Greedy Procedure (MGP) is a heuristic which chooses the largest immediate unit time return at each decision epoch. This reduces computation time significantly, but the quality of the solution may be only 95% of optimal (for small size problems). Another alternative is a multi-start random constructive Hybrid Myopic Greedy Procedure (H-MGP), which incorporates stochastic variation in choosing an action at each stage, and repeats it a predetermined number of times (roughly 99.3% of optimal with 1000 repetitions). Finally, the One Stage Look Ahead (OSLA) procedure considers all the 'top choices' at each stage for a temporary time horizon and chooses the best action (roughly 98.8% of optimal with no repetition). Using OSLA procedure, we can have ameliorated solutions within a reasonable computation time. Other important issues discussed in this research are methodologies for the development of input parameters for real world applications.
NASA Astrophysics Data System (ADS)
Alvarez, Alejandro; Beche, Alexandre; Furano, Fabrizio; Hellmich, Martin; Keeble, Oliver; Rocha, Ricardo
2012-12-01
The Disk Pool Manager (DPM) is a lightweight solution for grid enabled disk storage management. Operated at more than 240 sites it has the widest distribution of all grid storage solutions in the WLCG infrastructure. It provides an easy way to manage and configure disk pools, and exposes multiple interfaces for data access (rfio, xroot, nfs, gridftp and http/dav) and control (srm). During the last year we have been working on providing stable, high performant data access to our storage system using standard protocols, while extending the storage management functionality and adapting both configuration and deployment procedures to reuse commonly used building blocks. In this contribution we cover in detail the extensive evaluation we have performed of our new HTTP/WebDAV and NFS 4.1 frontends, in terms of functionality and performance. We summarize the issues we faced and the solutions we developed to turn them into valid alternatives to the existing grid protocols - namely the additional work required to provide multi-stream transfers for high performance wide area access, support for third party copies, credential delegation or the required changes in the experiment and fabric management frameworks and tools. We describe new functionality that has been added to ease system administration, such as different filesystem weights and a faster disk drain, and new configuration and monitoring solutions based on the industry standards Puppet and Nagios. Finally, we explain some of the internal changes we had to do in the DPM architecture to better handle the additional load from the analysis use cases.
Viets, J.G.; Clark, J.R.; Campbell, W.L.
1984-01-01
A solution of dilute hydrochloric acid, ascorbic acid, and potassium iodide has been found to dissolve weakly bound metals in soils, stream sediments, and oxidized rocks. Silver, Bi, Cd, Cu, Mo, Pb, Sb, and Zn are selectively extracted from this solution by a mixture of Aliquat 336 (tricaprylyl methyl ammonium chloride) and MIBK (methyl isobutyl ketone). Because potentially interfering major and minor elements do not extract, the organic separation allows interference-free determinations of Ag and Cd to the 0.05 ppm level, Mo, Cu, and Zn to 0.5 ppm, and Bi, Pb, and Sb to 1 ppm in the sample using flame atomic absorption spectroscopy. The analytical absorbance values of the organic solution used in the proposed method are generally enhanced more than threefold as compared to aqueous solutions, due to more efficient atomization and burning characteristics. The leaching and extraction procedures are extremely rapid; as many as 100 samples may be analyzed per day, yielding 800 determinations, and the technique is adaptable to field use. The proposed method was compared to total digestion methods for geochemical reference samples as well as soils and stream sediments from mineralized and unmineralized areas. The partial leach showed better anomaly contrasts than did total digestions. Because the proposed method is very rapid and is sensitive to pathfinder elements for several types of ore deposits, it should be useful for reconnaissance surveys for concealed deposits. ?? 1984.
The Biopsychosocial-Digital Approach to Health and Disease: Call for a Paradigm Expansion.
Ahmadvand, Alireza; Gatchel, Robert; Brownstein, John; Nissen, Lisa
2018-05-18
Digital health is an advancing phenomenon in modern health care systems. Currently, numerous stakeholders in various countries are evaluating the potential benefits of digital health solutions at the individual, population, and/or organizational levels. Additionally, driving factors are being created from the customer-side of the health care systems to push health care providers, policymakers, or researchers to embrace digital health solutions. However, health care providers may differ in their approach to adopt these solutions. Health care providers are not assumed to be appropriately trained to address the requirements of integrating digital health solutions into daily everyday practices and procedures. To adapt to the changing demands of health care systems, it is necessary to expand relevant paradigms and to train human resources as required. In this article, a more comprehensive paradigm will be proposed, based on the 'biopsychosocial model' of assessing health and disease, originally introduced by George L Engel. The "biopsychosocial model" must be leveraged to include a "digital" component, thus suggesting a 'biopsychosocial-digital' approach to health and disease. Modifications to the "biopsychosocial" model and transition to the "biopsychosocial-digital" model are explained. Furthermore, the emerging implications of understanding health and disease are clarified pertaining to their relevance in training human resources for health care provision and research. ©Alireza Ahmadvand, Robert Gatchel, John Brownstein, Lisa Nissen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.05.2018.
Mahillo-Isla, R; Gonźalez-Morales, M J; Dehesa-Martínez, C
2011-06-01
The slowly varying envelope approximation is applied to the radiation problems of the Helmholtz equation with a planar single-layer and dipolar sources. The analyses of such problems provide procedures to recover solutions of the Helmholtz equation based on the evaluation of solutions of the parabolic wave equation at a given plane. Furthermore, the conditions that must be fulfilled to apply each procedure are also discussed. The relations to previous work are given as well.
Wang, Li; McKeith, Amanda Gipe; Shen, Cangliang; Carter, Kelsey; Huff, Alyssa; McKeith, Russell; Zhang, Xinxia; Chen, Zhengxing
2016-02-01
This study evaluated the antilisterial activity of hops beta acids (HBA) and their impact on the quality and sensory attributes of ham. Commercially cured ham slices were inoculated with unstressed- and acid-stress-adapted (ASA)-L. monocytogenes (2.2 to 2.5 log CFU/cm(2) ), followed by no dipping (control), dipping in deionized (DI) water, or dipping in a 0.11% HBA solution. This was followed by vacuum or aerobic packaging and storage (7.2 °C, 35 or 20 d). Samples were taken periodically during storage to check for pH changes and analyze the microbial populations. Color measurements were obtained by dipping noninoculated ham slices in a 0.11% HBA solution, followed by vacuum packaging and storage (4.0 °C, 42 d). Sensory evaluations were performed on ham slices treated with 0.05% to 0.23% HBA solutions, followed by vacuum packaging and storage (4.0 °C, 30 d). HBA caused immediate reductions of 1.2 to 1.5 log CFU/cm(2) (P < 0.05) in unstressed- and ASA-L. monocytogenes populations on ham slices. During storage, the unstressed-L. monocytogenes populations on HBA-treated samples were 0.5 to 2.0 log CFU/cm(2) lower (P < 0.05) than control samples and those dipped in DI water. The lag-phase of the unstressed-L. monocytogenes population was extended from 3.396 to 7.125 d (control) to 7.194 to 10.920 d in the HBA-treated samples. However, the ASA-L. monocytogenes population showed resistance to HBA because they had a higher growth rate than control samples and had similar growth variables to DI water-treated samples during storage. Dipping in HBA solution did not adversely affect the color or sensory attributes of the ham slices stored in vacuum packages. These results are useful for helping ready-to-eat meat processors develop operational procedures for applying HBA on ham slices. © 2016 Institute of Food Technologists®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Nathan V.; Demkowiz, Leszek; Moser, Robert
2015-11-15
The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates—the inf-sup constants governing the convergence aremore » mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.« less
Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockard, David P.
2013-01-01
Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley s closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids were generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for sound pressure levels obtained by using these CFD solutions as input to a FfowcsWilliams-Hawkings noise propagation code to compute the farfield noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine grid solutions and experimental data. In addition, the grid adaption strategy discussed here simplifies the grid generation process, and results in improved computational efficiency of CFD simulations.
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaptation on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load inbalances among processors on a parallel machine. This paper described the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution coast is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35 percent of the mesh is randomly adapted. For large scale scientific computations, our load balancing strategy gives an almost sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remappier yields processor assignments that are less than 3 percent of the optimal solutions, but requires only 1 percent of the computational time.
NASA Astrophysics Data System (ADS)
Garner, G. G.; Keller, K.
2017-12-01
Sea-level rise poses considerable risks to coastal communities, ecosystems, and infrastructure. Decision makers are faced with deeply uncertain sea-level projections when designing a strategy for coastal adaptation. The traditional methods have provided tremendous insight into this decision problem, but are often silent on tradeoffs as well as the effects of tail-area events and of potential future learning. Here we reformulate a simple sea-level rise adaptation model to address these concerns. We show that Direct Policy Search yields improved solution quality, with respect to Pareto-dominance in the objectives, over the traditional approach under uncertain sea-level rise projections and storm surge. Additionally, the new formulation produces high quality solutions with less computational demands than the traditional approach. Our results illustrate the utility of multi-objective adaptive formulations for the example of coastal adaptation, the value of information provided by observations, and point to wider-ranging application in climate change adaptation decision problems.
An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
1999-01-01
An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.
Adaptive Osher-type scheme for the Euler equations with highly nonlinear equations of state
NASA Astrophysics Data System (ADS)
Lee, Bok Jik; Toro, Eleuterio F.; Castro, Cristóbal E.; Nikiforakis, Nikolaos
2013-08-01
For the numerical simulation of detonation of condensed phase explosives, a complex equation of state (EOS), such as the Jones-Wilkins-Lee (JWL) EOS or the Cochran-Chan (C-C) EOS, are widely used. However, when a conservative scheme is used for solving the Euler equations with such equations of state, a spurious solution across the contact discontinuity, a well known phenomenon in multi-fluid systems, arises even for single materials. In this work, we develop a generalised Osher-type scheme in an adaptive primitive-conservative framework to overcome the aforementioned difficulties. Resulting numerical solutions are compared with the exact solutions and with the numerical solutions from the Godunov method in conjunction with the exact Riemann solver for the Euler equations with Mie-Grüneisen form of equations of state, such as the JWL and the C-C equations of state. The adaptive scheme is extended to second order and its empirical convergence rates are presented, verifying second order accuracy for smooth solutions. Through a suite of several tests problems in one and two space dimensions we illustrate the failure of conservative schemes and the capability of the methods of this paper to overcome the difficulties.
Psychophysical measurements in children: challenges, pitfalls, and considerations.
Witton, Caroline; Talcott, Joel B; Henning, G Bruce
2017-01-01
Measuring sensory sensitivity is important in studying development and developmental disorders. However, with children, there is a need to balance reliable but lengthy sensory tasks with the child's ability to maintain motivation and vigilance. We used simulations to explore the problems associated with shortening adaptive psychophysical procedures, and suggest how these problems might be addressed. We quantify how adaptive procedures with too few reversals can over-estimate thresholds, introduce substantial measurement error, and make estimates of individual thresholds less reliable. The associated measurement error also obscures group differences. Adaptive procedures with children should therefore use as many reversals as possible, to reduce the effects of both Type 1 and Type 2 errors. Differences in response consistency, resulting from lapses in attention, further increase the over-estimation of threshold. Comparisons between data from individuals who may differ in lapse rate are therefore problematic, but measures to estimate and account for lapse rates in analyses may mitigate this problem.
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann
1993-01-01
A general solution adaptive scheme-based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann; Usab, William J., Jr.
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
Abubakar, Amina; Kalu, Raphael Birya; Katana, Khamis; Kabunda, Beatrice; Hassan, Amin S; Newton, Charles R; Van de Vijver, Fons
2016-01-01
We set out to adapt the Beck Depression Inventory (BDI)-II in Kenya and examine its factorial structure. In the first phase we carried out in-depth interviews involving 29 adult members of the community to elicit their understanding of depression and identify aspects of the BDI-II that required adaptation. In the second phase, a modified version of BDI-II was administered to 221 adults randomly selected from the community to allow for the evaluation of its psychometric properties. In the third phase of the study we evaluated the discriminative validity of BDI-11 by comparing a randomly chosen community sample (n = 29) with caregivers of adolescents affected by HIV (n = 77). A considerable overlap between the BDI symptoms and those generated in the interviews was observed. Relevant idioms and symptoms such as 'thinking too much' and 'Kuchoka moyo (having a tired heart)' were identified. The administration of the BDI had to be modified to make it suitable for the low literacy levels of our participants. Fit indices for several models (one factorial, two-factor model and a three factor model) were all within acceptable range. Evidence indicated that while multidimensional models could be fitted, the strong correlations between the factors implied that a single factor model may be the best suited solution (alpha [0.89], and a significant correlation with locally identified items [r = 0.51]) confirmed the good psychometric properties of the adapted BDI-II. No evidence was found to support the hypothesis that somatization was more prevalent. Lastly, caregivers of HIV affected adolescents had significantly higher scores compared to adults randomly selected from the community F(1, 121) = 23.31, p < .001 indicating the discriminative validity of the adapted BDI = II. With an adapted administration procedure, the BDI-II provides an adequate measure of depressive symptoms which can be used alongside other measures for proper diagnosis in a low literacy population.
Closed-Cycle Nutrient Supply For Hydroponics
NASA Technical Reports Server (NTRS)
Schwartzkopf, Steven H.
1991-01-01
Hydroponic system controls composition and feed rate of nutrient solution and recovers and recycles excess solution. Uses air pressure on bladders to transfer aqueous nutrient solution. Measures and adjusts composition of solution before it goes to hydroponic chamber. Eventually returns excess solution to one of tanks. Designed to operate in microgravity, also adaptable to hydroponic plant-growing systems on Earth.
ERIC Educational Resources Information Center
Geri, George A.; Hubbard, David C.
Two adaptive psychophysical procedures (tracking and "yes-no" staircase) for obtaining human visual contrast sensitivity functions (CSF) were evaluated. The procedures were chosen based on their proven validity and the desire to evaluate the practical effects of stimulus transients, since tracking procedures traditionally employ gradual…
A Comparison of Exposure Control Procedures in CATs Using the 3PL Model
ERIC Educational Resources Information Center
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G.
2013-01-01
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
Adaptive neuro-heuristic hybrid model for fruit peel defects detection.
Woźniak, Marcin; Połap, Dawid
2018-02-01
Fusion of machine learning methods benefits in decision support systems. A composition of approaches gives a possibility to use the most efficient features composed into one solution. In this article we would like to present an approach to the development of adaptive method based on fusion of proposed novel neural architecture and heuristic search into one co-working solution. We propose a developed neural network architecture that adapts to processed input co-working with heuristic method used to precisely detect areas of interest. Input images are first decomposed into segments. This is to make processing easier, since in smaller images (decomposed segments) developed Adaptive Artificial Neural Network (AANN) processes less information what makes numerical calculations more precise. For each segment a descriptor vector is composed to be presented to the proposed AANN architecture. Evaluation is run adaptively, where the developed AANN adapts to inputs and their features by composed architecture. After evaluation, selected segments are forwarded to heuristic search, which detects areas of interest. As a result the system returns the image with pixels located over peel damages. Presented experimental research results on the developed solution are discussed and compared with other commonly used methods to validate the efficacy and the impact of the proposed fusion in the system structure and training process on classification results. Copyright © 2017 Elsevier Ltd. All rights reserved.
This standard operating procedure describes the method used for preparing internal standard, surrogate recovery standard and calibration standard solutions for neutral analytes used for gas chromatography/mass spectrometry analysis.
The minimal number of parameters in triclinic crystal-field potentials
NASA Astrophysics Data System (ADS)
Mulak, J.
2003-09-01
The optimal parametrization schemes of the crystal-field (CF) potential in fitting procedures are those based on the smallest numbers of parameters. The surplus parametrizations usually lead to artificial and non-physical solutions. Therefore, the symmetry adapted reference systems are commonly used. Instead of them, however, the coordinate systems with the z-axis directed along the principal axes of the CF multipoles (2 k-poles) can be applied successfully, particularly for triclinic CF potentials. Due to the irreducibility of the D(k) representations such a choice can reduce the number of the k-order parameters by 2 k: from 2 k+1 (in the most general case) to only 1 (the axial one). Unfortunately, in general, the numbers of other order CF parameters stay then unrestricted. In this way, the number of parameters for the k-even triclinic CF potentials can be reduced by 4, 8 or 12, for k=2,4 or 6, respectively. Hence, the parametrization schemes based on maximum 14 parameters can be in use solely. For higher point symmetries this number is usually greater than that for the symmetry adapted systems. Nonetheless, many instructive correlations between the multipole contributions to the CF interaction are attainable in this way.
Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A; Lu, Zhong-Lin; Myung, Jay I
2016-01-01
Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures addressed the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the method of quick contrast sensitivity function (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. These results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias.
Training working memory updating in young adults.
Linares, Rocío; Borella, Erika; Lechuga, M Teresa; Carretti, Barbara; Pelegrina, Santiago
2018-05-01
Working memory updating (WMU) is a core mechanism in the human mental architecture and a good predictor of a wide range of cognitive processes. This study analyzed the benefits of two different WMU training procedures, near transfer effects on a working memory measure, and far transfer effects on nonverbal reasoning. Maintenance of any benefits a month later was also assessed. Participants were randomly assigned to: an adaptive training group that performed two numerical WMU tasks during four sessions; a non-adaptive training group that performed the same tasks but on a constant and less demanding level of difficulty; or an active control group that performed other tasks unrelated with working memory. After the training, all three groups showed improvements in most of the tasks, and these benefits were maintained a month later. The gain in one of the two WMU measures was larger for the adaptive and non-adaptive groups than for the control group. This specific gain in a task similar to the one trained would indicate the use of a better strategy for performing the task. Besides this nearest transfer effect, no other transfer effects were found. The adaptability of the training procedure did not produce greater improvements. These results are discussed in terms of the training procedure and the feasibility of training WMU.
Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A.; Lu, Zhong-Lin; Myung, Jay I.
2016-01-01
Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures addressed the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the method of quick contrast sensitivity function (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. These results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias. PMID:27105061
Adaptive Units of Learning and Educational Videogames
ERIC Educational Resources Information Center
Moreno-Ger, Pablo; Thomas, Pilar Sancho; Martinez-Ortiz, Ivan; Sierra, Jose Luis; Fernandez-Manjon, Baltasar
2007-01-01
In this paper, we propose three different ways of using IMS Learning Design to support online adaptive learning modules that include educational videogames. The first approach relies on IMS LD to support adaptation procedures where the educational games are considered as Learning Objects. These games can be included instead of traditional content…
Evaluation Plan for the Computerized Adaptive Vocational Aptitude Battery.
ERIC Educational Resources Information Center
Green, Bert F.; And Others
The United States Armed Services are planning to introduce computerized adaptive testing (CAT) into the Armed Services Vocational Aptitude Battery (ASVAB), which is a major part of the present personnel assessment procedures. Adaptive testing will improve efficiency greatly by assessing each candidate's answers as the test progresses and posing…
Adaptive multitaper time-frequency spectrum estimation
NASA Astrophysics Data System (ADS)
Pitton, James W.
1999-11-01
In earlier work, Thomson's adaptive multitaper spectrum estimation method was extended to the nonstationary case. This paper reviews the time-frequency multitaper method and the adaptive procedure, and explores some properties of the eigenvalues and eigenvectors. The variance of the adaptive estimator is used to construct an adaptive smoother, which is used to form a high resolution estimate. An F-test for detecting and removing sinusoidal components in the time-frequency spectrum is also given.
A Fast and Robust Poisson-Boltzmann Solver Based on Adaptive Cartesian Grids
Boschitsch, Alexander H.; Fenley, Marcia O.
2011-01-01
An adaptive Cartesian grid (ACG) concept is presented for the fast and robust numerical solution of the 3D Poisson-Boltzmann Equation (PBE) governing the electrostatic interactions of large-scale biomolecules and highly charged multi-biomolecular assemblies such as ribosomes and viruses. The ACG offers numerous advantages over competing grid topologies such as regular 3D lattices and unstructured grids. For very large biological molecules and multi-biomolecule assemblies, the total number of grid-points is several orders of magnitude less than that required in a conventional lattice grid used in the current PBE solvers thus allowing the end user to obtain accurate and stable nonlinear PBE solutions on a desktop computer. Compared to tetrahedral-based unstructured grids, ACG offers a simpler hierarchical grid structure, which is naturally suited to multigrid, relieves indirect addressing requirements and uses fewer neighboring nodes in the finite difference stencils. Construction of the ACG and determination of the dielectric/ionic maps are straightforward, fast and require minimal user intervention. Charge singularities are eliminated by reformulating the problem to produce the reaction field potential in the molecular interior and the total electrostatic potential in the exterior ionic solvent region. This approach minimizes grid-dependency and alleviates the need for fine grid spacing near atomic charge sites. The technical portion of this paper contains three parts. First, the ACG and its construction for general biomolecular geometries are described. Next, a discrete approximation to the PBE upon this mesh is derived. Finally, the overall solution procedure and multigrid implementation are summarized. Results obtained with the ACG-based PBE solver are presented for: (i) a low dielectric spherical cavity, containing interior point charges, embedded in a high dielectric ionic solvent – analytical solutions are available for this case, thus allowing rigorous assessment of the solution accuracy; (ii) a pair of low dielectric charged spheres embedded in a ionic solvent to compute electrostatic interaction free energies as a function of the distance between sphere centers; (iii) surface potentials of proteins, nucleic acids and their larger-scale assemblies such as ribosomes; and (iv) electrostatic solvation free energies and their salt sensitivities – obtained with both linear and nonlinear Poisson-Boltzmann equation – for a large set of proteins. These latter results along with timings can serve as benchmarks for comparing the performance of different PBE solvers. PMID:21984876
Gómez-Valero, S; García-Pérez, F; Flórez-García, M T; Miangolarra-Page, J C
The aim of this study was to conduct a systematic review of self-administered knee-disability functional assessment questionnaires adapted to Spanish, analysing the quality of the transcultural adaptation procedure and the psychometric properties of the new version. A search was conducted in the main biomedical databases to find knee-function assessment scales adapted into Spanish, in order to assess their questionnaire adaptation process as well as their psychometric properties. Ten scales were identified; 3 for lower limb: 2 for any type of pathologies (Lower Limb Functional Index [LLFI]; Lower Extremity Functional Scale [LEFS]) and 1 specific for arthrosis (Arthrosis des Membres Inférieurs et Qualité de vie [AMICAL]); Other 3 for knee and hip pathologies (Western Ontario and McMaster Universities Osteoarthritis [WOMAC] index; Osteoarthritis Knee and Hip Quality of Life [OAKHQOL] questionnaire; Hip and Knee Questionnaire [HKQ]), and other 4 for knee: 2 general scales (Knee Injury and Osteoarthritis Outcome Score [KOOS]; Knee Society Clinical Rating System [KSS]) and 2 specifics (Victorian Institute of Sport Assessment [VISA-P] questionnaire for patients with patellar tendinopathy and Kujala Score for patellofemoral pain). The transcultural adaptation procedure was satisfactory, albeit somewhat less rigorous for HKQ and LLFI. In no study were all psychometric properties assessed. Reliability was analyzed in all cases, except in KSS. Validity was measured in all questionnaires. The transcultural adaptation procedure was satisfactory and the psychometric properties analysed were similar to both the original version and other versions adapted to other languages. Copyright © 2017 SECOT. Publicado por Elsevier España, S.L.U. All rights reserved.
Knowledge acquisition for case-based reasoning systems
NASA Technical Reports Server (NTRS)
Riesbeck, Christopher K.
1988-01-01
Case-based reasoning (CBR) is a simple idea: solve new problems by adapting old solutions to similar problems. The CBR approach offers several potential advantages over rule-based reasoning: rules are not combined blindly in a search for solutions, solutions can be explained in terms of concrete examples, and performance can improve automatically as new problems are solved and added to the case library. Moving CBR for the university research environment to the real world requires smooth interfaces for getting knowledge from experts. Described are the basic elements of an interface for acquiring three basic bodies of knowledge that any case-based reasoner requires: the case library of problems and their solutions, the analysis rules that flesh out input problem specifications so that relevant cases can be retrieved, and the adaptation rules that adjust old solutions to fit new problems.
Interdisciplinarity in Adapted Physical Activity
ERIC Educational Resources Information Center
Bouffard, Marcel; Spencer-Cavaliere, Nancy
2016-01-01
It is commonly accepted that inquiry in adapted physical activity involves the use of different disciplines to address questions. It is often advanced today that complex problems of the kind frequently encountered in adapted physical activity require a combination of disciplines for their solution. At the present time, individual research…
NASA Astrophysics Data System (ADS)
Ben Regaya, Chiheb; Farhani, Fethi; Zaafouri, Abderrahmen; Chaari, Abdelkader
2018-02-01
This paper presents a new adaptive Backstepping technique to handle the induction motor (IM) rotor resistance tracking problem. The proposed solution leads to improve the robustness of the control system. Given the presence of static error when estimating the rotor resistance with classical methods, and the sensitivity to the load torque variation at low speed, a new Backstepping observer enhanced with an integral action of the tracking errors is presented, which can be established in two steps. The first one consists to estimate the rotor flux using a Backstepping observer. The second step, defines the adaptation mechanism of the rotor resistance based on the estimated rotor-flux. The asymptotic stability of the observer is proven by Lyapunov theory. To validate the proposed solution, a simulation and experimental benchmarking of a 3 kW induction motor are presented and analyzed. The obtained results show the effectiveness of the proposed solution compared to the model reference adaptive system (MRAS) rotor resistance observer presented in other recent works.
Adaptive building skin structures
NASA Astrophysics Data System (ADS)
Del Grosso, A. E.; Basso, P.
2010-12-01
The concept of adaptive and morphing structures has gained considerable attention in the recent years in many fields of engineering. In civil engineering very few practical applications are reported to date however. Non-conventional structural concepts like deployable, inflatable and morphing structures may indeed provide innovative solutions to some of the problems that the construction industry is being called to face. To give some examples, searches for low-energy consumption or even energy-harvesting green buildings are amongst such problems. This paper first presents a review of the above problems and technologies, which shows how the solution to these problems requires a multidisciplinary approach, involving the integration of architectural and engineering disciplines. The discussion continues with the presentation of a possible application of two adaptive and dynamically morphing structures which are proposed for the realization of an acoustic envelope. The core of the two applications is the use of a novel optimization process which leads the search for optimal solutions by means of an evolutionary technique while the compatibility of the resulting configurations of the adaptive envelope is ensured by the virtual force density method.
Reduced rank regression via adaptive nuclear norm penalization
Chen, Kun; Dong, Hongbo; Chan, Kung-Sik
2014-01-01
Summary We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. The rank consistency of and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
Grid adaption for bluff bodies
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Tiwari, Surendra N.
1986-01-01
Methods of grid adaptation are reviewed and a method is developed with the capability of adaptation to several flow variables. This method is based on a variational approach and is an algebraic method which does not require the solution of partial differential equations. Also the method was formulated in such a way that there is no need for any matrix inversion. The method is used in conjunction with the calculation of hypersonic flow over a blunt nose. The equations of motion are the compressible Navier-Stokes equations where all viscous terms are retained. They are solved by the MacCormack time-splitting method and a movie was produced which shows simulataneously the transient behavior of the solution and the grid adaptation. The results are compared with the experimental and other numerical results.
Code of Federal Regulations, 2010 CFR
2010-07-01
... has established implementing procedures based on those previously adopted and utilized by the Chief of Engineers prior to 15 October 1966. This regulation adapts these cost apportionment procedures, found in...
Code of Federal Regulations, 2011 CFR
2011-07-01
... has established implementing procedures based on those previously adopted and utilized by the Chief of Engineers prior to 15 October 1966. This regulation adapts these cost apportionment procedures, found in...
Dynamic mesh adaption for triangular and tetrahedral grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1993-01-01
The following topics are discussed: requirements for dynamic mesh adaption; linked-list data structure; edge-based data structure; adaptive-grid data structure; three types of element subdivision; mesh refinement; mesh coarsening; additional constraints for coarsening; anisotropic error indicator for edges; unstructured-grid Euler solver; inviscid 3-D wing; and mesh quality for solution-adaptive grids. The discussion is presented in viewgraph form.
Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.
2014-01-01
We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves meant to replace the multidimensional interpolation and fine tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight line equation. PMID:25184157
Shared decision making in Australia in 2017.
Trevena, Lyndal; Shepherd, Heather L; Bonner, Carissa; Jansen, Jesse; Cust, Anne E; Leask, Julie; Shadbolt, Narelle; Del Mar, Chris; McCaffery, Kirsten; Hoffmann, Tammy
2017-06-01
Shared decision making (SDM) is now firmly established within national clinical standards for accrediting hospitals, day procedure services, public dental services and medical education in Australia, with plans to align general practice, aged care and disability service. Implementation of these standards and training of health professionals is a key challenge for the Australian health sector at this time. Consumer involvement in health research, policy and clinical service governance has also increased, with a major focus on encouraging patients to ask questions during their clinical care. Tools to support shared decision making are increasingly used but there is a need for more systemic approaches to their development, cultural adaptation and implementation. Sustainable solutions to ensure tools are kept up-to-date with the best available evidence will be important for the future. Copyright © 2017. Published by Elsevier GmbH.
Noise effects in nonlinear biochemical signaling
NASA Astrophysics Data System (ADS)
Bostani, Neda; Kessler, David A.; Shnerb, Nadav M.; Rappel, Wouter-Jan; Levine, Herbert
2012-01-01
It has been generally recognized that stochasticity can play an important role in the information processing accomplished by reaction networks in biological cells. Most treatments of that stochasticity employ Gaussian noise even though it is a priori obvious that this approximation can violate physical constraints, such as the positivity of chemical concentrations. Here, we show that even when such nonphysical fluctuations are rare, an exact solution of the Gaussian model shows that the model can yield unphysical results. This is done in the context of a simple incoherent-feedforward model which exhibits perfect adaptation in the deterministic limit. We show how one can use the natural separation of time scales in this model to yield an approximate model, that is analytically solvable, including its dynamical response to an environmental change. Alternatively, one can employ a cutoff procedure to regularize the Gaussian result.
NASA Technical Reports Server (NTRS)
Bradford, D. F.; Kelejian, H. H.; Brusch, R.; Gross, J.; Fishman, H.; Feenberg, D.
1974-01-01
The value of improving information for forecasting future crop harvests was investigated. Emphasis was placed upon establishing practical evaluation procedures firmly based in economic theory. The analysis was applied to the case of U.S. domestic wheat consumption. Estimates for a cost of storage function and a demand function for wheat were calculated. A model of market determinations of wheat inventories was developed for inventory adjustment. The carry-over horizon is computed by the solution of a nonlinear programming problem, and related variables such as spot and future price at each stage are determined. The model is adaptable to other markets. Results are shown to depend critically on the accuracy of current and proposed measurement techniques. The quantitative results are presented parametrically, in terms of various possible values of current and future accuracies.
The measurement and reinforcement of behavior of psychotics1
Ayllon, T.; Azrin, N. H.
1965-01-01
An attempt was made to strengthen behaviors of psychotics by applying operant reinforcement principles in a mental hospital ward. The behaviors studied were necessary and/or useful for the patient to function in the hospital environment. Reinforcement consisted of the opportunity to engage in activities that had a high level of occurrence when freely allowed. Tokens were used as conditioned reinforcers to bridge the delay between behavior and reinforcement. Emphasis was placed on objective definition and quantification of the responses and reinforcers and upon programming and recording procedures. Standardizing the objective criteria permitted ward attendants to administer the program. The procedures were found to be effective in maintaining the desired adaptive behaviors for as long as the procedures were in effect. In a series of six experiments, reinforced behaviors were considerably reduced when the reinforcement procedure was discontinued; the adaptive behaviors increased immediately when the reinforcement procedure was re-introduced. PMID:5851397
Cohomogeneity-one solutions in Einstein-Maxwell-dilaton gravity
NASA Astrophysics Data System (ADS)
Lim, Yen-Kheng
2017-05-01
The field equations for Einstein-Maxwell-dilaton gravity in D dimensions are reduced to an effective one-dimensional system under the influence of exponential potentials. Various cases where exact solutions can be found are explored. With this procedure, we present interesting solutions such as a one-parameter generalization of the dilaton-Melvin spacetime and a three-parameter solution that interpolates between the Reissner-Nordström and Bertotti-Robinson solutions. This procedure also allows simple, alternative derivations of known solutions such as the Lifshitz spacetime and the planar anti-de Sitter naked singularity. In the latter case, the metric is cast in a simpler form which reveals the presence of an additional curvature singularity.
Transonic flow solutions using a composite velocity procedure for potential, Euler and RNS equations
NASA Technical Reports Server (NTRS)
Gordnier, R. E.; Rubin, S. G.
1986-01-01
Solutions for transonic viscous and inviscid flows using a composite velocity procedure are presented. The velocity components of the compressible flow equations are written in terms of a multiplicative composite consisting of a viscous or rotational velocity and an inviscid, irrotational, potential-like function. This provides for an efficient solution procedure that is locally representative of both asymptotic inviscid and boundary layer theories. A modified conservative form of the axial momentum equation that is required to obtain rotational solutions in the inviscid region is presented and a combined conservation/nonconservation form is applied for evaluation of the reduced Navier-Stokes (RNS), Euler and potential equations. A variety of results is presented and the effects of the approximations on entropy production, shock capturing, and viscous interaction are discussed.
A FLOW-THROUGH TESTING PROCEDURE WITH DUCKWEED (LEMNA MINOR L.)
Lemna minor is one of the smallest flowering plants. Because of its floating habit, ease of culture, and small size it is well adapted for laboratory investigations. Procedures for flow-through tests were developed. Testing procedures were developed with this apparatus. By using ...
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
Awareness of Sensorimotor Adaptation to Visual Rotations of Different Size
Werner, Susen; van Aken, Bernice C.; Hulst, Thomas; Frens, Maarten A.; van der Geest, Jos N.; Strüder, Heiko K.; Donchin, Opher
2015-01-01
Previous studies on sensorimotor adaptation revealed no awareness of the nature of the perturbation after adaptation to an abrupt 30° rotation of visual feedback or after adaptation to gradually introduced perturbations. Whether the degree of awareness depends on the magnitude of the perturbation, though, has as yet not been tested. Instead of using questionnaires, as was often done in previous work, the present study used a process dissociation procedure to measure awareness and unawareness. A naïve, implicit group and a group of subjects using explicit strategies adapted to 20°, 40° and 60° cursor rotations in different adaptation blocks that were each followed by determination of awareness and unawareness indices. The awareness index differed between groups and increased from 20° to 60° adaptation. In contrast, there was no group difference for the unawareness index, but it also depended on the size of the rotation. Early adaptation varied between groups and correlated with awareness: The more awareness a participant had developed the more the person adapted in the beginning of the adaptation block. In addition, there was a significant group difference for savings but it did not correlate with awareness. Our findings suggest that awareness depends on perturbation size and that aware and strategic processes are differentially involved during adaptation and savings. Moreover, the use of the process dissociation procedure opens the opportunity to determine awareness and unawareness indices in future sensorimotor adaptation research. PMID:25894396
Organic compatible solutes of halotolerant and halophilic microorganisms
Roberts, Mary F
2005-01-01
Microorganisms that adapt to moderate and high salt environments use a variety of solutes, organic and inorganic, to counter external osmotic pressure. The organic solutes can be zwitterionic, noncharged, or anionic (along with an inorganic cation such as K+). The range of solutes, their diverse biosynthetic pathways, and physical properties of the solutes that effect molecular stability are reviewed. PMID:16176595
Reversed-phase liquid chromatography column testing: robustness study of the test.
Le Mapihan, K; Vial, J; Jardy, A
2004-12-24
Choosing the right RPLC column for an actual separation among the more than 600 commercially available ones still represents a real challenge for the analyst particularly when basic solutes are involved. Many tests dedicated to the characterization and the classification of stationary phases have been proposed in the literature and some of them highlighted the need of a better understanding of retention properties to lead to a rational choice of columns. However, unlike classical chromatographic methods, the problem of their robustness evaluation has often been left unaddressed. In the present study, we present a robustness study that was applied to the chromatographic testing procedure we had developed and optimized previously. A design of experiment (DoE) approach was implemented. Four factors, previously identified as potentially influent, were selected and subjected to small controlled variations: solvent fraction, temperature, pH and buffer concentration. As our model comprised quadratic terms instead of a simple linear model, we chose a D-optimal design in order to minimize the experiment number. As a previous batch-to-batch study [K. Le Mapihan, Caractérisation et classification des phases stationnaires utilisées pour l'analyse CPL de produits pharmaceutiques, Ph.D. Thesis, Pierre and Marie Curie University, 2004] had shown a low variability on the selected stationary phase, it was then possible to split the design into two parts, according to the solvent nature, each using one column. Actually, our testing procedure involving assays both with methanol and with acetonitrile as organic modifier, such an approach enabled to avoid a possible bias due to the column ageing considering the number of experiments required (16 + 6 center points). Experimental results were computed thanks to a Partial Least Squares regression procedure, more adapted than the classical regression to handle factors and responses not completely independent. The results showed the behavior of the solutes in relation to their physico-chemical properties and the relevance of the second term degree of our model. Finally, the robust domain of the test has been fairly identified, so that any potential user precisely knows to which extend each experimental parameter must be controlled when our testing procedure is to be implemented.
Nonlinear system guidance in the presence of transmission zero dynamics
NASA Technical Reports Server (NTRS)
Meyer, G.; Hunt, L. R.; Su, R.
1995-01-01
An iterative procedure is proposed for computing the commanded state trajectories and controls that guide a possibly multiaxis, time-varying, nonlinear system with transmission zero dynamics through a given arbitrary sequence of control points. The procedure is initialized by the system inverse with the transmission zero effects nulled out. Then the 'steady state' solution of the perturbation model with the transmission zero dynamics intact is computed and used to correct the initial zero-free solution. Both time domain and frequency domain methods are presented for computing the steady state solutions of the possibly nonminimum phase transmission zero dynamics. The procedure is illustrated by means of linear and nonlinear examples.
Shao, Liujiazi; Wang, Baoguo; Wang, Shuangyan; Mu, Feng; Gu, Ke
2013-01-01
The ideal solution for fluid management during neurosurgical procedures remains controversial. The aim of this study was to compare the effects of a 7.2% hypertonic saline - 6% hydroxyethyl starch (HS-HES) solution and a 6% hydroxyethyl starch (HES) solution on clinical, hemodynamic and laboratory variables during elective neurosurgical procedures. Forty patients scheduled for elective neurosurgical procedures were randomly assigned to the HS-HES group orthe HES group. Afterthe induction of anesthesia, patients in the HS-HES group received 250 mL of HS-HES (500 mL/h), whereas the patients in the HES group received 1,000 mL of HES (1000 mL/h). The monitored variables included clinical, hemodynamic and laboratory parameters. Chictr.org: ChiCTR-TRC-12002357 The patients who received the HS-HES solution had a significant decrease in the intraoperative total fluid input (p<0.01), the volume of Ringer's solution required (p<0.05), the fluid balance (p<0.01) and their dural tension scores (p<0.05). The total urine output, blood loss, bleeding severity scores, operation duration and hemodynamic variables were similar in both groups (p>0.05). Moreover, compared with the HES group, the HS-HES group had significantly higher plasma concentrations of sodium and chloride, increasing the osmolality (p<0.01). Our results suggest that HS-HES reduced the volume of intraoperative fluid required to maintain the patients undergoing surgery and led to a decrease in the intraoperative fluid balance. Moreover, HS-HES improved the dural tension scores and provided satisfactory brain relaxation. Our results indicate that HS-HES may represent a new avenue for volume therapy during elective neurosurgical procedures.
Shao, Liujiazi; Wang, Baoguo; Wang, Shuangyan; Mu, Feng; Gu, Ke
2013-01-01
OBJECTIVE: The ideal solution for fluid management during neurosurgical procedures remains controversial. The aim of this study was to compare the effects of a 7.2% hypertonic saline - 6% hydroxyethyl starch (HS-HES) solution and a 6% hydroxyethyl starch (HES) solution on clinical, hemodynamic and laboratory variables during elective neurosurgical procedures. METHODS: Forty patients scheduled for elective neurosurgical procedures were randomly assigned to the HS-HES group or the HES group. After the induction of anesthesia, patients in the HS-HES group received 250 mL of HS-HES (500 mL/h), whereas the patients in the HES group received 1,000 mL of HES (1000 mL/h). The monitored variables included clinical, hemodynamic and laboratory parameters. Chictr.org: ChiCTR-TRC-12002357 RESULTS: The patients who received the HS-HES solution had a significant decrease in the intraoperative total fluid input (p<0.01), the volume of Ringer's solution required (p<0.05), the fluid balance (p<0.01) and their dural tension scores (p<0.05). The total urine output, blood loss, bleeding severity scores, operation duration and hemodynamic variables were similar in both groups (p>0.05). Moreover, compared with the HES group, the HS-HES group had significantly higher plasma concentrations of sodium and chloride, increasing the osmolality (p<0.01). CONCLUSION: Our results suggest that HS-HES reduced the volume of intraoperative fluid required to maintain the patients undergoing surgery and led to a decrease in the intraoperative fluid balance. Moreover, HS-HES improved the dural tension scores and provided satisfactory brain relaxation. Our results indicate that HS-HES may represent a new avenue for volume therapy during elective neurosurgical procedures. PMID:23644851
NASA Astrophysics Data System (ADS)
Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes
2004-12-01
In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies. These comprise noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on starting point, stop criterion, step size constraints, background noise, algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates. .
Lessons from a Space Analog on Adaptation for Long-Duration Exploration Missions.
Anglin, Katlin M; Kring, Jason P
2016-04-01
Exploration missions to asteroids and Mars will bring new challenges associated with communication delays and more autonomy for crews. Mission safety and success will rely on how well the entire system, from technology to the human elements, is adaptable and resilient to disruptive, novel, or potentially catastrophic events. The recent NASA Extreme Environment Missions Operations (NEEMO) 20 mission highlighted this need and produced valuable "lessons learned" that will inform future research on team adaptation and resilience. A team of NASA, industry, and academic members used an iterative process to design a tripod shaped structure, called the CORAL Tower, for two astronauts to assemble underwater with minimal tools. The team also developed assembly procedures, administered training to the crew, and provided support during the mission. During the design, training, and assembly of the Tower, the team learned first-hand how adaptation in extreme environments depends on incremental testing, thorough procedures and contingency plans that predict possible failure scenarios, and effective team adaptation and resiliency for the crew and support personnel. Findings from NEEMO 20 provide direction on the design and testing process for future space systems and crews to maximize adaptation. This experience also underscored the need for more research on team adaptation, particularly how input and process factors affect adaption outcomes, the team adaptation iterative process, and new ways to measure the adaptation process.
WE-EF-BRD-02: Battling Maxwell’s Equations: Physics Challenges and Solutions for Hybrid MRI Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keall, P.
MRI-guided treatment is a growing area of medicine, particularly in radiotherapy and surgery. The exquisite soft tissue anatomic contrast offered by MRI, along with functional imaging, makes the use of MRI during therapeutic procedures very attractive. Challenging the utility of MRI in the therapy room are many issues including the physics of MRI and the impact on the environment and therapeutic instruments, the impact of the room and instruments on the MRI; safety, space, design and cost. In this session, the applications and challenges of MRI-guided treatment will be described. The session format is: Past, present and future: MRI-guided radiotherapymore » from 2005 to 2025: Jan Lagendijk Battling Maxwell’s equations: Physics challenges and solutions for hybrid MRI systems: Paul Keall I want it now!: Advances in MRI acquisition, reconstruction and the use of priors to enable fast anatomic and physiologic imaging to inform guidance and adaptation decisions: Yanle Hu MR in the OR: The growth and applications of MRI for interventional radiology and surgery: Rebecca Fahrig Learning Objectives: To understand the history and trajectory of MRI-guided radiotherapy To understand the challenges of integrating MR imaging systems with linear accelerators To understand the latest in fast MRI methods to enable the visualisation of anatomy and physiology on radiotherapy treatment timescales To understand the growing role and challenges of MRI for image-guided surgical procedures My disclosures are publicly available and updated at: http://sydney.edu.au/medicine/radiation-physics/about-us/disclosures.php.« less
Chinda, Betty; Medvedev, George; Siu, William; Ester, Martin; Arab, Ali; Gu, Tao; Moreno, Sylvain; D’Arcy, Ryan C N; Song, Xiaowei
2018-01-01
Introduction Haemorrhagic stroke is of significant healthcare concern due to its association with high mortality and lasting impact on the survivors’ quality of life. Treatment decisions and clinical outcomes depend strongly on the size, spread and location of the haematoma. Non-contrast CT (NCCT) is the primary neuroimaging modality for haematoma assessment in haemorrhagic stroke diagnosis. Current procedures do not allow convenient NCCT-based haemorrhage volume calculation in clinical settings, while research-based approaches are yet to be tested for clinical utility; there is a demonstrated need for developing effective solutions. The project under review investigates the development of an automatic NCCT-based haematoma computation tool in support of accurate quantification of haematoma volumes. Methods and analysis Several existing research methods for haematoma volume estimation are studied. Selected methods are tested using NCCT images of patients diagnosed with acute haemorrhagic stroke. For inter-rater and intrarater reliability evaluation, different raters will analyse haemorrhage volumes independently. The efficiency with respect to time of haematoma volume assessments will be examined to compare with the results from routine clinical evaluations and planimetry assessment that are known to be more accurate. The project will target the development of an enhanced solution by adapting existing methods and integrating machine learning algorithms. NCCT-based information of brain haemorrhage (eg, size, volume, location) and other relevant information (eg, age, sex, risk factor, comorbidities) will be used in relation to clinical outcomes with future project development. Validity and reliability of the solution will be examined for potential clinical utility. Ethics and dissemination The project including procedures for deidentification of NCCT data has been ethically approved. The study involves secondary use of existing data and does not require new consent of participation. The team consists of clinical neuroimaging scientists, computing scientists and clinical professionals in neurology and neuroradiology and includes patient representatives. Research outputs will be disseminated following knowledge translation plans towards improving stroke patient care. Significant findings will be published in scientific journals. Anticipated deliverables include computer solutions for improved clinical assessment of haematoma using NCCT. PMID:29674371
A "Rearrangement Procedure" for Scoring Adaptive Tests with Review Options.
ERIC Educational Resources Information Center
Papanastasiou, Elena C.
Due to the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT, from an examinees point of view, is that in many…
A "Rearrangement Procedure" for Scoring Adaptive Tests with Review Options
ERIC Educational Resources Information Center
Papanastasiou, Elena C.; Reckase, Mark D.
2007-01-01
Because of the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT from an examinee's point of view is that in…
On the implementation of an accurate and efficient solver for convection-diffusion equations
NASA Astrophysics Data System (ADS)
Wu, Chin-Tien
In this dissertation, we examine several different aspects of computing the numerical solution of the convection-diffusion equation. The solution of this equation often exhibits sharp gradients due to Dirichlet outflow boundaries or discontinuities in boundary conditions. Because of the singular-perturbed nature of the equation, numerical solutions often have severe oscillations when grid sizes are not small enough to resolve sharp gradients. To overcome such difficulties, the streamline diffusion discretization method can be used to obtain an accurate approximate solution in regions where the solution is smooth. To increase accuracy of the solution in the regions containing layers, adaptive mesh refinement and mesh movement based on a posteriori error estimations can be employed. An error-adapted mesh refinement strategy based on a posteriori error estimations is also proposed to resolve layers. For solving the sparse linear systems that arise from discretization, goemetric multigrid (MG) and algebraic multigrid (AMG) are compared. In addition, both methods are also used as preconditioners for Krylov subspace methods. We derive some convergence results for MG with line Gauss-Seidel smoothers and bilinear interpolation. Finally, while considering adaptive mesh refinement as an integral part of the solution process, it is natural to set a stopping tolerance for the iterative linear solvers on each mesh stage so that the difference between the approximate solution obtained from iterative methods and the finite element solution is bounded by an a posteriori error bound. Here, we present two stopping criteria. The first is based on a residual-type a posteriori error estimator developed by Verfurth. The second is based on an a posteriori error estimator, using local solutions, developed by Kay and Silvester. Our numerical results show the refined mesh obtained from the iterative solution which satisfies the second criteria is similar to the refined mesh obtained from the finite element solution.
NASA Technical Reports Server (NTRS)
Wang, Ren H.
1991-01-01
A method of combined use of magnetic vector potential (MVP) based finite element (FE) formulations and magnetic scalar potential (MSP) based FE formulations for computation of three-dimensional (3D) magnetostatic fields is developed. This combined MVP-MSP 3D-FE method leads to considerable reduction by nearly a factor of 3 in the number of unknowns in comparison to the number of unknowns which must be computed in global MVP based FE solutions. This method allows one to incorporate portions of iron cores sandwiched in between coils (conductors) in current-carrying regions. Thus, it greatly simplifies the geometries of current carrying regions (in comparison with the exclusive MSP based methods) in electric machinery applications. A unique feature of this approach is that the global MSP solution is single valued in nature, that is, no branch cut is needed. This is again a superiority over the exclusive MSP based methods. A Newton-Raphson procedure with a concept of an adaptive relaxation factor was developed and successfully used in solving the 3D-FE problem with magnetic material anisotropy and nonlinearity. Accordingly, this combined MVP-MSP 3D-FE method is most suited for solution of large scale global type magnetic field computations in rotating electric machinery with very complex magnetic circuit geometries, as well as nonlinear and anisotropic material properties.
Dynamic Consent: a potential solution to some of the challenges of modern biomedical research.
Budin-Ljøsne, Isabelle; Teare, Harriet J A; Kaye, Jane; Beck, Stephan; Bentzen, Heidi Beate; Caenazzo, Luciana; Collett, Clive; D'Abramo, Flavio; Felzmann, Heike; Finlay, Teresa; Javaid, Muhammad Kassim; Jones, Erica; Katić, Višnja; Simpson, Amy; Mascalzoni, Deborah
2017-01-25
Innovations in technology have contributed to rapid changes in the way that modern biomedical research is carried out. Researchers are increasingly required to endorse adaptive and flexible approaches to accommodate these innovations and comply with ethical, legal and regulatory requirements. This paper explores how Dynamic Consent may provide solutions to address challenges encountered when researchers invite individuals to participate in research and follow them up over time in a continuously changing environment. An interdisciplinary workshop jointly organised by the University of Oxford and the COST Action CHIP ME gathered clinicians, researchers, ethicists, lawyers, research participants and patient representatives to discuss experiences of using Dynamic Consent, and how such use may facilitate the conduct of specific research tasks. The data collected during the workshop were analysed using a content analysis approach. Dynamic Consent can provide practical, sustainable and future-proof solutions to challenges related to participant recruitment, the attainment of informed consent, participant retention and consent management, and may bring economic efficiencies. Dynamic Consent offers opportunities for ongoing communication between researchers and research participants that can positively impact research. Dynamic Consent supports inter-sector, cross-border approaches and large scale data-sharing. Whilst it is relatively easy to set up and maintain, its implementation will require that researchers re-consider their relationship with research participants and adopt new procedures.
NASA Technical Reports Server (NTRS)
Farhat, C.; Park, K. C.; Dubois-Pelerin, Y.
1991-01-01
An unconditionally stable second order accurate implicit-implicit staggered procedure for the finite element solution of fully coupled thermoelasticity transient problems is proposed. The procedure is stabilized with a semi-algebraic augmentation technique. A comparative cost analysis reveals the superiority of the proposed computational strategy to other conventional staggered procedures. Numerical examples of one and two-dimensional thermomechanical coupled problems demonstrate the accuracy of the proposed numerical solution algorithm.
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Biswas, Rupak; Strawn, Roger C.
1995-01-01
This paper summarizes a method that solves both the three dimensional thin-layer Navier-Stokes equations and the Euler equations using overset structured and solution adaptive unstructured grids with applications to helicopter rotor flowfields. The overset structured grids use an implicit finite-difference method to solve the thin-layer Navier-Stokes/Euler equations while the unstructured grid uses an explicit finite-volume method to solve the Euler equations. Solutions on a helicopter rotor in hover show the ability to accurately convect the rotor wake. However, isotropic subdivision of the tetrahedral mesh rapidly increases the overall problem size.
Rapid, generalized adaptation to asynchronous audiovisual speech
Van der Burg, Erik; Goodbourn, Patrick T.
2015-01-01
The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. PMID:25716790
Developing Competency in Payroll Procedures
ERIC Educational Resources Information Center
Jackson, Allen L.
1975-01-01
The author describes a sequence of units that provides for competency in payroll procedures. The units could be the basis for a four to six week minicourse and are adaptable, so that the student, upon completion, will be able to apply his learning to any payroll procedures system. (Author/AJ)
Saving Time with Automated Account Management
ERIC Educational Resources Information Center
School Business Affairs, 2013
2013-01-01
Thanks to intelligent solutions, schools, colleges, and universities no longer need to manage user account life cycles by using scripts or tedious manual procedures. The solutions house the scripts and manual procedures. Accounts can be automatically created, modified, or deleted in all applications within the school. This article describes how an…
Discrete False-Discovery Rate Improves Identification of Differentially Abundant Microbes.
Jiang, Lingjing; Amir, Amnon; Morton, James T; Heller, Ruth; Arias-Castro, Ery; Knight, Rob
2017-01-01
Differential abundance testing is a critical task in microbiome studies that is complicated by the sparsity of data matrices. Here we adapt for microbiome studies a solution from the field of gene expression analysis to produce a new method, discrete false-discovery rate (DS-FDR), that greatly improves the power to detect differential taxa by exploiting the discreteness of the data. Additionally, DS-FDR is relatively robust to the number of noninformative features, and thus removes the problem of filtering taxonomy tables by an arbitrary abundance threshold. We show by using a combination of simulations and reanalysis of nine real-world microbiome data sets that this new method outperforms existing methods at the differential abundance testing task, producing a false-discovery rate that is up to threefold more accurate, and halves the number of samples required to find a given difference (thus increasing the efficiency of microbiome experiments considerably). We therefore expect DS-FDR to be widely applied in microbiome studies. IMPORTANCE DS-FDR can achieve higher statistical power to detect significant findings in sparse and noisy microbiome data compared to the commonly used Benjamini-Hochberg procedure and other FDR-controlling procedures.
Psychosocial and legal aspects of oncological treatment in patients with cognitive impairment.
Kuśnierkiewicz, Maria; Kędziora, Justyna; Jaroszyk-Pawlukiewicz, Joanna; Nowak-Jaroszyk, Monika
2014-05-01
With society getting older and affected by many diseases, more and more people suffer from severe cognitive disorders. As practice shows, the legal situations of such people is often problematic. This is due to a number of factors, such as short time since the deterioration of patient's condition, initial symptoms ignored, social prejudice towards the idea of incapacitation or taking decisions for a patient, complicated procedures and, sometimes, insufficient knowledge of legal regulations. Cognitive disorders also occur in patients treated for cancer. To be effective, oncological treatment needs to be started as early as possible. This, however, does not meet the criteria of sudden threat to life. The present article relates to both the psychosocial and legal aspects of care of people suffering from intense disorders of memory, attention, problem solving, executive functions, and other. Surely, physicians know how to handle patients with the above dysfunctions. However, legal procedures aimed to protect patients' rights are often unclear and time consuming. In practice, this often amounts to a dilemma whether to treat or follow the applicable law. Certainly, solutions in this regard should be clearer and better adapted to the needs arising from specific treatment needs of particular groups of patients.
NASA Astrophysics Data System (ADS)
Maugeri, L.; Moraschi, M.; Summers, P.; Favilla, S.; Mascali, D.; Cedola, A.; Porro, C. A.; Giove, F.; Fratini, M.
2018-02-01
Functional Magnetic Resonance Imaging (fMRI) based on Blood Oxygenation Level Dependent (BOLD) contrast has become one of the most powerful tools in neuroscience research. On the other hand, fMRI approaches have seen limited use in the study of spinal cord and subcortical brain regions (such as the brainstem and portions of the diencephalon). Indeed obtaining good BOLD signal in these areas still represents a technical and scientific challenge, due to poor control of physiological noise and to a limited overall quality of the functional series. A solution can be found in the combination of optimized experimental procedures at acquisition stage, and well-adapted artifact mitigation procedures in the data processing. In this framework, we studied two different data processing strategies to reduce physiological noise in cortical and subcortical brain regions and in the spinal cord, based on the aCompCor and RETROICOR denoising tools respectively. The study, performed in healthy subjects, was carried out using an ad hoc isometric motor task. We observed an increased signal to noise ratio in the denoised functional time series in the spinal cord and in the subcortical brain region.
Minimally invasive repair of pectus carinatum and how to deal with complications
Aragone, Xavier; Blanco, Javier Borbore; Ciano, Alejandro; Abramson, Leonardo
2016-01-01
While less common than pectus excavatum, pectus carinatum is also a chest wall deformity affecting males in higher proportion than women. Patient requests for a solution of this disease occur especially during the growth spurt of puberty when this malformation becomes more obvious and difficult to conceal. Those people suffering from pectus carinatum are very likely subject to behavioral changes and negative personality impacts. By compressing the protruding anterior region of the chest wall we achieve correction of the chest contour and simultaneous lateral expansion of the depressed costochondral arches. This original technique and the procedure to apply it fit within the category of minimally invasive surgery. The compression system acts in a way similar to that of orthodontic braces. Two rectangular fixation plates are fixed to the compression strut with screws. The plates have threaded holes along a groove in the central portion, and two holes at both ends used to attach them to the ribs by means of steel wire suture. The compression strut has to be modified into a convex shape to adapt it to the particular characteristics of the patient’s malformation. This molding is done using benders designed as part of the procedure. PMID:29078492
Minimally invasive repair of pectus carinatum and how to deal with complications.
Abramson, Horacio; Aragone, Xavier; Blanco, Javier Borbore; Ciano, Alejandro; Abramson, Leonardo
2016-01-01
While less common than pectus excavatum, pectus carinatum is also a chest wall deformity affecting males in higher proportion than women. Patient requests for a solution of this disease occur especially during the growth spurt of puberty when this malformation becomes more obvious and difficult to conceal. Those people suffering from pectus carinatum are very likely subject to behavioral changes and negative personality impacts. By compressing the protruding anterior region of the chest wall we achieve correction of the chest contour and simultaneous lateral expansion of the depressed costochondral arches. This original technique and the procedure to apply it fit within the category of minimally invasive surgery. The compression system acts in a way similar to that of orthodontic braces. Two rectangular fixation plates are fixed to the compression strut with screws. The plates have threaded holes along a groove in the central portion, and two holes at both ends used to attach them to the ribs by means of steel wire suture. The compression strut has to be modified into a convex shape to adapt it to the particular characteristics of the patient's malformation. This molding is done using benders designed as part of the procedure.
Design of a Model Reference Adaptive Controller for an Unmanned Air Vehicle
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Matsutani, Megumi; Annaswamy, Anuradha M.
2010-01-01
This paper presents the "Adaptive Control Technology for Safe Flight (ACTS)" architecture, which consists of a non-adaptive controller that provides satisfactory performance under nominal flying conditions, and an adaptive controller that provides robustness under off nominal ones. The design and implementation procedures of both controllers are presented. The aim of these procedures, which encompass both theoretical and practical considerations, is to develop a controller suitable for flight. The ACTS architecture is applied to the Generic Transport Model developed by NASA-Langley Research Center. The GTM is a dynamically scaled test model of a transport aircraft for which a flight-test article and a high-fidelity simulation are available. The nominal controller at the core of the ACTS architecture has a multivariable LQR-PI structure while the adaptive one has a direct, model reference structure. The main control surfaces as well as the throttles are used as control inputs. The inclusion of the latter alleviates the pilot s workload by eliminating the need for cancelling the pitch coupling generated by changes in thrust. Furthermore, the independent usage of the throttles by the adaptive controller enables their use for attitude control. Advantages and potential drawbacks of adaptation are demonstrated by performing high fidelity simulations of a flight-validated controller and of its adaptive augmentation.
Disentangling Complexity in Bayesian Automatic Adaptive Quadrature
NASA Astrophysics Data System (ADS)
Adam, Gheorghe; Adam, Sanda
2018-02-01
The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. Detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early problem complexity assessment - enables the non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic - involves time and resource consuming Bayesian inference resulting in radical reformulation of the problem to be solved; (iii) optimistic - asks exclusively for subrange subdivision by bisection; (3) use of the weaker accuracy target from the two possible ones (the input accuracy specifications and the intrinsic integrand properties respectively) - results in maximum possible solution accuracy under minimum possible computing time.
NASA Technical Reports Server (NTRS)
Momoh, James A.; Wang, Yanchun; Dolce, James L.
1997-01-01
This paper describes the application of neural network adaptive wavelets for fault diagnosis of space station power system. The method combines wavelet transform with neural network by incorporating daughter wavelets into weights. Therefore, the wavelet transform and neural network training procedure become one stage, which avoids the complex computation of wavelet parameters and makes the procedure more straightforward. The simulation results show that the proposed method is very efficient for the identification of fault locations.
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data was obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence which is a first step in applying array processing methods to the magnitude squared coherence data. The procedure also provides an estimate of the cross-spectrum phase-offset.
Adaptive near-field beamforming techniques for sound source imaging.
Cho, Yong Thung; Roan, Michael J
2009-02-01
Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing; these are to minimize contributions from directions other than the look direction and minimize the width of the main lobe. To tackle this problem a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures-minimum variance distorsionless response and weight optimization to minimize maximum side lobes--are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both techniques use near-field beamforming weightings focusing at source locations estimated based on spherical wave array manifold vectors with spatial windows). Sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations both in anechoic and reverberant environments with random measurement noise. Also, experimental results are given for near-field sound pressure measurements of an enclosed loudspeaker.
A Bayesian Hybrid Adaptive Randomisation Design for Clinical Trials with Survival Outcomes.
Moatti, M; Chevret, S; Zohar, S; Rosenberger, W F
2016-01-01
Response-adaptive randomisation designs have been proposed to improve the efficiency of phase III randomised clinical trials and improve the outcomes of the clinical trial population. In the setting of failure time outcomes, Zhang and Rosenberger (2007) developed a response-adaptive randomisation approach that targets an optimal allocation, based on a fixed sample size. The aim of this research is to propose a response-adaptive randomisation procedure for survival trials with an interim monitoring plan, based on the following optimal criterion: for fixed variance of the estimated log hazard ratio, what allocation minimizes the expected hazard of failure? We demonstrate the utility of the design by redesigning a clinical trial on multiple myeloma. To handle continuous monitoring of data, we propose a Bayesian response-adaptive randomisation procedure, where the log hazard ratio is the effect measure of interest. Combining the prior with the normal likelihood, the mean posterior estimate of the log hazard ratio allows derivation of the optimal target allocation. We perform a simulation study to assess and compare the performance of this proposed Bayesian hybrid adaptive design to those of fixed, sequential or adaptive - either frequentist or fully Bayesian - designs. Non informative normal priors of the log hazard ratio were used, as well as mixture of enthusiastic and skeptical priors. Stopping rules based on the posterior distribution of the log hazard ratio were computed. The method is then illustrated by redesigning a phase III randomised clinical trial of chemotherapy in patients with multiple myeloma, with mixture of normal priors elicited from experts. As expected, there was a reduction in the proportion of observed deaths in the adaptive vs. non-adaptive designs; this reduction was maximized using a Bayes mixture prior, with no clear-cut improvement by using a fully Bayesian procedure. The use of stopping rules allows a slight decrease in the observed proportion of deaths under the alternate hypothesis compared with the adaptive designs with no stopping rules. Such Bayesian hybrid adaptive survival trials may be promising alternatives to traditional designs, reducing the duration of survival trials, as well as optimizing the ethical concerns for patients enrolled in the trial.
Vitrification as an alternative means of cryopreserving ovarian tissue.
Amorim, Christiani A; Curaba, Mara; Van Langendonckt, Anne; Dolmans, Marie-Madeleine; Donnez, Jacques
2011-08-01
Because of the simplicity of vitrification, many authors have investigated it as an alternative to slow freezing for cryopreserving ovarian tissue. In the last decade, numerous studies have evaluated vitrification of ovarian tissue from both humans and animals.Different vitrification solutions and protocols, mostly adapted from embryo and oocyte vitrification, have been applied. The results have been discrepant from species to species and even within the same species, but lately they appear to indicate that vitrification can achieve similar or even superior results to conventional freezing. Despite the encouraging results obtained with vitrification of ovarian tissue from humans and different animal species, it is necessary to understand how vitrification solutions and protocols can affect ovarian tissue, notably preantral follicles. In addition, it is important to bear in mind that the utilization of different approaches to assess tissue functionality and oocyte quality is essential in order to validate the promising results already obtained with vitrification procedures. This review summarizes the principles of vitrification, discusses the advantages of vitrification protocols for ovarian tissue cryopreservation and describes different studies conducted on the vitrification of ovarian tissue in humans and animal species. Copyright © 2011 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
Canbay, Ferhat; Levent, Vecdi Emre; Serbes, Gorkem; Ugurdag, H. Fatih; Goren, Sezer
2016-01-01
The authors aimed to develop an application for producing different architectures to implement dual tree complex wavelet transform (DTCWT) having near shift-invariance property. To obtain a low-cost and portable solution for implementing the DTCWT in multi-channel real-time applications, various embedded-system approaches are realised. For comparison, the DTCWT was implemented in C language on a personal computer and on a PIC microcontroller. However, in the former approach portability and in the latter desired speed performance properties cannot be achieved. Hence, implementation of the DTCWT on a reconfigurable platform such as field programmable gate array, which provides portable, low-cost, low-power, and high-performance computing, is considered as the most feasible solution. At first, they used the system generator DSP design tool of Xilinx for algorithm design. However, the design implemented by using such tools is not optimised in terms of area and power. To overcome all these drawbacks mentioned above, they implemented the DTCWT algorithm by using Verilog Hardware Description Language, which has its own difficulties. To overcome these difficulties, simplify the usage of proposed algorithms and the adaptation procedures, a code generator program that can produce different architectures is proposed. PMID:27733925
Capabilities of Fully Parallelized MHD Stability Code MARS
NASA Astrophysics Data System (ADS)
Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang
2016-10-01
Results of full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. Parallel version of MARS, named PMARS, has been recently developed at FAR-TECH. Parallelized MARS is an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, implemented in MARS. Parallelization of the code included parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse vector iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the MARS algorithm using parallel libraries and procedures. Parallelized MARS is capable of calculating eigenmodes with significantly increased spatial resolution: up to 5,000 adapted radial grid points with up to 500 poloidal harmonics. Such resolution is sufficient for simulation of kink, tearing and peeling-ballooning instabilities with physically relevant parameters. Work is supported by the U.S. DOE SBIR program.
Fully Parallel MHD Stability Analysis Tool
NASA Astrophysics Data System (ADS)
Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang
2015-11-01
Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and it is widely used by fusion community. Parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the present MARS algorithm using parallel libraries and procedures. Results of MARS parallelization and of the development of a new fix boundary equilibrium code adapted for MARS input will be reported. Work is supported by the U.S. DOE SBIR program.
Canbay, Ferhat; Levent, Vecdi Emre; Serbes, Gorkem; Ugurdag, H Fatih; Goren, Sezer; Aydin, Nizamettin
2016-09-01
The authors aimed to develop an application for producing different architectures to implement dual tree complex wavelet transform (DTCWT) having near shift-invariance property. To obtain a low-cost and portable solution for implementing the DTCWT in multi-channel real-time applications, various embedded-system approaches are realised. For comparison, the DTCWT was implemented in C language on a personal computer and on a PIC microcontroller. However, in the former approach portability and in the latter desired speed performance properties cannot be achieved. Hence, implementation of the DTCWT on a reconfigurable platform such as field programmable gate array, which provides portable, low-cost, low-power, and high-performance computing, is considered as the most feasible solution. At first, they used the system generator DSP design tool of Xilinx for algorithm design. However, the design implemented by using such tools is not optimised in terms of area and power. To overcome all these drawbacks mentioned above, they implemented the DTCWT algorithm by using Verilog Hardware Description Language, which has its own difficulties. To overcome these difficulties, simplify the usage of proposed algorithms and the adaptation procedures, a code generator program that can produce different architectures is proposed.
Song, Shaozhen; Xu, Jingjiang; Wang, Ruikang K
2016-11-01
Current optical coherence tomography (OCT) imaging suffers from short ranging distance and narrow imaging field of view (FOV). There is growing interest in searching for solutions to these limitations in order to expand further in vivo OCT applications. This paper describes a solution where we utilize an akinetic swept source for OCT implementation to enable ~10 cm ranging distance, associated with the use of a wide-angle camera lens in the sample arm to provide a FOV of ~20 x 20 cm 2 . The akinetic swept source operates at 1300 nm central wavelength with a bandwidth of 100 nm. We propose an adaptive calibration procedure to the programmable akinetic light source so that the sensitivity of the OCT system over ~10 cm ranging distance is substantially improved for imaging of large volume samples. We demonstrate the proposed swept source OCT system for in vivo imaging of entire human hands and faces with an unprecedented FOV (up to 400 cm 2 ). The capability of large-volume OCT imaging with ultra-long ranging and ultra-wide FOV is expected to bring new opportunities for in vivo biomedical applications.
Song, Shaozhen; Xu, Jingjiang; Wang, Ruikang K.
2016-01-01
Current optical coherence tomography (OCT) imaging suffers from short ranging distance and narrow imaging field of view (FOV). There is growing interest in searching for solutions to these limitations in order to expand further in vivo OCT applications. This paper describes a solution where we utilize an akinetic swept source for OCT implementation to enable ~10 cm ranging distance, associated with the use of a wide-angle camera lens in the sample arm to provide a FOV of ~20 x 20 cm2. The akinetic swept source operates at 1300 nm central wavelength with a bandwidth of 100 nm. We propose an adaptive calibration procedure to the programmable akinetic light source so that the sensitivity of the OCT system over ~10 cm ranging distance is substantially improved for imaging of large volume samples. We demonstrate the proposed swept source OCT system for in vivo imaging of entire human hands and faces with an unprecedented FOV (up to 400 cm2). The capability of large-volume OCT imaging with ultra-long ranging and ultra-wide FOV is expected to bring new opportunities for in vivo biomedical applications. PMID:27896012
Human factors issues in performing life science experiments in a 0-G environment
NASA Technical Reports Server (NTRS)
Gonzalez, Wayne
1989-01-01
An overview of the environmental conditions within the Spacelab and the planned Space Station Freedom is presented. How this environment causes specific Human Factors problems and the nature of design solutions are described. The impact of these problems and solutions on the performance of life science activities onboard Spacelab (SL) and Space Station Freedom (SSF) is discussed. The first area highlighted is contamination. The permanence of SSF in contrast to the two-week mission of SL has significant impacts on crew and specimen protection requirements and, thus, resource utilization. These requirements, in turn impose restrictions on working volumes, scheduling, training, and scope of experimental procedures. A second area is microgravity. This means that all specimens, materials, and apparatus must be restrained and carefully controlled. Because so much of the scientific activity must occur within restricted enclosures (gloveboxes), the provisions for restraint and control are made more complex. The third topic is crewmember biomechanics and the problems of movement and task performance in microgravity. In addition to the need to stabilize the body for the performance of tasks, performance of very sensitive tasks such as dissection is difficult. The issue of space sickness and adaption is considered in this context.
Recent innovations in edible and/or biodegradable packaging materials.
Guilbert, S; Cuq, B; Gontard, N
1997-01-01
Certain newly discovered characteristics of natural biopolymers should make them a choice material to be used for different types of wrappings and films. Edible and/or biodegradable packagings produced from agricultural origin macromolecules provide a supplementary and sometimes essential means to control physiological, microbiological, and physicochemical changes in food products. This is accomplished (i) by controlling mass transfers between food product and ambient atmosphere or between components in heterogeneous food product, and (iii) by modifying and controlling food surface conditions (pH, level of specific functional agents, slow release of flavour compounds), it should be stressed that the material characteristics (polysaccharide, protein, or lipid, plasticized or not, chemically modified or not, used alone or in combination) and the fabrication procedures (casting of a film-forming solution, thermoforming) must be adapted to each specific food product and usage condition (relative humidity, temperature). Some potential uses of these materials (e.g. wrapping of various fabricated foods; protection of fruits and vegetables by control of maturation; protection of meat and fish; control of internal moisture transfer in pizzas), which are hinged on film properties (e.g. organoleptic, mechanical, gas and solute barrier) are described with examples.
Adaptive finite element methods for two-dimensional problems in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1994-01-01
Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.
Evaluation of the CATSIB DIF Procedure in a Pretest Setting
ERIC Educational Resources Information Center
Nandakumar, Ratna; Roussos, Louis
2004-01-01
A new procedure, CATSIB, for assessing differential item functioning (DIF) on computerized adaptive tests (CATs) is proposed. CATSIB, a modified SIBTEST procedure, matches test takers on estimated ability and controls for impact-induced Type 1 error inflation by employing a CAT version of the IBTEST "regression correction." The…
Puzzles in modern biology. V. Why are genomes overwired?
Frank, Steven A
2017-01-01
Many factors affect eukaryotic gene expression. Transcription factors, histone codes, DNA folding, and noncoding RNA modulate expression. Those factors interact in large, broadly connected regulatory control networks. An engineer following classical principles of control theory would design a simpler regulatory network. Why are genomes overwired? Neutrality or enhanced robustness may lead to the accumulation of additional factors that complicate network architecture. Dynamics progresses like a ratchet. New factors get added. Genomes adapt to the additional complexity. The newly added factors can no longer be removed without significant loss of fitness. Alternatively, highly wired genomes may be more malleable. In large networks, most genomic variants tend to have a relatively small effect on gene expression and trait values. Many small effects lead to a smooth gradient, in which traits may change steadily with respect to underlying regulatory changes. A smooth gradient may provide a continuous path from a starting point up to the highest peak of performance. A potential path of increasing performance promotes adaptability and learning. Genomes gain by the inductive process of natural selection, a trial and error learning algorithm that discovers general solutions for adapting to environmental challenge. Similarly, deeply and densely connected computational networks gain by various inductive trial and error learning procedures, in which the networks learn to reduce the errors in sequential trials. Overwiring alters the geometry of induction by smoothing the gradient along the inductive pathways of improving performance. Those overwiring benefits for induction apply to both natural biological networks and artificial deep learning networks.
Mograbi, Daniel C; Indelli, Pamela; Lage, Caio A; Tebyriça, Vitória; Landeira-Fernandez, Jesus; Rimes, Katharine A
2018-03-01
Introduction Beliefs about the unacceptability of expression and experience of emotion are present in the general population but seem to be more prevalent in patients with a number of health conditions. Such beliefs, which may be viewed as a form of perfectionism about emotions, may have a deleterious effect on symptomatology as well as on treatment adherence and outcome. Nevertheless, few questionnaires have been developed to measure such beliefs about emotions, and no instrument has been validated in a developing country. The current study adapted and validated the Beliefs about Emotions Scale in a Brazilian sample. Methods The adaptation procedure included translation, back-translation and analysis of the content, with the final Brazilian Portuguese version of the scale being tested online in a sample of 645 participants. Internal consistency of the scale was very high and results of a principal axis factoring analysis indicated a two-factor solution. Results Respondents with high fatigue levels showed more perfectionist beliefs, and the scale correlated positively with questionnaires measuring anxiety, depression and fear of negative evaluation, confirming cross-cultural associations reported before. Finally, men, non-Caucasians and participants with lower educational achievement gave greater endorsement to such beliefs than women, Caucasian individuals and participants with higher educational level. Conclusions The study confirms previous clinical findings reported in the literature, but indicates novel associations with demographic variables. The latter may reflect cultural differences related to beliefs about emotions in Brazil.
Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V
2015-01-01
Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10-times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares.
Matar, Eman M; Arabiat, Diana H; Foster, Mandie J
2016-11-01
This research was undertaken with the purpose of testing two research hypotheses regarding the efficacy of 10% oral glucose solution on procedural pain associated with venepuncture and nasopharyngeal suctioning within three neonatal intensive care units (NICU). The hypotheses were formulated from previous conclusions reached by other researchers highlighting the efficacy of sucrose solutions on neonates' pain responses during minor painful procedures. A quasi-experimental trial utilising a time series design with one group was used. Data from a total of 90 neonates included 60 neonates who underwent a venepuncture and 30 neonates who underwent a nasopharyngeal suctioning procedure for clinical purposes. The neonate's pain response for each procedure was scored using the Neonatal Pain Assessment Scale (NPAS) on two separate occasions over three time periods. The pre-procedural score (T 0 ) when the neonate received no sucrose, the inter-procedural score (T 1 ) when the neonate was given 2ml of 10% glucose solution two minutes before the procedure (intervention group) or where oral glucose was withheld (control group) and the post-procedural score (T 2 ) being at the end of the procedure. The results showed the mean NPAS scores in response to venepuncture or nasopharyngeal suctioning were significantly lower in the intervention group than the control group. This showed that oral glucose (10%) had a positive effect on the pain response during venepuncture and nasopharyngeal suctioning procedures. Copyright © 2015 Elsevier Inc. All rights reserved.
Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics
NASA Technical Reports Server (NTRS)
Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy
2006-01-01
This viewgraph presentation reviews the use Adaptive Mesh Refinement (AMR) in simulating the Crustal Dynamics of Earth's Surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements when compared to generating/running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's Surface is spurred by future proposed NASA missions, such as InSAR for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computation fluid dynamics for predictive simulation of complex flows around complex structures.
Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.; Ovall, J.; Holst, M.
2014-12-01
We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal oriented adaptive refinement code named MARE2DEM. We demonstrate the performance and parallel scaling of this algorithm on a medium-scale computing cluster with a marine controlled-source EM example that includes a 3D array of receivers located over a 3D model that includes significant seafloor bathymetry variations and a heterogeneous subsurface.
Nordquist, Rebecca E; van der Staay, Franz Josef; van Eerdenburg, Frank J C M; Velkers, Francisca C; Fijn, Lisa; Arndt, Saskia S
2017-02-21
A number of mutilating procedures, such as dehorning in cattle and goats and beak trimming in laying hens, are common in farm animal husbandry systems in an attempt to prevent or solve problems, such as injuries from horns or feather pecking. These procedures and other practices, such as early maternal separation, overcrowding, and barren housing conditions, raise concerns about animal welfare. Efforts to ensure or improve animal welfare involve adapting the animal to its environment, i.e., by selective breeding (e.g., by selecting "robust" animals) adapting the environment to the animal (e.g., by developing social housing systems in which aggressive encounters are reduced to a minimum), or both. We propose adapting the environment to the animals by improving management practices and housing conditions, and by abandoning mutilating procedures. This approach requires the active involvement of all stakeholders: veterinarians and animal scientists, the industrial farming sector, the food processing and supply chain, and consumers of animal-derived products. Although scientific evidence about the welfare effects of current practices in farming such as mutilating procedures, management practices, and housing conditions is steadily growing, the gain in knowledge needs a boost through more scientific research. Considering the huge number of animals whose welfare is affected, all possible effort must be made to improve their welfare as quickly as possible in order to ban welfare-compromising procedures and practices as soon as possible.
Adapting to life: ocean biogeochemical modelling and adaptive remeshing
NASA Astrophysics Data System (ADS)
Hill, J.; Popova, E. E.; Ham, D. A.; Piggott, M. D.; Srokosz, M.
2014-05-01
An outstanding problem in biogeochemical modelling of the ocean is that many of the key processes occur intermittently at small scales, such as the sub-mesoscale, that are not well represented in global ocean models. This is partly due to their failure to resolve sub-mesoscale phenomena, which play a significant role in vertical nutrient supply. Simply increasing the resolution of the models may be an inefficient computational solution to this problem. An approach based on recent advances in adaptive mesh computational techniques may offer an alternative. Here the first steps in such an approach are described, using the example of a simple vertical column (quasi-1-D) ocean biogeochemical model. We present a novel method of simulating ocean biogeochemical behaviour on a vertically adaptive computational mesh, where the mesh changes in response to the biogeochemical and physical state of the system throughout the simulation. We show that the model reproduces the general physical and biological behaviour at three ocean stations (India, Papa and Bermuda) as compared to a high-resolution fixed mesh simulation and to observations. The use of an adaptive mesh does not increase the computational error, but reduces the number of mesh elements by a factor of 2-3. Unlike previous work the adaptivity metric used is flexible and we show that capturing the physical behaviour of the model is paramount to achieving a reasonable solution. Adding biological quantities to the adaptivity metric further refines the solution. We then show the potential of this method in two case studies where we change the adaptivity metric used to determine the varying mesh sizes in order to capture the dynamics of chlorophyll at Bermuda and sinking detritus at Papa. We therefore demonstrate that adaptive meshes may provide a suitable numerical technique for simulating seasonal or transient biogeochemical behaviour at high vertical resolution whilst minimising the number of elements in the mesh. More work is required to move this to fully 3-D simulations.
Construction of a Computerized Adaptive Testing Version of the Quebec Adaptive Behavior Scale.
ERIC Educational Resources Information Center
Tasse, Marc J.; And Others
Multilog (Thissen, 1991) was used to estimate parameters of 225 items from the Quebec Adaptive Behavior Scale (QABS). A database containing actual data from 2,439 subjects was used for the parameterization procedures. The two-parameter-logistic model was used in estimating item parameters and in the testing strategy. MicroCAT (Assessment Systems…
Adaptive Assessment of Young Children with Visual Impairment
ERIC Educational Resources Information Center
Ruiter, Selma; Nakken, Han; Janssen, Marleen; Van Der Meulen, Bieuwe; Looijestijn, Paul
2011-01-01
The aim of this study was to assess the effect of adaptations for children with low vision of the Bayley Scales, a standardized developmental instrument widely used to assess development in young children. Low vision adaptations were made to the procedures, item instructions and play material of the Dutch version of the Bayley Scales of Infant…
ERIC Educational Resources Information Center
Roulette, Jennifer W; Hill, Laura G; Diversi, Marcelo; Overath, Renee
2017-01-01
Objective: Most reports of adaptations to evidence-based prevention programmes for delivery to specific cultural groups describe formal adaptation procedures. In this paper, we report on how practitioners identify and manage issues of perceived cultural mismatch when delivering a scripted, evidence-based intervention. Design: We used grounded…
Implementing Culture Change in Nursing Homes: An Adaptive Leadership Framework
Corazzini, Kirsten; Twersky, Jack; White, Heidi K.; Buhr, Gwendolen T.; McConnell, Eleanor S.; Weiner, Madeline; Colón-Emeric, Cathleen S.
2015-01-01
Purpose of the Study: To describe key adaptive challenges and leadership behaviors to implement culture change for person-directed care. Design and Methods: The study design was a qualitative, observational study of nursing home staff perceptions of the implementation of culture change in each of 3 nursing homes. We conducted 7 focus groups of licensed and unlicensed nursing staff, medical care providers, and administrators. Questions explored perceptions of facilitators and barriers to culture change. Using a template organizing style of analysis with immersion/crystallization, themes of barriers and facilitators were coded for adaptive challenges and leadership. Results: Six key themes emerged, including relationships, standards and expectations, motivation and vision, workload, respect of personhood, and physical environment. Within each theme, participants identified barriers that were adaptive challenges and facilitators that were examples of adaptive leadership. Commonly identified challenges were how to provide person-directed care in the context of extant rules or policies or how to develop staff motivated to provide person-directed care. Implications: Implementing culture change requires the recognition of adaptive challenges for which there are no technical solutions, but which require reframing of norms and expectations, and the development of novel and flexible solutions. Managers and administrators seeking to implement person-directed care will need to consider the role of adaptive leadership to address these adaptive challenges. PMID:24451896
Sclerotherapy with tetracycline solution for hydrocele.
Hu, K N; Khan, A S; Gonder, M
1984-12-01
A study of sclerotherapy for hydrocele using different concentrations (10%, 5%, 2.5%) for tetracycline solution was done on 24 patients, 23 patients were cured. The effectiveness of sclerotherapy was the same for the three groups of patients with use of each different concentration of the solution. Pain was the only adverse effect. Nonspecific cellular foreign body reaction and fibrin strand proliferation were observed in the hydrocele fluid after this procedure. We consider sclerotherapy for hydrocele with tetracycline solution safe and the procedure of choice for patients in whom surgery or anesthesia is contraindicated, for patients who refuse surgery, and for economic reasons.
Parametric study of minimum reactor mass in energy-storage dc-to-dc converters
NASA Technical Reports Server (NTRS)
Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.
1981-01-01
Closed-form analytical solutions for the design equations of a minimum-mass reactor for a two-winding voltage-or-current step-up converter are derived. A quantitative relationship between the three parameters - minimum total reactor mass, maximum output power, and switching frequency - is extracted from these analytical solutions. The validity of the closed-form solution is verified by a numerical minimization procedure. A computer-aided design procedure using commercially available toroidal cores and magnet wires is also used to examine how the results from practical designs follow the predictions of the analytical solutions.
Bodner, Michael E; Bilheimer, Alicia; Gao, Xiaomei; Lyna, Pauline; Alexander, Stewart C; Dolor, Rowena J; Østbye, Truls; Bravender, Terrill; Tulsky, James A; Graves, Sidney; Irons, Alexis; Pollak, Kathryn I
2015-11-13
Practice-based studies are needed to assess how physicians communicate health messages about weight to overweight/obese adolescent patients, but successful recruitment to such studies is challenging. This paper describes challenges, solutions, and lessons learned to recruit physicians and adolescents to the Teen Communicating Health Analyzing Talk (CHAT) study, a randomized controlled trial of a communication skills intervention for primary care physicians to enhance communication about weight with overweight/obese adolescents. A "peer-to-peer" approach was used to recruit physicians, including the use of "clinic champions" who liaised between study leaders and physicians. Consistent rapport and cooperative working relationships with physicians and clinic staff were developed and maintained. Adolescent clinic files were reviewed (HIPAA waiver) to assess eligibility. Parents could elect to opt-out for their children. To encourage enrollment, confidentiality of audio recordings was emphasized, and financial incentives were offered to all participants. We recruited 49 physicians and audio-recorded 391 of their overweight/obese adolescents' visits. Recruitment challenges included 1) physician reticence to participate; 2) variability in clinic operating procedures; 3) variability in adolescent accrual rates; 4) clinic open access scheduling; and 5) establishing communication with parents and adolescents. Key solutions included the use of a "clinic champion" to help recruit physicians, pro-active, consistent communication with clinic staff, and adapting calling times to reach parents and adolescents. Recruiting physicians and adolescents to audio-recorded, practice-based health communication studies can be successful. Anticipated challenges to recruiting can be met with advanced planning; however, optimal solutions to challenges evolve as recruitment progresses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Y.
MRI-guided treatment is a growing area of medicine, particularly in radiotherapy and surgery. The exquisite soft tissue anatomic contrast offered by MRI, along with functional imaging, makes the use of MRI during therapeutic procedures very attractive. Challenging the utility of MRI in the therapy room are many issues including the physics of MRI and the impact on the environment and therapeutic instruments, the impact of the room and instruments on the MRI; safety, space, design and cost. In this session, the applications and challenges of MRI-guided treatment will be described. The session format is: Past, present and future: MRI-guided radiotherapymore » from 2005 to 2025: Jan Lagendijk Battling Maxwell’s equations: Physics challenges and solutions for hybrid MRI systems: Paul Keall I want it now!: Advances in MRI acquisition, reconstruction and the use of priors to enable fast anatomic and physiologic imaging to inform guidance and adaptation decisions: Yanle Hu MR in the OR: The growth and applications of MRI for interventional radiology and surgery: Rebecca Fahrig Learning Objectives: To understand the history and trajectory of MRI-guided radiotherapy To understand the challenges of integrating MR imaging systems with linear accelerators To understand the latest in fast MRI methods to enable the visualisation of anatomy and physiology on radiotherapy treatment timescales To understand the growing role and challenges of MRI for image-guided surgical procedures My disclosures are publicly available and updated at: http://sydney.edu.au/medicine/radiation-physics/about-us/disclosures.php.« less
NASA Astrophysics Data System (ADS)
Popov, Igor; Sukov, Sergey
2018-02-01
A modification of the adaptive artificial viscosity (AAV) method is considered. This modification is based on one stage time approximation and is adopted to calculation of gasdynamics problems on unstructured grids with an arbitrary type of grid elements. The proposed numerical method has simplified logic, better performance and parallel efficiency compared to the implementation of the original AAV method. Computer experiments evidence the robustness and convergence of the method to difference solution.
NASA Astrophysics Data System (ADS)
Maslakov, M. L.
2018-04-01
This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
Unstructured mesh algorithms for aerodynamic calculations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1992-01-01
The use of unstructured mesh techniques for solving complex aerodynamic flows is discussed. The principle advantages of unstructured mesh strategies, as they relate to complex geometries, adaptive meshing capabilities, and parallel processing are emphasized. The various aspects required for the efficient and accurate solution of aerodynamic flows are addressed. These include mesh generation, mesh adaptivity, solution algorithms, convergence acceleration, and turbulence modeling. Computations of viscous turbulent two-dimensional flows and inviscid three-dimensional flows about complex configurations are demonstrated. Remaining obstacles and directions for future research are also outlined.
NASA Technical Reports Server (NTRS)
Jawerth, Bjoern; Sweldens, Wim
1993-01-01
We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.
Nordquist, Rebecca E.; van der Staay, Franz Josef; van Eerdenburg, Frank J. C. M.; Velkers, Francisca C.; Fijn, Lisa; Arndt, Saskia S.
2017-01-01
Simple summary Intensive farming systems are confronted with a number of animal welfare issues such as injuries from horns in cattle and feather pecking in poultry. To solve these problems, mutilating procedures, such as dehorning in cattle and goats and beak trimming in laying hens, are applied routinely. These and other procedures such as early maternal separation, overcrowding, and barren housing conditions impair animal welfare. Scientific underpinning of the efficacy of these interventions and management practices is poor. We advocate that all stakeholders, in particular animal scientists and veterinarians, take the lead in evaluating common, putative mutilating and welfare reducing procedures and management practices to develop better, scientifically supported alternatives, focused on adaptation of the environment to the animals, to ensure uncompromised animal welfare. Abstract A number of mutilating procedures, such as dehorning in cattle and goats and beak trimming in laying hens, are common in farm animal husbandry systems in an attempt to prevent or solve problems, such as injuries from horns or feather pecking. These procedures and other practices, such as early maternal separation, overcrowding, and barren housing conditions, raise concerns about animal welfare. Efforts to ensure or improve animal welfare involve adapting the animal to its environment, i.e., by selective breeding (e.g., by selecting “robust” animals) adapting the environment to the animal (e.g., by developing social housing systems in which aggressive encounters are reduced to a minimum), or both. We propose adapting the environment to the animals by improving management practices and housing conditions, and by abandoning mutilating procedures. This approach requires the active involvement of all stakeholders: veterinarians and animal scientists, the industrial farming sector, the food processing and supply chain, and consumers of animal-derived products. Although scientific evidence about the welfare effects of current practices in farming such as mutilating procedures, management practices, and housing conditions is steadily growing, the gain in knowledge needs a boost through more scientific research. Considering the huge number of animals whose welfare is affected, all possible effort must be made to improve their welfare as quickly as possible in order to ban welfare-compromising procedures and practices as soon as possible. PMID:28230800
Sensitivity calculations for iteratively solved problems
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1985-01-01
The calculation of sensitivity derivatives of solutions of iteratively solved systems of algebraic equations is investigated. A modified finite difference procedure is presented which improves the accuracy of the calculated derivatives. The procedure is demonstrated for a simple algebraic example as well as an element-by-element preconditioned conjugate gradient iterative solution technique applied to truss examples.
Adaptive statistical pattern classifiers for remotely sensed data
NASA Technical Reports Server (NTRS)
Gonzalez, R. C.; Pace, M. O.; Raulston, H. S.
1975-01-01
A technique for the adaptive estimation of nonstationary statistics necessary for Bayesian classification is developed. The basic approach to the adaptive estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest and (2) a projection of the parameters in time or position. A divergence criterion is developed to monitor algorithm performance. Comparative results of adaptive and nonadaptive classifier tests are presented for simulated four dimensional spectral scan data.
Urbain, Charline; Houyoux, Emeline; Albouy, Geneviève; Peigneux, Philippe
2014-02-01
Although a beneficial role of post-training sleep for declarative memory has been consistently evidenced in children, as in adults, available data suggest that procedural memory consolidation does not benefit from sleep in children. However, besides the absence of performance gains in children, sleep-dependent plasticity processes involved in procedural memory consolidation might be expressed through differential interference effects on the learning of novel but related procedural material. To test this hypothesis, 32 10-12-year-old children were trained on a motor rotation adaptation task. After either a sleep or a wake period, they were first retested on the same rotation applied at learning, thus assessing offline sleep-dependent changes in performance, then on the opposite (unlearned) rotation to assess sleep-dependent modulations in proactive interference coming from the consolidated visuomotor memory trace. Results show that children gradually improve performance over the learning session, showing effective adaptation to the imposed rotation. In line with previous findings, no sleep-dependent changes in performance were observed for the learned rotation. However, presentation of the opposite, unlearned deviation elicited significantly higher interference effects after post-training sleep than wakefulness in children. Considering that a definite feature of procedural motor memory and skill acquisition is the implementation of highly automatized motor behaviour, thus lacking flexibility, our results suggest a better integration and/or automation or motor adaptation skills after post-training sleep, eventually resulting in higher proactive interference effects on untrained material. © 2013 European Sleep Research Society.
Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceangraphic Data
NASA Technical Reports Server (NTRS)
Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan
1997-01-01
A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous Ftp a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukuda, Ryoichi, E-mail: fukuda@ims.ac.jp; Ehara, Masahiro; Elements Strategy Initiative for Catalysts and Batteries
A perturbative approximation of the state specific polarizable continuum model (PCM) symmetry-adapted cluster-configuration interaction (SAC-CI) method is proposed for efficient calculations of the electronic excitations and absorption spectra of molecules in solutions. This first-order PCM SAC-CI method considers the solvent effects on the energies of excited states up to the first-order with using the zeroth-order wavefunctions. This method can avoid the costly iterative procedure of the self-consistent reaction field calculations. The first-order PCM SAC-CI calculations well reproduce the results obtained by the iterative method for various types of excitations of molecules in polar and nonpolar solvents. The first-order contribution ismore » significant for the excitation energies. The results obtained by the zeroth-order PCM SAC-CI, which considers the fixed ground-state reaction field for the excited-state calculations, are deviated from the results by the iterative method about 0.1 eV, and the zeroth-order PCM SAC-CI cannot predict even the direction of solvent shifts in n-hexane for many cases. The first-order PCM SAC-CI is applied to studying the solvatochromisms of (2,2{sup ′}-bipyridine)tetracarbonyltungsten [W(CO){sub 4}(bpy), bpy = 2,2{sup ′}-bipyridine] and bis(pentacarbonyltungsten)pyrazine [(OC){sub 5}W(pyz)W(CO){sub 5}, pyz = pyrazine]. The SAC-CI calculations reveal the detailed character of the excited states and the mechanisms of solvent shifts. The energies of metal to ligand charge transfer states are significantly sensitive to solvents. The first-order PCM SAC-CI well reproduces the observed absorption spectra of the tungsten carbonyl complexes in several solvents.« less
QUEST+: A general multidimensional Bayesian adaptive psychometric method.
Watson, Andrew B
2017-03-01
QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes. It is a generalization and extension of the original QUEST procedure and incorporates many subsequent developments in the area of parametric adaptive testing. With a single procedure, it is possible to implement a wide variety of experimental designs, including conventional threshold measurement; measurement of psychometric function parameters, such as slope and lapse; estimation of the contrast sensitivity function; measurement of increment threshold functions; measurement of noise-masking functions; Thurstone scale estimation using pair comparisons; and categorical ratings on linear and circular stimulus dimensions. QUEST+ provides a general method to accelerate data collection in many areas of cognitive and perceptual science.
Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions
Liu, Weidong; Luo, Xi
2014-01-01
This paper proposes a new method for estimating sparse precision matrices in the high dimensional setting. It has been popular to study fast computation and adaptive procedures for this problem. We propose a novel approach, called Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross validation, and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. This method also enjoys the advantage of fast computation for large-scale problems, via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463
A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations
Thalhammer, Mechthild; Abhau, Jochen
2012-01-01
As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier-pseudo spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both, the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values locally adaptive time discretisations facilitate to determine the time stepsizes sufficiently small in order that the numerical approximation captures correctly the behaviour of the analytical solution. Further illustrations for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study. PMID:25550676
A numerical study of adaptive space and time discretisations for Gross-Pitaevskii equations.
Thalhammer, Mechthild; Abhau, Jochen
2012-08-15
As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross-Pitaevskii equation arising in the description of Bose-Einstein condensates in dilute gases. The performance of the Fourier-pseudo spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross-Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter [Formula: see text], especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both, the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values locally adaptive time discretisations facilitate to determine the time stepsizes sufficiently small in order that the numerical approximation captures correctly the behaviour of the analytical solution. Further illustrations for Gross-Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study.
Solution-Focused Therapy as a Culturally Acknowledging Approach with American Indians
ERIC Educational Resources Information Center
Meyer, Dixie D.; Cottone, R. Rocco
2013-01-01
Limited literature is available applying specific theoretical orientations with American Indians. Solution-focused therapy may be appropriate, given the client-identified solutions, the egalitarian counselor/client relationship, the use of relationships, and the view that change is inevitable. However, adaption of scaling questions and the miracle…
Beuscart-Zéphir, Marie-Catherine; Pelayo, Sylvia; Bernonville, Stéphanie
2010-04-01
The objectives of this paper are: In this approach, the implementation of such a complex IT solution is considered a major redesign of the work system. The paper describes the Human Factor (HF) tasks embedded in the project lifecycle: (1) analysis and modelling of the current work system and usability assessment of the medication CPOE solution; (2) HF recommendations for work re-design and usability recommendations for IT system re-engineering both aiming at a safer and more efficient work situation. Standard ethnographic methods were used to support the analysis of the current work system and work situations, coupled with cognitive task analysis methods and documents review. Usability inspection (heuristic evaluation) and both in-lab (simulated tasks) and on-site (real tasks) usability tests were performed for the evaluation of the CPOE candidate. Adapted software engineering models were used in combination with usual textual descriptions, tasks models and mock-ups to support the recommendations for work and product re-design. The analysis of the work situations identified different work organisations and procedures across the hospital's departments. The most important differences concerned the doctor-nurse communications and cooperation modes and the procedures for preparing and administering the medications. The assessment of the medication CPOE functions uncovered a number of usability problems including severe ones leading to impossible to detect or to catch errors. Models of the actual and possible distribution of tasks and roles were used to support decision making in the work design process. The results of the usability assessment were translated into requirements to support the necessary re-engineering of the IT application. The HFE approach to medication CPOE efficiently identifies and distinguishes currently unsafe or uncomfortable work situations that could obviously benefit from an IT solution from other work situations incorporating efficient work procedures that might be impaired by the implementation of the CPOE. In this context, a careful redesign of the work situation and of the entire work system is necessary to actually benefit from the installation of the product in terms of patient safety and human performances. In parallel, a usability assessment of the product to be implemented is mandatory to identify potentially dangerous usability flaws and to fix them before the installation. (c) 2009 Elsevier Ireland Ltd. All rights reserved.
A computational procedure for multibody systems including flexible beam dynamics
NASA Technical Reports Server (NTRS)
Downer, J. D.; Park, K. C.; Chiou, J. C.
1990-01-01
A computational procedure suitable for the solution of equations of motions for flexible multibody systems has been developed. A fully nonlinear continuum approach capable of accounting for both finite rotations and large deformations has been used to model a flexible beam component. The beam kinematics are referred directly to an inertial reference frame such that the degrees of freedom embody both the rigid and flexible deformation motions. As such, the beam inertia expression is identical to that of rigid body dynamics. The nonlinear coupling between gross body motion and elastic deformation is contained in the internal force expression. Numerical solution procedures for the integration of spatial kinematic systems can be directily applied to the generalized coordinates of both the rigid and flexible components. An accurate computation of the internal force term which is invariant to rigid motions is incorporated into the general solution procedure.
PDF approach for compressible turbulent reacting flows
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.
1993-01-01
The objective of the present work is to develop a probability density function (pdf) turbulence model for compressible reacting flows for use with a CFD flow solver. The probability density function of the species mass fraction and enthalpy are obtained by solving a pdf evolution equation using a Monte Carlo scheme. The pdf solution procedure is coupled with a compressible CFD flow solver which provides the velocity and pressure fields. A modeled pdf equation for compressible flows, capable of capturing shock waves and suitable to the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed, and an averaging procedure is developed to provide smooth Monte-Carlo solutions to ensure convergence. Two supersonic diffusion flames are studied using the proposed pdf model and the results are compared with experimental data; marked improvements over CFD solutions without pdf are observed. Preliminary applications of pdf to 3D flows are also reported.
Combined LAURA-UPS hypersonic solution procedure
NASA Technical Reports Server (NTRS)
Wood, William A.; Thompson, Richard A.
1993-01-01
A combined solution procedure for hypersonic flowfields around blunted slender bodies was implemented using a thin-layer Navier-Stokes code (LAURA) in the nose region and a parabolized Navier-Stokes code (UPS) on the after body region. Perfect gas, equilibrium air, and non-equilibrium air solutions to sharp cones and a sharp wedge were obtained using UPS alone as a preliminary step. Surface heating rates are presented for two slender bodies with blunted noses, having used LAURA to provide a starting solution to UPS downstream of the sonic line. These are an 8 deg sphere-cone in Mach 5, perfect gas, laminar flow at 0 and 4 deg angles of attack and the Reentry F body at Mach 20, 80,000 ft equilibrium gas conditions for 0 and 0.14 deg angles of attack. The results indicate that this procedure is a timely and accurate method for obtaining aerothermodynamic predictions on slender hypersonic vehicles.
An Eulerian/Lagrangian coupling procedure for three-dimensional vortical flows
NASA Technical Reports Server (NTRS)
Felici, Helene M.; Drela, Mark
1993-01-01
A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of 3D vortical flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method, added to the Eulerian time-marching procedure, provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state-vector along the particles trajectories. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers describe accurately the convection properties and enhance the vorticity and entropy capturing capabilities of the Eulerian solver. The Eulerian/Lagrangian coupling strategies are discussed and the combined scheme is tested on a constant stagnation pressure flow in a 90 deg bend and on a swirling pipe flow. As the numerical diffusion is reduced when using the Lagrangian correction, a vorticity gradient augmentation is identified as a basic problem of this inviscid calculation.
The Aids' Requirements of Children with Severe Multiple Handicaps and the People Looking after Them.
ERIC Educational Resources Information Center
Anden, Gerd
The report presents findings from interviews with 10 families with children (4-19 years old) with severe mental retardation and multiple disabilities regarding the need for technical aids and adaptations in their homes. The following areas are addressed and examples of solutions proposed: hygienic aids (hot water adaptations, travel adaptations,…
Electrical and optical evaluation aspects of public lighting systems
NASA Astrophysics Data System (ADS)
Tulbure, Adrian; Marc, Gheorghe; Kurt, Ünal
2016-12-01
This paper briefs a few issues regarding the technical validation of public lighting solutions. The novelty of the work is justified by the fact that it combines technical legislation in force [1], with practical analysis procedures [2]. Thus, in order to select the optimal solution, the paper describes a case study of measurement procedure which confirms the high electrical and optical characteristics [3] of the proposed solutions. At the end of the contribution, comparative design purposes for the two versions of modern street lighting are presented.
A Network Optimization Solution using SAS/OR Tools for the Department of the Army Branching Problem
2010-02-18
OPTMODEL; NETFLOW ;Nodes;Arcs;ROTC; assignments;Basic Branches;Cadet Satisfaction; CLASSIFICATION: Unclassified This paper will demonstrate...implement a solution using the NETFLOW procedure and repeat that network solution using the OPTMODEL procedure. The OPTMODEL implementation will be...96.545599 M 1 AV AV IN EN FA AR 4 96.221521 M 1 IN IN MI EN MP AR 1 Figure 1, Supply: cadet data (5 of 2545) ordered by OMS PROC NETFLOW takes a
A self-adaptive-grid method with application to airfoil flow
NASA Technical Reports Server (NTRS)
Nakahashi, K.; Deiwert, G. S.
1985-01-01
A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates
NASA Astrophysics Data System (ADS)
Huang, Weizhang; Kamenski, Lennard; Lang, Jens
2010-03-01
A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
A self-organizing Lagrangian particle method for adaptive-resolution advection-diffusion simulations
NASA Astrophysics Data System (ADS)
Reboux, Sylvain; Schrader, Birte; Sbalzarini, Ivo F.
2012-05-01
We present a novel adaptive-resolution particle method for continuous parabolic problems. In this method, particles self-organize in order to adapt to local resolution requirements. This is achieved by pseudo forces that are designed so as to guarantee that the solution is always well sampled and that no holes or clusters develop in the particle distribution. The particle sizes are locally adapted to the length scale of the solution. Differential operators are consistently evaluated on the evolving set of irregularly distributed particles of varying sizes using discretization-corrected operators. The method does not rely on any global transforms or mapping functions. After presenting the method and its error analysis, we demonstrate its capabilities and limitations on a set of two- and three-dimensional benchmark problems. These include advection-diffusion, the Burgers equation, the Buckley-Leverett five-spot problem, and curvature-driven level-set surface refinement.
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.
A Comparison of Error-Correction Procedures on Skill Acquisition during Discrete-Trial Instruction
ERIC Educational Resources Information Center
Carroll, Regina A.; Joachim, Brad T.; St. Peter, Claire C.; Robinson, Nicole
2015-01-01
Previous research supports the use of a variety of error-correction procedures to facilitate skill acquisition during discrete-trial instruction. We used an adapted alternating treatments design to compare the effects of 4 commonly used error-correction procedures on skill acquisition for 2 children with attention deficit hyperactivity disorder…
Nanopharmaceuticals: Tiny challenges for the environmental risk assessment of pharmaceuticals.
Berkner, Silvia; Schwirn, Kathrin; Voelker, Doris
2016-04-01
Many new developments and innovations in health care are based on nanotechnology. The field of nanopharmaceuticals is diverse and not as new as one might think; indeed, nanopharmaceuticals have been marketed for many years, and the future is likely to bring more nanosized compounds to the market. Therefore, it is time to examine whether the environmental risk assessment for human pharmaceuticals is prepared to assess the exposure, fate, and effects of nanopharmaceuticals in an adequate way. Challenges include the different definitions for nanomaterials and nanopharmaceuticals, different regulatory frameworks, the diversity of nanopharmaceuticals, the scope of current regulatory guidelines, and the applicability of test protocols. Based on the current environmental risk assessment for human medicinal products in the European Union, necessary adaptations for the assessment procedures and underlying study protocols are discussed and emerging solutions identified. © 2015 The Authors. Environmental Toxicology & Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC.
Minimum change in spherical aberration that can be perceived
Manzanera, Silvestre; Artal, Pablo
2016-01-01
It is important to know the visual sensitivity to optical blur from both a basic science perspective and a practical point of view. Of particular interest is the sensitivity to blur induced by spherical aberration because it is being used to increase depth of focus as a component of a presbyopic solution. Using a flicker detection-based procedure implemented on an adaptive optics visual simulator, we measured the spherical aberration thresholds that produce just-noticeable differences in perceived image quality. The thresholds were measured for positive and negative values of spherical aberration, for best focus and + 0.5 D and + 1.0 D of defocus. At best focus, the SA thresholds were 0.20 ± 0.01 µm and −0.17 ± 0.03 µm for positive and negative spherical aberration respectively (referred to a 6-mm pupil). These experimental values may be useful in setting spherical aberration permissible levels in different ophthalmic techniques. PMID:27699113
Interleaved Training and Training-Based Transmission Design for Hybrid Massive Antenna Downlink
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Jing, Yindi; Huang, Yongming; Yang, Luxi
2018-06-01
In this paper, we study the beam-based training design jointly with the transmission design for hybrid massive antenna single-user (SU) and multiple-user (MU) systems where outage probability is adopted as the performance measure. For SU systems, we propose an interleaved training design to concatenate the feedback and training procedures, thus making the training length adaptive to the channel realization. Exact analytical expressions are derived for the average training length and the outage probability of the proposed interleaved training. For MU systems, we propose a joint design for the beam-based interleaved training, beam assignment, and MU data transmissions. Two solutions for the beam assignment are provided with different complexity-performance tradeoff. Analytical results and simulations show that for both SU and MU systems, the proposed joint training and transmission designs achieve the same outage performance as the traditional full-training scheme but with significant saving in the training overhead.
[Financing problems of capital goods: part 1: leasing as a solution?].
Clausen, C C; Bauer, M; Saleh, A; Picker, O
2008-06-01
The provision of financial support of hospitals by States for buying capital goods is becoming increasingly more limited. In order to still make investments, alternative forms of financing such as leasing must be considered in hospitals. However, the change from the classical form of dual financing and the decision to opt for a leasing model involves much more than just a question of costs. Leasing results in easily manageable expenditure, flexibility and adaptability for the choice of model but the leasing installments must be directly financed by the turnover from diagnosis-related groups and so lead to a reduction in the annual profit. In this article the authors try to give the reader an overview of the complex and sometimes counter-productive effect of financial instruments for investments in hospitals using leasing financing as an example. In the follow-up article the decision-making procedure using dynamic investment calculations will be demonstrated using a concrete example.
Granularity analysis for mathematical proofs.
Schiller, Marvin R G
2013-04-01
Mathematical proofs generally allow for various levels of detail and conciseness, such that they can be adapted for a particular audience or purpose. Using automated reasoning approaches for teaching proof construction in mathematics presupposes that the step size of proofs in such a system is appropriate within the teaching context. This work proposes a framework that supports the granularity analysis of mathematical proofs, to be used in the automated assessment of students' proof attempts and for the presentation of hints and solutions at a suitable pace. Models for granularity are represented by classifiers, which can be generated by hand or inferred from a corpus of sample judgments via machine-learning techniques. This latter procedure is studied by modeling granularity judgments from four experts. The results provide support for the granularity of assertion-level proofs but also illustrate a degree of subjectivity in assessing step size. Copyright © 2013 Cognitive Science Society, Inc.
Autonomous Data Collection Using a Self-Organizing Map.
Faigl, Jan; Hollinger, Geoffrey A
2018-05-01
The self-organizing map (SOM) is an unsupervised learning technique providing a transformation of a high-dimensional input space into a lower dimensional output space. In this paper, we utilize the SOM for the traveling salesman problem (TSP) to develop a solution to autonomous data collection. Autonomous data collection requires gathering data from predeployed sensors by moving within a limited communication radius. We propose a new growing SOM that adapts the number of neurons during learning, which also allows our approach to apply in cases where some sensors can be ignored due to a lower priority. Based on a comparison with available combinatorial heuristic algorithms for relevant variants of the TSP, the proposed approach demonstrates improved results, while also being less computationally demanding. Moreover, the proposed learning procedure can be extended to cases where particular sensors have varying communication radii, and it can also be extended to multivehicle planning.
Computational Aerothermodynamic Simulation Issues on Unstructured Grids
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; White, Jeffery A.
2004-01-01
The synthesis of physical models for gas chemistry and turbulence from the structured grid codes LAURA and VULCAN into the unstructured grid code FUN3D is described. A directionally Symmetric, Total Variation Diminishing (STVD) algorithm and an entropy fix (eigenvalue limiter) keyed to local cell Reynolds number are introduced to improve solution quality for hypersonic aeroheating applications. A simple grid-adaptation procedure is incorporated within the flow solver. Simulations of flow over an ellipsoid (perfect gas, inviscid), Shuttle Orbiter (viscous, chemical nonequilibrium) and comparisons to the structured grid solvers LAURA (cylinder, Shuttle Orbiter) and VULCAN (flat plate) are presented to show current capabilities. The quality of heating in 3D stagnation regions is very sensitive to algorithm options in general, high aspect ratio tetrahedral elements complicate the simulation of high Reynolds number, viscous flow as compared to locally structured meshes aligned with the flow.
Staggered solution procedures for multibody dynamics simulation
NASA Technical Reports Server (NTRS)
Park, K. C.; Chiou, J. C.; Downer, J. D.
1990-01-01
The numerical solution procedure for multibody dynamics (MBD) systems is termed a staggered MBD solution procedure that solves the generalized coordinates in a separate module from that for the constraint force. This requires a reformulation of the constraint conditions so that the constraint forces can also be integrated in time. A major advantage of such a partitioned solution procedure is that additional analysis capabilities such as active controller and design optimization modules can be easily interfaced without embedding them into a monolithic program. After introducing the basic equations of motion for MBD system in the second section, Section 3 briefly reviews some constraint handling techniques and introduces the staggered stabilized technique for the solution of the constraint forces as independent variables. The numerical direct time integration of the equations of motion is described in Section 4. As accurate damping treatment is important for the dynamics of space structures, we have employed the central difference method and the mid-point form of the trapezoidal rule since they engender no numerical damping. This is in contrast to the current practice in dynamic simulations of ground vehicles by employing a set of backward difference formulas. First, the equations of motion are partitioned according to the translational and the rotational coordinates. This sets the stage for an efficient treatment of the rotational motions via the singularity-free Euler parameters. The resulting partitioned equations of motion are then integrated via a two-stage explicit stabilized algorithm for updating both the translational coordinates and angular velocities. Once the angular velocities are obtained, the angular orientations are updated via the mid-point implicit formula employing the Euler parameters. When the two algorithms, namely, the two-stage explicit algorithm for the generalized coordinates and the implicit staggered procedure for the constraint Lagrange multipliers, are brought together in a staggered manner, they constitute a staggered explicit-implicit procedure which is summarized in Section 5. Section 6 presents some example problems and discussions concerning several salient features of the staggered MBD solution procedure are offered in Section 7.
Statistical efficiency of adaptive algorithms.
Widrow, Bernard; Kamenetsky, Max
2003-01-01
The statistical efficiency of a learning algorithm applied to the adaptation of a given set of variable weights is defined as the ratio of the quality of the converged solution to the amount of data used in training the weights. Statistical efficiency is computed by averaging over an ensemble of learning experiences. A high quality solution is very close to optimal, while a low quality solution corresponds to noisy weights and less than optimal performance. In this work, two gradient descent adaptive algorithms are compared, the LMS algorithm and the LMS/Newton algorithm. LMS is simple and practical, and is used in many applications worldwide. LMS/Newton is based on Newton's method and the LMS algorithm. LMS/Newton is optimal in the least squares sense. It maximizes the quality of its adaptive solution while minimizing the use of training data. Many least squares adaptive algorithms have been devised over the years, but no other least squares algorithm can give better performance, on average, than LMS/Newton. LMS is easily implemented, but LMS/Newton, although of great mathematical interest, cannot be implemented in most practical applications. Because of its optimality, LMS/Newton serves as a benchmark for all least squares adaptive algorithms. The performances of LMS and LMS/Newton are compared, and it is found that under many circumstances, both algorithms provide equal performance. For example, when both algorithms are tested with statistically nonstationary input signals, their average performances are equal. When adapting with stationary input signals and with random initial conditions, their respective learning times are on average equal. However, under worst-case initial conditions, the learning time of LMS can be much greater than that of LMS/Newton, and this is the principal disadvantage of the LMS algorithm. But the strong points of LMS are ease of implementation and optimal performance under important practical conditions. For these reasons, the LMS algorithm has enjoyed very widespread application. It is used in almost every modem for channel equalization and echo cancelling. Furthermore, it is related to the famous backpropagation algorithm used for training neural networks.
Rapid, generalized adaptation to asynchronous audiovisual speech.
Van der Burg, Erik; Goodbourn, Patrick T
2015-04-07
The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
ERIC Educational Resources Information Center
Hartman, Rhona C.; Redden, Martha Ross
The fact sheet focuses on considerations when testing adaptations are needed, provides some facts about disability, and identifies a variety of adaptations of testing procedures which have been developed and successfully used in schools, vocational training programs, and on college campuses. Testing adaptations are discussed in terms of disability…
ERIC Educational Resources Information Center
Ercikan, Kadriye; Alper, Naim
2009-01-01
This commentary first summarizes and discusses the analysis of the two translation processes described in the Oliveira, Colak, and Akerson article and the inferences these researchers make based on their research. In the second part of the commentary, we describe procedures and criteria used in adapting tests into different languages and how they…
Passot, Jean-Baptiste; Luque, Niceto R.; Arleo, Angelo
2013-01-01
The cerebellum is thought to mediate sensorimotor adaptation through the acquisition of internal models of the body-environment interaction. These representations can be of two types, identified as forward and inverse models. The first predicts the sensory consequences of actions, while the second provides the correct commands to achieve desired state transitions. In this paper, we propose a composite architecture consisting of multiple cerebellar internal models to account for the adaptation performance of humans during sensorimotor learning. The proposed model takes inspiration from the cerebellar microcomplex circuit, and employs spiking neurons to process information. We investigate the intrinsic properties of the cerebellar circuitry subserving efficient adaptation properties, and we assess the complementary contributions of internal representations by simulating our model in a procedural adaptation task. Our simulation results suggest that the coupling of internal models enhances learning performance significantly (compared with independent forward and inverse models), and it allows for the reproduction of human adaptation capabilities. Furthermore, we provide a computational explanation for the performance improvement observed after one night of sleep in a wide range of sensorimotor tasks. We predict that internal model coupling is a necessary condition for the offline consolidation of procedural memories. PMID:23874289
Gonthier, Corentin; Aubry, Alexandre; Bourdin, Béatrice
2018-06-01
Working memory tasks designed for children usually present trials in order of ascending difficulty, with testing discontinued when the child fails a particular level. Unfortunately, this procedure comes with a number of issues, such as decreased engagement from high-ability children, vulnerability of the scores to temporary mind-wandering, and large between-subjects variations in number of trials, testing time, and proactive interference. To circumvent these problems, the goal of the present study was to demonstrate the feasibility of assessing working memory using an adaptive testing procedure. The principle of adaptive testing is to dynamically adjust the level of difficulty as the task progresses to match the participant's ability. We used this method to develop an adaptive complex span task (the ACCES) comprising verbal and visuo-spatial subtests. The task presents a fixed number of trials to all participants, allows for partial credit scoring, and can be used with children regardless of ability level. The ACCES demonstrated satisfying psychometric properties in a sample of 268 children aged 8-13 years, confirming the feasibility of using adaptive tasks to measure working memory capacity in children. A free-to-use implementation of the ACCES is provided.
Training a Network of Electronic Neurons for Control of a Mobile Robot
NASA Astrophysics Data System (ADS)
Vromen, T. G. M.; Steur, E.; Nijmeijer, H.
An adaptive training procedure is developed for a network of electronic neurons, which controls a mobile robot driving around in an unknown environment while avoiding obstacles. The neuronal network controls the angular velocity of the wheels of the robot based on the sensor readings. The nodes in the neuronal network controller are clusters of neurons rather than single neurons. The adaptive training procedure ensures that the input-output behavior of the clusters is identical, even though the constituting neurons are nonidentical and have, in isolation, nonidentical responses to the same input. In particular, we let the neurons interact via a diffusive coupling, and the proposed training procedure modifies the diffusion interaction weights such that the neurons behave synchronously with a predefined response. The working principle of the training procedure is experimentally validated and results of an experiment with a mobile robot that is completely autonomously driving in an unknown environment with obstacles are presented.
Use of prism adaptation in children with unilateral brain lesion: Is it feasible?
Riquelme, Inmaculada; Henne, Camille; Flament, Benoit; Legrain, Valéry; Bleyenheuft, Yannick; Hatem, Samar M
2015-01-01
Unilateral visuospatial deficits have been observed in children with brain damage. While the effectiveness of prism adaptation for treating unilateral neglect in adult stroke patients has been demonstrated previously, the usefulness of prism adaptation in a pediatric population is still unknown. The present study aims at evaluating the feasibility of prism adaptation in children with unilateral brain lesion and comparing the validity of a game procedure designed for child-friendly paediatric intervention, with the ecological task used for prism adaptation in adult patients. Twenty-one children with unilateral brain lesion randomly were assigned to a prism group wearing prismatic glasses, or a control group wearing neutral glasses during a bimanual task intervention. All children performed two different bimanual tasks on randomly assigned consecutive days: ecological tasks or game tasks. The efficacy of prism adaptation was measured by assessing its after-effects with visual open loop pointing (visuoproprioceptive test) and subjective straight-ahead pointing (proprioceptive test). Game tasks and ecological tasks produced similar after-effects. Prismatic glasses elicited a significant shift of visuospatial coordinates which was not observed in the control group. Prism adaptation performed with game tasks seems an effective procedure to obtain after-effects in children with unilateral brain lesion. The usefulness of repetitive prism adaptation sessions as a therapeutic intervention in children with visuospatial deficits and/or neglect, should be investigated in future studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
A triangular thin shell finite element: Nonlinear analysis. [structural analysis
NASA Technical Reports Server (NTRS)
Thomas, G. R.; Gallagher, R. H.
1975-01-01
Aspects of the formulation of a triangular thin shell finite element which pertain to geometrically nonlinear (small strain, finite displacement) behavior are described. The procedure for solution of the resulting nonlinear algebraic equations combines a one-step incremental (tangent stiffness) approach with one iteration in the Newton-Raphson mode. A method is presented which permits a rational estimation of step size in this procedure. Limit points are calculated by means of a superposition scheme coupled to the incremental side of the solution procedure while bifurcation points are calculated through a process of interpolation of the determinants of the tangent-stiffness matrix. Numerical results are obtained for a flat plate and two curved shell problems and are compared with alternative solutions.
Adapting construction staking to modern technology : final report.
DOT National Transportation Integrated Search
2017-08-01
This report summarizes the tasks and findings of the ICT Project R27-163, Adapting Construction Staking to Modern Technology, which aims to develop written procedures for the use of modern technologies (such as GPS and civil information modeling) in ...
Aguila-Camacho, Norelys; Duarte-Mermoud, Manuel A
2016-01-01
This paper presents the analysis of three classes of fractional differential equations appearing in the field of fractional adaptive systems, for the case when the fractional order is in the interval α ∈(0,1] and the Caputo definition for fractional derivatives is used. The boundedness of the solutions is proved for all three cases, and the convergence to zero of the mean value of one of the variables is also proved. Applications of the obtained results to fractional adaptive schemes in the context of identification and control problems are presented at the end of the paper, including numerical simulations which support the analytical results. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
A new approach for solving the three-dimensional steady Euler equations. I - General theory
NASA Technical Reports Server (NTRS)
Chang, S.-C.; Adamczyk, J. J.
1986-01-01
The present iterative procedure combines the Clebsch potentials and the Munk-Prim (1947) substitution principle with an extension of a semidirect Cauchy-Riemann solver to three dimensions, in order to solve steady, inviscid three-dimensional rotational flow problems in either subsonic or incompressible flow regimes. This solution procedure can be used, upon discretization, to obtain inviscid subsonic flow solutions in a 180-deg turning channel. In addition to accurately predicting the behavior of weak secondary flows, the algorithm can generate solutions for strong secondary flows and will yield acceptable flow solutions after only 10-20 outer loop iterations.
A new approach for solving the three-dimensional steady Euler equations. I - General theory
NASA Astrophysics Data System (ADS)
Chang, S.-C.; Adamczyk, J. J.
1986-08-01
The present iterative procedure combines the Clebsch potentials and the Munk-Prim (1947) substitution principle with an extension of a semidirect Cauchy-Riemann solver to three dimensions, in order to solve steady, inviscid three-dimensional rotational flow problems in either subsonic or incompressible flow regimes. This solution procedure can be used, upon discretization, to obtain inviscid subsonic flow solutions in a 180-deg turning channel. In addition to accurately predicting the behavior of weak secondary flows, the algorithm can generate solutions for strong secondary flows and will yield acceptable flow solutions after only 10-20 outer loop iterations.
Strategies for Choosing Descent Flight-Path Angles for Small Jets
NASA Technical Reports Server (NTRS)
Wu, Minghong Gilbert; Green, Steven M.
2012-01-01
Three candidate strategies for choosing the descent flight path angle (FPA) for small jets are proposed, analyzed, and compared for fuel efficiency under arrival metering conditions. The strategies vary in operational complexity from a universally fixed FPA, or FPA function that varies with descent speed for improved fuel efficiency, to the minimum-fuel FPA computed for each flight based on winds, route, and speed profile. Methodologies for selecting the parameter for the first two strategies are described. The differences in fuel burn are analyzed over a year s worth of arrival traffic and atmospheric conditions recorded for the Dallas/Fort Worth (DFW) Airport during 2011. The results show that the universally fixed FPA strategy (same FPA for all flights, all year) burns on average 26 lbs more fuel per flight as compared to the minimum-fuel solution. This FPA is adapted to the arrival gate (direction of entry to the terminal) and various timespans (season, month and day) to improve fuel efficiency. Compared to a typical FPA of approximately 3 degrees the adapted FPAs vary significantly, up to 1.3 from one arrival gate to another or up to 1.4 from one day to another. Adapting the universally fixed FPA strategy to the arrival gate or to each day reduces the extra fuel burn relative to the minimum-fuel solution by 27% and 34%, respectively. The adaptations to gate and time combined shows up to 57% reduction of the extra fuel burn. The second strategy, an FPA function, contributes a 17% reduction in the 26 lbs of extra fuel burn over the universally fixed FPA strategy. Compared to the corresponding adaptations of the universally fixed FPA, adaptations of the FPA function reduce the extra fuel burn anywhere from 15-23% depending on the extent of adaptation. The combined effect of the FPA function strategy with both directional and temporal adaptation recovers 67% of the extra fuel relative to the minimum-fuel solution.
Research and constructive solutions on the reduction of slosh noise
NASA Astrophysics Data System (ADS)
Manta (Balas, M.; Balas, R.; Doicin, C. V.
2016-11-01
The paper presents a product design making of, over a “delicate issue” in automotive industry as slosh noise phenomena. Even though the current market tendency shows great achievements over this occurrence, in this study, the main idea is to design concepts of slosh noise baffles adapted for serial life existing fuel tanks in the automotive industry. Moreover, starting with internal and external research, going further through reversed engineering and applying own baffle technical solutions from conceptual sketches to 3D design, the paper shows the technical solutions identified as an alternative to a new development of fuel tank. Based on personal and academic experience there were identified several problematics and the possible answers based on functional analysis, in order to avoid blocking points. The idea of developing baffles adapted to already existent fuel tanks leaded to equivalent solutions analyzed from functional point of view. Once this stage is finished, a methodology will be used so as to choose the optimum solution so as to get the functional design.
Solution of plane cascade flow using improved surface singularity methods
NASA Technical Reports Server (NTRS)
Mcfarland, E. R.
1981-01-01
A solution method has been developed for calculating compressible inviscid flow through a linear cascade of arbitrary blade shapes. The method uses advanced surface singularity formulations which were adapted from those found in current external flow analyses. The resulting solution technique provides a fast flexible calculation for flows through turbomachinery blade rows. The solution method and some examples of the method's capabilities are presented.
Clean Energy Solutions Center: Assisting Countries with Clean Energy Policy
Energy Solutions Center: Assisting Countries with Clean Energy Policy NREL helps developing countries and adapting to climate change impacts, developing countries are looking for clean energy solutions supports clean energy scale-up in the developing world are knowledge, capacity, and cost. The Clean Energy
Abdelrahim, Huda; Elnashar, Maha; Khidir, Amal; Killawi, Amal; Hammoud, Maya; Al-Khal, Abdul Latif; Fetters, Michael D
2017-04-01
Reducing language and cultural barriers in healthcare are significant factors in resolving health disparities. Qatar's rapidly growing multicultural population presents new challenges to the healthcare system. The purpose of this research was to explore patients' perspectives about language discordance, and the strategies used to overcome language barriers during patients' visits. Participants were recruited and interviewed from four language groups (Arabic = 24, English = 20, Hindi = 20, and Urdu = 20), all of whom were living in Qatar and utilizing Hamad General Hospital-Outpatient Clinics as a source of their healthcare services. Using qualitative analysis procedures, relevant themes and codes were generated and data analyzed using Atlas-ti. As for results, most participants had experienced or witnessed language barriers during their outpatient clinics visits. Participants essentially were unfamiliar with professional medical interpreters and described their adaptive solutions, for example utilizing incidental interpreters, stringing together fragments of multiple languages, and using body language. Those not speaking mainstream languages of Hamad General Hospital (English and Arabic) were more vulnerable to health disparities due to language barriers. Despite the patient impetus to do something, patient-reported adaptive strategies could compromise patients' safety and access to quality healthcare. Polices tackling the language barrier need to be reviewed in Qatar's multicultural healthcare system and similar settings.
NASA Astrophysics Data System (ADS)
Gilman, Charles R.; Aparicio, Manuel; Barry, J.; Durniak, Timothy; Lam, Herman; Ramnath, Rajiv
1997-12-01
An enterprise's ability to deliver new products quickly and efficiently to market is critical for competitive success. While manufactureres recognize the need for speed and flexibility to compete in this market place, companies do not have the time or capital to move to new automation technologies. The National Industrial Information Infrastructure Protocols Consortium's Solutions for MES Adaptable Replicable Technology (NIIIP SMART) subgroup is developing an information infrastructure to enable the integration and interoperation among Manufacturing Execution Systems (MES) and Enterprise Information Systems within an enterprise or among enterprises. The goal of these developments is an adaptable, affordable, reconfigurable, integratable manufacturing system. Key innovative aspects of NIIIP SMART are: (1) Design of an industry standard object model that represents the diverse aspects of MES. (2) Design of a distributed object network to support real-time information sharing. (3) Product data exchange based on STEP and EXPRESS (ISO 10303). (4) Application of workflow and knowledge management technologies to enact manufacturing and business procedures and policy. (5) Application of intelligent agents to support emergent factories. This paper illustrates how these technologies have been incorporated into the NIIIP SMART system architecture to enable the integration and interoperation of existing tools and future MES applications in a 'plug and play' environment.
Development of Three-Dimensional DRAGON Grid Technology
NASA Technical Reports Server (NTRS)
Zheng, Yao; Kiou, Meng-Sing; Civinskas, Kestutis C.
1999-01-01
For a typical three dimensional flow in a practical engineering device, the time spent in grid generation can take 70 percent of the total analysis effort, resulting in a serious bottleneck in the design/analysis cycle. The present research attempts to develop a procedure that can considerably reduce the grid generation effort. The DRAGON grid, as a hybrid grid, is created by means of a Direct Replacement of Arbitrary Grid Overlapping by Nonstructured grid. The DRAGON grid scheme is an adaptation to the Chimera thinking. The Chimera grid is a composite structured grid, composing a set of overlapped structured grids, which are independently generated and body-fitted. The grid is of high quality and amenable for efficient solution schemes. However, the interpolation used in the overlapped region between grids introduces error, especially when a sharp-gradient region is encountered. The DRAGON grid scheme is capable of completely eliminating the interpolation and preserving the conservation property. It maximizes the advantages of the Chimera scheme and adapts the strengths of the unstructured and while at the same time keeping its weaknesses minimal. In the present paper, we describe the progress towards extending the DRAGON grid technology into three dimensions. Essential and programming aspects of the extension, and new challenges for the three-dimensional cases, are addressed.
Adaptive form-finding method for form-fixed spatial network structures
NASA Astrophysics Data System (ADS)
Lan, Cheng; Tu, Xi; Xue, Junqing; Briseghella, Bruno; Zordan, Tobia
2018-02-01
An effective form-finding method for form-fixed spatial network structures is presented in this paper. The adaptive form-finding method is introduced along with the example of designing an ellipsoidal network dome with bar length variations being as small as possible. A typical spherical geodesic network is selected as an initial state, having bar lengths in a limit group number. Next, this network is transformed into the ellipsoidal shape as desired by applying compressions on bars according to the bar length variations caused by transformation. Afterwards, the dynamic relaxation method is employed to explicitly integrate the node positions by applying residual forces. During the form-finding process, the boundary condition of constraining nodes on the ellipsoid surface is innovatively considered as reactions on the normal direction of the surface at node positions, which are balanced with the components of the nodal forces in a reverse direction induced by compressions on bars. The node positions are also corrected according to the fixed-form condition in each explicit iteration step. In the serial results of time history, the optimal solution is found from a time history of states by properly choosing convergence criteria, and the presented form-finding procedure is proved to be applicable for form-fixed problems.
Space motion sickness preflight adaptation training: preliminary studies with prototype trainers
NASA Technical Reports Server (NTRS)
Parker, D. E.; Rock, J. C.; von Gierke, H. E.; Ouyang, L.; Reschke, M. F.; Arrott, A. P.
1987-01-01
Preflight training frequently has been proposed as a potential solution to the problem of space motion sickness. The paper considers successively the otolith reinterpretation, the concept for a preflight adaptation trainer and the research with the Miami University Seesaw, the Wright Patterson Air-Force Base Dynamic Environment Simulator and the Visually Coupled Airborne Systems Simulator prototype adaptation trainers.
Adaptive 3D single-block grids for the computation of viscous flows around wings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagmeijer, R.; Kok, J.C.
1996-12-31
A robust algorithm for the adaption of a 3D single-block structured grid suitable for the computation of viscous flows around a wing is presented and demonstrated by application to the ONERA M6 wing. The effects of grid adaption on the flow solution and accuracy improvements is analyzed. Reynolds number variations are studied.
Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1999-01-01
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG. using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However. performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region., creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
Utilization of group theory in studies of molecular clusters
NASA Astrophysics Data System (ADS)
Ocak, Mahir E.
The structure of the molecular symmetry group of molecular clusters was analyzed and it is shown that the molecular symmetry group of a molecular cluster can be written as direct products and semidirect products of its subgroups. Symmetry adaptation of basis functions in direct product groups and semidirect product groups was considered in general and the sequential symmetry adaptation procedure which is already known for direct product groups was extended to the case of semidirect product groups. By using the sequential symmetry adaptation procedure a new method for calculating the VRT spectra of molecular clusters which is named as Monomer Basis Representation (MBR) method is developed. In the MBR method, calculations starts with a single monomer with the purpose of obtaining an optimized basis for that monomer as a linear combination of some primitive basis functions. Then, an optimized basis for each identical monomer is generated from the optimized basis of this monomer. By using the optimized bases of the monomers, a basis is generated generated for the solution of the full problem, and the VRT spectra of the cluster is obtained by using this basis. Since an optimized basis is used for each monomer which has a much smaller size than the primitive basis from which the optimized bases are generated, the MBR method leads to an exponential optimization in the size of the basis that is required for the calculations. Application of the MBR method has been illustrated by calculating the VRT spectra of water dimer by using the SAPT-5st potential surface of Groenenboom et al. The rest of the calculations are in good agreement with both the original calculations of Groenenboom et al. and also with the experimental results. Comparing the size of the optimized basis with the size of the primitive basis, it can be said that the method works efficiently. Because of its efficiency, the MBR method can be used for studies of clusters bigger than dimers. Thus, MBR method can be used for studying the many-body terms and for deriving accurate potential surfaces.
49 CFR 572.142 - Head assembly and test procedure.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 7 2013-10-01 2013-10-01 false Head assembly and test procedure. 572.142 Section...-year-Old Child Crash Test Dummy, Alpha Version § 572.142 Head assembly and test procedure. (a) The head assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter plate...
49 CFR 572.142 - Head assembly and test procedure.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 7 2011-10-01 2011-10-01 false Head assembly and test procedure. 572.142 Section...-year-Old Child Crash Test Dummy, Alpha Version § 572.142 Head assembly and test procedure. (a) The head assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter plate...
49 CFR 572.142 - Head assembly and test procedure.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 7 2014-10-01 2014-10-01 false Head assembly and test procedure. 572.142 Section...-year-Old Child Crash Test Dummy, Alpha Version § 572.142 Head assembly and test procedure. (a) The head assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter plate...
49 CFR 572.142 - Head assembly and test procedure.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 7 2012-10-01 2012-10-01 false Head assembly and test procedure. 572.142 Section...-year-Old Child Crash Test Dummy, Alpha Version § 572.142 Head assembly and test procedure. (a) The head assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter plate...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goffin, Mark A., E-mail: mark.a.goffin@gmail.com; Buchan, Andrew G.; Dargaville, Steven
2015-01-15
A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specifiedmore » functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation. -- Highlights: •Wavelet angular discretisation used to solve transport equation. •Adaptive method developed for the wavelet discretisation. •Anisotropic angular resolution demonstrated through the adaptive method. •Adaptive method provides improvements in computational efficiency.« less
Cınar, Yasin; Cingü, Abdullah Kürşat; Sahin, Alparslan; Türkcü, Fatih Mehmet; Yüksel, Harun; Caca, Ihsan
2014-03-01
Abstract Objective: To monitor the changes in corneal thickness during the corneal collagen cross-linking procedure by using isotonic riboflavin solution without dextran in ectatic corneal diseases. The corneal thickness measurements were obtained before epithelial removal, after epithelial removal, following the instillation of isotonic riboflavin solution without dextran for 30 min, and after 10 min of ultraviolet A irradiation. Eleven eyes of eleven patients with progressive keratoconus (n = 10) and iatrogenic corneal ectasia (n = 1) were included in this study. The mean thinnest pachymetric measurements were 391.82 ± 30.34 µm (320-434 µm) after de-epithelialization of the cornea, 435 ± 21.17 µm (402-472 µm) following 30 min instillation of isotonic riboflavin solution without dextran and 431.73 ± 20.64 µm (387-461 µm) following 10 min of ultraviolet A irradiation to the cornea. Performing corneal cross-linking procedure with isotonic riboflavin solution without dextran might not induce corneal thinning but a little swelling throughout the procedure.
NASA Astrophysics Data System (ADS)
Palumbo, Giovanna; Tosi, Daniele; Schena, Emiliano; Massaroni, Carlo; Ippolito, Juliet; Verze, Paolo; Carlomagno, Nicola; Tammaro, Vincenzo; Iadicicco, Agostino; Campopiano, Stefania
2017-05-01
Fiber Bragg Grating (FBG) sensors applied to bio-medical procedures such as surgery and rehabilitation are a valid alternative to traditional sensing techniques due to their unique characteristics. Herein we propose the use of FBG sensor arrays for accurate real-time temperature measurements during multi-step RadioFrequency Ablation (RFA) based thermal tumor treatment. Real-time temperature monitoring in the RF-applied region represents a valid feedback for the success of the thermo-ablation procedure. In order to create a thermal multi-point map around the tumor area to be treated, a proper sensing configuration was developed. In particular, the RF probe of a commercial medical instrumentation, has been equipped with properly packaged FBGs sensors. Moreover, in order to discriminate the treatment areas to be ablated as precisely as possible, a second array 3.5 cm long, made by several FBGs was used. The results of the temperature measurements during the RFA experiments conducted on ex-vivo animal liver and kidney tissues are presented herein. The proposed FBGs based solution has proven to be capable of distinguish different and consecutive discharges and for each of them, to measure the temperature profile with a resolution of 0.1 °C and a minimum spatial resolution of 5mm. Based upon our experiments, it is possible to confirm that the temperature decreases with distance from a RF peak ablation, in accordance with RF theory. The proposed solution promises to be very useful for the surgeon because a real-time temperature feedback allows for the adaptation of RFA parameters during surgery and better delineates the area under treatment.
NASA Astrophysics Data System (ADS)
Schwing, Alan Michael
For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source for error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable for unsteady simulations and refinement and coarsening of the grid does not impact the conservatism of the underlying numerics. The effect on high-order numerical fluxes of fourth- and sixth-order are explored. Provided the criteria for refinement is appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depend on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable comparisons across a range of regimes. Unsteady and steady applications are considered in both subsonic and supersonic flows. Inviscid and viscous simulations achieve similar results at a much reduced cost when employing dynamic mesh adaptation. Several techniques for guiding adaptation are compared. Detailed analysis of statistics from the instrumented solver enable understanding of the costs associated with adaptation. Adaptive mesh refinement shows promise for the test cases presented here. It can be considerably faster than using conventional grids and provides accurate results. The procedures for adapting the grid are light-weight enough to not require significant computational time and yield significant reductions in grid size.
Adaptive control of large space structures using recursive lattice filters
NASA Technical Reports Server (NTRS)
Sundararajan, N.; Goglia, G. L.
1985-01-01
The use of recursive lattice filters for identification and adaptive control of large space structures is studied. Lattice filters were used to identify the structural dynamics model of the flexible structures. This identification model is then used for adaptive control. Before the identified model and control laws are integrated, the identified model is passed through a series of validation procedures and only when the model passes these validation procedures is control engaged. This type of validation scheme prevents instability when the overall loop is closed. Another important area of research, namely that of robust controller synthesis, was investigated using frequency domain multivariable controller synthesis methods. The method uses the Linear Quadratic Guassian/Loop Transfer Recovery (LQG/LTR) approach to ensure stability against unmodeled higher frequency modes and achieves the desired performance.
Guidance law development for aeroassisted transfer vehicles using matched asymptotic expansions
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Melamed, Nahum
1993-01-01
This report addresses and clarifies a number of issues related to the Matched Asymptotic Expansion (MAE) analysis of skip trajectories, or any class of problems that give rise to inner layers that are not associated directly with satisfying boundary conditions. The procedure for matching inner and outer solutions, and using the composite solution to satisfy boundary conditions is developed and rigorously followed to obtain a set of algebraic equations for the problem of inclination change with minimum energy loss. A detailed evaluation of the zeroth order guidance algorithm for aeroassisted orbit transfer is performed. It is shown that by exploiting the structure of the MAE solution procedure, the original problem, which requires the solution of a set of 20 implicit algebraic equations, can be reduced to a problem of 6 implicit equations in 6 unknowns. A solution that is near optimal, requires a minimum of computation, and thus can be implemented in real time and on-board the vehicle, has been obtained. Guidance law implementation entails treating the current state as a new initial state and repetitively solving the zeroth order MAE problem to obtain the feedback controls. Finally, a general procedure is developed for constructing a MAE solution up to first order, of the Hamilton-Jacobi-Bellman equation based on the method of characteristics. The development is valid for a class of perturbation problems whose solution exhibits two-time-scale behavior. A regular expansion for problems of this type is shown to be inappropriate since it is not valid over a narrow range of the independent variable. That is, it is not uniformly valid. Of particular interest here is the manner in which matching and boundary conditions are enforced when the expansion is carried out to first order. Two cases are distinguished-one where the left boundary condition coincides with, or lies to the right of, the singular region, and another one where the left boundary condition lies to the left of the singular region. A simple example is used to illustrate the procedure where the obtained solution is uniformly valid to O(Epsilon(exp 2)). The potential application of this procedure to aeroassisted plane change is also described and partially evaluated.
Adjoint Algorithm for CAD-Based Shape Optimization Using a Cartesian Method
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2004-01-01
Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (geometric parameters that control the shape). More recently, emerging adjoint applications focus on the analysis problem, where the adjoint solution is used to drive mesh adaptation, as well as to provide estimates of functional error bounds and corrections. The attractive feature of this approach is that the mesh-adaptation procedure targets a specific functional, thereby localizing the mesh refinement and reducing computational cost. Our focus is on the development of adjoint-based optimization techniques for a Cartesian method with embedded boundaries.12 In contrast t o implementations on structured and unstructured grids, Cartesian methods decouple the surface discretization from the volume mesh. This feature makes Cartesian methods well suited for the automated analysis of complex geometry problems, and consequently a promising approach to aerodynamic optimization. Melvin et developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the Euler equations. In both approaches, a boundary condition is introduced to approximate the effects of the evolving surface shape that results in accurate gradient computation. Central to automated shape optimization algorithms is the issue of geometry modeling and control. The need to optimize complex, "real-life" geometry provides a strong incentive for the use of parametric-CAD systems within the optimization procedure. In previous work, we presented an effective optimization framework that incorporates a direct-CAD interface. In this work, we enhance the capabilities of this framework with efficient gradient computations using the discrete adjoint method. We present details of the adjoint numerical implementation, which reuses the domain decomposition, multigrid, and time-marching schemes of the flow solver. Furthermore, we explain and demonstrate the use of CAD in conjunction with the Cartesian adjoint approach. The final paper will contain a number of complex geometry, industrially relevant examples with many design variables to demonstrate the effectiveness of the adjoint method on Cartesian meshes.
Knowledge Retrieval Solutions.
ERIC Educational Resources Information Center
Khan, Kamran
1998-01-01
Excalibur RetrievalWare offers true knowledge retrieval solutions. Its fundamental technologies, Adaptive Pattern Recognition Processing and Semantic Networks, have capabilities for knowledge discovery and knowledge management of full-text, structured and visual information. The software delivers a combination of accuracy, extensibility,…
Parallel adaptive discontinuous Galerkin approximation for thin layer avalanche modeling
NASA Astrophysics Data System (ADS)
Patra, A. K.; Nichita, C. C.; Bauer, A. C.; Pitman, E. B.; Bursik, M.; Sheridan, M. F.
2006-08-01
This paper describes the development of highly accurate adaptive discontinuous Galerkin schemes for the solution of the equations arising from a thin layer type model of debris flows. Such flows have wide applicability in the analysis of avalanches induced by many natural calamities, e.g. volcanoes, earthquakes, etc. These schemes are coupled with special parallel solution methodologies to produce a simulation tool capable of very high-order numerical accuracy. The methodology successfully replicates cold rock avalanches at Mount Rainier, Washington and hot volcanic particulate flows at Colima Volcano, Mexico.
Video Conferences through the Internet: How to Survive in a Hostile Environment
Fernández, Carlos; Fernández-Navajas, Julián; Sequeira, Luis; Casadesus, Luis
2014-01-01
This paper analyzes and compares two different video conference solutions, widely used in corporate and home environments, with a special focus on the mechanisms used for adapting the traffic to the network status. The results show how these mechanisms are able to provide a good quality in the hostile environment of the public Internet, a best effort network without delay or delivery guarantees. Both solutions are evaluated in a laboratory, where different network impairments (bandwidth limit, delay, and packet loss) are set, in both the uplink and the downlink, and the reaction of the applications is measured. The tests show how these solutions modify their packet size and interpacket time, in order to increase or reduce the sent data. One of the solutions also uses a scalable video codec, able to adapt the traffic to the network status and to the end devices. PMID:24605066
Auto-adaptive finite element meshes
NASA Technical Reports Server (NTRS)
Richter, Roland; Leyland, Penelope
1995-01-01
Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which hence enable structural optimization as well as geometrical optimization. The methods described are applied for a number of the ICASE test cases are particularly interesting for unsteady flow simulations.
Adaptive Standard Operating Procedures for Complex Disasters
2017-03-01
Developments in Business Simulation and Experiential Learning 33 (2014). 23 Patrick Lagadec and Benjamin Topper, “How Crises Model the Modern World...field of crisis response . Therefore, this experiment supports the argument for implementing the adaptive design proposals. The adaptive SOP enhancement...Kalay. “An Event- Based Model to Simulate Human Behaviour in Built Environments.” Proceedings of the 30th eCAADe Conference 1 (2012). Snowden
"Low-field" intraoperative MRI: a new scenario, a new adaptation.
Iturri-Clavero, F; Galbarriatu-Gutierrez, L; Gonzalez-Uriarte, A; Tamayo-Medel, G; de Orte, K; Martinez-Ruiz, A; Castellon-Larios, K; Bergese, S D
2016-11-01
To describe the adaptation of Cruces University Hospital to the use of intraoperative magnetic resonance imaging (ioMRI), and how the acquisition and use of this technology would impact the day-to-day running of the neurosurgical suite. With the approval of the ethics committee, an observational, prospective study was performed from June 2012 to April 2014, which included 109 neurosurgical procedures with the assistance of ioMRI. These were performed using the Polestar N-30 system (PSN30; Medtronic Navigation, Louisville, CO), which was integrated into the operating room. A total of 159 procedures were included: 109 cranial surgeries assisted with ioMRI and 50 control cases (no ioMRI use). There were no statistical significant differences when anaesthetic time (p=0.587) and surgical time (p=0.792) were compared; however, an important difference was shown in duration of patient positioning (p<0.0009) and total duration of the procedure (p<0.0009) between both groups. The introduction of ioMRI is necessary for most neurosurgical suites; however, a few things need to be taken into consideration when adapting to it. Increase procedure time, the use of specific MRI-safe devices, as well as a checklist for each patient to minimise risks, should be taken into consideration. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Zhang, Yi; Xu, Yue; Ma, Kun
2016-08-01
In this paper, the variable-coefficient Kadomtsev-Petviashvili (vcKP) equation with self-consistent sources is presented by two different methods, one is the source generation procedure, the other is the Pfaffianization procedure, and the solutions for the two new coupled systems are given through Grammian-type Pfaffian determinants.
The purpose of this SOP is to describe procedures for preparing calibration curve solutions used for gas chromatography/mass spectrometry (GC/MS) analysis of chlorpyrifos, diazinon, malathion, DDT, DDE, DDD, a-chlordane, and g-chlordane in dust, soil, air, and handwipe sample ext...
Dorogova, V B; Kucheriavykh, E I; Sokolova, T V
1989-01-01
Photometric procedure of butyl "aeroflot" identification in the work zone air and in wash-out from workers' integument was developed, The procedure was based on the formation of yellow- and orange-dyed copper dibutyl dithiophosphate under butyl "aeroflot" interaction with copper sulphate with the subsequent photometry of dyed solutions for the wavelength of 420 nm in the 10-mm cell. Buffer solution with pH-9.2 was used as an absorbing solution for the workplace air sampling and integument wash-out.
Abubakar, Amina; Kalu, Raphael Birya; Katana, Khamis; Kabunda, Beatrice; Hassan, Amin S.; Newton, Charles R.; Van de Vijver, Fons
2016-01-01
Objective We set out to adapt the Beck Depression Inventory (BDI)-II in Kenya and examine its factorial structure. Methods In the first phase we carried out in-depth interviews involving 29 adult members of the community to elicit their understanding of depression and identify aspects of the BDI-II that required adaptation. In the second phase, a modified version of BDI-II was administered to 221 adults randomly selected from the community to allow for the evaluation of its psychometric properties. In the third phase of the study we evaluated the discriminative validity of BDI-11 by comparing a randomly chosen community sample (n = 29) with caregivers of adolescents affected by HIV (n = 77). Results A considerable overlap between the BDI symptoms and those generated in the interviews was observed. Relevant idioms and symptoms such as ‘thinking too much’ and ‘Kuchoka moyo (having a tired heart)’ were identified. The administration of the BDI had to be modified to make it suitable for the low literacy levels of our participants. Fit indices for several models (one factorial, two-factor model and a three factor model) were all within acceptable range. Evidence indicated that while multidimensional models could be fitted, the strong correlations between the factors implied that a single factor model may be the best suited solution (alpha [0.89], and a significant correlation with locally identified items [r = 0.51]) confirmed the good psychometric properties of the adapted BDI-II. No evidence was found to support the hypothesis that somatization was more prevalent. Lastly, caregivers of HIV affected adolescents had significantly higher scores compared to adults randomly selected from the community F(1, 121) = 23.31, p < .001 indicating the discriminative validity of the adapted BDI = II. Conclusions With an adapted administration procedure, the BDI-II provides an adequate measure of depressive symptoms which can be used alongside other measures for proper diagnosis in a low literacy population. PMID:27258530
Global Properties of Fully Convective Accretion Disks from Local Simulations
NASA Astrophysics Data System (ADS)
Bodo, G.; Cattaneo, F.; Mignone, A.; Ponzo, F.; Rossi, P.
2015-08-01
We present an approach to deriving global properties of accretion disks from the knowledge of local solutions derived from numerical simulations based on the shearing box approximation. The approach consists of a two-step procedure. First, a local solution valid for all values of the disk height is constructed by piecing together an interior solution obtained numerically with an analytical exterior radiative solution. The matching is obtained by assuming hydrostatic balance and radiative equilibrium. Although in principle the procedure can be carried out in general, it simplifies considerably when the interior solution is fully convective. In these cases, the construction is analogous to the derivation of the Hayashi tracks for protostars. The second step consists of piecing together the local solutions at different radii to obtain a global solution. Here we use the symmetry of the solutions with respect to the defining dimensionless numbers—in a way similar to the use of homology relations in stellar structure theory—to obtain the scaling properties of the various disk quantities with radius.