Interactive solution-adaptive grid generation procedure
NASA Technical Reports Server (NTRS)
Henderson, Todd L.; Choo, Yung K.; Lee, Ki D.
1992-01-01
TURBO-AD is an interactive solution-adaptive grid generation program under development. The program combines an interactive algebraic grid generation technique and a solution-adaptive grid generation technique into a single interactive package. The control point form uses a sparse collection of control points to algebraically generate a field grid. This technique provides local grid control capability and is well suited to interactive work because of its speed and efficiency. A mapping from the physical domain to a parametric domain is used to alleviate difficulties encountered near outwardly concave boundaries in the control point technique. All grid modifications are therefore performed on the unit square in the parametric domain, and the adapted grid is then mapped back to the physical domain. Grid adaptation is achieved by adapting the control points to a numerical solution in the parametric domain, using control sources obtained from the flow properties; a new grid is then generated from the adapted control net. This process is efficient because the number of control points is much smaller than the number of grid points and the grid generation itself is an efficient algebraic process. TURBO-AD provides the user with both local and global controls.
Kim, D.; Ghanem, R.
1994-12-31
A multigrid solution technique for a material-nonlinear problem, solved in a visual programming environment using the finite element method, is discussed. The nonlinear equation of equilibrium is linearized to incremental form using the Newton-Raphson technique, and a multigrid solution technique is then used to solve the linear equations at each Newton-Raphson step. In the process, adaptive mesh refinement, based on the bisection of a pair of triangles, is used to form the grid hierarchy for multigrid iteration. The solution process is implemented in a visual programming environment with distributed computing capability, which enables a more intuitive understanding of the solution process and more effective use of resources.
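The incremental Newton-Raphson scheme described above can be sketched for a single degree of freedom; the softening material law and the numbers below are hypothetical, for illustration only:

```python
def newton_raphson_increments(f_int, k_tan, total_load,
                              n_increments=10, tol=1e-10):
    """Apply the load in increments; at each increment iterate
    u <- u + (f_ext - f_int(u)) / K_t(u) until equilibrium."""
    u = 0.0
    for step in range(1, n_increments + 1):
        f_ext = total_load * step / n_increments   # current load level
        for _ in range(50):                        # Newton-Raphson loop
            residual = f_ext - f_int(u)
            if abs(residual) < tol:
                break
            u += residual / k_tan(u)
    return u

# Hypothetical softening law: f_int(u) = k0 * u / (1 + u)
k0 = 100.0
u_final = newton_raphson_increments(
    f_int=lambda u: k0 * u / (1.0 + u),
    k_tan=lambda u: k0 / (1.0 + u) ** 2,
    total_load=50.0)
# equilibrium: 100 * u / (1 + u) = 50  =>  u = 1
```

In the paper's setting u is a vector of nodal unknowns, and each linearized step is itself solved by multigrid on the bisection-refined grid hierarchy rather than by a direct division.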
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1983-01-01
This paper develops a new solution strategy that can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time-stepping algorithm that enables the solution of creep-initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Because of the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation gives a thorough overview of current solution schemes and their shortcomings, develops the constrained time-stepping algorithms, and illustrates the results of several numerical experiments that benchmark the new procedure.
Interactive solution-adaptive grid generation
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Henderson, Todd L.
1992-01-01
TURBO-AD is an interactive solution-adaptive grid generation program under development. The program combines an interactive algebraic grid generation technique and a solution-adaptive grid generation technique into a single interactive package. The control point form uses a sparse collection of control points to algebraically generate a field grid. This technique provides local grid control capability and is well suited to interactive work because of its speed and efficiency. A mapping from the physical domain to a parametric domain is used to alleviate difficulties that had been encountered near outwardly concave boundaries in the control point technique. All grid modifications are therefore performed on a unit square in the parametric domain, and the adapted grid in the parametric domain is then mapped back to the physical domain. Grid adaptation is achieved by first adapting the control points to a numerical solution in the parametric domain, using control sources obtained from flow properties; a new grid is then generated from the adapted control net. This process is efficient because the number of control points is much smaller than the number of grid points and the generation of a new grid from the adapted control net is an efficient algebraic process. TURBO-AD provides the user with both local and global grid controls.
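The control point idea, a dense field grid generated algebraically from a sparse control net over the unit parametric square, can be sketched with plain bilinear blending. The actual control point form uses richer blending functions, so this is only an illustrative stand-in:

```python
import numpy as np

def grid_from_control_net(ctrl_x, ctrl_y, ni, nj):
    """Generate an ni-by-nj field grid algebraically from a sparse
    control net by bilinear interpolation over the unit square."""
    mi, mj = ctrl_x.shape
    s = np.linspace(0.0, 1.0, ni)          # parametric coordinates
    t = np.linspace(0.0, 1.0, nj)
    cs, ct = s * (mi - 1), t * (mj - 1)    # position within the control net
    i0 = np.minimum(cs.astype(int), mi - 2)
    j0 = np.minimum(ct.astype(int), mj - 2)
    a = (cs - i0)[:, None]                 # local cell coordinates
    b = (ct - j0)[None, :]

    def blend(c):
        return ((1 - a) * (1 - b) * c[i0][:, j0]
                + a * (1 - b) * c[i0 + 1][:, j0]
                + (1 - a) * b * c[i0][:, j0 + 1]
                + a * b * c[i0 + 1][:, j0 + 1])

    return blend(ctrl_x), blend(ctrl_y)

# 3x3 control net on the unit square with the centre point pulled right;
# moving one control point reshapes the whole neighbourhood of the grid
cx = np.array([[0.0, 0.0, 0.0], [0.5, 0.7, 0.5], [1.0, 1.0, 1.0]])
cy = np.array([[0.0, 0.5, 1.0], [0.0, 0.5, 1.0], [0.0, 0.5, 1.0]])
gx, gy = grid_from_control_net(cx, cy, 11, 11)
```

Because only the 3x3 control net is edited during adaptation while the 11x11 (or much larger) grid is regenerated algebraically, the cost of each adaptation cycle stays low, which is the efficiency argument made in the abstract.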
Transonic airfoil calculations using solution-adaptive grids
NASA Technical Reports Server (NTRS)
Holst, T. L.; Brown, D.
1981-01-01
A new algorithm for generating solution-adaptive grids (SAG) about airfoil configurations embedded in transonic flow is presented. The present SAG approach uses only the airfoil surface solution to recluster grid points on the airfoil surface, i.e., the reclustering problem is one dimension smaller than the flow-field calculation problem. Special controls automatically built into the elliptic grid generation procedure are then used to obtain grids with suitable interior behavior. This concept of redistributing grid points greatly simplifies the idea of solution-adaptive grids. Numerical results indicate significant improvements in accuracy for SAG grids relative to standard grids using the same number of points.
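The surface-reclustering step, moving points along the airfoil surface so they concentrate where the surface solution varies rapidly, can be sketched in one dimension by equidistributing a gradient-based weight function. The weight and its constant are assumptions for illustration:

```python
import numpy as np

def recluster(x, u, n_new, alpha=1.0):
    """Redistribute n_new points on [x[0], x[-1]] so that the weight
    w = 1 + alpha * |du/dx| is equidistributed between neighbours;
    points then cluster where the surface solution u varies rapidly."""
    w = 1.0 + alpha * np.abs(np.gradient(u, x))
    # cumulative monitor integral, via the trapezoidal rule
    m = np.concatenate(([0.0],
                        np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    targets = np.linspace(0.0, m[-1], n_new)       # equal monitor slices
    return np.interp(targets, m, x)                # invert m(x)

x = np.linspace(0.0, 1.0, 101)
u = np.tanh(20.0 * (x - 0.5))      # sharp variation near x = 0.5
xa = recluster(x, u, 41, alpha=5.0)
# spacing near x = 0.5 comes out much finer than near the endpoints
```

In the SAG approach this one-dimensional redistribution is applied only on the airfoil surface; the elliptic generator then propagates the new surface spacing into the interior, which is why the reclustering problem is one dimension smaller than the flow calculation.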
Combined LAURA-UPS hypersonic solution procedure
NASA Technical Reports Server (NTRS)
Wood, William A.; Thompson, Richard A.
1993-01-01
A combined solution procedure for hypersonic flowfields around blunted slender bodies was implemented using a thin-layer Navier-Stokes code (LAURA) in the nose region and a parabolized Navier-Stokes code (UPS) on the afterbody region. Perfect gas, equilibrium air, and nonequilibrium air solutions for sharp cones and a sharp wedge were obtained using UPS alone as a preliminary step. Surface heating rates are presented for two slender bodies with blunted noses, with LAURA providing the starting solution to UPS downstream of the sonic line. These are an 8 deg sphere-cone in Mach 5, perfect gas, laminar flow at 0 and 4 deg angles of attack, and the Reentry F body at Mach 20, 80,000 ft equilibrium gas conditions at 0 and 0.14 deg angles of attack. The results indicate that this procedure is a timely and accurate method for obtaining aerothermodynamic predictions for slender hypersonic vehicles.
Adaptive EAGLE dynamic solution adaptation and grid quality enhancement
NASA Technical Reports Server (NTRS)
Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.
1992-01-01
In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.
Multigrid solution of internal flows using unstructured solution adaptive meshes
NASA Technical Reports Server (NTRS)
Smith, Wayne A.; Blake, Kenneth R.
1992-01-01
This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will primarily be applied, but not necessarily limited, to high speed internal flows in turbomachinery.
Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.
1993-01-01
Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high-gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description of the enrichment and coarsening procedures is presented, and comparisons with experimental data for an ONERA M6 wing and with an exact solution for a shock-tube problem are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results obtained using the spatial adaptation procedures are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.
Numerical procedure for planetary wave solution
Choi, Woo Kap; Wuebbles, D.J.
1993-09-01
The newly developed LLNL two-dimensional chemical-radiative-transport model requires knowledge of the EP flux divergence as an input momentum forcing. The major contributions to this forcing term come from the synoptic wave in the troposphere, the planetary wave in the stratosphere and mesosphere, and the gravity wave in the upper mesosphere. The major source of zonal momentum forcing in the middle atmosphere is nonlinear planetary wave breaking. This planetary wave breaking also plays a significant role in mixing the chemical tracers. Garcia suggested a way of parameterizing the planetary wave breaking by using linear damping of the primary wave. In this note we describe the procedure for obtaining the wave solution for the parameterization.
Adaptive Distributed Environment for Procedure Training (ADEPT)
NASA Technical Reports Server (NTRS)
Domeshek, Eric; Ong, James; Mohammed, John
2013-01-01
ADEPT (Adaptive Distributed Environment for Procedure Training) is designed to provide more effective, flexible, and portable training for NASA systems controllers. When creating a training scenario, an exercise author can specify a representative rationale structure using the graphical user interface, annotating the results with instructional text where needed. The author's structure may distinguish between essential and optional parts of the rationale, and may also include "red herrings" - hypotheses that are essential to consider until evidence and reasoning allow them to be ruled out. The system is built from pre-existing components, including Stottler Henke's SimVentive instructional simulation authoring tool and runtime. To that, a capability was added to author and exploit explicit control-decision rationale representations. ADEPT uses SimVentive's Scalable Vector Graphics (SVG)-based interactive graphic display capability as the basis of the tool for quickly noting aspects of decision rationale in graph form. The ADEPT prototype is built in Java and will run on any computer using Windows, MacOS, or Linux. No special peripheral equipment is required. The software enables a style of student/tutor interaction, focused on the reasoning behind systems control behavior, that better mimics proven Socratic human tutoring behaviors for highly cognitive skills. It supports fast, easy, and convenient authoring of such tutoring behaviors, allowing specification of detailed scenario-specific, but context-sensitive, high-quality tutor hints and feedback. The system places relatively light data-entry demands on the student to enable its rationale-centered discussions, and provides a support mechanism for fostering coherence in the student/tutor dialog by including focusing, sequencing, and utterance-tuning mechanisms intended to better fit tutor hints and feedback into the ongoing context.
Solution-adaptive program SADAP3D
NASA Technical Reports Server (NTRS)
Djomehri, M. J.; Deiwert, George S.
1991-01-01
A generic solution-adaptive grid program based on an error-equidistribution concept has been developed for use in complex multidimensional flows. The capability of a generic, user-friendly, multidimensional algorithm is demonstrated using several complex 3D flows in high-speed regimes. A simple scheme enforces the continuity of grid spacing along each currently adapted grid line as the computation proceeds. The calculation of normal vectors at each point of a grid plane is enhanced. Grid movement is automatically controlled by reinstating grid-spacing continuity locally in the mainstream of the unidirectional computation. It is concluded that the scheme can be readily coupled with many flow solvers to enhance the accuracy and efficiency of solutions.
Effects of adaptive refinement on the inverse EEG solution
NASA Astrophysics Data System (ADS)
Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.
1995-10-01
One of the fundamental problems in electroencephalography can be characterized as an inverse problem: given a subset of electrostatic potentials measured on the surface of the scalp, together with the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed: the solution does not depend continuously on the data, so small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows significantly better estimates of the electric and potential fields within the brain through an inverse procedure. To test these methods, we constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2,724 nodes and 12,812 elements to 5,224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
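The instability described, where small measurement errors produce unbounded solution errors, is the classic signature of ill-posedness. A generic Tikhonov-regularization sketch on a synthetic, nearly rank-deficient forward operator (not the paper's finite element procedure) shows the effect:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||A x - b||^2 + lam * ||x||^2, the standard Tikhonov
    stabilization for ill-posed least-squares problems."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
# Synthetic forward operator with two near-zero singular values,
# mimicking the unbounded error amplification described above
U, _ = np.linalg.qr(rng.standard_normal((20, 20)))
V, _ = np.linalg.qr(rng.standard_normal((5, 5)))
s = np.array([1.0, 0.5, 0.1, 1e-6, 1e-8])
A = (U[:, :5] * s) @ V.T
x_true = np.ones(5)
b = A @ x_true + 1e-4 * rng.standard_normal(20)   # noisy "measurements"

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]    # unregularized
x_reg = tikhonov_solve(A, b, lam=1e-4)            # regularized
```

The unregularized solution divides the noise by the tiny singular values and is dominated by error; the regularized solution damps those directions and stays close to the true sources. Adaptive refinement attacks the complementary error source, the spatial discretization of the forward problem.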
Symmetry-adapted Wannier functions in the maximal localization procedure
NASA Astrophysics Data System (ADS)
Sakuma, R.
2013-06-01
A procedure to construct symmetry-adapted Wannier functions in the framework of the maximally localized Wannier function approach [Marzari and Vanderbilt, Phys. Rev. B 56, 12847 (1997); Souza, Marzari, and Vanderbilt, Phys. Rev. B 65, 035109 (2001)] is presented. In this scheme, the minimization of the spread functional of the Wannier functions is performed with constraints that are derived from symmetry properties of the specified set of Wannier functions and the Bloch functions used to construct them; one can therefore obtain a solution that does not necessarily yield the global minimum of the spread functional. As a test of this approach, results for atom-centered Wannier functions in GaAs and Cu are presented.
Self-adaptive closed constrained solution algorithms for nonlinear conduction
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1982-01-01
Self-adaptive solution algorithms are developed for nonlinear heat conduction problems encountered in analyzing materials for use in high-temperature or cryogenic conditions. The nonlinear effects occur due to convection and radiation effects, as well as temperature-dependent properties of the materials. Incremental successive substitution (ISS) and Newton-Raphson (NR) procedures are treated as extrapolation schemes whose solution projections are bounded by a hyperplane with an externally applied thermal load vector arising from internal heat generation and boundary conditions. Closed constraints are formulated which improve the efficiency and stability of the procedures by employing closed ellipsoidal surfaces to control the size of successive iterations. Governing equations are defined for nonlinear finite element models, and results of the new method are compared with those of the ISS and NR schemes for epoxy, PVC, and CuGe.
Adaptive resolution simulation of salt solutions
NASA Astrophysics Data System (ADS)
Bevc, Staš; Junghans, Christoph; Kremer, Kurt; Praprotnik, Matej
2013-10-01
We present an adaptive resolution simulation of aqueous salt (NaCl) solutions at ambient conditions using the adaptive resolution scheme. Our multiscale approach concurrently couples atomistic and coarse-grained models of aqueous NaCl, where water molecules and ions change their resolution while moving from one resolution domain to the other. We employ the standard extended simple point charge (SPC/E) and simple point charge (SPC) water models in combination with the AMBER and GROMOS force fields for ion interactions in the atomistic domain. Electrostatics in our model are described by the generalized reaction field method. The effective water-water and water-ion interactions in the coarse-grained model are derived using a structure-based coarse-graining approach, while the Coulomb interactions between ions are appropriately screened. To ensure an even distribution of water molecules and ions across the simulation box we employ thermodynamic forces. We demonstrate that the equilibrium structural properties, e.g., the radial distribution functions and density distributions of all the species, and the dynamical properties are correctly reproduced by our adaptive resolution method. Our multiscale approach, which is general and can be used with any classical nonpolarizable force field and/or type of ion, will significantly speed up biomolecular simulations involving aqueous salt.
Psychometric Function Reconstruction from Adaptive Tracking Procedures
1988-11-29
Naval Submarine Medical Research Laboratory, NSMRL Report 1095, 29 November 1988. ...performance. This paper describes a series of computer simulations undertaken to assess the validity of generating psychometric functions from trials in an... would expect that ill-behaved procedures would have been dropped from scientists' repertoires. The simulations reported here must be viewed relatively...
Procedure for Adapting Direct Simulation Monte Carlo Meshes
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.
1992-01-01
A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.
An adaptive embedded mesh procedure for leading-edge vortex flows
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.
1989-01-01
A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
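The two ingredients of the adaptation procedure, a refinement parameter built from the difference between locally coarse- and fine-grid solutions and a threshold taken where the flagged-cell count curve bends most sharply, can be sketched in one dimension. The shock-like profile and the counts involved are illustrative assumptions:

```python
import numpy as np

def refinement_parameter(u_fine):
    """Mesh-convergence measure per coarse cell: the deviation of the
    fine-grid solution from its coarse-grid average, where each pair
    of fine cells forms one coarse cell."""
    pairs = u_fine.reshape(-1, 2)
    return np.abs(pairs - pairs.mean(axis=1, keepdims=True)).max(axis=1)

def flag_cells(param, n_thresholds=50):
    """Flag cells using the threshold where the curve of flagged-cell
    count versus threshold bends most sharply (discrete curvature)."""
    ts = np.linspace(param.min(), param.max(), n_thresholds)
    counts = np.array([(param > t).sum() for t in ts])
    curvature = np.abs(np.diff(counts, 2))       # second differences
    t_star = ts[1 + np.argmax(curvature)]
    return param > t_star

x = np.linspace(0.0, 1.0, 200)
u = np.tanh(50.0 * (x - 0.6))      # shock-like feature near x = 0.6
flags = flag_cells(refinement_parameter(u))
# only the cluster of cells straddling the steep gradient gets flagged
```

The curvature-based threshold avoids a hand-tuned tolerance: where the count curve bends sharply, the parameter separates cleanly into "converged" and "not converged" populations, which is the behavior the abstract exploits for vortex and shock capture.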
Automatic procedure for generating symmetry adapted wavefunctions.
Johansson, Marcus; Veryazov, Valera
2017-01-01
Automatic detection of point groups, as well as symmetrisation of molecular geometries and wavefunctions, are useful tools in computational quantum chemistry. Algorithms for developing these tools, together with an implementation, are presented. The symmetry detection algorithm is a clustering algorithm for symmetry-invariant properties, combined with logical deduction of possible symmetry elements using the geometry of sets of symmetrically equivalent atoms. An algorithm for determining the symmetry-adapted linear combinations (SALCs) of atomic orbitals is also presented. The SALCs are constructed with the use of projection operators for the irreducible representations, as well as subgroups for determining splitting fields for a canonical basis. The character tables for the point groups are auto-generated, and the algorithm is described. Symmetrisation of molecules uses a projection into the totally symmetric space, whereas for wavefunctions projection, as well as partner-function determination and averaging, is used. The software has been released as a stand-alone, open-source library under the MIT license and integrated into both computational and molecular modelling software.
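The projection-operator construction of SALCs can be illustrated on the smallest nontrivial case: two symmetry-equivalent s orbitals exchanged by a mirror plane. The two-element group and basis below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

# Two equivalent s orbitals exchanged by a mirror plane: the group is
# {E, sigma}; the representation matrices permute the basis, and the
# two irreps have characters (1, 1) (symmetric) and (1, -1) (antisymmetric).
ops = [np.eye(2),
       np.array([[0.0, 1.0], [1.0, 0.0]])]   # sigma swaps the orbitals

def salc_projector(characters):
    """Projection operator P = (d/|G|) * sum_R chi(R) D(R), with d = 1."""
    return sum(chi * D for chi, D in zip(characters, ops)) / len(ops)

def salc(characters):
    """Project a trial orbital and normalize to obtain the SALC."""
    v = salc_projector(characters) @ np.array([1.0, 0.0])
    return v / np.linalg.norm(v)

salc_sym = salc([1.0, 1.0])      # (phi1 + phi2) / sqrt(2)
salc_anti = salc([1.0, -1.0])    # (phi1 - phi2) / sqrt(2)
```

For degenerate irreps the paper's additional machinery (splitting fields, partner functions) is needed; the one-dimensional case above needs only the projector itself.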
A Procedural Solution to Model Roman Masonry Structures
NASA Astrophysics Data System (ADS)
Cappellini, V.; Saleri, R.; Stefani, C.; Nony, N.; De Luca, L.
2013-07-01
The paper describes a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules underlying the construction methods used in Roman times. We have conceived a tool for the 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol involves several steps. Firstly, we focused on a classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we chose an automatic, accurate, flexible, and open-source photogrammetric pipeline named Pastis Apero Micmac (PAM), developed by IGN (Paris). We employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in a parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with the open-source programming language Processing, which is useful for visual creations, whether animated or static, 2D or 3D, and interactive. Using this language, a Java environment has been developed. Even though procedural modelling yields an accuracy level inferior to that of manual modelling (brick by brick), the method can be useful when taking into account the static evaluation of buildings (which requires quantitative data) and metric measures for restoration purposes.
Anisotropic Solution Adaptive Unstructured Grid Generation Using AFLR
NASA Technical Reports Server (NTRS)
Marcum, David L.
2007-01-01
An existing volume grid generation procedure, AFLR3, was successfully modified to generate anisotropic tetrahedral elements using a directional metric transformation defined at source nodes. The procedure can be coupled with a solver and an error estimator as part of an overall anisotropic solution-adaptation methodology. It is suitable for use with an error estimator based on an adjoint, optimization, sensitivity-derivative, or related approach. This offers many advantages, including more efficient point placement along with robust and efficient error estimation. It also serves as a framework for true grid optimization, wherein error estimation and computational resources can be used as cost functions to determine the optimal point distribution. Within AFLR3 the metric transformation is implemented using a set of transformation vectors and associated aspect ratios. The modified overall procedure is presented along with details of the anisotropic transformation implementation. Multiple two- and three-dimensional examples are also presented that demonstrate the capability of the modified AFLR procedure to generate anisotropic elements using a set of source nodes with anisotropic transformation metrics. The example cases presented use moderate levels of anisotropy and result in usable element quality. Future testing with various flow solvers and methods for obtaining transformation-metric information is needed to determine practical limits and evaluate the efficacy of the overall approach.
NIF Anti-Reflective Coating Solutions: Preparation, Procedures and Specifications
Suratwala, T; Carman, L; Thomas, I
2003-07-01
The following document contains a detailed description of the preparation procedures for the antireflective coating solutions used for NIF optics. This memo includes preparation procedures for the coating solutions (sections 2.0-4.0), specifications and vendor information for the raw materials and equipment used (section 5.0), and QA specifications (section 6.0) and procedures (section 7.0) to determine the quality and repeatability of all the coating solutions. There are five different coating solutions that will be used to coat NIF optics. These solutions are listed below: (1) colloidal silica (3%) in ethanol; (2) colloidal silica (2%) in sec-butanol; (3) colloidal silica (9%) in sec-butanol (deammoniated); (4) HMDS-treated silica (10%) in decane; (5) GR650 (3.3%) in ethanol/sec-butanol. The names listed above are to be considered the official names for the solutions and are used throughout the remainder of this document. Table 1 gives a summary of all the optics to be coated, including: (1) the surface to be coated; (2) the type of solution to be used; (3) the coating method (meniscus, dip, or spin coating); (4) the type of coating (broadband, 1ω, 2ω, 3ω); (5) the number of optics to be coated; and (6) the type of post-processing required (if any). Table 2 gives a summary of the batch compositions and measured properties of all five solutions.
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than is possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
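The Richardson-extrapolation error estimate used as a refinement indicator can be sketched on a model quadrature problem; the example is illustrative and is not the paper's MacCormack setup:

```python
import numpy as np

def richardson_error(u_h, u_h2, order=2):
    """Estimate the discretization error of the fine solution u_h2 from
    a coarse solution u_h on twice the spacing, assuming the leading
    error term scales as h**order:
        u_exact - u_h2 ~ (u_h2 - u_h) / (2**order - 1)."""
    return (u_h2 - u_h) / (2.0 ** order - 1.0)

def trap(n):
    """Composite trapezoidal rule for sin(x) on [0, pi] (exact value 2)."""
    x = np.linspace(0.0, np.pi, n + 1)
    y = np.sin(x)
    h = np.pi / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

u_h, u_h2 = trap(16), trap(32)       # solutions on spacing h and h/2
err_est = richardson_error(u_h, u_h2)
err_true = 2.0 - u_h2                # known exact answer, for comparison
```

Because the estimate needs only two numerical solutions and no exact answer, it can drive refinement anywhere the estimated error exceeds the prescribed tolerance, exactly the role it plays in the adaptive procedure above.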
A novel hyperbolic grid generation procedure with inherent adaptive dissipation
Tai, C.H.; Yin, S.L.; Soong, C.Y.
1995-01-01
This paper reports a novel hyperbolic grid-generation procedure with an inherent adaptive dissipation (HGAD), which is capable of alleviating oscillation and overlapping of grid lines. In the present work, upwind differencing is applied to discretize the hyperbolic system and thereby to develop the adaptive dissipation coefficient. Complex configurations featuring geometric discontinuity and exceptional concavity and convexity are used as test cases for comparison of the present HGAD procedure with conventional hyperbolic and elliptic ones. The results reveal that the HGAD method is superior in the orthogonality and smoothness of the grid system. In addition, the computational efficiency of the flow solver may be improved by using the present HGAD procedure. 15 refs., 8 figs.
Full Gradient Solution to Adaptive Hybrid Control
NASA Technical Reports Server (NTRS)
Bean, Jacob; Schiller, Noah H.; Fuller, Chris
2016-01-01
This paper focuses on the adaptation mechanisms in adaptive hybrid controllers. Most adaptive hybrid controllers update two filters individually according to the filtered-reference least mean squares (FxLMS) algorithm. Because this algorithm was derived for feedforward control, it does not take into account the presence of a feedback loop in the gradient calculation. This paper provides a derivation of the proper weight vector gradient for hybrid (or feedback) controllers that takes into account the presence of feedback. In this formulation, a single weight vector is updated rather than two individually. An internal model structure is assumed for the feedback part of the controller. The full gradient is equivalent to that used in the standard FxLMS algorithm with the addition of a recursive term that is a function of the modeling error. Some simulations are provided to highlight the advantages of using the full gradient in the weight vector update rather than the approximation.
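For contrast with the full-gradient derivation, the standard FxLMS update that the paper generalizes can be sketched as follows. The paths, step size, and filter length are hypothetical, and the recursive modeling-error term discussed above is deliberately absent:

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([0.0, 0.5, 0.3])        # primary path (hypothetical)
S = np.array([0.8, 0.2])             # secondary path, assumed known exactly
n_taps, mu, N = 8, 0.05, 4000
x = rng.standard_normal(N)           # reference signal
d = np.convolve(x, P)[:N]            # disturbance at the error sensor
xf = np.convolve(x, S)[:N]           # reference filtered through S

w = np.zeros(n_taps)                 # single adaptive weight vector
xbuf = np.zeros(n_taps)              # recent reference samples
fbuf = np.zeros(n_taps)              # recent filtered-reference samples
ybuf = np.zeros(len(S))              # recent controller outputs
e_hist = np.empty(N)
for k in range(N):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[k]
    fbuf = np.roll(fbuf, 1); fbuf[0] = xf[k]
    y = w @ xbuf                     # controller output
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[k] - S @ ybuf              # residual after the secondary path
    w += mu * e * fbuf               # filtered-reference LMS update
    e_hist[k] = e
```

In this purely feedforward setting the residual decays toward zero. The paper's point is that once a feedback loop (internal model) is present, this gradient is only approximate, and the full gradient adds a recursive term driven by the secondary-path modeling error.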
Full Gradient Solution to Adaptive Hybrid Control
NASA Technical Reports Server (NTRS)
Bean, Jacob; Schiller, Noah H.; Fuller, Chris
2017-01-01
This paper focuses on the adaptation mechanisms in adaptive hybrid controllers. Most adaptive hybrid controllers update two filters individually according to the filtered reference least mean squares (FxLMS) algorithm. Because this algorithm was derived for feedforward control, it does not take into account the presence of a feedback loop in the gradient calculation. This paper provides a derivation of the proper weight vector gradient for hybrid (or feedback) controllers that takes into account the presence of feedback. In this formulation, a single weight vector is updated rather than two individually. An internal model structure is assumed for the feedback part of the controller. The full gradient is equivalent to that used in the standard FxLMS algorithm with the addition of a recursive term that is a function of the modeling error. Some simulations are provided to highlight the advantages of using the full gradient in the weight vector update rather than the approximation.
An Innovative Adaptive Pushover Procedure Based on Storey Shear
Shakeri, Kazem; Shayanfar, Mohsen A.
2008-07-08
Since conventional pushover analyses are unable to consider the effect of the higher modes and the progressive variation in dynamic properties, recent years have witnessed the development of advanced adaptive pushover methods. In these methods, however, using quadratic combination rules to combine the modal forces yields a positive load pattern at all storeys, so the sign reversals of the higher modes are lost; consequently these methods do not have a major advantage over their non-adaptive counterparts. Herein an innovative adaptive pushover method based on storey shear is proposed which can take the sign reversals in higher modes into account. At each storey, the applied load is derived from the storey shear profile; consequently, the sign of the applied loads in consecutive steps can change. The accuracy of the proposed procedure is examined by applying it to a 20-storey steel building, and it yields a good estimate of the peak response in the inelastic phase.
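The shear-to-load relation at the heart of the method can be illustrated with a small helper (a sketch of the statics involved, not the authors' implementation):

```python
import numpy as np

def loads_from_storey_shear(storey_shear):
    """Recover the applied lateral load at each floor from a storey
    shear profile: the load at floor i is the shear in the storey below
    minus the shear in the storey above, so sign reversals in a modal
    shear profile carry through into the load pattern."""
    v = np.asarray(storey_shear, dtype=float)
    return v - np.append(v[1:], 0.0)   # shear above the roof is zero
```

Because loads are differences of shears, a shear profile that changes sign (as higher modes do) produces loads of both signs, which quadratic modal combinations cannot.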
Solution procedure of residue harmonic balance method and its applications
NASA Astrophysics Data System (ADS)
Guo, ZhongJin; Leung, A. Y. T.; Ma, XiaoYan
2014-08-01
This paper presents a simple and rigorous solution procedure of residue harmonic balance for predicting accurate approximations of certain autonomous ordinary differential systems. In this solution procedure, no small parameter is assumed. At each step, the residue of the harmonic balance equation is separated into two parts: the first part has the same number of Fourier terms as the present order of approximation, and the remaining part is used in the subsequent improvement. The corrections are governed by linear ordinary differential equations, so they can be solved easily by means of the harmonic balance method again. Three kinds of differential equations, involving general, fractional and delay ordinary differential systems, are given as numerical examples. Highly accurate limit cycle frequencies and amplitudes are captured. The results match well with the exact or numerical solutions for a wide range of control parameters. Comparison with available results shows that the residue harmonic balance solution procedure is very effective for these autonomous differential systems. Moreover, the present method predicts not only the amplitude but also the frequency of the bifurcated periodic solution for delay ordinary differential equations.
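As a one-term illustration of plain harmonic balance (the residue procedure above refines such estimates order by order), consider the textbook undamped Duffing oscillator; this example and its coefficient are standard, not taken from the paper:

```python
def duffing_frequency(amplitude, epsilon=0.1):
    """First-order harmonic balance for u'' + u + epsilon*u**3 = 0:
    substitute u = A*cos(w*t) and balance the cos(w*t) terms, which
    gives w**2 = 1 + 3*epsilon*A**2/4 (higher harmonics are the
    residue a higher-order procedure would absorb)."""
    return (1.0 + 0.75 * epsilon * amplitude ** 2) ** 0.5
```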
Solution-adaptive finite element method in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1993-01-01
Some recent results obtained using a solution-adaptive finite element method for linear elastic, two-dimensional fracture mechanics problems are presented. The focus is on the basic issues of the adaptive finite element method and on validating the application of the new methodology to fracture mechanics problems by computing demonstration problems and comparing the computed stress intensity factors to analytical results.
NASA Technical Reports Server (NTRS)
Hooker, John R.; Batina, John T.; Williams, Marc H.
1992-01-01
An algorithm which combines spatial and temporal adaption for the time integration of the two-dimensional Euler equations on unstructured meshes of triangles is presented. Spatial adaption involves mesh enrichment to add elements in high gradient regions of the flow and mesh coarsening to remove elements where they are no longer needed. Temporal adaption is a time accurate, local time stepping procedure which integrates the flow equations in each cell according to the local numerical stability constraint. The flow solver utilizes a four-stage Runge-Kutta time integration scheme with an upwind flux-split spatial discretization. Results obtained using spatial and temporal adaption indicate that highly accurate solutions can be obtained with a significant savings of computing time over global time stepping.
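The temporal-adaption idea above can be sketched as the per-cell stability limit: each cell advances with the largest step its own CFL constraint allows, instead of the global minimum over the mesh. The CFL number and inputs below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def local_time_steps(cell_size, velocity, sound_speed, cfl=0.9):
    """Local time-stepping sketch: per-cell stable step from the local
    CFL condition dt <= cfl * h / (|u| + c)."""
    wave_speed = np.abs(np.asarray(velocity)) + np.asarray(sound_speed)
    return cfl * np.asarray(cell_size) / wave_speed
```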
Multigrid solution strategies for adaptive meshing problems
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1995-01-01
This paper discusses the issues which arise when combining multigrid strategies with adaptive meshing techniques for solving steady-state problems on unstructured meshes. A basic strategy is described, and demonstrated by solving several inviscid and viscous flow cases. Potential inefficiencies in this basic strategy are exposed, and various alternate approaches are discussed, some of which are demonstrated with an example. Although each particular approach exhibits certain advantages, all methods have particular drawbacks, and the formulation of a completely optimal strategy is considered to be an open problem.
Transmission Line Adapted Analytical Power Charts Solution
NASA Astrophysics Data System (ADS)
Sakala, Japhet D.; Daka, James S. J.; Setlhaolo, Ditiro; Malichi, Alec Pulu
2017-08-01
The performance of a transmission line has been assessed over the years using power charts. These are graphical representations, drawn to scale, of the equations that describe the performance of transmission lines. Various quantities that describe the performance, such as sending end voltage, sending end power and compensation to give zero voltage regulation, may be deduced from the power charts. Usually required values are read off and then converted using the appropriate scales and known relationships. In this paper, the authors revisit this area of circle diagrams for transmission line performance. The work presented here formulates the mathematical model that analyses the transmission line performance from the power charts relationships and then uses them to calculate the transmission line performance. In this proposed approach, it is not necessary to draw the power charts for the solution. However the power charts may be drawn for the visual presentation. The method is based on applying derived equations and is simple to use since it does not require rigorous derivations.
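The relationships that the power charts represent graphically are the standard two-port (ABCD) transmission-line equations; a minimal numeric sketch (the parameter values in the test are illustrative):

```python
def sending_end(vr, ir, a, b, c, d):
    """Two-port (ABCD) relation for a transmission line, the same
    equations the power charts draw to scale:
        Vs = A*Vr + B*Ir,   Is = C*Vr + D*Ir
    All quantities are complex per-phase phasors."""
    vs = a * vr + b * ir
    i_s = c * vr + d * ir
    return vs, i_s
```

Working directly from these equations, as the paper proposes, replaces reading values off a scaled chart with a direct calculation.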
Transmission Line Adapted Analytical Power Charts Solution
NASA Astrophysics Data System (ADS)
Sakala, Japhet D.; Daka, James S. J.; Setlhaolo, Ditiro; Malichi, Alec Pulu
2016-08-01
The performance of a transmission line has been assessed over the years using power charts. These are graphical representations, drawn to scale, of the equations that describe the performance of transmission lines. Various quantities that describe the performance, such as sending end voltage, sending end power and compensation to give zero voltage regulation, may be deduced from the power charts. Usually required values are read off and then converted using the appropriate scales and known relationships. In this paper, the authors revisit this area of circle diagrams for transmission line performance. The work presented here formulates the mathematical model that analyses the transmission line performance from the power charts relationships and then uses them to calculate the transmission line performance. In this proposed approach, it is not necessary to draw the power charts for the solution. However the power charts may be drawn for the visual presentation. The method is based on applying derived equations and is simple to use since it does not require rigorous derivations.
Solutions and procedures to assure the flow in deepwater conditions
Gomes, M.G.F.M.; Pereira, F.B.; Lino, A.C.F.
1996-12-31
Petrobras has been developing deepwater oil fields located in the Campos Basin, a vanguard subsea project that faces major challenges, one of which is wax deposition in production flowlines. Since 1990, Petrobras has been studying methods to prevent and remove paraffin-wax deposits. Techniques based on chemical inhibition of crystal growth, thermo-chemical cleaning (SGN), mechanical cleaning (pigging), electrical heating and thermal insulation were tested, and the main results obtained at CENPES (the Petrobras R&D Center) started to be used in the field in 1993. This paper presents the solutions and procedures which have been used to minimize oil production losses in the Campos Basin, Brazil.
Solution procedure of dynamical contact problems with friction
NASA Astrophysics Data System (ADS)
Abdelhakim, Lotfi
2017-07-01
Dynamical contact is a common research topic because of its wide applications in the engineering field. The main goal of this work is to develop a time-stepping algorithm for dynamic contact problems. We propose a finite element approach for elastodynamic contact problems [1]. Sticking, sliding and frictional contact can be taken into account. Lagrange multipliers are used to enforce the non-penetration condition. For the time discretization, we propose a scheme equivalent to the explicit Newmark scheme. Each time step requires solving a nonlinear problem similar to a static friction problem. The nonlinearity of the system of equations requires an iterative solution procedure based on Uzawa's algorithm [2][3]. The applicability of the algorithm is illustrated by selected sample numerical solutions to static and dynamic contact problems. Results obtained with the model have been compared and verified against results from an independent numerical method.
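Uzawa's algorithm alternates between solving equilibrium for a fixed multiplier and projecting the multiplier update onto the admissible set. A minimal sketch on a one-degree-of-freedom frictionless contact problem (a toy model, not the paper's finite element formulation):

```python
def uzawa_contact(k, f, gap, rho=None, tol=1e-10, max_iter=1000):
    """Uzawa iteration sketch: minimize 0.5*k*u**2 - f*u subject to
    the non-penetration condition u <= gap, with Lagrange multiplier
    lam >= 0 enforcing the constraint."""
    if rho is None:
        rho = k                      # a simple stable step for this scalar case
    lam = 0.0
    for _ in range(max_iter):
        u = (f - lam) / k            # equilibrium for the current multiplier
        lam_new = max(0.0, lam + rho * (u - gap))   # project onto lam >= 0
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return u, lam
```

If the unconstrained displacement f/k violates the gap, the iteration drives u to the gap and lam to the contact force; otherwise lam stays zero.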
Impact of space-time mesh adaptation on solute transport modeling in porous media
NASA Astrophysics Data System (ADS)
Esfandiar, Bahman; Porta, Giovanni; Perotto, Simona; Guadagnini, Alberto
2015-02-01
We implement a space-time grid adaptation procedure to efficiently improve the accuracy of numerical simulations of solute transport in porous media in the context of model parameter estimation. We focus on the Advection Dispersion Equation (ADE) for the interpretation of nonreactive transport experiments in laboratory-scale heterogeneous porous media. When compared to a numerical approximation based on a fixed space-time discretization, our approach is grounded on a joint automatic selection of the spatial grid and the time step to capture the main (space-time) system dynamics. Spatial mesh adaptation is driven by an anisotropic recovery-based error estimator which enables us to properly select the size, shape, and orientation of the mesh elements. Adaptation of the time step is performed through an ad hoc local reconstruction of the temporal derivative of the solution via a recovery-based approach. The impact of the proposed adaptation strategy on the ability to provide reliable estimates of the key parameters of an ADE model is assessed on the basis of experimental solute breakthrough data measured following tracer injection in a nonuniform porous system. Model calibration is performed in a Maximum Likelihood (ML) framework upon relying on the representation of the ADE solution through a generalized Polynomial Chaos Expansion (gPCE). Our results show that the proposed anisotropic space-time grid adaptation leads to ML parameter estimates and to model results of markedly improved quality when compared to classical inversion approaches based on a uniform space-time discretization.
Apramian, Tavis; Watling, Christopher; Lingard, Lorelei; Cristancho, Sayra
2015-10-01
Surgical research struggles to describe the relationship between procedural variations in daily practice and traditional conceptualizations of evidence. The problem has resisted simple solutions, in part, because we lack a solid understanding of how surgeons conceptualize and interact around variation, adaptation, innovation, and evidence in daily practice. This grounded theory study aims to describe the social processes that influence how procedural variation is conceptualized in the surgical workplace. Using the constructivist grounded theory methodology, semi-structured interviews with surgeons (n = 19) from four North American academic centres were collected and analysed. Purposive sampling targeted surgeons with experiential knowledge of the role of variations in the workplace. Theoretical sampling was conducted until a theoretical framework representing key processes was conceptually saturated. Surgical procedural variation was influenced by three key processes. Seeking improvement was shaped by having unsolved procedural problems, adapting in the moment, and pursuing personal opportunities. Orienting self and others to variations consisted of sharing stories of variations with others, taking stock of how a variation promoted personal interests, and placing trust in peers. Acting under cultural and material conditions was characterized by being wary, positioning personal image, showing the logic of a variation, and making use of academic resources to do so. Our findings include social processes that influence how adaptations are incubated in surgical practice and mature into innovations. This study offers a language for conceptualizing the sociocultural influences on procedural variations in surgery. Interventions to change how surgeons interact with variations on a day-to-day basis should consider these social processes in their design. © 2015 John Wiley & Sons, Ltd.
Apramian, Tavis; Watling, Christopher; Lingard, Lorelei; Cristancho, Sayra
2017-01-01
Rationale, aims and objectives: Surgical research struggles to describe the relationship between procedural variations in daily practice and traditional conceptualizations of evidence. The problem has resisted simple solutions, in part, because we lack a solid understanding of how surgeons conceptualize and interact around variation, adaptation, innovation, and evidence in daily practice. This grounded theory study aims to describe the social processes that influence how procedural variation is conceptualized in the surgical workplace. Method: Using the constructivist grounded theory methodology, semi-structured interviews with surgeons (n = 19) from four North American academic centres were collected and analysed. Purposive sampling targeted surgeons with experiential knowledge of the role of variations in the workplace. Theoretical sampling was conducted until a theoretical framework representing key processes was conceptually saturated. Results: Surgical procedural variation was influenced by three key processes. Seeking improvement was shaped by having unsolved procedural problems, adapting in the moment, and pursuing personal opportunities. Orienting self and others to variations consisted of sharing stories of variations with others, taking stock of how a variation promoted personal interests, and placing trust in peers. Acting under cultural and material conditions was characterized by being wary, positioning personal image, showing the logic of a variation, and making use of academic resources to do so. Our findings include social processes that influence how adaptations are incubated in surgical practice and mature into innovations. Conclusions: This study offers a language for conceptualizing the sociocultural influences on procedural variations in surgery. Interventions to change how surgeons interact with variations on a day-to-day basis should consider these social processes in their design. PMID:26096874
Full analytical solution of Adapted Polarisation State Contrast Imaging.
Upadhyay, Debajyoti; Mondal, Sugata; Lacot, Eric; Orlik, Xavier
2011-12-05
We have earlier proposed a 2-channel imaging technique: Adapted Polarisation State Contrast Imaging (APSCI), which noticeably enhances the polarimetric contrast between an object and its background using fully polarised incident state adapted to the scene, such that the polarimetric responses of those regions are located as far as possible on the Poincaré sphere. We address here the full analytical and graphical analysis of the ensemble of solutions of specific incident states, by introducing 3-Distance Eigen Space and explain the underlying physical structure of APSCI and the effect of noise over the measurements.
Adaptation of sweeteners in water and in tannic acid solutions.
Schiffman, S S; Pecore, S D; Booth, B J; Losee, M L; Carr, B T; Sattely-Miller, E; Graham, B G; Warwick, Z S
1994-03-01
Repeated exposure to a tastant often leads to a decrease in magnitude of the perceived intensity; this phenomenon is termed adaptation. The purpose of this study was to determine the degree of adaptation of the sweet response for a variety of sweeteners in water and in the presence of two levels of tannic acid. Sweetness intensity ratings were given by a trained panel for 14 sweeteners: three sugars (fructose, glucose, sucrose), two polyhydric alcohols (mannitol, sorbitol), two terpenoid glycosides (rebaudioside-A, stevioside), two dipeptide derivatives (alitame, aspartame), one sulfamate (sodium cyclamate), one protein (thaumatin), two N-sulfonyl amides (acesulfame-K, sodium saccharin), and one dihydrochalcone (neohesperidin dihydrochalcone). Panelists were given four isointense concentrations of each sweetener by itself and in the presence of two concentrations of tannic acid. Each sweetener concentration was tasted and rated four consecutive times with a 30 s interval between each taste and a 2 min interval between each concentration. Within a taste session, a series of concentrations of a given sweetener was presented in ascending order of magnitude. Adaptation was calculated as the decrease in intensity from the first to the fourth sample. The greatest adaptation in water solutions was found for acesulfame-K, Na saccharin, rebaudioside-A, and stevioside. This was followed by the dipeptide sweeteners, alitame and aspartame. The least adaptation occurred with the sugars, polyhydric alcohols, and neohesperidin dihydrochalcone. Adaptation was greater in tannic acid solutions than in water for six sweeteners. Adaptation of sweet taste may result from the desensitization of sweetener receptors analogous to the homologous desensitization found in the beta adrenergic system.
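The study's adaptation measure is simple to state in code; a sketch of that calculation on the four consecutive ratings (the numbers in the test are made up for illustration):

```python
def adaptation_score(ratings):
    """Adaptation as defined in the study: the drop in rated sweetness
    intensity from the first to the fourth consecutive tasting of the
    same sample (positive values mean the response adapted)."""
    if len(ratings) != 4:
        raise ValueError("expected four consecutive intensity ratings")
    return ratings[0] - ratings[3]
```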
Space-Time Adaptive Solution of Richards' Equation
NASA Astrophysics Data System (ADS)
Abhishek, C.; Miller, C. T.; Farthing, M. W.
2003-12-01
Efficient, robust simulation of groundwater flow in the unsaturated zone remains computationally expensive, especially for problems characterized by sharp fronts in both space and time. Standard approaches that employ uniform spatial and temporal discretizations for the numerical solution of these problems lead to inefficient and expensive simulations. In this work, we solve Richards' equation using adaptive methods in both space and time. Spatial adaption is based upon a coarse grid solve and gradient-based error indicators, while the spatial step size is adjusted using a fixed-order approximation. Temporal adaption is accomplished using variable-order, variable-step-size approximations based upon the backward difference formulas up to fifth order. Since the advantages of similar adaptive methods in time are now established, we evaluate our method by comparison with a uniform spatial discretization that is adaptive in time for four different test problems. The numerical results demonstrate that the proposed method provides a robust and efficient alternative to standard approaches for simulating variably saturated flow.
A spatially and temporally adaptive solution of Richards’ equation
NASA Astrophysics Data System (ADS)
Miller, Cass T.; Abhishek, Chandra; Farthing, Matthew W.
2006-04-01
Efficient, robust simulation of groundwater flow in the unsaturated zone remains computationally expensive, especially for problems characterized by sharp fronts in both space and time. Standard approaches that employ uniform spatial and temporal discretizations for the numerical solution of these problems lead to inefficient and expensive simulations. In this work, we solve Richards' equation using adaptive methods in both space and time. Spatial adaption is based upon a coarse grid solve and a gradient error indicator using a fixed-order approximation. Temporal adaption is accomplished using variable order, variable step size approximations based upon the backward difference formulas up to fifth order. Since the advantages of similar adaptive methods in time are now established, we evaluate our method by comparison with a uniform spatial discretization that is adaptive in time for four different one-dimensional test problems. The numerical results demonstrate that the proposed method provides a robust and efficient alternative to standard approaches for simulating variably saturated flow in one spatial dimension.
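Variable-step-size time integration of the kind described above is usually driven by a controller that grows or shrinks the step from a local truncation error estimate. A generic sketch of such a controller (the safety factor and clamping bounds are conventional choices, not values from the paper):

```python
def adapt_step(dt, err_est, tol, order, fac=0.9, fac_min=0.2, fac_max=5.0):
    """Classic step-size controller sketch: scale the time step by
    (tol/err)**(1/(order+1)), with a safety factor and clamped growth,
    as used with variable-step BDF-style integrators."""
    if err_est == 0.0:
        return dt * fac_max
    ratio = fac * (tol / err_est) ** (1.0 / (order + 1))
    return dt * min(fac_max, max(fac_min, ratio))
```

Near a sharp infiltration front the error estimate spikes and the step shrinks; in smooth regions the step grows back, which is where the efficiency gain over uniform stepping comes from.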
Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1996-01-01
A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: a linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures are briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
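The recursive-subdivision idea can be sketched as a quadtree refinement pass: starting from one cell covering the domain, split any cell a refinement predicate flags into four children (a 2D toy; the predicate, depth limit, and cell tuple format are illustrative assumptions, and cut-cell clipping against bodies is not modeled):

```python
def refine(cell, needs_refinement, max_depth=6, depth=0):
    """Recursive Cartesian subdivision sketch: a cell is (x, y, size)
    with (x, y) its lower-left corner; cells flagged by the predicate
    are split into four equal children, yielding the leaf cells of a
    quadtree covering the original domain."""
    x, y, size = cell
    if depth >= max_depth or not needs_refinement(x, y, size):
        return [cell]
    half = size / 2.0
    leaves = []
    for dx in (0.0, half):
        for dy in (0.0, half):
            leaves += refine((x + dx, y + dy, half), needs_refinement,
                             max_depth, depth + 1)
    return leaves
```

The recursion order doubles as a natural tree traversal, which is how a binary/quadtree grid gives cell connectivity for free.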
Adaptive Multigrid Solution of Stokes' Equation on CELL Processor
NASA Astrophysics Data System (ADS)
Elgersma, M. R.; Yuen, D. A.; Pratt, S. G.
2006-12-01
We are developing an adaptive multigrid solver for treating nonlinear elliptic partial-differential equations, needed for mantle convection problems. Since multigrid is being used for the complete solution, not just as a preconditioner, spatial difference operators are kept nearly diagonally dominant by increasing density of the coarsest grid in regions where coefficients have rapid spatial variation. At each time step, the unstructured coarse grid is refined in regions where coefficients associated with the differential operators or boundary conditions have rapid spatial variation, and coarsened in regions where there is more gradual spatial variation. For three-dimensional problems, the boundary is two-dimensional, and regions where coefficients change rapidly are often near two-dimensional surfaces, so the coarsest grid is only fine near two-dimensional subsets of the three-dimensional space. Coarse grid density drops off exponentially with distance from boundary surfaces and rapid-coefficient-change surfaces. This unstructured coarse grid results in the number of coarse grid voxels growing proportional to surface area, rather than proportional to volume. This results in significant computational savings for the coarse-grid solution. This coarse-grid solution is then refined for the fine-grid solution, and multigrid methods have memory usage and runtime proportional to the number of fine-grid voxels. This adaptive multigrid algorithm is being implemented on the CELL processor, where each chip has eight floating point processors and each processor operates on four floating point numbers each clock cycle. Both the adaptive grid algorithm and the multigrid solver have very efficient parallel implementations, in order to take advantage of the CELL processor architecture.
NASA Technical Reports Server (NTRS)
Rebstock, Rainer
1987-01-01
Numerical methods are developed for control of three dimensional adaptive test sections. The physical properties of the design problem occurring in the external field computation are analyzed, and a design procedure suited for solution of the problem is worked out. To do this, the desired wall shape is determined by stepwise modification of an initial contour. The necessary changes in geometry are determined with the aid of a panel procedure, or, with incident flow near the sonic range, with a transonic small perturbation (TSP) procedure. The designed wall shape, together with the wall deflections set during the tunnel run, are the input to a newly derived one-step formula which immediately yields the adapted wall contour. This is particularly important since the classical iterative adaptation scheme is shown to converge poorly for 3D flows. Experimental results obtained in the adaptive test section with eight flexible walls are presented to demonstrate the potential of the procedure. Finally, a method is described to minimize wall interference in 3D flows by adapting only the top and bottom wind tunnel walls.
Solution adaptive grids applied to low Reynolds number flow
NASA Astrophysics Data System (ADS)
de With, G.; Holdø, A. E.; Huld, T. A.
2003-08-01
A numerical study has been undertaken to investigate the use of a solution-adaptive grid for flow around a cylinder in the laminar flow regime. The purpose of this work is twofold. The first aim is to investigate the suitability of a grid adaptation algorithm and the reduction in mesh size that can be obtained. The second is to exploit the uniform asymmetric flow structures, which are ideal for validating the mesh structures produced by mesh refinement and, consequently, the selected refinement criteria. The refinement variable used in this work is a product of the rate of strain and the mesh cell size, and contains two parameters, C_m and C_str, which determine the order of each term. By altering the order of either term, the refinement behaviour can be modified.
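The refinement variable described above can be sketched directly; the exact exponent convention for C_m and C_str is an assumption here, since the abstract only states that they set the order of each term:

```python
import numpy as np

def refinement_indicator(strain_rate, cell_size, c_str=1.0, c_m=1.0):
    """Refinement variable sketch: product of the local rate of strain
    and the mesh cell size, each raised to an order parameter (standing
    in for the paper's C_str and C_m). Cells with large values are
    candidates for refinement."""
    return np.abs(strain_rate) ** c_str * np.asarray(cell_size) ** c_m
```

Raising the cell-size exponent biases refinement toward coarse cells; raising the strain-rate exponent concentrates it in shear layers.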
A "Rearrangement Procedure" for Scoring Adaptive Tests with Review Options
ERIC Educational Resources Information Center
Papanastasiou, Elena C.; Reckase, Mark D.
2007-01-01
Because of the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT from an examinee's point of view is that in…
Stratified and Maximum Information Item Selection Procedures in Computer Adaptive Testing
ERIC Educational Resources Information Center
Deng, Hui; Ansley, Timothy; Chang, Hua-Hua
2010-01-01
In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…
Adaptive multigrid domain decomposition solutions for viscous interacting flows
NASA Technical Reports Server (NTRS)
Rubin, Stanley G.; Srinivasan, Kumar
1992-01-01
Several viscous incompressible flows with strong pressure interaction and/or axial flow reversal are considered with an adaptive multigrid domain decomposition procedure. Specific examples include the triple deck structure surrounding the trailing edge of a flat plate, the flow recirculation in a trough geometry, and the flow in a rearward facing step channel. For the latter case, there are multiple recirculation zones, of different character, for laminar and turbulent flow conditions. A pressure-based form of flux-vector splitting is applied to the Navier-Stokes equations, which are represented by an implicit lowest-order reduced Navier-Stokes (RNS) system and a purely diffusive, higher-order, deferred-corrector. A trapezoidal or box-like form of discretization insures that all mass conservation properties are satisfied at interfacial and outflow boundaries, even for this primitive-variable, non-staggered grid computation.
A Procedure for Empirical Initialization of Adaptive Testing Algorithms.
ERIC Educational Resources Information Center
van der Linden, Wim J.
In constrained adaptive testing, the numbers of constraints needed to control the content of the tests can easily run into the hundreds. Proper initialization of the algorithm becomes a requirement because the presence of large numbers of constraints slows down the convergence of the ability estimator. In this paper, an empirical initialization of…
Estimation of Latent Trait Status Using Adaptive Testing Procedures.
ERIC Educational Resources Information Center
Sympson, James B.
Latent trait test score theory is discussed primarily in terms of Birnbaum's three-parameter logistic model, and with some reference to the Rasch model. Equations and graphic illustrations are given for item characteristic curves and item information curves. An example is given for a hypothetical 20-item adaptive test, showing cumulative results…
An adaptive solution to the chemical master equation using tensors
NASA Astrophysics Data System (ADS)
Vo, Huy D.; Sidje, Roger B.
2017-07-01
Solving the chemical master equation directly is difficult due to the curse of dimensionality. We tackle that challenge by a numerical scheme based on the quantized tensor train (QTT) format, which enables us to represent the solution in a compressed form that scales linearly with the dimension. We recast the finite state projection in this QTT framework and allow it to expand adaptively based on proven error criteria. The end result is a QTT-formatted matrix exponential that we evaluate through a combination of the inexact uniformization technique and the alternating minimal energy algorithm. Our method can detect when the equilibrium distribution is reached with an inexpensive test that exploits the structure of the tensor format. We successfully perform numerical tests on high-dimensional problems that had been out of reach for classical approaches.
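The finite state projection that the paper recasts in tensor form can be sketched in its classical (dense, non-tensor) version on a birth-death process; the rates, truncation size, and test values are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm

def fsp_birth_death(k, gamma, n_max, t, p0_state=0):
    """Finite state projection sketch for a birth-death master equation
    (production at rate k, degradation at rate gamma*n), truncated to
    states 0..n_max. Diagonal entries keep the full outflow rates, so
    probability mass leaks at the boundary and 1 - sum(p) bounds the
    projection error."""
    n = n_max + 1
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -(k + gamma * i)        # total outflow rate of state i
        if i + 1 < n:
            A[i + 1, i] = k               # birth i -> i+1 kept in the projection
        if i > 0:
            A[i - 1, i] = gamma * i       # death i -> i-1
    p0 = np.zeros(n)
    p0[p0_state] = 1.0
    p = expm(A * t) @ p0                  # dense matrix exponential
    return p, 1.0 - p.sum()               # distribution and error bound
```

The dense `expm` above is exactly what the curse of dimensionality forbids for many species, which is what motivates the QTT-compressed representation in the paper.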
A Solution Adaptive Technique Using Tetrahedral Unstructured Grids
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2000-01-01
An adaptive unstructured grid refinement technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid. Simple functions of flow quantities are employed to detect dominant features of the flowfield. The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.
Multigrid solution of the Euler equations on unstructured and adaptive meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri; Jameson, Antony
1987-01-01
A multigrid algorithm has been developed for solving the steady-state Euler equations in two dimensions on unstructured triangular meshes. The method assumes the various coarse and fine grids of the multigrid sequence to be independent of one another, thus decoupling the grid generation procedure from the multigrid algorithm. The transfer of variables between the various meshes employs a tree-search algorithm which rapidly identifies regions of overlap between coarse and fine grid cells. Finer meshes are obtained either by regenerating new globally refined meshes, or by adaptively refining the previous coarser mesh. For both cases, the observed convergence rates are comparable to those obtained with structured multigrid Euler solvers. The adaptively generated meshes are shown to produce solutions of higher accuracy with fewer mesh points.
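The smoothing, restriction, and coarse-grid-correction ingredients of such a multigrid cycle can be sketched on a structured 1-D model problem; the paper's unstructured triangular meshes require its tree-search transfer machinery instead. Below is a minimal two-grid cycle for the 1-D Poisson equation with weighted-Jacobi smoothing, full-weighting restriction, and linear interpolation, all standard textbook choices rather than the paper's operators.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    # weighted-Jacobi smoothing for -u'' = f with zero Dirichlet ends
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    # pre-smooth, restrict the residual (full weighting), solve the
    # coarse correction exactly, interpolate it back, post-smooth
    u = jacobi(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    nc = (u.size + 1) // 2
    rc = np.zeros(nc)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    hc = 2 * h
    A = (np.diag(2.0 * np.ones(nc - 2))
         - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / hc**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.linspace(0, 1, u.size), np.linspace(0, 1, nc), ec)
    return jacobi(u + e, f, h)

n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)      # manufactured solution u = sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # down at discretization level
```

Recursing on the coarse solve instead of inverting it directly turns this two-grid cycle into a full V-cycle.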
Procedure for Adaptive Laboratory Evolution of Microorganisms Using a Chemostat.
Jeong, Haeyoung; Lee, Sang J; Kim, Pil
2016-09-20
Natural evolution involves genetic diversity arising from sources such as environmental change and selection among small populations. Adaptive laboratory evolution (ALE) refers to the experimental situation in which evolution is observed using living organisms under controlled conditions and stressors; organisms are thereby artificially forced to make evolutionary changes. Microorganisms are subject to a variety of stressors in the environment and are capable of regulating certain stress-inducible proteins to increase their chances of survival. Naturally occurring spontaneous mutations bring about changes in a microorganism's genome that affect its chances of survival. Long-term exposure to chemostat culture provokes an accumulation of spontaneous mutations and renders the most adaptable strain dominant. Compared to the colony transfer and serial transfer methods, chemostat culture entails the highest number of cell divisions and, therefore, the highest number of diverse populations. Although chemostat culture for ALE requires more complicated culture devices, it is less labor intensive once the operation begins. Comparative genomic and transcriptome analyses of the adapted strain provide evolutionary clues as to how the stressors contribute to mutations that overcome the stress. The goal of the current paper is to bring about accelerated evolution of microorganisms under controlled laboratory conditions.
Cooperative solutions coupling a geometry engine and adaptive solver codes
NASA Technical Reports Server (NTRS)
Dickens, Thomas P.
1995-01-01
Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.
An adaptive solution domain algorithm for solving multiphase flow equations
NASA Astrophysics Data System (ADS)
Katyal, A. K.; Parker, J. C.
1992-01-01
An adaptive solution domain (ASD) finite-element model for simulating hydrocarbon spills has been developed that is computationally more efficient than conventional numerical methods. Coupled flow of water and oil with an air phase at constant pressure is considered. In the ASD formulation, the solution domain for water- and oil-flow equations is restricted by eliminating elements from the global matrix assembly which are not experiencing significant changes in fluid saturations or pressures. When any node of an element exhibits a change in fluid pressure of more than a stipulated tolerance τ1, or a change in fluid saturation greater than a tolerance τ2, during the current time step, the element is labeled active and included in the computations for the next iteration. This formulation achieves computational efficiency by solving the flow equations for only the part of the domain where changes in fluid pressures or saturations exceed the stipulated tolerances. Examples involving infiltration and redistribution of oil in 1- and 2-D spatial domains are described to illustrate the application of the ASD method and the savings in processor time achieved by this formulation. Savings in computational effort of up to 84% during infiltration and 63% during redistribution were achieved for the 2-D example problem.
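The active/inactive element rule described above can be sketched as follows; the element connectivity and the tolerance values below are illustrative placeholders, not the paper's.

```python
import numpy as np

def flag_active_elements(elem_nodes, dp, ds, tol_p=1e-3, tol_s=1e-4):
    # Mark an element active when any of its nodes changed pressure by
    # more than tol_p, or saturation by more than tol_s, in the last
    # time step. Tolerances here are illustrative placeholders.
    active = []
    for e, nodes in enumerate(elem_nodes):
        if (np.abs(dp[nodes]).max() > tol_p or
                np.abs(ds[nodes]).max() > tol_s):
            active.append(e)
    return active

# two triangles sharing an edge; only node 3 changed appreciably
elems = [np.array([0, 1, 2]), np.array([1, 2, 3])]
dp = np.array([0.0, 0.0, 0.0, 0.5])   # nodal pressure changes
ds = np.zeros(4)                       # nodal saturation changes
print(flag_active_elements(elems, dp, ds))   # → [1]
```

Only the flagged elements would then be assembled into the global matrix for the next iteration, which is where the reported savings come from.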
ERIC Educational Resources Information Center
Barrouillet, Pierre; Camos, Valerie; Perruchet, Pierre; Seron, Xavier
2004-01-01
This article presents a new model of transcoding numbers from verbal to Arabic form. This model, called ADAPT, is developmental, asemantic, and procedural. The authors' main proposal is that the transcoding process shifts from an algorithmic strategy to the direct retrieval from memory of digital forms. Thus, the model is evolutive, adaptive, and…
Zhang, M; Westerly, D C; Mackie, T R
2011-08-07
With on-line image guidance (IG), prostate shifts relative to the bony anatomy can be corrected by realigning the patient with respect to the treatment fields. In image guided intensity modulated proton therapy (IG-IMPT), because the proton range is more sensitive to the material it travels through, the realignment may introduce large dose variations. This effect is studied in this work and an on-line adaptive procedure is proposed to restore the planned dose to the target. A 2D anthropomorphic phantom was constructed from a real prostate patient's CT image. Two-field laterally opposing spot 3D-modulation and 24-field full arc distal edge tracking (DET) plans were generated with a prescription of 70 Gy to the planning target volume. For the simulated delivery, we considered two types of procedures: the non-adaptive procedure and the on-line adaptive procedure. In the non-adaptive procedure, only patient realignment to match the prostate location in the planning CT was performed. In the on-line adaptive procedure, on top of the patient realignment, the kinetic energy for each individual proton pencil beam was re-determined from the on-line CT image acquired after the realignment and subsequently used for delivery. Dose distributions were re-calculated for individual fractions for different plans and different delivery procedures. The results show that, without adaptation, both the 3D-modulation and the DET plans experienced delivered dose degradation, with large cold or hot spots in the prostate. The DET plan had worse dose degradation than the 3D-modulation plan. The adaptive procedure effectively restored the planned dose distribution in the DET plan, with delivered prostate D(98%), D(50%) and D(2%) values less than 1% from the prescription. In the 3D-modulation plan, in certain cases the adaptive procedure was not effective in reducing the delivered dose degradation and yielded results similar to those of the non-adaptive procedure. In conclusion, based on this 2D phantom
A mineral separation procedure using hot Clerici solution
Rosenblum, Sam
1974-01-01
Careful boiling of Clerici solution in a Pyrex test tube in an oil bath is used to float minerals with densities up to 5.0 in order to obtain purified concentrates of monazite (density 5.1) for analysis. The "sink" and "float" fractions are trapped in solidified Clerici salts on rapid chilling, and the fractions are washed into separate filter papers with warm water. The hazardous nature of Clerici solution requires unusual care in handling.
Element-by-element Solution Procedures for Nonlinear Structural Analysis
NASA Technical Reports Server (NTRS)
Hughes, T. J. R.; Winget, J. M.; Levit, I.
1984-01-01
Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in nonlinear structural mechanics. Architectural and data base advantages of the present algorithms over traditional direct elimination schemes are noted. Results of calculations suggest considerable potential for the methods described.
Measurement of Actinides in Molybdenum-99 Solution Analytical Procedure
Soderquist, Chuck Z.; Weaver, Jamie L.
2015-11-01
This document is a companion report to a previous report, PNNL 24519, Measurement of Actinides in Molybdenum-99 Solution, A Brief Review of the Literature, August 2015. In this companion report, we report a fast, accurate, newly developed analytical method for measurement of trace alpha-emitting actinide elements in commercial high-activity molybdenum-99 solution. Molybdenum-99 is widely used to produce ^{99m}Tc for medical imaging. Because it is used as a radiopharmaceutical, its purity must be proven to be extremely high, particularly for the alpha emitting actinides. The sample of ^{99}Mo solution is measured into a vessel (such as a polyethylene centrifuge tube) and acidified with dilute nitric acid. A gadolinium carrier is added (50 µg). Tracers and spikes are added as necessary. Then the solution is made strongly basic with ammonium hydroxide, which causes the gadolinium carrier to precipitate as hydrous Gd(OH)_{3}. The precipitate of Gd(OH)_{3} carries all of the actinide elements. The suspension of gadolinium hydroxide is then passed through a membrane filter to make a counting mount suitable for direct alpha spectrometry. The high-activity ^{99}Mo and ^{99m}Tc pass through the membrane filter and are separated from the alpha emitters. The gadolinium hydroxide, carrying any trace actinide elements that might be present in the sample, forms a thin, uniform cake on the surface of the membrane filter. The filter cake is first washed with dilute ammonium hydroxide to push the last traces of molybdate through, then with water. The filter is then mounted on a stainless steel counting disk. Finally, the alpha emitting actinide elements are measured by alpha spectrometry.
Comparison of Disinfection Procedures on the Catheter Adapter-Transfer Set Junction.
Firanek, Catherine; Szpara, Edward; Polanco, Patricia; Davis, Ira; Sloand, James
2016-01-01
Peritonitis is a significant complication of peritoneal dialysis (PD), contributing to mortality and technique failure. Suboptimal disinfection and/or a loose connection at the catheter adapter-transfer set junction are forms of touch contamination that can compromise the integrity of the sterile fluid path and lead to peritonitis. Proper use of the right disinfectants for connections at the PD catheter adapter-transfer set interface can help eliminate bacteria at surface interfaces, secure connections, and prevent bacteria from entering into the sterile fluid pathway. Three studies were conducted to assess the antibacterial effects of various disinfecting agents and procedures, and ensuing security of the catheter adapter-transfer set junction. An open-soak disinfection procedure with 10% povidone iodine improves disinfection and tightness/security of catheter adapter-transfer set connection. Copyright © 2016 International Society for Peritoneal Dialysis.
Adaptive correction procedure for TVL1 image deblurring under impulse noise
NASA Astrophysics Data System (ADS)
Bai, Minru; Zhang, Xiongjun; Shao, Qianqian
2016-08-01
For the problem of image restoration of observed images corrupted by blur and impulse noise, the widely used TVL1 model may deviate from both the data-acquisition model and the prior model, especially for high noise levels. In order to seek a solution of high recovery quality beyond the reach of the TVL1 model, we propose an adaptive correction procedure for TVL1 image deblurring under impulse noise. Then, a proximal alternating direction method of multipliers (ADMM) is presented to solve the corrected TVL1 model and its convergence is also established under very mild conditions. It is verified by numerical experiments that our proposed approach outperforms the TVL1 model in terms of signal-to-noise ratio (SNR) values and visual quality, especially for high noise levels: it can handle salt-and-pepper noise as high as 90% and random-valued noise as high as 70%. In addition, a comparison with a state-of-the-art method, the two-phase method, demonstrates the superiority of the proposed approach.
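A core ingredient of ADMM splittings for L1 data-fidelity terms such as the one in TVL1 is the elementwise soft-thresholding proximal operator; the minimal sketch below shows only that building block, not the paper's full corrected-TVL1 algorithm.

```python
import numpy as np

def soft_threshold(v, lam):
    # proximal operator of lam * ||.||_1: elementwise shrinkage toward
    # zero, the closed-form subproblem solution in L1-based ADMM splittings
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([-2.0, -0.3, 0.0, 0.3, 2.0])
print(soft_threshold(v, 0.5))   # entries within 0.5 of zero are zeroed
```

In a full TVL1 solver this shrinkage handles the impulse-noise-robust L1 data term at each ADMM iteration, while a separate proximal step handles the total-variation term.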
The block adaptive multigrid method applied to the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Pantelelis, Nikos
1993-01-01
In the present study, a scheme capable of solving complex nonlinear systems of equations quickly and robustly is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on the prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (18-fold acceleration of the solution) using one fourth of the volumes of a global grid with the same solution accuracy for two test cases.
NASA Astrophysics Data System (ADS)
Chen, Y. Y.; Chen, S. H.; Zhao, W.
2017-07-01
An improved procedure for perturbation method is presented for constructing homoclinic solutions of strongly nonlinear self-excited oscillators. Compared with current perturbation methods based on nonlinear time transformations, the preference of the present method is that the explicit solutions, in respect to the original time variable, can be derived. In the paper, the equivalence and unified perturbation procedure with nonlinear time transformations, by which implicit solutions can be derived at nonlinear time scales, are firstly presented. Then an explicit generating homoclinic solution for power-law strongly nonlinear oscillator is derived with proposed hyperbolic function balance procedure. An approximation scheme is presented to improve the perturbation procedure and the explicit expression for nonlinear time transformation can be achieved. Applications and comparisons with other methods are performed to assess the advantage of the present method.
NASA Astrophysics Data System (ADS)
Dimova, Stefka; Mihaylova, Yonita
2016-02-01
The numerical solution of nonlinear degenerate reaction-diffusion problems often meets two kinds of difficulties: singularities in space (a finite speed of propagation of initial perturbations with compact support, and possible sharp moving fronts where the solution has low regularity) and singularities in time (blow-up or quenching in finite time). We propose and implement a combination of the sixth-order WENO scheme of Liu, Shu and Zhang [SIAM J. Sci. Comput. 33, 939-965 (2011)] with an adaptive procedure to deal with these singularities. Numerical results on the mathematical model of heat structures are shown.
Three-dimensional Navier-Stokes calculations using solution-adapted grids
NASA Technical Reports Server (NTRS)
Henderson, T. L.; Huang, W.; Lee, K. D.; Choo, Y. K.
1993-01-01
A three-dimensional solution-adaptive grid generation technique is presented. The adaptation technique redistributes grid points to improve the accuracy of a flow solution without increasing the number of grid points. It is applicable to structured grids with a multiblock topology. The method uses a numerical mapping and potential theory to modify the initial grid distribution based on the properties of the flow solution on the initial grid. The technique is demonstrated with two examples - a transonic finite wing and a supersonic blunt fin. The advantages are shown by comparing flow solutions on the adapted grids with those on the initial grids.
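In one dimension, redistributing a fixed number of grid points according to a solution-derived weight reduces to equidistributing a monitor function. The sketch below illustrates only that idea; the weight function is a made-up stand-in for flow-solution properties, and the paper's 3-D multiblock mapping and potential-theory machinery are not reproduced.

```python
import numpy as np

def equidistribute(x, w):
    # move the nodes so each cell carries an equal share of the integral
    # of the weight w (trapezoid rule); the node count stays fixed
    c = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, c[-1], x.size)
    return np.interp(targets, c, x)

x = np.linspace(0.0, 1.0, 21)
# made-up weight with a sharp feature at x = 0.5, standing in for a
# monitor function derived from the flow solution
w = 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)
xa = equidistribute(x, w)
print(xa.min(), xa.max(), np.diff(xa).min())   # nodes cluster near the feature
```

The adapted grid keeps the same endpoints and point count but concentrates resolution where the weight, and hence presumably the solution activity, is largest.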
NASA Technical Reports Server (NTRS)
Gossard, Myron L
1952-01-01
An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.
Shen, Yi
2013-01-01
A subject’s sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the current study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. They were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing random responses into the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than the other two procedures. PMID:23417238
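The up-down staircase referred to above can be sketched as a standard 2-down/1-up rule. In the sketch below, the simulated listener's logistic psychometric function and its parameters (a 5 ms gap-detection midpoint) are hypothetical, chosen only for illustration.

```python
import math
import random

def staircase(respond, start, step, n_trials=200):
    # 2-down/1-up adaptive rule: two consecutive correct responses lower
    # the stimulus level, any incorrect response raises it; the track
    # converges near the 70.7%-correct point of the psychometric function
    level, correct_run, track = start, 0, []
    for _ in range(n_trials):
        track.append(level)
        if respond(level):
            correct_run += 1
            if correct_run == 2:
                level -= step
                correct_run = 0
        else:
            level += step
            correct_run = 0
    return track

random.seed(1)
# hypothetical listener: P(correct) is logistic in gap duration (ms),
# with a 5 ms midpoint; parameters are made up for illustration
respond = lambda ms: random.random() < 1.0 / (1.0 + math.exp(-(ms - 5.0)))
track = staircase(respond, start=10.0, step=0.5)
print(sum(track[-50:]) / 50)   # settles somewhat above the 5 ms midpoint
```

Averaging the late-trial levels (or reversal points) gives the threshold estimate; slope and lapse-rate estimation, as the abstract notes, is harder for this procedure.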
An Investigation of Procedures for Computerized Adaptive Testing Using Partial Credit Scoring.
ERIC Educational Resources Information Center
Koch, William R.; Dodd, Barbara G.
1989-01-01
Various aspects of the computerized adaptive testing (CAT) procedure for partial credit scoring were manipulated, focusing on the effects of the manipulations on operational characteristics of the CAT. The effects of item-pool size, item-pool information, and stepsizes used along the trait continuum were assessed. (TJH)
A Two Stage Solution Procedure for Production Planning System with Advance Demand Information
NASA Astrophysics Data System (ADS)
Ueno, Nobuyuki; Kadomoto, Kiyotaka; Hasuike, Takashi; Okuhara, Koji
We model the ‘Naiji System’, a unique cooperation technique between a manufacturer and its suppliers in Japan. We propose a two-stage solution procedure for a production planning problem with advance demand information, which is called ‘Naiji’. Under demand uncertainty, this model is formulated as a nonlinear stochastic programming problem which minimizes the sum of production cost and inventory holding cost subject to a probabilistic constraint and some linear production constraints. Exploiting the convexity and the special structure of the correlation matrix in the problem, where inventories for different periods are not independent, we propose a two-stage solution procedure comprising a Mass Customization Production Planning & Management System (MCPS) and a Variable Mesh Neighborhood Search (VMNS) based on meta-heuristics. It is shown that the proposed solution procedure can obtain a near-optimal solution efficiently and is practical for making a good master production schedule for the suppliers.
A structured multi-block solution-adaptive mesh algorithm with mesh quality assessment
NASA Technical Reports Server (NTRS)
Ingram, Clint L.; Laflin, Kelly R.; Mcrae, D. Scott
1995-01-01
The dynamic solution adaptive grid algorithm, DSAGA3D, is extended to automatically adapt 2-D structured multi-block grids, including adaption of the block boundaries. The extension is general, requiring only input data concerning block structure, connectivity, and boundary conditions. Imbedded grid singular points are permitted, but must be prevented from moving in space. Solutions for workshop cases 1 and 2 are obtained on multi-block grids and illustrate both increased resolution of and alignment with the solution. A mesh quality assessment criterion is proposed to determine how well a given mesh resolves and aligns with the solution obtained upon it. The criterion is used to evaluate the grid quality for solutions of workshop case 6 obtained on both static and dynamically adapted grids. The results indicate that this criterion shows promise as a means of evaluating resolution.
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
NASA Technical Reports Server (NTRS)
Wang, Gang
2003-01-01
A multigrid solution procedure for the numerical simulation of turbulent flows in complex geometries has been developed. A Full Multigrid-Full Approximation Scheme (FMG-FAS) is incorporated into the continuity and momentum equations, while the scalars are decoupled from the multigrid V-cycle. A standard k-epsilon turbulence model with wall functions has been used to close the governing equations. The numerical solution is accomplished by solving for the Cartesian velocity components either with a traditional grid staggering arrangement or with a multiple velocity grid staggering arrangement. The two solution methodologies are evaluated for relative computational efficiency. The solution procedure with traditional staggering arrangement is subsequently applied to calculate the flow and temperature fields around a model Short Take-off and Vertical Landing (STOVL) aircraft hovering in ground proximity.
Paradoxical results of adaptive false discovery rate procedures in neuroimaging studies
Reiss, Philip T.; Schwartzman, Armin; Lu, Feihan; Huang, Lei; Proal, Erika
2013-01-01
Adaptive false discovery rate (FDR) procedures, which offer greater power than the original FDR procedure of Benjamini and Hochberg, are often applied to statistical maps of the brain. When a large proportion of the null hypotheses are false, as in the case of widespread effects such as cortical thinning throughout much of the brain, adaptive FDR methods can surprisingly reject more null hypotheses than not accounting for multiple testing at all, i.e., using uncorrected p-values. A straightforward mathematical argument is presented to explain why this can occur with the q-value method of Storey and colleagues, and a simulation study shows that it can also occur, to a lesser extent, with a two-stage FDR procedure due to Benjamini and colleagues. We demonstrate the phenomenon with reference to a published data set documenting cortical thinning in attention deficit/hyperactivity disorder. The paper concludes with recommendations for how to proceed when adaptive FDR results of this kind are encountered in practice. PMID:22842214
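The mechanism is easy to reproduce in a few lines: when the estimated null proportion pi0 is small, a Storey-style adaptive procedure effectively runs Benjamini-Hochberg at the inflated level alpha/pi0 and can then reject p-values that even uncorrected thresholding would retain. The simulated p-value mixture below is illustrative, not the paper's data.

```python
import numpy as np

def bh_reject(p, alpha):
    # Benjamini-Hochberg step-up: reject the k smallest p-values, where
    # k is the largest index with p_(k) <= alpha * k / m
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, bool)
    mask[order[:k]] = True
    return mask

def adaptive_bh_reject(p, alpha, lam=0.5):
    # Storey-style adaptive variant: estimate the null proportion pi0
    # from p-values above lam, then run BH at the level alpha / pi0
    pi0 = min(1.0, (p > lam).mean() / (1.0 - lam))
    return bh_reject(p, alpha / max(pi0, 1e-12))

rng = np.random.default_rng(0)
# 90% of hypotheses false (p-values piled near zero), 10% true nulls
p = np.concatenate([rng.beta(1.0, 50.0, 900), rng.uniform(0.0, 1.0, 100)])
a = 0.05
# adaptive FDR here rejects even more than uncorrected p < alpha
print((p < a).sum(), bh_reject(p, a).sum(), adaptive_bh_reject(p, a).sum())
```

With 90% of nulls false, pi0 is estimated near 0.1, the effective BH level becomes about 0.5, and the adaptive count overtakes the uncorrected count, which is exactly the paradox the abstract describes.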
Auto-adaptive statistical procedure for tracking structural health monitoring data
NASA Astrophysics Data System (ADS)
Smith, R. Lowell; Jannarone, Robert J.
2004-07-01
Whatever specific methods come to be preferred in the field of structural health/integrity monitoring, the associated raw data will eventually have to provide inputs for appropriate damage accumulation models and decision making protocols. The status of hardware under investigation eventually will be inferred from the evolution in time of the characteristics of this kind of functional figure of merit. Irrespective of the specific character of raw and processed data, it is desirable to develop simple, practical procedures to support damage accumulation modeling, status discrimination, and operational decision making in real time. This paper addresses these concerns and presents an auto-adaptive procedure developed to process data output from an array of many dozens of correlated sensors. These represent a full complement of information channels associated with typical structural health monitoring applications. The algorithm learns, in statistical terms, the normal behavior patterns of the system and, against that backdrop, recognizes and flags departures from expected behavior. This is accomplished using standard statistical methods, with certain proprietary enhancements employed to address issues of ill conditioning that may arise. Examples have been selected to illustrate how the procedure performs in practice. These are drawn from the fields of nondestructive testing, infrastructure management, and underwater acoustics. The demonstrations presented include the evaluation of historical electric power utilization data for a major facility, and a quantitative assessment of the performance benefits of net-centric, auto-adaptive computational procedures as a function of scale.
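The learn-normal-then-flag-departures idea can be sketched with running per-channel statistics. This toy monitor is a generic stand-in with made-up channel statistics and threshold, not the proprietary enhancements mentioned in the abstract.

```python
import numpy as np

class AdaptiveMonitor:
    # learns per-channel mean/variance online (Welford updates) and flags
    # readings more than k standard deviations from the learned normal;
    # a generic toy, not the paper's proprietary procedure
    def __init__(self, n_channels, k=4.0):
        self.n, self.k = 0, k
        self.mean = np.zeros(n_channels)
        self.m2 = np.zeros(n_channels)

    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def flags(self, x):
        if self.n < 2:
            return np.zeros_like(x, dtype=bool)
        std = np.sqrt(self.m2 / (self.n - 1))
        return np.abs(x - self.mean) > self.k * np.maximum(std, 1e-12)

rng = np.random.default_rng(2)
mon = AdaptiveMonitor(3)
for _ in range(500):                 # learn the normal behavior patterns
    mon.update(rng.normal([0.0, 5.0, -2.0], 0.1))
print(mon.flags(np.array([0.0, 5.0, 3.0])))   # only the third channel departs
```

A real implementation would additionally handle cross-channel correlation (and the ill-conditioning the abstract alludes to) rather than treating channels independently.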
Bhatt, Divesh; Bahar, Ivet
2012-01-01
We introduce an adaptive weighted-ensemble procedure (aWEP) for efficient and accurate evaluation of first-passage rates between states for two-state systems. The basic idea that distinguishes aWEP from conventional weighted-ensemble (WE) methodology is the division of the configuration space into smaller regions and equilibration of the trajectories within each region upon adaptive partitioning of the regions themselves into small grids. The equilibrated conditional/transition probabilities between each pair of regions lead to the determination of populations of the regions and the first-passage times between regions, which in turn are combined to evaluate the first-passage times for the forward and backward transitions between the two states. The application of the procedure to a non-trivial coarse-grained model of a 70-residue calcium binding domain of calmodulin is shown to efficiently yield information on the equilibrium probabilities of the two states as well as their first-passage times. Notably, the new procedure is significantly more efficient than the canonical implementation of the WE procedure, and this improvement becomes even more significant at low temperatures. PMID:22979844
Adaptive clinical trials in tuberculosis: applications, challenges and solutions.
Davies, G R; Phillips, P P J; Jaki, T
2015-06-01
Drug development for tuberculosis (TB) faces numerous practical obstacles, including the need for combination treatment with at least three drugs, reliance on possibly unrepresentative animal models which may not reproduce key features of human disease and the lack of a well-validated surrogate endpoint for stable cure. Pivotal Phase III trials are large, lengthy and expensive, and the funding and capacity to conduct them are limited worldwide. More rational methods for the selection of priority regimens for Phase III are urgently needed to avoid costly late-stage failures. We examine the suitability of adaptive clinical trial designs for drug development in TB, focusing on designs for Phase IIB and III trials, where we believe the biggest gains in efficiency can be made. Key areas that may be addressed by such designs are improvements in the selection of doses and combinations of drugs in early clinical development and in maximising the power of confirmatory trials in multidrug-resistant TB, where patient numbers and complexity pose practical limitations. We encourage trialists and regulators in this area to consider the advantages that may be offered by these designs and their potential to more effectively and rapidly identify better treatment regimens for TB patients worldwide.
An adaptive nonlinear solution scheme for reservoir simulation
Lett, G.S.
1996-12-31
Numerical reservoir simulation involves solving large, nonlinear systems of PDE with strongly discontinuous coefficients. Because of the large demands on computer memory and CPU, most users must perform simulations on very coarse grids. The average properties of the fluids and rocks must be estimated on these grids. These coarse grid "effective" properties are costly to determine, and risky to use, since their optimal values depend on the fluid flow being simulated. Thus, they must be found by trial-and-error techniques, and the more coarse the grid, the poorer the results. This paper describes a numerical reservoir simulator which accepts fine scale properties and automatically generates multiple levels of coarse grid rock and fluid properties. The fine grid properties and the coarse grid simulation results are used to estimate discretization errors with multilevel error expansions. These expansions are local, and identify areas requiring local grid refinement. These refinements are added adaptively by the simulator, and the resulting composite grid equations are solved by a nonlinear Fast Adaptive Composite (FAC) Grid method, with a damped Newton algorithm being used on each local grid. The nonsymmetric linear systems of equations resulting from Newton's method are in turn solved by a preconditioned Conjugate Gradients-like algorithm. The scheme is demonstrated by performing fine and coarse grid simulations of several multiphase reservoirs from around the world.
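The damped Newton step used on each local grid can be sketched generically: take the Newton direction, then backtrack (halve the step) until the residual norm decreases. The small test system below is hypothetical and unrelated to reservoir flow equations.

```python
import numpy as np

def damped_newton(F, J, x0, tol=1e-10, max_iter=50):
    # Newton's method with simple backtracking damping: halve the step
    # until the residual norm decreases, then accept
    x = x0.astype(float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        dx = np.linalg.solve(J(x), -r)
        t = 1.0
        while np.linalg.norm(F(x + t * dx)) >= np.linalg.norm(r) and t > 1e-6:
            t *= 0.5                  # damp the step
        x = x + t * dx
    return x

# hypothetical 2x2 nonlinear system: x0^2 + x1 = 3, x0 + x1^2 = 5
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
x = damped_newton(F, J, np.array([1.0, 1.0]))
print(x, np.linalg.norm(F(x)))   # converges to the root near (1, 2)
```

In the simulator described above, the dense solve would be replaced by the preconditioned Conjugate Gradients-like iteration on the nonsymmetric Jacobian system.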
Combined LAURA-UPS solution procedure for chemically-reacting flows. M.S. Thesis
NASA Technical Reports Server (NTRS)
Wood, William A.
1994-01-01
A new procedure seeks to combine the thin-layer Navier-Stokes solver LAURA with the parabolized Navier-Stokes solver UPS for the aerothermodynamic solution of chemically-reacting air flowfields. The interface protocol is presented and the method is applied to two slender, blunted shapes. Both axisymmetric and three dimensional solutions are included with surface pressure and heat transfer comparisons between the present method and previously published results. The case of Mach 25 flow over an axisymmetric six degree sphere-cone with a noncatalytic wall is considered to 100 nose radii. A stability bound on the marching step size was observed with this case and is attributed to chemistry effects resulting from the noncatalytic wall boundary condition. A second case with Mach 28 flow over a sphere-cone-cylinder-flare configuration is computed at both two and five degree angles of attack with a fully-catalytic wall. Surface pressures are seen to be within five percent with the present method compared to the baseline LAURA solution and heat transfers are within 10 percent. The effect of grid resolution is investigated and the nonequilibrium results are compared with a perfect gas solution, showing that while the surface pressure is relatively unchanged by the inclusion of reacting chemistry the nonequilibrium heating is 25 percent higher. The procedure demonstrates significant, order of magnitude reductions in solution time and required memory for the three dimensional case over an all thin-layer Navier-Stokes solution.
A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania.
Bradford, Kathryn; Abrahams, Leslie; Hegglin, Miriam; Klima, Kelly
2015-10-06
With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare data sets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.
A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania
NASA Astrophysics Data System (ADS)
Klima, K.; Abrahams, L.; Bradford, K.; Hegglin, M.
2015-12-01
With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare datasets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.
Prism Adaptation and Aftereffect: Specifying the Properties of a Procedural Memory System
Fernández-Ruiz, Juan; Díaz, Rosalinda
1999-01-01
Prism adaptation, a form of procedural learning, is a phenomenon in which the motor system adapts to new visuospatial coordinates imposed by prisms that displace the visual field. Once the prisms are withdrawn, the degree and strength of the adaptation can be measured by the spatial deviation of the motor actions in the direction opposite to the visual displacement imposed by the prisms, a phenomenon known as aftereffect. This study was designed to define the variables that affect the acquisition and retention of the aftereffect. Subjects were required to throw balls to a target in front of them before, during, and after lateral displacement of the visual field with prismatic spectacles. The diopters of the prisms and the number of throws were varied among different groups of subjects. The results show that the adaptation process is dependent on the number of interactions between the visual and motor system, and not on the time spent wearing the prisms. The results also show that the magnitude of the aftereffect is highly correlated with the magnitude of the adaptation, regardless of the diopters of the prisms or the number of throws. Finally, the results suggest that persistence of the aftereffect depends on the number of throws after the adaptation is complete. On the basis of these results, we propose that the system underlying this kind of learning stores at least two different parameters, the contents (measured as the magnitude of displacement) and the persistence (measured as the number of throws to return to the baseline) of the learned information. PMID:10355523
Crane, N K; Parsons, I D; Hjelmstad, K D
2002-03-21
Adaptive mesh refinement selectively subdivides the elements of a coarse user supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundreds of processors.
A new greedy randomised adaptive search procedure for Multiple Sequence Alignment.
Layeb, Abdesslem; Selmane, Marwa; Elhoucine, Maroua Bencheikh
2013-01-01
The Multiple Sequence Alignment (MSA) is one of the most challenging tasks in bioinformatics. It consists of aligning several sequences to show the fundamental relationships and common characteristics among a set of protein or nucleic acid sequences; the problem has been shown to be NP-complete when the number of sequences exceeds two. In this paper, a new incomplete algorithm based on a Greedy Randomised Adaptive Search Procedure (GRASP) is presented to deal with the MSA problem. The first GRASP phase is a new greedy algorithm based on the application of a new random progressive method and a hybrid global/local algorithm. The second phase is an adaptive refinement method based on consensus alignment. The results obtained are very encouraging and show the feasibility and effectiveness of the proposed approach.
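The two-phase GRASP metaheuristic named above can be sketched generically. The following is a minimal skeleton applied to a toy 0/1 knapsack rather than to MSA (the restricted candidate list, swap-based local search, and all data are illustrative assumptions, not the paper's progressive aligner or consensus refinement):

```python
import random

def grasp(items, capacity, iters=50, alpha=0.3, seed=1):
    """GRASP skeleton: randomized greedy construction + local search."""
    rng = random.Random(seed)
    best_sol, best_val = set(), 0
    for _ in range(iters):
        # Phase 1: greedy randomized construction via a restricted
        # candidate list (RCL) of the best remaining candidates.
        sol, weight = set(), 0
        cand = [i for i in range(len(items)) if items[i][1] <= capacity]
        while cand:
            cand.sort(key=lambda i: items[i][0] / items[i][1], reverse=True)
            rcl = cand[:max(1, int(alpha * len(cand)))]
            pick = rng.choice(rcl)         # random choice among the best
            sol.add(pick)
            weight += items[pick][1]
            cand = [j for j in cand
                    if j not in sol and weight + items[j][1] <= capacity]
        # Phase 2: local search -- swap one item out for a better one.
        improved = True
        while improved:
            improved = False
            for out in list(sol):
                for inn in range(len(items)):
                    if inn in sol:
                        continue
                    new_w = weight - items[out][1] + items[inn][1]
                    if new_w <= capacity and items[inn][0] > items[out][0]:
                        sol.remove(out); sol.add(inn); weight = new_w
                        improved = True
                        break
                if improved:
                    break
        val = sum(items[i][0] for i in sol)
        if val > best_val:
            best_sol, best_val = set(sol), val
    return best_sol, best_val

items = [(60, 10), (100, 20), (120, 30)]   # (value, weight) pairs
sol, val = grasp(items, capacity=50)
```

For MSA, the construction phase would build an alignment progressively and the improvement phase would refine it against a consensus, but the control flow is the same.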
A solution procedure based on the Ateb function for a two-degree-of-freedom oscillator
NASA Astrophysics Data System (ADS)
Cveticanin, L.
2015-06-01
In this paper, the vibration of a two-mass system with two degrees of freedom is considered. Two equal harmonic oscillators are coupled with a strongly nonlinear viscoelastic connection. The mathematical model of the system is a pair of coupled second-order, strongly nonlinear differential equations. Introducing new variables, the system transforms into two uncoupled equations: one of them is linear and the other has a strong nonlinearity. In the paper, a method for solving the strongly nonlinear equation is developed. Based on the exact solution of a purely nonlinear differential equation, we assumed a perturbed version of the solution with time-variable parameters. Because the solution is periodic, an averaging procedure is introduced. As a special case, vibrations of harmonic oscillators with a fractional-order nonlinear connection are considered. Depending on the order and coefficient of the nonlinearities, bounded or unbounded motion of the masses is determined. In addition, the conditions for a steady-state periodic solution are discussed. The procedure given in the paper is applied to investigate the vibration of a vocal cord, modeled as two harmonic oscillators with a strongly nonlinear, fractional-order viscoelastic connection. Using experimental data for the vocal cord, the parameters of the steady-state solution describing its flexural vibration are analyzed. The influence of the order of nonlinearity on the amplitude and frequency of vibration of the vocal cord is obtained. The analytical results are close to those obtained experimentally.
Zhu, Hongjian
2016-12-12
Seamless phase II/III clinical trials have attracted increasing attention recently. They mainly use Bayesian response adaptive randomization (RAR) designs. There has been little research into seamless clinical trials using frequentist RAR designs because of the difficulty in performing valid statistical inference following this procedure. The well-designed frequentist RAR designs can target theoretically optimal allocation proportions, and they have explicit asymptotic results. In this paper, we study the asymptotic properties of frequentist RAR designs with adjusted target allocation proportions, and investigate statistical inference for this procedure. The properties of the proposed design provide an important theoretical foundation for advanced seamless clinical trials. Our numerical studies demonstrate that the design is ethical and efficient.
1993-06-01
An adapted toxicity characteristic leaching procedure was used to determine toxicity of soils to Daphnia magna. Soil samples were collected from U.S...vol/vol). Contaminated soils, Munition residues, Daphnia magna, EC50 Toxicity.
An efficient solution procedure for the thermoelastic analysis of truss space structures
NASA Technical Reports Server (NTRS)
Givoli, D.; Rand, O.
1992-01-01
A solution procedure is proposed for the thermal and thermoelastic analysis of truss space structures in periodic motion. In this method, the spatial domain is first discretized using a consistent finite element formulation. Then the resulting semi-discrete equations in time are solved analytically by using Fourier decomposition. Full advantage is taken of geometrical symmetry. An algorithm is presented for the calculation of heat flux distribution. The method is demonstrated via a numerical example of a cylindrically shaped space structure.
Kim, S.
1994-12-31
Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods, in a Lagrangian framework, an efficient way for choosing the algorithm parameter as well as the algorithm convergence are indicated. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.
Multigrid iteration solution procedure for solving two-dimensional sets of coupled equations. [HTGR
Vondy, D.R.
1984-07-01
An iterative solution procedure was coded in Fortran to apply the multigrid iteration scheme to two-dimensional sets of coupled equations. The incentive for this effort was to make available an implemented procedure that may be readily used as an alternative to overrelaxation, of special interest in applications where the latter is ineffective. The multigrid process was found to be effective, although not always competitive with simple overrelaxation. Implementing an effective and flexible procedure is a time-consuming task. Absolute error-level evaluation was found to be essential to support methods assessment. A code source listing is presented to allow simple application when the computer memory size is adequate, avoiding data transfer from auxiliary storage. Included are the capabilities for one-dimensional rebalance and a driver program illustrating use requirements. Feedback of additional experience from application is anticipated.
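The multigrid iteration referred to above can be illustrated with a minimal geometric V-cycle for the 1-D Poisson problem (a Python sketch, not the report's Fortran code; the grid sizes, weighted-Jacobi smoother, injection restriction, and linear prolongation are all illustrative assumptions):

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    # weighted-Jacobi smoothing for -u'' = f on a uniform 1-D grid
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def v_cycle(u, f, h):
    """One V-cycle: pre-smooth, restrict the residual to a grid with
    twice the spacing, recurse, prolong the correction back, post-smooth."""
    u = jacobi(u, f, h, 3)
    if len(u) <= 3:
        return u
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)   # residual
    ec = v_cycle(np.zeros(len(u) // 2 + 1), r[::2].copy(), 2 * h)  # coarse solve
    e = np.zeros_like(u)
    e[::2] = ec                               # prolongation by
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])      # linear interpolation
    u += e
    return jacobi(u, f, h, 3)

n = 129                                  # 2**7 + 1 grid points
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)         # exact solution is sin(pi*x)
u = np.zeros(n)
for _ in range(15):
    u = v_cycle(u, f, h)
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
```

Each V-cycle damps the remaining error on all length scales at once, which is why multigrid can outperform overrelaxation when the latter stalls on smooth error components.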
A solution procedure for three-dimensional incompressible Navier-Stokes equation and its application
NASA Technical Reports Server (NTRS)
Kwak, D.; Chang, J. L. C.; Shanks, S. P.
1984-01-01
An implicit, finite-difference procedure is presented for numerically solving viscous incompressible flows. For convenience of applying the present method to three-dimensional problems, primitive variables, namely the pressure and velocities, are used. One of the major difficulties in solving incompressible flows that use primitive variables is caused by the pressure field solution method which is used as a mapping procedure to obtain a divergence-free velocity field. The present method is designed to accelerate the pressure-field solution procedure. This is achieved by the method of pseudocompressibility in which the time derivative pressure term is introduced into the mass conservation equation. The pressure wave propagation and the spreading of the viscous effect is investigated using simple test problems. The present study clarifies physical and numerical characteristics of the pseudo-compressible approach in simulating incompressible flows. Computed results for external and internal flows are presented to verify the present procedure. The present algorithm has been shown to be very robust and accurate if the selection of the pseudo-compressibility parameter has been made according to the guidelines given.
Development of a new computerized prism adaptation procedure for visuo-spatial neglect.
Champod, Anne Sophie; Taylor, Kristina; Eskes, Gail A
2014-09-30
Prism adaptation (PA) is a promising rehabilitation technique for visuo-spatial neglect. However, PA effects are often inconsistent across studies and the clinical application of this technique has been limited. The purpose of the present studies was to validate an easily standardized, home-friendly, and game-like PA technique (Peg-the-Mole) with healthy participants as a first step toward clinical development. In study 1, we used Peg-the-Mole with 32 participants wearing prism or sham goggles to investigate whether this procedure can induce significant after-effects on midline judgment and pointing tasks. In study 2, we compared Peg-the-Mole to a typical PA protocol in 42 participants for after-effects and level of enjoyment and to determine if the after-effects generalize to a throwing task. Study 1 showed that Peg-the-Mole induced significant after-effects on all outcome measures. Study 2 demonstrated that after-effects induced by Peg-the-Mole were equivalent to those induced by the typical PA procedure on all outcome measures. Peg-the-Mole was rated as more enjoyable than the typical procedure. Peg-the-Mole is a new computerized PA procedure that can be easily standardized and successfully used to induce significant after-effects. The present findings demonstrate that alterations can be made to the typical PA procedure to make it easier to use and more enjoyable, factors that could increase treatment availability, adherence and intensity.
Development of alternative plant vitrification solutions in droplet-vitrification procedures.
Kim, Haeng-Hoon; Lee, Yoon-Geol; Shin, Dong-Jin; Ko, Ho-Cheol; Gwag, Jae-Gyun; Cho, Eun-Gi; Engelmann, Florent
2009-01-01
This study aimed at developing alternative vitrification solutions, modified either from the original PVS2 vitrification solution by increasing glycerol and sucrose and/or decreasing dimethylsulfoxide and ethylene glycol concentration, or from the original PVS3 vitrification solution by decreasing glycerol and sucrose concentration. The application of these vitrification solutions to two model species, i.e. garlic and chrysanthemum in a droplet-vitrification procedure, revealed that PVS3 and variants were superior to PVS2 and variants and that most PVS2 variants were comparable to the original PVS2. Both species were sensitive to chemical toxicity of permeating cryoprotectants and chrysanthemum was also sensitive to osmotic stress. The lower recovery of cryopreserved garlic shoot apices dehydrated with PVS2 and variants compared with those dehydrated with PVS3 and variants seemed attributable to cytotoxicity of the vitrification solutions tested as well as to insufficient protection against freezing injury. Chrysanthemum shoot tips were very sensitive to both chemical toxicity and osmotic stress and therefore, induction of cytotoxicity tolerance during preconditioning was required for successful cryopreservation. The present study revealed that some of the PVS2 variants tested which have increased glycerol and sucrose and/or decreased dimethylsulfoxide and ethylene glycol concentration can be applied when explants are of medium size, tolerant to chemical toxicity and moderately sensitive to osmotic stress. PVS3 and variants can be used widely when samples are heterogeneous, of large size and/or very sensitive to chemical toxicity and tolerant to osmotic stress.
Construction and solution of an adaptive image-restoration model for removing blur and mixed noise
NASA Astrophysics Data System (ADS)
Wang, Youquan; Cui, Lihong; Cen, Yigang; Sun, Jianjun
2016-03-01
We establish a practical regularized least-squares model with adaptive regularization for dealing with blur and mixed noise in images. This model has some advantages, such as good adaptability for edge restoration and noise suppression due to the application of a priori spatial information obtained from a polluted image. We further focus on finding an important feature of image restoration using an adaptive restoration model with different regularization parameters in polluted images. A more important observation is that the gradient of an image varies regularly from one regularization parameter to another under certain conditions. Then, a modified graduated nonconvexity approach combined with a median filter version of a spatial information indicator is proposed to seek the solution of our adaptive image-restoration model by applying variable splitting and weighted penalty techniques. Numerical experiments show that the method is robust and effective for dealing with various blur and mixed noise levels in images.
Contact lens case cleaning procedures affect storage solution pH and osmolality.
Abengózar-Vela, Antonio; Pinto, Francisco J; González-Méijome, José M; Ralló, Miquel; Serés, Carmen; Calonge, Margarita; González-García, María J
2011-12-01
To investigate pH and osmolality changes in the solutions stored in contact lens (CL) cases, when different case rinsing and drying methods are used on a daily basis. Four multipurpose solutions (Opti-Free Express, Solo-Care Aqua, Re-Nu Multiplus, and Complete) and two hydrogen peroxide systems (AOsept and Oxysept) were studied. Cases were filled with the solutions and kept sealed. After 8 h, the cases underwent different rinsing (rinsing; non-rinsing) and drying (air drying-AD; lint-free tissue drying-LFTD; non-drying-ND) procedures on a daily basis. Five cases of each rinsing/drying combination for each solution were evaluated. The pH and osmolality of the case-contained solution were evaluated on the 1st, 7th, 15th, and then, 30th day. pH and osmolality increased significantly from day 1 to 30, except for Complete in which a significant decrease in pH was found. Rinsing vs. non-rinsing CL cases did not have any influence on the pH or osmolality, except for Oxysept, which showed a significantly higher osmolality value when cases were not rinsed. However, the drying procedure did influence both measurements; pH was significantly higher in the AD compared with the ND group (p < 0.05), and there was a significant difference in osmolality between the three drying conditions (p < 0.05), with the AD group showing the highest values, and the LFTD group showing the lowest. Osmolality and pH values are time and drying process-dependent in a CL case cleaning schedule. Regarding drying conditions, LFTD causes less increase in osmolality. Future studies should determine whether these changes might affect bacterial growth, lens parameters, or subject comfort during CL wear.
A new solution procedure for a nonlinear infinite beam equation of motion
NASA Astrophysics Data System (ADS)
Jang, T. S.
2016-10-01
The goal of this paper is a purely theoretical question, which is nevertheless fundamental in computational partial differential equations: can a linear solution structure for the equation of motion of an infinite nonlinear beam be directly manipulated to construct its nonlinear solution? Here, the equation of motion is modeled mathematically as a fourth-order nonlinear partial differential equation. To answer the question, a pseudo-parameter is first introduced to modify the equation of motion. An integral formalism for the modified equation is then found, which is taken as the linear solution structure. It enables us to formulate a nonlinear integral equation of the second kind, equivalent to the original equation of motion. The fixed-point approach, applied to the integral equation, results in a new iterative solution procedure for constructing the nonlinear solution of the original beam equation of motion, whose iterative process conveniently consists of just simple, regular numerical integration; i.e., it is fairly simple as well as straightforward to apply. A mathematical analysis of both the convergence and the uniqueness of the iterative procedure is carried out by proving the contractive character of a nonlinear operator. It follows, therefore, that this would be a useful nonlinear strategy for integrating the equation of motion of a nonlinear infinite beam, whereby the preceding question may be answered. In addition, it is worth noting that the pseudo-parameter introduced here plays a double role: first, it connects the original beam equation of motion with the integral equation; second, it is related to the convergence of the iterative method proposed here.
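The fixed-point construction outlined above, recasting the problem as u = T(u) with T contractive and iterating with plain numerical quadrature, can be sketched on a model nonlinear Fredholm integral equation (the forcing term, kernel, and factor lam below are illustrative assumptions, not the beam operator):

```python
import numpy as np

def picard_integral(f, K, lam, x, tol=1e-12, max_iter=200):
    """Fixed-point (Picard) iteration for u = f + lam * int K(x,t) sin(u(t)) dt,
    evaluated with the trapezoidal rule.  For small enough lam the map is a
    contraction, so the iterates converge to the unique solution."""
    w = np.full(len(x), x[1] - x[0])
    w[0] *= 0.5; w[-1] *= 0.5            # trapezoidal quadrature weights
    u = f(x).copy()
    for _ in range(max_iter):
        integral = K(x[:, None], x[None, :]) @ (w * np.sin(u))
        u_new = f(x) + lam * integral    # one application of the map T
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

x = np.linspace(0.0, 1.0, 201)
f = lambda s: s                          # forcing term (hypothetical)
K = lambda s, t: np.exp(-(s - t)**2)     # smooth kernel (hypothetical)
lam = 0.2                                # small factor -> contraction constant ~0.2
u = picard_integral(f, K, lam, x)
```

Here |K| <= 1 and |sin a - sin b| <= |a - b| give a Lipschitz constant of about lam for T, so by the Banach fixed-point theorem the iteration converges geometrically, the same contraction argument the paper uses for its operator.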
qPR: An adaptive partial-report procedure based on Bayesian inference
Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin
2016-01-01
Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6–8 cue delays or 600–800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations. PMID:27580045
qPR: An adaptive partial-report procedure based on Bayesian inference.
Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin
2016-08-01
Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6-8 cue delays or 600-800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations.
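The Bayesian adaptive loop the two qPR abstracts describe, keep a posterior over the decay-function parameters, present the stimulus that most reduces expected posterior entropy, update by Bayes' rule, can be sketched in a stripped-down one-parameter form (the real qPR fits three parameters; the guess rate, accuracy, grids, and simulated observer below are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: probability correct decays as p(t) = g + (a - g) * exp(-t / tau),
# with only the decay constant tau unknown.
g, a = 0.25, 0.95                        # guess rate and initial accuracy (assumed)
taus = np.linspace(0.05, 1.0, 96)        # hypothesis grid for tau (seconds)
delays = np.linspace(0.0, 1.0, 21)       # candidate cue delays (seconds)
true_tau = 0.3                           # simulated observer's decay constant

def p_correct(tau, t):
    return g + (a - g) * np.exp(-t / tau)

def expected_entropy(post, t):
    """Posterior entropy expected after testing delay t (lower = more informative)."""
    p = p_correct(taus, t)
    m = post @ p                         # predictive probability of a correct response
    h = 0.0
    for like, marg in ((p, m), (1.0 - p, 1.0 - m)):
        q = post * like / marg           # posterior given that response
        h -= marg * np.sum(q * np.log(q + 1e-12))
    return h

post = np.ones(len(taus)) / len(taus)    # uniform prior over tau
for _ in range(100):                     # 100 adaptive trials
    t = min(delays, key=lambda d: expected_entropy(post, d))
    correct = rng.random() < p_correct(true_tau, t)   # simulated response
    like = p_correct(taus, t) if correct else 1.0 - p_correct(taus, t)
    post = post * like
    post /= post.sum()

tau_hat = float(post @ taus)             # posterior-mean estimate of tau
```

Minimizing expected posterior entropy is equivalent to maximizing expected information gain, the selection rule named in the abstract.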
An edge-based solution-adaptive method applied to the AIRPLANE code
NASA Astrophysics Data System (ADS)
Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.
1995-11-01
Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.
An edge-based solution-adaptive method applied to the AIRPLANE code
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.
1995-01-01
Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.
Space-time mesh adaptation for solute transport in randomly heterogeneous porous media.
Dell'Oca, Aronne; Porta, Giovanni Michele; Guadagnini, Alberto; Riva, Monica
2017-07-05
We assess the impact of an anisotropic space and time grid adaptation technique on our ability to solve numerically solute transport in heterogeneous porous media. Heterogeneity is characterized in terms of the spatial distribution of hydraulic conductivity, whose natural logarithm, Y, is treated as a second-order stationary random process. We consider nonreactive transport of dissolved chemicals to be governed by an Advection Dispersion Equation at the continuum scale. The flow field, which provides the advective component of transport, is obtained through the numerical solution of Darcy's law. A suitable recovery-based error estimator is analyzed to guide the adaptive discretization. We investigate two diverse strategies guiding the (space-time) anisotropic mesh adaptation. These are respectively grounded on the definition of the guiding error estimator through the spatial gradients of: (i) the concentration field only; (ii) both concentration and velocity components. We test the approach for two-dimensional computational scenarios with moderate and high levels of heterogeneity, the latter being expressed in terms of the variance of Y. As quantities of interest, we key our analysis towards the time evolution of section-averaged and point-wise solute breakthrough curves, second centered spatial moment of concentration, and scalar dissipation rate. As a reference against which we test our results, we consider corresponding solutions associated with uniform space-time grids whose level of refinement is established through a detailed convergence study. We find a satisfactory comparison between results for the adaptive methodologies and such reference solutions, our adaptive technique being associated with a markedly reduced computational cost. Comparison of the two adaptive strategies tested suggests that: (i) defining the error estimator relying solely on concentration fields yields some advantages in grasping the key features of solute transport taking place within
An adaptive staircase procedure for the E-Prime programming environment.
Hairston, W David; Maldjian, Joseph A
2009-01-01
Many studies need to determine a subject's threshold for a given task. This can be achieved efficiently using an adaptive staircase procedure. While the logic and algorithms for staircases have been well established, the few pre-programmed routines currently available to researchers require at least moderate programming experience to integrate into new paradigms and experimental settings. Here, we describe a freely distributed routine developed for the E-Prime programming environment that can be easily integrated into any experimental protocol with only a basic understanding of E-Prime. An example experiment (visual temporal-order-judgment task) where subjects report the order of occurrence of two circles illustrates the behavior and consistency of the routine.
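The core n-down/1-up staircase logic such routines implement is compact. A minimal Python sketch follows (parameter names and the simulated observer are illustrative, standing in for the E-Prime trial loop; this is not the distributed routine itself):

```python
import random

def staircase_threshold(respond, start=100.0, step=8.0,
                        n_down=2, n_up=1, n_reversals=10):
    """n-down/1-up adaptive staircase (a sketch). `respond(level)` is a
    caller-supplied function returning True on a correct trial."""
    level, correct_run, wrong_run = start, 0, 0
    direction, reversals = None, []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run, wrong_run = correct_run + 1, 0
            if correct_run >= n_down:        # make the task harder
                correct_run = 0
                if direction == 'up':
                    reversals.append(level)
                direction = 'down'
                level -= step
        else:
            wrong_run, correct_run = wrong_run + 1, 0
            if wrong_run >= n_up:            # make the task easier
                wrong_run = 0
                if direction == 'down':
                    reversals.append(level)
                direction = 'up'
                level += step
    # Conventional estimate: average the last few reversal levels
    return sum(reversals[-6:]) / len(reversals[-6:])

# Simulated observer: always correct above a true threshold of 40,
# guessing (50% correct) below it.
random.seed(7)
estimate = staircase_threshold(lambda lvl: lvl > 40 or random.random() < 0.5)
```

A 2-down/1-up rule as above converges near the 70.7%-correct point of the psychometric function; the reversal-averaging at the end is one common convention among several.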
NASA Astrophysics Data System (ADS)
Delgado, J. M. P. Q.
2013-06-01
The aim of this work is to present a mathematical and experimental formulation of a new, simple procedure for the measurement of effective molecular diffusion coefficients of a salt solution in a water-saturated building material. This innovative experimental procedure and its mathematical formulation are presented in detail, and experimental values of the "effective" molecular diffusion coefficient of sodium chloride in a concrete sample (w/c = 0.45), at five different temperatures (between 10 and 30 °C) and four different initial NaCl concentrations (between 0.1 and 0.5 M), are reported. The experimental results obtained are in good agreement with the theoretical and experimental values of the molecular diffusion coefficient presented in the literature. An empirical correlation is presented for the prediction of the "effective" molecular diffusion coefficient over the entire range of temperatures and initial salt concentrations studied.
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
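The recursive-subdivision idea behind such Cartesian grids can be sketched compactly. The following hypothetical quadtree illustration refines a single root cell covering the whole domain near a circular "body" (the paper stores its cells in a binary tree and uses polygon clipping for cut cells; neither is reproduced here):

```python
class Cell:
    """Quadtree cell for Cartesian AMR: a sketch of recursive
    subdivision starting from one root cell over the whole domain."""
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine(self, flag, max_level):
        """Recursively subdivide wherever `flag(cell)` requests refinement."""
        if self.level < max_level and flag(self):
            h = self.size / 2.0
            self.children = [Cell(self.x + dx * h, self.y + dy * h, h,
                                  self.level + 1)
                             for dx in (0, 1) for dy in (0, 1)]
            for c in self.children:
                c.refine(flag, max_level)

    def leaves(self):
        """Leaf cells form the computational grid."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Refine near a circular "body" of radius 0.3 in the unit domain
def near_body(cell):
    cx, cy = cell.x + cell.size / 2, cell.y + cell.size / 2
    return abs(((cx - 0.5) ** 2 + (cy - 0.5) ** 2) ** 0.5 - 0.3) < cell.size

root = Cell(0.0, 0.0, 1.0)
root.refine(near_body, max_level=4)
```

The tree structure gives cell-to-cell connectivity for free (parent/child links), which is what makes solution-adaptive refinement and coarsening natural in this setting.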
Calculation procedures for potential and viscous flow solutions for engine inlets
NASA Technical Reports Server (NTRS)
Albers, J. A.; Stockman, N. O.
1973-01-01
The method and basic elements of computer solutions for both potential flow and viscous flow calculations for engine inlets are described. The procedure is applicable to subsonic conventional (CTOL), short-haul (STOL), and vertical takeoff (VTOL) aircraft engine nacelles operating in a compressible viscous flow. The calculated results compare well with measured surface pressure distributions for a number of model inlets. The paper discusses the uses of the program in both the design and analysis of engine inlets, with several examples given for VTOL lift fans, acoustic splitters, and for STOL engine nacelles. Several test support applications are also given.
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Singhal, A. K.; Tam, L. T.
1984-01-01
The capability of simulating three-dimensional two-phase reactive flows with combustion in liquid-fuelled rocket engines is demonstrated. This was accomplished by modifying an existing three-dimensional computer program (REFLAN3D) with an Eulerian-Lagrangian approach to simulate two-phase spray flow, evaporation and combustion. The modified code is referred to as REFLAN3D-SPRAY. The mathematical formulation of the fluid flow, heat transfer, combustion and two-phase flow interaction, the numerical solution procedure, and the boundary conditions and their treatment are described.
NASA Astrophysics Data System (ADS)
LaChapelle, J.
1996-09-01
A new axiomatic formulation of path integrals is used to construct a path integral solution of the Schrödinger equation in curvilinear coordinates. An important feature of the formalism is that a coordinate transformation in the variables of the wavefunction does not imply a change of variable of integration in the path integral. Consequently, a transformation from Euclidean to curvilinear coordinates is simple to handle; there is no need to introduce ``quantum corrections'' into the action functional. Furthermore, the paths are differentiable: hence, issues related to stochastic paths do not arise. The procedure for constructing the path integral solution of the Schrödinger equation is straightforward. The case of the Schrödinger equation in spherical coordinates for a free particle is presented in detail.
Software for the parallel adaptive solution of conservation laws by discontinuous Galerkin methods.
Flaherty, J. E.; Loy, R. M.; Shephard, M. S.; Teresco, J. D.
1999-08-17
The authors develop software tools for the solution of conservation laws using parallel adaptive discontinuous Galerkin methods. In particular, the Rensselaer Partition Model (RPM) provides parallel mesh structures within an adaptive framework to solve the Euler equations of compressible flow by a discontinuous Galerkin method (LOCO). Results are presented for a Rayleigh-Taylor flow instability for computations performed on 128 processors of an IBM SP computer. In addition to managing the distributed data and maintaining a load balance, RPM provides information about the parallel environment that can be used to tailor partitions to a specific computational environment.
Aguila-Camacho, Norelys; Duarte-Mermoud, Manuel A
2016-01-01
This paper presents the analysis of three classes of fractional differential equations appearing in the field of fractional adaptive systems, for the case when the fractional order is in the interval α ∈(0,1] and the Caputo definition for fractional derivatives is used. The boundedness of the solutions is proved for all three cases, and the convergence to zero of the mean value of one of the variables is also proved. Applications of the obtained results to fractional adaptive schemes in the context of identification and control problems are presented at the end of the paper, including numerical simulations which support the analytical results.
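For reference, the Caputo fractional derivative assumed in this class of analyses is defined, for 0 < α < 1, as:

```latex
D^{\alpha} x(t) = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t}
\frac{\dot{x}(\tau)}{(t-\tau)^{\alpha}}\, d\tau ,
\qquad 0 < \alpha < 1 ,
```

which reduces to the ordinary derivative as α → 1, consistent with the interval α ∈ (0,1] considered in the paper.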
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Solution of the Euler Equations
Anderson, R W; Elliott, N S; Pember, R B
2003-02-14
A new method that combines staggered grid arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the methods are driven by the need to reconcile traditional AMR techniques with the staggered variables and moving, deforming meshes associated with Lagrange based ALE schemes. We develop interlevel solution transfer operators and interlevel boundary conditions first in the case of purely Lagrangian hydrodynamics, and then extend these ideas into an ALE method by developing adaptive extensions of elliptic mesh relaxation techniques. Conservation properties of the method are analyzed, and a series of test problem calculations are presented which demonstrate the utility and efficiency of the method.
Fast multipole and space adaptive multiresolution methods for the solution of the Poisson equation
NASA Astrophysics Data System (ADS)
Bilek, Petr; Duarte, Max; Nečas, David; Bourdon, Anne; Bonaventura, Zdeněk
2016-09-01
This work focuses on the conjunction of the fast multipole method (FMM) with the space-adaptive multiresolution (MR) technique for grid adaptation. Since both methods, MR and FMM, provide a priori error estimates, both achieve O(N) computational complexity, and both operate on the same hierarchical space division, their conjunction represents a natural choice when designing a numerically efficient and robust strategy for time-dependent problems. Special attention is given to the use of these methods in the simulation of streamer discharges in air. We have designed an FMM Poisson solver on a multiresolution-adapted grid in 2D. The accuracy and the computational complexity of the solver have been verified for a set of manufactured solutions. We confirmed that the developed solver attains the desired accuracy, and that this accuracy is controlled only by the number of terms in the multipole expansion in combination with the multiresolution accuracy tolerance. The implementation has linear computational complexity O(N).
NASA Technical Reports Server (NTRS)
Jawerth, Bjoern; Sweldens, Wim
1993-01-01
We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.
Roberts, Nathan V.; Demkowicz, Leszek; Moser, Robert
2015-11-15
The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates—the inf-sup constants governing the convergence are mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.
NASA Astrophysics Data System (ADS)
Gioiella, Lucia; Altobelli, Rosaria; de Luna, Martina Salzano; Filippone, Giovanni
2016-05-01
The efficacy of chitosan-based hydrogels in the removal of dyes from aqueous solutions has been investigated as a function of different parameters. Hydrogels were obtained by gelation of chitosan with a non-toxic gelling agent based on an aqueous basic solution. The preparation procedure has been optimized in terms of chitosan concentration in the starting solution, gelling agent concentration and chitosan-to-gelling agent ratio. The goal is to properly select the material- and process-related parameters in order to optimize the performance of the chitosan-based dye adsorbent. First, the influence of such factors on the gelling process has been studied from a kinetic point of view. Then, the effects on the adsorption capacity and kinetics of the chitosan hydrogels obtained in different conditions have been investigated. A common food dye (Indigo Carmine) has been used for this purpose. Noticeably, although the disk-shaped hydrogels are in the bulk form, their adsorption capacity is comparable to that reported in the literature for films and beads. In addition, the bulk samples can be easily separated from the liquid phase after the adsorption process, which is highly attractive from a practical point of view. Compression tests reveal that the samples do not break up even after relatively large compressive strains. The obtained results suggest that fine tuning of the process parameters allows the production of mechanically resistant and highly adsorbing chitosan-based hydrogels.
ERIC Educational Resources Information Center
Colorado State Dept. of Education, Denver. Special Education Services Unit.
This document is intended to provide guidance in the delivery of motor services to Colorado students with impairments in movement, sensory feedback, and sensory motor areas. Presented first is a rationale for providing adapted physical education, occupational therapy, and/or physical therapy services. The next chapter covers definitions,…
An Adaptive Landscape Classification Procedure using Geoinformatics and Artificial Neural Networks
Coleman, Andre Michael
2008-06-01
The Adaptive Landscape Classification Procedure (ALCP), which links the advanced geospatial analysis capabilities of Geographic Information Systems (GISs) and Artificial Neural Networks (ANNs) and particularly Self-Organizing Maps (SOMs), is proposed as a method for establishing and reducing complex data relationships. Its adaptive and evolutionary capability is evaluated for situations where varying types of data can be combined to address different prediction and/or management needs such as hydrologic response, water quality, aquatic habitat, groundwater recharge, land use, instrumentation placement, and forecast scenarios. The research presented here documents and presents favorable results of a procedure that aims to be a powerful and flexible spatial data classifier that fuses the strengths of geoinformatics and the intelligence of SOMs to provide data patterns and spatial information for environmental managers and researchers. This research shows how evaluation and analysis of spatial and/or temporal patterns in the landscape can provide insight into complex ecological, hydrological, climatic, and other natural and anthropogenic-influenced processes. Certainly, environmental management and research within heterogeneous watersheds provide challenges for consistent evaluation and understanding of system functions. For instance, watersheds over a range of scales are likely to exhibit varying levels of diversity in their characteristics of climate, hydrology, physiography, ecology, and anthropogenic influence. Furthermore, it has become evident that understanding and analyzing these diverse systems can be difficult not only because of varying natural characteristics, but also because of the availability, quality, and variability of spatial and temporal data. Developments in geospatial technologies, however, are providing a wide range of relevant data, and in many cases, at a high temporal and spatial resolution. Such data resources can take the form of high
Karmali, Faisal; Chaudhuri, Shomesh E.; Yi, Yongwoo; Merfeld, Daniel M.
2015-01-01
When measuring thresholds, careful selection of stimulus amplitude can increase efficiency by increasing the precision of psychometric fit parameters (e.g., decreasing the fit parameter error bars). To find efficient adaptive algorithms for psychometric threshold (“sigma”) estimation, we combined analytic approaches, Monte Carlo simulations and human experiments for a one-interval, binary forced-choice, direction-recognition task. To our knowledge, this is the first time analytic results have been combined and compared with either simulation or human results. Human performance was consistent with theory and not significantly different from simulation predictions. Our analytic approach provides a bound on efficiency, which we compared against the efficiency of standard staircase algorithms, a modified staircase algorithm with asymmetric step sizes, and a maximum likelihood estimation (MLE) procedure. Simulation results suggest that optimal efficiency at determining threshold is provided by the MLE procedure targeting a fraction correct level of 0.92, an asymmetric 4-down, 1-up (4D1U) staircase targeting between 0.86 and 0.92 or a standard 6D1U staircase. Psychometric test efficiency, computed by comparing simulation and analytic results, was between 41%–58% for 50 trials for these three algorithms, reaching up to 84% for 200 trials. These approaches were 13%–21% more efficient than the commonly-used 3D1U symmetric staircase. We also applied recent advances to reduce accuracy errors using a bias-reduced fitting approach. Taken together, the results lend confidence that the assumptions underlying each approach are reasonable, and that human threshold forced-choice decision-making is modeled well by detection-theory models and mimics simulations based on detection theory models. PMID:26645306
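The MLE procedure evaluated here amounts to fitting a cumulative-Gaussian psychometric function to binary responses. A minimal sketch follows, using a grid search rather than a continuous optimizer and a simulated observer in place of human data (all names and the grid are illustrative):

```python
import math
import random

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mle_sigma(stimuli, responses, grid):
    """Maximum-likelihood estimate of the psychometric spread ("sigma")
    by grid search, for a binary direction-recognition task where
    p("rightward" response) = Phi(stimulus / sigma)."""
    best_sigma, best_ll = grid[0], -math.inf
    for sigma in grid:
        ll = 0.0
        for x, r in zip(stimuli, responses):
            p = min(max(normal_cdf(x / sigma), 1e-9), 1.0 - 1e-9)
            ll += math.log(p if r else 1.0 - p)
        if ll > best_ll:
            best_sigma, best_ll = sigma, ll
    return best_sigma

# Simulated observer with a true sigma of 2.0
random.seed(0)
true_sigma = 2.0
stimuli = [random.uniform(-6.0, 6.0) for _ in range(500)]
responses = [random.random() < normal_cdf(x / true_sigma) for x in stimuli]
grid = [0.5 * k for k in range(1, 11)]      # candidate sigmas 0.5 .. 5.0
est = mle_sigma(stimuli, responses, grid)
```

In an adaptive MLE algorithm of the kind simulated in the paper, the fitted sigma after each trial would also drive the choice of the next stimulus amplitude (e.g., targeting a fraction correct near 0.92); that feedback loop is omitted here.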
NASA Astrophysics Data System (ADS)
Wissmeier, L. C.; Barry, D. A.
2009-12-01
Computer simulations of water availability and quality play an important role in state-of-the-art water resources management. However, many of the most utilized software programs focus either on physical flow and transport phenomena (e.g., MODFLOW, MT3DMS, FEFLOW, HYDRUS) or on geochemical reactions (e.g., MINTEQ, PHREEQC, CHESS, ORCHESTRA). In recent years, several couplings between the two genres of programs have evolved in order to consider interactions between flow and biogeochemical reactivity (e.g., HP1, PHWAT). Software coupling procedures can be categorized as ‘close couplings’, where programs pass information via the memory stack at runtime, and ‘remote couplings’, where the information is exchanged at each time step via input/output files. The former generally involves modifications of software codes, and therefore expert programming skills are required. We present a generic recipe for remotely coupling the PHREEQC geochemical modeling framework and flow and solute transport (FST) simulators. The iterative scheme relies on operator splitting with continuous re-initialization of PHREEQC and the FST of choice at each time step. Since PHREEQC calculates the geochemistry of aqueous solutions in contact with soil minerals, the procedure is primarily designed for couplings to FSTs for liquid-phase flow in natural environments. It requires the accessibility of initial conditions and numerical parameters such as time and space discretization in the input text file for the FST, and control of the FST via commands to the operating system (batch on Windows; bash/shell on Unix/Linux). The coupling procedure is based on PHREEQC’s capability to save the state of a simulation, with all solid, liquid and gaseous species, as a PHREEQC input file by making use of the dump file option in the TRANSPORT keyword. The output from one reaction calculation step is therefore reused as input for the following reaction step where changes in element amounts due to advection
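The sequential operator-splitting idea underlying such remote couplings can be illustrated with a toy, fully self-contained example: each time step runs a transport sub-step, then re-initializes a reaction sub-step from its output. Here simple upwind advection stands in for the FST and first-order decay stands in for the PHREEQC reaction step (both are placeholders, not the real codes):

```python
def advect(c, courant=0.5):
    """Transport sub-step: first-order upwind advection, flow left to
    right; the inlet cell c[0] is held as a boundary value."""
    return [c[0]] + [c[i] - courant * (c[i] - c[i - 1])
                     for i in range(1, len(c))]

def react(c, k=0.1):
    """Reaction sub-step: toy first-order decay standing in for the
    geochemical solver (PHREEQC would be re-initialized from its
    dump file at this point in a real coupling)."""
    return [ci * (1.0 - k) for ci in c]

def coupled_step(c):
    # Sequential (non-iterative) operator splitting: transport, then
    # chemistry, each sub-step initialized from the other's output.
    return react(advect(c))

c = [1.0] + [0.0] * 9        # unit concentration pulse at the inlet
for _ in range(5):
    c = coupled_step(c)
```

In the actual recipe the two sub-steps are separate executables exchanging state through input/output files at every time step, but the splitting structure is the same.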
Mission to Mars: Adaptive Identifier for the Solution of Inverse Optical Metrology Tasks
NASA Astrophysics Data System (ADS)
Krapivin, Vladimir F.; Varotsos, Costas A.; Christodoulakis, John
2016-06-01
A human mission to Mars requires the solution of many problems, mainly linked to the safety of life, the reliable operational control of drinking water, and health care. The availability of liquid fuels is also an important issue, since the existing tools cannot fully provide the liquid fuel quantities required for the mission's return journey. This paper presents the development of new methods and technology for reliable, operational, and highly available chemical analysis of liquid solutions of various types. This technology is based on the employment of optical sensors (such as multi-channel spectrophotometers or spectroellipsometers and microwave radiometers) and the development of a database of spectral images for typical liquid solutions that could be the objects of life on Mars. This database exploits the adaptive recognition of optical images of liquids using specific algorithms based on spectral analysis, cluster analysis and methods for solving inverse optical metrology tasks.
Approximate boundary condition procedure for the two-dimensional numerical solution of vortex wakes
NASA Technical Reports Server (NTRS)
Weston, R. P.; Liu, C. H.
1982-01-01
Research on efficient computational methods for general vorticity fields has been conducted in connection with a need for basic research on vortex-dominated flows. The present investigation is concerned with the evolution of vortex wakes behind aircraft wings. An efficient procedure is presented for the calculation of the boundary values used in the numerical solution of the unsteady, incompressible, two-dimensional Navier-Stokes equations for an unbounded flow field. The extent of the computational grid can be reduced compared to methods utilizing standard boundary conditions, without loss of accuracy. The efficiencies realized make it feasible to calculate the vortex wake development for realistic wing configurations, including the merging of multiple vortices, for Reynolds numbers of about 10,000 based on wing chord.
The effect of pretreatment with an oxalic acid solution on marginal adaptation to enamel in vivo.
van Dijken, J W; Hörstedt, P
1998-07-01
New acids such as oxalic acid have been introduced as conditioning agents in the total-etch technique. There is concern about long-term retention of the acid on enamel in relation to the superficial etch effect. This in vivo study evaluated the marginal adaptation to enamel conditioned with either an oxalic acid solution or phosphoric acid, using the SEM replica technique. Twenty-four patients each received three Class III restorations, one of each type. Two cavity preparations were pretreated with the aluminum nitrate/oxalic acid/glycine solution 1 of the Gluma 2000 system. The first cavity was primed and sealed with Gluma 2000 solution 2, the second cavity with Gluma 3 and 4. The third cavity was conditioned with phosphoric acid (Gluma 1) and sealed with the bonding resin Gluma 4. All three cavities were restored with a hybrid resin composite (Pekafill). At baseline and after 1 year, replica impressions were made to study the margins with SEM. Semiquantitative analysis of the enamel interfaces was performed (x200 and x1000 magnifications). The marginal quality of the three restorative systems was compared and tested intraindividually. The three restorations exhibited good enamel marginal adaptation and a high percentage of gap-free margins at baseline, 96% to 97% of the total length of margins investigated. Marginal quality decreased significantly after 1 year for all three groups. Gap-free margins were observed in 81% to 85% of the marginal length. No significant differences were found among the groups. Despite the less pronounced etch pattern created by conditioning enamel with the oxalic acid solution, good enamel marginal quality was observed at both evaluation times, comparable to the marginal adaptation of the phosphoric acid-conditioned cavities.
ERIC Educational Resources Information Center
Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.
2006-01-01
The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…
Adaptive resolution simulation of an atomistic DNA molecule in MARTINI salt solution
NASA Astrophysics Data System (ADS)
Zavadlav, J.; Podgornik, R.; Melo, M. N.; Marrink, S. J.; Praprotnik, M.
2016-10-01
We present a dual-resolution model of a deoxyribonucleic acid (DNA) molecule in a bathing solution, where we concurrently couple atomistic bundled water and ions with the coarse-grained MARTINI model of the solvent. We use our fine-grained salt solution model as a solvent in the inner shell surrounding the DNA molecule, whereas the solvent in the outer shell is modeled by the coarse-grained model. The solvent entities can exchange between the two domains and adapt their resolution accordingly. We critically assess the performance of our multiscale model in adaptive resolution simulations of an infinitely long DNA molecule, focusing on the structural characteristics of the solvent around DNA. Our analysis shows that the adaptive resolution scheme does not produce any noticeable artifacts in comparison to a reference system simulated in full detail. The effect of using a bundled-SPC model, required for multiscaling, compared to the standard free SPC model is also evaluated. Our multiscale approach opens the way for large-scale applications of DNA and other biomolecules which require a large solvent reservoir to avoid boundary effects.
An adaptive wavelet-vaguelette algorithm for the solution of PDEs
Froehlich, J.; Schneider, K.
1997-01-15
The paper first describes a fast algorithm for the discrete orthonormal wavelet transform and its inverse that does not use the scaling function. This approach permits computing the decomposition of a function into a lacunary wavelet basis, i.e., a basis constituted of a subset of all basis functions up to a certain scale, without modification. The construction is then extended to operator-adapted biorthogonal wavelets. This is relevant for the solution of certain nonlinear evolutionary PDEs where a priori information about the significant coefficients is available. We pursue the approach based on the explicit computation of the scalewise contributions of the approximated function to the values at points of hierarchical grids. Here, we present an improved construction employing the cardinal function of the multiresolution. The new method is applied to the Helmholtz equation and illustrated by comparative numerical results. It is then extended to the solution of a nonlinear parabolic PDE with semi-implicit discretization in time and self-adaptive wavelet discretization in space. Results with full adaptivity of the spatial wavelet discretization are presented for a one-dimensional flame front as well as for a two-dimensional problem. 50 refs., 4 figs., 3 tabs.
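The notion of a lacunary (adaptive) wavelet basis can be illustrated with a plain orthonormal Haar transform: coefficients below a threshold are dropped, leaving only the subset of basis functions that actually carries the signal. This is a minimal sketch with Haar wavelets, not the paper's operator-adapted biorthogonal construction:

```python
import math

def haar(v):
    """Orthonormal Haar wavelet transform (input length a power of two).
    Returns [scaling coeff, coarsest detail, ..., finest details]."""
    out, s = [], list(v)
    while len(s) > 1:
        avg = [(s[2 * i] + s[2 * i + 1]) / math.sqrt(2.0)
               for i in range(len(s) // 2)]
        det = [(s[2 * i] - s[2 * i + 1]) / math.sqrt(2.0)
               for i in range(len(s) // 2)]
        out = det + out          # finer details go to the right
        s = avg
    return s + out

def lacunary(coeffs, eps):
    """Adaptive selection: keep only the (index, coefficient) pairs whose
    magnitude exceeds eps -- the retained indices define the lacunary basis."""
    return [(i, c) for i, c in enumerate(coeffs) if abs(c) > eps]

# A signal that is flat except for one sharp step: only a handful of
# wavelets near the step are significant.
signal = [0.0] * 12 + [1.0] * 4
kept = lacunary(haar(signal), eps=1e-8)
```

For this 16-sample step signal only three of the sixteen coefficients survive, which is the compression effect adaptive wavelet discretizations exploit near localized features such as flame fronts.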
Adaptive finite element methods for the solution of inverse problems in optical tomography
NASA Astrophysics Data System (ADS)
Bangerth, Wolfgang; Joshi, Amit
2008-06-01
Optical tomography attempts to determine a spatially variable coefficient in the interior of a body from measurements of light fluxes at the boundary. Like in many other applications in biomedical imaging, computing solutions in optical tomography is complicated by the fact that one wants to identify an unknown number of relatively small irregularities in this coefficient at unknown locations, for example corresponding to the presence of tumors. To recover them at the resolution needed in clinical practice, one has to use meshes that, if uniformly fine, would lead to intractably large problems with hundreds of millions of unknowns. Adaptive meshes are therefore an indispensable tool. In this paper, we will describe a framework for the adaptive finite element solution of optical tomography problems. It takes into account all steps starting from the formulation of the problem including constraints on the coefficient, outer Newton-type nonlinear and inner linear iterations, regularization, and in particular the interplay of these algorithms with discretizing the problem on a sequence of adaptively refined meshes. We will demonstrate the efficiency and accuracy of these algorithms on a set of numerical examples of clinical relevance related to locating lymph nodes in tumor diagnosis.
Triangle Based Adaptive Stencils for the Solution of Hyperbolic Conservation Laws
NASA Astrophysics Data System (ADS)
Durlofsky, Louis J.; Engquist, Bjorn; Osher, Stanley
1992-01-01
A triangle based adaptive difference stencil for the numerical approximation of hyperbolic conservation laws in two space dimensions is constructed. The novelty of the resulting scheme lies in the nature of the preprocessing of the cell averaged data, which is accomplished via a nearest neighbor linear interpolation followed by a slope limiting procedure. Two such limiting procedures are suggested. The resulting method is considerably more simple than other triangle based non-oscillatory approximations which, like this scheme, approximate the flux up to second-order accuracy. Numerical results for constant and variable coefficient linear advection, as well as for nonlinear flux functions (Burgers' equation and the Buckley-Leverett equation), are presented. The observed order of convergence, after local averaging, is from 1.7 to 2.0 in L1.
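The reconstruct-and-limit step is easiest to see in one space dimension. The sketch below is a 1D analogue with a minmod limiter for linear advection on a periodic grid (the paper's scheme works on 2D triangles with nearest-neighbor linear interpolation; this is only the underlying idea, not that scheme):

```python
def minmod(a, b):
    """Slope limiter: the smaller-magnitude slope if signs agree, else 0."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def advect_step(u, cfl=0.5):
    """One step of slope-limited upwind advection (speed > 0, periodic
    boundaries): reconstruct a limited linear profile in each cell,
    then take the interface value from the upwind cell."""
    n = len(u)
    slopes = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i])
              for i in range(n)]
    # Time-centered interface value from the upwind (left) cell
    flux = [u[i] + 0.5 * (1.0 - cfl) * slopes[i] for i in range(n)]
    return [u[i] - cfl * (flux[i] - flux[i - 1]) for i in range(n)]

u = [1.0 if 4 <= i < 8 else 0.0 for i in range(16)]   # square pulse
for _ in range(8):
    u = advect_step(u)
```

With the limiter active the scheme stays non-oscillatory (no new extrema beyond the initial 0 and 1) while remaining second-order in smooth regions, which is the property the triangle-based limiting procedures generalize to two dimensions.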
Space-time adaptive solution of inverse problems with the discrete adjoint method
NASA Astrophysics Data System (ADS)
Alexe, Mihai; Sandu, Adrian
2014-08-01
This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.
Patched based methods for adaptive mesh refinement solutions of partial differential equations
Saltzman, J.
1997-09-02
This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which was "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patched based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundary and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods and elliptically constrained flow solution methods with diffusion.
Ijabadeniyi, Oluwatosin Ademola; Mnyandu, Elizabeth
2017-04-13
The effectiveness of sodium dodecyl sulphate (SDS), sodium hypochlorite solution and levulinic acid in reducing the survival of heat adapted and chlorine adapted Listeria monocytogenes ATCC 7644 was evaluated. The results against heat adapted L. monocytogenes revealed that sodium hypochlorite solution was the least effective, achieving log reductions of 2.75, 2.94 and 3.97 log colony forming units (CFU)/mL for 1, 3 and 5 minutes, respectively. SDS was able to achieve an 8 log reduction for both heat adapted and chlorine adapted bacteria. When used against chlorine adapted L. monocytogenes, sodium hypochlorite solution achieved log reductions of 2.76, 2.93 and 3.65 log CFU/mL for 1, 3 and 5 minutes, respectively. Using levulinic acid on heat adapted bacteria achieved log reductions of 3.07, 2.78 and 4.97 log CFU/mL for 1, 3 and 5 minutes, respectively. On chlorine adapted bacteria levulinic acid achieved log reductions of 2.77, 3.07 and 5.21 log CFU/mL for 1, 3 and 5 minutes, respectively. Using a mixture of 0.05% SDS and 0.5% levulinic acid on heat adapted bacteria achieved log reductions of 3.13, 3.32 and 4.79 log CFU/mL for 1, 3 and 5 minutes, while on chlorine adapted bacteria it achieved 3.20, 3.33 and 5.66 log CFU/mL, respectively. Increasing contact time also increased log reduction for both test pathogens. A storage period of up to 72 hours resulted in progressive log reduction for both test pathogens. Results also revealed that there was a significant difference (P≤0.05) among contact times, storage times and sanitizers. Findings from this study can be used to select suitable sanitizers and contact times for heat and chlorine adapted L. monocytogenes in the fresh produce industry.
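For orientation, the log reductions quoted above are differences of base-10 logarithms of the CFU/mL counts before and after treatment; the counts in this sketch are hypothetical illustrations, not the study's data:

```python
import math

def log_reduction(cfu_before, cfu_after):
    """Log10 reduction between the initial and surviving CFU/mL counts."""
    return math.log10(cfu_before) - math.log10(cfu_after)

# Hypothetical example: 1e8 CFU/mL reduced to 1e5 CFU/mL is a 3-log reduction.
```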
An Adaptive QoS Routing Solution for MANET Based Multimedia Communications in Emergency Cases
NASA Astrophysics Data System (ADS)
Ramrekha, Tipu Arvind; Politis, Christos
A Mobile Ad hoc Network (MANET) is a wireless network with no fixed central authoritative routing entity. It relies entirely on collaborating nodes forwarding packets from source to destination. This paper describes the design, implementation and performance evaluation of CHAMELEON, an adaptive Quality of Service (QoS) routing solution, with improved delay and jitter performance, enabling multimedia communication for MANETs in extreme emergency situations such as forest fires and terrorist attacks as defined in the PEACE project. CHAMELEON is designed to adapt its routing behaviour according to the size of a MANET. The reactive Ad Hoc On-Demand Distance Vector (AODV) and proactive Optimized Link State Routing (OLSR) protocols are deemed appropriate for CHAMELEON through their performance evaluation in terms of delay and jitter for different MANET sizes in a building fire emergency scenario. CHAMELEON is then implemented in NS-2 and evaluated similarly. The paper concludes with a summary of findings so far and intended future work.
Can adaptive grid refinement produce grid-independent solutions for incompressible flows?
NASA Astrophysics Data System (ADS)
Wackers, Jeroen; Deng, Ganbo; Guilmineau, Emmanuel; Leroyer, Alban; Queutey, Patrick; Visonneau, Michel; Palmieri, Alexandro; Liverani, Alfredo
2017-09-01
This paper studies whether adaptive grid refinement combined with finite-volume simulation of the incompressible RANS equations can be used to obtain grid-independent solutions of realistic flow problems. It is shown that grid adaptation based on metric tensors can generate series of meshes for grid convergence studies in a straightforward way. For a two-dimensional airfoil and the flow around a tanker ship, the grid convergence of the observed forces is sufficiently smooth for numerical uncertainty estimation. Grid refinement captures the details of the local flow in the wake, which is shown to be grid converged on reasonably-sized meshes. Thus, grid convergence studies using automatic refinement are suitable for high-Reynolds incompressible flows.
NASA Technical Reports Server (NTRS)
Stricklin, J. A.; Haisler, W. E.; Von Riesemann, W. A.
1972-01-01
This paper presents an assessment of the solution procedures available for the analysis of inelastic and/or large deflection structural behavior. A literature survey is given which summarizes the contributions of other researchers to the analysis of structural problems exhibiting material nonlinearities and combined geometric-material nonlinearities. Attention is focused on evaluating the available computation and solution techniques. Each of the solution techniques is developed from a common equation of equilibrium in terms of pseudo forces. The solution procedures are applied to circular plates and shells of revolution in an attempt to compare and evaluate each with respect to computational accuracy, economy, and efficiency. Based on the numerical studies, observations and comments are made with regard to the accuracy and economy of each solution technique.
Pérez-Jordá, José M
2011-11-28
A series of improvements for the solution of the three-dimensional Schrödinger equation over a method introduced by Gygi [F. Gygi, Europhys. Lett. 19, 617 (1992); F. Gygi, Phys. Rev. B 48, 11692 (1993)] are presented. As in Gygi's original method, the solution (orbital) is expressed by means of plane waves in adaptive coordinates u, where u is mapped from Cartesian coordinates, u=f(r). The improvements implemented are threefold. First, maps are introduced that allow the application of the method to atoms and molecules without the assistance of the supercell approximation. Second, the electron-nucleus singularities are exactly removed, so that pseudo-potentials are no longer required. Third, the sampling error during integral evaluation is made negligible, which results in a true variational, second-order energy error procedure. The method is tested on the hydrogen atom (ground and excited states) and the H2+ molecule, resulting in milli-Hartree accuracy with a moderate number of plane waves.
Adaptive-mesh-based algorithm for fluorescence molecular tomography using an analytical solution
NASA Astrophysics Data System (ADS)
Wang, Daifa; Song, Xiaolei; Bai, Jing
2007-07-01
Fluorescence molecular tomography (FMT) has become an important method for in-vivo imaging of small animals. It has been widely used for tumor genesis, cancer detection, metastasis, drug discovery, and gene therapy. In this study, an algorithm for FMT is proposed to obtain accurate and fast reconstruction by combining an adaptive mesh refinement technique and an analytical solution of diffusion equation. Numerical studies have been performed on a parallel plate FMT system with matching fluid. The reconstructions obtained show that the algorithm is efficient in computation time, and they also maintain image quality.
A constrained backpropagation approach for the adaptive solution of partial differential equations.
Rudd, Keith; Di Muro, Gianluca; Ferrari, Silvia
2014-03-01
This paper presents a constrained backpropagation (CPROP) methodology for solving nonlinear elliptic and parabolic partial differential equations (PDEs) adaptively, subject to changes in the PDE parameters or external forcing. Unlike existing methods based on penalty functions or Lagrange multipliers, CPROP solves the constrained optimization problem associated with training a neural network to approximate the PDE solution by means of direct elimination. As a result, CPROP reduces the dimensionality of the optimization problem, while satisfying the equality constraints associated with the boundary and initial conditions exactly, at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and nonhomogeneous terms.
1990-08-22
Synthesis of Perfluorinated Ethers by Solution Phase Direct Fluorination: An Adaptation of the La-Mar Technique. Gordon Bennett Rutherford, B.S.; supervising professor: Richard J. Lagow. The synthesis of several perfluorinated ethers of pentaerythritol ... [remainder of the scanned record is illegible]
NASA Astrophysics Data System (ADS)
Ozcelikkale, Altug; Sert, Cuneyt
2012-05-01
Least-squares spectral element solutions of steady, two-dimensional, incompressible flows are obtained by approximating the velocity, pressure and vorticity variable set on Gauss-Lobatto-Legendre nodes. The Constrained Approximation Method is used for h- and p-type nonconforming interfaces of quadrilateral elements. Adaptive solutions are obtained using a posteriori error estimates based on the least-squares functional and spectral coefficients. Effective use of p-refinement to overcome the poor mass conservation drawback of the least-squares formulation, and successful combined use of h- and p-refinement to solve problems with geometric singularities, are demonstrated. Capabilities and limitations of the developed code are presented using Kovasznay flow, flow past a circular cylinder in a channel and backward-facing step flow.
Comparison of Facts and DPD-Steadifac Procedures for Free and Combined Chlorine in Aqueous Solution.
1980-01-01
[Garbled table-of-contents fragments: interference of nitrogen trichloride and monochloramine with the FACTS, DPD, and DPD-STEADIFAC procedures in synthetic waters.]
NASA Astrophysics Data System (ADS)
Dawes, W. N.
This paper describes some recent developments in the application of unstructured mesh, solution-adaptive methods to the solution of the three-dimensional Navier-Stokes equations in turbomachinery flows. By adopting a simple, pragmatic but systematic approach to mesh generation, the variety of simulations which can be attempted ranges from simple turbomachinery blade-blade primary paths towards complex secondary gas paths and can include the interactions between the two paths. By adopting a hierarchical data structure, mesh refinement and derefinement can be performed sufficiently economically that it becomes practical to perform unsteady flow simulations with zones of mesh refinement ‘following’ unsteady flow features, like vortices and wakes, through a coarse background mesh. The combined benefits of the approach result in a powerful analytical ability. Solutions for a wide range of steady flows are presented including a transonic compressor rotor, a centrifugal impellor, the internal coolant passage of a radial inflow turbine and a turbine disc-cavity flow. Unsteady solutions are presented for a cylinder shedding vortices and for a turbine wake/rotor interaction.
NASA Astrophysics Data System (ADS)
Oware, E. K.; Moysey, S. M. J.
2014-09-01
We investigate the potential for characterizing spatial moments of subsurface solute plumes from surface-based electrical resistivity images produced within a Proper Orthogonal Decomposition (POD) inversion framework. The existing POD algorithm is improved here to allow for adaptive conditioning of the POD training images on resistivity measurements. The efficacy of the suggested technique is evaluated with two hypothetical transport scenarios: synthetic #1 is based on the case where the target plume and POD training images follow the same (unimodal) plume morphology, whereas a second source location in synthetic #2 makes the target plume bimodal and inconsistent with the POD training images. The resistivity imaging results indicate that the adaptive algorithm efficiently and robustly updates the POD training images to obtain good quality resistivity images of the target plumes, both in the presence of data noise and when conceptual model inaccuracies exist in the training simulations. Spatial moments of the solute plumes recovered from the resistivity images are also favorable, with relative mass recovery errors in the range of 0.6-4.4%, center of mass errors in the range of 0.6-9.6%, and spatial variance errors in the range of 3.4-45% for cases where the voltage data had 0-10% noise. These results are consistent with or improved upon those reported in the literature. Comparison of the resistivity-based moment estimates to those obtained from direct concentration sampling suggests that for cases with good quality resistivity data (i.e., <3% noise), the imaging results provide more accurate moments until 6-10 multi-level sampling wells are installed. While the specific number of wells will depend on the actual field scenario, we suggest that this finding illustrates the general value of POD-based resistivity imaging techniques for non-invasively estimating the spatial moments of a solute plume.
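The spatial moments used above to summarize a plume (total mass, center of mass, spatial variance) can be computed directly from a gridded concentration image; a minimal sketch, assuming a uniform 2-D grid with unit cell area:

```python
def plume_moments(conc, xs, ys):
    """Zeroth, first, and second central spatial moments of a 2-D
    concentration field conc[i][j] sampled at coordinates (xs[i], ys[j])."""
    nx, ny = len(xs), len(ys)
    mass = sum(conc[i][j] for i in range(nx) for j in range(ny))
    # Center of mass: concentration-weighted mean coordinates.
    cx = sum(conc[i][j] * xs[i] for i in range(nx) for j in range(ny)) / mass
    cy = sum(conc[i][j] * ys[j] for i in range(nx) for j in range(ny)) / mass
    # Spatial variance about the center of mass, per axis.
    var_x = sum(conc[i][j] * (xs[i] - cx) ** 2 for i in range(nx) for j in range(ny)) / mass
    var_y = sum(conc[i][j] * (ys[j] - cy) ** 2 for i in range(nx) for j in range(ny)) / mass
    return mass, (cx, cy), (var_x, var_y)
```

Comparing such moments computed from a resistivity-derived image against those from direct sampling is the kind of evaluation the abstract reports.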
Shao, Liujiazi; Wang, Baoguo; Wang, Shuangyan; Mu, Feng; Gu, Ke
2013-01-01
OBJECTIVE: The ideal solution for fluid management during neurosurgical procedures remains controversial. The aim of this study was to compare the effects of a 7.2% hypertonic saline - 6% hydroxyethyl starch (HS-HES) solution and a 6% hydroxyethyl starch (HES) solution on clinical, hemodynamic and laboratory variables during elective neurosurgical procedures. METHODS: Forty patients scheduled for elective neurosurgical procedures were randomly assigned to the HS-HES group or the HES group. After the induction of anesthesia, patients in the HS-HES group received 250 mL of HS-HES (500 mL/h), whereas the patients in the HES group received 1,000 mL of HES (1000 mL/h). The monitored variables included clinical, hemodynamic and laboratory parameters. Chictr.org: ChiCTR-TRC-12002357 RESULTS: The patients who received the HS-HES solution had a significant decrease in the intraoperative total fluid input (p<0.01), the volume of Ringer's solution required (p<0.05), the fluid balance (p<0.01) and their dural tension scores (p<0.05). The total urine output, blood loss, bleeding severity scores, operation duration and hemodynamic variables were similar in both groups (p>0.05). Moreover, compared with the HES group, the HS-HES group had significantly higher plasma concentrations of sodium and chloride, increasing the osmolality (p<0.01). CONCLUSION: Our results suggest that HS-HES reduced the volume of intraoperative fluid required to maintain the patients undergoing surgery and led to a decrease in the intraoperative fluid balance. Moreover, HS-HES improved the dural tension scores and provided satisfactory brain relaxation. Our results indicate that HS-HES may represent a new avenue for volume therapy during elective neurosurgical procedures. PMID:23644851
Solution of three-dimensional groundwater flow equations using the strongly implicit procedure
Trescott, P.C.; Larson, S.P.
1977-01-01
A three-dimensional numerical model has been coded to use the strongly implicit procedure for solving the finite-difference approximations to the ground-water flow equation. The model allows for: (1) the representation of each aquifer and each confining bed by several layers; and (2) the use of an anisotropic hydraulic conductivity at each finite-difference block. The model is compared with a previously developed quasi-three-dimensional model by simulating the steady-state flow in an aquifer system in the Piceance Creek Basin, Colorado. The aquifer system consists of two aquifers separated by a leaky confining bed. The upper aquifer receives recharge from precipitation and is hydraulically connected to streams. For this problem, in order to make a valid comparison of results, a single layer was used to represent each aquifer. Furthermore, the need for a layer to represent the confining bed was eliminated by incorporating the effects of vertical leakage into the vertical component of the anisotropic hydraulic conductivity of the adjacent aquifers. Thus, the problem was represented by only two layers in each model with a total of about 2,100 equations. This restricted the effects of flow in the confining layer to the vertical component, but simulations with a third layer in the three-dimensional model permitting horizontal flow in the confining bed show that the two-layer approach is reasonable. Convergence to a solution of this problem takes about one minute of computer time on the IBM/155. This is about 30 times faster than the time required using the quasi-three-dimensional model. © 1977.
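The strongly implicit procedure itself rests on an approximate LU factorization and is too long to sketch here; for orientation, a minimal Jacobi relaxation for the same steady 2-D flow equation (far slower to converge than SIP, but solving the same kind of finite-difference system in the homogeneous, isotropic case):

```python
def jacobi_head(h, n_iter=500):
    """Jacobi relaxation for steady 2-D flow with homogeneous, isotropic
    conductivity: each interior head relaxes toward the average of its
    four neighbors; boundary values are held fixed."""
    rows, cols = len(h), len(h[0])
    for _ in range(n_iter):
        new = [row[:] for row in h]
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                new[i][j] = 0.25 * (h[i - 1][j] + h[i + 1][j] +
                                    h[i][j - 1] + h[i][j + 1])
        h = new
    return h
```

SIP replaces this point-by-point sweep with an implicit solve, which is what yields the roughly 30-fold speedup the abstract reports over the quasi-three-dimensional model.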
Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei
2017-01-01
OBJECTIVE Clinical research increasingly acknowledges the existence of significant procedural variation in surgical practice. This study explored surgeons’ perspectives regarding the influence of intersurgeon procedural variation on the teaching and learning of surgical residents. DESIGN AND SETTING This qualitative study used a grounded theory-based analysis of observational and interview data. Observational data were collected in 3 tertiary care teaching hospitals in Ontario, Canada. Semistructured interviews explored potential procedural variations arising during the observations and prompts from an iteratively refined guide. Ongoing data analysis refined the theoretical framework and informed data collection strategies, as prescribed by the iterative nature of grounded theory research. PARTICIPANTS Our sample included 99 hours of observation across 45 cases with 14 surgeons. Semistructured, audio-recorded interviews (n = 14) occurred immediately following observational periods. RESULTS Surgeons endorsed the use of intersurgeon procedural variations to teach residents about adapting to the complexity of surgical practice and the norms of surgical culture. Surgeons suggested that residents’ efforts to identify thresholds of principle and preference are crucial to professional development. Principles that emerged from the study included the following: (1) knowing what comes next, (2) choosing the right plane, (3) handling tissue appropriately, (4) recognizing the abnormal, and (5) making safe progress. Surgeons suggested that learning to follow these principles while maintaining key aspects of surgical culture, like autonomy and individuality, are important social processes in surgical education. CONCLUSIONS Acknowledging intersurgeon variation has important implications for curriculum development and workplace-based assessment in surgical education. Adapting to intersurgeon procedural variations may foster versatility in surgical residents. However, the
Error norms for the adaptive solution of the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Forester, C. K.
1982-01-01
The adaptive solution of the Navier-Stokes equations depends upon the successful interaction of three key elements: (1) the ability to flexibly select grid length scales in composite grids, (2) the ability to efficiently control residual error in composite grids, and (3) the ability to define reliable, convenient error norms to guide the grid adjustment and optimize the residual levels relative to the local truncation errors. An initial investigation was conducted to explore how to approach developing these key elements. Conventional error assessment methods were defined and defect and deferred correction methods were surveyed. The one dimensional potential equation was used as a multigrid test bed to investigate how to achieve successful interaction of these three key elements.
Physiology driven adaptivity for the numerical solution of the bidomain equations.
Whiteley, Jonathan P
2007-09-01
Previous work [Whiteley, J. P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006] derived a stable, semi-implicit numerical scheme for solving the bidomain equations. This scheme allows the timestep used when solving the bidomain equations numerically to be chosen by accuracy considerations rather than stability considerations. In this study we modify this scheme to allow an adaptive numerical solution in both time and space. The spatial mesh size is determined by the gradient of the transmembrane and extracellular potentials while the timestep is determined by the values of: (i) the fast sodium current; and (ii) the calcium release from junctional sarcoplasmic reticulum to myoplasm current. For two-dimensional simulations presented here, combining the numerical algorithm in the paper cited above with the adaptive algorithm presented here leads to an increase in computational efficiency by a factor of around 250 over previous work, together with significantly less computational memory being required. The speedup for three-dimensional simulations is likely to be more impressive.
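The timestep rule described above (take a small step while the fast currents are active) can be caricatured as follows; the threshold and the two step sizes are illustrative assumptions, not the paper's values:

```python
def choose_timestep(dt_max, dt_min, i_na, i_rel, threshold=1.0):
    """Pick a timestep from the magnitudes of the fast sodium current
    (i_na) and the SR calcium-release current (i_rel): use the small
    step whenever either fast current exceeds the threshold."""
    fast = max(abs(i_na), abs(i_rel))
    return dt_min if fast > threshold else dt_max
```

During the upstroke of the action potential the fast sodium current is large, so the small step is selected; during the plateau and rest phases the large step suffices, which is where the reported efficiency gain comes from.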
Zonal multigrid solution of compressible flow problems on unstructured and adaptive meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1989-01-01
The simultaneous use of adaptive meshing techniques with a multigrid strategy for solving the 2-D Euler equations in the context of unstructured meshes is studied. To obtain optimal efficiency, methods capable of computing locally improved solutions without recourse to global recalculations are pursued. A method for locally refining an existing unstructured mesh, without regenerating a new global mesh is employed, and the domain is automatically partitioned into refined and unrefined regions. Two multigrid strategies are developed. In the first, time-stepping is performed on a global fine mesh covering the entire domain, and convergence acceleration is achieved through the use of zonal coarse grid accelerator meshes, which lie under the adaptively refined regions of the global fine mesh. Both schemes are shown to produce similar convergence rates to each other, and also with respect to a previously developed global multigrid algorithm, which performs time-stepping throughout the entire domain, on each mesh level. However, the present schemes exhibit higher computational efficiency due to the smaller number of operations on each level.
NASA Astrophysics Data System (ADS)
Eric, L.; Vrugt, J. A.
2010-12-01
Spatially distributed hydrologic models potentially contain hundreds of parameters that need to be derived by calibration against a historical record of input-output data. The quality of this calibration strongly determines the predictive capability of the model and thus its usefulness for science-based decision making and forecasting. Unfortunately, high-dimensional optimization problems are typically difficult to solve. Here we present our recent developments to the Differential Evolution Adaptive Metropolis (DREAM) algorithm (Vrugt et al., 2009) to enable efficient solution of high-dimensional parameter estimation problems. The algorithm samples from an archive of past states (Ter Braak and Vrugt, 2008), and uses multiple-try Metropolis sampling (Liu et al., 2000) to decrease the required burn-in time for each individual chain and increase the efficiency of posterior sampling. This approach is hereafter referred to as MT-DREAM. We present results for 2 synthetic mathematical case studies and 2 real-world examples involving 10 to 240 parameters. Results for those cases show that our multiple-try sampler, MT-DREAM, can consistently find better solutions than other Bayesian MCMC methods. Moreover, MT-DREAM is admirably suited to be implemented and run on a parallel machine and is therefore a powerful method for posterior inference.
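DREAM and MT-DREAM build on the Metropolis accept/reject core; a minimal random-walk Metropolis sampler for a 1-D log-posterior (a sketch of the underlying mechanism only, not the DREAM algorithm with its chain archive and multiple-try proposals):

```python
import math
import random

def metropolis(log_post, x0, steps, scale=0.5, seed=0):
    """Minimal random-walk Metropolis sampler for a 1-D log-posterior.
    DREAM layers differential-evolution proposals, a past-states archive,
    and multiple-try sampling on top of this accept/reject core."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, scale)       # symmetric Gaussian proposal
        lpp = log_post(xp)
        # Accept with probability min(1, exp(lpp - lp)).
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Example target: a standard normal log-density (up to a constant).
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 2000)
```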
Impact of Metal Nanoform Colloidal Solution on the Adaptive Potential of Plants
NASA Astrophysics Data System (ADS)
Taran, Nataliya; Batsmanova, Ludmila; Kovalenko, Mariia; Okanenko, Alexander
2016-02-01
Nanoparticles are a known cause of oxidative stress and can thus induce an antistress response; the latter property was the purpose of our study. The effects of two concentrations (120 and 240 mg/l) of a colloidal solution of biogenic metal (Ag, Cu, Fe, Zn, Mn) nanoparticles on the antioxidant enzymes superoxide dismutase and catalase, on the factor of the antioxidant state, and on the content of thiobarbituric acid reactive substances (TBARSs) in soybean plants were studied under field conditions. It was found that oxidative processes developed in the variant with metal nanoparticle pre-sowing seed treatment at a concentration of 120 mg/l, as evidenced by a 12 % increase in TBARS content in photosynthetic tissues. Pre-sowing treatment at double concentration (240 mg/l) resulted in a decrease in oxidative processes (19 %), and pre-sowing treatment combined with vegetative treatment also contributed to the reduction of TBARS (10 %). Increased activity of superoxide dismutase (SOD) was observed in the variant with increased TBARS content; SOD activity remained at the control level in the two other variants. Catalase activity decreased in all variants. The factor of antioxidant activity was highest (0.3) in the variant with double nanoparticle treatment (pre-sowing and vegetative) at a concentration of 120 mg/l. Thus, the studied nanometal colloidal solution, when used in small doses over a certain time interval, can be considered a low-level stress factor which, in accordance with the hormesis principle, promoted an adaptive response.
Sukkay, Sasicha
2016-01-01
Based on a 2013 statistic published by the Thai with Disability Foundation, five percent of Thailand's population are disabled people. Six hundred thousand of them have a mobility disability, and the number is increasing every year. To support them, the Thai government has implemented a number of disability laws and policies. One of the policies is to improve disabled people's quality of life by adapting their houses to facilitate their activities. However, the policy has not been fully realized yet: there is still no specific guideline for housing adaptation for people with disabilities. This study is an attempt to address the lack of standardized criteria for such adaptation by developing a number of effective ones. Our development had 3 objectives: first, to identify the body functioning of a group of people with mobility disability according to the International Classification of Functioning (ICF) concept; second, to perform post-occupancy evaluation of this group and their houses; and third, with the collected data, to have a group of multidisciplinary experts cooperatively develop criteria for housing adaptation. The major findings were that room dimensions and furniture materials had a real impact on accessibility, and that the toilet and bedroom were the most difficult areas to access.
Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure
Salehi, M.; Smith, D.R.
2005-01-01
Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of the population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took a value of either 0 or 1 (e.g., indicating presence or absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.
Operational Characteristics of Adaptive Testing Procedures Using the Graded Response Model.
ERIC Educational Resources Information Center
Dodd, Barbara G.; And Others
1989-01-01
General guidelines are developed to assist practitioners in devising operational computerized adaptive testing systems based on the graded response model. The effects of the following major variables were examined: item pool size; stepsize used along the trait continuum until maximum likelihood estimation could be calculated; and stopping rule…
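As a rough illustration of the item-selection step in such adaptive testing systems, the sketch below uses the simpler two-parameter logistic (2PL) model as a stand-in for the graded response model: the next item administered is the one with maximum Fisher information at the current trait estimate. The item parameters are invented for the example.

```python
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta.

    a: discrimination parameters, b: difficulty parameters.
    """
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_item(theta, a, b, administered):
    """Pick the unadministered item with maximum information at theta."""
    info = info_2pl(theta, a, b)
    info[list(administered)] = -np.inf  # never re-administer an item
    return int(np.argmax(info))

# usage: tiny hypothetical pool; a highly discriminating item with
# difficulty near theta = 0 should be selected first
a = np.array([1.0, 2.0, 1.5, 0.8])
b = np.array([-1.0, 0.1, 1.5, 0.0])
item = select_item(0.0, a, b, administered=set())
```

In an operational system this selection step alternates with trait re-estimation (e.g., maximum likelihood) until a stopping rule, such as a target standard error, is met.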
NASA Astrophysics Data System (ADS)
Saanouni, Kkemais; Labergère, Carl; Issa, Mazen; Rassineux, Alain
2010-06-01
This work proposes a complete adaptive numerical methodology which uses `advanced' elastoplastic constitutive equations coupling thermal effects, large elasto-viscoplasticity with mixed nonlinear hardening, ductile damage and contact with friction, for 2D machining simulation. Fully coupled (strong coupling) thermo-elasto-visco-plastic-damage constitutive equations based on state variables under large plastic deformation, developed for metal forming simulation, are presented. The relevant numerical aspects concerning the local integration scheme as well as the global resolution strategy and the adaptive remeshing facility are briefly discussed. Applications are made to orthogonal metal cutting by chip formation and segmentation under high velocity. The interactions between hardening, plasticity, ductile damage and thermal effects, and their influence on adiabatic shear band formation including the formation of cracks, are investigated.
Bayesian Procedures for Identifying Aberrant Response-Time Patterns in Adaptive Testing
ERIC Educational Resources Information Center
van der Linden, Wim J.; Guo, Fanmin
2008-01-01
In order to identify aberrant response-time patterns on educational and psychological tests, it is important to be able to separate the speed at which the test taker operates from the time the items require. A lognormal model for response times with this feature was used to derive a Bayesian procedure for detecting aberrant response times.…
ERIC Educational Resources Information Center
Chang, Hua-Hua; And Others
Recently, R. Shealy and W. Stout (1993) proposed a procedure for detecting differential item functioning (DIF) called SIBTEST. Current versions of SIBTEST can only be used for dichotomously scored items, but this paper presents an extension to handle polytomous items. The paper presents: (1) a discussion of an appropriate definition of DIF for…
EEG-Based BCI System Using Adaptive Features Extraction and Classification Procedures
Mangia, Anna Lisa; Cappello, Angelo
2016-01-01
Motor imagery is a common control strategy in EEG-based brain-computer interfaces (BCIs). However, voluntary control of sensorimotor rhythms (SMRs) by imagining a movement can be unintuitive and usually requires a varying amount of user training. To boost the training process, a whole class of BCI systems has been proposed, providing feedback as early as possible while continuously adapting the underlying classifier model. The present work describes a cue-paced, EEG-based BCI system using motor imagery that falls within this category. Specifically, our adaptive strategy includes a simple scheme based on a common spatial pattern (CSP) method and support vector machine (SVM) classification. The system's efficacy was demonstrated by online testing on 10 healthy participants. In addition, we describe some features we implemented to improve the system's “flexibility” and “customizability,” namely, (i) a flexible training session, (ii) an unbalancing in the training conditions, and (iii) the use of adaptive thresholds when giving feedback. PMID:27635129
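A minimal sketch of the CSP step mentioned above, using the standard whitening-plus-eigendecomposition formulation; the trial shapes, channel counts, and normalization are illustrative, and the paper's adaptive retraining and SVM stage are omitted.

```python
import numpy as np

def csp_filters(X1, X2):
    """Common spatial pattern filters from two classes of EEG trials.

    X1, X2: (trials, channels, samples) arrays. Returns a (channels,
    channels) filter matrix W whose rows are sorted by decreasing
    class-1 variance ratio.
    """
    def mean_cov(X):
        # trace-normalized average spatial covariance over trials
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)

    C1, C2 = mean_cov(X1), mean_cov(X2)
    # whiten the composite covariance, then diagonalise class 1 there
    evals, evecs = np.linalg.eigh(C1 + C2)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T
    lam, B = np.linalg.eigh(P @ C1 @ P.T)
    order = np.argsort(lam)[::-1]  # large lam => high class-1 variance
    return B[:, order].T @ P

# usage: synthetic 4-channel trials where channel 0 is strong in class 1
rng = np.random.default_rng(1)
X1 = rng.standard_normal((20, 4, 100)) * np.array([3.0, 1, 1, 1])[:, None]
X2 = rng.standard_normal((20, 4, 100))
W = csp_filters(X1, X2)
```

Log-variances of the first and last few CSP projections are then the usual features fed to the classifier.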
Wager, Michel; Rigoard, Philippe; Bataille, Benoit; Guenot, Claude; Supiot, Aurélie; Blanc, Jean-Luc; Stal, Veronique; Pluchon, Claudette; Bouyer, Coline; Gil, Roger; Du Boisgueheneuc, Foucaud
2015-01-01
Many neurosurgical procedures are now performed with the patient aware in order to allow interactions between the patient and healthcare professionals. These procedures include awake brain surgery and spinal cord stimulation (SCS) lead placement for the treatment of refractory chronic back and leg pain. Neurosurgical procedures under local anaesthesia require optimal intraoperative cooperation of the patient and all personnel involved in surgery. In addition to accommodating this extra source of intraoperative information, all other necessary sources of data relevant to the procedure must be presented. The concept of an operating room dedicated to neurosurgical procedures performed aware, and accommodating these requirements, is presented, along with some evidence for improvements in outcome, derived from a series of patients implanted with spinal cord stimulators before and after the operating theatre was brought into service. In addition to the description, two videos demonstrate the facility online. Beyond this qualitative evidence, quantitative improvement in patient outcome is evidenced by the series presented: 91.3% of patients operated on in the aware-anaesthesia-dedicated theatre obtained adequate low back pain coverage, versus 60.0% for patients operated on before (p = 0.028). The concept of such an operating room is a step toward improving outcome by improving the presentation of all types of information to the operating room staff, most notably in the example of aware procedures.
Lazar, Ann A; Zerbe, Gary O
2011-12-01
Researchers often compare the relationship between an outcome and a covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the "significance region," or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA), the Johnson-Neyman procedure can be used to determine the significance region; for the hierarchical linear model (HLM), the Miyazaki and Maier (M-M) procedure has been suggested. However, neither procedure can accommodate nonnormally distributed data. Furthermore, the M-M procedure produces (downwardly) biased results because it uses the Wald test, does not control the inflated Type I error rate due to multiple testing, and requires implementing multiple software packages to determine the significance region. In this article, we address these limitations by proposing solutions for determining the significance region suitable for generalized linear (mixed) models (GLMs or GLMMs). These proposed solutions incorporate test statistics that resolve the biased results, control the Type I error rate using Scheffé's method, and use a single statistical software package to determine the significance region.
Kreitler, Jason; Stoms, David M; Davis, Frank W
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
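The gap between a greedy heuristic and exact optimization that the study quantifies can be reproduced on a toy budgeted-selection instance. Exhaustive search stands in here for the integer-programming solver, and the parcel utilities and costs are invented for illustration.

```python
from itertools import combinations

def greedy(utilities, costs, budget):
    """Benefit-cost-ratio greedy selection under a budget constraint."""
    order = sorted(range(len(utilities)),
                   key=lambda i: utilities[i] / costs[i], reverse=True)
    chosen, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            chosen.append(i)
            spent += costs[i]
    return chosen

def exact(utilities, costs, budget):
    """Exhaustive optimum; stands in for an integer-programming solve."""
    best, best_u = [], 0.0
    for r in range(len(utilities) + 1):
        for combo in combinations(range(len(utilities)), r):
            if sum(costs[i] for i in combo) <= budget:
                u = sum(utilities[i] for i in combo)
                if u > best_u:
                    best, best_u = list(combo), u
    return best

# usage: three hypothetical parcels where the greedy choice is suboptimal
utilities = [60.0, 100.0, 120.0]
costs = [10.0, 20.0, 30.0]
g = greedy(utilities, costs, 50.0)   # picks by ratio: parcels 0 and 1
e = exact(utilities, costs, 50.0)    # optimum is parcels 1 and 2
```

On this instance the greedy plan yields utility 160 versus the optimum's 220, the same kind of optimization gain (here well above the study's reported 12%) that motivates using an exact solver when problem size permits.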
A solution-adaptive mesh algorithm for dynamic/static refinement of two and three dimensional grids
NASA Technical Reports Server (NTRS)
Benson, Rusty A.; Mcrae, D. S.
1991-01-01
An adaptive grid algorithm has been developed in two and three dimensions that can be used dynamically with a solver or as part of a grid refinement process. The algorithm employs a transformation from the Cartesian coordinate system to a general coordinate space, which is defined as a parallelepiped in three dimensions. A weighting function, independent for each coordinate direction, is developed that will provide the desired refinement criteria in regions of high solution gradient. The adaptation is performed in the general coordinate space and the new grid locations are returned to the Cartesian space via a simple, one-step inverse mapping. The algorithm for relocation of the mesh points in the parametric space is based on the center of mass for distributed weights. Dynamic solution-adaptive results are presented for laminar flows in two and three dimensions.
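In one dimension, the weight-function redistribution described above reduces to equidistributing a gradient-based weight along the coordinate. The sketch below is a 1D simplification with an assumed arc-length-type weight, not the paper's 3D center-of-mass algorithm; the parameter `alpha` controlling refinement strength is illustrative.

```python
import numpy as np

def adapt_grid(x, u, alpha=1.0):
    """Redistribute grid points so that the arc-length-type weight
    w = sqrt(1 + alpha * (du/dx)^2) is equidistributed.

    x: monotone 1D grid; u: solution values at x. Returns a new grid
    with the same endpoints and point count.
    """
    w = np.sqrt(1.0 + alpha * np.gradient(u, x) ** 2)
    # cumulative weight gives a monotone map from x onto [0, 1]
    cw = np.concatenate(([0.0],
                         np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    cw /= cw[-1]
    # place new points at equal increments of the cumulative weight
    targets = np.linspace(0.0, 1.0, len(x))
    return np.interp(targets, cw, x)

# usage: cluster points around the steep tanh front at x = 0.5
x = np.linspace(0.0, 1.0, 41)
u = np.tanh(20.0 * (x - 0.5))
xa = adapt_grid(x, u, alpha=25.0)
```

Points migrate toward the high-gradient front while the endpoints stay fixed, which is the behavior the dynamic adaptation relies on in each coordinate direction.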
NASA Technical Reports Server (NTRS)
Rudy, D. H.; Morris, D. J.; Blanchard, D. K.; Cooke, C. H.; Rubin, S. G.
1975-01-01
The status of an investigation of four numerical techniques for the time-dependent compressible Navier-Stokes equations is presented. Results for free shear layer calculations in the Reynolds number range from 1000 to 81000 indicate that a sequential alternating-direction implicit (ADI) finite-difference procedure requires longer computing times to reach steady state than a low-storage hopscotch finite-difference procedure. A finite-element method with cubic approximating functions was found to require excessive computer storage and computation times. A fourth method, an alternating-direction cubic spline technique which is still being tested, is also described.
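The hopscotch idea compared above, alternating an explicit sweep and an "implicit" sweep over odd/even points so that the implicit formula can be evaluated point-by-point, is easiest to see on the 1D heat equation. This model problem is a stand-in for the Navier-Stokes systems in the study; the grid sizes and step ratio are illustrative.

```python
import numpy as np

def hopscotch_heat(u0, r, steps):
    """Odd-even hopscotch scheme for u_t = u_xx with r = dt/dx^2.

    Dirichlet boundaries are held at their initial values.
    """
    u = np.asarray(u0, float).copy()
    n = len(u)
    idx = np.arange(1, n - 1)
    for step in range(steps):
        new = u.copy()
        # explicit sweep on interior points with (i + step) even
        mask = (idx + step) % 2 == 0
        i = idx[mask]
        new[i] = u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
        # "implicit" sweep on the rest: both neighbours were just
        # updated, so the implicit formula is explicit point-by-point
        i = idx[~mask]
        new[i] = (u[i] + r * (new[i - 1] + new[i + 1])) / (1 + 2 * r)
        u = new
    return u

# usage: decay of a sine profile on [0, 1] with zero boundaries
x = np.linspace(0.0, 1.0, 21)
u = hopscotch_heat(np.sin(np.pi * x), r=1.0, steps=200)
```

The low storage requirement (one extra array, no tridiagonal solves) is what made hopscotch competitive with sequential ADI in the comparison above.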
Simulation of metal forming processes with a 3D adaptive remeshing procedure
NASA Astrophysics Data System (ADS)
Zeramdini, Bessam; Robert, Camille; Germain, Guenael; Pottier, Thomas
2016-10-01
In this paper, a fully adaptive 3D numerical methodology based on a tetrahedral element was proposed in order to improve the finite element simulation of any metal forming process. This automatic methodology was implemented in a computational platform which integrates a finite element solver, 3D mesh generation and a field transfer algorithm. The proposed remeshing method was developed in order to solve problems associated with the severe distortion of elements subject to large deformations, to concentrate the elements where the error is large and to coarsen the mesh where the error is small. This leads to a significant reduction in the computation times while maintaining simulation accuracy. In addition, in order to enhance the contact conditions, this method has been coupled with a specific operator to maintain the initial contact between the workpiece nodes and the rigid tool after each remeshing step. In this paper special attention is paid to the data transfer methods and the necessary adaptive remeshing steps are given. Finally, a numerical example is detailed to demonstrate the efficiency of the approach and to compare the results for the different field transfer strategies.
A goal-oriented adaptive procedure for the quasi-continuum method with cluster approximation
NASA Astrophysics Data System (ADS)
Memarnahavandi, Arash; Larsson, Fredrik; Runesson, Kenneth
2015-04-01
We present a strategy for adaptive error control for the quasi-continuum (QC) method applied to molecular statics problems. The QC-method is introduced in two steps: Firstly, introducing QC-interpolation while accounting for the exact summation of all the bond-energies, we compute goal-oriented error estimators in a straightforward fashion based on the pertinent adjoint (dual) problem. Secondly, for large QC-elements the bond energy and its derivatives are typically computed using an appropriate discrete quadrature with cluster approximations, which introduces a model error. The combined error is estimated approximately based on the same dual problem in conjunction with a hierarchical strategy for approximating the residual. As a model problem, we carry out atomistic-to-continuum homogenization of a graphene monolayer, where the carbon-carbon energy bonds are modeled via the Tersoff-Brenner potential, which involves next-nearest neighbor couplings. In particular, we are interested in computing the representative response for an imperfect lattice. Within the goal-oriented framework it becomes natural to choose the macro-scale (continuum) stress as the "quantity of interest". Two different formulations are adopted: the Basic formulation and the Global formulation. The presented numerical investigation shows the accuracy and robustness of the proposed error estimator and the pertinent adaptive algorithm.
NASA Astrophysics Data System (ADS)
Dragan, Vasile; Ivanov, Ivan
2011-04-01
In this article, the problem of the numerical computation of the stabilising solution of the game theoretic algebraic Riccati equation is investigated. The Riccati equation under consideration occurs in connection with the solution of the H ∞ control problem for a class of stochastic systems affected by state-dependent and control-dependent white noise and subjected to Markovian jumping. The stabilising solution of the considered game theoretic Riccati equation is obtained as a limit of a sequence of approximations constructed based on stabilising solutions of a sequence of algebraic Riccati equations of stochastic control with definite sign of the quadratic part. The proposed algorithm extends to this general framework the method proposed in Lanzon, Feng, Anderson, and Rotkowitz (Lanzon, A., Feng, Y., Anderson, B.D.O., and Rotkowitz, M. (2008), 'Computing the Positive Stabilizing Solution to Algebraic Riccati Equations with an Indefinite Quadratic Term via a Recursive Method,' IEEE Transactions on Automatic Control, 53, pp. 2280-2291). In the proof of the convergence of the proposed algorithm, different concepts associated with the generalised Lyapunov operators, such as stability, stabilisability and detectability, are widely involved. The efficiency of the proposed algorithm is demonstrated by several numerical experiments.
Uystepruyst, Ch; Coghe, J; Dorts, Th; Harmegnies, N; Delsemme, M-H; Art, T; Lekeux, P
2002-01-01
The purpose of this study was to evaluate the effects of three resuscitation procedures on respiratory and metabolic adaptation to extra-uterine life during the first 24 h after birth in healthy newborn calves. Twenty-four newborn calves were randomly grouped into four categories: six calves did not receive any specific resuscitation procedure and were considered as controls (C); six received pharyngeal and nasal suctioning immediately after birth by use of a hand-powered vacuum pump (SUC); six received five litres of cold water poured over their heads immediately after birth (CW) and six were housed in a calf pen with an infrared radiant heater for 24 h after birth (IR). Calves were examined at birth, 5, 15, 30, 45 and 60 min, 2, 3, 6, 12 and 24 h after birth and the following measurements were recorded: physical and clinical examination, arterial blood gas analysis, pulmonary function tests using the oesophageal balloon catheter technique, arterial and venous blood acid-base balance analysis, jugular venous blood sampling for determination of metabolic, haematological and passive immune transfer variables. SUC was accompanied by improved pulmonary function efficiency and by a less pronounced decrease in body temperature. The "head shaking movement" and the subsequent temporary increase in total pulmonary resistance as well as the greater lactic acidosis due to CW were accompanied by more efficient, but statistically non-significant, pulmonary gas exchanges. IR allowed maintenance of higher body temperature without requiring increased catabolism of energetic stores. IR also caused a change in breathing pattern which contributed to better distribution of the ventilation and to slightly improved gas exchange. The results indicate that use of SUC, CW and IR modified respiratory and metabolic adaptation during the first 24 h after birth without side-effects. These resuscitation procedures should be recommended for their specific indication, i.e. cleansing of fetal
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed-forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Validation of an adapted procedure to collect hair for cortisol determination in adolescents.
Ouellet-Morin, Isabelle; Laurin, Mélissa; Robitaille, Marie-Pier; Brendgen, Mara; Lupien, Sonia J; Boivin, Michel; Vitaro, Frank
2016-08-01
In the last decades, cortisol has been extensively studied in association with early exposure to adversity as well as in the etiology of a number of physical and mental problems. While saliva and blood samples allow the measurement of acute changes in cortisol secretion, hair samples are thought to provide a valid retrospective measure of chronic cortisol secretion over an extended period of time. Nevertheless, the existing protocol for hair collection involves considerable financial and logistical challenges when performed in large epidemiological studies. This study aimed to validate an adapted collection protocol asking participants to sample their hair at home and to send it back to our laboratory by regular mail. Participants were 34 teenagers between 17 and 18 years of age. They participated in two hair collections: (a) at home, with the help of someone they know, and (b) in our laboratory, with a trained research assistant. We noted a strong correlation between cortisol ascertained from hair collected at home and at the laboratory. No mean difference in cortisol levels could be detected between the two protocols. Moreover, we showed that a wide range of hair-related, sociodemographic, and lifestyle factors that may be associated with hair cortisol levels did not affect the association between cortisol measures derived from each protocol. Our study provides initial support that reliable measures of chronic cortisol secretion can be obtained by asking adolescents to collect a sample of their hair at home and send it to the laboratory by regular mail. This adapted protocol has considerable financial and logistical advantages in large epidemiological studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sweet Solutions to Reduce Procedural Pain in Neonates: A Meta-analysis.
Harrison, Denise; Larocque, Catherine; Bueno, Mariana; Stokes, Yehudis; Turner, Lucy; Hutton, Brian; Stevens, Bonnie
2017-01-01
Abundant evidence of sweet taste analgesia in neonates exists, yet placebo-controlled trials continue to be conducted. To review all trials evaluating sweet solutions for analgesia in neonates and to conduct cumulative meta-analyses (CMAs) on behavioral pain outcomes. (1) Data from 2 systematic reviews of sweet solutions for newborns; (2) searches ending 2015 of CINAHL, Medline, Embase, and psychINFO. Two authors screened studies for inclusion, conducted risk-of-bias ratings, and extracted behavioral outcome data for CMAs. CMA was performed using random effects meta-analysis. One hundred and sixty-eight studies were included; 148 (88%) included placebo/no-treatment arms. CMA for crying time included 29 trials (1175 infants). From the fifth trial in 2002, there was a statistically significant reduction in mean cry time for sweet solutions compared with placebo (-27 seconds, 95% confidence interval [CI] -51 to -4). By the final trial, CMA was -23 seconds in favor of sweet solutions (95% CI -29 to -18). CMA for pain scores included 50 trials (3341 infants). Results were in favor of sweet solutions from the second trial (-0.5, 95% CI -1 to -0.1). Final results showed a standardized mean difference of -0.9 (95% CI -1.1 to -0.7). We were unable to use or obtain data from many studies to include in the CMA. Evidence of sweet taste analgesia in neonates has existed since the first published trials, yet placebo/no-treatment controlled trials have continued to be conducted. Future neonatal pain studies need to select more ethically responsible control groups. Copyright © 2017 by the American Academy of Pediatrics.
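A cumulative meta-analysis of the kind described re-pools the accumulated evidence after each successive trial, showing when the effect first became statistically significant. The sketch below uses DerSimonian-Laird random-effects weights (a standard choice, though the review's exact model may differ) and invented trial data.

```python
import numpy as np

def dl_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate and 95% CI."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)       # Cochran's Q
    df = len(y) - 1
    tau2 = 0.0
    if df > 0:
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)     # between-trial variance
    ws = 1.0 / (v + tau2)
    est = np.sum(ws * y) / np.sum(ws)
    se = np.sqrt(1.0 / np.sum(ws))
    return est, est - 1.96 * se, est + 1.96 * se

def cumulative_meta(effects, variances):
    """Re-pool after each successive trial (cumulative meta-analysis)."""
    return [dl_pool(effects[: k + 1], variances[: k + 1])
            for k in range(len(effects))]

# usage: hypothetical mean cry-time differences (seconds) and variances
effects = [-30.0, -20.0, -25.0, -22.0, -24.0]
variances = [100.0, 64.0, 81.0, 49.0, 36.0]
path = cumulative_meta(effects, variances)
```

Each entry of `path` is the pooled estimate and CI after that many trials; the point at which the CI first excludes zero is the review's "no further placebo trials were needed" milestone.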
Tan, Onder; Atik, Bekir; Calka, Omer
2006-03-01
Melkersson-Rosenthal Syndrome (MRS) is a rare granulomatous disease characterized by a triad of orofacial swelling, facial palsy and lingua plicata, with a usually recurrent or progressive course. Orofacial swelling, the most frequent sign of MRS, leads to both esthetic and functional deformities. Because of its unknown etiology, rational treatment is difficult, and management of MRS remains symptomatic, aiming mainly to remove orofacial swelling. Although many nonsurgical therapies have been mentioned in the literature, none has proved uniformly and predictably successful to date. In this paper, we present different surgical procedures and their outcomes in a series of 4 cases with MRS. The procedures, including mucosa, submucosa and tangential muscle resection, crescent-shaped commissuroplasty, and facial liposuction, may be considered in the surgical armamentarium when orofacial swelling becomes persistent. We think that plastic surgeons may act more effectively in the management of the syndrome in the future.
Churchill, Nathan W; Strother, Stephen C
2013-11-15
The presence of physiological noise in functional MRI can greatly limit the sensitivity and accuracy of BOLD signal measurements, and produce significant false positives. There are two main types of physiological confounds: (1) high-variance signal in non-neuronal tissues of the brain including vascular tracts, sinuses and ventricles, and (2) physiological noise components which extend into gray matter tissue. These physiological effects may also be partially coupled with stimuli (and thus the BOLD response). To address these issues, we have developed PHYCAA+, a significantly improved version of the PHYCAA algorithm (Churchill et al., 2011) that (1) down-weights the variance of voxels in probable non-neuronal tissue, and (2) identifies the multivariate physiological noise subspace in gray matter that is linked to non-neuronal tissue. This model estimates physiological noise directly from EPI data, without requiring external measures of heartbeat and respiration, or manual selection of physiological components. The PHYCAA+ model significantly improves the prediction accuracy and reproducibility of single-subject analyses, compared to PHYCAA and a number of commonly-used physiological correction algorithms. Individual subject denoising with PHYCAA+ is independently validated by showing that it consistently increased between-subject activation overlap, and minimized false-positive signal in non gray-matter loci. The results are demonstrated for both block and fast single-event task designs, applied to standard univariate and adaptive multivariate analysis models. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann; Usab, William J., Jr.
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
A Procedure to Construct Exact Solutions of Nonlinear Fractional Differential Equations
Güner, Özkan; Cevikel, Adem C.
2014-01-01
We use the fractional transformation to convert nonlinear partial fractional differential equations into nonlinear ordinary differential equations. The Exp-function method is extended to solve fractional partial differential equations in the sense of the modified Riemann-Liouville derivative. We apply the Exp-function method to the time fractional Sharma-Tasso-Olver equation, the space fractional Burgers equation, and the time fractional fmKdV equation. As a result, we obtain some new exact solutions. PMID:24737972
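The fractional transformation mentioned above is commonly taken to be the fractional complex transform for the modified Riemann-Liouville derivative; under the usual assumptions (k and λ free constants, not necessarily the paper's exact notation) it reads

```latex
u(x,t) = U(\xi), \qquad
\xi = kx + \frac{\lambda\, t^{\alpha}}{\Gamma(1+\alpha)}
\qquad\Longrightarrow\qquad
D_t^{\alpha} u = \lambda\, U'(\xi), \quad u_x = k\, U'(\xi),
```

so the fractional PDE becomes an ordinary differential equation in ξ, to which the Exp-function ansatz U(ξ) = (Σₙ aₙ eⁿᵚ)/(Σₘ bₘ eᵐᵚ) can then be applied.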
Is a shorter bar an effective solution to avoid bar dislocation in a Nuss procedure?
Ghionzoli, Marco; Ciuti, Gastone; Ricotti, Leonardo; Tocchioni, Francesca; Lo Piccolo, Roberto; Menciassi, Arianna; Messineo, Antonio
2014-03-01
A variety of expedients to minimize bar dislocation in the Nuss procedure has been reported. The aims of this study were to create a mathematical model to define mechanical stresses acting on bars of different lengths in the Nuss procedure, and to apply this model to clinical scenarios. Finite element model analyses were used to outline the mechanical stresses and to mathematically define different cases. Data from a group of patients with procedures carried out using standard Nuss criteria (NC group; bars half an inch shorter than the distance between the mid-axillary lines) were compared with data from a second group treated by applying model-based suggestions (MS group; bars approximately 3 inches shorter than the distance between the mid-axillary lines). Mean patient age in the NC group (48 cases) was 16.4 years old (84% males). The mean operating time was 57 minutes, and the mean bar length was 14.19 inches. There were 5 cases (10.4%) of bar dislocation. Mean patient age in the MS group (88 cases) was 16.2 years old (87% males). The mean operating time was 43 minutes and the mean bar length was 11.67 inches. There was only 1 bar dislocation, a reduction from 10.4% (NC) to 1.1% (MS) odds ratio 0.0989 (confidence interval 0.0112 to 0.8727), p = 0.0373. A shorter Nuss bar reduces tension on the sutures applied at bar extremities. This leads to enhanced bar stability and a reduced risk that the bar will flip. The use of a shorter Nuss bar may reduce the incidence of bar dislocation. Copyright © 2014 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
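The reported odds ratio can be reproduced directly from the counts given in the abstract (1 dislocation among 88 MS patients vs. 5 among 48 NC patients); a minimal sketch:

```python
def odds_ratio(events_a, total_a, events_b, total_b):
    """Odds ratio of group A relative to group B from a 2x2 table."""
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

# bar dislocations: MS group 1/88, NC group 5/48
or_ms_vs_nc = odds_ratio(1, 88, 5, 48)  # ~0.0989, matching the reported value
```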
Koffarnus, Mikhail N; Deshpande, Harshawardhan U; Lisinski, Jonathan M; Eklund, Anders; Bickel, Warren K; LaConte, Stephen M
2017-08-10
Research on the rate at which people discount the value of future rewards has become increasingly prevalent as discount rate has been shown to be associated with many unhealthy patterns of behavior such as drug abuse, gambling, and overeating. fMRI research points to a fronto-parietal-limbic pathway that is active during decisions between smaller amounts of money now and larger amounts available after a delay. Researchers in this area have used different variants of delay discounting tasks and reported various contrasts between choice trials of different types from these tasks. For instance, researchers have compared 1) choices of delayed monetary amounts to choices of the immediate monetary amounts, 2) 'hard' choices made near one's point of indifference to 'easy' choices that require little thought, and 3) trials where an immediate choice is available versus trials where one is unavailable, regardless of actual eventual choice. These differences in procedure and analysis make comparison of results across studies difficult. In the present experiment, we designed a delay discounting task with the intended capability of being able to construct contrasts of all three comparisons listed above while optimizing scanning time to reduce costs and avoid participant fatigue. This was accomplished with an algorithm that customized the choice trials presented to each participant with the goal of equalizing choice trials of each type. We compared this task, which we refer to here as the individualized discounting task (IDT), to two other delay discounting tasks previously reported in the literature (McClure et al., 2004; Amlung et al., 2014) in 18 participants. Results show that the IDT can examine each of the three contrasts mentioned above, while yielding a similar degree of activation as the reference tasks. This suggests that this new task could be used in delay discounting fMRI studies to allow researchers to more easily compare their results to a majority of previous
A Simple Procedure for Constructing 5'-Amino-Terminated Oligodeoxynucleotides in Aqueous Solution
NASA Technical Reports Server (NTRS)
Bruick, Richard K.; Koppitz, Marcus; Joyce, Gerald F.; Orgel, Leslie E.
1997-01-01
A rapid method for the synthesis of oligodeoxynucleotides (ODNs) terminated by 5'-amino-5'-deoxythymidine is described. A 3'-phosphorylated ODN (the donor) is incubated in aqueous solution with 5'-amino-5'-deoxythymidine in the presence of N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide hydrochloride (EDC), extending the donor by one residue via a phosphoramidate bond. Template-directed ligation of the extended donor and an acceptor ODN, followed by acid hydrolysis, yields the acceptor ODN extended by a single 5'-amino-5'-deoxythymidine residue at its 5' terminus.
NASA Astrophysics Data System (ADS)
Fabian, H.; Hoelzer, W.; Herrmann, G.; Ristau, O.; Sklenar, H.; Welfle, H.
1990-03-01
The solution structures of the oligomeric DNAs d(GGGGCCCC)2 (I), d(CGCGCGCG)2 (II), d(GGGTACCC)2 (III), d(GGGATCCC)2 (IV), d(CGCTAGCG)2 (V), d(GGAATTCC)2 (VI), d(GGATCC)2 (VII) and d(GCATGC)2 (VIII) were analyzed. In low-salt solutions the overall conformation of these oligomers is predominantly B-like but not A-like as observed in crystals of (I), (III) and (IV) by others. The conformation of (I), (III) and (IV) containing homo (dG)·(dC) tracts is different from that of mixed-sequence DNAs like in (II) and (V) or in poly(dG-dC)·poly(dG-dC). Also (IV), (VI), (VII) and (VIII) involving restriction enzyme recognition sites reveal spectral differences. Attempts to correlate these spectral differences with structural differences within the B-family on the basis of calculated fine structures were started.
NASA Astrophysics Data System (ADS)
Sartoros, Christine; Salin, Eric D.
1998-05-01
Lines available while running a blank solution were used to monitor the analytical performance of an inductively coupled plasma atomic emission spectrometry (ICP-AES) system in real time. Using H and Ar lines and their signal-to-background ratios (SBRs), simple rules in the form of a prediction table were developed by inspection of the data. These rules could be used for predicting changes in radio-frequency power, carrier gas flow rates, and sample introduction rate. The performance of the prediction table was good but not excellent. Another set of rules in the form of a decision tree was developed in an automated fashion using the C4.5 induction engine. The performance of the decision tree was superior to that of the prediction table. It appears that blank spectral information can be used to predict with over 90% accuracy when an ICP-AES is breaking down. However, this is not as definitive in identifying the exact fault as some more exhaustive approaches involving the use of standard solutions.
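A prediction table of the kind described is essentially a small set of hand-written rules over the blank-solution SBRs. The sketch below is purely illustrative: the thresholds, line choices, and fault labels are assumptions, not the rules derived in the study:

```python
def predict_fault(h_sbr, ar_sbr):
    """Toy prediction table: map blank-solution signal-to-background
    ratios for H and Ar lines to a coarse diagnosis.  Thresholds and
    labels are illustrative assumptions only."""
    if h_sbr < 0.5 and ar_sbr < 0.5:
        return "low rf power suspected"
    if h_sbr < 0.5:
        return "sample introduction rate reduced"
    if ar_sbr < 0.5:
        return "carrier gas flow changed"
    return "system nominal"

print(predict_fault(1.2, 1.1))  # prints "system nominal"
```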
Yılmaz, Koray; Özyürek, Taha
2017-04-01
The aim of this study was to compare the amount of debris extruded from the apex during retreatment procedures with ProTaper Next (PTN; Dentsply Maillefer, Ballaigues, Switzerland), Reciproc (RCP; VDW, Munich, Germany), and Twisted File Adaptive (TFA; SybronEndo, Orange, CA) files and the duration of these retreatment procedures. Ninety upper central incisor teeth were prepared and filled with gutta-percha and AH Plus sealer (Dentsply DeTrey, Konstanz, Germany) using the vertical compaction technique. The teeth were randomly divided into 3 groups of 30 for removal of the root filling material with PTN, RCP, and TFA files. The apically extruded debris was collected in preweighed Eppendorf tubes. The time for gutta-percha removal was recorded. Data were statistically analyzed using Kruskal-Wallis and 1-way analysis of variance tests. The amount of debris extruded was RCP > TFA > PTN, respectively. Compared with the PTN group, the amount of debris extruded in the RCP group was statistically significantly higher (P < .001). There was no statistically significant difference among the RCP, TFA, and PTN groups regarding the time for retreatment (P > .05). Within the limitations of this in vitro study, all groups were associated with debris extrusion from the apex. The RCP file system led to higher levels of apical extrusion compared with the PTN file system. In addition, there was no significant difference among groups in the duration of the retreatment procedures. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data Format (CDF) served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
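The Granule concept can be illustrated with a few lines of XML generation. The element names, identifiers, and URL below are simplified placeholders; the real SPASE schema defines a richer required structure:

```python
import xml.etree.ElementTree as ET

def make_granule(resource_id, parent_id, access_url):
    """Build a skeletal SPASE-like Granule record for one data file.
    Tags are illustrative; consult the SPASE data model for real output."""
    spase = ET.Element("Spase")
    granule = ET.SubElement(spase, "Granule")
    ET.SubElement(granule, "ResourceID").text = resource_id
    ET.SubElement(granule, "ParentID").text = parent_id
    source = ET.SubElement(granule, "Source")
    ET.SubElement(source, "URL").text = access_url
    return ET.tostring(spase, encoding="unicode")

xml = make_granule("spase://Example/Granule/FILE_19970902",
                   "spase://Example/NumericalData/DATASET",
                   "https://example.gov/data/file_19970902.cdf")
```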
Experience of using human albumin solution 4.5% in 1195 therapeutic plasma exchange procedures.
Pusey, C; Dash, C; Garrett, M; Gascoigne, E; Gesinde, M; Gillanders, K; Wallington, T
2010-08-01
The aim of the study was to document the incidence of adverse reactions (ADRs) in subjects undergoing therapeutic plasma exchange with human albumin 4.5% solution (Zenalb 4.5) and to explore whether there were any differences in tolerability with a change from UK to US plasma and a subsequent manufacturing modification. Zenalb 4.5 was initially manufactured from recovered plasma from UK blood donations and later from source plasma from US donors. The modification was a salt diafiltration step. A prospective survey was conducted at three UK apheresis units; data from 154 subjects undergoing 1195 plasma exchanges using Zenalb 4.5 were collected. Adverse events with at least a possible relationship to treatment were recorded. There were 20 ADRs per 1195 exchanges (1.7%), experienced by 14 subjects (9.1%). The most common reaction was rigors in 17 exchanges (1.4%) and 12 subjects (7.8%). ADRs occurred in 0.8% (2/250) of plasma exchanges with UK plasma, 0.2% (1/539) using US plasma/original manufacturing method, 4.3% (16/370) using US plasma/modified method and 12.5% (1/8) using US plasma/mixed original and modified methods. Data were incomplete for the remaining 28 exchanges, but no ADRs were reported. Moreover, 17 ADRs occurred over a 14-month period and involved 10 batches manufactured from US plasma (1 original, 9 by modified method). The incidence then returned to the previously lower level. There was no explanation for this cluster of events. Overall, there was no evidence that plasma source or manufacturing method affected tolerability and it was concluded that human albumin 4.5% solution (Zenalb 4.5) is well tolerated during plasma exchange therapy.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1994-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.
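The recursive subdivision that builds such grids can be sketched with a small quadtree. This toy version (the class name and refinement indicator are illustrative) captures the tree storage and leaf traversal but none of the cut-cell or flow-solver machinery:

```python
class Cell:
    """Quadtree cell for recursive Cartesian subdivision (sketch only)."""
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine(self, needs_refinement, max_level=6):
        """Recursively split cells wherever the indicator asks for it."""
        if self.level < max_level and needs_refinement(self):
            half = self.size / 2.0
            for dx in (0.0, half):
                for dy in (0.0, half):
                    child = Cell(self.x + dx, self.y + dy, half, self.level + 1)
                    child.refine(needs_refinement, max_level)
                    self.children.append(child)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# refine toward a feature at the origin: split any cell whose corner touches it
root = Cell(0.0, 0.0, 1.0)
root.refine(lambda c: c.x == 0.0 and c.y == 0.0, max_level=3)
```

The tree gives cell-to-cell connectivity for free: a neighbor search walks up to a common ancestor and back down, which is the property the binary-tree storage above exploits.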
A procedure to create isoconcentration surfaces in low-chemical-partitioning, high-solute alloys.
Hornbuckle, B C; Kapoor, M; Thompson, G B
2015-12-01
A proximity histogram or proxigram is the prevailing technique of calculating 3D composition profiles of a second phase in atom probe tomography. The second phase in the reconstruction is delineated by creating an isoconcentration surface, i.e. the precipitate-matrix interface. The 3D composition profile is then calculated with respect to this user-defined isoconcentration surface. Hence, the selection of the correct isoconcentration surface is critical. In general, the preliminary selection of an isoconcentration value is guided by the visual observation of a chemically partitioned second phase. However, in low-chemical-partitioning systems, such a visual guide is absent. The lack of a priori composition information of the precipitate phase may further confound the issue. This paper presents a methodology of selecting an appropriate elemental species and subsequently obtaining an isoconcentration value to create an accurate isoconcentration surface that will act as the precipitate-matrix interface. We use the H-phase precipitate in the Ni-Ti-Hf shape memory alloy as our case study to illustrate the procedure.
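The proxigram calculation itself is just composition binned by signed distance from the chosen isoconcentration surface. A 1D toy sketch follows (the real computation works on 3D atom-probe reconstructions; the sign convention here is an assumption):

```python
import numpy as np

def proxigram(concentration, signed_distance, bin_width=0.5):
    """Average composition as a function of signed distance from the
    isoconcentration surface (negative = matrix side, positive =
    precipitate side, by the convention assumed here)."""
    c = np.asarray(concentration, dtype=float)
    d = np.asarray(signed_distance, dtype=float)
    bins = np.round(d / bin_width) * bin_width
    return {float(b): float(c[bins == b].mean()) for b in np.unique(bins)}

profile = proxigram([0.1, 0.1, 0.5, 0.9], [-1.0, -1.0, 0.0, 1.0], bin_width=1.0)
```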
This standard operating procedure describes the method used for preparing internal standard, surrogate recovery standard and calibration standard solutions for neutral analytes used for gas chromatography/mass spectrometry analysis.
Operational Challenges and Solutions with Implementation of an Adaptive Seamless Phase 2/3 Study
Spencer, Kimberly; Colvin, Kelly; Braunecker, Brad; Brackman, Marcia; Ripley, Joyce; Hines, Paul; Skrivanek, Zachary; Gaydos, Brenda; Geiger, Mary Jane
2012-01-01
A wide variety of operational issues were encountered with the planning and implementation of an adaptive, dose-finding, seamless phase 2/3 trial for a diabetes therapeutic. Compared with a conventional design, significant upfront planning was required, as well as earlier, more integrated cross-functional coordination. The existing infrastructure necessitated greater flexibility to meet the needs of the adaptive design. Rapid data acquisition, analysis, and reporting were essential to support the successful implementation of the adaptive algorithm. Drug supply for nine treatment arms had to be carefully managed across many sites worldwide. Details regarding these key operational challenges and others will be discussed along with resolutions taken to enable successful implementation of this adaptive, seamless trial. PMID:23294774
Adaptive Filtering for Large Space Structures: A Closed-Form Solution
NASA Technical Reports Server (NTRS)
Rauch, H. E.; Schaechter, D. B.
1985-01-01
In a previous paper Schaechter proposes using an extended Kalman filter to estimate adaptively the (slowly varying) frequencies and damping ratios of a large space structure. The time varying gains for estimating the frequencies and damping ratios can be determined in closed form so it is not necessary to integrate the matrix Riccati equations. After certain approximations, the time varying adaptive gain can be written as the product of a constant matrix times a matrix derived from the components of the estimated state vector. This is an important savings of computer resources and allows the adaptive filter to be implemented with approximately the same effort as the nonadaptive filter. The success of this new approach for adaptive filtering was demonstrated using synthetic data from a two mode system.
Harrison, Denise; Yamada, Janet; Adams-Webber, Thomasin; Ohlsson, Arne; Beyene, Joseph; Stevens, Bonnie
2015-05-05
Extensive evidence exists showing analgesic effects of sweet solutions for newborns and infants. It is less certain if the same analgesic effects exist for children one year to 16 years of age. This is an updated version of the original Cochrane review published in Issue 10, 2011 (Harrison 2011) titled Sweet tasting solutions for reduction of needle-related procedural pain in children aged one to 16 years. To determine the efficacy of sweet tasting solutions or substances for reducing needle-related procedural pain in children beyond one year of age. Searches were run to the end of June 2014. We searched the following databases: the Cochrane Central Register of Controlled Trials (CENTRAL), Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects (DARE), Cochrane Methodology Register, Health Technology Assessment, the NHS Economic Evaluation Database, MEDLINE, EMBASE, PsycINFO, and ACP Journal Club (all via OvidSP), and CINAHL (via EBSCOhost). We applied no language restrictions. Published or unpublished randomised controlled trials (RCT) in which children aged one year to 16 years received a sweet tasting solution or substance for needle-related procedural pain. Control conditions included water, non-sweet tasting substances, pacifier, distraction, positioning/containment, breastfeeding, or no treatment. Outcome measures included crying duration, composite pain scores, physiological or behavioral pain indicators, self-report of pain or parental or healthcare professional-report of the child's pain. We reported mean differences (MD), weighted mean difference (WMD), or standardized mean difference (SMD) with 95% confidence intervals (CI) using fixed-effect or random-effects models as appropriate for continuous outcome measures. We reported risk ratio (RR), risk difference (RD), and the number needed to treat to benefit (NNTB) for dichotomous outcomes. We used the I(2) statistic to assess between-study heterogeneity. We included one
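The MD pooling this review reports follows standard inverse-variance machinery; a minimal fixed-effect sketch with hypothetical trial numbers:

```python
def pooled_md(mds, ses):
    """Fixed-effect inverse-variance pooling of mean differences:
    each trial is weighted by 1/SE^2, and the pooled SE gives a
    normal-approximation 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * md for w, md in zip(weights, mds)) / sum(weights)
    se_pooled = (1.0 / sum(weights)) ** 0.5
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

# two hypothetical trials: MDs of -10 and -6 pain-scale points, SE = 2 each
est, (lo, hi) = pooled_md([-10.0, -6.0], [2.0, 2.0])
```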
Yancey, Paul H; Siebenaller, Joseph F
2015-06-01
Organisms experience a wide range of environmental factors such as temperature, salinity and hydrostatic pressure, which pose challenges to biochemical processes. Studies on adaptations to such factors have largely focused on macromolecules, especially intrinsic adaptations in protein structure and function. However, micromolecular cosolutes can act as cytoprotectants in the cellular milieu to affect biochemical function and they are now recognized as important extrinsic adaptations. These solutes, both inorganic and organic, have been best characterized as osmolytes, which accumulate to reduce osmotic water loss. Singly, and in combination, many cosolutes have properties beyond simple osmotic effects, e.g. altering the stability and function of proteins in the face of numerous stressors. A key example is the marine osmolyte trimethylamine oxide (TMAO), which appears to enhance water structure and is excluded from peptide backbones, favoring protein folding and stability and counteracting destabilizers like urea and temperature. Co-evolution of intrinsic and extrinsic adaptations is illustrated with high hydrostatic pressure in deep-living organisms. Cytosolic and membrane proteins and G-protein-coupled signal transduction in fishes under pressure show inhibited function and stability, while revealing a number of intrinsic adaptations in deep species. Yet, intrinsic adaptations are often incomplete, and those fishes accumulate TMAO linearly with depth, suggesting a role for TMAO as an extrinsic 'piezolyte' or pressure cosolute. Indeed, TMAO is able to counteract the inhibitory effects of pressure on the stability and function of many proteins. Other cosolutes are cytoprotective in other ways, such as via antioxidation. Such observations highlight the importance of considering the cellular milieu in biochemical and cellular adaptation. © 2015. Published by The Company of Biologists Ltd.
Warren, Rachel
2011-01-13
The papers in this volume discuss projections of climate change impacts upon humans and ecosystems under a global mean temperature rise of 4°C above preindustrial levels. Like most studies, they are mainly single-sector or single-region-based assessments. Even the multi-sector or multi-region approaches generally consider impacts in sectors and regions independently, ignoring interactions. Extreme weather and adaptation processes are often poorly represented and losses of ecosystem services induced by climate change or human adaptation are generally omitted. This paper addresses this gap by reviewing some potential interactions in a 4°C world, and also makes a comparison with a 2°C world. In a 4°C world, major shifts in agricultural land use and increased drought are projected, and an increased human population might increasingly be concentrated in areas remaining wet enough for economic prosperity. Ecosystem services that enable prosperity would be declining, with carbon cycle feedbacks and fire causing forest losses. There is an urgent need for integrated assessments considering the synergy of impacts and limits to adaptation in multiple sectors and regions in a 4°C world. By contrast, a 2°C world is projected to experience about one-half of the climate change impacts, with concomitantly smaller challenges for adaptation. Ecosystem services, including the carbon sink provided by the Earth's forests, would be expected to be largely preserved, with much less potential for interaction processes to increase challenges to adaptation. However, demands for land and water for biofuel cropping could reduce the availability of these resources for agricultural and natural systems. Hence, a whole system approach to mitigation and adaptation, considering interactions, potential human and species migration, allocation of land and water resources and ecosystem services, will be important in either a 2°C or a 4°C world.
Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D
2012-10-01
Accurate predictive simulations of complex real world applications require numerical approximations that, first, oppose the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus, optimal adaptation is achieved. Error estimates and numerical examples will be used to compare the efficiency of the method with several other techniques.
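The adaptive driver in such collocation methods is the hierarchical surplus: the mismatch between f at a new node and the interpolant built from its parents. A 1D hat-function sketch follows (the paper's multi-dimensional Riesz-basis construction is far more general):

```python
def surpluses_1d(f, max_level=4):
    """Hierarchical surpluses on nested dyadic grids over [0, 1]:
    w(x) = f(x) - (f(x - h) + f(x + h)) / 2 at each new midpoint node.
    Adaptive collocation refines only where |w| exceeds a tolerance."""
    out = {}
    for level in range(1, max_level + 1):
        h = 0.5 ** level
        for i in range(1, 2 ** level, 2):  # odd indices are new at this level
            x = i * h
            out[x] = f(x) - 0.5 * (f(x - h) + f(x + h))
    return out

s = surpluses_1d(lambda x: x * (1.0 - x), max_level=3)  # s[0.5] == 0.25
```

For a linear function every surplus vanishes, which is why smooth regions stay coarse while steep or kinked regions attract nodes.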
Brahme, Anders; Nyman, Peter; Skatt, Björn
2008-05-01
A four-dimensional (4D) laser camera (LC) has been developed for accurate patient imaging in diagnostic and therapeutic radiology. A complementary metal-oxide semiconductor camera images the intersection of a scanned fan shaped laser beam with the surface of the patient and allows real time recording of movements in a three-dimensional (3D) or four-dimensional (4D) format (3D + time). The LC system was first designed as an accurate patient setup tool during diagnostic and therapeutic applications but was found to be of much wider applicability as a general 4D photon "tag" for the surface of the patient in different clinical procedures. It is presently used as a 3D or 4D optical benchmark or tag for accurate delineation of the patient surface as demonstrated for patient auto setup, breathing and heart motion detection. Furthermore, its future potential applications in gating, adaptive therapy, 3D or 4D image fusion between most imaging modalities and image processing are discussed. It is shown that the LC system has a geometrical resolution of about 0.1 mm and that the rigid body repositioning accuracy is about 0.5 mm below 20 mm displacements, 1 mm below 40 mm and better than 2 mm at 70 mm. This indicates a slight need for repeated repositioning when the initial error is larger than about 50 mm. The positioning accuracy with standard patient setup procedures for prostate cancer at Karolinska was found to be about 5-6 mm when independently measured using the LC system. The system was found valuable for positron emission tomography-computed tomography (PET-CT) in vivo tumor and dose delivery imaging where it potentially may allow effective correction for breathing artifacts in 4D PET-CT and image fusion with lymph node atlases for accurate target volume definition in oncology. With a LC system in all imaging and radiation therapy rooms, auto setup during repeated diagnostic and therapeutic procedures may save around 5 min per session, increase accuracy and allow
de Abreu, Igor Renato Louro Bruno; Abrão, Fernando Conrado; Silva, Alessandra Rodrigues; Corrêa, Larissa Teresa Cirera; Younes, Riad Nain
2015-05-01
Currently, there is a tendency to perform surgical procedures via laparoscopic or thoracoscopic access. However, even with the impressive technological advancement in surgical materials, such as improvement in quality of monitors, light sources, and optical fibers, surgeons have to face simple problems that can greatly hinder surgery by video. One is the formation of "fog" or residue buildup on the lens, causing decreased visibility. Intracavitary techniques for cleaning surgical optics and preventing fog formation have been described; however, some of these techniques employ the use of expensive and complex devices designed solely for this purpose. Moreover, these techniques allow the cleaning of surgical optics when they become dirty, which does not prevent the accumulation of residue in the optics. To solve this problem we have designed a device that allows cleaning the optics with no surgical stops and prevents the fogging and residue accumulation. The objective of this study is to evaluate through experimental testing the effectiveness of a simple device that prevents the accumulation of residue and fogging of optics used in surgical procedures performed through thoracoscopic or laparoscopic access. Ex-vivo experiments were performed simulating the conditions of residue presence in surgical optics during a video surgery. The experiment consists in immersing the optics and catheter set connected to the IV line with crystalloid solution in three types of materials: blood, blood plus fat solution, and 200 mL of distilled water and 1 vial of methylene blue. The optics coupled to the device were immersed in 200 mL of each type of residue, repeating each immersion 10 times for each distinct residue for both thirty and zero degrees optics, totaling 420 experiments. A success rate of 98.1% was observed after the experiments, in these cases the device was able to clean and prevent the residue accumulation in the optics.
AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation
Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin
2016-04-19
We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least-squares formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
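The abstract names a generalized finite difference (GFD) discretization based on a weighted least-squares formulation but gives no formulas. The sketch below illustrates the general idea only: estimating a Laplacian at a point from scattered neighbors by a weighted least-squares fit of a local quadratic. The function name, inverse-distance weighting, and stencil are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def gfd_laplacian(center, neighbors, values, center_value):
    """Estimate the Laplacian of a scalar field at `center` from scattered
    neighbor samples via a weighted least-squares fit of a 2-D quadratic
    Taylor expansion (illustrative generalized-finite-difference step)."""
    d = neighbors - center                          # offsets (dx, dy)
    # Columns multiply the Taylor coefficients [fx, fy, fxx, fyy, fxy]
    A = np.column_stack([d[:, 0], d[:, 1],
                         0.5 * d[:, 0]**2, 0.5 * d[:, 1]**2,
                         d[:, 0] * d[:, 1]])
    b = values - center_value
    w = 1.0 / np.linalg.norm(d, axis=1)             # emphasize nearby nodes
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return coeffs[2] + coeffs[3]                    # fxx + fyy
```

Because the fit reproduces quadratics exactly, applying it to f(x, y) = x² + y² on any well-spread scattered stencil recovers the Laplacian value 4 up to roundoff.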
NASA Astrophysics Data System (ADS)
Verdugo, Francesc; Parés, Núria; Díez, Pedro
2014-08-01
This article presents a space-time adaptive strategy for transient elastodynamics. The method aims at computing an optimal space-time discretization such that the computed solution has an error in the quantity of interest below a user-defined tolerance. The methodology is based on a goal-oriented error estimate that requires accounting for an auxiliary adjoint problem. The major novelty of this paper is using modal analysis to obtain a proper approximation of the adjoint solution. The idea of using a modal-based description was introduced in a previous work for error estimation purposes. Here this approach is used for the first time in the context of adaptivity. With respect to the standard direct time-integration methods, the modal solution of the adjoint problem is highly competitive in terms of computational effort and memory requirements. The performance of the proposed strategy is tested in two numerical examples. The two examples are selected to be representative of different wave propagation phenomena, one being a 2D bulky continuum and the second a 2D domain representing a structural frame.
NASA Astrophysics Data System (ADS)
Wilcox, L. C.; Burstedde, C.; Ghattas, O.; Stadler, G.
2009-12-01
Our goal is to develop scalable methods for global full-waveform seismic inversion. The first step we have taken towards this goal is the creation of an accurate solver for the numerical simulation of wave propagation in media with fluid-solid interfaces. We have chosen a high-order discontinuous Galerkin (DG) method to effectively eliminate numerical dispersion, enabling simulation over many wave periods. Our numerical method uses a strain-velocity formulation that enables the solution of acoustic and elastic wave equations within the same framework. Careful attention has been directed at the formulation of a numerical flux that preserves high-order accuracy in the presence of material discontinuities and at fluid-solid interfaces. We use adaptive mesh refinement (AMR) to resolve local variations in wave speeds with appropriate element sizes. To study the numerical accuracy and convergence of the proposed method we compare with reference solutions of classical interface problems, including Rayleigh waves, Lamb waves, Stoneley waves, Scholte waves, and Love waves. We report strong and weak parallel scaling results for generation of the mesh and solution of the wave equations on adaptively resolved global Earth models.
Tests of an adaptive QM/MM calculation on free energy profiles of chemical reactions in solution.
Várnai, Csilla; Bernstein, Noam; Mones, Letif; Csányi, Gábor
2013-10-10
We present reaction free energy calculations using the adaptive buffered force mixing quantum mechanics/molecular mechanics (bf-QM/MM) method. The bf-QM/MM method combines nonadaptive electrostatic embedding QM/MM calculations with extended and reduced QM regions to calculate accurate forces on all atoms, which can be used in free energy calculation methods that require only the forces and not the energy. We calculate the free energy profiles of two reactions in aqueous solution: the nucleophilic substitution reaction of methyl chloride with a chloride anion and the deprotonation reaction of the tyrosine side chain. We validate the bf-QM/MM method against a full QM simulation, and show that it correctly reproduces both geometrical properties and free energy profiles of the QM model, while the electrostatic embedding QM/MM method using a static QM region comprising only the solute is unable to do so. The bf-QM/MM method is not explicitly dependent on the details of the QM and MM methods, so long as it is possible to compute QM forces in a small region and MM forces in the rest of the system, as in a conventional QM/MM calculation. It is simple, with only a few parameters needed to control the QM calculation sizes, and allows (but does not require) a varying and adapting QM region which is necessary for simulating solutions.
NASA Technical Reports Server (NTRS)
Swei, Sean; Cheung, Kenneth
2016-01-01
This project develops a novel aerostructure concept that takes advantage of emerging digital composite materials and manufacturing methods to build high-stiffness-to-density-ratio, ultra-light structures that can provide mission-adaptive and aerodynamically efficient future N+3/N+4 air vehicles.
ERIC Educational Resources Information Center
Riley, Barth B.; Dennis, Michael L.; Conrad, Kendon J.
2010-01-01
This simulation study sought to compare four different computerized adaptive testing (CAT) content-balancing procedures designed for use in a multidimensional assessment with respect to measurement precision, symptom severity classification, validity of clinical diagnostic recommendations, and sensitivity to atypical responding. The four…
NASA Astrophysics Data System (ADS)
Benfenati, A.; La Camera, A.; Carbillet, M.
2016-02-01
Aims: High-dynamic-range images of astrophysical objects are difficult to restore because very bright point-wise sources are surrounded by faint and smooth structures. We propose a method that enables the restoration of this kind of image by taking such sources into account and, at the same time, improving contrast enhancement in the final image. Moreover, the proposed approach can help detect the position of the bright sources. Methods: The classical variational scheme in the presence of Poisson noise seeks the minimum of a functional composed of the generalized Kullback-Leibler function and a regularization functional; the latter is employed to preserve certain characteristics in the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. The proposed scheme allows us to control the level of inexactness arising in the computed solution and permits an overestimation of the regularization parameter (which balances the trade-off between the Kullback-Leibler function and the Bregman distance). This aspect is fundamental, since estimating this kind of parameter is very difficult in the presence of Poisson noise. Results: The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources' positions are exactly known, the scheme provides very satisfactory results. In case of inexact knowledge of the sources' positions, it can in addition give useful information on the true positions. Finally, the inexact Bregman scheme can also be used when information about the binary star's position concerns a connected region instead of isolated pixels.
Triangle based adaptive stencils for the solution of hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Durlofsky, Louis J.; Engquist, Bjorn; Osher, Stanley
1992-01-01
A triangle-based total variation diminishing (TVD) scheme for the numerical approximation of hyperbolic conservation laws in two space dimensions is constructed. The novelty of the scheme lies in the nature of the preprocessing of the cell-averaged data, which is accomplished via a nearest-neighbor linear interpolation followed by a slope-limiting procedure. Two such limiting procedures are suggested. The resulting method is considerably simpler than other triangle-based non-oscillatory approximations which, like this scheme, approximate the flux to second-order accuracy. Numerical results for linear advection and Burgers' equation are presented.
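The interpolate-then-limit preprocessing described above is easiest to picture in its familiar one-dimensional analogue. The paper's triangle-based 2-D limiters are more involved, so the classical minmod sketch below is only indicative of the TVD idea, not the authors' scheme.

```python
import numpy as np

def minmod(a, b):
    """Classical minmod limiter: returns zero at local extrema (where the
    one-sided slopes disagree in sign), otherwise the smaller-magnitude slope."""
    return np.where(a * b <= 0.0, 0.0,
                    np.where(np.abs(a) < np.abs(b), a, b))

def limited_slopes(u):
    """Slope-limited linear reconstruction from cell averages u: the 1-D
    analogue of interpolating from neighbors and then limiting the slope."""
    du_left = u[1:-1] - u[:-2]      # backward differences at interior cells
    du_right = u[2:] - u[1:-1]      # forward differences at interior cells
    return minmod(du_left, du_right)
```

Near a smooth monotone profile the limiter passes the smaller one-sided slope through; at a local extremum it clips the slope to zero, which is what suppresses spurious oscillations.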
Adaptive solution of the biharmonic problem with shortly supported cubic spline-wavelets
NASA Astrophysics Data System (ADS)
Černá, Dana; Finěk, Václav
2012-09-01
In our contribution, we design a cubic spline-wavelet basis on the interval. The basis functions have small support, and the wavelets have vanishing moments. We show that stiffness matrices arising from the discretization of the two-dimensional biharmonic problem in the constructed wavelet basis have uniformly bounded, and indeed very small, condition numbers. We compare the quantitative behavior of the adaptive wavelet method using the constructed basis against other cubic spline-wavelet bases, and show the superiority of our construction.
Embedded pitch adapters: A high-yield interconnection solution for strip sensors
NASA Astrophysics Data System (ADS)
Ullán, M.; Allport, P. P.; Baca, M.; Broughton, J.; Chisholm, A.; Nikolopoulos, K.; Pyatt, S.; Thomas, J. P.; Wilson, J. A.; Kierstead, J.; Kuczewski, P.; Lynn, D.; Hommels, L. B. A.; Fleta, C.; Fernandez-Tejero, J.; Quirion, D.; Bloch, I.; Díez, S.; Gregor, I. M.; Lohwasser, K.; Poley, L.; Tackmann, K.; Hauser, M.; Jakobs, K.; Kuehn, S.; Mahboubi, K.; Mori, R.; Parzefall, U.; Clark, A.; Ferrere, D.; Gonzalez Sevilla, S.; Ashby, J.; Blue, A.; Bates, R.; Buttar, C.; Doherty, F.; McMullen, T.; McEwan, F.; O'Shea, V.; Kamada, S.; Yamamura, K.; Ikegami, Y.; Nakamura, K.; Takubo, Y.; Unno, Y.; Takashima, R.; Chilingarov, A.; Fox, H.; Affolder, A. A.; Casse, G.; Dervan, P.; Forshaw, D.; Greenall, A.; Wonsak, S.; Wormald, M.; Cindro, V.; Kramberger, G.; Mandić, I.; Mikuž, M.; Gorelov, I.; Hoeferkamp, M.; Palni, P.; Seidel, S.; Taylor, A.; Toms, K.; Wang, R.; Hessey, N. P.; Valencic, N.; Hanagaki, K.; Dolezal, Z.; Kodys, P.; Bohm, J.; Mikestikova, M.; Bevan, A.; Beck, G.; Milke, C.; Domingo, M.; Fadeyev, V.; Galloway, Z.; Hibbard-Lubow, D.; Liang, Z.; Sadrozinski, H. F.-W.; Seiden, A.; To, K.; French, R.; Hodgson, P.; Marin-Reyes, H.; Parker, K.; Jinnouchi, O.; Hara, K.; Bernabeu, J.; Civera, J. V.; Garcia, C.; Lacasta, C.; Marti i Garcia, S.; Rodriguez, D.; Santoyo, D.; Solaz, C.; Soldevila, U.
2016-09-01
A proposal to fabricate large area strip sensors with integrated, or embedded, pitch adapters is presented for the End-cap part of the Inner Tracker in the ATLAS experiment. To implement the embedded pitch adapters, a second metal layer is used in the sensor fabrication, for signal routing to the ASICs. Sensors with different embedded pitch adapters have been fabricated in order to optimize the design and technology. Inter-strip capacitance, noise, pick-up, cross-talk, signal efficiency, and fabrication yield have been taken into account in their design and fabrication. Inter-strip capacitance tests taking into account all channel neighbors reveal the important differences between the various designs considered. These tests have been correlated with noise figures obtained in full assembled modules, showing that the tests performed on the bare sensors are a valid tool to estimate the final noise in the full module. The full modules have been subjected to test beam experiments in order to evaluate the incidence of cross-talk, pick-up, and signal loss. The detailed analysis shows no indication of cross-talk or pick-up as no additional hits can be observed in any channel not being hit by the beam above 170 mV threshold, and the signal in those channels is always below 1% of the signal recorded in the channel being hit, above 100 mV threshold. First results on irradiated mini-sensors with embedded pitch adapters do not show any change in the interstrip capacitance measurements with only the first neighbors connected.
Advances in sensor adaptation to changes in ambient light: a bio-inspired solution - biomed 2010.
Dean, Brian; Wright, Cameron H G; Barrett, Stephen F
2010-01-01
Fly-inspired sensors have been shown to have many interesting qualities such as hyperacuity (or an ability to achieve movement resolution beyond the theoretical limit), extreme sensitivity to motion, and (through software simulation) image edge extraction, motion detection, and orientation and location of a line. Many of these qualities are beyond the ability of traditional computer vision sensors such as charge-coupled device (CCD) arrays. To obtain these characteristics, a prototype fly-inspired sensor has been built and tested in a laboratory environment and shows promise. Any sophisticated visual system, whether man made or natural, must adequately adapt to lighting conditions; therefore, light adaptation is a vital milestone in getting the fly eye vision sensor prototype working in real-world conditions. A design based on the common house fly, Musca domestica, was suggested in a paper presented to RMBS 2009 and showed an ability to remove 72-86% of effects due to ambient light changes. In this paper, a more advanced version of this design is discussed. This new design is able to remove 97-99% of the effects due to changes in ambient light, by more accurately approximating the light adaptation process used by the common house fly.
Panico, Francesco; Sagliano, Laura; Grossi, Dario; Trojano, Luigi
2016-06-01
The aim of this study is to clarify the specific role of the cerebellum during the prism adaptation procedure (PAP), considering its involvement in early prism exposure (i.e., in the recalibration process) and in the post-exposure phase (i.e., in the after-effect, related to spatial realignment). For this purpose we interfered with cerebellar activity by means of cathodal transcranial direct current stimulation (tDCS) while young healthy individuals performed a pointing task on a touch screen before, during, and after wearing base-left prism glasses. The distance from the target dot in each trial (in pixels) on the horizontal and vertical axes was recorded and served as an index of accuracy. Results on the horizontal axis, which was shifted by the prism glasses, revealed that participants who received cathodal stimulation showed increased rightward deviation from the actual position of the target while wearing prisms and a larger leftward deviation from the target after prism removal. Results on the vertical axis, in which no shift was induced, revealed a general trend in the two groups to improve accuracy through the different phases of the task, and a trend, more visible in cathodally stimulated participants, to worsen accuracy from the first to the last movements in each phase. The data on the horizontal axis confirm that the cerebellum is involved in all stages of the PAP, contributing to the early strategic recalibration process as well as to spatial realignment. On the vertical axis, the improving performance across the different stages of the task and the worsening accuracy within each task phase can be ascribed, respectively, to a learning process and to task-related fatigue.
Brantley, P S
2006-08-08
The double spherical harmonics angular approximation in the lowest order, i.e. double P{sub 0} (DP{sub 0}), is developed for the solution of time-dependent non-equilibrium grey radiative transfer problems in planar geometry. Although the DP{sub 0} diffusion approximation is expected to be less accurate than the P{sub 1} diffusion approximation at and near thermodynamic equilibrium, the DP{sub 0} angular approximation can more accurately capture the complicated angular dependence near a non-equilibrium radiation wave front. In addition, the DP{sub 0} approximation should be more accurate in non-equilibrium optically thin regions where the positive and negative angular domains are largely decoupled. We develop an adaptive angular technique that locally uses either the DP{sub 0} or P{sub 1} flux-limited diffusion approximation depending on the degree to which the radiation and material fields are in thermodynamic equilibrium. Numerical results are presented for two test problems due to Su and Olson and to Ganapol and Pomraning for which semi-analytic transport solutions exist. These numerical results demonstrate that the adaptive P{sub 1}-DP{sub 0} diffusion approximation can yield improvements in accuracy over the standard P{sub 1} diffusion approximation, both without and with flux-limiting, for non-equilibrium grey radiative transfer.
Copper-Adapted Suillus luteus, a Symbiotic Solution for Pines Colonizing Cu Mine Spoils
Adriaensen, K.; Vrålstad, T.; Noben, J.-P.; Vangronsveld, J.; Colpaert, J. V.
2005-01-01
Natural populations thriving in heavy-metal-contaminated ecosystems are often subjected to selective pressures for increased resistance to toxic metals. In the present study we describe a population of the ectomycorrhizal fungus Suillus luteus that colonized a toxic Cu mine spoil in Norway. We hypothesized that this population had developed adaptive Cu tolerance and was able to protect pine trees against Cu toxicity. We also tested for the existence of cotolerance to Cu and Zn in S. luteus. Isolates from Cu-polluted, Zn-polluted, and nonpolluted sites were grown in vitro on Cu- or Zn-supplemented medium. The Cu mine isolates exhibited high Cu tolerance, whereas the Zn-tolerant isolates were shown to be Cu sensitive, and vice versa. This indicates the evolution of metal-specific tolerance mechanisms is strongly triggered by the pollution in the local environment. Cotolerance does not occur in the S. luteus isolates studied. In a dose-response experiment, the Cu sensitivity of nonmycorrhizal Pinus sylvestris seedlings was compared to the sensitivity of mycorrhizal seedlings colonized either by a Cu-sensitive or Cu-tolerant S. luteus isolate. In nonmycorrhizal plants and plants colonized by the Cu-sensitive isolate, root growth and nutrient uptake were strongly inhibited under Cu stress conditions. In contrast, plants colonized by the Cu-tolerant isolate were hardly affected. The Cu-adapted S. luteus isolate provided excellent insurance against Cu toxicity in pine seedlings exposed to elevated Cu levels. Such a metal-adapted Suillus-Pinus combination might be suitable for large-scale land reclamation at phytotoxic metalliferous and industrial sites. PMID:16269769
NASA Technical Reports Server (NTRS)
Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.
1995-01-01
Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods has been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of the boundary geometry is important. The complex geometry is represented using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.
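As a toy illustration of the flagging step at the heart of AMR (not the CMPGRD/overlapping-grid algorithm itself, whose details are not given in the abstract), a one-dimensional refinement pass might look like the following; real AMR refines whole patches or component grids rather than single points.

```python
import numpy as np

def refine_1d(x, u, tol):
    """Toy adaptive refinement pass: insert a midpoint wherever the jump
    between neighboring samples exceeds `tol` (a crude refinement flag)."""
    new_x = [x[0]]
    for xl, xr, ul, ur in zip(x[:-1], x[1:], u[:-1], u[1:]):
        if abs(ur - ul) > tol:              # cell flagged for refinement
            new_x.append(0.5 * (xl + xr))   # add the midpoint
        new_x.append(xr)
    return np.array(new_x)
```

Iterating such a pass concentrates points where the solution varies rapidly, which is the qualitative behavior the upwind-plus-AMR combination exploits.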
NASA Astrophysics Data System (ADS)
Liolios, K.; Tsihrintzis, V.; Angelidis, P.; Georgiev, K.; Georgiev, I.
2016-10-01
Current developments in the modeling of groundwater flow and contaminant transport and removal in the porous media of Horizontal Subsurface Flow Constructed Wetlands (HSF CWs) are first briefly reviewed. The two usual environmental engineering approaches, the black-box and the process-based one, are briefly presented. Next, recent research results obtained using these two approaches are discussed as application examples, with emphasis on the evaluation of the optimal design and operation parameters for HSF CWs. For the black-box approach, the use of Artificial Neural Networks is discussed for formulating models that predict the removal performance of HSF CWs. A novel mathematical proof is presented concerning the dependence of the first-order removal coefficient on the temperature and the hydraulic residence time. For the process-based approach, a first application example concerns procedures to evaluate the optimal range of values for the removal coefficient, dependent on either the temperature or the hydraulic residence time; this evaluation is based on simulating available experimental results of pilot-scale units operated at Democritus University of Thrace, Xanthi, Greece. In a second example, a novel enlargement of the system of partial differential equations is presented in order to include geothermal effects. Finally, in a third example, the case of parameter uncertainty concerning biodegradation procedures is considered, and a novel approach is presented concerning the upper and lower solution bounds for the practical draft design of HSF CWs.
NASA Technical Reports Server (NTRS)
Schoenauer, W.; Daeubler, H. G.; Glotz, G.; Gruening, J.
1986-01-01
An implicit difference procedure for the solution of the equations of a chemically reacting hypersonic boundary layer is described. Difference forms of arbitrary error order in the x-y coordinate plane were used to derive estimates of the discretization error. Computational complexity and time were minimized by this difference method, and the iteration of the nonlinear boundary layer equations was regulated by the discretization error. Velocity and temperature profiles are presented for Mach 20.14 and Mach 18.5; reported quantities include velocity profiles, temperature profiles, mass flow factor, Stanton number, and friction drag coefficient, with numeric data in three figures.
NASA Astrophysics Data System (ADS)
Zheng, H. W.; Shu, C.; Chew, Y. T.
2008-07-01
In this paper, an object-oriented, quadrilateral-mesh-based solution-adaptive algorithm for the simulation of compressible multi-fluid flows is presented. The HLLC scheme (Harten, Lax and van Leer approximate Riemann solver with the contact wave restored) is extended to adaptively solve compressible multi-fluid flows under complex geometry on unstructured mesh, and to second-order accuracy by using MUSCL extrapolation. The node, edge, and cell objects are arranged in an object-oriented manner, each inheriting from a basic object. A home-made doubly linked list is designed to manage these objects, so that inserting new objects and removing existing ones (nodes, edges, and cells) are independent of the number of objects, with O(1) complexity. In addition, cells at different levels are stored in separate lists, which avoids the recursive calculation of solutions on mother (non-leaf) cells. High efficiency is obtained due to these features. Moreover, compared to other cell-edge adaptive methods, the separation of nodes reduces the memory required for redundant nodes, especially when the number of levels is large or the space dimension is three. Five two-dimensional examples are used to examine its performance: a vortex evolution problem, an interface-only problem under structured and unstructured mesh, a bubble explosion under water, bubble-shock interaction, and shock-interface interaction inside a cylindrical vessel. Numerical results indicate that there is no oscillation of pressure or velocity across the interface, and that it is feasible to apply the method to compressible multi-fluid flows with large density ratio (1000) and strong shock wave (pressure ratio of 10,000) interaction with the interface.
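The O(1) insertion and removal claimed above is an intrinsic property of a doubly linked list when a handle to the node is retained. A minimal sketch of that data structure (not the authors' implementation) is:

```python
class Node:
    """Doubly linked list node holding one mesh object (node/edge/cell)."""
    __slots__ = ("obj", "prev", "next")
    def __init__(self, obj):
        self.obj, self.prev, self.next = obj, None, None

class DList:
    """Doubly linked list with O(1) insert and remove: the cost of either
    operation is independent of how many objects the list holds."""
    def __init__(self):
        self.head = self.tail = None

    def push(self, obj):
        n = Node(obj)
        if self.tail is None:
            self.head = self.tail = n
        else:
            n.prev, self.tail.next, self.tail = self.tail, n, n
        return n                      # keep this handle for O(1) removal

    def remove(self, n):
        # Splice the node out by relinking its neighbors; no traversal needed.
        if n.prev: n.prev.next = n.next
        else:      self.head = n.next
        if n.next: n.next.prev = n.prev
        else:      self.tail = n.prev

    def items(self):
        out, n = [], self.head
        while n:
            out.append(n.obj)
            n = n.next
        return out
```

Storing the returned handle alongside each mesh entity is what makes removal O(1); a singly linked list would still need an O(n) traversal to find the predecessor.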
Shave, Steven; Auer, Manfred
2013-12-23
Combinatorial chemical libraries produced on solid support offer fast and cost-effective access to a large number of unique compounds. If such libraries are screened directly on-bead, the speed at which chemical space can be explored by chemists is much greater than that addressable using solution-based synthesis and screening methods. Solution-based screening has a large supporting body of software, such as structure-based virtual screening tools, which enable the prediction of protein-ligand complexes. Using these techniques to predict the protein-bound complexes of compounds synthesized on solid support neglects the conjugation site on the small-molecule ligand; this may invalidate predicted binding modes, as the linker may clash with protein atoms. We present CSBB-ConeExclusion, a methodology and computer program which provides a measure of the applicability of solution dockings to solid support. Output is given in the form of statistics for each docking pose, a unique 2D visualization method which can be used to determine applicability at a glance, and automatically generated PyMol scripts allowing visualization of protein-atom incursion into a defined exclusion volume. As an example, CSBB-ConeExclusion is then used to determine the optimum attachment point for a purine library targeting cyclin-dependent kinase 2 (CDK2).
A local anisotropic adaptive algorithm for the solution of low-Mach transient combustion problems
NASA Astrophysics Data System (ADS)
Carpio, Jaime; Prieto, Juan Luis; Vera, Marcos
2016-02-01
A novel numerical algorithm for the simulation of transient combustion problems at low Mach and moderately high Reynolds numbers is presented. These problems are often characterized by the existence of a large disparity of length and time scales, resulting in the development of directional flow features, such as slender jets, boundary layers, mixing layers, or flame fronts. This makes local anisotropic adaptive techniques quite advantageous computationally. In this work we propose a local anisotropic refinement algorithm using, for the spatial discretization, unstructured triangular elements in a finite element framework. For the time integration, the problem is formulated in the context of semi-Lagrangian schemes, introducing the semi-Lagrange-Galerkin (SLG) technique as a better alternative to the classical semi-Lagrangian (SL) interpolation. The good performance of the numerical algorithm is illustrated by solving a canonical laminar combustion problem: the flame/vortex interaction. First, a premixed methane-air flame/vortex interaction with simplified transport and chemistry description (Test I) is considered. Results are found to be in excellent agreement with those in the literature, proving the superior performance of the SLG scheme when compared with the classical SL technique, and the advantage of using anisotropic adaptation instead of uniform meshes or isotropic mesh refinement. As a more realistic example, we then conduct simulations of non-premixed hydrogen-air flame/vortex interactions (Test II) using a more complex combustion model which involves state-of-the-art transport and chemical kinetics. In addition to the analysis of the numerical features, this second example allows us to perform a satisfactory comparison with experimental visualizations taken from the literature.
A cellular automaton model adapted to sandboxes to simulate the transport of solutes
NASA Astrophysics Data System (ADS)
Lora, Boris; Donado, Leonardo; Castro, Eduardo; Bayuelo, Alfredo
2016-04-01
The increasing use of groundwater sources for human consumption and the growing contamination levels of these water sources make it imperative to reach a deeper understanding of how contaminants are transported by water, in particular through a heterogeneous porous medium. Accordingly, the present research aims to design a model that simulates the transport of solutes through a heterogeneous porous medium using cellular automata. Cellular automata (CA) are a class of spatially (pixels) and temporally discrete mathematical systems characterized by local interaction (neighborhoods). The pixel size and the CA neighborhood were determined so as to reproduce the solute behavior accurately (Ilachinski, 2001). For the design and corresponding validation of the CA model, different conservative tracer tests were carried out in a sandbox packed heterogeneously with coarse sand (size #20, grain diameter 0.85 to 0.6 mm) and clay. Uranine and a saline solution of NaCl were used as tracers, monitored by taking snapshots every 20 seconds. A calibration curve (pixel intensity vs. concentration) was used to obtain concentration maps. The sandbox was constructed of acrylic (0.8 cm thick) with dimensions of 70 x 45 x 4 cm. It had a grid of 35 transversal holes, each 4 mm in diameter, with a uniform separation of 10 cm between them. To validate the CA model, a metric was used consisting of the fraction of correctly predicted pixels over the total per image throughout the entire test run. The CA model shows that calibration of pixels and neighborhoods typically yields over 60% correct predictions. This suggests that the CA model could be useful in further research on the transport of contaminants in hydrogeology.
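A cellular-automaton transport model of this kind advances a concentration grid by purely local neighbour exchanges. The sketch below is a hypothetical illustration, not the authors' calibrated model: each pixel trades a fraction of its solute with its von Neumann neighbours, modulated by a mobility field standing in for the heterogeneous sand/clay medium.

```python
# Hedged sketch of one synchronous CA update for solute transport.
# All parameter names and the exchange rule are illustrative.

def ca_step(conc, mobility, rate=0.2):
    """One synchronous CA update on a 2D grid (lists of lists).

    conc: concentration per pixel; mobility: local mobility in [0, 1]
    mimicking sand (high) vs. clay (low). Returns the new grid.
    """
    rows, cols = len(conc), len(conc[0])
    new = [row[:] for row in conc]
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    # flux proportional to the concentration difference,
                    # scaled by the lower mobility of the two pixels;
                    # the symmetric rule conserves total solute mass
                    m = min(mobility[i][j], mobility[ni][nj])
                    new[i][j] += rate * m * (conc[ni][nj] - conc[i][j])
    return new
```

Because every pairwise flux appears with opposite signs in the two pixels it connects, total mass is conserved exactly, a basic sanity check for any conservative-tracer model.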
Hoste, H; Torres-Acosta, J F J
2011-08-04
Infections with gastrointestinal nematodes (GIN) remain a major threat to ruminant production, health and welfare associated with outdoor breeding. The control of these helminth parasites has relied on the strategic or tactical use of chemical anthelmintic (AH) drugs. However, the expanding development and diffusion of anthelmintic resistance in nematode populations imposes the need to explore and validate novel solutions (or to re-discover old knowledge) for a more sustainable control of GIN. The different solutions refer to three main principles of action. The first is to limit contact between the hosts and the infective larvae in the field through grazing management methods. These have been described since the 1970s and, at present, benefit from innovations based on computer models. Several biological control agents have also been studied in the last three decades as potential tools to reduce the infective larvae in the field. The second principle aims at improving the host response against GIN infections, relying on genetic selection between or within breeds of sheep or goats, crossbreeding of resistant and susceptible breeds, and/or the manipulation of nutrition. These approaches may benefit from a better understanding of the potential underlying mechanisms, in particular with regard to the host immune response against the worms. The third principle is the control of GIN based on non-conventional AH materials (plant or mineral compounds). Worldwide studies show that non-conventional AH materials can eliminate worms and/or negatively affect the parasite's biology. The recent developments and the pros and cons of these various options are discussed. Last, some results are presented which illustrate how the integration of these different solutions can be efficient and applicable in different systems of production and/or epidemiological conditions. The integration of different control tools seems to be a pre-requisite for the sustainable control of GIN.
Practical Study and Solutions Adapted For The Road Noise In The Algiers City
NASA Astrophysics Data System (ADS)
Iddir, R.; Boukhaloua, N.; Saadi, T.
At a time when the city is spreading over a large area, the development of the road network has logically followed this movement, generating a considerable impact on the environment. The environment is an open system resulting from the interaction between man and nature, and it is affected on all sides by the different means of transport and by their growing demand for mobility. The development of the contemporary city has created environmental problems, among them road noise. Road noise is a complex phenomenon, essentially because of its effects on the human senses; its impact on the environment is considerable and directly concerns quality of life, mainly in densely populated zones. Noise pollution has reached a peak, and the road network of Algiers was not designed to satisfy noise-pollution requirements. Soundproofing arrangements should therefore be adopted in order to meet these new requirements for acoustic comfort. All these elements led to a process aimed at attenuating the nuisance caused by road traffic, through actions essentially targeting the vehicles, the structure of the road, and the immediate environment of the road-structure system. From these results, we note that the noise-nuisance situation in this zone of heavy traffic is worrying, especially for the health of residents.
Mulder, Samuel A; Wunsch, Donald C
2003-01-01
The Traveling Salesman Problem (TSP) is a very hard optimization problem in the field of operations research. It has been shown to be NP-complete, and is an often-used benchmark for new optimization techniques. One of the main challenges with this problem is that standard, non-AI heuristic approaches such as the Lin-Kernighan algorithm (LK) and the chained LK variant are currently very effective and in wide use for the common fully connected, Euclidean variant that is considered here. This paper presents an algorithm that uses adaptive resonance theory (ART) in combination with a variation of the Lin-Kernighan local optimization algorithm to solve very large instances of the TSP. The primary advantage of this algorithm over traditional LK and chained-LK approaches is the increased scalability and parallelism allowed by the divide-and-conquer clustering paradigm. Tours obtained by the algorithm are lower quality, but scaling is much better and there is a high potential for increasing performance using parallel hardware.
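The divide-and-conquer idea can be illustrated with a toy substitute: cluster the cities, tour each cluster independently, and stitch the sub-tours together. The sketch below replaces ART clustering with simple vertical strips and Lin-Kernighan with a nearest-neighbour tour, purely to show the structure; all names and parameters are illustrative, not the paper's algorithm.

```python
# Toy divide-and-conquer TSP sketch: partition, solve sub-problems,
# concatenate. Stand-ins: vertical strips for ART clusters and a
# greedy nearest-neighbour tour for Lin-Kernighan local optimization.

import math

def nearest_neighbour_tour(cities):
    """Greedy O(n^2) tour over one cluster, starting from the first city."""
    if not cities:
        return []
    tour = [cities[0]]
    remaining = list(cities[1:])
    while remaining:
        last = tour[-1]
        nxt = min(remaining, key=lambda c: math.dist(last, c))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

def divide_and_conquer_tsp(cities, n_strips=4):
    """Partition cities into vertical strips (cluster stand-in), solve
    each strip independently, then concatenate the sub-tours in a
    snake order so adjacent strip endpoints stay close."""
    xs = sorted(c[0] for c in cities)
    bounds = [xs[int(k * len(xs) / n_strips)] for k in range(1, n_strips)]
    strips = [[] for _ in range(n_strips)]
    for c in cities:
        strips[sum(c[0] >= b for b in bounds)].append(c)
    tour = []
    for k, strip in enumerate(strips):
        # sort alternately up/down so successive strips join end-to-end
        ordered = sorted(strip, key=lambda c: c[1], reverse=(k % 2 == 1))
        tour.extend(nearest_neighbour_tour(ordered))
    return tour
```

The payoff the paper exploits is that the sub-problems are independent, so they can be solved in parallel; the trade-off, as the abstract notes, is a somewhat lower tour quality than a single global Lin-Kernighan run.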
Benantar, M.; Flaherty, J.E.
1990-01-01
We consider the parallel assembly and solution on shared-memory computers of linear algebraic systems arising from the finite element discretization of two-dimensional linear self-adjoint elliptic problems. Stiffness matrix assembly and conjugate gradient solution of the linear system using element-by-element and symmetric successive over-relaxation preconditioners are processed in parallel, with computations scheduled on noncontiguous regions in order to minimize process synchronization. An underlying quadtree structure, used for automatic mesh generation and solution-based mesh refinement, is separated into disjoint regions called quadrants using a six-color procedure having linear time complexity.
Brantley, P S
2005-06-06
The double spherical harmonics angular approximation in the lowest order, i.e. double P{sub 0} (DP{sub 0}), is developed for the solution of time-dependent non-equilibrium grey radiative transfer problems in planar geometry. The standard P{sub 1} angular approximation represents the angular dependence of the radiation specific intensity using a linear function in the angular domain -1 {le} {mu} {le} 1. In contrast, the DP{sub 0} angular approximation represents the angular dependence as isotropic in each half angular range -1 {le} {mu} < 0 and 0 < {mu} {le} 1. Neglecting the time derivative of the radiation flux, both the P{sub 1} and DP{sub 0} equations can be written as a single diffusion equation for the radiation energy density. Although the DP{sub 0} diffusion approximation is expected to be less accurate than the P{sub 1} diffusion approximation at and near thermodynamic equilibrium, the DP{sub 0} angular approximation can more accurately capture the complicated angular dependence near the non-equilibrium wave front. We develop an adaptive angular technique that locally uses either the DP{sub 0} or the P{sub 1} diffusion approximation depending on the degree to which the radiation and material fields are in thermodynamic equilibrium. Numerical results are presented for a test problem due to Su and Olson for which a semi-analytic transport solution exists. The numerical results demonstrate that the adaptive P{sub 1}-DP{sub 0} diffusion approximation can yield improvements in accuracy over the standard P{sub 1} diffusion approximation for non-equilibrium grey radiative transfer.
Heidenreich, Elvio A; Ferrero, José M; Doblaré, Manuel; Rodríguez, José F
2010-07-01
Many problems in biology and engineering are governed by anisotropic reaction-diffusion equations with a very rapidly varying reaction term. This usually implies the use of very fine meshes and small time steps in order to accurately capture the propagating wave while avoiding the appearance of spurious oscillations in the wave front. This work develops a family of macro finite elements amenable for solving anisotropic reaction-diffusion equations with stiff reactive terms. The developed elements are incorporated on a semi-implicit algorithm based on operator splitting that includes adaptive time stepping for handling the stiff reactive term. A linear system is solved on each time step to update the transmembrane potential, whereas the remaining ordinary differential equations are solved uncoupled. The method allows solving the linear system on a coarser mesh thanks to the static condensation of the internal degrees of freedom (DOF) of the macroelements while maintaining the accuracy of the finer mesh. The method and algorithm have been implemented in parallel. The accuracy of the method has been tested on two- and three-dimensional examples demonstrating excellent behavior when compared to standard linear elements. The better performance and scalability of different macro finite elements against standard finite elements have been demonstrated in the simulation of a human heart and a heterogeneous two-dimensional problem with reentrant activity. Results have shown a reduction of up to four times in computational cost for the macro finite elements with respect to equivalent (same number of DOF) standard linear finite elements as well as good scalability properties.
2012-03-27
Hybrid Solution-Adaptive Unstructured Cartesian Method for Large-Eddy Simulation of Detonation in Multi-Phase Turbulent Reactive Mixtures (CCL Report TR-2012-03-03; grant number FA9550-...). Application areas noted include pulse-detonation engines (PDE), stage separation, supersonic cavity oscillations, hypersonic aerodynamics, and detonation-induced structural...
Borges, Sivanildo S.; Vieira, Gláucia P.; Reis, Boaventura F.
2007-01-01
In this work, an automatic device to deliver titrant solution into a titration chamber, with the ability to determine the dispensed volume of solution with good precision independent of both elapsed time and flow rate, is proposed. A glass tube maintained in the vertical position was employed as a container for the titrant solution. Electronic devices were coupled to the glass tube in order to control its filling with titrant solution, as well as the stepwise delivery of solution into the titration chamber. Detection of the titration end point was performed using a photometer designed with a green LED (λ = 545 nm) and a phototransistor. The titration flow system comprised three-way solenoid valves, assembled so that the loading of the solution container and the titration run were carried out automatically. The device for determining the solution volume was designed using an infrared LED (λ = 930 nm) and a photodiode. When the volume of solution delivered by the proposed device was within the range of 5 to 105 μl, a linear relationship (R = 0.999) between the delivered volumes and the generated potential difference was achieved. The usefulness of the proposed device was proven by performing photometric titration of hydrochloric acid solution with a standardized sodium hydroxide solution, using phenolphthalein as an external indicator. The results presented a relative standard deviation of 1.5%. PMID:18317510
Piotrowski, T; Ryczkowski, A; Kazmierska, J
2012-06-01
The deformable image registration (DIR) procedure has been optimized for helical tomotherapy. The registration shifts obtained by matching the planning image with the pre-treatment megavoltage CT are used in our software to accelerate the first step (rigid registration) of the DIR procedure and to implement the B-Spline algorithm with intelligent masking. Priorities of the masks were automatically calculated based on disagreement detected during rigid registration. Evaluation tasks included: (a) comparison of accuracy and speed for schemes with pre-registered and non-pre-registered images; (b) qualification of the effectiveness of the intelligent masking process; and (c) determination of the acceleration achievable with GPU computing. A specially designed head-and-neck phantom used for evaluation included structures with controlled changes of position, volume, density, and shape. Re-contouring procedures were performed with Adaptive Planning software (Tomotherapy Inc.). No statistical difference in DIR accuracy was observed between images pre-registered by structure position match on the tomotherapy unit and non-pre-registered images (p > 0.7). Using pre-registered data reduces the total time required for execution of the elastic registration procedure by 5%; these data are also necessary for the intelligent masking procedure during B-Spline registration. The intelligent masking procedure increases the accuracy of the registration for a masked structure (p < 0.04) without decreasing the accuracy in non-masked tissues, and additionally reduces the total time by 13%. GPU computation speeds up the procedure 30-fold. In the current state of our investigation, GPU computing of the DIR could be completed in a relatively short time after pre-treatment imaging. The proposed approach can be used in the routine assessment of anatomic changes occurring in healthy tissue during the course of radiotherapy. Further developments will be concentrated on the full integration of DIR computations in the imaging and
Dickerson, Vanna M; Coleman, Kevin D; Ogawa, Morika; Saba, Corey F; Cornell, Karen K; Radlinsky, MaryAnn G; Schmiedt, Chad W
2015-10-01
To evaluate outcomes of dogs and owner satisfaction and perception of their dogs' adaptation following amputation of a thoracic or pelvic limb. Retrospective case series. 64 client-owned dogs. Procedures: Medical records of dogs that underwent limb amputation at a veterinary teaching hospital between 2005 and 2012 were reviewed. Signalment, body weight, and body condition scores at the time of amputation, dates of amputation and discharge from the hospital, whether a thoracic or pelvic limb was amputated, and reason for amputation were recorded. Histologic diagnosis and date of death were recorded if applicable. Owners were interviewed by telephone about their experience and interpretation of the dog's adaptation after surgery. Associations between perioperative variables and postoperative quality of life scores were investigated. 58 of 64 (91%) owners perceived no change in their dog's attitude after amputation; 56 (88%) reported complete or nearly complete return to preamputation quality of life, 50 (78%) indicated the dog's recovery and adaptation were better than expected, and 47 (73%) reported no change in the dog's recreational activities. Body condition scores and body weight at the time of amputation were negatively correlated with quality of life scores after surgery. Taking all factors into account, most (55/64 [86%]) respondents reported they would make the same decision regarding amputation again, and 4 (6%) indicated they would not; 5 (8%) were unsure. This information may aid veterinarians in educating clients about adaptation potential of dogs following limb amputation and the need for postoperative weight control in such patients.
Shen, Yi; Dai, Wei; Richards, Virginia M
2015-03-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
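The core of an updated maximum-likelihood procedure is to keep a running likelihood over candidate parameter values and fold each new trial into it. A much-reduced sketch, estimating only the threshold of a logistic psychometric function (the actual toolbox also tracks slope and lapse rate and selects stimuli adaptively; all names and parameter values here are illustrative):

```python
# Minimal UML-style threshold estimator: maintain log-likelihoods over
# candidate thresholds and report the maximum-likelihood candidate.
# Illustrative sketch, not the UML Toolbox implementation.

import math

def psychometric(x, threshold, slope=1.0, guess=0.5, lapse=0.02):
    """Logistic psychometric function: P(correct | stimulus level x)."""
    p = 1.0 / (1.0 + math.exp(-slope * (x - threshold)))
    return guess + (1.0 - guess - lapse) * p

class UMLStaircase:
    def __init__(self, candidates):
        self.candidates = list(candidates)     # candidate thresholds
        self.loglik = [0.0] * len(self.candidates)

    def update(self, x, correct):
        """Fold one trial (stimulus x, observer response) into the
        running log-likelihood of every candidate threshold."""
        for k, th in enumerate(self.candidates):
            p = psychometric(x, th)
            self.loglik[k] += math.log(p if correct else 1.0 - p)

    def estimate(self):
        """Current maximum-likelihood threshold estimate."""
        best = max(range(len(self.candidates)), key=lambda k: self.loglik[k])
        return self.candidates[best]
```

In the full procedure the next stimulus level is then placed at a "sweet point" of the current estimated function, which is what makes the method efficient in trials.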
Richards, V. M.; Dai, W.
2014-01-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826
Amiri, Mohammad J; Abedi-Koupai, Jahangir; Eslamian, Sayed S; Mousavi, Sayed F; Hasheminejad, Hasti
2013-01-01
To evaluate the performance of Adaptive Neural-Based Fuzzy Inference System (ANFIS) model in estimating the efficiency of Pb (II) ions removal from aqueous solution by ostrich bone ash, a batch experiment was conducted. Five operational parameters including adsorbent dosage (C(s)), initial concentration of Pb (II) ions (C(o)), initial pH, temperature (T) and contact time (t) were taken as the input data and the adsorption efficiency (AE) of bone ash as the output. Based on the 31 different structures, 5 ANFIS models were tested against the measured adsorption efficiency to assess the accuracy of each model. The results showed that ANFIS5, which used all input parameters, was the most accurate (RMSE = 2.65 and R(2) = 0.95) and ANFIS1, which used only the contact time input, was the worst (RMSE = 14.56 and R(2) = 0.46). In ranking the models, ANFIS4, ANFIS3 and ANFIS2 ranked second, third and fourth, respectively. The sensitivity analysis revealed that the estimated AE is more sensitive to the contact time, followed by pH, initial concentration of Pb (II) ions, adsorbent dosage, and temperature. The results showed that all ANFIS models overestimated the AE. In general, this study confirmed the capabilities of ANFIS model as an effective tool for estimation of AE.
NASA Astrophysics Data System (ADS)
Kara, Emre; Kutlar, Ahmet Ihsan; Aksel, Mehmet Haluk
2017-09-01
In this study, two-dimensional geometric and solution-adaptive refinement/coarsening scheme codes are generated by the use of Cartesian grid generation techniques. In the solution of compressible, turbulent flows, the one-equation Spalart-Allmaras turbulence model is implemented. The performance of the flow solver is tested on the case of high-Reynolds-number, steady flow around the NACA 0012 airfoil. The lift coefficient solution for the airfoil at a real-life flight Reynolds number is compared with an experimental study from the literature.
NASA Astrophysics Data System (ADS)
Castin, N.; Fernandez, J. R.; Terentyev, D.; Malerba, L.; Pasianot, R. C.
2014-06-01
We propose a novel approach for simulating, with atomistic kinetic Monte Carlo (KMC), the segregation or depletion of solute atoms at interfaces via transport by vacancies. Unlike classical lattice KMC, no assumption is made regarding the crystallographic structure; the model can thus potentially be applied to any type of interface, e.g. grain boundaries. Fully off-lattice KMC models have already been proposed in the literature, but are rather demanding in CPU time, mainly because of the need to perform static relaxation several times at every step of the simulation and to calculate migration energies between different metastable states. In our LA-KMC model, we aim to perform static relaxation at most once per step, and define possible transitions to other metastable states following a generic predefined procedure. The corresponding migration energies can then be calculated using artificial neural networks, trained to predict them as a function of a full description of the local atomic environment, in terms of both the exact positions of the atoms in space and their chemical nature. Our model is thus a compromise between fully off-lattice and fully on-lattice models: (a) the description of the system is not bound to strict assumptions, but is readapted automatically, performing the minimum required amount of static relaxation; (b) the procedure for defining transition events is not guaranteed to find all important transitions, and thereby potentially disregards some mechanisms of system evolution; this shortcoming is in fact common to all non-fully-off-lattice models, but is in our case limited thanks to the application of relaxation at every step; (c) computing time is largely reduced thanks to the use of neural networks to calculate the migration energies. In this presentation, we show the premises of this novel approach for the case of grain boundaries in bcc Fe-Cr alloys.
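Vacancy-transport simulations of this kind rest on the standard residence-time kinetic Monte Carlo step: convert each candidate migration energy to a rate, pick one event with probability proportional to its rate, and advance the clock by an exponentially distributed increment. A generic sketch with illustrative Arrhenius parameters (not the authors' neural-network-driven rates):

```python
# Generic residence-time (BKL) kinetic Monte Carlo step.
# Parameter values (attempt frequency nu0, kT) are illustrative.

import math
import random

def kmc_step(barriers, kT=0.05, nu0=1e13, rng=random):
    """One KMC step. barriers: migration energies (eV) of the
    candidate transitions. Returns (chosen event index, time increment)."""
    # Arrhenius rates for each candidate transition
    rates = [nu0 * math.exp(-eb / kT) for eb in barriers]
    total = sum(rates)
    # select event k with probability rates[k] / total
    r = rng.random() * total
    acc = 0.0
    for k, rate in enumerate(rates):
        acc += rate
        if r <= acc:
            break
    # residence time is exponentially distributed with mean 1 / total
    dt = -math.log(1.0 - rng.random()) / total
    return k, dt
```

Low-barrier events dominate exponentially: with barriers of 0.1 eV and 1.0 eV at kT = 0.05 eV, the rate ratio is e^18, so the high-barrier event is essentially never selected. This is exactly why the accuracy of the predicted migration energies (here, by neural networks) matters so much.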
NASA Astrophysics Data System (ADS)
Schmitt, Kara Anne
This research aims to show that strict adherence to procedures and rigid compliance with process in the US nuclear industry may not prevent incidents or increase safety. According to the Institute of Nuclear Power Operations, the nuclear power industry has seen a recent rise in events, and this research claims that a contributing factor to this rise is organizational and cultural, based on people's overreliance on procedures and policy. Understanding the proper balance of function allocation, automation, and human decision-making is imperative to creating a nuclear power plant that is safe, efficient, and reliable. This research claims that new generations of operators are less engaged and think less critically because they have been instructed to follow procedures to a fault. According to operators, they were once expected to know the plant and its interrelations, but organizationally more importance is now placed on following procedure and policy. Literature reviews were performed, experts were questioned, and a model for context analysis was developed. The Context Analysis Method for Identifying Design Solutions (CAMIDS) model was created, verified, and validated through both peer review and application in real-world scenarios in active nuclear power plant simulators. These experiments supported the claim that strict adherence and rigid compliance to procedures may not increase safety, by studying the industry's propensity for following incorrect procedures and the cases where this directly affects the safety or security of the plant. The findings of this research indicate that younger generations of operators rely heavily on procedures, and that the organizational pressure of required compliance may lead to incidents within the plant because operators feel pressured into following the rules and policy above performing the correct actions in a timely manner. The findings support computer-based procedures, efficient alarm systems, and skill-of-the-craft matrices. The solution to
NASA Astrophysics Data System (ADS)
Tari, H.; Scheidler, J. J.; Dapino, M. J.
2015-06-01
A reformulation of the Discrete Energy-Averaged model for the calculation of 3D hysteretic magnetization and magnetostriction of iron-gallium (Galfenol) alloys is presented in this paper. An analytical solution procedure based on an eigenvalue decomposition is developed. This procedure avoids the singularities present in the existing approximate solution by offering multiple local minimum energy directions for each easy crystallographic direction. This improved robustness is crucial for use in finite element codes. Analytical simplifications of the 3D model to 2D and 1D applications are also presented. In particular, the 1D model requires calculation for only one easy direction, while all six easy directions must be considered for general applications. Compared to the approximate solution procedure, it is shown that the resulting robustness comes at no expense for 1D applications, but requires almost twice the computational effort for 3D applications. To find model parameters, we employ the average of the hysteretic data, rather than anhysteretic curves, which would require additional measurements. An efficient optimization routine is developed that retains the dimensionality of the prior art. The routine decouples the parameters into exclusive sets, some of which are found directly through a fast preprocessing step to improve accuracy and computational efficiency. The effectiveness of the model is verified by comparison with existing measurement data.
Fukuda, Ryoichi Ehara, Masahiro; Cammi, Roberto
2014-02-14
A perturbative approximation of the state specific polarizable continuum model (PCM) symmetry-adapted cluster-configuration interaction (SAC-CI) method is proposed for efficient calculations of the electronic excitations and absorption spectra of molecules in solutions. This first-order PCM SAC-CI method considers the solvent effects on the energies of excited states up to the first-order with using the zeroth-order wavefunctions. This method can avoid the costly iterative procedure of the self-consistent reaction field calculations. The first-order PCM SAC-CI calculations well reproduce the results obtained by the iterative method for various types of excitations of molecules in polar and nonpolar solvents. The first-order contribution is significant for the excitation energies. The results obtained by the zeroth-order PCM SAC-CI, which considers the fixed ground-state reaction field for the excited-state calculations, are deviated from the results by the iterative method about 0.1 eV, and the zeroth-order PCM SAC-CI cannot predict even the direction of solvent shifts in n-hexane for many cases. The first-order PCM SAC-CI is applied to studying the solvatochromisms of (2,2{sup ′}-bipyridine)tetracarbonyltungsten [W(CO){sub 4}(bpy), bpy = 2,2{sup ′}-bipyridine] and bis(pentacarbonyltungsten)pyrazine [(OC){sub 5}W(pyz)W(CO){sub 5}, pyz = pyrazine]. The SAC-CI calculations reveal the detailed character of the excited states and the mechanisms of solvent shifts. The energies of metal to ligand charge transfer states are significantly sensitive to solvents. The first-order PCM SAC-CI well reproduces the observed absorption spectra of the tungsten carbonyl complexes in several solvents.
Lindskog, Marcus; Winman, Anders; Juslin, Peter; Poom, Leo
2013-01-01
Two studies investigated the reliability and predictive validity of commonly used measures and models of Approximate Number System acuity (ANS). Study 1 investigated reliability by both an empirical approach and a simulation of maximum obtainable reliability under ideal conditions. Results showed that common measures of the Weber fraction (w) are reliable only when using a substantial number of trials, even under ideal conditions. Study 2 compared different purported measures of ANS acuity as for convergent and predictive validity in a within-subjects design and evaluated an adaptive test using the ZEST algorithm. Results showed that the adaptive measure can reduce the number of trials needed to reach acceptable reliability. Only direct tests with non-symbolic numerosity discriminations of stimuli presented simultaneously were related to arithmetic fluency. This correlation remained when controlling for general cognitive ability and perceptual speed. Further, the purported indirect measure of ANS acuity in terms of the Numeric Distance Effect (NDE) was not reliable and showed no sign of predictive validity. The non-symbolic NDE for reaction time was significantly related to direct w estimates in a direction contrary to the expected. Easier stimuli were found to be more reliable, but only harder (7:8 ratio) stimuli contributed to predictive validity. PMID:23964256
Li, Xiangzhu; Paldus, Josef
2009-02-28
We explore spin-preserving, singlet stability of restricted Hartree-Fock (RHF) solutions for a number of closed-shell, homonuclear diatomics in the entire relevant range of internuclear separations. In the presence of such instabilities we explore the implied broken-symmetry (bs) solutions and check their stability. We also address the occurrence of vanishing roots rendered by the stability problem in the case of bs solutions. The RHF bs solutions arise primarily due to the symmetry breaking of the relevant, mostly frontier, molecular orbitals, which approach atomic-type orbitals in the dissociation limit. The resulting bs RHF solutions yield more realistic potential energy curves (PECs) than do the symmetry adapted (sa) solutions. These PECs are shown to be very similar to those rendered by the density functional theory (DFT). Moreover, the sa DFT solutions are found to be stable in a much wider range of internuclear separations than are the RHF solutions, and their bs analogs differ very little from the sa ones. Finally, we examine a possible usefulness of bs RHF solutions in post-HF correlated approaches to the many-electron problem, specifically in the limited configuration interaction and coupled-cluster methods.
NASA Astrophysics Data System (ADS)
Abramova, Victoriya V.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2016-10-01
Several modifications of a scatter-plot-based method for mixed-noise parameter estimation are proposed. The modifications concern the image segmentation stage: they adaptively separate image blocks into clusters, taking image peculiarities into account, and choose the required number of clusters. A comparative performance analysis of the proposed modifications for images from the TID2008 database is carried out. It is shown that the best estimation accuracy is provided by a method that automatically determines the required number of clusters and then separates blocks into clusters with the k-means method. This modification improves the accuracy of noise-characteristic estimation by up to 5% for both signal-independent and signal-dependent noise components compared with the basic method. Results for real-life data are also presented.
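The pipeline the abstract describes — split the image into blocks, cluster the blocks, then estimate mixed-noise parameters from local statistics — can be sketched as follows. This is a toy stand-in, not the authors' algorithm: the block statistics, the tiny 1-D k-means, and the global least-squares fit of var = sigma_si^2 + k·mean are all simplified assumptions.

```python
import numpy as np

def block_stats(img, bs=8):
    """Local mean and variance for non-overlapping bs x bs blocks."""
    h, w = img.shape
    blocks = img[:h - h % bs, :w - w % bs].reshape(h // bs, bs, w // bs, bs)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, bs * bs)
    return blocks.mean(axis=1), blocks.var(axis=1)

def kmeans_1d(x, k=4, iters=50):
    """Tiny 1-D k-means on block means (stand-in for the segmentation stage)."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels

def fit_mixed_noise(means, variances):
    """Least-squares fit of var = sigma_si^2 + k_sd * mean (mixed-noise model)."""
    A = np.column_stack([np.ones_like(means), means])
    coef, *_ = np.linalg.lstsq(A, variances, rcond=None)
    return coef  # [signal-independent variance, signal-dependent slope]

rng = np.random.default_rng(0)
# Synthetic piecewise-constant image with known mixed noise: var = 4 + 0.5*I
clean = np.repeat(np.linspace(20, 200, 16), 16).reshape(16, 16).repeat(8, 0).repeat(8, 1)
noisy = clean + rng.normal(0, np.sqrt(4.0 + 0.5 * clean))
m, v = block_stats(noisy)
labels = kmeans_1d(m)                 # clustering stage (fit below is global for brevity)
sigma2, k_sd = fit_mixed_noise(m, v)  # should recover roughly 4.0 and 0.5
```

In the full method the fit would be performed per cluster rather than globally.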
Alexandrino, Henrique; Rolo, Anabela; Teodoro, João S; Donato, Henrique; Martins, Ricardo; Serôdio, Marco; Martins, Mónica; Tralhão, José G; Caseiro Alves, Filipe; Palmeira, Carlos; Castro E Sousa, Francisco
2017-09-20
The Associating Liver Partition and Portal Vein Ligation for Staged Hepatectomy (ALPPS) depends on a significant inter-stage kinetic growth rate (KGR). Liver regeneration is highly energy-dependent, but the metabolic adaptations in ALPPS are unknown. The aims were to: (i) assess bioenergetics in both stages of ALPPS (T1 and T2) and compare them with control patients undergoing minor (miHp) and major (MaHp) hepatectomy, respectively; (ii) correlate the findings in ALPPS with volumetric data; and (iii) investigate the expression of genes involved in liver regeneration and energy metabolism. Five patients undergoing ALPPS, five controls undergoing miHp and five undergoing MaHp were studied, with assessment of remnant liver bioenergetics in T1, T2 and controls, and analysis of gene expression and protein content in ALPPS. Mitochondrial function was worse in T1 versus miHp and in T2 versus MaHp (p < 0.05), but improved from T1 to T2 (p < 0.05). Liver bioenergetics in T1 strongly correlated with KGR (p < 0.01). Increased expression of genes associated with liver regeneration (STAT3, ALR) and energy metabolism (PGC-1α, COX, Nampt) was found in T2 (p < 0.05). Metabolic capacity in ALPPS is worse than in controls, improves between stages and correlates with volumetric growth. Bioenergetic adaptations in ALPPS could serve as surrogate markers of liver reserve and as a target for energetic conditioning. Copyright © 2017 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Rittle-Johnson, Bethany; Star, Jon R.
2007-01-01
Encouraging students to share and compare solution methods is a key component of reform efforts in mathematics, and comparison is emerging as a fundamental learning mechanism. To experimentally evaluate the effects of comparison for mathematics learning, the authors randomly assigned 70 seventh-grade students to learn about algebra equation…
Edwards, Andrew G; Teoh, Mark; Hodges, Ryan J; Palma-Dias, Ricardo; Cole, Stephen A; Fung, Alison M; Walker, Susan P
2016-06-01
The benefits of fetoscopic laser photocoagulation (FLP) for treatment of twin-to-twin transfusion syndrome (TTTS) have been recognized for over a decade, yet access to FLP remains limited in many settings. This means at a population level, the potential benefits of FLP for TTTS are far from being fully realized. In part, this is because there are many centers where the case volume is relatively low. This creates an inevitable tension; on one hand, wanting FLP to be readily accessible to all women who may need it, yet on the other, needing to ensure that a high degree of procedural competence is maintained. Some of the solutions to these apparently competing priorities may be found in novel training solutions to achieve, and maintain, procedural proficiency, and with the increased utilization of 'competence based' assessment and credentialing frameworks. We suggest an under-utilized approach is the development of collaborative surgical services, where pooling of personnel and resources can improve timely access to surgery, improve standardized assessment and management of TTTS, minimize the impact of the surgical learning curve, and facilitate audit, education, and research. When deciding which centers should offer laser for TTTS and how we decide, we propose some solutions from a collaborative model.
Alawieh, Ali; Pierce, Alyssa K; Vargas, Jan; Turk, Aquilla S; Turner, Raymond D; Chaudry, M Imran; Spiotta, Alejandro M
2017-05-02
In acute ischemic stroke (AIS), extending mechanical thrombectomy procedural times beyond 60 min has previously been associated with an increased complication rate and poorer outcomes. After improvements in thrombectomy methods, we sought to reassess whether this relationship holds true with a more contemporary thrombectomy approach: a direct aspiration first pass technique (ADAPT). We retrospectively studied a database of patients with AIS who underwent ADAPT thrombectomy for large vessel occlusions. Patients were dichotomized into two groups: 'early recan', in which recanalization (recan) was achieved in ≤35 min, and 'late recan', in which procedures extended beyond 35 min. 197 patients (47.7% women, mean age 66.3 years) were identified. We determined that after 35 min, a poor outcome was more likely than a good outcome (modified Rankin Scale (mRS) score 0-2). The baseline National Institutes of Health Stroke Scale (NIHSS) score was similar between 'early recan' (n=122; 14.7±6.9) and 'late recan' patients (n=75; 15.9±7.2). Among 'early recan' patients, recanalization was achieved in 17.8±8.8 min, compared with 70±39.8 min in 'late recan' patients. The likelihood of achieving a good outcome was higher in the 'early recan' group (65.2%) than in the 'late recan' group (38.2%; p<0.001). Patients in the 'late recan' group had a higher likelihood of postprocedural hemorrhage, specifically parenchymal hematoma type 2, than those in the 'early recan' group. Logistic regression analysis showed that baseline NIHSS, recanalization time, and atrial fibrillation had a significant impact on 90-day outcomes. Our findings suggest that extending ADAPT thrombectomy procedure times beyond 35 min increases the likelihood of complications such as intracerebral hemorrhage while reducing the likelihood of a good outcome. Published by the BMJ Publishing Group Limited.
Tetrahedral and Hexahedral Mesh Adaptation for CFD Problems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger C.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
This paper presents two unstructured mesh adaptation schemes for problems in computational fluid dynamics. The procedures allow localized grid refinement and coarsening to efficiently capture aerodynamic flow features of interest. The first procedure is for purely tetrahedral grids; unfortunately, repeated anisotropic adaptation may significantly deteriorate the quality of the mesh. Hexahedral elements, on the other hand, can be subdivided anisotropically without mesh quality problems. Furthermore, hexahedral meshes yield more accurate solutions than their tetrahedral counterparts for the same number of edges. Both the tetrahedral and hexahedral mesh adaptation procedures use edge-based data structures that facilitate efficient subdivision by allowing individual edges to be marked for refinement or coarsening. However, for hexahedral adaptation, pyramids, prisms, and tetrahedra are used as buffer elements between refined and unrefined regions to eliminate hanging vertices. Computational results indicate that the hexahedral adaptation procedure is a viable alternative to adaptive tetrahedral schemes.
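The edge-based bookkeeping described above — marking individual edges for refinement and then collecting the elements that touch them — can be sketched in a few lines. The data layout and the fixed-fraction threshold rule here are illustrative assumptions, not the paper's data structures:

```python
# Minimal sketch of edge-based adaptation bookkeeping: edges whose error
# indicator exceeds a threshold are marked for refinement, and every element
# touching a marked edge is queued for subdivision.

def mark_edges(edge_error, refine_frac=0.1):
    """Mark roughly the worst refine_frac of edges for refinement."""
    cut = sorted(edge_error.values(), reverse=True)[max(0, int(len(edge_error) * refine_frac) - 1)]
    return {e for e, err in edge_error.items() if err >= cut}

def elements_to_subdivide(elements, marked):
    """elements: dict elem_id -> tuple of edge ids; return elements touching a marked edge."""
    return {eid for eid, edges in elements.items() if any(e in marked for e in edges)}

edge_error = {0: 0.9, 1: 0.1, 2: 0.5, 3: 0.05, 4: 0.8}       # per-edge error indicator
elements = {"T0": (0, 1, 2), "T1": (2, 3, 4), "T2": (1, 3, 3)}
marked = mark_edges(edge_error, refine_frac=0.4)              # -> edges 0 and 4
todo = elements_to_subdivide(elements, marked)                # -> T0 and T1
```

In the hexahedral scheme of the paper, the elements in `todo` would then be subdivided, with pyramids, prisms, and tetrahedra inserted as buffers to avoid hanging vertices.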
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Lawson, C. L.; Ahmad, A. R.
1992-01-01
The paper first presents the details of the development of a new six-noded plane triangular finite dynamic element. A block Lanczos algorithm is developed next for the accurate and efficient solution of the quadratic matrix eigenvalue problem associated with the finite dynamic element formulation. The resulting computer program fully exploits matrix sparsity inherent in such a discretization and proves to be most efficient for the extraction of the usually required first few roots and vectors, including repeated ones. Most importantly, the present eigenproblem solution is shown to be comparable to that of the corresponding finite element analysis, thereby rendering the associated dynamic element method rather attractive owing to superior convergence characteristics of such elements, presented herein.
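The quadratic matrix eigenvalue problem (λ²A₂ + λA₁ + A₀)x = 0 arising from the finite dynamic element formulation is commonly handled by linearizing it into a generalized eigenproblem of twice the dimension. A dense sketch of that companion linearization (a simple stand-in for the paper's sparse block Lanczos solver):

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eigs(A2, A1, A0):
    """Solve (lam^2 A2 + lam A1 + A0) x = 0 via first companion linearization:
    L v = lam M v with v = [x; lam x]."""
    n = A0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    L = np.block([[Z, I], [-A0, -A1]])
    M = np.block([[I, Z], [Z, A2]])
    lam, V = eig(L, M)
    return lam, V[:n, :]   # eigenvalues and the x-part of the eigenvectors

# Tiny check case: lam^2 I - diag(4, 9) = 0  ->  lam = ±2, ±3
A2 = np.eye(2)
A1 = np.zeros((2, 2))
A0 = -np.diag([4.0, 9.0])
lam, X = quadratic_eigs(A2, A1, A0)
```

Substituting v = [x; λx] into L v = λ M v reproduces the original quadratic pencil row by row, which is why the doubled problem has exactly the 2n roots of the quadratic one.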
NASA Technical Reports Server (NTRS)
Stein, M.; Stein, P. A.
1978-01-01
Approximate solutions for three nonlinear orthotropic plate problems are presented: (1) a thick plate attached to a pad having nonlinear material properties which, in turn, is attached to a substructure which is then deformed; (2) a long plate loaded in inplane longitudinal compression beyond its buckling load; and (3) a long plate loaded in inplane shear beyond its buckling load. For all three problems, the two dimensional plate equations are reduced to one dimensional equations in the y-direction by using a one dimensional trigonometric approximation in the x-direction. Each problem uses different trigonometric terms. Solutions are obtained using an existing algorithm for simultaneous, first order, nonlinear, ordinary differential equations subject to two point boundary conditions. Ordinary differential equations are derived to determine the variable coefficients of the trigonometric terms.
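The reduction described above yields simultaneous first-order nonlinear ODEs subject to two-point boundary conditions. As an illustration of that problem class only (not the plate equations themselves), a generic nonlinear two-point BVP can be solved with a modern collocation routine:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Stand-in two-point BVP: u'' + u + u^3 = 0, u(0) = 0, u(1) = 0.5,
# written as a first-order system y = [u, u'] as in the reduction above.
def rhs(x, y):
    return np.vstack([y[1], -y[0] - y[0] ** 3])

def bc(ya, yb):
    # Residuals of the two-point boundary conditions
    return np.array([ya[0], yb[0] - 0.5])

x = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)))  # zero initial guess
```

`sol.sol` is a callable interpolant of the converged solution over [0, 1].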
Mikulec, Anthony A.; Hartsock, Jared J.; Salt, Alec N.
2008-01-01
Introduction: Intratympanic drug delivery has become widely used in the clinic, but little is known about how clinically utilized drug preparations affect round window membrane permeability or how much drug is actually delivered to the cochlea. This study evaluated the effect of clinically relevant carrier solutions, and of suction near the round window membrane (RWM), on the permeability properties of the RWM. Methods: RWM permeability was assessed by perfusion of the marker TMPA into the round window niche while monitoring entry into perilymph using TMPA-selective electrodes sealed into scala tympani. Results: High-osmolarity solution increased RWM permeability by a factor of 2 to 3, benzyl alcohol (a preservative used in some drug formulations) increased permeability by a factor of 3 to 5, and suctioning near the RWM increased permeability by a factor of 10 to 15. Conclusions: Variations in available drug formulations can potentially alter RWM permeability properties and affect the amount of drug delivered to the inner ear. Drug solution osmolarity, benzyl alcohol content, and possible drying of the round window membrane during suctioning of the middle ear can all have a substantial influence on the perilymph drug levels achieved. PMID:18758387
Moulton-Meissner, Heather; Noble-Wang, Judith; Gupta, Neil; Hocevar, Susan; Kallen, Alex; Arduino, Matthew
2015-08-01
Specific deviations from United States Pharmacopeia standards were analyzed to investigate the factors allowing an outbreak of Serratia marcescens bloodstream infections in patients receiving compounded amino acid solutions. Filter challenge experiments using the outbreak strain of S. marcescens were compared with those that used the filter challenge organism recommended by ASTM International (Brevundimonas diminuta ATCC 19162) to determine the frequency and degree of organism breakthrough. Disk and capsule filters (0.22- and 0.2-μm nominal pore size, respectively) were challenged with either the outbreak strain of S. marcescens or B. diminuta ATCC 19162. The following variables were compared: culture conditions in which organisms were grown overnight or cultured in sterile water (starved), solution type (15% amino acid solution or sterile water), and filtration with or without a 0.5-μm prefilter. Small-scale, syringe-driven, disk-filtration experiments of starved bacterial cultures indicated that approximately 1 in every 1,000 starved S. marcescens cells (0.12%) was able to pass through a 0.22-μm nominal pore-size filter, and about 1 in every 1,000,000 cells was able to pass through a 0.1-μm nominal pore-size filter. No passage of the B. diminuta ATCC 19162 cells was observed with either filter. In full-scale experiments, breakthrough was observed only when 0.2-μm capsule filters were challenged with starved S. marcescens in 15% amino acid solution without a 0.5-μm prefiltration step. Laboratory simulation testing revealed that under certain conditions, bacteria can pass through 0.22- and 0.2-μm filters intended for sterilization of an amino acid solution. Bacteria did not pass through 0.2-μm filters when a 0.5-μm prefilter was used. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Tel, R M; Berends, G T
1980-10-01
Aqueous solutions of cholesterol and some cholesteryl esters were prepared, so that the hydrolysis of cholesteryl esters by enzymatic methods could be studied in some detail. The total cholesterol concentration of the aqueous cholesterol and cholesteryl ester solutions was determined by six different enzymatic procedures as well as by the Liebermann-Burchard method. For some esters (the acetate and arachidonate esters) the esterase reaction is not complete within the usual reaction time, whereas most other esters gave analytical results lower than theoretical. With the Liebermann-Burchard method all esters reacted completely within the reaction time. The esterases have very different specificities for the various cholesteryl esters. With the enzymatic methods, several commercial control sera as well as human sera gave lower cholesterol concentrations than with the Liebermann-Burchard method. These differences can be explained mainly by incomplete hydrolysis. Some practical recommendations are given.
Gair, Jonathan
2013-01-01
The gastroenterology procedures environment has proven to be fertile ground for the realization of moral distress as it relates to the practice of nursing. Specifically, nurses are expected to fulfill their duty as advocates for their clients at all times and within all contexts; however, their ability to discharge this essential function has been complicated by such influential factors as sedating medications, competing ethical motivations, discordant conclusions of moral reasoning and action, as well as competing institutional factors. This article begins with a fictional case study to introduce readers to the contextual essence of the moral distress that a group of gastroenterology nurses was collectively experiencing. Subsequently, the aim of this article was to explicate how one department, with the aid of an ethics committee, negotiated a process similar to the case study to develop a pragmatic policy and identify an educational primer that encourages nurses to reexamine and value the tangible realities inherent and expected of an advocate in the dynamically complex environment that characterizes all gastroenterology procedure environments where gastroenterology nurses practice.
Dean, Brian; Wright, Cameron H G; Barrett, Steven F
2009-01-01
Fly-inspired vision sensors have been shown to have many interesting qualities, such as hyperacuity (the ability to achieve movement resolution beyond the theoretical limit), extreme sensitivity to motion, and (through software simulation) image edge extraction, motion detection, and orientation and location of a line. Many of these qualities are beyond the ability of traditional computer vision sensors such as charge-coupled device (CCD) arrays. To obtain these characteristics, a prototype fly-inspired sensor has been built and tested in a laboratory environment and shows promise. Any sophisticated visual system, whether man-made or natural, must adapt adequately to lighting conditions; light adaptation is therefore a vital milestone in getting the aforementioned prototype working in real-world conditions. By studying how the common house fly, Musca domestica, achieves this adaptation, it was possible to design an analog solution to the problem. The solution uses instrumentation amplifiers and an additional sensor to measure the ambient light. This paper examines this circuitry in greater detail and explores the characterization and limitations of the solution.
Moro, Leo; Serino, Francesco-Maria; Ricci, Stefano; Abbruzzese, Gloria; Antonelli-Incalzi, Raffaele
2014-11-01
Varicose veins are treated under local infiltration anesthesia. The literature shows that adding sodium bicarbonate reduces the pain associated with local infiltration anesthesia; nonetheless, sodium bicarbonate is underused. We sought to assess whether a solution of mepivacaine 2% plus adrenaline with sodium bicarbonate 1.4% results in less pain from local infiltration anesthesia preceding ambulatory phlebectomies, compared with the standard preparation diluted with normal saline. In all, 100 adult patients undergoing scheduled ambulatory phlebectomy were randomized to receive either a solution of mepivacaine chlorhydrate 2% plus adrenaline in sodium bicarbonate 1.4% or a similar solution diluted in normal saline 0.9%. Median pain scores associated with local infiltration anesthesia in the intervention and control groups were 2 (SD=1.6) and 5 (SD=2.0), respectively (P<.0001). A general linear model with bootstrapped confidence intervals showed that using the alkalinized solution would lead to a reduction in pain rating of about 3 points. Patients were not asked to distinguish the pain of the needle stick from the pain of the infiltration, and a complete clinical study of sensitivity in the infiltrated area was not conducted. The data obtained from this study may contribute to improving local infiltration anesthesia in ambulatory phlebectomy and other phlebologic procedures. Copyright © 2014 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
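Bootstrapped confidence intervals of the kind used in the study's general linear model can be illustrated generically: resample each group with replacement, recompute the difference in means, and take percentiles. The pain-score data below are invented for illustration and are not the trial's data:

```python
import numpy as np

def bootstrap_diff_ci(a, b, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for mean(a) - mean(b)."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    diffs = np.array([
        rng.choice(a, a.size).mean() - rng.choice(b, b.size).mean()
        for _ in range(n_boot)
    ])
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])

# Illustrative pain scores on a 0-10 scale (made up, NOT the trial's data)
control     = [5, 6, 4, 7, 5, 5, 6, 4, 5, 7]
alkalinized = [2, 3, 1, 2, 2, 3, 2, 1, 2, 3]
lo, hi = bootstrap_diff_ci(control, alkalinized)
```

A 95% interval that excludes zero indicates a pain reduction, with its width conveying the uncertainty directly.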
Lasne, Françoise
2009-01-01
Nonspecific interactions between blotted proteins and unrelated secondary antibodies generate false positives in immunoblotting techniques. Some procedures have been developed to reduce this adsorption, but they may work in specific applications and be ineffective in others. "Double-blotting" has been developed to overcome this problem. It consists of interpolating a second blotting step between the usual probings of the blot membrane with the primary antibody and the secondary antibodies. This step, by isolating the primary antibody from the interfering proteins, guarantees the specificity of the probing with the secondary antibody. This method has been developed for the study of erythropoietin in concentrated urine since a strong nonspecific binding of biotinylated secondary antibodies to some urinary proteins is observed using classical immunoblotting protocols. However, its concept makes it usable in other applications that come up against this kind of problem. This method is expected to be especially useful for investigating proteins that are present in minute amounts in complex biological media.
Lasne, Françoise
2015-01-01
Nonspecific interactions between blotted proteins and unrelated secondary antibodies generate false positives in immunoblotting techniques. Some procedures have been developed to reduce this adsorption, but they may work in specific applications and be ineffective in others. "Double-blotting" has been developed to overcome this problem. It consists of interpolating a second blotting step between the usual probings of the blot membrane with the primary antibody and the secondary antibodies. This step, by isolating the primary antibody from the interfering proteins, guarantees the specificity of the probing with the secondary antibody. This method has been developed for the study of erythropoietin in concentrated urine since a strong nonspecific binding of biotinylated secondary antibodies to some urinary proteins is observed using classical immunoblotting protocols. However, its concept makes it usable in other applications that come up against this kind of problem. This method is expected to be especially useful for investigating proteins that are present in minute amounts in complex biological media.
Schneider, André
2006-01-01
Understanding the availability of a metal in soil requires a minimum knowledge of its speciation in the soil solution. Here, we evaluated an alternative to the use of ion exchangers for estimating the free ionic fraction of cadmium (FCd) in solution, based on the exchange selectivity coefficient (VK) rather than the distribution coefficient (DK). Because VK for the Cd-Ca exchange on the Amberlite resin used was independent of the solution Ca concentration (0.5-7.5 mM) and pH (range: 4.5-6), the experiment on a solution mimicking the analyzed solution to estimate VK was not necessary. The influence of variable Ca and Mg concentrations in solution on FCd was assessed in synthetic solutions containing either citrate or malate. The best way to estimate FCd appeared to be to treat the exchange data as if Ca alone were present. However, neither the proposed approach nor those applying DK prevent the overestimation of FCd when Ca is partly complexed in the analyzed solution. A method intended to provide two replicate estimates of FCd for a given, unique solution was also studied on solutions issued from sorption-desorption experiments performed on a humic podzol; it consists of two successive additions of a known resin mass to a single sample. The two estimates were close and not significantly different.
NASA Astrophysics Data System (ADS)
Ranjan, Srikant
2005-11-01
Fatigue-induced failures in aircraft gas turbine and rocket engine turbopump blades and vanes are a pervasive problem. Turbine blades and vanes represent perhaps the most demanding structural applications due to the combination of high operating temperature, corrosive environment, high monotonic and cyclic stresses, long expected component lifetimes and the enormous consequence of structural failure. Single crystal nickel-base superalloy turbine blades are being utilized in rocket engine turbopumps and jet engines because of their superior creep, stress rupture, melt resistance, and thermomechanical fatigue capabilities over polycrystalline alloys. These materials have orthotropic properties making the position of the crystal lattice relative to the part geometry a significant factor in the overall analysis. Computation of stress intensity factors (SIFs) and the ability to model fatigue crack growth rate at single crystal cracks subject to mixed-mode loading conditions are important parts of developing a mechanistically based life prediction for these complex alloys. A general numerical procedure has been developed to calculate SIFs for a crack in a general anisotropic linear elastic material subject to mixed-mode loading conditions, using three-dimensional finite element analysis (FEA). The procedure does not require an a priori assumption of plane stress or plane strain conditions. The SIFs KI, KII, and KIII are shown to be a complex function of the coupled 3D crack tip displacement field. A comprehensive study of variation of SIFs as a function of crystallographic orientation, crack length, and mode-mixity ratios is presented, based on the 3D elastic orthotropic finite element modeling of tensile and Brazilian Disc (BD) specimens in specific crystal orientations. Variation of SIF through the thickness of the specimens is also analyzed. The resolved shear stress intensity coefficient or effective SIF, Krss, can be computed as a function of crack tip SIFs and the
Self-adaptive incremental Newton-Raphson algorithms
NASA Technical Reports Server (NTRS)
Padovan, J.
1980-01-01
Multilevel self-adaptive Newton-Raphson strategies are developed to improve the solution efficiency of nonlinear finite element simulations of statically loaded structures. The overall strategy involves three basic levels: first, preliminary solution tunneling via primitive operators; second, constant monitoring of the solution via quality, convergence, and nonlinearity tests; and third, self-adaptive algorithmic update procedures aimed at improving the convergence characteristics of the Newton-Raphson strategy. Numerical experiments are included to illustrate the results of the procedure.
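The three-level idea — increment the load, iterate Newton-Raphson, monitor convergence, and adapt when iteration stalls — can be sketched with a simple rule that halves the load increment on failure. This is an illustrative reduction, not the paper's multilevel strategy:

```python
import numpy as np

def incremental_newton(residual, jacobian, u0, total_load, n_inc=4, tol=1e-10, max_it=20):
    """Incremental-load Newton-Raphson with a simple self-adaptive rule:
    if an increment fails to converge, the load step is halved and retried."""
    u = np.asarray(u0, dtype=float)
    lam, dlam = 0.0, total_load / n_inc
    while lam < total_load - 1e-12:
        step = min(dlam, total_load - lam)
        trial, target = u.copy(), lam + step
        for _ in range(max_it):
            r = residual(trial, target)
            if np.linalg.norm(r) < tol:
                u, lam = trial, target              # increment converged
                break
            trial = trial - np.linalg.solve(jacobian(trial, target), r)
        else:
            dlam *= 0.5                             # adaptive update: shrink the step
            continue
        dlam = step                                 # keep the step that succeeded
    return u

# Toy nonlinear "structure": r(u, f) = u + u^3 - f  (one DOF, stiffening spring)
res = lambda u, f: np.array([u[0] + u[0] ** 3 - f])
jac = lambda u, f: np.array([[1.0 + 3.0 * u[0] ** 2]])
u_final = incremental_newton(res, jac, [0.0], total_load=10.0)  # u + u^3 = 10 -> u = 2
```

The convergence monitor here is a bare residual-norm test; the paper's quality and nonlinearity tests would slot in at the same point in the loop.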
Aich, Udayanath; Liu, Aston; Lakbub, Jude; Mozdzanowski, Jacek; Byrne, Michael; Shah, Nilesh; Galosy, Sybille; Patel, Pramthesh; Bam, Narendra
2016-03-01
Consistent glycosylation in therapeutic monoclonal antibodies is a major concern in the biopharmaceutical industry, as it affects the drug's safety and efficacy as well as the manufacturing process. Large numbers of samples are generated for glycan analysis during the various stages of recombinant protein drug development. Profiling and quantifying protein N-glycosylation is important but extremely challenging because of its microheterogeneity and, more importantly, the limitations of existing time-consuming sample preparation methods. Thus, a quantitative method with fast sample preparation is crucial for understanding, controlling, and modifying glycoform variance in therapeutic monoclonal antibody development. Presented here is a rapid and highly quantitative method for the analysis of N-glycans from monoclonal antibodies. The method comprises a simple and fast solution-based sample preparation step that uses nontoxic reducing reagents for direct labeling of N-glycans. The complete workflow for the preparation of fluorescently labeled N-glycans takes a total of 3 h, with less than 30 min needed for the release of N-glycans from monoclonal antibody samples.
Adaptive superposition of finite element meshes in linear and nonlinear dynamic analysis
NASA Astrophysics Data System (ADS)
Yue, Zhihua
2005-11-01
The numerical analysis of transient phenomena in solids, for instance, wave propagation and structural dynamics, is a very important and active area of study in engineering. Despite the current evolutionary state of modern computer hardware, practical analysis of large scale, nonlinear transient problems requires the use of adaptive methods where computational resources are locally allocated according to the interpolation requirements of the solution form. Adaptive analysis of transient problems involves obtaining solutions at many different time steps, each of which requires a sequence of adaptive meshes. Therefore, the execution speed of the adaptive algorithm is of paramount importance. In addition, transient problems require that the solution must be passed from one adaptive mesh to the next adaptive mesh with a bare minimum of solution-transfer error since this form of error compromises the initial conditions used for the next time step. A new adaptive finite element procedure (s-adaptive) is developed in this study for modeling transient phenomena in both linear elastic solids and nonlinear elastic solids caused by progressive damage. The adaptive procedure automatically updates the time step size and the spatial mesh discretization in transient analysis, achieving the accuracy and the efficiency requirements simultaneously. The novel feature of the s-adaptive procedure is the original use of finite element mesh superposition to produce spatial refinement in transient problems. The use of mesh superposition enables the s-adaptive procedure to completely avoid the need for cumbersome multipoint constraint algorithms and mesh generators, which makes the s-adaptive procedure extremely fast. Moreover, the use of mesh superposition enables the s-adaptive procedure to minimize the solution-transfer error. In a series of different solid mechanics problem types including 2-D and 3-D linear elastic quasi-static problems, 2-D material nonlinear quasi-static problems
Coliţă, Andrei; Coliţă, Anca; Zamfirescu, Dragos; Lupu, Anca Roxana
2012-09-01
Hematopoietic stem cell transplantation (HSCT) is a standard therapeutic option for several diseases. The success of the procedure depends on the quality and quantity of the transplanted cells and on the capacity of the stroma to create an optimal microenvironment that supports survival and development of the hematopoietic elements. Conditions associated with stromal dysfunction lead to slower or insufficient engraftment and/or immune reconstitution. A possible solution to this problem is a combined graft of hematopoietic stem cells along with the medullary stroma in the form of a vascularized bone marrow transplant (VBMT). Another major drawback of HSCT is the risk of graft-versus-host disease (GVHD). Recently, mesenchymal stromal cells (MSC) have demonstrated the capacity to down-regulate alloreactive T cells and to enhance engraftment. Cotransplantation of MSC could be a therapeutic option for better engraftment and GVHD prevention. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ipeaiyeda, Ayodele Rotimi; Ayoade, Abisayo Ruth
2017-07-01
The co-precipitation procedure has been widely employed for preconcentration and separation of metal ions from the matrices of environmental samples, owing to its simplicity, low consumption of separating solvent, and short analysis time. Various organic ligands have been used for this purpose. However, there is a dearth of information on the application of 8-hydroxyquinoline (8-HQ) as ligand and Cu(II) as carrier element. The use of Cu(II) is desirable because it introduces no contamination or background adsorption interference. Therefore, the objective of this study was to use 8-HQ in the presence of Cu(II) for coprecipitation of Cd(II), Co(II), Cr(III), Ni(II) and Pb(II) from standard solutions and surface water prior to their determination by flame atomic absorption spectrometry (FAAS). The effects of pH, sample volume, amounts of 8-HQ and Cu(II), and interfering ions on the recoveries of metal ions from standard solutions were monitored using FAAS. The water samples were treated with 8-HQ under the optimum experimental conditions and metal concentrations were determined by FAAS. The metal concentrations in water samples not treated with 8-HQ were also determined. The optimum recovery values for metal ions were higher than 85.0%. The concentrations (mg/L) of Co(II), Ni(II), Cr(III) and Pb(II) in water samples treated with 8-HQ were 0.014 ± 0.002, 0.03 ± 0.01, 0.04 ± 0.02 and 0.05 ± 0.02, respectively. These concentrations differed significantly from those obtained without the coprecipitation technique. The coprecipitation procedure using 8-HQ as ligand and Cu(II) as carrier element enhanced the preconcentration and separation of metal ions from the water sample matrix.
Culhaoglu, Tanya; Zheng, Dan; Méchin, Valérie; Baumberger, Stéphanie
2011-10-15
The objective of this study was to adapt and improve an environmentally friendly and fast routine method for the analysis of ferulic and p-coumaric acids released from grass cell walls by alkaline hydrolysis. This methodological development was performed on maize samples selected for their contrasting contents of ferulic and p-coumaric acids, a consequence of their different maturity stages (from the stage of the 7th leaf with visible ligule to the stage of silage harvest). We demonstrate that the Carrez method is an efficient substitute for the common solvent-consuming extraction by ethyl acetate for the preparation of samples suitable for HPLC-ESI-MS analysis. We show that methanol can be replaced by ethanol in the Carrez step, and finally we propose a scale reduction of the procedure that offers a first step towards high-throughput determinations. The new method reduces solvent consumption by a factor of 100 and requires only ethanol as organic solvent. Copyright © 2011 Elsevier B.V. All rights reserved.
Watanabe, Hiroshi C; Banno, Misa; Sakurai, Minoru
2016-03-14
Quantum effects in solute-solvent interactions, such as the many-body effect and the dipole-induced dipole, are known to be critical factors influencing the infrared spectra of species in the liquid phase. For accurate spectrum evaluation, the surrounding solvent molecules, in addition to the solute of interest, should be treated using a quantum mechanical method. However, conventional quantum mechanics/molecular mechanics (QM/MM) methods cannot handle free QM solvent molecules during molecular dynamics (MD) simulation because of the diffusion problem. To deal with this problem, we have previously proposed an adaptive QM/MM "size-consistent multipartitioning (SCMP) method". In the present study, as the first application of the SCMP method, we demonstrate the reproduction of the infrared spectrum of liquid-phase water, and evaluate the quantum effect in comparison with conventional QM/MM simulations.
NASA Astrophysics Data System (ADS)
Kopera, Michal A.; Giraldo, Francis X.
2014-10-01
The resolutions of interest in atmospheric simulations require prohibitively large computational resources. Adaptive mesh refinement (AMR) mitigates this problem by concentrating high resolution in crucial areas of the domain. We investigate the performance of a tree-based AMR algorithm for the high-order discontinuous Galerkin method on quadrilateral grids with non-conforming elements. We perform a detailed analysis of the cost of AMR by comparing against uniform-resolution reference simulations of two standard atmospheric test cases: density current and rising thermal bubble. The analysis shows up to 15 times speed-up of the AMR simulations, with the cost of mesh adaptation below 1% of the total runtime. We pay particular attention to implicit-explicit (IMEX) time integration methods and show that the ARK2 method is more robust with respect to dynamically adapting meshes than BDF2. Preliminary analysis of preconditioning reveals that it can be an important factor in the AMR overhead. Compiler optimizations provide significant runtime reduction and positively affect the effectiveness of AMR, allowing for speed-ups greater than would follow from a simple performance model.
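The core of a tree-based AMR pass is a flag-and-split loop over cells. The one-dimensional sketch below is purely illustrative; the cell-width indicator is a hypothetical stand-in for the discontinuous Galerkin error indicators the paper actually uses:

```python
def refine(cells, indicator, tol, max_level=5):
    """One AMR pass: split any cell whose error indicator exceeds tol.
    Cells are (left, right, level) tuples; 'indicator' maps a cell to
    a scalar (a hypothetical criterion, not the paper's DG residual)."""
    out = []
    for (a, b, lvl) in cells:
        if lvl < max_level and indicator(a, b) > tol:
            m = 0.5 * (a + b)
            out += [(a, m, lvl + 1), (m, b, lvl + 1)]  # bisect the cell
        else:
            out.append((a, b, lvl))
    return out

# One refinement pass on the unit interval, using cell width as the
# stand-in indicator: only cells wider than tol get split.
cells = refine([(0.0, 1.0, 0)], lambda a, b: b - a, tol=0.6)
```

Repeated passes converge once every flagged cell satisfies the tolerance or reaches the maximum level, which mirrors how a quadtree deepens only where the solution demands it.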
Kim, Hyun-Seok; Huber, Kerry C
2007-06-27
A technique was established to remove impurities (e.g., salts) from starch dissolved in strong alkali and neutralized with acid to accommodate starch structural analysis via intermediate-pressure size-exclusion chromatography (IPSEC). Starch (corn and wheat) subjected to an alkaline-microwave dissolution scheme (35 s microwave heating in a mixture of 6 M urea and 1 M KOH) was either treated with ion-exchange resin or passed through a desalting column to remove salt/urea contaminants. Control (untreated) starch solution analyzed by IPSEC displayed a significant interfering peak (attributable to salt/urea), which coeluted with the starch amylose peak. The interfering peak was most efficiently eliminated by first passing the starch solution through a desalting column, which process effectively removed impurities (e.g., salts/urea) without appearing to adversely impact the starch structural analysis. This simple technique coupled with the rapid alkaline-microwave starch dissolution procedure greatly expedites structural investigation of starch by facilitating analysis by IPSEC.
McAllister, M; Billett, S; Moyle, W; Zimmer-Gembeck, M
2009-03-01
Self-harm is a risk factor for further episodes of self-harm and suicide. The service most commonly used by self-injurers is the emergency department. However, nurses there have very often received no special training to identify and address the needs of these patients. In addition, this care context is typically biomedical, and without psychosocial skills nurses can feel unprepared and lacking in confidence, particularly on the issue of self-harm. In a study that aimed to improve understanding and teach solution-focused skills to emergency nurses so that they may be more helpful to patients who self-harm, several outcome measures were considered, including knowledge, professional identity and clinical reasoning. The think-aloud procedure was used as a way of exploring and improving the solution-focused nature of nurses' clinical reasoning in a range of self-harm scenarios. A total of 28 emergency nurses completed the activity. Data were audiotaped, transcribed and analysed. The results indicated significant improvements in nurses' ability to consider patients' psychosocial needs following the intervention. This study has thus shown that interactive education not only improves attitude and confidence but also enlarges nurses' reasoning skills to include psychosocial needs. This is likely to improve the quality of care provided to patients with mental health problems who present to emergency settings, reducing stigma for patients and providing the important first steps to enduring change: acknowledgment and respect.
NASA Astrophysics Data System (ADS)
Akhunov, R. R.; Gazizov, T. R.; Kuksenko, S. P.
2016-08-01
The mean time needed to solve a series of systems of linear algebraic equations (SLAEs) as a function of the number of SLAEs is investigated. It is proved that this function has an extremum point. An algorithm is developed for adaptively determining when the preconditioner matrix should be recalculated during the solution of a series of SLAEs. A numerical experiment involving the repeated solution of series of SLAEs with the proposed algorithm, computing 100 capacitance matrices for two different structures (a microstrip line as its thickness varies and a modal filter as the gap between the conductors varies), is carried out. The speedups turned out to be close to the optimal ones.
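The essential decision in such an algorithm is when a frozen preconditioner has degraded enough to be worth rebuilding. The sketch below uses a simple iteration-growth trigger; this is an assumed heuristic for illustration, not the extremum-based criterion derived in the paper, and the 1x1 "systems" in the usage example are hypothetical:

```python
def solve_series(matrices, rhs_list, solve_with_precond, build_precond,
                 growth_limit=1.5):
    """Solve a series of related linear systems, rebuilding the
    preconditioner only when iteration counts degrade (a simplified
    stand-in for the adaptive recalculation rule in the paper)."""
    precond = build_precond(matrices[0])
    base_iters = None
    solutions, rebuilds = [], 0
    for A, b in zip(matrices, rhs_list):
        x, iters = solve_with_precond(A, b, precond)
        if base_iters is None:
            base_iters = iters
        elif iters > growth_limit * base_iters:
            precond = build_precond(A)   # matrix drifted too far: rebuild
            rebuilds += 1
            x, iters = solve_with_precond(A, b, precond)
            base_iters = iters
        solutions.append(x)
    return solutions, rebuilds

# Toy scalar "systems": the iteration count grows with the distance
# between the current matrix and the one the preconditioner was built
# for (purely illustrative).
toy_solve = lambda A, b, P: (b / A, int(10 * (1 + abs(A - P))))
sols, rebuilds = solve_series([1.0, 1.1, 2.0], [1.0, 1.0, 1.0],
                              toy_solve, lambda A: A)
```

The trade-off this automates is exactly the one the abstract describes: rebuilding too often wastes factorization time, rebuilding too rarely inflates the iterative solve time.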
Digital adaptive flight controller development
NASA Technical Reports Server (NTRS)
Kaufman, H.; Alag, G.; Berry, P.; Kotob, S.
1974-01-01
A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Two designs are described for an example aircraft. Each of these designs uses a weighted least squares procedure to identify parameters defining the dynamics of the aircraft. The two designs differ in the way in which control law parameters are determined. One uses the solution of an optimal linear regulator problem to determine these parameters while the other uses a procedure called single stage optimization. Extensive simulation results and analysis leading to the designs are presented.
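The weighted least squares identification step can be illustrated with a small example. The scalar discrete model x[k+1] = a*x[k] + b*u[k] and all numbers below are hypothetical, standing in for the aircraft dynamics; the airborne implementation itself is not reproduced here:

```python
import numpy as np

def wls_identify(Phi, y, w):
    """Weighted least-squares estimate of theta minimizing
    sum_i w[i] * (y[i] - Phi[i] @ theta)**2, via the normal equations.
    A minimal sketch of the identification step, not the flight code."""
    W = np.diag(w)
    return np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ y)

# Hypothetical example: recover a, b in x[k+1] = a*x[k] + b*u[k]
# from simulated noise-free input/output data.
a_true, b_true = 0.9, 0.5
x, rows, targets = 1.0, [], []
for k in range(20):
    u = 1.0 if k % 2 == 0 else -1.0   # persistently exciting input
    x_next = a_true * x + b_true * u
    rows.append([x, u])
    targets.append(x_next)
    x = x_next
theta = wls_identify(np.array(rows), np.array(targets), np.ones(20))
```

With noisy flight data the weights would down-weight less reliable samples; here uniform weights suffice to show the mechanics.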
Beauvais, Z S; Thompson, K H; Kearfott, K J
2009-07-01
Due to a recent upward trend in the price of uranium and subsequent increased interest in uranium mining, accurate modeling of baseline dose from environmental sources of radioactivity is of increasing interest. Residual radioactivity model and code (RESRAD) is a program used to model environmental movement and calculate the dose due to the inhalation, ingestion, and exposure to radioactive materials following their placement. This paper presents a novel use of RESRAD for the calculation of dose from non-enhanced, or ancient, naturally occurring radioactive material (NORM). In order to use RESRAD to calculate the total effective dose (TED) due to ancient NORM, a procedural adaptation was developed to negate the effects of time-progressive distribution of radioactive materials. A dose due to United States' average concentrations of uranium, actinium, and thorium series radionuclides was then calculated. For adults exposed in a residential setting and assumed to eat significant amounts of food grown in NORM-concentrated areas, the annual dose due to national average NORM concentrations was 0.935 mSv y(-1). A set of environmental dose factors was calculated for simple estimation of dose from uranium, thorium, and actinium series radionuclides for various age groups and exposure scenarios as a function of elemental uranium and thorium activity concentrations in groundwater and soil. The values of these factors for uranium were lowest for an adult exposed in an industrial setting: 0.00476 microSv kg Bq(-1) y(-1) for soil and 0.00596 microSv m(3) Bq(-1) y(-1) for water (assuming a 1:1 234U:238U activity ratio in water). The uranium factors were highest for infants exposed in a residential setting and assumed to ingest food grown onsite: 34.8 microSv kg Bq(-1) y(-1) in soil and 13.0 microSv m(3) Bq(-1) y(-1) in water.
Golden, D A; Beuchat, L R
1990-01-01
Recovery and colony formation by healthy and sublethally heat-injured cells of Zygosaccharomyces rouxii as influenced by the procedure for sterilizing recovery media (YM agar [YMA], wort agar, cornmeal agar, and oatmeal agar) were investigated. Media were supplemented with various concentrations of glucose, sucrose, glycerol, or sorbitol and sterilized by autoclaving (110 degrees C, 15 min) and by repeated treatment with steam (100 degrees C). An increase in sensitivity was observed when heat-injured cells were plated on glucose-supplemented YMA at an aw of 0.880 compared with aws of 0.933 and 0.998. Colonies which developed from unheated and heated cells on YMA at aws of 0.998 and 0.933 generally exceeded 0.5 mm in diameter within 3.5 to 4 days of incubation at 25 degrees C, whereas colonies formed on YMA at an aw of 0.880 typically did not exceed 0.5 mm in diameter until after 5.5 to 6.5 days of incubation. The number of colonies exceeding 0.5 mm in diameter which were formed by heat-injured cells on YMA at an aw of 0.880 was 2 to 3 logs less than the total number of colonies detected, i.e., on YMA at an aw of 0.933 and using no limits of exclusion based on colony diameter. A substantial portion of cells which survived heat treatment were sublethally injured as evidenced by increased sensitivity to a suboptimum aw (0.880). In no instance was recovery of Z. rouxii significantly affected by medium sterilization procedure when glucose or sorbitol was used as the aw-suppressing solute.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2403251
NASA Astrophysics Data System (ADS)
Jiang, Xiao-Yu; Zong, Yan-Tao; Wang, Xi; Chen, Zhuo; Liu, Zhong-Xuan
2010-11-01
MEMS gyros are increasingly used in inertial measurement, but random drift is an important error source restricting their precision. Establishing proper models close to the actual state of motion and random drift, and designing an effective filter, are ways of enhancing the precision of a MEMS gyro. The dynamic model of angular motion is studied, an ARMA model describing the random drift is established based on the time-series analysis method, and a modified self-adaptive Kalman filter is designed for the signal processing. Finally, the random drift is identified and analyzed by Allan variance. It is concluded that the above method can effectively eliminate the random drift and improve the precision of a MEMS gyro.
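As a bare-bones illustration of the filtering stage, here is a scalar Kalman filter with a random-walk state model. This is a deliberately simplified sketch: the paper's filter is self-adaptive and coupled to an ARMA drift model, neither of which is reproduced here, and the noise parameters below are hypothetical:

```python
def kalman_1d(zs, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed in noise.
    q: process noise variance, r: measurement noise variance.
    A minimal stand-in for the modified self-adaptive filter above."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        p = p + q                  # predict: state variance grows
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Hypothetical constant-rate signal: the estimate converges toward it.
est = kalman_1d([1.0] * 50, q=1e-5, r=0.1)
```

In the adaptive variant described in the abstract, q and r would themselves be tuned online from the innovation statistics rather than fixed in advance.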
Adaptive quadrilateral and triangular finite-element scheme for compressible flows
NASA Technical Reports Server (NTRS)
Ramakrishnan, R.; Thornton, Earl A.; Bey, Kim S.
1990-01-01
The development of an adaptive mesh refinement procedure for analyzing high-speed compressible flows using the finite-element method is described. This new adaptation procedure, which uses both quadrilateral and triangular elements, was implemented with two explicit finite-element algorithms - the two-step Taylor-Galerkin and the multistep Galerkin-Runge-Kutta schemes. A von Neumann stability analysis and a rotating 'cosine hill' problem demonstrate the instability of the Taylor-Galerkin scheme when coupled with the adaptation procedure. For the same adaptive refinement scheme, the Galerkin-Runge-Kutta procedure yields stable solutions within its explicit stability limit. The utility of this new adaptation procedure for the prediction of compressible flow features is illustrated for inviscid problems involving strong shock interactions at hypersonic speeds.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
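The simplest static-system kinetics problem of the kind LSENS handles is a single first-order reaction, dc/dt = -k*c. The self-contained RK4 sketch below is not LSENS's own integrator (which is a solver suited to stiff chemistry); it only illustrates checking a numerical kinetics solution against the closed-form answer:

```python
import math

def integrate_first_order(k, c0, t, n=1000):
    """Fixed-step RK4 integration of dc/dt = -k*c, the simplest
    homogeneous gas-phase kinetics problem (illustrative only)."""
    dt = t / float(n)
    c = c0
    for _ in range(n):
        k1 = -k * c
        k2 = -k * (c + 0.5 * dt * k1)
        k3 = -k * (c + 0.5 * dt * k2)
        k4 = -k * (c + dt * k3)
        c += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return c

# Compare against the closed-form solution c(t) = c0 * exp(-k*t).
c_num = integrate_first_order(2.0, 1.0, 1.0)
c_exact = math.exp(-2.0)
```

A sensitivity coefficient such as dc/dk, which LSENS computes systematically, could be approximated here by differencing two runs with perturbed k.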
Error estimation and adaptivity in Navier-Stokes incompressible flows
NASA Astrophysics Data System (ADS)
Wu, J.; Zhu, J. Z.; Szmelter, J.; Zienkiewicz, O. C.
1990-07-01
An adaptive remeshing procedure for solving Navier-Stokes incompressible fluid flow problems is presented in this paper. This procedure has been implemented using the error estimator developed by Zienkiewicz and Zhu (1987, 1989) and a semi-implicit time-marching scheme for Navier-Stokes flow problems (Zienkiewicz et al. 1990). Numerical examples are presented, showing that the error estimation and adaptive procedure are capable of monitoring the flow field, updating the mesh when necessary, and providing nearly optimal meshes throughout the calculation, thus making the solution reliable and the computation economical and efficient.
Fukuda, Ryoichi; Ehara, Masahiro
2015-12-31
The effects from solvent environment are specific to the electronic states; therefore, a computational scheme for solvent effects consistent with the electronic states is necessary to discuss electronic excitation of molecules in solution. The PCM (polarizable continuum model) SAC (symmetry-adapted cluster) and SAC-CI (configuration interaction) methods are developed for such purposes. The PCM SAC-CI adopts the state-specific (SS) solvation scheme where solvent effects are self-consistently considered for every ground and excited states. For efficient computations of many excited states, we develop a perturbative approximation for the PCM SAC-CI method, which is called corrected linear response (cLR) scheme. Our test calculations show that the cLR PCM SAC-CI is a very good approximation of the SS PCM SAC-CI method for polar and nonpolar solvents.
Sowers, K R; Gunsalus, R P
1995-12-01
The methanogenic Archaea, like the Bacteria and Eucarya, possess several osmoregulatory strategies that enable them to adapt to osmotic changes in their environment. The physiological responses of Methanosarcina species to different osmotic pressures were studied in extracellular osmolalities ranging from 0.3 to 2.0 osmol/kg. Regardless of the isolation source, the maximum rate of growth for species from freshwater, sewage, and marine sources occurred in extracellular osmolalities between 0.62 and 1.0 osmol/kg and decreased to minimal detectable growth as the solute concentration approached 2.0 osmol/kg. The steady-state water-accessible volume of Methanosarcina thermophila showed a disproportionate decrease of 30% between 0.3 and 0.6 osmol/kg and then a linear decrease of 22% as the solute concentration in the media increased from 0.6 to 2.0 osmol/kg. The total intracellular K(sup+) ion concentration in M. thermophila increased from 0.12 to 0.5 mol/kg as the medium osmolality was raised from 0.3 to 1.0 osmol/kg and then remained above 0.4 mol/kg as extracellular osmolality was increased to 2.0 osmol/kg. Concurrent with K(sup+) accumulation, M. thermophila synthesized and accumulated (alpha)-glutamate as the predominant intracellular osmoprotectant in media containing up to 1.0 osmol of solute per kg. At medium osmolalities greater than 1.0 osmol/kg, the (alpha)-glutamate concentration leveled off and the zwitterionic (beta)-amino acid N(sup(epsilon))-acetyl-(beta)-lysine was synthesized, accumulating to an intracellular concentration exceeding 1.1 osmol/kg at an osmolality of 2.0 osmol/kg. When glycine betaine was added to culture medium, it caused partial repression of de novo (alpha)-glutamate and N(sup(epsilon))-acetyl-(beta)-lysine synthesis and was accumulated by the cell as the predominant compatible solute. The distribution and concentration of compatible solutes in eight strains representing five Methanosarcina spp. were similar to those found in M
Stamatakos, Georgios S.; Georgiadi, Eleni C.; Graf, Norbert; Kolokotroni, Eleni A.; Dionysiou, Dimitra D.
2011-01-01
The development of computational models for simulating tumor growth and response to treatment has gained significant momentum during the last few decades. At the dawn of the era of personalized medicine, providing insight into complex mechanisms involved in cancer and contributing to patient-specific therapy optimization constitute particularly inspiring pursuits. The in silico oncology community is facing the great challenge of effectively translating simulation models into clinical practice, which presupposes a thorough sensitivity analysis, adaptation and validation process based on real clinical data. In this paper, the behavior of a clinically-oriented, multiscale model of solid tumor response to chemotherapy is investigated, using the paradigm of nephroblastoma response to preoperative chemotherapy in the context of the SIOP/GPOH clinical trial. A sorting of the model's parameters according to the magnitude of their effect on the output has unveiled the relative importance of the corresponding biological mechanisms; major impact on the result of therapy is credited to the oxygenation and nutrient availability status of the tumor and the balance between the symmetric and asymmetric modes of stem cell division. The effect of a number of parameter combinations on the extent of chemotherapy-induced tumor shrinkage and on the tumor's growth rate are discussed. A real clinical case of nephroblastoma has served as a proof of principle study case, demonstrating the basics of an ongoing clinical adaptation and validation process. By using clinical data in conjunction with plausible values of model parameters, an excellent fit of the model to the available medical data of the selected nephroblastoma case has been achieved, in terms of both volume reduction and histological constitution of the tumor. In this context, the exploitation of multiscale clinical data drastically narrows the window of possible solutions to the clinical adaptation problem. PMID:21407827
Pérez-Jordá, José M
2010-01-14
A new method for solving the Schrödinger equation is proposed, based on the following steps. First, a map u = u(r) from Cartesian coordinates r to a new coordinate system u is chosen. Second, the solution (orbital) ψ(r) is written in terms of a function U depending on u, so that ψ(r) = |J(u)|^(-1/2) U(u), where |J(u)| is the Jacobian determinant of the map. Third, U is expressed as a linear combination of plane waves in the u coordinate, U(u) = Σ_k c_k e^(ik·u). Finally, the coefficients c_k are variationally optimized to obtain the best energy, using a generalization of an algorithm originally developed for the Coulomb potential [J. M. Perez-Jorda, Phys. Rev. B 58, 1230 (1998)]. The method is tested on the radial Schrödinger equation for the hydrogen atom, yielding micro-Hartree accuracy or better for the energies of ns and np orbitals (with n up to 5) using expansions of moderate length.
NASA Technical Reports Server (NTRS)
Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.
1992-01-01
A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
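The outer nonlinear iteration described above can be sketched compactly. In the snippet below the inner preconditioned Krylov solve that TranAir uses is replaced by a direct dense solve for brevity, and the toy two-equation system is hypothetical:

```python
import numpy as np

def newton_solve(F, J, x0, tol=1e-12, max_iter=50):
    """Newton iteration with one linear solve per step. TranAir embeds
    a preconditioned Krylov solver inside an inexact Newton method;
    here the inner solve is direct, which only suits small systems."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(J(x), r)   # Newton correction
    return x

# Toy nonlinear system: intersection of the unit circle with y = x.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = newton_solve(F, J, [1.0, 0.5])
```

In an inexact Newton method the inner linear solve is deliberately truncated early in the outer iteration, tightening its tolerance as the residual shrinks; the direct solve here skips that refinement.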
2012-01-01
Purpose: To validate, in the context of adaptive radiotherapy, three commercial software solutions for atlas-based segmentation. Methods and materials: Fifteen patients, five for each group, with cancer of the Head&Neck, pleura, and prostate were enrolled in the study. In addition to the treatment planning CT (pCT) images, one replanning CT (rCT) image set was acquired for each patient during the RT course. Three experienced physicians outlined on the pCT and rCT all the volumes of interest (VOIs). We used three software solutions (VelocityAI 2.6.2 (V), MIM 5.1.1 (M) by MIMVista and ABAS 2.0 (A) by CMS-Elekta) to generate the automatic contouring on the repeated CT. All the VOIs obtained with automatic contouring (AC) were successively corrected manually. We recorded the time needed for: 1) ex novo ROI definition on rCT; 2) generation of AC by the three software solutions; 3) manual correction of AC. To compare the quality of the volumes obtained automatically by the software and manually corrected with those drawn from scratch on rCT, we used the following indexes: overlap coefficient (DICE), sensitivity, inclusiveness index, difference in volume, and displacement differences on three axes (x, y, z) from the isocenter. Results: The time saved by the three software solutions for all the sites, compared to manual contouring from scratch, is statistically significant and similar for all three software solutions. The time saved for each site is as follows: about an hour for Head&Neck, about 40 minutes for prostate, and about 20 minutes for mesothelioma. The best DICE similarity coefficient index was obtained with the manual correction for: A (contours for prostate), A and M (contours for H&N), and M (contours for mesothelioma). Conclusions: From a clinical point of view, the automated contouring workflow was shown to be significantly shorter than the manual contouring process, even though manual correction of the VOIs is always needed. PMID:22989046
La Macchia, Mariangela; Fellin, Francesco; Amichetti, Maurizio; Cianchetti, Marco; Gianolini, Stefano; Paola, Vitali; Lomax, Antony J; Widesott, Lamberto
2012-09-18
To validate, in the context of adaptive radiotherapy, three commercial software solutions for atlas-based segmentation. Fifteen patients, five for each group, with cancer of the Head&Neck, pleura, and prostate were enrolled in the study. In addition to the treatment planning CT (pCT) images, one replanning CT (rCT) image set was acquired for each patient during the RT course. Three experienced physicians outlined on the pCT and rCT all the volumes of interest (VOIs). We used three software solutions (VelocityAI 2.6.2 (V), MIM 5.1.1 (M) by MIMVista and ABAS 2.0 (A) by CMS-Elekta) to generate the automatic contouring on the repeated CT. All the VOIs obtained with automatic contouring (AC) were successively corrected manually. We recorded the time needed for: 1) ex novo ROI definition on rCT; 2) generation of AC by the three software solutions; 3) manual correction of AC. To compare the quality of the volumes obtained automatically by the software and manually corrected with those drawn from scratch on rCT, we used the following indexes: overlap coefficient (DICE), sensitivity, inclusiveness index, difference in volume, and displacement differences on three axes (x, y, z) from the isocenter. The time saved by the three software solutions for all the sites, compared to manual contouring from scratch, is statistically significant and similar for all three software solutions. The time saved for each site is as follows: about an hour for Head&Neck, about 40 minutes for prostate, and about 20 minutes for mesothelioma. The best DICE similarity coefficient index was obtained with the manual correction for: A (contours for prostate), A and M (contours for H&N), and M (contours for mesothelioma). From a clinical point of view, the automated contouring workflow was shown to be significantly shorter than the manual contouring process, even though manual correction of the VOIs is always needed.
A self-adaptive computational method for transonic turbulent flow past a real projectile
NASA Technical Reports Server (NTRS)
Hsu, C.-C.; Shiau, N.-H.; Chyu, W.-J.
1988-01-01
An attempt to develop an effective solution-adaptive computational method for complex unsteady flow problems is reported. The adaptive grid generation technique is critically examined to understand its application to self-adaptive computational procedures. A complex flow problem involving an impulsive Mach 0.96 transonic turbulent flow past a real secant-ogive-cylinder-boattail projectile, including the base flow region at zero angle of attack, is considered. The coupling of the grid generation code to the unsteady Navier-Stokes code makes it possible to generate a new grid network adaptive to the computed solution at every time step.
NASA Astrophysics Data System (ADS)
Ren, Zhengyong; Kalscheuer, Thomas; Greenhalgh, Stewart; Maurer, Hansruedi
2013-02-01
We have developed a generalized and stable surface integral formula for 3-D uniform inducing field and plane wave electromagnetic induction problems, which works reliably over a wide frequency range. Vector surface electric currents and magnetic currents, scalar surface electric charges and magnetic charges are treated as the variables. This surface integral formula is successfully applied to compute the electromagnetic responses of 3-D topography to low frequency magnetotelluric and high frequency radio-magnetotelluric fields. The standard boundary element method which is used to solve this surface integral formula quickly exceeds the memory capacity of modern computers for problems involving hundreds of thousands of unknowns. To make the surface integral formulation applicable and capable of dealing with large-scale 3-D geo-electromagnetic problems, we have developed a matrix-free adaptive multilevel fast multipole boundary element solver. By means of the fast multipole approach, the time-complexity of solving the final system of linear equations is reduced to O(m log m) and the memory cost is reduced to O(m), where m is the number of unknowns. The analytical solutions for a half-space model were used to verify our numerical solutions over the frequency range 0.001-300 kHz. In addition, our numerical solution shows excellent agreement with a published numerical solution for an edge-based finite-element method on a trapezoidal hill model at a frequency of 2 Hz. Then, a high frequency simulation for a similar trapezoidal hill model was used to study the effects of displacement currents in the radio-magnetotelluric frequency range. Finally, the newly developed algorithm was applied to study the effect of moderate topography and to evaluate the applicability of a 2-D RMT inversion code that assumes a flat air-Earth interface, on RMT field data collected at Smørgrav, southern Norway. This paper constitutes the first part of a hybrid boundary element-finite element
NASA Astrophysics Data System (ADS)
Weller, Hilary; Browne, Philip; Budd, Chris; Cullen, Mike
2016-03-01
An equation of Monge-Ampère type has, for the first time, been solved numerically on the surface of the sphere in order to generate optimally transported (OT) meshes, equidistributed with respect to a monitor function. Optimal transport generates meshes that keep the same connectivity as the original mesh, making them suitable for r-adaptive simulations, in which the equations of motion can be solved in a moving frame of reference in order to avoid mapping the solution between old and new meshes and to avoid load balancing problems on parallel computers. The semi-implicit solution of the Monge-Ampère type equation involves a new linearisation of the Hessian term, and exponential maps are used to map from old to new meshes on the sphere. The determinant of the Hessian is evaluated as the change in volume between old and new mesh cells, rather than using numerical approximations to the gradients. OT meshes are generated to compare with centroidal Voronoi tessellations on the sphere and are found to have advantages and disadvantages; OT equidistribution is more accurate, the number of iterations to convergence is independent of the mesh size, face skewness is reduced and the connectivity does not change. However, anisotropy is higher and the OT meshes are non-orthogonal. It is shown that optimal transport on the sphere leads to meshes that do not tangle. However, tangling can be introduced by numerical errors in calculating the gradient of the mesh potential. Methods for alleviating this problem are explored. Finally, OT meshes are generated using observed precipitation as a monitor function, in order to demonstrate the potential power of the technique.
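Equidistribution with respect to a monitor function, which the OT meshes achieve on the sphere, is easiest to see in one dimension. The sketch below equidistributes nodes on an interval by inverting the cumulative monitor integral; it is illustrative only and is not the Monge-Ampère solver of the paper:

```python
import numpy as np

def equidistribute(x, monitor, n_nodes):
    """Place n_nodes mesh points so each cell carries an equal integral
    of the monitor function (1-D analogue of OT equidistribution)."""
    m = monitor(x)
    w = 0.5 * (m[1:] + m[:-1]) * np.diff(x)        # trapezoidal weights
    cdf = np.concatenate([[0.0], np.cumsum(w)])    # cumulative monitor
    cdf /= cdf[-1]
    # Invert the normalised cumulative integral at equal fractions.
    return np.interp(np.linspace(0.0, 1.0, n_nodes), cdf, x)

x_fine = np.linspace(0.0, 1.0, 101)
uniform_nodes = equidistribute(x_fine, np.ones_like, 5)       # stays uniform
graded_nodes = equidistribute(x_fine, lambda s: 1.0 + 9.0 * s, 5)
```

A constant monitor reproduces the uniform mesh, while a monitor growing toward the right end pulls nodes in that direction, the same behaviour the paper obtains with precipitation as a monitor on the sphere.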
Pérez, Cristina Díaz-Agero; Rodela, Ana Robustillo; Monge Jodrá, Vincente
2009-12-01
In 1997, a national standardized surveillance system (designated INCLIMECC [Indicadores Clínicos de Mejora Continua de la Calidad]) was established in Spain for health care-associated infection (HAI) in surgery patients, based on the National Nosocomial Infection Surveillance (NNIS) system. In 2005, in its procedure-associated module, the National Healthcare Safety Network (NHSN) inherited the NNIS program for surveillance of HAI in surgery patients and reorganized all surgical procedures. INCLIMECC actively monitors all patients referred to the surgical ward of each participating hospital. We present a summary of the data collected from January 1997 to December 2006 adapted to the new NHSN procedures. Surgical site infection (SSI) rates are provided by operative procedure and NNIS risk index category. Further quality indicators reported are surgical complications, length of stay, antimicrobial prophylaxis, mortality, readmission because of infection or other complication, and revision surgery. Because the ICD-9-CM surgery procedure code is included in each patient's record, we were able to reorganize our database avoiding the loss of extensive information, as has occurred with other systems.
Allen, Craig R.; Garmestani, Ahjond S.
2015-01-01
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.
NASA Astrophysics Data System (ADS)
Aghajani, Khadijeh; Tayebi, Habib-Allah
2017-01-01
In this study, the mesoporous material SBA-15 was synthesized and its surface was then modified with the surfactant cetyltrimethylammonium bromide (CTAB). Finally, the obtained adsorbent was used to remove Reactive Red 198 (RR 198) from aqueous solution. Transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FTIR), thermogravimetric analysis (TGA), X-ray diffraction (XRD), and BET analysis were used to examine the structural characteristics of the obtained adsorbent. Parameters affecting the removal of RR 198, such as pH, the amount of adsorbent, and contact time, were investigated at various temperatures and were also optimized. The optimized conditions are as follows: pH = 2, time = 60 min, and adsorbent dose = 1 g/l. Moreover, a predictive model based on ANFIS for predicting the adsorption amount according to the input variables is presented. The presented model can be used for predicting the adsorption rate based on the input variables, including temperature, pH, time, dosage, and concentration. The error between actual and approximated output confirms the high accuracy of the proposed model in the prediction process. This results in cost reduction, because prediction can be done without resorting to costly experimental efforts. SBA-15, CTAB, Reactive Red 198, adsorption study, Adaptive Neuro-Fuzzy Inference systems (ANFIS).
Johnson, Richard Wayne
2003-05-01
The application of collocation methods using spline basis functions to solve differential model equations has been in use for a few decades. However, the application of spline collocation to the solution of the nonlinear, coupled, partial differential equations (in primitive variables) that define the motion of fluids has only recently received much attention. The issues that affect the effectiveness and accuracy of B-spline collocation for solving differential equations include which points to use for collocation, what degree B-spline to use, and what level of continuity to maintain. Success using higher degree B-spline curves having higher continuity at the knots, as opposed to more traditional approaches using orthogonal collocation, has recently been investigated along with collocation at the Greville points for linear (1D) and rectangular (2D) geometries. The development of automatic knot insertion techniques to provide sufficient accuracy for B-spline collocation has been underway. The present article reviews recent progress for the application of B-spline collocation to fluid motion equations as well as new work in developing a novel adaptive knot insertion algorithm for a 1D convection-diffusion model equation.
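B-spline collocation at the Greville points can be sketched for a linear 1-D two-point boundary-value problem, far simpler than the fluid equations the article treats; the degree, knot count, and test problem below are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

# Solve u'' = -pi^2 sin(pi x), u(0) = u(1) = 0; exact solution u = sin(pi x).
k = 3                                    # cubic B-splines
n = 12                                   # number of basis functions
# clamped knot vector: length must be n + k + 1
t = np.concatenate(([0.0] * (k + 1),
                    np.linspace(0.0, 1.0, n - k + 1)[1:-1],
                    [1.0] * (k + 1)))
# Greville abscissae: knot averages, one per basis function
grev = np.array([t[i + 1:i + k + 1].mean() for i in range(n)])

def basis(i):
    c = np.zeros(n); c[i] = 1.0
    return BSpline(t, c, k)

A = np.zeros((n, n)); rhs = np.zeros(n)
A[0, :]  = [basis(j)(0.0) for j in range(n)]    # boundary condition u(0) = 0
A[-1, :] = [basis(j)(1.0) for j in range(n)]    # boundary condition u(1) = 0
for row, x in zip(range(1, n - 1), grev[1:-1]): # collocate at interior Greville points
    A[row, :] = [basis(j).derivative(2)(x) for j in range(n)]
    rhs[row] = -np.pi ** 2 * np.sin(np.pi * x)
coef = np.linalg.solve(A, rhs)
u = BSpline(t, coef, k)                          # the collocation solution
```

The same pattern, with nonlinear terms handled iteratively, underlies the primitive-variable fluid-flow applications the article reviews.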
ERIC Educational Resources Information Center
Ho, Tsung-Han
2010-01-01
Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…
Self-adaptive Solution Strategies
NASA Technical Reports Server (NTRS)
Padovan, J.
1984-01-01
The development of enhancements to current-generation nonlinear finite element algorithms of the incremental Newton-Raphson type was overviewed. Work was introduced on alternative formulations which lead to improved algorithms that avoid the need for global-level updating and inversion. To quantify the enhanced Newton-Raphson scheme and the new alternative algorithm, the results of several benchmarks are presented.
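The incremental Newton-Raphson scheme these enhancements build on can be sketched for a one-degree-of-freedom nonlinear spring; the stiffness values and load schedule below are illustrative assumptions, not the paper's algorithm, and the scalar division stands in for the global stiffness inversion the alternative formulations seek to avoid.

```python
import numpy as np

# Incremental Newton-Raphson for a nonlinear spring: r(u) = k1*u + k2*u**3 = f_ext.
# (k1, k2 and the load levels are illustrative assumptions.)
k1, k2 = 10.0, 4.0
def r(u):  return k1 * u + k2 * u ** 3       # internal force
def kt(u): return k1 + 3.0 * k2 * u ** 2     # tangent stiffness

u = 0.0
for f_ext in np.linspace(2.0, 20.0, 10):     # load increments
    for _ in range(25):                      # Newton iterations within each increment
        du = (f_ext - r(u)) / kt(u)          # the "global inversion" is scalar here
        u += du
        if abs(du) < 1e-12:                  # converged for this load level
            break
```

At each load level the residual is driven to zero before the next increment is applied, which is the structure the benchmarked enhancements modify.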
Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2011-01-01
An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
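Refinement sensing from the undivided second difference of a solution variable can be sketched in one dimension; the threshold and test field below are illustrative assumptions, not OVERFLOW's actual implementation.

```python
import numpy as np

def flag_cells(q, threshold):
    """Flag cells whose undivided second difference of a flow quantity exceeds a threshold."""
    d2 = np.abs(q[2:] - 2.0 * q[1:-1] + q[:-2])   # undivided (no 1/dx**2) second difference
    flags = np.zeros(q.size, dtype=bool)
    flags[1:-1] = d2 > threshold                  # boundary cells left unflagged
    return flags

# A step in the flow field (e.g. a shock) triggers refinement only near the jump.
q = np.where(np.linspace(0.0, 1.0, 101) < 0.5, 1.0, 0.0)
flags = flag_cells(q, 0.1)
```

Flagged cells would be covered by the next finer level of Cartesian grid; because the difference is undivided, the sensor naturally de-emphasizes features already resolved on finer levels.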
This SOP describes the method used for preparing surrogate recovery standard and internal standard solutions for the analysis of polar target analytes. It also describes the method for preparing calibration standard solutions for polar analytes used for gas chromatography/mass sp...
Cao, Youfang; Liang, Jie
2013-07-14
Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.
An adaptive remeshing scheme for vortex dominated flows using three-dimensional unstructured grids
NASA Astrophysics Data System (ADS)
Parikh, Paresh
1995-10-01
An adaptive remeshing procedure for vortex-dominated flows is described, which uses three-dimensional unstructured grids. Surface grid adaptation is achieved using the static pressure as an adaptation parameter, while entropy is used in the field to accurately identify high-vorticity regions. An emphasis has been placed on making the scheme as automatic as possible, so that minimal user interaction is required between remeshing cycles. Adapted flow solutions are obtained on two sharp-edged configurations at low-speed, high-angle-of-attack flow conditions. The results thus obtained are compared with fine-grid CFD solutions and experimental data, and conclusions are drawn as to the efficiency of the adaptive procedure.
Adaptive algebraic reconstruction technique
Lu Wenkai; Yin Fangfang
2004-12-01
Algebraic reconstruction techniques (ART) are iterative procedures for reconstructing objects from their projections. It has been shown that ART can be made computationally efficient by carefully arranging the order in which the collected data are accessed during the reconstruction procedure and by adaptively adjusting the relaxation parameters. In this paper, an adaptive algebraic reconstruction technique (AART), which adopts the same projection access scheme as the multilevel-scheme algebraic reconstruction technique (MLS-ART), is proposed. By introducing adaptive adjustment of the relaxation parameters during the reconstruction procedure, one-iteration AART produces reconstructions of better quality than one-iteration MLS-ART. Furthermore, AART outperforms MLS-ART with improved computational efficiency.
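The underlying ART (Kaczmarz) update with a varying relaxation parameter can be sketched as follows; the geometric decay schedule is an illustrative stand-in for AART's adaptive adjustment, and MLS-ART's multilevel projection-access ordering is not reproduced here.

```python
import numpy as np

def art(A, b, sweeps=1, lam0=1.0, decay=0.999):
    """ART (Kaczmarz) sweeps over the rows of A with a decaying relaxation parameter.
    The decay schedule is an illustrative stand-in for AART's adaptive adjustment."""
    x = np.zeros(A.shape[1])
    lam = lam0
    for _ in range(sweeps):
        for i in range(A.shape[0]):                   # access one projection (row) at a time
            ai = A[i]
            x += lam * (b[i] - ai @ x) / (ai @ ai) * ai
            lam *= decay                              # relax later updates more gently
    return x

# Consistent synthetic system standing in for projection data.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
x_true = rng.normal(size=20)
x = art(A, A @ x_true, sweeps=50)
```

Each update projects the current estimate onto the hyperplane of one measurement; shrinking the relaxation parameter damps the oscillation between inconsistent projections that full-step ART exhibits on noisy data.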
McCarey, Bernard E.; Edelhauser, Henry F.; Lynn, Michael J.
2010-01-01
Specular microscopy can provide a non-invasive morphological analysis of the corneal endothelial cell layer from subjects enrolled in clinical trials. The analysis provides a measure of the endothelial cell physiological reserve from aging, ocular surgical procedures, pharmaceutical exposure, and general health of the corneal endothelium. The purpose of this review is to discuss normal and stressed endothelial cell morphology, the techniques for determining the morphology parameters, and clinical trial applications. PMID:18245960
Adaptive Image Denoising by Mixture Adaptation.
Luo, Enming; Chan, Stanley H; Nguyen, Truong Q
2016-10-01
We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.
Morales, Blanca; Diaz-Orueta, Unai; García-Soler, Álvaro; Pecyna, Karol; Ossmann, Roland; Nussbaum, Gerhard; Veigl, Christoph; Weiss, Christoph; Acedo, Javier; Soria-Frisch, Aureli
2013-11-01
To present the AsTeRICS construction set, and examine different combinations of sensors installed in the platform and how users interact with them. Nearly 50 participants from Austria, Poland and Spain were included in the study. They had a heterogeneous range of diagnoses, but as a common feature all of them experienced motor limitations in their upper limbs. The study included a 1 h session with each participant where the user interacted with a personalized combination of sensors, based on a previous assessment on their motor capabilities performed by healthcare professionals. The sensors worked as substitutes for a standard QWERTY keyboard and a standard mouse. Semi-structured interviews were conducted to obtain participants' opinions. All collected data were analyzed based on the qualitative methodology. The findings illustrated that AsTeRICS is a flexible platform whose sensors can adapt to different degrees of users' motor capabilities, thus facilitating in most cases the interaction of the participants with a common computer. AsTeRICS platform can improve the interaction between people with mobility limitations and computers. It can provide access to new technologies and become a promising tool that can be integrated in physical rehabilitation programs for people with motor disabilities in their upper limbs. The AsTeRICS platform offers an interesting tool to interface and support the computerized rehabilitation program of the patients. Due to AsTeRICS platform high usability features, family and rehabilitation professionals can learn how to use the AsTeRICS platform quickly fostering the key role of their involvement on patients' rehabilitation. AsTeRICS is a flexible, extendable, adaptable and affordable technology adapted for using computer, environmental control, mobile phone, rehabilitation programs and mechatronic systems. AsTeRICS makes possible an easy reconfiguration and integration of new functionalities, such as biofeedback rehabilitation
AEST: Adaptive Eigenvalue Stability Code
NASA Astrophysics Data System (ADS)
Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.
2002-11-01
An adaptive eigenvalue linear stability code is developed. The aim is, on the one hand, to include non-ideal MHD effects in the global MHD stability calculation for both low- and high-n modes and, on the other hand, to resolve the numerical difficulty involving the MHD singularity on the rational surfaces at marginal stability. Our code follows parts of the philosophy of DCON by abandoning relaxation methods based on radial finite-element expansion in favor of an efficient shooting procedure with adaptive gridding. The δW criterion is replaced by the shooting procedure and a subsequent matrix eigenvalue problem. Since the technique of expanding a general solution into a summation of independent solutions is employed, the rank of the matrices involved is just a few hundred. This makes it easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as the plasma rotation effect, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue (ω) problem, as in the GS2 code, will be employed in the future. Comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be employed to study, as an application, transport-barrier physics in tokamak discharges.
Graham, Jennifer
2006-05-01
Rabbits are popular companion animals that present to veterinary clinics for routine and emergency care. Clinics equipped for treating dogs and cats may be easily adapted to accommodate rabbits. This article reviews common procedures performed by the clinician specific to rabbits. Topics include handling and restraint, triage and patient assessment, sample collection, and supportive care techniques. Miscellaneous procedures, including anesthetic delivery, nasolacrimal duct flushing, and ear cleaning, are also discussed.
Francis, M J; Pashley, R M
2009-07-09
In this work we have studied the evaporative cooling effect produced in a continuous flow air bubble column, containing water and salt solutions. We have established that, at equilibrium, a significant reduction in temperature is produced in an insulated, continuous flow, bubble column. For example, with a continuous flow of inlet air at 22 degrees C, a water bubble column cools to about 8 degrees C, at steady state equilibrium. The cooling effect observed in a continuous bubble column of concentrated aqueous salt solution could be used for commercial applications, such as for evaporative cooling systems. We have developed a simple method, based on the steady state thermal energy balance developed in a bubble column, to determine the latent heat of vaporization of the liquid in the column. Only the equilibrium temperature of the bubble column, the temperature of the inlet gas and the hydrostatic pressure across the column need to be measured. This analysis has been used to determine the heat of vaporization for water and some concentrated salt solutions.
Digital adaptation algorithms of adaptive optics corrected images
NASA Astrophysics Data System (ADS)
Polskikh, Sergey D.; Sviridov, Konstantin N.
2000-07-01
A technology is considered for obtaining space-object images with high angular resolution, based on the adaptive tuning of image spatial spectra (digital adaptation) corrected by adaptive optics. The technology is based on an algorithm for the integral equation of the first kind of convolution type with an unknown kernel and an imprecisely given right-hand side. It is shown that constructing the inverse operator for the solution of this equation is connected with the minimization of nonlinear regularizing multiextremal functionals and can be realized using global optimization methods. The structure of the multiextremal functionals is analyzed, and the main global-extremum search methods are examined. It is shown that the optimal construction of a channel for obtaining high-resolution images must be based on the principle of sequentially reducing the dimensionality of the global-extremum search space; moreover, predetector processing of the wavefront by the adaptive optics is the first stage of this reduction. Results of numerical modelling are given, including examples of distorted and restored images of model objects under different signal-to-noise ratios.
NASA Astrophysics Data System (ADS)
Barton, P.
1987-04-01
The basic principles of adaptive antennas are outlined in terms of the Wiener-Hopf expression for maximizing signal-to-noise ratio in an arbitrary noise environment; the analogy with generalized matched filter theory provides a useful aid to understanding. For many applications there is insufficient information to achieve the above solution, and thus non-optimum constrained null-steering algorithms are also described, together with a summary of methods for preventing wanted signals from being nulled by the adaptive system. The three generic approaches to adaptive weight control are discussed: correlation steepest descent, weight perturbation, and direct solutions based on sample matrix inversion. The tradeoffs between hardware complexity and performance in terms of null depth and convergence rate are outlined. The sidelobe canceller technique is described. Performance variation with jammer power and angular distribution is summarized and the key performance limitations are identified. The configuration and performance characteristics of both multiple-beam and phase-scan array antennas are covered, with a brief discussion of performance factors.
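The direct (sample matrix inversion) route to the Wiener-Hopf weights can be sketched for a uniform linear array; the array size, look and jammer directions, and powers below are illustrative assumptions.

```python
import numpy as np

def smi_weights(snapshots, steering):
    """MVDR weights via sample matrix inversion: w = R^-1 s / (s^H R^-1 s)."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance matrix
    Rinv_s = np.linalg.solve(R, steering)
    return Rinv_s / (steering.conj() @ Rinv_s)                # unity gain in the look direction

n, k = 8, 500                                                 # elements, snapshots
rng = np.random.default_rng(1)
s = np.exp(1j * np.pi * np.arange(n) * np.sin(0.0))           # look direction: broadside
j = np.exp(1j * np.pi * np.arange(n) * np.sin(0.6))           # jammer direction
x = (10.0 * j[:, None] * rng.normal(size=(1, k))              # strong jammer
     + (rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k))) / np.sqrt(2))  # unit noise
w = smi_weights(x, s)                                         # adapted weights null the jammer
```

The weights preserve unit response toward the wanted signal while placing a deep null on the jammer; convergence speed versus hardware cost is exactly the tradeoff the article discusses between this direct solution and the iterative steepest-descent and perturbation approaches.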
NASA Astrophysics Data System (ADS)
Abdel Wahab, N. H.; Salah, Ahmed
2015-05-01
In this paper, the interaction of a three-level -configuration atom with a one-mode quantized electromagnetic cavity field has been studied. The detuning parameters, the Kerr nonlinearity, and arbitrary forms of both the field and the intensity-dependent atom-field coupling have been taken into account. The wave function has been obtained by using the Schrödinger equation for the case in which the atom and the field are initially prepared in the excited state and a coherent state, respectively. The analytical approximate solution of this model has been obtained by using the modified homotopy analysis method (MHAM). The homotopy analysis method is summarized briefly. MHAM is obtained from the homotopy analysis method (HAM) combined with the Laplace transform, the inverse Laplace transform, and Padé approximants. MHAM is used to increase the accuracy and accelerate the convergence rate of the truncated series solution obtained by the HAM. The time-dependent parameters of the anti-bunching of photons, the amplitude-squared squeezing, and the coherence properties have been calculated. The influence of the detuning parameters, the Kerr nonlinearity, and the photon number operator on the temporal behavior of these phenomena has been analyzed. We notice that the considered system is sensitive to variations in these parameters.
Urich, A.; Maier, R. R. J.; Yu, Fei; Knight, J. C.; Hand, D. P.; Shephard, J. D.
2012-01-01
We present the delivery of high energy microsecond pulses through a hollow-core negative-curvature fiber at 2.94 µm. The energy densities delivered far exceed those required for biological tissue manipulation and are of the order of 2300 J/cm2. Tissue ablation was demonstrated on hard and soft tissue in dry and aqueous conditions with no detrimental effects to the fiber or catastrophic damage to the end facets. The energy is guided in a well confined single mode allowing for a small and controllable focused spot delivered flexibly to the point of operation. Hence, a mechanically and chemically robust alternative to the existing Er:YAG delivery systems is proposed which paves the way for new routes for minimally invasive surgical laser procedures. PMID:23413120
Guy, Joshua H; Deakin, Glen B; Edwards, Andrew M; Miller, Catherine M; Pyne, David B
2015-03-01
Extreme environmental conditions present athletes with diverse challenges; however, not all sporting events are limited by thermoregulatory parameters. The purpose of this leading article is to identify specific instances where hot environmental conditions either compromise or augment performance and, where heat acclimation appears justified, evaluate the effectiveness of pre-event acclimation processes. To identify events likely to be receptive to pre-competition heat adaptation protocols, we clustered and quantified the magnitude of difference in performance of elite athletes competing in International Association of Athletics Federations (IAAF) World Championships (1999-2011) in hot environments (>25 °C) with those in cooler temperate conditions (<25 °C). Athletes in endurance events performed worse in hot conditions (~3 % reduction in performance, Cohen's d > 0.8; large impairment), while in contrast, performance in short-duration sprint events was augmented in the heat compared with temperate conditions (~1 % improvement, Cohen's d > 0.8; large performance gain). As endurance events were identified as compromised by the heat, we evaluated common short-term heat acclimation (≤7 days, STHA) and medium-term heat acclimation (8-14 days, MTHA) protocols. This process identified beneficial effects of heat acclimation on performance using both STHA (2.4 ± 3.5 %) and MTHA protocols (10.2 ± 14.0 %). These effects were differentially greater for MTHA, which also demonstrated larger reductions in both endpoint exercise heart rate (STHA: -3.5 ± 1.8 % vs MTHA: -7.0 ± 1.9 %) and endpoint core temperature (STHA: -0.7 ± 0.7 % vs -0.8 ± 0.3 %). It appears that worthwhile acclimation is achievable for endurance athletes via both short-and medium-length protocols but more is gained using MTHA. Conversely, it is also conceivable that heat acclimation may be counterproductive for sprinters. As high-performance athletes are often time-poor, shorter duration protocols may
Gonçalves, F S; Barretto, L S S; Arruda, R P; Perri, S H V; Mingoti, G Z
2014-01-01
The presence of heparin and a mixture of penicillamine, hypotaurine, and epinephrine (PHE) solution in the in vitro fertilization (IVF) media seems to be a prerequisite when bovine spermatozoa are capacitated in vitro, in order to stimulate sperm motility and the acrosome reaction. The present study was designed to determine the effect of the addition of heparin and PHE during IVF on the quality and penetrability of spermatozoa into bovine oocytes and on subsequent embryo development. Sperm quality, evaluated by the integrity of plasma and acrosomal membranes and mitochondrial function, was diminished (P<0.05) in the presence of heparin and PHE. Oocyte penetration and normal pronuclear formation rates, as well as the percentage of zygotes presenting more than two pronuclei, were higher (P<0.05) in the presence of heparin and PHE. No differences were observed in cleavage rates between treatment and control (P>0.05). However, the developmental rate to the blastocyst stage was increased in the presence of heparin and PHE (P>0.05). The quality of embryos that reached the blastocyst stage was evaluated by counting the inner cell mass (ICM) and trophectoderm (TE) cell numbers and total number of cells; the percentage of ICM and TE cells was unaffected (P>0.05) in the presence of heparin and PHE. In conclusion, this study demonstrated that while the supplementation of IVF media with heparin and PHE solution impairs spermatozoa quality, it plays an important role in sperm capacitation, improving pronuclear formation and early embryonic development.
Clarification Procedure for Gels
NASA Technical Reports Server (NTRS)
Barber, Patrick G.; Simpson, Norman R.
1987-01-01
Procedure developed to obtain transparent gels with consistencies suitable for crystal growth, by replacing sodium ions in silicate solution with potassium ions. Clarification process uses cation-exchange resin to replace sodium ions in stock solution with potassium ions, placed in 1M solution of soluble potassium salt. Slurry stirred for several hours to allow potassium ions to replace all other cations on resin. Supernatant solution decanted through filter, and beads rinsed with distilled water. Rinsing removes excess salt but leaves cation-exchange beads fully charged with potassium ions.
Brünisholz, H P; Schwarzwald, C C; Bettschart-Wolfensberger, R; Ringer, S K
2015-12-01
The aim of the present study was to investigate the effect of pentastarch on colloid osmotic pressure (COP) and cardiopulmonary function during and up to 24 h after anaesthesia in horses. Twenty-five systemically healthy horses were anaesthetised using isoflurane-medetomidine balanced anaesthesia. Twelve were assigned to treatment with hydroxyethyl starch (HES) (H group) and 13 to no HES (NH group). In the H group, 6 mL/kg of pentastarch 10% HES (200/0.5) was infused over 1 h starting 30 min after induction of anaesthesia. Horses of the NH group received an equal amount of lactated Ringer's solution (LRS). COP and blood biochemical, cardiopulmonary and anaesthesia-related variables were measured at different time points before and after treatment. Pentastarch was effective in correcting the decrease in COP observed with LRS administration. No differences between treatments were detected for blood glucose, lactate, total proteins and electrolytes. Packed cell volume was lower in the H group immediately after finishing HES administration and for an additional 30 min. In all horses, all blood biochemical variables other than lactate returned to normal after 12 h. No clinically relevant differences between treatments were detected for cardiopulmonary variables, although 23.1% of the NH-horses needed rescue-HES to maintain cardiovascular function, while none of the H-horses needed additional colloids. Overall, 6 mL/kg HES (200/0.5) was found to be effective in maintaining COP during anaesthesia in systemically healthy horses. Intermediate and long-term effects were below the limit of detection. The potentially beneficial effects on cardiovascular function need further investigation, especially in critically ill horses. Copyright © 2015 Elsevier Ltd. All rights reserved.
On the dynamics of some grid adaption schemes
NASA Technical Reports Server (NTRS)
Sweby, Peter K.; Yee, Helen C.
1994-01-01
The dynamics of a one-parameter family of mesh equidistribution schemes coupled with finite difference discretisations of linear and nonlinear convection-diffusion model equations is studied numerically. It is shown that, when time marched to steady state, the grid adaption not only influences the stability and convergence rate of the overall scheme, but can also introduce spurious dynamics to the numerical solution procedure.
Adaptive mesh generation for viscous flows using Delaunay triangulation
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1990-01-01
A method for generating an unstructured triangular mesh in two dimensions, suitable for computing high Reynolds number flows over arbitrary configurations is presented. The method is based on a Delaunay triangulation, which is performed in a locally stretched space, in order to obtain very high aspect ratio triangles in the boundary layer and the wake regions. It is shown how the method can be coupled with an unstructured Navier-Stokes solver to produce a solution adaptive mesh generation procedure for viscous flows.
Adaptive mesh generation for viscous flows using Delaunay triangulation
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1988-01-01
A method for generating an unstructured triangular mesh in two dimensions, suitable for computing high Reynolds number flows over arbitrary configurations is presented. The method is based on a Delaunay triangulation, which is performed in a locally stretched space, in order to obtain very high aspect ratio triangles in the boundary layer and the wake regions. It is shown how the method can be coupled with an unstructured Navier-Stokes solver to produce a solution adaptive mesh generation procedure for viscous flows.
Daniel, Lorias Espinoza; Tapia, Fernando Montes; Arturo, Minor Martínez; Ricardo, Ordorica Flores
2014-12-01
The ability to handle and adapt to the visual perspectives generated by angled laparoscopes is crucial for skilled laparoscopic surgery. However, the control of the visual work space depends on the ability of the operator of the camera, who is often not the most experienced member of the surgical team. Here, we present a simple, low-cost option for surgical training that challenges the learner with static and dynamic visual perspectives at 30 degrees using a system that emulates the angled laparoscope. A system was developed using a low-cost camera and readily available materials to emulate the angled laparoscope. Nine participants undertook 3 tasks to test spatial adaptation to the static and dynamic visual perspectives at 30 degrees. Completing each task to a predefined satisfactory level ensured precision of execution of the tasks. Associated metrics (time and error rate) were recorded, and the performance of participants was determined. A total of 450 repetitions were performed by 9 residents at various stages of training. All the tasks were performed with a visual perspective of 30 degrees using the system. Junior residents were more proficient than senior residents. This system is a viable and low-cost alternative for developing the basic psychomotor skills necessary for the handling and adaptation to visual perspectives of 30 degrees, without depending on a laparoscopic tower, in junior residents. More advanced skills may then be acquired by other means, such as in the operating theater or through clinical experience.
Near-Body Grid Adaption for Overset Grids
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2016-01-01
A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
Ryan, P C; Hillier, S; Wall, A J
2008-12-15
Sequential extraction procedures (SEPs) are commonly used to determine speciation of trace metals in soils and sediments. However, the non-selectivity of reagents for targeted phases has remained a lingering concern. Furthermore, potentially reactive phases such as phyllosilicate clay minerals often contain trace metals in structural sites, and their reactivity has not been quantified. Accordingly, the objective of this study is to analyze the behavior of trace metal-bearing clay minerals exposed to the revised BCR 3-step plus aqua regia SEP. Mineral quantification based on stoichiometric analysis and quantitative powder X-ray diffraction (XRD) documents progressive dissolution of chlorite (CCa-2 ripidolite) and two varieties of smectite (SapCa-2 saponite and SWa-1 nontronite) during steps 1-3 of the BCR procedure. In total, 8 (+/-1) % of ripidolite, 19 (+/-1) % of saponite, and 19 (+/-3) % of nontronite (% mineral mass) dissolved during extractions assumed by many researchers to release trace metals from exchange sites, carbonates, hydroxides, sulfides and organic matter. For all three reference clays, release of Ni into solution is correlated with clay dissolution. Hydrolysis of relatively weak Mg-O bonds (362 kJ/mol) during all stages, reduction of Fe(III) during hydroxylamine hydrochloride extraction and oxidation of Fe(II) during hydrogen peroxide extraction are the main reasons for clay mineral dissolution. These findings underscore the need for precise mineral quantification when using SEPs to understand the origin/partitioning of trace metals with solid phases.
Adaptive Batch Mode Active Learning.
Chakraborty, Shayok; Balasubramanian, Vineeth; Panchanathan, Sethuraman
2015-08-01
Active learning techniques have gained popularity to reduce human effort in labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar and representative instances to be selected for manual annotation. More recently, there have been attempts toward a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. Real-world applications require adaptive approaches for batch selection in active learning, depending on the complexity of the data stream in question. However, the existing work in this field has primarily focused on static or heuristic batch size selection. In this paper, we propose two novel optimization-based frameworks for adaptive batch mode active learning (BMAL), where the batch size as well as the selection criteria are combined in a single formulation. We exploit gradient-descent-based optimization strategies as well as properties of submodular functions to derive the adaptive BMAL algorithms. The solution procedures have the same computational complexity as existing state-of-the-art static BMAL techniques. Our empirical results on the widely used VidTIMIT and the mobile biometric (MOBIO) data sets portray the efficacy of the proposed frameworks and also certify the potential of these approaches in being used for real-world biometric recognition applications.
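The trade-off the abstract describes, picking a batch of uncertain yet non-redundant points, can be illustrated with a toy greedy rule. This is a minimal sketch under assumed scoring (uncertainty minus worst-case similarity to points already chosen), not the authors' optimization-based BMAL formulation; the function name and trade-off parameter are invented for illustration.

```python
import numpy as np

def greedy_batch(uncertainty, similarity, budget, trade_off=1.0):
    """Greedily assemble a batch of unlabeled points, scoring each
    candidate by its uncertainty minus its worst-case similarity to
    points already selected (to discourage redundant picks)."""
    chosen = []
    for _ in range(budget):
        best_i, best_score = -1, -np.inf
        for i in range(len(uncertainty)):
            if i in chosen:
                continue
            redundancy = max((similarity[i][j] for j in chosen), default=0.0)
            score = uncertainty[i] - trade_off * redundancy
            if score > best_score:
                best_i, best_score = i, score
        chosen.append(best_i)
    return chosen

# two near-duplicate uncertain points (0, 1) and one distinct point (2):
# the batch keeps one duplicate plus the distinct point, not both duplicates
unc = np.array([0.90, 0.85, 0.40])
sim = np.array([[1.0, 0.99, 0.0],
                [0.99, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
print(greedy_batch(unc, sim, budget=2))  # [0, 2]
```

With `trade_off=0` this degenerates to plain uncertainty sampling, which is exactly the redundancy problem batch-mode methods are meant to avoid.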
hp-Adaptive time integration based on the BDF for viscous flows
NASA Astrophysics Data System (ADS)
Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.
2015-06-01
This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low order time integrators while accurate solutions require high order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the used method of integration operates inside of its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
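The h-adaptivity step described above, retargeting the stepsize from a local error estimate against a user tolerance, follows a standard textbook pattern. The sketch below shows that elementary controller; the safety factor and clip bounds are assumed values, and this is not the specific controller developed in the paper.

```python
def new_stepsize(h, err, tol, order, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Classical stepsize controller: for a method of order p the local
    error scales like h**(p+1), so the factor (tol/err)**(1/(p+1))
    retargets the error to the tolerance; the safety factor and the
    clip bounds keep the controller from overreacting to one estimate."""
    if err <= 0.0:
        return h * fac_max               # error estimate vanished: grow freely
    factor = safety * (tol / err) ** (1.0 / (order + 1))
    return h * min(fac_max, max(fac_min, factor))

# a step that badly missed the tolerance is cut; an overly accurate one grows
print(new_stepsize(0.1, err=1e-2, tol=1e-4, order=2))  # shrinks
print(new_stepsize(0.1, err=1e-8, tol=1e-4, order=2))  # grows (clipped)
```

A rejected step (err well above tol) would be retried with the reduced h, which is where the paper's rejection and quarantine mechanisms enter.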
Numerical Boundary Condition Procedures
NASA Technical Reports Server (NTRS)
1981-01-01
Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.
Adaptive neuro-control for large flexible structures
NASA Astrophysics Data System (ADS)
Krishna Kumar, K.; Montgomery, L.
1992-12-01
Special problems related to control system design for large flexible structures include the inherent low damping, wide range of modal frequencies, unmodeled dynamics, and possibility of system failures. Neuro-control, which combines concepts from artificial neural networks and adaptive control, is investigated as a solution to some of these problems. Specifically, the roles of neuro-controllers in learning unmodeled dynamics and adaptive control for system failures are investigated. The neuro-controller synthesis procedure and its capabilities in adaptively controlling the structure are demonstrated using a mathematical model of an existing structure, the advanced control evaluation for systems test article located at NASA/Marshall Space Flight Center. Also, the real-time adaptive capability of neuro-controllers is demonstrated via an experiment utilizing a flexible clamped-free beam equipped with an actuator that uses a bang-bang controller.
Agyepong, Irene Akua; Kodua, Augustina; Adjei, Sam; Adam, Taghreed
2012-10-01
Implementation of policies (decisions) in the health sector is sometimes defeated by the system's response to the policy itself. This can lead to counter-intuitive, unanticipated, or more modest effects than expected by those who designed the policy. The health sector fits the characteristics of complex adaptive systems (CAS) and complexity is at the heart of this phenomenon. Anticipating both positive and negative effects of policy decisions, understanding the interests, power and interaction between multiple actors; and planning for the delayed and distal impact of policy decisions are essential for effective decision making in CAS. Failure to appreciate these elements often leads to a series of reductionist approach interventions or 'fixes'. This in turn can initiate a series of negative feedback loops that further complicates the situation over time. In this paper we use a case study of the Additional Duty Hours Allowance (ADHA) policy in Ghana to illustrate these points. Using causal loop diagrams, we unpack the intended and unintended effects of the policy and how these effects evolved over time. The overall goal is to advance our understanding of decision making in complex adaptive systems; and through this process identify some essential elements in formulating, updating and implementing health policy that can help to improve attainment of desired outcomes and minimize negative unintended effects.
Adaptive unstructured meshing for thermal stress analysis of built-up structures
NASA Technical Reports Server (NTRS)
Dechaumphai, Pramote
1992-01-01
An adaptive unstructured meshing technique for mechanical and thermal stress analysis of built-up structures has been developed. A triangular membrane finite element and a new plate bending element are evaluated on a panel with a circular cutout and a frame stiffened panel. The adaptive unstructured meshing technique, without a priori knowledge of the solution to the problem, generates clustered elements only where needed. An improved solution accuracy is obtained at a reduced problem size and analysis computational time as compared to the results produced by the standard finite element procedure.
Time domain and frequency domain design techniques for model reference adaptive control systems
NASA Technical Reports Server (NTRS)
Boland, J. S., III
1971-01-01
Some problems associated with the design of model-reference adaptive control systems are considered and solutions to these problems are advanced. The stability of the adapted system is a primary consideration in the development of both the time-domain and the frequency-domain design techniques. Consequently, the use of Liapunov's direct method forms an integral part of the derivation of the design procedures. The application of sensitivity coefficients to the design of model-reference adaptive control systems is considered. An application of the design techniques is also presented.
Sowers, K.R.; Gunsalus, R.P.
1995-12-01
The methanogenic Archaea, like the Bacteria and Eucarya, possess several osmoregulatory strategies that enable them to adapt to osmotic changes in their environment. The physiological responses of Methanosarcina species to different osmotic pressures were studied in extracellular osmolalities ranging from 0.3 to 2.0 osmol/kg. Regardless of the isolation source, the maximum rate of growth for species from freshwater, sewage, and marine sources occurred in extracellular osmolalities between 0.62 and 1.0 osmol/kg and decreased to minimal detectable growth as the solute concentration approached 2.0 osmol/kg. The distribution and concentration of compatible solutes in eight strains representing five Methanosarcina spp. were similar to those found in M. thermophila grown in extracellular osmolalities of 0.3 and 2.0 osmol/kg. Results of this study demonstrate that the mechanism of halotolerance in Methanosarcina spp. involves the regulation of K⁺, α-glutamate, Nε-acetyl-β-lysine, and glycine betaine accumulation in response to the osmotic effects of extracellular solute.
Structured adaptive grid generation using algebraic methods
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.
1993-01-01
The accuracy of the numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively fine, computationally expensive grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach where a function which contains a measure of grid smoothness, orthogonality and volume variation is minimized by using a variational principle. This approach provided a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm where the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring the points in the large error regions to attract other points and points in the low error region to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial step, is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and the last step is to reevaluate the flow property by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration
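The first two steps of the process described above, building a weighting (distribution) mesh from the equidistribution law and redistributing points against it, can be sketched in one dimension. The monitor function below is invented for illustration; it stands in for a measure of local solution error.

```python
import numpy as np

def equidistribute(x, w, n):
    """Return n grid points spanning [x[0], x[-1]] such that each new
    cell carries an equal share of the cumulative weight of the monitor
    w (the 1-D equidistribution law), found by inverting the cumulative
    weight with linear interpolation."""
    # cumulative weight via the trapezoidal rule on the old grid
    W = np.concatenate(([0.0],
                        np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    targets = np.linspace(0.0, W[-1], n)   # equal weight per cell
    return np.interp(targets, W, x)        # invert W(x) at the targets

# a monitor peaked at x = 0.5 (e.g. a large solution gradient) pulls
# grid points toward the middle of the domain
x_old = np.linspace(0.0, 1.0, 41)
w = 1.0 + 50.0 * np.exp(-200.0 * (x_old - 0.5) ** 2)
x_new = equidistribute(x_old, w, 41)
```

The third step of the abstract would now re-interpolate the flow solution onto `x_new` by a search/interpolate scheme (in 1-D, `np.interp` again suffices).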
Coughlan, B M; Moroney, G A; van Pelt, F N A M; O'Brien, N M; Davenport, J; O'Halloran, J
2009-11-01
This study investigated the internal osmotic regulatory capabilities of the Manila clam (Ruditapes philippinarum) following in vivo exposure to a range of salinities. A second objective was to measure the health status of the Manila clam following exposure to different salinities using the neutral red retention (NRR) assay, and to compare results using a range of physiological saline solutions (PSS). On exposure to seawater of differing salinities, the Manila clam followed the pattern of an osmoconformer, although they seemed to partially regulate their circulatory haemolymph fluids to be hyperosmotic to the surrounding aqueous environment. Significant differences were found when different PSS were used, emphasizing the importance of using a suitable PSS to reduce additional osmotic stress. Using PSS in the NRR assay that do not exert additional damage to lysosomal membrane integrity will help to more accurately quantify the effects of exposure to pollutants on the organism(s) under investigation.
Matsuda, Ikki; Sha, John C M; Ortmann, Sylvia; Schwarm, Angela; Grandl, Florian; Caton, Judith; Jens, Warner; Kreuzer, Michael; Marlena, Diana; Hagen, Katharina B; Clauss, Marcus
2015-10-01
Behavioral observations and small fecal particles compared to other primates indicate that free-ranging proboscis monkeys (Nasalis larvatus) have a strategy of facultative merycism (rumination). In functional ruminants (ruminants and camelids), rumination is facilitated by a particle sorting mechanism in the forestomach that selectively retains larger particles and subjects them to repeated mastication. Using a set of a solute and three particle markers of different sizes (<2, 5 and 8 mm), we displayed digesta passage kinetics and measured mean retention times (MRTs) in four captive proboscis monkeys (6-18 kg) and compared the marker excretion patterns to those in domestic cattle. In addition, we evaluated various methods of calculating and displaying passage characteristics. The mean ± SD dry matter intake was 98 ± 22 g kg^-0.75 d^-1, 68 ± 7% of which was browse. Accounting for sampling intervals in MRT calculation yielded results that were not affected by the sampling frequency. Displaying marker excretion patterns using fecal marker concentrations (rather than amounts) facilitated comparisons with reactor theory outputs and indicated that both proboscis and cattle digestive tracts represent a series of very few tank reactors. However, the separation of the solute and particle marker and the different-sized particle markers, evident in cattle, did not occur in proboscis monkeys, in which all markers moved together, at MRTs of approximately 40 h. The results indicate that the digestive physiology of proboscis monkeys does not show typical characteristics of ruminants, which may explain why merycism is only a facultative strategy in this species.
Refined numerical solution of the transonic flow past a wedge
NASA Technical Reports Server (NTRS)
Liang, S.-M.; Fung, K.-Y.
1985-01-01
A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.
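The core idea above, estimating the truncation error of a fixed-grid solution from a locally refined computation, can be illustrated with a central second difference and one halving of the spacing. This is a generic Richardson-style sketch, not the paper's procedure; the 4/3 weight is the standard factor for a second-order scheme.

```python
import numpy as np

def truncation_error_estimate(f, x, h):
    """Leading truncation error of the central second-difference at
    spacing h, estimated by comparing against the same stencil at h/2:
    for a second-order scheme err(h) ~ C*h**2, hence
    err(h) ~ (4/3) * (D(h) - D(h/2))."""
    d2_h  = (f(x + h)     - 2.0 * f(x) + f(x - h))     / h**2
    d2_h2 = (f(x + h / 2) - 2.0 * f(x) + f(x - h / 2)) / (h / 2) ** 2
    return 4.0 * (d2_h - d2_h2) / 3.0

# check against the exact error for f = sin, where f'' = -sin
x, h = 1.0, 0.1
estimate = truncation_error_estimate(np.sin, x, h)
actual = (np.sin(x + h) - 2.0 * np.sin(x) + np.sin(x - h)) / h**2 \
         - (-np.sin(x))
```

The estimate agrees with the actual error to higher order in h, which is what lets such corrections improve a solution on the fixed grid without globally refining it.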
NASA Technical Reports Server (NTRS)
Narendra, K. S.; Annaswamy, A. M.
1985-01-01
Several concepts and results in robust adaptive control are discussed; the presentation is organized in three parts. The first part surveys existing algorithms. Different formulations of the problem and theoretical solutions that have been suggested are reviewed here. The second part contains new results related to the role of persistent excitation in robust adaptive systems and the use of hybrid control to improve robustness. In the third part promising new areas for future research are suggested which combine different approaches currently known.
Ahmad, H; Saleemuddin, M
1983-07-01
A modification of the bromophenol blue dye binding procedure of protein estimation is described. Substitution of glycine/phosphoric acid, pH 2.6, for dilute acetic acid in the colour reagent extended the applicability of the procedure to protein solutions containing buffers of various pH values. This was, however, accompanied by approximately 25% loss in the sensitivity of the procedure. The modified reagent exhibited very marked tolerance to detergents and could be successfully adapted for the measurement of proteolytic activity in acidic, neutral or alkaline pH ranges.
Three-dimensional adaptive grid-embedding Euler technique
NASA Astrophysics Data System (ADS)
Davis, Roger L.; Dannenhoffer, John F., III
1994-06-01
A new three-dimensional adaptive-grid Euler procedure is presented that automatically detects high-gradient regions in the flow and locally subdivides the computational grid in these regions to provide a uniform, high level of accuracy over the entire domain. A tunable, semistructured data system is utilized that provides global topological unstructured-grid flexibility along with the efficiency of a local, structured-grid system. In addition, this data structure allows the flow solution algorithm to be executed on a wide variety of parallel/vector computing platforms. An explicit, time-marching, control volume procedure is used to integrate the Euler equations to a steady state. In addition, a multiple-grid procedure is used throughout the embedded-grid regions as well as on subgrids coarser than the initial grid to accelerate convergence and properly propagate disturbance waves through refined-grid regions. Upon convergence, high flow gradient regions, where it is assumed that large truncation errors in the solution exist, are detected using a combination of directional refinement vectors that have large components in areas of these gradients. The local computational grid is directionally subdivided in these regions and the flow solution is reinitiated. Overall convergence occurs when a prespecified level of accuracy is reached. Solutions are presented that demonstrate the efficiency and accuracy of the present procedure.
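The detection step, flagging cells where the solution gradient is a large fraction of the maximum, can be sketched in one dimension. The threshold here is an assumed tuning knob; this toy sensor is not the directional refinement-vector machinery of the paper.

```python
import numpy as np

def flag_high_gradient_cells(u, threshold=0.5):
    """Mark cells whose undivided difference |u[i+1] - u[i]| meets a
    fraction of the largest such difference; flagged cells would be
    subdivided and the flow solution reinitiated on the refined grid."""
    g = np.abs(np.diff(u))
    return g >= threshold * g.max()

# a smeared step near the middle of the domain is the only flagged region
x = np.linspace(0.0, 1.0, 101)
u = np.tanh(40.0 * (x - 0.5))
flags = flag_high_gradient_cells(u)
print(int(flags.sum()), "of", flags.size, "cells flagged")
```

In the adaptive cycle, the solve/detect/subdivide loop repeats until no new cells are flagged at the prescribed accuracy level.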
Adapted Canoeing for the Handicapped.
ERIC Educational Resources Information Center
Frith, Greg H.; Warren, L. D.
1984-01-01
Safety as well as instructional recommendations are offered for adapting canoeing as a recreational activity for handicapped students. Major steps of the instructional program feature orientation to the water and canoe, entry and exit techniques, and mobility procedures. (CL)
Fukuda, Ryoichi; Ehara, Masahiro
2014-10-21
Solvent effects on electronic excitation spectra are considerable in many situations; therefore, we propose an efficient and reliable computational scheme that is based on the symmetry-adapted cluster-configuration interaction (SAC-CI) method and the polarizable continuum model (PCM) for describing electronic excitations in solution. The new scheme combines the recently proposed first-order PCM SAC-CI method with the PTE (perturbation theory at the energy level) PCM SAC scheme. This is essentially equivalent to the usual SAC and SAC-CI computations with using the PCM Hartree-Fock orbital and integrals, except for the additional correction terms that represent solute-solvent interactions. The test calculations demonstrate that the present method is a very good approximation of the more costly iterative PCM SAC-CI method for excitation energies of closed-shell molecules in their equilibrium geometry. This method provides very accurate values of electric dipole moments but is insufficient for describing the charge-transfer (CT) indices in polar solvent. The present method accurately reproduces the absorption spectra and their solvatochromism of push-pull type 2,2′-bithiophene molecules. Significant solvent and substituent effects on these molecules are intuitively visualized using the CT indices. The present method is the simplest and theoretically consistent extension of SAC-CI method for including PCM environment, and therefore, it is useful for theoretical and computational spectroscopy.
NASA Astrophysics Data System (ADS)
Fukuda, Ryoichi; Ehara, Masahiro
2014-10-01
Solvent effects on electronic excitation spectra are considerable in many situations; therefore, we propose an efficient and reliable computational scheme that is based on the symmetry-adapted cluster-configuration interaction (SAC-CI) method and the polarizable continuum model (PCM) for describing electronic excitations in solution. The new scheme combines the recently proposed first-order PCM SAC-CI method with the PTE (perturbation theory at the energy level) PCM SAC scheme. This is essentially equivalent to the usual SAC and SAC-CI computations with using the PCM Hartree-Fock orbital and integrals, except for the additional correction terms that represent solute-solvent interactions. The test calculations demonstrate that the present method is a very good approximation of the more costly iterative PCM SAC-CI method for excitation energies of closed-shell molecules in their equilibrium geometry. This method provides very accurate values of electric dipole moments but is insufficient for describing the charge-transfer (CT) indices in polar solvent. The present method accurately reproduces the absorption spectra and their solvatochromism of push-pull type 2,2'-bithiophene molecules. Significant solvent and substituent effects on these molecules are intuitively visualized using the CT indices. The present method is the simplest and theoretically consistent extension of SAC-CI method for including PCM environment, and therefore, it is useful for theoretical and computational spectroscopy.
Ramponi, Denise R
2016-01-01
Dental problems are a common complaint in emergency departments in the United States. There are a wide variety of dental issues addressed in emergency department visits such as dental caries, loose teeth, dental trauma, gingival infections, and dry socket syndrome. Review of the most common dental blocks and dental procedures will allow the practitioner the opportunity to make the patient more comfortable and reduce the amount of analgesia the patient will need upon discharge. Familiarity with the dental equipment, tooth, and mouth anatomy will help prepare the practitioner to perform these dental procedures.
Prism Adaptation in Schizophrenia
ERIC Educational Resources Information Center
Bigelow, Nirav O.; Turner, Beth M.; Andreasen, Nancy C.; Paulsen, Jane S.; O'Leary, Daniel S.; Ho, Beng-Choon
2006-01-01
The prism adaptation test examines procedural learning (PL) in which performance facilitation occurs with practice on tasks without the need for conscious awareness. Dynamic interactions between frontostriatal cortices, basal ganglia, and the cerebellum have been shown to play key roles in PL. Disruptions within these neural networks have also…
ERIC Educational Resources Information Center
Flournoy, Nancy
Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…
ERIC Educational Resources Information Center
Eisenhower, R. Warren
Because grievances are unavoidable, it is essential for organizations, such as the schools, to utilize an efficient, effective procedure to handle friction between employers and employees. Through successive steps, representatives of labor and management attempt to resolve the grievance, first with meetings of lower level representatives (such as…
[Complications of plateletpheresis procedures].
García Gala, J M; Rodríguez-Vicente, P; Martínez Revuelta, E; Alonso García, A; Sanzo Lombardero, C; Alvarez Ferrando, A
1998-10-01
Thrombopheresis procedures have recently expanded with the development of different programmes. Taking into account that this reasonably safe procedure is not devoid of complications, it would be desirable to select as donors those individuals with a lower risk of suffering adverse side effects. The thrombopheresis procedures performed in our hospital between 1986 and 1997 were analysed in order to establish useful guidelines for such selection. All the thrombopheresis procedures performed in the Asturias Central Hospital blood bank in the 1986-1997 period were analysed. The first procedure per donor, along with all data referring to adverse effects appearing during thrombopheresis, was collected. Sex, age, body weight, blood cell counts (before and after thrombopheresis) and serum calcium levels (before and after thrombopheresis) were taken as variables with predictive value for adverse effects. With regard to the procedure, the model of cell separator, the duration of the procedure, the amount and type of anticoagulant solution and the prophylactic use of calcium ions were assessed. A total of 1,024 thrombophereses were analysed. Some type of adverse effect was seen in 259 instances (25.3%). Of these, 70.3% were mild, 29.3% moderate and 0.4% severe. The commonest adverse effect was perioral paraesthesia. Of the different variables studied, female sex and low body weight had predictive value with respect to the occurrence of adverse effects. Prophylactic administration of calcium did not prevent the appearance of complications. Thrombopheresis procedures may present adverse side effects in a high percentage of cases which, although mostly mild, require specialised personnel for identification and management. Males weighing over 70 kg are less prone to suffer such effects. Oral administration of calcium before the apheresis does not prevent the adverse reactions.
SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE
NASA Technical Reports Server (NTRS)
Davies, C. B.
1994-01-01
SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
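The reduction to one-dimensional problems solved along coordinate lines can be illustrated with a spring-analogy sketch. The code below is a hypothetical minimal example, not SAGE itself: interval weights (standing in for tension forces derived from flow gradients) act as spring stiffnesses, node equilibrium yields a tridiagonal system, and the Thomas algorithm solves it. The equilibrium spacing comes out inversely proportional to the local weight, so points cluster where the weight is large.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d with the Thomas algorithm.
    a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side, all of length n."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adapt_line(weights, x_left=0.0, x_right=1.0):
    """Place interior grid points so that springs of stiffness weights[i]
    (one per interval) are in equilibrium between fixed ends; cell width
    ends up inversely proportional to the local weight."""
    m = len(weights) - 1                     # number of interior points
    a = [-w for w in weights[:-1]]           # coupling to left neighbor
    b = [weights[i] + weights[i + 1] for i in range(m)]
    c = [-w for w in weights[1:]]            # coupling to right neighbor
    d = [0.0] * m
    d[0] += weights[0] * x_left              # fixed boundary contributions
    d[-1] += weights[-1] * x_right
    return [x_left] + thomas_solve(a, b, c, d) + [x_right]
```

With weights [1.0, 2.0] over [0, 1], the single interior point lands at 2/3: the stiffer (higher-gradient) interval is compressed to half the width of the other.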
Adaptive process control using fuzzy logic and genetic algorithms
NASA Technical Reports Server (NTRS)
Karr, C. L.
1993-01-01
Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
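The GA component can be illustrated generically. The sketch below is a minimal real-coded GA, not the Bureau of Mines implementation, and all parameter choices are illustrative: it minimizes a fitness function over box bounds using truncation selection, blend crossover, and Gaussian mutation. In an FLC setting, the decision vector would encode membership-function parameters and the fitness would score closed-loop control error.

```python
import random

def genetic_search(fitness, bounds, pop_size=30, generations=60, seed=0):
    """Minimal real-coded GA: keep the best fifth of the population,
    breed children by averaging elite pairs (blend crossover), and
    perturb genes with Gaussian mutation. Minimizes `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)        # best first
        elite = scored[: pop_size // 5]          # truncation selection
        children = list(elite)                   # elitism: keep the best
        while len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            child = [(a + b) / 2.0 for a, b in zip(p1, p2)]
            for g, (lo, hi) in enumerate(bounds):
                if rng.random() < 0.2:           # mutate 20% of genes
                    child[g] += rng.gauss(0.0, 0.1 * (hi - lo))
                    child[g] = min(max(child[g], lo), hi)
            children.append(child)
        pop = children
    return min(pop, key=fitness)
```

For a simple quadratic fitness the search homes in on the minimizer within a few dozen generations; swapping in a control-error fitness turns the same routine into an FLC tuner.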
Topology and grid adaption for high-speed flow computations
NASA Astrophysics Data System (ADS)
Abolhassani, Jamshid S.; Tiwari, Surendra N.
1989-03-01
This study investigates the effects of grid topology and grid adaptation on numerical solutions of the Navier-Stokes equations. In the first part of this study, a general procedure is presented for computation of high-speed flow over complex three-dimensional configurations. The flow field is simulated on the surface of a Butler wing in a uniform stream. Results are presented for Mach number 3.5 and a Reynolds number of 2,000,000. The O-type and H-type grids have been used for this study, and the results are compared with one another and with other theoretical and experimental results. The results demonstrate that while the H-type grid is suitable for the leading and trailing edges, a more accurate solution can be obtained for the middle part of the wing with an O-type grid. In the second part of this study, methods of grid adaption are reviewed and a method is developed with the capability of adapting to several variables. This method is based on a variational approach and is an algebraic method. Also, the method has been formulated in such a way that there is no need for any matrix inversion. This method is used in conjunction with the calculation of hypersonic flow over a blunt-nose body. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.
Webster, Michael A.
2015-01-01
Sensory systems continuously mold themselves to the widely varying contexts in which they must operate. Studies of these adaptations have played a long and central role in vision science. In part this is because the specific adaptations remain a powerful tool for dissecting vision, by exposing the mechanisms that are adapting. That is, “if it adapts, it's there.” Many insights about vision have come from using adaptation in this way, as a method. A second important trend has been the realization that the processes of adaptation are themselves essential to how vision works, and thus are likely to operate at all levels. That is, “if it's there, it adapts.” This has focused interest on the mechanisms of adaptation as the target rather than the probe. Together both approaches have led to an emerging insight of adaptation as a fundamental and ubiquitous coding strategy impacting all aspects of how we see. PMID:26858985
Error analysis of finite element solutions for postbuckled cylinders
NASA Technical Reports Server (NTRS)
Sistla, Rajaram; Thurston, Gaylen A.
1989-01-01
A general method of error analysis and correction is investigated for the discrete finite-element results for cylindrical shell structures. The method for error analysis is an adaptation of the method of successive approximation. When applied to the equilibrium equations of shell theory, successive approximations derive an approximate continuous solution from the discrete finite-element results. The advantage of this continuous solution is that it contains continuous partial derivatives of an order higher than the basis functions of the finite-element solution. Preliminary numerical results are presented in this paper for the error analysis of finite-element results for a postbuckled stiffened cylindrical panel modeled by a general purpose shell code. Numerical results from the method have previously been reported for postbuckled stiffened plates. A procedure for correcting the continuous approximate solution by Newton's method is outlined.
NASA Astrophysics Data System (ADS)
Webster, Michael A.; Webster, Shernaaz M.; MacDonald, Jennifer; Bahradwadj, Shrikant R.
2001-06-01
Blur is an intrinsic property of the retinal image that can vary substantially in natural viewing. We examined how processes of contrast adaptation might adjust the visual system to regulate the perception of blur. Observers viewed a blurred or sharpened image for 2-5 minutes, and then judged the apparent focus of a series of 0.5-sec test images interleaved with 6 sec of readaptation. A 2AFC staircase procedure was used to vary the amplitude spectrum of successive test images to find the image that appeared in focus. Adapting to a blurred image causes a physically focused image to appear too sharp. Opposite after-effects occur for sharpened adapting images. Pronounced biases were observed over a wide range of magnitudes of adapting blur, and were similar for different types of blur. After-effects were also similar for different classes of images but were generally weaker when the adapting and test stimuli were different images, showing that the adaptation is not adjusting simply to blur per se. These adaptive adjustments may strongly influence the perception of blur in normal vision and how it changes with refractive errors.
Interdisciplinarity in Adapted Physical Activity
ERIC Educational Resources Information Center
Bouffard, Marcel; Spencer-Cavaliere, Nancy
2016-01-01
It is commonly accepted that inquiry in adapted physical activity involves the use of different disciplines to address questions. It is often advanced today that complex problems of the kind frequently encountered in adapted physical activity require a combination of disciplines for their solution. At the present time, individual research…
NASA Technical Reports Server (NTRS)
Banks, D. W.; Hafez, M. M.
1996-01-01
Grid adaptation for structured meshes is the art of using information from an existing, but poorly resolved, solution to automatically redistribute the grid points so as to improve the resolution in regions of high error, and thus the quality of the solution. This involves: (1) generate a grid via some standard algorithm, (2) calculate a solution on this grid, (3) adapt the grid to this solution, (4) recalculate the solution on this adapted grid, and (5) repeat steps 3 and 4 to satisfaction. Steps 3 and 4 can be repeated until some 'optimal' grid is converged to, but typically this is not worth the effort and just two or three repeat calculations are necessary. They may also be repeated every 5-10 time steps for unsteady calculations.
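The five-step cycle can be demonstrated in miniature. In this hypothetical one-dimensional sketch (not the authors' code), an analytic profile with a steep interior layer stands in for the flow solver, and step 3 is an equidistribution rule with a common gradient-based monitor function, w = 1 + |du/dx|:

```python
import math

def equidistribute(x, u):
    """One adaption pass: move the grid points of x so that every cell
    carries an equal share of the monitor weight w = 1 + |du/dx|."""
    w = [1.0 + abs((u[i + 1] - u[i]) / (x[i + 1] - x[i]))
         for i in range(len(x) - 1)]
    W = [0.0]                                  # cumulative weight at nodes
    for i, wi in enumerate(w):
        W.append(W[-1] + wi * (x[i + 1] - x[i]))
    n = len(x)
    new_x, j = [], 0
    for k in range(n):                         # invert W at equal levels
        t = W[-1] * k / (n - 1)
        while j < len(W) - 2 and W[j + 1] < t:
            j += 1
        frac = (t - W[j]) / (W[j + 1] - W[j])  # linear interpolation
        new_x.append(x[j] + frac * (x[j + 1] - x[j]))
    return new_x

f = lambda s: math.tanh(20.0 * (s - 0.5))      # stand-in "flow solution"
x = [i / 40.0 for i in range(41)]              # step 1: initial uniform grid
for _ in range(3):                             # steps 2-5: solve/adapt cycle
    u = [f(xi) for xi in x]                    # steps 2/4: solve on this grid
    x = equidistribute(x, u)                   # step 3: adapt to the solution
```

After the solve/adapt cycles, cells cluster in the steep layer near x = 0.5, with the smallest cells roughly an order of magnitude finer than the original uniform spacing.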
Cyclic creep analysis from elastic finite-element solutions
NASA Technical Reports Server (NTRS)
Kaufman, A.; Hwang, S. Y.
1986-01-01
A uniaxial approach was developed for calculating cyclic creep and stress relaxation at the critical location of a structure subjected to cyclic thermomechanical loading. This approach was incorporated into a simplified analytical procedure for predicting the stress-strain history at a crack initiation site for life prediction purposes. An elastic finite-element solution for the problem was used as input for the simplified procedure. The creep analysis includes a self-adaptive time incrementing scheme. Cumulative creep is the sum of the initial creep, the recovery from the stress relaxation and the incremental creep. The simplified analysis was exercised for four cases involving a benchmark notched plate problem. Comparisons were made with elastic-plastic-creep solutions for these cases using the MARC nonlinear finite-element computer code.
Developing Flexible Procedural Knowledge in Undergraduate Calculus
ERIC Educational Resources Information Center
Maciejewski, Wes; Star, Jon R.
2016-01-01
Mathematics experts often choose appropriate procedures to produce an efficient or elegant solution to a mathematical task. This "flexible procedural knowledge" distinguishes novice and expert procedural performances. This article reports on an intervention intended to aid the development of undergraduate calculus students' flexible use…
Climate Literacy and Adaptation Solutions for Society
NASA Astrophysics Data System (ADS)
Sohl, L. E.; Chandler, M. A.
2011-12-01
Many climate literacy programs and resources are targeted specifically at children and young adults, as part of the concerted effort to improve STEM education in the U.S. This work is extremely important in building a future society that is well prepared to adopt policies promoting climate change resilience. What these climate literacy efforts seldom do, however, is reach the older adult population that is making economic decisions right now (or not, as the case may be) on matters that can be impacted by climate change. The result is a lack of appreciation of "climate intelligence" - information that could be incorporated into the decision-making process, to maximize opportunities, minimize risk, and create a climate-resilient economy. A National Climate Service, akin to the National Weather Service, would help provide legitimacy to the need for climate intelligence, and would certainly also be the first stop for both governments and private sector concerns seeking climate information for operational purposes. However, broader collaboration between the scientific and business communities is also needed, so that they become co-creators of knowledge that is beneficial and informative to all. The stakeholder-driven research that is the focus of NOAA's RISA (Regional Integrated Sciences and Assessments) projects is one example of how such collaborations can be developed.
Adaptive management: Chapter 1
Allen, Craig R.; Garmestani, Ahjond S.; Allen, Craig R.; Garmestani, Ahjond S.
2015-01-01
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.
Inferential Aspects of Adaptive Allocation Rules.
ERIC Educational Resources Information Center
Berry, Donald A.
In clinical trials, adaptive allocation means that the therapies assigned to the next patient or patients depend on the results obtained thus far in the trial. Although many adaptive allocation procedures have been proposed for clinical trials, few have actually used adaptive assignment, largely because classical frequentist measures of inference…
NASA Technical Reports Server (NTRS)
Georgeff, Michael P.; Lansky, Amy L.
1986-01-01
Much of commonsense knowledge about the real world is in the form of procedures or sequences of actions for achieving particular goals. In this paper, a formalism is presented for representing such knowledge using the notion of process. A declarative semantics for the representation is given, which allows a user to state facts about the effects of doing things in the problem domain of interest. An operational semantics is also provided, which shows how this knowledge can be used to achieve particular goals or to form intentions regarding their achievement. Given both semantics, the formalism additionally serves as an executable specification language suitable for constructing complex systems. A system based on this formalism is described, and examples involving control of an autonomous robot and fault diagnosis for NASA's Space Shuttle are provided.
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. Unfortunately, an efficient parallel implementation is difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive large-scale numerical computations in a message-passing environment. First, we present an efficient parallel implementation of a tetrahedral mesh adaption scheme. Extremely promising parallel performance is achieved for various refinement and coarsening strategies on a realistic-sized domain. Next we describe PLUM, a novel method for dynamically balancing the processor workloads in adaptive grid computations. This research includes interfacing the parallel mesh adaption procedure based on actual flow solutions to a data remapping module, and incorporating an efficient parallel mesh repartitioner. A significant runtime improvement is achieved by observing that data movement for a refinement step should be performed after the edge-marking phase but before the actual subdivision. We also present optimal and heuristic remapping cost metrics that can accurately predict the total overhead for data redistribution. Several experiments are performed to verify the effectiveness of PLUM on sequences of dynamically adapted unstructured grids. Portability is demonstrated by presenting results on the two vastly different architectures of the SP2 and the Origin2000. Additionally, we evaluate the performance of five state-of-the-art partitioning algorithms that can be used within PLUM. It is shown that for certain classes of unsteady adaption, globally repartitioning the computational mesh produces higher quality results than diffusive repartitioning schemes. We also demonstrate that a coarse starting mesh produces high quality load balancing, at
The development and application of the self-adaptive grid code, SAGE
NASA Technical Reports Server (NTRS)
Davies, Carol B.
1993-01-01
The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme, the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.
Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.
2008-01-01
Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
A two-dimensional adaptive mesh generation method
NASA Astrophysics Data System (ADS)
Altas, Irfan; Stephenson, John W.
1991-05-01
The present two-dimensional adaptive mesh-generation method allows selective modification of a small portion of the mesh without affecting large areas of adjacent mesh points, and is applicable with or without boundary-fitted coordinate-generation procedures. The method is illustrated for differential equations discretized both with classical difference formulas designed for uniform meshes and with the present difference formulas, applied to the Hiemenz flow, for which the exact solution of the Navier-Stokes equations is known, as well as to a two-dimensional viscous internal flow problem.
Numerical Differentiation for Adaptively Refined Finite Element Meshes
NASA Technical Reports Server (NTRS)
Borgioli, Andrea; Cwik, Tom
1998-01-01
Postprocessing of point-wise data is a fundamental process in many fields of research. Numerical differentiation is a key operation in computational electromagnetics. In the case of data obtained from a finite element method with automatic mesh refinement, much work still needs to be done. This paper addresses some issues in differentiating data obtained from a finite element electromagnetic code with adaptive mesh refinement, and it proposes a methodology for deriving the electric field given the magnetic field on a mesh of linear triangular elements. The procedure itself is nevertheless more general and might be extended for numerically differentiating any point-wise solution based on triangular meshes.
An efficient method-of-lines simulation procedure for organic semiconductor devices.
Rogel-Salazar, J; Bradley, D D C; Cash, J R; Demello, J C
2009-03-14
We describe an adaptive grid method-of-lines (MOL) solution procedure for modelling charge transport and recombination in organic semiconductor devices. The procedure we describe offers an efficient, robust and versatile means of simulating semiconductor devices that allows for much simpler coding of the underlying equations than alternative simulation procedures. The MOL technique is especially well-suited to modelling the extremely stiff (and hence difficult to solve) equations that arise during the simulation of organic (and some inorganic) semiconductor devices. It also has wider applications in other areas, including reaction kinetics, combustion and aero- and fluid dynamics, where its ease of implementation also makes it an attractive choice. The MOL procedure we use converts the underlying semiconductor equations into a series of coupled ordinary differential equations (ODEs) that can be integrated forward in time using an appropriate ODE solver. The time integration is periodically interrupted, the numerical solution is interpolated onto a new grid that is better matched to the solution profile, and the time integration is then resumed on the new grid. The efficacy of the simulation procedure is assessed by considering a single layer device structure, for which exact analytical solutions are available for the electric potential, the charge distributions and the current-voltage characteristics. Two separate state-of-the-art ODE solvers are tested: the single-step Runge-Kutta solver Radau5 and the multi-step solver ODE15s, which is included as part of the Matlab ODE suite. In both cases, the numerical solutions show excellent agreement with the exact analytical solutions, yielding results that are accurate to one part in 10^4. The single-step Radau5 solver, however, is found to provide faster convergence since its efficiency is not compromised by the periodic interruption of the time integration when the grid is updated.
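The MOL idea itself can be shown on a toy problem. The sketch below is a simplified illustration, not the authors' simulator: it uses a fixed (non-adaptive) grid for brevity, the heat equation rather than the stiff semiconductor equations, and classic explicit RK4 in place of a stiff solver such as Radau5. Central differences in space turn the PDE into a system of coupled ODEs, which are then marched in time.

```python
import math

def heat_mol(nx=41, t_end=0.1, dt=1e-4):
    """Method of lines for u_t = u_xx on [0,1] with u(0,t)=u(1,t)=0:
    central differences in space reduce the PDE to nx-2 coupled ODEs,
    advanced in time with classic fourth-order Runge-Kutta."""
    h = 1.0 / (nx - 1)
    x = [i * h for i in range(nx)]
    u = [math.sin(math.pi * xi) for xi in x]      # initial condition

    def rhs(v):                                   # semi-discrete RHS
        d = [0.0] * nx                            # boundary values stay fixed
        for i in range(1, nx - 1):
            d[i] = (v[i - 1] - 2.0 * v[i] + v[i + 1]) / (h * h)
        return d

    for _ in range(round(t_end / dt)):            # RK4 time marching
        k1 = rhs(u)
        k2 = rhs([a + 0.5 * dt * b for a, b in zip(u, k1)])
        k3 = rhs([a + 0.5 * dt * b for a, b in zip(u, k2)])
        k4 = rhs([a + dt * b for a, b in zip(u, k3)])
        u = [a + dt * (p + 2.0 * q + 2.0 * r + s) / 6.0
             for a, p, q, r, s in zip(u, k1, k2, k3, k4)]
    return x, u

x, u = heat_mol()
# the exact solution decays as exp(-pi^2 t), so u(0.5, 0.1) ≈ 0.3727
```

An adaptive MOL code of the kind described would periodically interrupt this time loop, interpolate u onto a regridded x, and resume, and would use an implicit stiff integrator so that dt is not limited by the h^2 stability restriction seen here.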
Solutions For Smart Metering Under Harsh Environmental Conditions
NASA Astrophysics Data System (ADS)
Kunicina, N.; Zabasta, A.; Kondratjevs, K.; Asmanis, G.
2015-02-01
The case study described concerns the application of wireless sensor networks to the smart control of power supply substations. The solution proposed for metering is based on the modular principle and has been tested in the intersystem communication paradigm using selectable interface modules (IEEE 802.3, ISM radio interface, GSM/GPRS). The modularity of the solution yields 7 % savings in maintenance costs. The developed solution can be applied to the control of different critical infrastructure networks using adapted modules. The proposed smart meter is suitable for outdoor installation, indoor industrial installations, and operation under electromagnetic pollution and temperature and humidity stress. The results of tests have shown good electromagnetic compatibility of the prototype meter with other electronic devices. The metering procedure is exemplified by the work of a testing company's personnel under harsh environmental conditions.
Lattice model for water-solute mixtures
NASA Astrophysics Data System (ADS)
Furlan, A. P.; Almarza, N. G.; Barbosa, M. C.
2016-10-01
A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute-solvent interaction is controlled by tuning the energy interactions between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert, and hydrophobic interactions. Extensive Monte Carlo simulations were carried out, and the behavior of the pure components and the excess properties of the mixtures have been studied. The pure components, water (solvent) and solute, have quite similar phase diagrams, presenting gas, low-density liquid, and high-density liquid phases. In the case of the solute, the regions of coexistence are substantially reduced when compared with both the water and the standard ALG models. A numerical procedure has been developed in order to obtain series of results at constant pressure from simulations of the lattice gas model in the grand canonical ensemble. The excess properties of the mixtures, volume and enthalpy as a function of the solute fraction, have been studied for different interaction parameters of the model. Our model is able to reproduce qualitatively well the excess volume and enthalpy for different aqueous solutions. For the hydrophilic case, we show that the model reproduces the excess volume and enthalpy of mixtures of small alcohols and amines. The inert case reproduces the behavior of large alcohols such as propanol, butanol, and pentanol. For the last case (hydrophobic), the excess properties reproduce the behavior of ionic liquids in aqueous solution.
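The grand canonical Monte Carlo machinery this kind of study relies on can be sketched for a plain 2D lattice gas; this is a simplified stand-in for the ALG model (no orientational patches), and the lattice size, couplings, and sweep counts are arbitrary illustrative choices.

```python
import numpy as np

def gc_lattice_gas(L=12, eps=-1.0, mu=-2.0, beta=1.0, sweeps=600, seed=0):
    """Grand canonical Monte Carlo for a simple 2D lattice gas with
    nearest-neighbour pair energy eps and chemical potential mu."""
    rng = np.random.default_rng(seed)
    occ = np.zeros((L, L), dtype=int)
    dens = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nn = (occ[(i + 1) % L, j] + occ[(i - 1) % L, j]
                  + occ[i, (j + 1) % L] + occ[i, (j - 1) % L])
            if occ[i, j] == 0:               # attempt insertion
                dE = eps * nn - mu
                if rng.random() < np.exp(-beta * dE):
                    occ[i, j] = 1
            else:                            # attempt deletion
                dE = -eps * nn + mu
                if rng.random() < np.exp(-beta * dE):
                    occ[i, j] = 0
        if sweep >= sweeps // 2:             # discard first half as equilibration
            dens.append(occ.mean())
    return float(np.mean(dens))

rho_dilute = gc_lattice_gas(mu=-4.0)         # low chemical potential
rho_dense = gc_lattice_gas(mu=2.0)           # high chemical potential
```

The Metropolis acceptance rule on single-site insertions and deletions satisfies detailed balance, so the averaged occupancy gives the equilibrium density at each chemical potential; a constant-pressure series, as in the abstract, would be built from such constant-mu runs.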
Slope bias of psychometric functions derived from adaptive data.
Kaernbach, C
2001-11-01
Several investigators have fit psychometric functions to data from adaptive procedures for threshold estimation. Although the threshold estimates are in general quite correct, one encounters a slope bias that has not been explained up to now. The present paper demonstrates slope bias for parametric and nonparametric maximum-likelihood fits and for Spearman-Kärber analysis of adaptive data. The examples include staircase and stochastic approximation procedures. The paper then presents an explanation of slope bias based on serial data dependency in adaptive procedures. Data dependency is first illustrated with simple two-trial examples and then extended to realistic adaptive procedures. Finally, the paper presents an adaptive staircase procedure designed to measure threshold and slope directly. In contrast to classical adaptive threshold-only procedures, this procedure varies both a threshold and a spread parameter in response to double trials.
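A minimal simulation of a transformed (two-down/one-up) staircase against an assumed logistic psychometric function shows the kind of adaptive track such analyses start from; the slope, step size, and trial count below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_correct(x, thresh=0.0, slope=1.5):
    """Assumed logistic psychometric function (stimulus level in log units)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - thresh)))

# two-down/one-up staircase: converges to the 70.7%-correct level
x, step = 3.0, 0.5
levels, correct_run = [], 0
for trial in range(400):
    levels.append(x)
    if rng.random() < p_correct(x):
        correct_run += 1
        if correct_run == 2:          # two consecutive correct -> harder
            x -= step
            correct_run = 0
    else:                             # one error -> easier
        x += step
        correct_run = 0

# threshold estimate: mean level over the second half of the track
estimate = np.mean(levels[200:])
target = np.log(0.707 / (1 - 0.707)) / 1.5   # level where p = 0.707
```

The serial dependency the paper analyzes is visible here: each tested level is a deterministic function of the preceding responses, so the trials are not independent samples of the psychometric function.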
Adaptive Force Control in Compliant Motion
NASA Technical Reports Server (NTRS)
Seraji, H.
1994-01-01
This paper addresses the problem of controlling a manipulator in compliant motion while in contact with an environment having an unknown stiffness. Two classes of solutions are discussed: adaptive admittance control and adaptive compliance control. In both admittance and compliance control schemes, compensator adaptation is used to ensure a stable and uniform system performance.
QUEST - A Bayesian adaptive psychometric method
NASA Technical Reports Server (NTRS)
Watson, A. B.; Pelli, D. G.
1983-01-01
An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
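The trial-placement rule can be sketched as a grid-based Bayesian update: maintain a posterior over candidate thresholds on a log-intensity axis, test at the posterior mode, and update with the likelihood of the observed response. The psychometric-function shape, prior, and parameter values below are illustrative assumptions, not the exact QUEST formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

# candidate thresholds on a log-intensity axis, with a weak Gaussian prior
theta = np.linspace(-2.0, 2.0, 401)
posterior = np.exp(-0.5 * (theta / 1.5) ** 2)
posterior /= posterior.sum()

def psi(x, t, slope=2.0, gamma=0.5, lam=0.02):
    """Assumed 2AFC psychometric function of log intensity x, threshold t."""
    return gamma + (1.0 - gamma - lam) / (1.0 + np.exp(-slope * (x - t)))

true_threshold = 0.4
for trial in range(200):
    x = theta[np.argmax(posterior)]           # place the trial at the mode
    correct = rng.random() < psi(x, true_threshold)
    likelihood = psi(x, theta) if correct else 1.0 - psi(x, theta)
    posterior *= likelihood                   # Bayes update
    posterior /= posterior.sum()

estimate = theta[np.argmax(posterior)]
```

Because the function is assumed invariant in form on the log-intensity axis, a single threshold parameter shifts the whole template, which is what makes the one-dimensional posterior update sufficient.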
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
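The adjoint logic applies to any linear problem with a linear output, so a 1D Poisson analog (an assumption standing in for the 3D Euler equations) can illustrate the correction: solve the adjoint for the functional on a finer grid, weight the residual of the embedded coarse solution by it, and add the result to the functional.

```python
import numpy as np

def poisson_matrix(n):
    """Second-order FD matrix for -u'' on (0,1), n interior points."""
    h = 1.0 / (n + 1)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return A, h

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # forcing (exact u = sin(pi x))
g = lambda x: x * (1 - x)                    # weight of the output J = integral of g*u

# coarse-grid solve (stand-in for the flow solution)
nc = 15
Ac, hc = poisson_matrix(nc)
xc = np.linspace(hc, 1 - hc, nc)
uc = np.linalg.solve(Ac, f(xc))

# embed the coarse solution in a nested fine grid and form its residual there
nf = 2 * nc + 1
Af, hf = poisson_matrix(nf)
xf = np.linspace(hf, 1 - hf, nf)
u_embedded = np.interp(xf, np.r_[0.0, xc, 1.0], np.r_[0.0, uc, 0.0])
residual = f(xf) - Af @ u_embedded

# adjoint of the functional on the fine grid, and the corrected output
psi = np.linalg.solve(Af.T, hf * g(xf))
J_base = hf * g(xf) @ u_embedded
J_corrected = J_base + psi @ residual

# reference: functional from a full fine-grid solve
J_fine = hf * g(xf) @ np.linalg.solve(Af, f(xf))
```

For a linear problem and linear functional the adjoint-weighted residual recovers the fine-grid functional exactly; in the nonlinear CFD setting it is an estimate, and the spatial distribution of the correction term drives the mesh adaptation.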
Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo
2016-04-01
The proposed methodology was originally developed by our scientific team in Split, which designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) the computational capabilities of Fup basis functions with compact support, able to resolve all spatial and temporal scales; 2) multiresolution representation of the heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also a description of subsurface processes closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines with only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only along lines where the solution changes are intensive. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across
NASA Astrophysics Data System (ADS)
Kinzig, Ann P.
2015-03-01
This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.
Adaptive Algebraic Multigrid Methods
Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J
2004-04-09
Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.
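A two-grid cycle built only from the matrix and a prolongator conveys the structure such methods adapt; the sketch below uses a hand-built linear-interpolation prolongator for a 1D Poisson matrix, standing in for the automatic (and, in the adaptive setting, solver-constructed) coarsening of AMG.

```python
import numpy as np

def two_grid(A, b, P, n_cycles=20, omega=2.0 / 3.0):
    """Two-grid cycle: weighted-Jacobi smoothing plus a Galerkin
    coarse-grid correction built only from A and the prolongator P."""
    Ac = P.T @ A @ P                           # Galerkin coarse operator
    d = np.diag(A)
    x = np.zeros_like(b)
    for _ in range(n_cycles):
        x += omega * (b - A @ x) / d           # pre-smooth
        r = b - A @ x
        x += P @ np.linalg.solve(Ac, P.T @ r)  # coarse-grid correction
        x += omega * (b - A @ x) / d           # post-smooth
    return x

# 1D Poisson model problem and a linear-interpolation prolongator
nf, ncoarse = 63, 31
A = 2 * np.eye(nf) - np.eye(nf, k=1) - np.eye(nf, k=-1)
P = np.zeros((nf, ncoarse))
for j in range(ncoarse):
    P[2 * j + 1, j] = 1.0                      # coarse node injects to itself
    P[2 * j, j] = 0.5                          # fine neighbours take the average
    if 2 * j + 2 < nf:
        P[2 * j + 2, j] = 0.5

b = np.ones(nf)
x = two_grid(A, b, P)
```

Here the prolongator encodes the assumption that the error the smoother leaves behind is locally linear; the adaptive methods of the paper replace exactly this hard-wired assumption with prolongators inferred from the matrix itself.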
NASA Astrophysics Data System (ADS)
Julie, Hongki; Sanjaya, Febi; Anggoro, Ant. Yudhi
2017-08-01
One of the purposes of this study was to describe the solution profiles of junior high school students on a PISA adaptation test. The procedures conducted by the researchers to achieve this objective were (1) adapting the PISA test, (2) validating the adapted PISA test, (3) asking junior high school students to take the adapted PISA test, and (4) constructing the students' solution profiles. The PISA problems for mathematics can be classified into four areas, namely quantity, space and shape, change and relationship, and uncertainty. The results presented in this paper are those for the uncertainty problems. The adapted PISA test contained fifteen questions. Subjects in this study were 18 students from 11 junior high schools in Yogyakarta, Central Java, and Banten. The researchers used a qualitative research design. For the first uncertainty problem in the adapted test, 66.67% of students reached level 3. For the second uncertainty problem, 44.44% of students achieved level 4, and 33.33% reached level 3. For the third uncertainty problem, 38.89% of students achieved level 5, 11.11% reached level 4, and 5.56% achieved level 3. For part a of the fourth uncertainty problem, 72.22% of students reached level 4, and for part b of the fourth uncertainty problem, 83.33% achieved level 4.
Application of Sequential Interval Estimation to Adaptive Mastery Testing
ERIC Educational Resources Information Center
Chang, Yuan-chin Ivan
2005-01-01
In this paper, we apply sequential one-sided confidence interval estimation procedures with beta-protection to adaptive mastery testing. The procedures of fixed-width and fixed proportional accuracy confidence interval estimation can be viewed as extensions of one-sided confidence interval procedures. It can be shown that the adaptive mastery…
Launay, Jean-Claude; Savourey, Gustave
2009-07-01
Nowadays, occupational and recreational activities in cold environments are common. Exposure to cold induces thermoregulatory responses such as behavioural changes and physiological adjustments that maintain thermal balance, either by increasing metabolic heat production through shivering and/or by decreasing heat losses through peripheral cutaneous vasoconstriction. These physiological responses show great variability among individuals and depend mainly on biometric characteristics, age, and general cold adaptation. During severe cold exposure, medical disorders may occur, such as accidental hypothermia and/or freezing or non-freezing cold injuries. General cold adaptations have been classified qualitatively by Hammel and quantitatively by Savourey. The latter classification takes into account the quantitative changes in the main cold reactions: higher or lower metabolic heat production, higher or lower heat losses, and the level of core temperature observed at the end of a standardized cold exposure. General cold adaptations previously observed in natives can also be developed under laboratory conditions by continuous or intermittent cold exposures. Besides general cold adaptation, local cold adaptation exists and is characterized by a smaller decrease in skin temperature, a more pronounced cold-induced vasodilation, less pain, and higher manual dexterity. Adaptations to cold may reduce the occurrence of accidents and improve human performance, such as survival in the cold. The present review describes both general and local cold adaptations in humans and their relevance to cold workers.
Feline onychectomy and elective procedures.
Young, William Phillip
2002-05-01
The development of the carbon dioxide (CO2) surgical laser has given veterinarians a new perspective in the field of surgery. Recently developed techniques and improvisations of established procedures have opened the field of surgery to infinite applications never before dreamed of as little as 10 years ago. Today's CO2 surgical laser is an adaptable, indispensable tool for the everyday veterinary practitioner. Its use is becoming a common occurrence in offices of veterinarians around the world.
Adaptive building skin structures
NASA Astrophysics Data System (ADS)
Del Grosso, A. E.; Basso, P.
2010-12-01
The concept of adaptive and morphing structures has gained considerable attention in recent years in many fields of engineering. In civil engineering, however, very few practical applications have been reported to date. Non-conventional structural concepts like deployable, inflatable and morphing structures may indeed provide innovative solutions to some of the problems that the construction industry is being called upon to face; the search for low-energy-consumption or even energy-harvesting green buildings is among them. This paper first presents a review of the above problems and technologies, showing how their solution requires a multidisciplinary approach involving the integration of the architectural and engineering disciplines. The discussion continues with the presentation of a possible application of two adaptive, dynamically morphing structures proposed for the realization of an acoustic envelope. The core of the two applications is a novel optimization process which guides the search for optimal solutions by means of an evolutionary technique, while the compatibility of the resulting configurations of the adaptive envelope is ensured by the virtual force density method.
ERIC Educational Resources Information Center
Exceptional Parent, 1987
1987-01-01
Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)
Davidson, Rebecca K; Oines, Oivind; Madslien, Knut; Mathis, Alexander
2009-02-01
Echinococcus multilocularis, causing alveolar echinococcosis in humans, is a highly pathogenic emerging zoonotic disease in central Europe. The gold standard for the identification of this parasite in the main host, the red fox, namely identification of the adult parasite in the intestine at necropsy, is very laborious. Copro-enzyme-linked immunosorbent assay (ELISA) with confirmatory polymerase chain reaction (PCR) has been suggested as an acceptable alternative, but no commercial copro-ELISA tests are currently available and an in-house test is therefore required. Published methods for taeniid egg isolation and a multiplex PCR assay for simultaneous identification of E. multilocularis, E. granulosus and other cestodes were adapted to be carried out on pooled faecal samples from red foxes in Norway. None of the 483 fox faecal samples screened were PCR-positive for E. multilocularis, indicating an apparent prevalence of between 0% and 1.5%. The advantages and disadvantages of using the adapted method are discussed as well as the results pertaining to taeniid and non-taeniid cestodes as identified by multiplex PCR.
NASA Astrophysics Data System (ADS)
Van Den Daele, W.; Malaquin, C.; Baumel, N.; Kononchuk, O.; Cristoloveanu, S.
2013-10-01
This paper revisits and adapts the pseudo-MOSFET (Ψ-MOSFET) characterization technique for advanced fully depleted silicon-on-insulator (FDSOI) wafers. We review the current challenges for the standard Ψ-MOSFET set-up on ultra-thin body (12 nm) over ultra-thin buried oxide (25 nm BOX) and propose a novel set-up enabling the technique on FDSOI structures. This novel configuration embeds 4 probes with large tip radius (100-200 μm) and low pressure to avoid oxide damage. Compared with previous 4-point probe measurements, we introduce a simplified and faster methodology together with an adapted Y-function. The models for parameter extraction are revisited and calibrated through systematic measurements of SOI wafers with variable film thickness. We propose an in-depth analysis of the FDSOI structure through comparison of experimental data, TCAD (Technology Computer-Aided Design) simulations, and analytical modeling. TCAD simulations are used to unify previously reported thickness-dependent analytical models by analyzing the BOX/substrate potential and the electric field in ultrathin films. Our updated analytical models are used to explain the results and to extract correct electrical parameters such as low-field electron and hole mobility, subthreshold slope, and film/BOX interface trap density.
Adaptation improves face trustworthiness discrimination.
Keefe, B D; Dzhelyova, M; Perrett, D I; Barraclough, N E
2013-01-01
Adaptation to facial characteristics, such as gender and viewpoint, has been shown to both bias our perception of faces and improve facial discrimination. In this study, we examined whether adapting to two levels of face trustworthiness improved sensitivity around the adapted level. Facial trustworthiness was manipulated by morphing between trustworthy and untrustworthy prototypes, each generated by morphing eight trustworthy and eight untrustworthy faces, respectively. In the first experiment, just-noticeable differences (JNDs) were calculated for an untrustworthy face after participants adapted to an untrustworthy face, a trustworthy face, or did not adapt. In the second experiment, the three conditions were identical, except that JNDs were calculated for a trustworthy face. In the third experiment we examined whether adapting to an untrustworthy male face improved discrimination to an untrustworthy female face. In all experiments, participants completed a two-interval forced-choice (2-IFC) adaptive staircase procedure, in which they judged which face was more untrustworthy. JNDs were derived from a psychometric function fitted to the data. Adaptation improved sensitivity to faces conveying the same level of trustworthiness when compared to no adaptation. When adapting to and discriminating around a different level of face trustworthiness there was no improvement in sensitivity and JNDs were equivalent to those in the no adaptation condition. The improvement in sensitivity was found to occur even when adapting to a face with different gender and identity. These results suggest that adaptation to facial trustworthiness can selectively enhance mechanisms underlying the coding of facial trustworthiness to improve perceptual sensitivity. These findings have implications for the role of our visual experience in the decisions we make about the trustworthiness of other individuals.
Hankins, Sam C; Brimhall, Bryan B; Kankanala, Vineel; Austin, Gregory L
2017-01-01
Low-volume polyethylene glycol (PEG) bowel preparations are better tolerated by patients than high-volume preparations and may achieve similar preparation quality. However, there is little data comparing their effects on a recommendation for an early repeat colonoscopy (because of a suboptimal preparation), procedure times, adenoma detection rate (ADR), and advanced adenoma detection rate (AADR). This is a retrospective cohort study of outpatient colonoscopies performed during a one-year period at a single academic medical center in which low-volume MoviPrep® (n = 1841) or high-volume Colyte® (n = 1337) was used. All preparations were split-dosed. Appropriate covariates were included in regression models assessing suboptimal preparation quality (fair, poor, or inadequate), procedure times, recommendation for an early repeat colonoscopy, ADR, and AADR. MoviPrep® was associated with an increase in having a suboptimal bowel preparation (OR 1.36; 95% CI: 1.06-1.76), but it was not associated with differences in insertion (p = 0.43), withdrawal (p = 0.22), or total procedure times (p = 0.10). The adjusted percentage with a suboptimal preparation was 11.7% for patients using MoviPrep® and 8.8% for patients using Colyte®. MoviPrep® was not associated with a significant difference in overall ADR (OR 0.93; 95% CI: 0.78-1.11), AADR (OR 1.18; 95% CI: 0.87-1.62), or recommendation for early repeat colonoscopy (OR 1.16; 95% CI: 0.72-1.88). MoviPrep® was associated with a small absolute increase in having a suboptimal preparation, but did not affect recommendations for an early repeat colonoscopy, procedure times, or adenoma detection rates. Mechanisms to reduce financial barriers limiting low-volume preparations should be considered because of their favorable tolerability profile.
ADAPTIVE ROBUST VARIABLE SELECTION
Fan, Jianqing; Fan, Yingying; Barut, Emre
2014-01-01
Heavy-tailed high-dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. A natural procedure to address this problem is to use penalized quantile regression with weighted L1-penalty, called weighted robust Lasso (WR-Lasso), in which weights are introduced to ameliorate the bias problem induced by the L1-penalty. In the ultra-high dimensional setting, where the dimensionality can grow exponentially with the sample size, we investigate the model selection oracle property and establish the asymptotic normality of the WR-Lasso. We show that only mild conditions on the model error distribution are needed. Our theoretical results also reveal that adaptive choice of the weight vector is essential for the WR-Lasso to enjoy these nice asymptotic properties. To make the WR-Lasso practically feasible, we propose a two-step procedure, called adaptive robust Lasso (AR-Lasso), in which the weight vector in the second step is constructed based on the L1-penalized quantile regression estimate from the first step. This two-step procedure is justified theoretically to possess the oracle property and the asymptotic normality. Numerical studies demonstrate the favorable finite-sample performance of the AR-Lasso. PMID:25580039
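Because weighted-L1-penalized quantile regression is a linear program, the two-step AR-Lasso recipe can be sketched directly with SciPy's LP solver; the weight formula, penalty level, and data-generating model below are illustrative choices, not those analysed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_quantreg(X, y, weights, lam, tau=0.5):
    """Weighted-L1-penalized quantile regression as a linear program:
    minimize sum_i rho_tau(y_i - x_i'b) + lam * sum_j weights_j * |b_j|.
    Variables: b = bp - bm and residual = up - um, all nonnegative."""
    n, p = X.shape
    c = np.concatenate([lam * weights, lam * weights,
                        tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:p] - res.x[p:2 * p]

rng = np.random.default_rng(3)
n, p = 200, 8
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.standard_t(df=3, size=n)   # heavy-tailed errors

lam = 1.0
b1 = weighted_l1_quantreg(X, y, np.ones(p), lam)   # step 1: uniform weights
w = 1.0 / (np.abs(b1) + 1.0 / n)                   # step 2: data-driven weights
b2 = weighted_l1_quantreg(X, y, w, lam)
```

The second step penalizes coordinates that were small in the first fit much more heavily than coordinates with strong signal, which is how the adaptive weights ameliorate the L1 bias the abstract describes.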
Development of a Countermeasure to Enhance Postflight Locomotor Adaptability
NASA Technical Reports Server (NTRS)
Bloomberg, Jacob J.
2006-01-01
Astronauts returning from space flight experience locomotor dysfunction following their return to Earth. Our laboratory is currently developing a gait adaptability training program designed to facilitate recovery of locomotor function following a return to a gravitational environment. The training program exploits the ability of the sensorimotor system to generalize from exposure to multiple adaptive challenges during training, so that the gait control system essentially learns to learn and can therefore reorganize more rapidly when faced with a novel adaptive challenge. We have previously confirmed that subjects participating in adaptive generalization training programs using a variety of visuomotor distortions can enhance their ability to adapt to a novel sensorimotor environment. Importantly, this increased adaptability was retained even one month after completion of the training period. Adaptive generalization has been observed in a variety of other tasks requiring sensorimotor transformations, including manual control tasks and reaching (Bock et al., 2001; Seidler, 2003) and obstacle avoidance during walking (Lam and Dietz, 2004). Taken together, the evidence suggests that a training regimen exposing crewmembers to variation in locomotor conditions, with repeated transitions among states, may enhance their ability to learn how to reassemble appropriate locomotor patterns upon return from microgravity. We believe exposure to this type of training will extend crewmembers' locomotor behavioral repertoires, facilitating the return of functional mobility after long-duration space flight. Our proposed training protocol will compel subjects to develop new behavioral solutions under varying sensorimotor demands. Over time, subjects will learn to create appropriate locomotor solutions more rapidly, enabling acquisition of mobility sooner after long-duration space flight. Our laboratory is currently developing adaptive generalization training procedures and the
Multiple Comparison Procedures when Population Variances Differ.
ERIC Educational Resources Information Center
Olejnik, Stephen; Lee, JaeShin
A review of the literature on multiple comparison procedures suggests several alternative approaches for comparing means when population variances differ. These include: (1) the approach of P. A. Games and J. F. Howell (1976); (2) C. W. Dunnett's C confidence interval (1980); and (3) Dunnett's T3 solution (1980). These procedures control the…
Ritz Procedure for COSMIC/NASTRAN
NASA Technical Reports Server (NTRS)
Citerley, R. L.; Woytowitz, P. J.
1985-01-01
An analysis procedure has been developed and incorporated into COSMIC/NASTRAN that permits large dynamic degree of freedom models to be processed accurately with little or no extra effort required by the user. The method employs existing capabilities without the need for approximate Guyan reduction techniques. Comparisons to existing solution procedures presently within NASTRAN are discussed.
The benefits of using customized procedure packs.
Baines, R; Colquhoun, G; Jones, N; Bateman, R
2001-01-01
Discrete item purchasing is the traditional approach for hospitals to obtain consumable supplies for theatre procedures. Although most items are relatively low cost, the management and co-ordination of the supply chain, raising orders, controlling stock, picking and delivering to each operating theatre can be complex and costly. Customized procedure packs provide a solution.
Gault, M. H.
1973-01-01
Certain preventable complications in the treatment of renal failure, in part related to the composition of commercially prepared peritoneal dialysis solutions, continue to occur. Solutions are advocated which would contain sodium 132, calcium 3.5, magnesium 1.5, chloride 102 and lactate or acetate 35 mEq./1., and dextrose 1.5% or about 4.25%. Elimination of 7% dextrose solutions and a reduction of the sodium and lactate concentrations should reduce complications due to hypovolemia, hyperglycemia, hypernatremia and alkalosis. Reduction in the number of solutions should simplify the procedure and perhaps reduce costs. It is anticipated that some of the changes discussed will soon be introduced by industry. PMID:4691094
Pipe Cleaning Operating Procedures
Clark, D.; Wu, J.; /Fermilab
1991-01-24
This cleaning procedure outlines the steps involved in cleaning the high purity argon lines associated with the DO calorimeters. The procedure is broken down into 7 cycles: system setup, initial flush, wash, first rinse, second rinse, final rinse and drying. The system setup involves preparing the pump cart, line to be cleaned, distilled water, and interconnecting hoses and fittings. The initial flush is an off-line flush of the pump cart and its plumbing in order to preclude contaminating the line. The wash cycle circulates the detergent solution (Micro) at 180 degrees Fahrenheit through the line to be cleaned. The first rinse is then intended to rid the line of the majority of detergent and only needs to run for 30 minutes and at ambient temperature. The second rinse (if necessary) should eliminate the remaining soap residue. The final rinse is then intended to be a check that there is no remaining soap or other foreign particles in the line, particularly metal 'chips.' The final rinse should be run at 180 degrees Fahrenheit for at least 90 minutes. The filters should be changed after each cycle, paying particular attention to the wash cycle and the final rinse cycle return filters. These filters, which should be bagged and labeled, prove that the pipeline is clean. Only distilled water should be used for all cycles, especially rinsing. The level in the tank need not be excessive, merely enough to cover the heater float switch. The final rinse, however, may require a full 50 gallons. Note that most of the details of the procedure are included in the initial flush description. This section should be referred to if problems arise in the wash or rinse cycles.
Organization of Distributed Adaptive Learning
ERIC Educational Resources Information Center
Vengerov, Alexander
2009-01-01
The growing sensitivity of various systems and parts of industry, society, and even everyday individual life leads to an increased volume of changes and needs for adaptation and learning. This creates a new situation where learning, from being a purely academic knowledge-transfer procedure, is becoming a ubiquitous, always-on, essential part of all…
Visualizing Search Behavior with Adaptive Discriminations
Cook, Robert G.; Qadri, Muhammad A. J.
2014-01-01
We examined different aspects of the visual search behavior of a pigeon using an open-ended, adaptive testing procedure controlled by a genetic algorithm. The animal had to accurately search for and peck a gray target element randomly located from among a variable number of surrounding darker and lighter distractor elements. Display composition was controlled by a genetic algorithm involving the multivariate configuration of different parameters or genes (number of distractors, element size, shape, spacing, target brightness, and distractor brightness). Sessions were composed of random displays, testing randomized combinations of these genes, and selected displays, representing the varied descendants of displays correctly identified by the pigeon. Testing a larger number of random displays than done previously, it was found that the bird’s solution to the search task was highly stable and did not change with extensive experience in the task. The location and shape of this attractor was visualized using multivariate behavioral surfaces in which element size and the number of distractors were the most important factors controlling search accuracy and search time. The resulting visualizations of the bird’s search behavior are discussed with reference to the potential of using adaptive, open-ended experimental techniques for investigating animal cognition and their implications for Bond and Kamil’s innovative development of virtual ecologies using an analogous methodology. PMID:24370702
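The adaptive display-generation loop the abstract describes can be sketched as a minimal genetic algorithm over display "genes". The gene names, value ranges, and mutation rate below are illustrative assumptions, not the study's actual parameters:

```python
import random

# Hypothetical gene ranges standing in for the display parameters the
# study evolved (distractor count, element size, brightnesses, ...).
GENES = {"n_distractors": (1, 30), "element_size": (2, 12),
         "target_brightness": (0, 255), "distractor_brightness": (0, 255)}

def random_display(rng):
    """A 'random' trial: every gene drawn uniformly from its range."""
    return {g: rng.randint(lo, hi) for g, (lo, hi) in GENES.items()}

def descendant(parent, rng, rate=0.25):
    """Mutate a display the subject answered correctly, so the
    population drifts toward configurations the bird can solve."""
    child = dict(parent)
    for g, (lo, hi) in GENES.items():
        if rng.random() < rate:
            child[g] = rng.randint(lo, hi)
    return child

def next_generation(displays, correct, rng):
    """Descendants of correctly answered displays, topped up with
    fresh random displays when there are no correct parents."""
    parents = [d for d, ok in zip(displays, correct) if ok]
    kids = ([descendant(rng.choice(parents), rng) for _ in displays]
            if parents else [])
    while len(kids) < len(displays):
        kids.append(random_display(rng))
    return kids

rng = random.Random(0)
pop = [random_display(rng) for _ in range(8)]
gen2 = next_generation(pop, [d["element_size"] > 6 for d in pop], rng)
```

In the actual experiment the "fitness" signal is of course the pigeon's accuracy on each display rather than a programmed predicate.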
Bremer, P. -T.
2014-08-26
ADAPT is a topological analysis code that allows computing local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.
Adaptive neuro-control for large flexible structures
NASA Astrophysics Data System (ADS)
Krishankumar, K.; Montgomery, L.
Special problems related to control system design for large flexible structures include the inherent low structural damping, wide range of modal frequencies, unmodeled dynamics, and possibility of system failures. Neuro-control, which combines concepts from artificial neural networks and adaptive control, is investigated as a solution to some of these problems. Specifically, the roles of neuro-controllers in learning unmodeled dynamics and adaptive control for system failures are investigated. Satisfying these objectives requires training a neural network model (neuro-model) to simulate the actual structure, and then training a neural network controller (neuro-controller) to minimize structural response resulting from an arbitrary disturbance. The neuro-controller synthesis procedure and its capabilities in adaptively controlling the structure are demonstrated using a mathematical model of an existing structure, the Advanced Control Evaluation for Systems test article located at NASA/Marshall Space Flight Center, Huntsville, Alabama. Also, the real-time adaptive capability of neuro-controllers is demonstrated via an experiment utilizing a flexible clamped-free beam equipped with an actuator that uses a bang-bang controller.
Collected radiochemical and geochemical procedures
Kleinberg, J
1990-05-01
This revision of LA-1721, 4th Ed., Collected Radiochemical Procedures, reflects the activities of two groups in the Isotope and Nuclear Chemistry Division of the Los Alamos National Laboratory: INC-11, Nuclear and radiochemistry; and INC-7, Isotope Geochemistry. The procedures fall into five categories: I. Separation of Radionuclides from Uranium, Fission-Product Solutions, and Nuclear Debris; II. Separation of Products from Irradiated Targets; III. Preparation of Samples for Mass Spectrometric Analysis; IV. Dissolution Procedures; and V. Geochemical Procedures. With one exception, the first category of procedures is ordered by the positions of the elements in the Periodic Table, with separate parts on the Representative Elements (the A groups); the d-Transition Elements (the B groups and the Transition Triads); and the Lanthanides (Rare Earths) and Actinides (the 4f- and 5f-Transition Elements). The members of Group IIIB-- scandium, yttrium, and lanthanum--are included with the lanthanides, elements they resemble closely in chemistry and with which they occur in nature. The procedures dealing with the isolation of products from irradiated targets are arranged by target element.
Adaptive Units of Learning and Educational Videogames
ERIC Educational Resources Information Center
Moreno-Ger, Pablo; Thomas, Pilar Sancho; Martinez-Ortiz, Ivan; Sierra, Jose Luis; Fernandez-Manjon, Baltasar
2007-01-01
In this paper, we propose three different ways of using IMS Learning Design to support online adaptive learning modules that include educational videogames. The first approach relies on IMS LD to support adaptation procedures where the educational games are considered as Learning Objects. These games can be included instead of traditional content…
Adaptive remeshing method in 2D based on refinement and coarsening techniques
NASA Astrophysics Data System (ADS)
Giraud-Moreau, L.; Borouchaki, H.; Cherouat, A.
2007-04-01
The analysis of mechanical structures using the Finite Element Method, in the framework of large elastoplastic strains, needs frequent remeshing of the deformed domain during computation. Remeshing is necessary for two main reasons: the large geometric distortion of finite elements and the adaptation of the mesh size to the physical behavior of the solution. This paper presents an adaptive remeshing method to remesh a mechanical structure in two dimensions subjected to large elastoplastic deformations with damage. The proposed remeshing technique includes adaptive refinement and coarsening procedures, based on geometrical and physical criteria. The proposed method has been integrated in a computational environment using the ABAQUS solver. Numerical examples show the efficiency of the proposed approach.
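The refinement half of such a procedure is typically built on triangle bisection. A minimal sketch of longest-edge bisection of a single triangle follows; it ignores mesh conformity with neighbours and the physical error criteria, and all names are illustrative:

```python
import math

def longest_edge_bisect(tri):
    """Split a triangle at the midpoint of its longest edge.

    tri is a tuple of three (x, y) vertices; returns the two child
    triangles produced by the bisection.  Illustrative only -- a real
    refinement procedure also propagates the split to the neighbour
    sharing the bisected edge to keep the mesh conforming.
    """
    a, b, c = tri
    def length(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    # Each entry is (edge endpoint, edge endpoint, opposite apex).
    edges = [(a, b, c), (b, c, a), (c, a, b)]
    p, q, r = max(edges, key=lambda e: length(e[0], e[1]))
    m = ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)  # edge midpoint
    return (p, m, r), (m, q, r)

children = longest_edge_bisect(((0.0, 0.0), (4.0, 0.0), (0.0, 3.0)))
```

Coarsening is essentially the inverse operation: a previously bisected pair of triangles is merged back when the local error indicator permits.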
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
Adaptation of adaptive optics systems.
NASA Astrophysics Data System (ADS)
Xin, Yu; Zhao, Dazun; Li, Chen
1997-10-01
In the paper, a concept of adaptation of adaptive optical systems (AAOS) is proposed. The AAOS has a certain real-time optimization ability against variation of the brightness of detected objects m, atmospheric coherence length r0, and atmospheric time constant τ by means of changing subaperture number and diameter, dynamic range, and the system's temporal response. The necessity of an AAOS using a Hartmann-Shack wavefront sensor and some technical approaches are discussed. A scheme and simulation of an AAOS with variable subaperture ability, using both hardware and software, are presented as an example of the system.
Adaptive Texture Synthesis for Large Scale City Modeling
NASA Astrophysics Data System (ADS)
Despine, G.; Colleu, T.
2015-02-01
Large scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but that requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue, which allows attaching physical information and semantic attributes and executing selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the façades.
40 CFR 60.648 - Optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure. 1
Code of Federal Regulations, 2014 CFR
2014-07-01
... per cubic feet of gas. (3) Starch solution. Rub into a thin paste about one teaspoonful of wheat starch with a little water; pour into about a pint of boiling water; stir; let cool and decant off clear solution. Make fresh solution every few days. (d) Procedure. Fill leveling bulb with starch solution. Raise...
40 CFR 60.648 - Optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure. 1
Code of Federal Regulations, 2013 CFR
2013-07-01
... per cubic feet of gas. (3) Starch solution. Rub into a thin paste about one teaspoonful of wheat starch with a little water; pour into about a pint of boiling water; stir; let cool and decant off clear solution. Make fresh solution every few days. (d) Procedure. Fill leveling bulb with starch solution. Raise...
Adaptive Identification by Systolic Arrays.
1987-12-01
Bibliography (excerpt): Anton, Howard, Elementary Linear Algebra, John Wiley & Sons, 1984; Cristi, Roberto, A Parallel Structure for Adaptive Pole Placement. Contents (excerpt): II. System Identification Methods; A. Linear System Modeling; B. Solution of Systems of Linear Equations; C. QR Decomposition; D. Recursive Least Squares; E. Block...
NASA Technical Reports Server (NTRS)
Hacker, Scott C. (Inventor); Dean, Richard J. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)
2007-01-01
An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.
NASA Astrophysics Data System (ADS)
Qureshi, S. U. H.
1985-09-01
Theoretical work which has been effective in improving data transmission by telephone and radio links using adaptive equalization (AE) techniques is reviewed. AE has been applied to reducing the temporal dispersion effects, such as intersymbol interference, caused by the channel accessed. Attention is given to the Nyquist telegraph transmission theory, least mean square error adaptive filtering and the theory and structure of linear receive and transmit filters for reducing error. Optimum nonlinear receiver structures are discussed in terms of optimality criteria as a function of error probability. A suboptimum receiver structure is explored in the form of a decision-feedback equalizer. Consideration is also given to quadrature amplitude modulation and transversal equalization for receivers.
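The least mean square adaptive filtering the review covers can be illustrated with the textbook tap-weight update. The channel model, step size, and tap count below are illustrative assumptions, not values from the review:

```python
import random

def lms_equalize(received, desired, n_taps=5, mu=0.01):
    """Least-mean-square adaptive FIR equalizer.

    `received` is the channel output, `desired` the known training
    symbols; each step nudges the tap weights along the negative
    error gradient.  A generic LMS sketch, not a full receiver.
    """
    w = [0.0] * n_taps
    out = []
    for n in range(len(received)):
        # Tap-delay line: the most recent n_taps received samples.
        x = [received[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))        # equalizer output
        e = desired[n] - y                               # training error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]   # LMS update
        out.append(y)
    return w, out

# Illustrative channel with intersymbol interference: r[n] = s[n] + 0.4 s[n-1].
rng = random.Random(0)
s = [rng.choice((-1.0, 1.0)) for _ in range(2000)]
r = [s[n] + 0.4 * (s[n - 1] if n > 0 else 0.0) for n in range(2000)]
w, out = lms_equalize(r, s, n_taps=4, mu=0.05)
```

After training, the tap weights approximate the truncated channel inverse (roughly 1, -0.4, 0.16, ... for this channel), which is exactly the intersymbol-interference cancellation the abstract discusses.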
Climate adaptation: Holistic thinking beyond technology
NASA Astrophysics Data System (ADS)
Boyd, Emily
2017-02-01
The countries most vulnerable to climate change impacts are among the poorest in the world. A recent evaluation of Least Developed Countries Fund projects suggests that adaptation efforts must move beyond technological solutions.
Evaluating Content Alignment in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Wise, Steven L.; Kingsbury, G. Gage; Webb, Norman L.
2015-01-01
The alignment between a test and the content domain it measures represents key evidence for the validation of test score inferences. Although procedures have been developed for evaluating the content alignment of linear tests, these procedures are not readily applicable to computerized adaptive tests (CATs), which require large item pools and do…
Evaluating Content Alignment in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Wise, Steven L.; Kingsbury, G. Gage; Webb, Norman L.
2015-01-01
The alignment between a test and the content domain it measures represents key evidence for the validation of test score inferences. Although procedures have been developed for evaluating the content alignment of linear tests, these procedures are not readily applicable to computerized adaptive tests (CATs), which require large item pools and do…
Adapting to life: ocean biogeochemical modelling and adaptive remeshing
NASA Astrophysics Data System (ADS)
Hill, J.; Popova, E. E.; Ham, D. A.; Piggott, M. D.; Srokosz, M.
2014-05-01
An outstanding problem in biogeochemical modelling of the ocean is that many of the key processes occur intermittently at small scales, such as the sub-mesoscale, that are not well represented in global ocean models. This is partly due to their failure to resolve sub-mesoscale phenomena, which play a significant role in vertical nutrient supply. Simply increasing the resolution of the models may be an inefficient computational solution to this problem. An approach based on recent advances in adaptive mesh computational techniques may offer an alternative. Here the first steps in such an approach are described, using the example of a simple vertical column (quasi-1-D) ocean biogeochemical model. We present a novel method of simulating ocean biogeochemical behaviour on a vertically adaptive computational mesh, where the mesh changes in response to the biogeochemical and physical state of the system throughout the simulation. We show that the model reproduces the general physical and biological behaviour at three ocean stations (India, Papa and Bermuda) as compared to a high-resolution fixed mesh simulation and to observations. The use of an adaptive mesh does not increase the computational error, but reduces the number of mesh elements by a factor of 2-3. Unlike previous work the adaptivity metric used is flexible and we show that capturing the physical behaviour of the model is paramount to achieving a reasonable solution. Adding biological quantities to the adaptivity metric further refines the solution. We then show the potential of this method in two case studies where we change the adaptivity metric used to determine the varying mesh sizes in order to capture the dynamics of chlorophyll at Bermuda and sinking detritus at Papa. We therefore demonstrate that adaptive meshes may provide a suitable numerical technique for simulating seasonal or transient biogeochemical behaviour at high vertical resolution whilst minimising the number of elements in the mesh.
Watson, B.L.; Aeby, I.
1980-08-26
An adaptive data compression device for compressing data having variable frequency content is described. It includes a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
Watson, Bobby L.; Aeby, Ian
1982-01-01
An adaptive data compression device for compressing data having variable frequency content, including a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
Adaptive sampling for noisy problems
Cantu-Paz, E
2004-03-26
The usual approach to deal with noise present in many real-world optimization problems is to take an arbitrary number of samples of the objective function and use the sample average as an estimate of the true objective value. The number of samples is typically chosen arbitrarily and remains constant for the entire optimization process. This paper studies an adaptive sampling technique that varies the number of samples based on the uncertainty of deciding between two individuals. Experiments demonstrate the effect of adaptive sampling on the final solution quality reached by a genetic algorithm and the computational cost required to find the solution. The results suggest that the adaptive technique can effectively eliminate the need to set the sample size a priori, but in many cases it requires high computational costs.
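The core idea, sampling two noisy individuals only until the comparison between them is statistically resolved, can be sketched as follows. The stopping rule here is a generic standard-error test with an illustrative threshold, not necessarily the paper's exact criterion:

```python
import random
import statistics

def adaptive_compare(f_a, f_b, min_n=5, max_n=100, z=2.0):
    """Decide which of two noisy objective functions is better
    (lower mean), drawing extra samples only while uncertain.

    f_a and f_b each return one noisy evaluation per call.  A sketch
    of the idea in the abstract; min_n, max_n, and the z-threshold
    are illustrative choices.
    """
    xs = [f_a() for _ in range(min_n)]
    ys = [f_b() for _ in range(min_n)]
    while len(xs) < max_n:
        ma, mb = statistics.fmean(xs), statistics.fmean(ys)
        se = (statistics.variance(xs) / len(xs)
              + statistics.variance(ys) / len(ys)) ** 0.5
        if abs(ma - mb) > z * se:   # difference clearly resolved: stop
            break
        xs.append(f_a())            # still uncertain: sample both again
        ys.append(f_b())
    winner = 'a' if statistics.fmean(xs) < statistics.fmean(ys) else 'b'
    return winner, len(xs) + len(ys)

random.seed(1)
winner, cost = adaptive_compare(lambda: 1.0 + random.gauss(0, 0.5),
                                lambda: 2.0 + random.gauss(0, 0.5))
```

Easy comparisons (well-separated means) stop near the minimum budget, while close, noisy comparisons consume more evaluations, which is the trade-off the paper's experiments quantify.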
NASA Astrophysics Data System (ADS)
Abdel-Khalik, Hany Samy
To obtain numerical solutions to demanding computational models, matrix methods are often employed to produce approximately equivalent discretized computational models that may be manipulated further by computers. The discretized models are described by matrix operators that are often rank-deficient, i.e. ill-posed. We introduce a novel set of matrix algorithms, denoted Efficient Subspace Methods (ESM), intended to approximate the action of very large, dense, and numerically rank-deficient matrix operators. We demonstrate that significant reductions in both computational and storage burdens can be attained for a typical BWR core simulator adaption problem without compromising the quality of the adaption. We demonstrate robust and high-fidelity adaption utilizing a virtual core, e.g. core simulator predicted observables with the virtual core either based upon a modified version of the core simulator whose input data are to be adjusted or an entirely different core simulator. Further, one specific application of ESM is demonstrated: the determination of the uncertainties of important core attributes, such as core reactivity and core power distribution, due to the available ENDF/B cross-section uncertainties. The use of ESM is, however, not limited to adaptive core simulation techniques; a wide range of engineering applications may easily benefit from the introduced algorithms, e.g. machine learning and information retrieval techniques depend heavily on finding low-rank approximations to large-scale matrices. In the appendix, we present a stand-alone paper that gives a generalized framework for ESM, including the mathematical theory behind the algorithms and several demonstrative applications that are central to many engineering arenas: (a) sensitivity analysis, (b) parameter estimation, and (c) uncertainty analysis.
We choose to do so to allow other engineers, applied mathematicians, and scientists from other scientific disciplines to take direct advantage of
Bullock, Jonathan S.; Harper, William L.; Peck, Charles G.
1976-06-22
This invention is directed to an aqueous halogen-free electromarking solution which possesses the capacity for marking a broad spectrum of metals and alloys selected from different classes. The aqueous solution comprises basically the nitrate salt of an amphoteric metal, a chelating agent, and a corrosion-inhibiting agent.
Adaptive sampling in behavioral surveys.
Thompson, S K
1997-01-01
Studies of populations such as drug users encounter difficulties because the members of the populations are rare, hidden, or hard to reach. Conventionally designed large-scale surveys detect relatively few members of the populations so that estimates of population characteristics have high uncertainty. Ethnographic studies, on the other hand, reach suitable numbers of individuals only through the use of link-tracing, chain referral, or snowball sampling procedures that often leave the investigators unable to make inferences from their sample to the hidden population as a whole. In adaptive sampling, the procedure for selecting people or other units to be in the sample depends on variables of interest observed during the survey, so the design adapts to the population as encountered. For example, when self-reported drug use is found among members of the sample, sampling effort may be increased in nearby areas. Types of adaptive sampling designs include ordinary sequential sampling, adaptive allocation in stratified sampling, adaptive cluster sampling, and optimal model-based designs. Graph sampling refers to situations with nodes (for example, people) connected by edges (such as social links or geographic proximity). An initial sample of nodes or edges is selected and edges are subsequently followed to bring other nodes into the sample. Graph sampling designs include network sampling, snowball sampling, link-tracing, chain referral, and adaptive cluster sampling. A graph sampling design is adaptive if the decision to include linked nodes depends on variables of interest observed on nodes already in the sample. Adjustment methods for nonsampling errors such as imperfect detection of drug users in the sample apply to adaptive as well as conventional designs.
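A minimal sketch of adaptive cluster sampling on a grid of counts follows, assuming a simple "follow the four neighbours of any cell meeting the condition" rule; the function and parameter names are illustrative, and real designs also need the matching unbiased estimators:

```python
import random

def adaptive_cluster_sample(grid, n_init, condition=lambda y: y > 0, seed=0):
    """Adaptive cluster sampling on a rectangular grid of counts.

    Start from a simple random sample of cells; whenever a sampled
    cell meets `condition` (e.g. drug use reported there), its four
    neighbours are added to the sample, and so on recursively.
    """
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    frontier = rng.sample(cells, n_init)   # initial conventional sample
    sampled = set()
    while frontier:
        r, c = frontier.pop()
        if (r, c) in sampled:
            continue
        sampled.add((r, c))
        if condition(grid[r][c]):          # adapt: follow the neighbours
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    frontier.append((nr, nc))
    return sampled

grid = [[0, 0, 0, 0],
        [0, 3, 2, 0],
        [0, 0, 0, 0]]
picked = adaptive_cluster_sample(grid, n_init=2)
```

The design-dependent inclusion probabilities this creates are why the estimators for adaptive designs differ from those of conventional surveys.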
Adaptive Process Control in Rubber Industry.
Brause, Rüdiger W; Pietruschka, Ulf
1998-01-01
This paper describes the problems and an adaptive solution for process control in the rubber industry. We show that the human and economic benefits of an adaptive solution for the approximation of process parameters are very attractive. The modeling of the industrial problem is done by means of artificial neural networks. For the example of the extrusion of a rubber profile in tire production, our method shows good results even using only a few training samples.
Structured programming: Principles, notation, procedure
NASA Technical Reports Server (NTRS)
JOST
1978-01-01
Structured programs are best represented using a notation which gives a clear representation of the block encapsulation. In this report, a set of symbols which can be used until binding directives are republished is suggested. Structured programming also allows a new method of procedure for design and testing. Programs can be designed top down, that is, they can start at the highest program plane and can penetrate to the lowest plane by step-wise refinements. The testing methodology also is adapted to this procedure. First, the highest program plane is tested, and the programs which are not yet finished in the next lower plane are represented by so-called dummies. They are gradually replaced by the real programs.
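The top-down procedure with dummies can be illustrated with stub functions: the highest plane is written and tested first, with the unfinished lower planes stood in for by placeholders. The report example below is invented for illustration:

```python
def produce_report(records):
    """Top plane: the overall program, designed and tested first."""
    cleaned = validate(records)
    totals = summarize(cleaned)
    return format_report(totals)

def validate(records):
    """Next plane, already refined into real code."""
    return [r for r in records if r >= 0]

def summarize(records):
    # Dummy ("stub"): stands in for the unfinished lower plane so the
    # upper planes can be tested now; replaced by the real program later.
    return {"count": len(records), "total": sum(records)}

def format_report(totals):
    # Another dummy: fixed-format placeholder output.
    return f"{totals['count']} records, total {totals['total']}"

report = produce_report([3, -1, 4])
```

Because the top plane runs against the dummies, its control structure can be verified before any lower plane is refined, which is exactly the stepwise testing discipline the report describes.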
Molecular aggregates in cryogenic solutions
NASA Astrophysics Data System (ADS)
Schauer, M. W.; Lee, J.; Bernstein, E. R.
1981-07-01
In this report, experimental procedures and results concerning the study of aggregates are presented. Absorption spectra of solutions of the following solutes and solvents have been studied: pyrazine/C3H8; benzene/NF3, C3H8, N2, CO, CF4; and osmium tetroxide/NF3, CH4, C3H8. In order to obtain some qualitative estimation of aggregate size, light scattering experiments were also performed on solutions of pyrazine/C3H8, benzene/CF4, benzene/NF3, and benzene/C3H8. The nature of these non-equilibrium molecular clusters in solution will be addressed.
Seitz, M.G.
1982-01-01
Reviewed in this statement are methods of preparing solutions to be used in laboratory experiments to examine technical issues related to the safe disposal of nuclear waste from power generation. Each approach currently used to prepare solutions has advantages and any one approach may be preferred over the others in particular situations, depending upon the goals of the experimental program. These advantages are highlighted herein for three approaches to solution preparation that are currently used most in studies of nuclear waste disposal. Discussion of the disadvantages of each approach is presented to help a user select a preparation method for his particular studies. Also presented in this statement are general observations regarding solution preparation. These observations are used as examples of the types of concerns that need to be addressed regarding solution preparation. As shown by these examples, prior to experimentation or chemical analyses, laboratory techniques based on scientific knowledge of solutions can be applied to solutions, often resulting in great improvement in the usefulness of results.
Strategies: Office Procedures with Communications Math.
ERIC Educational Resources Information Center
Wyoming Univ., Laramie. Coll. of Education.
This booklet contains 30 one-page strategies for teaching mathematical skills needed for office procedures. All the strategies are suitable for or can be adapted for special needs students. Each strategy is a classroom activity and is matched with the skill that it develops and its technology/content area (communications and/or mathematics). Some…
Cortazar, E; Usobiaga, A; Fernández, L A; de, Diego A; Madariaga, J M
2002-02-01
A MATHEMATICA package, 'CONDU.M', has been developed to find the polynomial in concentration and temperature which best fits conductimetric data of the type (kappa, c, T) or (kappa, c1, c2, T) of electrolyte solutions (kappa: specific conductivity; ci: concentration of component i; T: temperature). In addition, an interface, 'TKONDU', has been written in the TCL/Tk language to facilitate the use of CONDU.M by an operator not familiarised with MATHEMATICA. All this software is available online (UPV/EHU, 2001). 'CONDU.M' has been programmed to: (i) select the optimum grade in c1 and/or c2; (ii) compare models with linear or quadratic terms in temperature; (iii) calculate the set of adjustable parameters which best fits the data; (iv) simplify the model by elimination of 'a priori' included adjustable parameters which after the regression analysis result in low statistical significance; (v) facilitate the location of outlier data by graphical analysis of the residuals; and (vi) provide quantitative statistical information on the quality of the fit, allowing a critical comparison among different models. Due to the multiple options offered, the software allows testing different conductivity models in a short time, even if a large set of conductivity data is being considered simultaneously. The user can then choose the best model, making use of the graphical and statistical information provided in the output file. Although the program was initially designed to treat conductimetric data, it can also be applied to processing data with a similar structure, e.g. (P, c, T) or (P, c1, c2, T), where P is any appropriate transport, physical or thermodynamic property.
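The kind of polynomial fit that CONDU.M automates can be sketched with plain normal-equations least squares. This is a generic illustration, not the package's actual algorithm; the model form (polynomial in c plus a linear c·T cross term) is an assumption:

```python
def fit_kappa(data, c_deg=2, t_linear=True):
    """Least-squares fit of kappa(c, T) data of type (kappa, c, T).

    Model: kappa = sum_j a_j * c**j  +  b * c * T   (if t_linear).
    Solves the normal equations A^T A x = A^T y by Gaussian
    elimination with partial pivoting; pure-stdlib sketch.
    """
    def basis(c, T):
        row = [c ** j for j in range(c_deg + 1)]
        if t_linear:
            row.append(c * T)
        return row

    rows = [basis(c, T) for _, c, T in data]
    y = [k for k, _, _ in data]
    n = len(rows[0])
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)]
           for i in range(n)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    for i in range(n):                       # forward elimination
        p = max(range(i, n), key=lambda k: abs(ata[k][i]))
        ata[i], ata[p] = ata[p], ata[i]
        aty[i], aty[p] = aty[p], aty[i]
        for k in range(i + 1, n):
            f = ata[k][i] / ata[i][i]
            ata[k] = [a - f * b for a, b in zip(ata[k], ata[i])]
            aty[k] -= f * aty[i]
    coef = [0.0] * n
    for i in reversed(range(n)):             # back substitution
        coef[i] = (aty[i] - sum(ata[i][j] * coef[j]
                                for j in range(i + 1, n))) / ata[i][i]
    return coef

# Synthetic data from known coefficients [1.0, 2.0, 0.5, 0.01].
data = [(1.0 + 2.0 * c + 0.5 * c * c + 0.01 * c * T, c, T)
        for c in (0.5, 1.0, 2.0, 3.0) for T in (10.0, 25.0, 40.0)]
coef = fit_kappa(data)
```

Model selection in the package (optimum polynomial grade, dropping insignificant parameters) would sit on top of a solver like this, comparing fits across candidate basis sets.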
NASA Astrophysics Data System (ADS)
Robock, A.
2010-12-01
Geoengineering by carbon capture and storage (CCS) or solar radiation management (SRM) has been suggested as a possible solution to global warming. However, it is clear that mitigation should be the main response of society, quickly reducing emissions of greenhouse gases. While there is no concerted mitigation effort yet, even if the world moves quickly to reduce emissions, the gases that are already in the atmosphere will continue to warm the planet. CCS, if a system that is efficacious, safe, and not costly could be developed, would slowly remove CO2 from the atmosphere, but this will have only a gradual effect on concentrations. SRM, if a system could be developed to produce stratospheric aerosols or brighten marine stratocumulus clouds, could be quickly effective in cooling, but could also have so many negative side effects that it would be better not to do it at all. This means that, in spite of a concerted effort at mitigation and to develop CCS, there will be a certain amount of global warming in our future. Because CCS geoengineering will be too slow and SRM geoengineering is not a practical or safe solution, adaptation will be needed. Our current understanding of geoengineering makes it even more important to focus on adaptation responses to global warming.
Implementation and Measurement Efficiency of Multidimensional Computerized Adaptive Testing
ERIC Educational Resources Information Center
Wang, Wen-Chung; Chen, Po-Hsi
2004-01-01
Multidimensional adaptive testing (MAT) procedures are proposed for the measurement of several latent traits by a single examination. Bayesian latent trait estimation and adaptive item selection are derived. Simulations were conducted to compare the measurement efficiency of MAT with those of unidimensional adaptive testing and random…
Analyzing Hedges in Verbal Communication: An Adaptation-Based Approach
ERIC Educational Resources Information Center
Wang, Yuling
2010-01-01
Based on Adaptation Theory, the article analyzes the production process of hedges. The procedure consists of the continuous making of choices in linguistic forms and communicative strategies. These choices are made just for adaptation to the contextual correlates. Besides, the adaptation process is dynamic, intentional and bidirectional.
Countermeasures to Enhance Sensorimotor Adaptability
NASA Technical Reports Server (NTRS)
Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Brady, R. A.; Batson, C. C.; Miller, C. A.; Cohen, H. S.
2011-01-01
adaptability. These results indicate that SA training techniques can be added to existing treadmill exercise equipment and procedures to produce a single integrated countermeasure system to improve performance of astro/cosmonauts during prolonged exploratory space missions.
The Henry problem: New semianalytical solution for velocity-dependent dispersion
NASA Astrophysics Data System (ADS)
Fahs, Marwan; Ataie-Ashtiani, Behzad; Younes, Anis; Simmons, Craig T.; Ackerer, Philippe
2016-09-01
A new semianalytical solution is developed for the velocity-dependent dispersion Henry problem using the Fourier-Galerkin method (FG). The integral arising from the velocity-dependent dispersion term is evaluated numerically using an accurate technique based on an adaptive scheme. Numerical integration and nonlinear dependence of the dispersion on the velocity render the semianalytical solution impractical. To alleviate this issue and to obtain the solution at affordable computational cost, a robust implementation for solving the nonlinear system arising from the FG method is developed. It allows for reducing the number of attempts of the iterative procedure and the computational cost per iteration. The accuracy of the semianalytical solution is assessed in terms of the truncation orders of the Fourier series. An appropriate algorithm based on the sensitivity of the solution to the number of Fourier modes is used to obtain the required truncation levels. The resulting Fourier series are used to analytically evaluate the position of the principal isochlors and metrics characterizing the saltwater wedge. They are also used to calculate longitudinal and transverse dispersive fluxes and to provide physical insight into the dispersion mechanisms within the mixing zone. The developed semianalytical solutions are compared against numerical solutions obtained using an in-house code based on variant techniques for both space and time discretization. The comparison provides better confidence in the accuracy of both numerical and semianalytical results. It shows that the new solutions are highly sensitive to the approximation techniques used in the numerical code, which highlights their benefits for code benchmarking.
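The adaptive numerical integration the authors mention can be illustrated with generic adaptive Simpson quadrature; this is not their code, just the standard scheme in which the interval is bisected wherever the local Richardson error estimate exceeds the tolerance:

```python
def adaptive_simpson(f, a, b, tol=1e-8):
    """Adaptive Simpson quadrature on [a, b].

    Subintervals are bisected recursively wherever the local error
    estimate exceeds the (subdivided) tolerance, so effort
    concentrates where the integrand varies fastest.
    """
    def simpson(x0, x2):
        x1 = 0.5 * (x0 + x2)
        return (x2 - x0) / 6.0 * (f(x0) + 4.0 * f(x1) + f(x2))

    def recurse(x0, x2, whole, eps):
        x1 = 0.5 * (x0 + x2)
        left, right = simpson(x0, x1), simpson(x1, x2)
        if abs(left + right - whole) <= 15.0 * eps:  # Richardson bound
            return left + right + (left + right - whole) / 15.0
        return (recurse(x0, x1, left, eps / 2.0)
                + recurse(x1, x2, right, eps / 2.0))

    return recurse(a, b, simpson(a, b), tol)

# Smooth check: integral of 4/(1+x^2) over [0, 1] equals pi.
pi_estimate = adaptive_simpson(lambda x: 4.0 / (1.0 + x * x), 0.0, 1.0)
```

For the velocity-dependent dispersion term, the payoff of adaptivity is that the rapidly varying integrand near the mixing zone is resolved without wasting function evaluations elsewhere.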
Designing Flightdeck Procedures
NASA Technical Reports Server (NTRS)
Barshi, Immanuel; Mauro, Robert; Degani, Asaf; Loukopoulou, Loukia
2016-01-01
The primary goal of this document is to provide guidance on how to design, implement, and evaluate flight deck procedures. It provides a process for developing procedures that meet clear and specific requirements. This document provides a brief overview of: 1) the requirements for procedures, 2) a process for the design of procedures, and 3) a process for the design of checklists. The brief overview is followed by expanded sections that elaborate on the above steps and provide details for the proper design, implementation, and evaluation of good flight deck procedures and checklists.
Computerized procedures system
Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.
2010-10-12
An online, data-driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data, and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users, and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges, and revisions are version controlled. The procedures run on a server that is platform-independent of the user workstations it interfaces with, and the user interface supports diverse procedural views.
Adaptive evolution of molecular phenotypes
NASA Astrophysics Data System (ADS)
Held, Torsten; Nourmohammad, Armita; Lässig, Michael
2014-09-01
Molecular phenotypes link genomic information with organismic functions, fitness, and evolution. Quantitative traits are complex phenotypes that depend on multiple genomic loci. In this paper, we study the adaptive evolution of a quantitative trait under time-dependent selection, which arises from environmental changes or through fitness interactions with other co-evolving phenotypes. We analyze a model of trait evolution under mutations and genetic drift in a single-peak fitness seascape. The fitness peak performs a constrained random walk in the trait amplitude, which determines the time-dependent trait optimum in a given population. We derive analytical expressions for the distribution of the time-dependent trait divergence between populations and of the trait diversity within populations. Based on this solution, we develop a method to infer adaptive evolution of quantitative traits. Specifically, we show that the ratio of the average trait divergence and the diversity is a universal function of evolutionary time, which predicts the stabilizing strength and the driving rate of the fitness seascape. From an information-theoretic point of view, this function measures the macro-evolutionary entropy in a population ensemble, which determines the predictability of the evolutionary process. Our solution also quantifies two key characteristics of adapting populations: the cumulative fitness flux, which measures the total amount of adaptation, and the adaptive load, which is the fitness cost due to a population's lag behind the fitness peak.
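The single-peak seascape picture above can be caricatured in a few lines: replicate population trait means are pulled toward a common, randomly walking optimum while being kicked by drift, and the cross-replicate trait variance saturates at a level set by the stabilizing strength. This is a hedged toy simulation; the discrete update rule and all parameter values are illustrative assumptions, not the paper's model.

```python
import random

def simulate(n_pop=200, steps=2000, kappa=0.05, drive=0.02, noise=0.05, seed=1):
    """Trait means of n_pop replicates tracking one moving fitness peak."""
    rng = random.Random(seed)
    traits = [0.0] * n_pop
    optimum = 0.0
    for _ in range(steps):
        optimum += rng.gauss(0.0, drive)        # seascape: peak random walk
        traits = [x + kappa * (optimum - x)     # stabilizing pull to peak
                  + rng.gauss(0.0, noise)       # genetic drift
                  for x in traits]
    mean = sum(traits) / n_pop
    variance = sum((x - mean) ** 2 for x in traits) / n_pop
    return variance

spread = simulate()
```

With these numbers the stationary cross-replicate variance is roughly noise^2 / (1 - (1 - kappa)^2), i.e. stronger stabilizing selection (larger kappa) shrinks the equilibrium trait spread.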
Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G
2016-01-01
This article presents an Evolution Strategy (ES)-based algorithm, designed to self-adapt its mutation operators, guiding the search through the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters to generate its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different NP-hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability to combine move operations from distinct neighborhood structures during the optimization. The results gathered and reported in this article represent collective evidence of the performance of the method on challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability to adapt the strength of the mutation disturbance over the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework to other combinatorial optimization problems.
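Two ingredients of the approach above, a GRASP-style greedy-randomized start and self-adapted mutation strength, can be sketched on a toy continuous problem. This is a minimal (1+1)-ES with log-normal step-size self-adaptation, not the authors' framework; the sphere objective, bounds, and parameters are illustrative assumptions.

```python
import math
import random

def sphere(x):
    return sum(v * v for v in x)

def grasp_init(dim, rng, alpha=0.3, candidates=20):
    # Restricted candidate list: sample points, then pick at random from
    # the best alpha-fraction (greedy + randomized, GRASP-like).
    pts = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(candidates)]
    pts.sort(key=sphere)
    k = max(1, int(alpha * candidates))
    return rng.choice(pts[:k])

def es_one_plus_one(dim=5, iters=3000, seed=7):
    rng = random.Random(seed)
    x = grasp_init(dim, rng)
    sigma = 1.0
    tau = 1.0 / math.sqrt(dim)          # learning rate for the step size
    fx = sphere(x)
    for _ in range(iters):
        s = sigma * math.exp(tau * rng.gauss(0, 1))   # self-adapted strength
        y = [v + s * rng.gauss(0, 1) for v in x]
        fy = sphere(y)
        if fy <= fx:                    # (1+1) selection: keep improvements
            x, fx, sigma = y, fy, s     # the step size evolves with the point
    return fx

best = es_one_plus_one()
```

The step size is inherited only when its offspring succeeds, which is the essence of self-adapting the mutation disturbance over generations.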
Parallel Adaptive Mesh Refinement
Diachin, L; Hornung, R; Plassmann, P; Wissink, A
2005-03-04
As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior, where highly detailed salient features or large gradients appear in certain regions separated by much larger regions where the solution is smooth. Examples include chemically reacting flows with radiative heat transfer, high-Reynolds-number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation: a fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of the fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally fine grid. Figure 1.1 shows two example computations using AMR: on the left is a structured mesh calculation of an impulsively sheared contact surface, and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35].
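The flag-and-refine cycle described above can be shown in one dimension: bisect any cell where the solution jump across the cell exceeds a threshold, and leave smooth regions coarse. The tanh "flame-front" profile, tolerance, and level cap are illustrative assumptions, not taken from the report.

```python
import math

def refine(xs, f, tol=0.2, max_level=8):
    """Repeatedly bisect intervals whose endpoint jump |f(b)-f(a)| > tol."""
    for _ in range(max_level):
        new_xs, changed = [xs[0]], False
        for a, b in zip(xs, xs[1:]):
            if abs(f(b) - f(a)) > tol:
                new_xs.append(0.5 * (a + b))   # bisect the steep cell
                changed = True
            new_xs.append(b)
        xs = new_xs
        if not changed:                        # error indicator satisfied
            break
    return xs

# A sharp, front-like profile: tanh layer centered at x = 0.5.
grid = refine([i / 10.0 for i in range(11)],
              lambda x: math.tanh(60.0 * (x - 0.5)))
```

The resulting grid keeps the original coarse spacing away from the layer and clusters points tightly around x = 0.5, which is exactly the "fine grid only where needed" economy AMR buys.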
NASA Astrophysics Data System (ADS)
Lee, Go-Eun; Kim, Il-Ho; Lim, Young Soo; Seo, Won-Seon; Choi, Byeong-Jun; Hwang, Chang-Won
2014-06-01
Since Bi2Te3 and Bi2Se3 have the same crystal structure, they form a homogeneous solid solution. Therefore, the thermal conductivity of the solid solution can be reduced by phonon scattering. The thermoelectric figure of merit can be improved by controlling the carrier concentration through doping. In this study, Bi2Te2.85Se0.15:Dm (D: dopants such as I, Cu, Ag, Ni, Zn) solid solutions were prepared by encapsulated melting and hot pressing. All specimens exhibited n-type conduction in the measured temperature range (323 K to 523 K), and their electrical conductivities decreased slightly with increasing temperature. The undoped solid solution showed a carrier concentration of 7.37 × 10^19 cm-3, a power factor of 2.1 mW m-1 K-1, and a figure of merit of 0.56 at 323 K. The figure of merit (ZT) was improved by the increased power factor upon I, Cu, and Ag doping, and maximum ZT values of 0.76 at 323 K for Bi2Te2.85Se0.15:Cu0.01 and 0.90 at 423 K for Bi2Te2.85Se0.15:I0.005 were obtained. However, the thermoelectric properties of the Ni- and Zn-doped solid solutions were not enhanced.
Adaptive implicit-explicit and parallel element-by-element iteration schemes
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.
1989-01-01
Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.
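The GEBE idea of statically arranging elements into groups with no inter-element coupling can be sketched as a greedy coloring: an element joins the first group whose members share none of its nodes, so every group can be processed in parallel without write conflicts. The triangle-strip mesh below is an illustrative assumption, not a mesh from the paper.

```python
def group_elements(elements):
    """elements: list of node-index tuples. Returns node-disjoint groups."""
    groups = []                 # each entry: (nodes_used_set, member_list)
    for elem in elements:
        nodes = set(elem)
        for used, members in groups:
            if used.isdisjoint(nodes):   # no inter-element coupling here
                used |= nodes
                members.append(elem)
                break
        else:                            # conflicts everywhere: new group
            groups.append((set(nodes), [elem]))
    return [members for _, members in groups]

# A strip of triangles sharing edges: (0,1,2), (1,2,3), (2,3,4), ...
strip = [(i, i + 1, i + 2) for i in range(8)]
groups = group_elements(strip)
```

Within a group no two elements touch a common node, so their element-level contributions can be applied concurrently; the number of groups, not the number of elements, bounds the sequential depth.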
Time-dependent grid adaptation for meshes of triangles and tetrahedra
NASA Technical Reports Server (NTRS)
Rausch, Russ D.
1993-01-01
This paper presents, in viewgraph form, a method of optimizing grid generation for unsteady CFD flow calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich the mesh in regions of relatively large error and to locally coarsen it in regions of relatively small error. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high-aspect-ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown but in general requires user supervision for a more efficient solution.
Cardiac Procedures and Surgeries
... Angioplasty and Coronary Artery Balloon Dilation. (View an animation of angioplasty) What the Procedure Does: Special tubing ...
NASA Astrophysics Data System (ADS)
Colby, Ralph H.
2008-03-01
Pierre-Gilles de Gennes once described polyelectrolytes as the "least understood form of condensed matter". In this talk, I will describe the state of the polyelectrolyte field before and after de Gennes' seminal contributions published 1976-1980. De Gennes clearly explained why electrostatic interactions only stretch the polyelectrolyte chains on intermediate scales in semidilute solution (between the electrostatic blob size and the correlation length) and why the scattering function has a peak corresponding to the correlation length (the distance to the next chain). Despite many other ideas being suggested since then, the simple de Gennes scaling picture of polyelectrolyte conformation in solution has stood the test of time. How that model is used today, including consequences for dynamics in polyelectrolyte solutions, and what questions remain, will clarify the importance of de Gennes' ideas.
Modified Sham Feeding of Sweet Solutions in Women with and without Bulimia Nervosa
Klein, DA; Schebendach, JE; Brown, AJ; Smith, GP; Walsh, BT
2009-01-01
Although it is possible that binge eating in humans is due to increased responsiveness of orosensory excitatory controls of eating, there is no direct evidence for this because food ingested during a test meal stimulates both orosensory excitatory and postingestive inhibitory controls. To overcome this problem, we adapted the modified sham feeding technique (MSF) to measure the orosensory excitatory control of intake of a series of sweetened solutions. Previously published data showed the feasibility of a “sip-and-spit” procedure in nine healthy control women using solutions flavored with cherry Kool Aid® and sweetened with sucrose (0-20%) [1]. The current study extended this technique to measure the intake of artificially sweetened solutions in women with bulimia nervosa (BN) and in women with no history of eating disorders. Ten healthy women and 11 women with BN were randomly presented with cherry Kool Aid® solutions sweetened with five concentrations of aspartame (0, 0.01, 0.03, 0.08 and 0.28%) in a closed opaque container fitted with a straw. They were instructed to sip as much as they wanted of the solution during 1-minute trials and to spit the fluid out into another opaque container. Across all subjects, presence of sweetener increased intake (p<0.001). Women with BN sipped 40.5-53.1% more of all solutions than controls (p=0.03 for total intake across all solutions). Self-report ratings of liking, wanting and sweetness of solutions did not differ between groups. These results support the feasibility of a MSF procedure using artificially sweetened solutions, and the hypothesis that the orosensory stimulation of MSF provokes larger intake in women with BN than controls. PMID:18773914
Control of microorganisms in flowing nutrient solutions
NASA Astrophysics Data System (ADS)
Evans, R. D.
1994-11-01
Controlling microorganisms in flowing nutrient solutions involves different techniques when targeting the nutrient solution, hardware surfaces in contact with the solution, or the active root zone. This review presents basic principles and applications of a number of treatment techniques, including disinfection by chemicals, ultrafiltration, ultrasonics, and heat treatment, with emphasis on UV irradiation and ozone treatment. Procedures for control of specific pathogens by nutrient solution conditioning also are reviewed.
Limits of adaptation, residual interferences
NASA Technical Reports Server (NTRS)
Mokry, Miroslav (Editor); Erickson, J. C., Jr.; Goodyer, Michael J.; Mignosi, Andre; Russo, Giuseppe P.; Smith, J.; Wedemeyer, Erich H.; Newman, Perry A.
1990-01-01
Methods of determining linear residual wall interference appear to be well established theoretically; however they need to be validated, for example by comparative studies of test data on the same model in different adaptive-wall wind tunnels as well as in passive, ventilated-wall tunnels. The GARTEur CAST 7 and the CAST 10/DOA 2 investigations are excellent examples of such comparative studies. Results to date in both one-variable and two-variable methods for nonlinear wall interference indicate that a great deal more research and validation are required. The status in 2D flow is advanced over that in 3D flow as is the case generally with adaptive-wall development. Nevertheless, it is now well established that for transonic testing with extensive supercritical flow present, significant wall interference is likely to exist in conventional ventilated test sections. Consequently, residual correction procedures require further development hand-in-hand with further adaptive-wall development.
Conceptualizing, Investigating, and Enhancing Adaptive Expertise in Elementary Mathematics Education
ERIC Educational Resources Information Center
Verschaffel, Lieven; Luwel, Koen; Torbeyns, Joke; Van Dooren, Wim
2009-01-01
Some years ago, Hatano differentiated between routine and adaptive expertise and made a strong plea for the development and implementation of learning environments that aim at the latter type of expertise and not just the former. In this contribution we reflect on one aspect of adaptivity, namely the adaptive use of solution strategies in…
Organic compatible solutes of halotolerant and halophilic microorganisms
Roberts, Mary F
2005-01-01
Microorganisms that adapt to moderate and high salt environments use a variety of solutes, organic and inorganic, to counter external osmotic pressure. The organic solutes can be zwitterionic, noncharged, or anionic (along with an inorganic cation such as K+). The range of solutes, their diverse biosynthetic pathways, and the physical properties of the solutes that affect molecular stability are reviewed. PMID:16176595
Adapting agriculture to climate change.
Howden, S Mark; Soussana, Jean-François; Tubiello, Francesco N; Chhetri, Netra; Dunlop, Michael; Meinke, Holger
2007-12-11
The strong trends in climate change already evident, the likelihood of further changes occurring, and the increasing scale of potential climate impacts give urgency to addressing agricultural adaptation more coherently. There are many potential adaptation options available for marginal change of existing agricultural systems, often variations of existing climate risk management. We show that implementation of these options is likely to have substantial benefits under moderate climate change for some cropping systems. However, there are limits to their effectiveness under more severe climate changes. Hence, more systemic changes in resource allocation need to be considered, such as targeted diversification of production systems and livelihoods. We argue that achieving increased adaptation action will necessitate integration of climate change-related issues with other risk factors, such as climate variability and market risk, and with other policy domains, such as sustainable development. Dealing with the many barriers to effective adaptation will require a comprehensive and dynamic policy approach covering a range of scales and issues, for example, from the understanding by farmers of change in risk profiles to the establishment of efficient markets that facilitate response strategies. Science, too, has to adapt. Multidisciplinary problems require multidisciplinary solutions, i.e., a focus on integrated rather than disciplinary science and a strengthening of the interface with decision makers. A crucial component of this approach is the implementation of adaptation assessment frameworks that are relevant, robust, and easily operated by all stakeholders, practitioners, policymakers, and scientists.
PMID:18077402
A Multiple Ranking Procedure Adapted to Discrete-Event Simulation.
1983-12-01
Properties of Some Bayesian Scoring Procedures for Computerized Adaptive Tests
1987-08-01
Krawczyk, Gerhard Erich; Miller, Kevin Michael
2011-07-26
There is provided a method of making a polymer solution comprising polymerizing one or more monomer in a solvent, wherein said monomer comprises one or more ethylenically unsaturated monomer that is a multi-functional Michael donor, and wherein said solvent comprises 40% or more by weight, based on the weight of said solvent, one or more multi-functional Michael donor.
NASA Astrophysics Data System (ADS)
Eliyan, Faysal Fayez; Alfantazi, Akram
2014-11-01
This paper presents an electrochemical study of the corrosion behavior of API-X100 steel, heat-treated to have microstructures similar to those of the heat-affected zones (HAZs) of pipeline welding, in CO2-saturated bicarbonate solutions. The corrosion reactions, on the surface and through the passive films, are simulated by cyclic voltammetry. The interrelation between bicarbonate concentration and CO2 hydration is analyzed during the filming process at the open-circuit potentials. In dilute bicarbonate solutions, H2CO3 dominates the cathodic reduction and the passive films form slowly. In the concentrated solutions, bicarbonate catalyzes both the anodic and cathodic reactions only initially, after which it drives a fast-forming, thick passivation that inhibits the underlying dissolution and impedes the cathodic reduction. The significance of the substrate is as critical as that of passivation in controlling the course of the corrosion reactions in the dilute solutions. For fast-cooled (heat-treated) HAZs, the substrate's metallurgical significance becomes more comparable to that of slower-cooled HAZs as the bicarbonate concentration increases.
ERIC Educational Resources Information Center
Starkman, Neal
2007-01-01
Poor classroom acoustics are impairing students' hearing and their ability to learn. However, technology has come up with a solution: tools that focus voices in a way that minimizes intrusive ambient noise and gets to the intended receiver--not merely amplifying the sound, but also clarifying and directing it. One provider of classroom audio…
A grid generation and flow solution method for the Euler equations on unstructured grids
Anderson, W. K.
1994-01-01
A grid generation and flow solution algorithm for the Euler equations on unstructured grids is presented. The grid generation scheme utilizes Delaunay triangulation, self-generates the field points for the mesh based on cell aspect ratios, and allows for clustering near solid surfaces. The flow solution method is an implicit algorithm in which the linear set of equations arising at each time step is solved using a Gauss-Seidel procedure which is completely vectorizable. In addition, a study is conducted to examine the number of subiterations required for good convergence of the overall algorithm. Grid generation results are shown in two dimensions for a NACA 0012 airfoil as well as a two-element configuration. Flow solution results are shown for two-dimensional flow over the NACA 0012 airfoil and for a two-element configuration in which the solution has been obtained through an adaptation procedure and compared to an exact solution. Preliminary three-dimensional results are also shown in which subsonic flow over a business jet is computed. 31 refs., 30 figs.
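The implicit step described above solves a linear system with Gauss-Seidel sweeps. A minimal dense sketch of that inner iteration follows; the small diagonally dominant matrix is an illustrative stand-in, not a flow Jacobian from the paper.

```python
def gauss_seidel(A, b, sweeps=50):
    """Plain Gauss-Seidel: each unknown is updated with the freshest values."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # in-place update, unlike Jacobi
    return x

A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = gauss_seidel(A, b)   # exact solution of this system is [1, 2, 3]
```

For diagonally dominant systems like this one each sweep contracts the error, which is why a modest, fixed number of subiterations per time step (the study mentioned in the abstract) can suffice.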
Grid generation and flow solution method for Euler equations on unstructured grids
NASA Technical Reports Server (NTRS)
Anderson, W. Kyle
1992-01-01
A grid generation and flow solution algorithm for the Euler equations on unstructured grids is presented. The grid generation scheme, which uses Delaunay triangulation, generates the field points for the mesh based on cell aspect ratios and allows clustering of grid points near solid surfaces. The flow solution method is an implicit algorithm in which the linear set of equations arising at each time step is solved using a Gauss-Seidel procedure that is completely vectorizable. Also, a study is conducted to examine the number of subiterations required for good convergence of the overall algorithm. Grid generation results are shown in two dimensions for an NACA 0012 airfoil as well as a two element configuration. Flow solution results are shown for a two dimensional flow over the NACA 0012 airfoil and for a two element configuration in which the solution was obtained through an adaptation procedure and compared with an exact solution. Preliminary three dimensional results also are shown in which the subsonic flow over a business jet is computed.
A grid generation and flow solution method for the Euler equations on unstructured grids
NASA Technical Reports Server (NTRS)
Anderson, W. Kyle
1994-01-01
A grid generation and flow solution algorithm for the Euler equations on unstructured grids is presented. The grid generation scheme utilizes Delaunay triangulation and self-generates the field points for the mesh based on cell aspect ratios and allows for clustering near solid surfaces. The flow solution method is an implicit algorithm in which the linear set of equations arising at each time step is solved using a Gauss Seidel procedure which is completely vectorizable. In addition, a study is conducted to examine the number of subiterations required for good convergence of the overall algorithm. Grid generation results are shown in two dimensions for a National Advisory Committee for Aeronautics (NACA) 0012 airfoil as well as a two-element configuration. Flow solution results are shown for two-dimensional flow over the NACA 0012 airfoil and for a two-element configuration in which the solution has been obtained through an adaptation procedure and compared to an exact solution. Preliminary three-dimensional results are also shown in which subsonic flow over a business jet is computed.
Crew procedures development techniques
NASA Technical Reports Server (NTRS)
Arbet, J. D.; Benbow, R. L.; Hawk, M. L.; Mangiaracina, A. A.; Mcgavern, J. L.; Spangler, M. C.
1975-01-01
The study developed requirements, designed, developed, checked out and demonstrated the Procedures Generation Program (PGP). The PGP is a digital computer program which provides a computerized means of developing flight crew procedures based on crew action in the shuttle procedures simulator. In addition, it provides a real time display of procedures, difference procedures, performance data and performance evaluation data. Reconstruction of displays is possible post-run. Data may be copied, stored on magnetic tape and transferred to the document processor for editing and documentation distribution.
Optimum Testing Procedures for System Diagnosis and Fault Isolation.
1981-03-31
[Recoverable fragments only: fault detection and isolation procedures; subject terms include built-in test, optimum sequence of testing, and branch-and-bound; the fragments cite Cohn, H. Y. and Ott, G., "Design of Adaptive Procedures for Fault Detection and Isolation," IEEE.]
Line relaxation methods for the solution of 2D and 3D compressible flows
NASA Technical Reports Server (NTRS)
Hassan, O.; Probert, E. J.; Morgan, K.; Peraire, J.
1993-01-01
An implicit finite element based algorithm for the compressible Navier-Stokes equations is outlined, and the solution of the resulting equation by a line relaxation on general meshes of triangles or tetrahedra is described. The problem of generating and adapting unstructured meshes for viscous flows is reexamined, and an approach for both 2D and 3D simulations is proposed. An efficient approach appears to be the use of an implicit/explicit procedure, with the implicit treatment being restricted to those regions of the mesh where viscous effects are known to be dominant. Numerical examples demonstrating the computational performance of the proposed techniques are given.
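The implicit part of such a scheme typically reduces to solving a tridiagonal system along each mesh line. The Thomas algorithm that performs one such line solve is sketched below; the coefficients form a standard 1D diffusion stencil chosen for illustration, not the authors' viscous-flow system.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main-, c = super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Model problem: -u'' = 2 on (0,1), u(0) = u(1) = 0, exact u = x(1-x).
n, h = 9, 0.1
x = thomas([-1.0] * n, [2.0] * n, [-1.0] * n, [2.0 * h * h] * n)
```

Each line solve costs O(n), so sweeping lines of the mesh with this direct solve is what makes implicit line relaxation affordable on viscous-layer meshes.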
Inverse problem solution in ellipsometry
NASA Astrophysics Data System (ADS)
Zabashta, Lubov A.; Zabashta, Oleg I.
1995-11-01
The interactive graphic system 'ELLA', an integrated program package for inverse-problem solution in ellipsometry, is described. Solutions stable against experimental errors are found by two algorithms: a constrained simplex method and a regularizing iteration method. The graphic procedure kit displays the surface layers, their optical parameters, and all main results of intermediate calculations. Specialized graphic input functions allow the user to change the parameters of a chosen solution method and the basic data, to enter new additional information, etc. Using model GaAs-oxide structures as examples, the capabilities of ellipsometry for determining the optical parameters of multilayer structures are studied.
40 CFR 60.648 - Optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure. 1
Code of Federal Regulations, 2011 CFR
2011-07-01
...) Starch solution. Rub into a thin paste about one teaspoonful of wheat starch with a little water; pour... every few days. (d) Procedure. Fill leveling bulb with starch solution. Raise (L), open cock (G), open... the 100 ml mark, close (G) and (F), and disconnect sampling tube. Open (G) and bring starch solution...
40 CFR 60.648 - Optional procedure for measuring hydrogen sulfide in acid gas-Tutwiler Procedure. 1
Code of Federal Regulations, 2012 CFR
2012-07-01
...) Starch solution. Rub into a thin paste about one teaspoonful of wheat starch with a little water; pour... every few days. (d) Procedure. Fill leveling bulb with starch solution. Raise (L), open cock (G), open... the 100 ml mark, close (G) and (F), and disconnect sampling tube. Open (G) and bring starch solution...
Relaxation solution of the full Euler equations
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1982-01-01
A numerical procedure for the relaxation solution of the full steady Euler equations is described. By embedding the Euler system in a second order surrogate system, central differencing may be used in subsonic regions while retaining matrix forms well suited to iterative solution procedures and convergence acceleration techniques. Hence, this method allows the development of stable, fully conservative differencing schemes for the solution of quite general inviscid flow problems. Results are presented for both subcritical and shocked supercritical internal flows. Comparisons are made with a standard time dependent solution algorithm. Previously announced in STAR as N82-24859
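A minimal sketch of relaxation with convergence acceleration, applied to a 1D Poisson model problem rather than the Euler system itself: successive over-relaxation (SOR) with a well-chosen factor cuts the sweep count of plain Gauss-Seidel dramatically.

```python
import math

def sor_iterations(n, omega, tol=1e-8):
    """Count point-SOR sweeps needed to reduce the residual of the 1D
    Poisson model problem -u'' = 1 on (0,1), u(0) = u(1) = 0, below tol.
    omega = 1 is plain Gauss-Seidel; over-relaxation accelerates it."""
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)
    f = h * h                                  # right-hand side scaled by h^2
    for sweep in range(1, 200000):
        for i in range(1, n + 1):
            gs = 0.5 * (u[i - 1] + u[i + 1] + f)
            u[i] += omega * (gs - u[i])        # relaxed Gauss-Seidel update
        res = max(abs(u[i - 1] - 2.0 * u[i] + u[i + 1] + f)
                  for i in range(1, n + 1))
        if res < tol:
            return sweep
    return None
```

For this model problem the optimal factor is known analytically, omega = 2 / (1 + sin(pi*h)); for general systems it must be estimated, which is where the convergence acceleration techniques mentioned in the abstract come in.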
Clause Elimination Procedures for CNF Formulas
NASA Astrophysics Data System (ADS)
Heule, Marijn; Järvisalo, Matti; Biere, Armin
We develop and analyze clause elimination procedures, a specific family of simplification techniques for conjunctive normal form (CNF) formulas. Extending known procedures such as tautology, subsumption, and blocked clause elimination, we introduce novel elimination procedures based on hidden and asymmetric variants of these techniques. We analyze the resulting nine (including five new) clause elimination procedures from various perspectives: size reduction, BCP-preservance, confluence, and logical equivalence. For the variants not preserving logical equivalence, we show how to reconstruct solutions to original CNFs from satisfying assignments to simplified CNFs. We also identify a clause elimination procedure that does a transitive reduction of the binary implication graph underlying any CNF formula purely on the CNF level.
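Two of the classical procedures the paper extends, tautology and subsumption elimination, can be sketched compactly (the clause representation here is illustrative, not the authors' code); both preserve logical equivalence, unlike some of the hidden and asymmetric variants.

```python
def eliminate_clauses(cnf):
    """Tautology and subsumption elimination for a CNF formula.
    A clause is a collection of integer literals; -x negates x.
    (Only two of the paper's nine procedures; the hidden, asymmetric,
    and blocked variants are not sketched here.)"""
    clauses = {frozenset(c) for c in cnf}                 # drop duplicate clauses
    clauses = {c for c in clauses
               if not any(-lit in c for lit in c)}        # drop tautologies (x and -x)
    return [set(c) for c in clauses
            if not any(d < c for d in clauses)]           # drop subsumed clauses
```

A clause containing both x and -x is always satisfied and can be removed; a clause that is a superset of another clause is implied by it and can likewise be removed without changing the set of satisfying assignments.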
Zijp, Michiel C; Posthuma, Leo; Wintersen, Arjen; Devilee, Jeroen; Swartjes, Frank A
2016-05-01
This paper introduces Solution-focused Sustainability Assessment (SfSA), provides practical guidance formatted as a versatile process framework, and illustrates its utility for solving a wicked environmental management problem. Society faces complex and increasingly wicked environmental problems for which sustainable solutions are sought. Wicked problems are multi-faceted, and deriving a management solution requires an approach that is participative, iterative, innovative, and transparent in its definition of sustainability and translation to sustainability metrics. We suggest adding a solution-focused approach. The SfSA framework is collated from elements of risk assessment, risk governance, adaptive management and sustainability assessment frameworks, expanded with the 'solution-focused' paradigm as recently proposed in the context of risk assessment. The main innovation of this approach is the broad exploration of solutions upfront in assessment projects. The case study concerns the sustainable management of slightly contaminated sediments continuously formed in ditches in rural, agricultural areas. This problem is wicked, as disposal of contaminated sediment on adjacent land is potentially hazardous to humans, ecosystems and agricultural products. Non-removal, however, would reduce drainage capacity and thereby increase the risk of flooding, while contaminated sediment removal followed by offsite treatment implies high budget costs and soil subsidence. Application of the steps in the SfSA framework served to solve this problem. Important elements were early exploration of a wide 'solution space', stakeholder involvement from the onset of the assessment, clear agreements on the risk and sustainability metrics of the problem and on the interpretation and decision procedures, and adaptive management. Application of the key elements of the SfSA approach eventually resulted in adoption of a novel sediment management policy. The stakeholder
Making Intelligent Systems Adaptive. (Revision)
1988-10-01
eventually produce solutions. By contrast, human beings and other intelligent animals continuously adapt to the demands and opportunities presented by a...such as monitoring critically ill medical patients or controlling a manufacturing process. Following the model set by human intelligence, we define...signs probabilistically, using a belief network, as well as from first principles, using explicit models of system structure and function. Concurrent
Milne, Roger Brent
1995-12-01
This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.
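The level set formulation for curvature dependent propagation can be sketched on a uniform grid (the thesis's adaptive mesh refinement is omitted here): under motion by mean curvature, a circle of radius R shrinks at the analytic rate dR/dt = -1/R, i.e. R(t) = sqrt(R0^2 - 2t).

```python
import numpy as np

def curvature_flow_step(phi, h, dt):
    """One explicit time step of motion by mean curvature in the level
    set formulation, phi_t = kappa * |grad phi|, on a uniform grid."""
    py, px = np.gradient(phi, h)                      # axis 0 = y, axis 1 = x
    norm = np.sqrt(px ** 2 + py ** 2) + 1e-12         # |grad phi| (guarded)
    div_x = np.gradient(px / norm, h, axis=1)
    div_y = np.gradient(py / norm, h, axis=0)
    kappa = div_x + div_y                             # curvature = div(grad phi / |grad phi|)
    return phi + dt * kappa * norm
```

Evolving a signed-distance circle and measuring the area enclosed by the zero level set recovers the analytic shrink rate to within discretization error; the small explicit time step (dt of order h^2) is exactly the stiffness that motivates localized refinement in the thesis.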
Core Verbal Autopsy Procedures with Comparative Validation Results from Two Countries
Setel, Philip W; Rao, Chalapati; Hemed, Yusuf; Whiting, David R; Yang, Gonghuan; Chandramohan, Daniel; Alberti, K. G. M. M; Lopez, Alan D
2006-01-01
Background Cause-specific mortality statistics remain scarce for the majority of low-income countries, where the highest disease burdens are experienced. Neither facility-based information systems nor vital registration provide adequate or representative data. The expansion of sample vital registration with verbal autopsy procedures represents the most promising interim solution for this problem. The development and validation of core verbal autopsy forms and suitable coding and tabulation procedures are an essential first step to extending the benefits of this method. Methods and Findings Core forms for peri- and neonatal, child, and adult deaths were developed and revised over 12 y through a project of the Tanzanian Ministry of Health and were applied to over 50,000 deaths. The contents of the core forms draw upon and are generally comparable with previously proposed verbal autopsy procedures. The core forms and coding procedures based on the International Statistical Classification of Diseases (ICD) were further adapted for use in China. These forms, the ICD tabulation list, the summary validation protocol, and the summary validation results from Tanzania and China are presented here. Conclusions The procedures are capable of providing reasonable mortality estimates as adjudged against stated performance criteria for several common causes of death in two countries with radically different cause structures of mortality. However, the specific causes for which the procedures perform well varied between the two settings because of differences in the underlying prevalence of the main causes of death. These differences serve to emphasize the need to undertake validation studies of verbal autopsy procedures when they are applied in new epidemiological settings. PMID:16942391
Higher-order numerical solutions using cubic splines
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1976-01-01
A cubic spline collocation procedure was developed for the numerical solution of partial differential equations. This spline procedure is reformulated so that the accuracy of the second-derivative approximation is improved and parallels that previously obtained for lower derivative terms. The final result is a numerical procedure having overall third-order accuracy on a nonuniform mesh. Solutions using both spline procedures, as well as three-point finite difference methods, are presented for several model problems.
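A hedged sketch of the spline collocation idea for a two-point model problem u'' = f (not the authors' reformulated procedure): collocating u'' at the nodes fixes the spline moments, and the spline continuity relations then yield a tridiagonal system for the nodal values on a nonuniform mesh.

```python
import numpy as np

def spline_collocation_bvp(f, x, ua=0.0, ub=0.0):
    """Cubic spline collocation for u'' = f(x), u(x[0]) = ua, u(x[-1]) = ub,
    on a (possibly nonuniform) mesh x.  Setting the spline moments
    M_i = f(x_i) and imposing the spline continuity relations gives a
    tridiagonal system for the nodal values."""
    n = len(x) - 1
    h = np.diff(x)
    M = f(x)                                  # spline second derivatives at nodes
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = 1.0; b[0] = ua                  # Dirichlet boundary rows
    A[n, n] = 1.0; b[n] = ub
    for i in range(1, n):
        A[i, i - 1] = 1.0 / h[i - 1]
        A[i, i] = -(1.0 / h[i - 1] + 1.0 / h[i])
        A[i, i + 1] = 1.0 / h[i]
        b[i] = (h[i - 1] * M[i - 1] + 2.0 * (h[i - 1] + h[i]) * M[i]
                + h[i] * M[i + 1]) / 6.0      # spline continuity relation
    return np.linalg.solve(A, b)
```

Refining the mesh should shrink the nodal error, which a quick convergence check on u'' = -sin(x), whose exact solution on [0, pi] is u = sin(x), confirms.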
Adaptation of Baroreflexes and Orthostatic Hypotension
NASA Technical Reports Server (NTRS)
Convertino, Victor A.
1993-01-01
The footward shift of blood volume and compensatory autonomic reflex responses initiated during the assumption of the upright posture in terrestrial gravity is a common feature in humans. When regular exposure to upright posture is removed, the headward redistribution of blood induces numerous physiological adaptations that compromise this normal response, and low blood pressure upon standing (orthostatic hypotension) can develop. Such microgravity conditions can be produced by prolonged exposure to spaceflight or prolonged bed rest. Since the reduction of blood and plasma volume during exposure to microgravity has been associated with orthostatic instability following spaceflight and bed rest, it has been a reasonable assumption that hypovolemia may be a primary contributing factor in the development of orthostatic hypotension. This view was supported by the observation that attempting to restore vascular volume in astronauts by drinking saline solutions just prior to re-entry had proven effective in reducing orthostatic instability after spaceflights of short duration. However, as the duration of spaceflight lengthens, orthostatic instability persists despite fluid loading procedures, suggesting that mechanisms other than hypovolemia may contribute to orthostatic hypotension following prolonged exposure to microgravity conditions. Recent evidence from spaceflight and ground-based experiments supports the notion that changes in the autonomic baroreflexes that control cardiac and vascular responses during orthostatic challenges may be affected by longer periods of microgravity exposure and can contribute to postflight orthostatic instability.
Local adaptive tone mapping for video enhancement
NASA Astrophysics Data System (ADS)
Lachine, Vladimir; Dai, Min
2015-03-01
As new technologies like High Dynamic Range cameras, AMOLED, and high resolution displays emerge on the consumer electronics market, it becomes very important to deliver the best picture quality for mobile devices. Tone Mapping (TM) is a popular technique to enhance visual quality. However, the traditional implementation of the Tone Mapping procedure is limited to pixel value-to-value mapping, and its performance is restricted in terms of local sharpness and colorfulness. To overcome the drawbacks of traditional TM, we propose a spatial-frequency based framework in this paper. In the proposed solution, the intensity component of an input video/image signal is split into low-pass-filtered (LPF) and high-pass-filtered (HPF) bands. The Tone Mapping (TM) function is applied to the LPF band to improve the global contrast/brightness, and the HPF band is added back afterwards to keep the local contrast. The HPF band may be adjusted by a coring function to avoid noise boosting and signal overshooting. Colorfulness of the original image may be preserved or enhanced by correcting the chroma components by means of a saturation function. Localized content adaptation is further improved by dividing an image into a set of non-overlapped regions and modifying each region individually. The suggested framework allows users to implement a wide range of tone mapping applications with perceptual local sharpness and colorfulness preserved or enhanced. A corresponding hardware circuit may be integrated in a camera, video, or display pipeline with a minimal hardware budget.
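The LPF/HPF split at the heart of the framework can be sketched as follows (a simple box filter stands in for the low-pass filter, and the coring, saturation, and per-region stages are omitted): the low-pass band is compressed with a gamma curve to lift global brightness, then the high-pass band is added back to preserve local contrast.

```python
import numpy as np

def local_tone_map(y, gamma=0.5, ksize=5):
    """Spatial-frequency tone mapping sketch for an intensity image y
    in [0, 1]: split into low- and high-pass bands, tone-map the LPF
    band with a gamma curve, and restore the HPF band afterwards.
    (Illustrative only: a separable box filter is used as the LPF.)"""
    pad = ksize // 2
    yp = np.pad(y, pad, mode='edge')
    kernel = np.ones(ksize) / ksize
    # separable box low-pass filter: rows, then columns
    lpf = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='valid'), 1, yp)
    lpf = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='valid'), 0, lpf)
    hpf = y - lpf                              # local detail band
    return np.clip(lpf ** gamma + hpf, 0.0, 1.0)
```

Applied to a dark image, the gamma curve on the LPF band raises the overall brightness while the re-added HPF band leaves edges and texture untouched, which is the point of doing TM in the spatial-frequency domain.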
Research in digital adaptive flight controllers
NASA Technical Reports Server (NTRS)
Kaufman, H.
1976-01-01
A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Both explicit controllers, which directly utilize parameter identification, and implicit controllers, which do not require identification, were considered. Extensive analytical and simulation efforts resulted in the recommendation of two explicit digital adaptive flight controllers. Weighted least squares estimation procedures were interfaced with control logic based either on optimal regulator theory or on single-stage performance indices.
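A minimal sketch of the kind of weighted least squares parameter identification an explicit adaptive controller pairs with its control law, written here as exponentially weighted recursive least squares (the forgetting factor and initialization are illustrative assumptions, not values from the report):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One step of exponentially weighted recursive least squares.
    theta: current parameter estimate, P: covariance matrix,
    phi: regressor vector, y: measured output, lam: forgetting factor
    that down-weights old data so the estimator tracks slow changes."""
    k = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + k * (y - phi @ theta)      # correct by the prediction error
    P = (P - np.outer(k, phi @ P)) / lam       # covariance update
    return theta, P
```

Fed noiseless input-output data from a fixed linear plant, the estimate converges to the true parameters, after which an explicit controller would compute its gains from the identified model at each step.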
Coping and Adaptation: Theoretical and Applied Perspectives.
1994-11-01
This report describes the theoretical framework for the Life Coping Skills in USAREUR project, develops a model of the coping process, summarizes...studies that have identified needed life coping skills, reviews literature related to adaptation to the military and to foreign countries, and makes recommendations concerning directions and procedures for project tasks. ...vital life coping skills that, in turn, would facilitate successful adaptation to life in Europe and reduce problems with retention and performance...