Science.gov

Sample records for adaptive solution procedure

  1. Interactive solution-adaptive grid generation procedure

    NASA Technical Reports Server (NTRS)

    Henderson, Todd L.; Choo, Yung K.; Lee, Ki D.

    1992-01-01

TURBO-AD is an interactive solution-adaptive grid generation program under development. The program combines an interactive algebraic grid generation technique and a solution-adaptive grid generation technique into a single interactive package. The control point form uses a sparse collection of control points to algebraically generate a field grid. This technique provides local grid control capability and is well suited to interactive work due to its speed and efficiency. A mapping from the physical domain to a parametric domain was used to alleviate difficulties encountered near outwardly concave boundaries in the control point technique. Therefore, all grid modifications are performed on the unit square in the parametric domain, and the new adapted grid is then mapped back to the physical domain. The grid adaptation is achieved by adapting the control points to a numerical solution in the parametric domain using control sources obtained from the flow properties. Then a new modified grid is generated from the adapted control net. This process is efficient because the number of control points is much smaller than the number of grid points and the generation of the grid is an efficient algebraic process. TURBO-AD provides the user with both local and global controls.

  2. Multilevel adaptive solution procedure for material nonlinear problems in visual programming environment

    SciTech Connect

    Kim, D.; Ghanem, R.

    1994-12-31

A multigrid solution technique for solving a material nonlinear problem in a visual programming environment using the finite element method is discussed. The nonlinear equation of equilibrium is linearized to incremental form using the Newton-Raphson technique, and the multigrid solution technique is then used to solve the linear equations at each Newton-Raphson step. In the process, adaptive mesh refinement, which is based on the bisection of a pair of triangles, is used to form the grid hierarchy for multigrid iteration. The solution process is implemented in a visual programming environment with distributed computing capability, which enables a more intuitive understanding of the solution process and more effective use of resources.
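The Newton-Raphson/multigrid combination described above can be sketched compactly. The 1-D model problem, grid sizes, and damped-Jacobi smoother below are illustrative assumptions, not this paper's finite element implementation:

```python
import numpy as np

# Hypothetical 1-D model problem: -u'' + u^3 = f on (0,1), u(0)=u(1)=0.
# Newton-Raphson linearizes; a two-grid cycle (the simplest multigrid)
# solves the linear system arising at each Newton step.

def residual(u, f, h):
    """r = f - F(u), with F(u) = -u'' + u^3 and zero Dirichlet BCs."""
    upad = np.concatenate(([0.0], u, [0.0]))
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
    return f - (-lap + u**3)

def jacobian(u, h):
    """Tridiagonal Jacobian of F at the current iterate u."""
    n = len(u)
    J = np.zeros((n, n))
    np.fill_diagonal(J, 2.0 / h**2 + 3.0 * u**2)
    idx = np.arange(n - 1)
    J[idx, idx + 1] = J[idx + 1, idx] = -1.0 / h**2
    return J

def prolongation(nc):
    """Linear interpolation from nc coarse to 2*nc+1 fine interior points."""
    P = np.zeros((2 * nc + 1, nc))
    for j in range(nc):
        P[2 * j, j] += 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] += 0.5
    return P

def two_grid_solve(A, b, cycles=10, sweeps=2, omega=2.0 / 3.0):
    """Approximate A x = b: damped-Jacobi smoothing + coarse-grid correction."""
    nc = (A.shape[0] - 1) // 2
    P = prolongation(nc)
    R = 0.5 * P.T                      # full-weighting restriction
    Ac = R @ A @ P                     # Galerkin coarse-grid operator
    D = np.diag(A)
    x = np.zeros_like(b)
    for _ in range(cycles):
        for _ in range(sweeps):
            x += omega * (b - A @ x) / D
        x += P @ np.linalg.solve(Ac, R @ (b - A @ x))
        for _ in range(sweeps):
            x += omega * (b - A @ x) / D
    return x

n, h = 63, 1.0 / 64.0                  # 63 fine interior points -> 31 coarse
f = np.ones(n)
u = np.zeros(n)
for it in range(20):                   # Newton-Raphson outer loop
    r = residual(u, f, h)
    if np.linalg.norm(r) < 1e-10:
        break
    u += two_grid_solve(jacobian(u, h), r)
```

The two-grid cycle stands in for a full V-cycle; recursing on the coarse solve instead of solving it directly would give true multigrid.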

  3. Constrained Self-adaptive Solutions Procedures for Structure Subject to High Temperature Elastic-plastic Creep Effects

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Tovichakchaikul, S.

    1983-01-01

This paper will develop a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which will enable the solution of creep-initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation will give a thorough overview of current solution schemes and their shortcomings, develop the constrained time stepping algorithms, and illustrate the results of several numerical experiments which benchmark the new procedure.

  4. Adaptive Modeling Procedure Selection by Data Perturbation*

    PubMed Central

    Zhang, Yongli; Shen, Xiaotong

    2015-01-01

Many procedures have been developed to deal with the high-dimensional problems that are emerging in various business and economics areas. To evaluate and compare these procedures, the modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into the modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherent in a selection process by perturbing the data. Critical to data perturbation is the size of the perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy. PMID:26640319
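The core idea of data perturbation can be illustrated in a few lines: refit a selection procedure on copies of the data with noise of size tau added to the response, and record how often the selected model changes. The greedy BIC forward selection used below, the synthetic data, and the instability measure are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def perturbation_instability(X, y, select_fn, tau, n_rep=30, rng=None):
    """Fraction of perturbed refits whose selected support differs from the
    support selected on the original data. 'select_fn' returns a boolean mask."""
    rng = np.random.default_rng(rng)
    base = select_fn(X, y)
    changed = 0
    for _ in range(n_rep):
        yp = y + tau * rng.standard_normal(len(y))   # perturb the response
        changed += not np.array_equal(select_fn(X, yp), base)
    return changed / n_rep

def bic_forward_select(X, y):
    """Greedy forward selection under BIC; returns a boolean support mask."""
    n, p = X.shape
    def bic(mask):
        if mask.any():
            beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
            rss = float(np.sum((y - X[:, mask] @ beta) ** 2))
        else:
            rss = float(np.sum(y ** 2))
        return n * np.log(rss / n) + mask.sum() * np.log(n)
    support = np.zeros(p, dtype=bool)
    current = bic(support)
    while True:
        best_j, best_val = -1, current
        for j in np.flatnonzero(~support):
            trial = support.copy()
            trial[j] = True
            val = bic(trial)
            if val < best_val:
                best_j, best_val = j, val
        if best_j < 0:
            return support
        support[best_j] = True
        current = best_val

# Synthetic example: strong signal on variables 0 and 2.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
beta_true = np.array([3.0, 0.0, 2.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.5 * rng.standard_normal(200)
support = bic_forward_select(X, y)
instability = perturbation_instability(X, y, bic_forward_select, tau=0.1, rng=1)
```

Choosing tau adaptively, as the paper does, replaces the fixed `tau=0.1` above with a data-driven value.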

  5. Interactive solution-adaptive grid generation

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Henderson, Todd L.

    1992-01-01

TURBO-AD is an interactive solution-adaptive grid generation program under development. The program combines an interactive algebraic grid generation technique and a solution-adaptive grid generation technique into a single interactive solution-adaptive grid generation package. The control point form uses a sparse collection of control points to algebraically generate a field grid. This technique provides local grid control capability and is well suited to interactive work due to its speed and efficiency. A mapping from the physical domain to a parametric domain was used to alleviate difficulties that had been encountered near outwardly concave boundaries in the control point technique. Therefore, all grid modifications are performed on a unit square in the parametric domain, and the new adapted grid in the parametric domain is then mapped back to the physical domain. The grid adaptation is achieved by first adapting the control points to a numerical solution in the parametric domain using control sources obtained from flow properties. Then a new modified grid is generated from the adapted control net. This solution-adaptive grid generation process is efficient because the number of control points is much smaller than the number of grid points and the generation of a new grid from the adapted control net is an efficient algebraic process. TURBO-AD provides the user with both local and global grid controls.

  6. Adaptive EAGLE dynamic solution adaptation and grid quality enhancement

    NASA Technical Reports Server (NTRS)

    Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.

    1992-01-01

In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called either by a computational fluid dynamics (CFD) code, to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code, to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.

  7. Combined LAURA-UPS hypersonic solution procedure

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Thompson, Richard A.

    1993-01-01

A combined solution procedure for hypersonic flowfields around blunted slender bodies was implemented using a thin-layer Navier-Stokes code (LAURA) in the nose region and a parabolized Navier-Stokes code (UPS) on the afterbody region. Perfect gas, equilibrium air, and nonequilibrium air solutions for sharp cones and a sharp wedge were obtained using UPS alone as a preliminary step. Surface heating rates are presented for two slender bodies with blunted noses, with LAURA used to provide a starting solution to UPS downstream of the sonic line. These are an 8 deg sphere-cone at Mach 5 in perfect gas, laminar flow at 0 and 4 deg angles of attack, and the Reentry F body at Mach 20, 80,000 ft equilibrium gas conditions at 0 and 0.14 deg angles of attack. The results indicate that this procedure is a timely and accurate method for obtaining aerothermodynamic predictions on slender hypersonic vehicles.

  8. Staggered solution procedures for multibody dynamics simulation

    NASA Astrophysics Data System (ADS)

    Park, K. C.; Chiou, J. C.; Downer, J. D.

    1990-04-01

The numerical solution procedure for multibody dynamics (MBD) systems is termed a staggered MBD solution procedure: it solves for the generalized coordinates in a separate module from that for the constraint forces. This requires a reformulation of the constraint conditions so that the constraint forces can also be integrated in time. A major advantage of such a partitioned solution procedure is that additional analysis capabilities, such as active controller and design optimization modules, can easily be interfaced without embedding them into a monolithic program. After introducing the basic equations of motion for MBD systems in Section 2, Section 3 briefly reviews some constraint handling techniques and introduces the staggered stabilized technique for the solution of the constraint forces as independent variables. The numerical direct time integration of the equations of motion is described in Section 4. As accurate damping treatment is important for the dynamics of space structures, we have employed the central difference method and the mid-point form of the trapezoidal rule, since they engender no numerical damping. This is in contrast to the current practice in dynamic simulations of ground vehicles, which employs a set of backward difference formulas. First, the equations of motion are partitioned according to the translational and the rotational coordinates. This sets the stage for an efficient treatment of the rotational motions via the singularity-free Euler parameters. The resulting partitioned equations of motion are then integrated via a two-stage explicit stabilized algorithm for updating both the translational coordinates and angular velocities. Once the angular velocities are obtained, the angular orientations are updated via the mid-point implicit formula employing the Euler parameters.
When the two algorithms, namely, the two-stage explicit algorithm for the generalized coordinates and the implicit staggered procedure for the constraint Lagrange
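The mid-point implicit update of the Euler parameters mentioned above has a pleasant property that a short sketch makes concrete: because the quaternion rate matrix is skew-symmetric, the mid-point (trapezoidal) step preserves the unit norm exactly. The constant-spin test case below is an illustrative assumption, not the paper's spacecraft model:

```python
import numpy as np

def omega_matrix(w):
    """Skew-symmetric rate matrix: q_dot = 0.5 * Omega(w) @ q,
    with Euler parameters q = (q0, q1, q2, q3), scalar first."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def midpoint_step(q, w, dt):
    """Mid-point update: (I - dt/4 Om) q_new = (I + dt/4 Om) q.
    With Om skew-symmetric this Cayley-type map is orthogonal, so
    |q| = 1 is preserved exactly and no renormalization drift occurs."""
    Om = omega_matrix(w)
    I = np.eye(4)
    return np.linalg.solve(I - 0.25 * dt * Om, (I + 0.25 * dt * Om) @ q)

# Constant spin of 1 rad/s about z; the exact attitude after time t is a
# rotation by t radians, i.e. q = (cos(t/2), 0, 0, sin(t/2)).
q = np.array([1.0, 0.0, 0.0, 0.0])
w = np.array([0.0, 0.0, 1.0])
dt, steps = 1.0e-3, 1000
for _ in range(steps):
    q = midpoint_step(q, w, dt)
```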

  9. Adaptive Distributed Environment for Procedure Training (ADEPT)

    NASA Technical Reports Server (NTRS)

    Domeshek, Eric; Ong, James; Mohammed, John

    2013-01-01

ADEPT (Adaptive Distributed Environment for Procedure Training) is designed to provide more effective, flexible, and portable training for NASA systems controllers. When creating a training scenario, an exercise author can specify a representative rationale structure using the graphical user interface, annotating the results with instructional texts where needed. The author's structure may distinguish between essential and optional parts of the rationale, and may also include "red herrings" - hypotheses that are essential to consider, until evidence and reasoning allow them to be ruled out. The system is built from pre-existing components, including Stottler Henke's SimVentive instructional simulation authoring tool and runtime. To that, a capability was added to author and exploit explicit control decision rationale representations. ADEPT uses SimVentive's Scalable Vector Graphics (SVG)-based interactive graphic display capability as the basis of the tool for quickly noting aspects of decision rationale in graph form. The ADEPT prototype is built in Java, and will run on any computer using Windows, MacOS, or Linux. No special peripheral equipment is required. The software enables a style of student/tutor interaction focused on the reasoning behind systems control behavior that better mimics proven Socratic human tutoring behaviors for highly cognitive skills. It supports fast, easy, and convenient authoring of such tutoring behaviors, allowing specification of detailed scenario-specific, but content-sensitive, high-quality tutor hints and feedback. The system places relatively light data-entry demands on the student to enable its rationale-centered discussions, and provides a support mechanism for fostering coherence in the student/tutor dialog by including focusing, sequencing, and utterance tuning mechanisms intended to better fit tutor hints and feedback into the ongoing context.

  10. An adaptive refinement procedure for transient thermal analysis using nodeless variable finite elements

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, R.; Wieting, Allan R.; Thornton, Earl A.

    1990-01-01

An adaptive mesh refinement procedure that uses nodeless variables and quadratic interpolation functions is presented for analyzing transient thermal problems. A temperature-based finite element scheme with Crank-Nicolson time marching is used to obtain the thermal solution. The strategies used for mesh adaptation, computing refinement indicators, and time marching are described. Examples in one and two dimensions are presented, and comparisons are made with exact solutions. The effectiveness of this procedure for transient thermal analysis is reflected in good solution accuracy, a reduction in the number of elements used, and computational efficiency.
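The Crank-Nicolson time marching named above averages the implicit and explicit updates, giving second-order accuracy in time. A minimal sketch on the 1-D heat equation with simple finite differences (an illustrative stand-in, not the paper's nodeless-variable elements):

```python
import numpy as np

def crank_nicolson(u0, alpha, h, dt, steps):
    """March u_t = alpha * u_xx on (0,1), u = 0 at both ends, using
    Crank-Nicolson: (I - r*L) u_new = (I + r*L) u, r = alpha*dt/(2h^2)."""
    n = len(u0)
    r = alpha * dt / (2.0 * h**2)
    off = np.ones(n - 1)
    A = np.diag((1 + 2 * r) * np.ones(n)) + np.diag(-r * off, 1) + np.diag(-r * off, -1)
    B = np.diag((1 - 2 * r) * np.ones(n)) + np.diag(r * off, 1) + np.diag(r * off, -1)
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)   # average of implicit and explicit halves
    return u

# Decaying sine mode with a known exact solution for comparison.
n = 49
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
u0 = np.sin(np.pi * x)
uT = crank_nicolson(u0, alpha=1.0, h=h, dt=0.002, steps=50)   # T = 0.1
```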

  11. Effects of adaptive refinement on the inverse EEG solution

    NASA Astrophysics Data System (ADS)

    Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.

    1995-10-01

One of the fundamental problems in electroencephalography can be characterized as an inverse problem: given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, so that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution; moreover, for the general treatment of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
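The ill-posedness described above (small data errors yielding unbounded solution errors) is the reason inverse problems of this kind require regularization. A generic illustration with a Hilbert matrix as a stand-in forward operator (an assumption for demonstration, not the paper's finite element head model):

```python
import numpy as np

# An ill-conditioned forward operator amplifies tiny measurement noise
# catastrophically; Tikhonov regularization trades a small bias for stability.

n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-8 * rng.standard_normal(n)   # slightly noisy "measurements"

x_naive = np.linalg.solve(A, b)                  # blows up: cond(A) ~ 1e16

lam = 1e-8                                       # regularization parameter
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

err_naive = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
```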

  12. Symmetry-adapted Wannier functions in the maximal localization procedure

    NASA Astrophysics Data System (ADS)

    Sakuma, R.

    2013-06-01

A procedure to construct symmetry-adapted Wannier functions in the framework of the maximally localized Wannier function approach [Marzari and Vanderbilt, Phys. Rev. B 56, 12847 (1997); Souza, Marzari, and Vanderbilt, Phys. Rev. B 65, 035109 (2001)] is presented. In this scheme, the minimization of the spread functional of the Wannier functions is performed with constraints derived from the symmetry properties of the specified set of Wannier functions and of the Bloch functions used to construct them; therefore, one can obtain a solution that does not necessarily yield the global minimum of the spread functional. As a test of this approach, results of atom-centered Wannier functions for GaAs and Cu are presented.

  13. Procedure for Adapting Direct Simulation Monte Carlo Meshes

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.

    1992-01-01

    A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.

  14. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
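The thresholding idea described above, choosing the cutoff from the curvature of the curve relating the number of flagged cells to the threshold value, can be sketched in a few lines. The synthetic indicator field and the discrete curvature details are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

def refinement_threshold(indicator, n_levels=50):
    """Count how many cells would be flagged at each candidate threshold,
    then place the threshold where the curvature of that count-vs-threshold
    curve peaks (the "knee" separating smooth cells from feature cells)."""
    thetas = np.linspace(indicator.min(), indicator.max(), n_levels)
    counts = np.array([(indicator > t).sum() for t in thetas], dtype=float)
    curvature = np.abs(np.diff(counts, 2))     # discrete second difference
    return thetas[np.argmax(curvature) + 1]

# Synthetic indicator field: many quiet cells, a few cells near a feature.
rng = np.random.default_rng(0)
indicator = np.concatenate([0.1 * rng.random(1000),        # smooth region
                            0.9 + 0.1 * rng.random(20)])   # feature cells
theta = refinement_threshold(indicator)
flagged = indicator > theta
```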

  15. Anisotropic Solution Adaptive Unstructured Grid Generation Using AFLR

    NASA Technical Reports Server (NTRS)

    Marcum, David L.

    2007-01-01

An existing volume grid generation procedure, AFLR3, was successfully modified to generate anisotropic tetrahedral elements using a directional metric transformation defined at source nodes. The procedure can be coupled with a solver and an error estimator as part of an overall anisotropic solution adaptation methodology. It is suitable for use with an error estimator based on an adjoint, optimization, sensitivity derivative, or related approach. This offers many advantages, including more efficient point placement along with robust and efficient error estimation. It also serves as a framework for true grid optimization wherein error estimation and computational resources can be used as cost functions to determine the optimal point distribution. Within AFLR3 the metric transformation is implemented using a set of transformation vectors and associated aspect ratios. The modified overall procedure is presented along with details of the anisotropic transformation implementation. Multiple two- and three-dimensional examples are also presented that demonstrate the capability of the modified AFLR procedure to generate anisotropic elements using a set of source nodes with anisotropic transformation metrics. The example cases presented use moderate levels of anisotropy and result in usable element quality. Future testing with various flow solvers and methods for obtaining transformation metric information is needed to determine practical limits and evaluate the efficacy of the overall approach.

  16. A Procedural Solution to Model Roman Masonry Structures

    NASA Astrophysics Data System (ADS)

    Cappellini, V.; Saleri, R.; Stefani, C.; Nony, N.; De Luca, L.

    2013-07-01

The paper describes a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in Roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly, we have focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we have chosen an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac (PAM), developed by IGN (Paris). We have employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in a parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with an open-source programming language called Processing, useful for visual, animated or static, 2D or 3D, interactive creations. Using this computer language, a Java environment has been developed. Therefore, even if the procedural modelling reveals an accuracy level inferior to the one obtained by manual modelling (brick by brick), this method can be useful when taking into account the static evaluation of buildings (requiring quantitative aspects) and metric measures for restoration purposes.

  17. An adaptive grid method for computing time accurate solutions on structured grids

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Smith, Robert E.; Eiseman, Peter R.

    1991-01-01

The solution method consists of three parts: a grid movement scheme; an unsteady Euler equation solver; and a temporal coupling routine that links the dynamic grid to the Euler solver. The grid movement scheme is an algebraic method containing grid controls that generate a smooth grid that resolves the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling is performed with a grid prediction-correction procedure that is simple to implement and provides a grid that does not lag the solution in time. The adaptive solution method is tested by computing the unsteady inviscid solutions for a one-dimensional shock tube and a two-dimensional shock-vortex interaction.
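A common way to realize the grid movement idea above, clustering points where the solution gradient is severe, is equidistribution of a monitor function. The 1-D arc-length monitor below is a generic illustrative sketch, not this paper's algebraic scheme:

```python
import numpy as np

def equidistribute(x, u):
    """Return a grid with the same number of nodes, spaced so every cell
    carries an equal share of the monitor integral w = sqrt(1 + u_x^2)."""
    ux = np.gradient(u, x)
    w = np.sqrt(1.0 + ux**2)
    # cumulative monitor integral (trapezoid rule), then equally spaced targets
    W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    targets = np.linspace(0.0, W[-1], len(x))
    return np.interp(targets, W, x)

x = np.linspace(0.0, 1.0, 101)
u = np.tanh(50.0 * (x - 0.5))          # steep interior layer at x = 0.5
xa = equidistribute(x, u)
```

In a time-accurate setting this redistribution would be wrapped in the prediction-correction coupling the abstract describes, so the grid tracks the moving gradients rather than lagging them.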

  18. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than is possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
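The Richardson-extrapolation error estimate mentioned above compares results on grids of spacing h and 2h: for a method of order p, the error of the fine-grid value is approximately (Q_h - Q_2h)/(2^p - 1). A self-contained sketch using the trapezoid rule as the order-2 method (an illustrative choice, not the paper's MacCormack scheme):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def richardson_error(q_h, q_2h, p):
    """Estimated error of the fine-grid value q_h for an order-p method."""
    return (q_h - q_2h) / (2**p - 1)

a, b = 0.0, np.pi
q_2h = trapezoid(np.sin, a, b, 32)      # coarse grid
q_h = trapezoid(np.sin, a, b, 64)       # fine grid (half the spacing)
est = richardson_error(q_h, q_2h, p=2)  # trapezoid rule is second order
true_err = 2.0 - q_h                    # exact integral of sin on [0, pi] is 2
```

In an adaptive code, cells where this estimate exceeds the prescribed tolerance are the ones flagged for refinement.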

  19. An Innovative Adaptive Pushover Procedure Based on Storey Shear

    SciTech Connect

    Shakeri, Kazem; Shayanfar, Mohsen A.

    2008-07-08

Since conventional pushover analyses are unable to consider the effect of the higher modes and the progressive variation in dynamic properties, recent years have witnessed the development of some advanced adaptive pushover methods. However, in these methods, using quadratic combination rules to combine the modal forces results in positive load-pattern values at all storeys, and the sign reversals of the higher modes are removed; consequently, these methods do not have a major advantage over their non-adaptive counterparts. Herein, an innovative adaptive pushover method based on storey shear is proposed which can take the sign reversals in the higher modes into account. In each storey, the applied load pattern is derived from the storey shear profile; consequently, the sign of the applied loads in consecutive steps can change. The accuracy of the proposed procedure is examined by applying it to a 20-storey steel building, for which it gives a good estimate of the peak response in the inelastic phase.

  20. NIF Anti-Reflective Coating Solutions: Preparation, Procedures and Specifications

    SciTech Connect

    Suratwala, T; Carman, L; Thomas, I

    2003-07-01

The following document contains a detailed description of the preparation procedures for the antireflective coating solutions used for NIF optics. This memo includes preparation procedures for the coating solutions (sections 2.0-4.0), specifications and vendor information for the raw materials and equipment used (section 5.0), and QA specifications (section 6.0) and procedures (section 7.0) to determine the quality and repeatability of all the coating solutions. There are five different coating solutions that will be used to coat NIF optics. These solutions are listed below: (1) Colloidal silica (3%) in ethanol (2) Colloidal silica (2%) in sec-butanol (3) Colloidal silica (9%) in sec-butanol (deammoniated) (4) HMDS treated silica (10%) in decane (5) GR650 (3.3%) in ethanol/sec-butanol The names listed above are to be considered the official names for the solutions. They will be referred to by these names in the remainder of this document. Table 1 gives a summary of all the optics to be coated including: (1) the surface to be coated; (2) the type of solution to be used; (3) the coating method (meniscus, dip, or spin coating) to be used; (4) the type of coating (broadband, 1ω, 2ω, 3ω) to be made; (5) the number of optics to be coated; and (6) the type of post processing required (if any). Table 2 gives a summary of the batch compositions and measured properties of all five of these solutions.

  1. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

Some recent results obtained using a solution-adaptive finite element method for linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of validating the application of the new adaptive finite element methodology to fracture mechanics problems by computing demonstration problems and comparing the computed stress intensity factors to analytical results.

  2. Multigrid solution strategies for adaptive meshing problems

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1995-01-01

    This paper discusses the issues which arise when combining multigrid strategies with adaptive meshing techniques for solving steady-state problems on unstructured meshes. A basic strategy is described, and demonstrated by solving several inviscid and viscous flow cases. Potential inefficiencies in this basic strategy are exposed, and various alternate approaches are discussed, some of which are demonstrated with an example. Although each particular approach exhibits certain advantages, all methods have particular drawbacks, and the formulation of a completely optimal strategy is considered to be an open problem.

  3. Solution procedure of residue harmonic balance method and its applications

    NASA Astrophysics Data System (ADS)

    Guo, ZhongJin; Leung, A. Y. T.; Ma, XiaoYan

    2014-08-01

This paper presents a simple and rigorous solution procedure of residue harmonic balance for predicting accurate approximations of certain autonomous ordinary differential systems. In this solution procedure, no small parameter is assumed. The harmonic residue of the balance equation is separated into two parts at each step. The first part has the same number of Fourier terms as the present order of approximation, and the remaining part is used in the subsequent improvement. The corrections are governed by linear ordinary differential equations, so they can be solved easily by means of the harmonic balance method again. Three kinds of differential equations, involving general, fractional, and delay ordinary differential systems, are given as numerical examples. Highly accurate limit cycle frequencies and amplitudes are captured. The results match well with the exact or numerical solutions for a wide range of control parameters. Comparison with available results shows that the residue harmonic balance procedure is very effective for these autonomous differential systems. Moreover, the present method predicts not only the amplitude but also the frequency of the bifurcated periodic solution for delay ordinary differential equations.

  4. An Adaptive Ridge Procedure for L0 Regularization

    PubMed Central

    Frommlet, Florian; Nuel, Grégory

    2016-01-01

Penalized selection criteria like AIC or BIC are among the most popular methods for variable selection. Their theoretical properties have been studied intensively and are well understood, but making use of them in the case of high-dimensional data is difficult due to the non-convex optimization problem induced by L0 penalties. In this paper we introduce an adaptive ridge procedure (AR) in which iteratively reweighted ridge problems are solved, with the weights updated in such a way that the procedure converges towards selection with L0 penalties. After introducing AR, its specific shrinkage properties are studied in the particular case of orthogonal linear regression. Based on extensive simulations for the non-orthogonal case, as well as for Poisson regression, the performance of AR is studied and compared with SCAD and the adaptive LASSO. Furthermore, an efficient implementation of AR in the context of least-squares segmentation is presented. The paper ends with an illustrative example of applying AR to analyze GWAS data. PMID:26849123
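The iteratively reweighted ridge idea can be sketched in a few lines: each step solves a ridge problem whose per-coefficient weights w_j = 1/(beta_j^2 + delta^2) are rebuilt from the previous iterate, so the weighted L2 penalty approaches an L0 penalty as delta shrinks. This is a simplified reading of the procedure; the penalty level, delta, and the synthetic data below are illustrative assumptions, not the authors' exact tuning:

```python
import numpy as np

def adaptive_ridge(X, y, lam=4.0, delta=1e-5, n_iter=50):
    """Iteratively reweighted ridge converging toward L0-penalized selection:
    large coefficients get weight ~1/beta^2 (negligible penalty), while
    small ones get huge weights and are driven to zero."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # OLS starting point
    G = X.T @ X
    Xty = X.T @ y
    for _ in range(n_iter):
        w = 1.0 / (beta**2 + delta**2)             # adaptive weights
        beta = np.linalg.solve(G + lam * np.diag(w), Xty)
    return beta

# Sparse synthetic regression: true support is {0, 3, 7}.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta_true = np.zeros(10)
beta_true[[0, 3, 7]] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(200)
beta_hat = adaptive_ridge(X, y)
```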

  5. Transmission Line Adapted Analytical Power Charts Solution

    NASA Astrophysics Data System (ADS)

    Sakala, Japhet D.; Daka, James S. J.; Setlhaolo, Ditiro; Malichi, Alec Pulu

    2016-08-01

The performance of a transmission line has been assessed over the years using power charts. These are graphical representations, drawn to scale, of the equations that describe the performance of transmission lines. Various quantities that describe the performance, such as sending-end voltage, sending-end power, and the compensation required to give zero voltage regulation, may be deduced from the power charts. Usually, the required values are read off and then converted using the appropriate scales and known relationships. In this paper, the authors revisit this area of circle diagrams for transmission line performance. The work presented here formulates the mathematical model that analyses transmission line performance from the power chart relationships and then uses them to calculate the transmission line performance. In this proposed approach, it is not necessary to draw the power charts to obtain the solution, although the charts may still be drawn for visual presentation. The method is based on applying the derived equations and is simple to use since it does not require rigorous derivations.
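The equations underlying such charts are the standard two-port relations Vs = A·Vr + B·Ir and Is = C·Vr + D·Ir. A minimal sketch with textbook long-line ABCD constants (a generic formulation with illustrative per-unit values, not taken from this paper):

```python
import cmath

def abcd_long_line(z, y, length_km):
    """ABCD constants of a uniform long line from series impedance z
    (ohm/km) and shunt admittance y (S/km)."""
    gamma = cmath.sqrt(z * y)              # propagation constant (1/km)
    Zc = cmath.sqrt(z / y)                 # characteristic impedance (ohm)
    gl = gamma * length_km
    A = cmath.cosh(gl)
    B = Zc * cmath.sinh(gl)
    C = cmath.sinh(gl) / Zc
    return A, B, C, A                      # D = A for a symmetric line

def sending_end(A, B, C, D, Vr, Ir):
    """Sending-end voltage and current from receiving-end quantities."""
    return A * Vr + B * Ir, C * Vr + D * Ir

# Lossless 300 km line at no load (illustrative per-unit values).
A, B, C, D = abcd_long_line(z=0.5j, y=3.2e-6j, length_km=300.0)
Vs, Is = sending_end(A, B, C, D, Vr=1.0, Ir=0.0)
```

At no load on a lossless line, |Vs| = cos(beta*l)·|Vr| < |Vr|, the familiar Ferranti effect that the charts display graphically.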

  6. Three-dimensional solution-adaptive grid generation of composite configurations

    NASA Astrophysics Data System (ADS)

    Tu, Yen

    A solution adaptive grid generation procedure is developed and applied to 3-D inviscid transonic fluid flow around complex geometries using a composite block grid structure. The adaptation is based upon control functions in an elliptic grid generation system. The control function is constructed in a manner such that a proper grid network can be generated as a fluid flow solution is evolving. The grid network is boundary conforming for accurate representation of boundary conditions. The procedure implemented allows orthogonality at boundaries for more accurate computations, while smoothness is implicit in the elliptic equations. The approach allows multiple block grid systems to be constructed to treat complex configurations as well. The solution adaptive computational procedure was accomplished by coupling the elliptic grid generation technique with an implicit, finite volume, upwind Euler flow solver. In simulating transonic fluid flow around a finned body of revolution and a multiple store configuration, the grid systems adapt to pressure gradients in the flow field. Results obtained show that the technique is capable of generating grid networks proper for the simulation of complex aerodynamic configurations.

  7. A flux-split solution procedure for unsteady flow calculations

    NASA Technical Reports Server (NTRS)

    Pordal, H. S.; Khosla, P. K.; Rubin, S. G.

    1990-01-01

    The solution of reduced Navier Stokes (RNS) equations is considered using a flux-split procedure. Unsteady flow in a two dimensional engine inlet is computed. The problems of unstart and restart are investigated. A sparse matrix direct solver combined with domain decomposition strategy is used to compute the unsteady flow field at each instant of time. Strong shock-boundary layer interaction, time varying shocks and time varying recirculation regions are efficiently captured.

  8. Radiographic skills learning: procedure simulation using adaptive hypermedia.

    PubMed

    Costaridou, L; Panayiotakis, G; Pallikarakis, N; Proimos, B

    1996-10-01

    The design and development of a simulation tool supporting the learning of radiographic skills is reported. The tool comprises textual, graphical and iconic resources, organized according to a building-block, adaptive hypermedia approach, and is supported by an image base of radiographs. It offers interactive user-controlled simulation of radiographic imaging procedures. The development is based on a commercially available environment (Toolbook 3.0, Asymetrix Corporation). The core of the system is an attributed precedence (priority) graph, which represents a task outline (concept and resource structure) that is dynamically adjusted to the selected procedures. The user interface imitates a conventional radiography system, i.e. operating console, tube, table, patient and cassette. System parameters, such as patient positioning, focus-to-patient distance, magnification, field dimensions, tube voltage and mAs, are under user control. Their effects on image quality are presented by means of an image base acquired under controlled exposure conditions. Innovative use of hypermedia, computer-based learning and simulation principles and technology in the development of this tool resulted in an enhanced interactive environment providing radiographic parameter control and visualization of parameter effects on image quality. PMID:9038530

  9. Adaptation of sweeteners in water and in tannic acid solutions.

    PubMed

    Schiffman, S S; Pecore, S D; Booth, B J; Losee, M L; Carr, B T; Sattely-Miller, E; Graham, B G; Warwick, Z S

    1994-03-01

    Repeated exposure to a tastant often leads to a decrease in magnitude of the perceived intensity; this phenomenon is termed adaptation. The purpose of this study was to determine the degree of adaptation of the sweet response for a variety of sweeteners in water and in the presence of two levels of tannic acid. Sweetness intensity ratings were given by a trained panel for 14 sweeteners: three sugars (fructose, glucose, sucrose), two polyhydric alcohols (mannitol, sorbitol), two terpenoid glycosides (rebaudioside-A, stevioside), two dipeptide derivatives (alitame, aspartame), one sulfamate (sodium cyclamate), one protein (thaumatin), two N-sulfonyl amides (acesulfame-K, sodium saccharin), and one dihydrochalcone (neohesperidin dihydrochalcone). Panelists were given four isointense concentrations of each sweetener by itself and in the presence of two concentrations of tannic acid. Each sweetener concentration was tasted and rated four consecutive times with a 30 s interval between each taste and a 2 min interval between each concentration. Within a taste session, a series of concentrations of a given sweetener was presented in ascending order of magnitude. Adaptation was calculated as the decrease in intensity from the first to the fourth sample. The greatest adaptation in water solutions was found for acesulfame-K, Na saccharin, rebaudioside-A, and stevioside. This was followed by the dipeptide sweeteners, alitame and aspartame. The least adaptation occurred with the sugars, polyhydric alcohols, and neohesperidin dihydrochalcone. Adaptation was greater in tannic acid solutions than in water for six sweeteners. Adaptation of sweet taste may result from the desensitization of sweetener receptors analogous to the homologous desensitization found in the beta adrenergic system.

  10. Solutions and procedures to assure the flow in deepwater conditions

    SciTech Connect

    Gomes, M.G.F.M.; Pereira, F.B.; Lino, A.C.F.

    1996-12-31

    Petrobras has been developing deep water oil fields located in the Campos Basin, a vanguard subsea project that faces big challenges, one of them being wax deposition in production flowlines. Since 1990, Petrobras has therefore been studying methods to prevent and remove paraffin-wax deposits. Tests of techniques based on chemical inhibition of crystal growth, thermo-chemical cleaning (SGN), mechanical cleaning (pigging), electrical heating and thermal insulation were done, and the main results obtained at CENPES (Petrobras R and D Center) started to be used in the field in 1993. This paper presents solutions and procedures which have been used to minimize oil production losses at the Campos Basin, Brazil.

  11. An Efficient Dynamically Adaptive Mesh for Potentially Singular Solutions

    NASA Astrophysics Data System (ADS)

    Ceniceros, Hector D.; Hou, Thomas Y.

    2001-09-01

    We develop an efficient dynamically adaptive mesh generator for time-dependent problems in two or more dimensions. The mesh generator is motivated by the variational approach and is based on solving a new set of nonlinear elliptic PDEs for the mesh map. When coupled to a physical problem, the mesh map evolves with the underlying solution and maintains high adaptivity as the solution develops complicated structures and even singular behavior. The overall mesh strategy is simple to implement, avoids interpolation, and can be easily incorporated into a broad range of applications. The efficacy of the mesh is first demonstrated by two examples of blowing-up solutions to the 2-D semilinear heat equation. These examples show that the mesh can follow a finite-time singularity process with high adaptivity. The focus of the applications presented here, however, is the baroclinic generation of vorticity in a strongly layered 2-D Boussinesq fluid, a challenging problem. The moving mesh effectively follows the flow, resolving both its global features and the almost singular shear layers that develop dynamically. The numerical results show the fast collapse to small scales and an exponential vorticity growth.
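
    The equidistribution principle underlying such moving meshes is easy to illustrate in one dimension (a hypothetical 1-D stand-in for the paper's elliptic mesh PDEs, not the authors' method): points are placed so that each cell carries an equal share of a monitor function's integral.

```python
import numpy as np

def equidistribute(x, m, n_new):
    """Place n_new mesh points on [x[0], x[-1]] so that each cell holds
    an equal share of the integral of the monitor function m(x) > 0."""
    # cumulative integral of the monitor function (trapezoid rule)
    F = np.concatenate(([0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(x))))
    F /= F[-1]                                  # normalize to [0, 1]
    targets = np.linspace(0.0, 1.0, n_new)
    return np.interp(targets, F, x)             # invert F at equal levels

# Monitor peaked at x = 0.5: the new mesh clusters points there.
x = np.linspace(0.0, 1.0, 1001)
m = 1.0 + 100.0 * np.exp(-((x - 0.5) / 0.05)**2)
xn = equidistribute(x, m, 51)
```

    A time-dependent version repeats this as the solution (and hence the monitor) evolves, which is the role the nonlinear elliptic mesh PDEs play in higher dimensions.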

  12. Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1996-01-01

    A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: a linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures are briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
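
    The recursive-subdivision idea can be sketched with a minimal quadtree (illustrative only; cut cells, the flow solution, and the polygon-clipping algorithms are well beyond this sketch, and the refinement indicator here is a made-up geometric one):

```python
class Cell:
    """Minimal quadtree cell for solution-adaptive Cartesian refinement."""
    def __init__(self, x, y, size, depth=0):
        self.x, self.y, self.size, self.depth = x, y, size, depth
        self.children = []

    def refine(self, indicator, max_depth):
        # subdivide into four children while the indicator flags this cell
        if self.depth < max_depth and indicator(self):
            h = self.size / 2
            self.children = [Cell(self.x + dx, self.y + dy, h, self.depth + 1)
                             for dx in (0.0, h) for dy in (0.0, h)]
            for c in self.children:
                c.refine(indicator, max_depth)

    def leaves(self):
        # the leaf cells form the computational grid
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Refine wherever a cell center lies close to a circular "body" of radius 0.3
root = Cell(0.0, 0.0, 1.0)
near_body = lambda c: abs(((c.x + c.size/2)**2 + (c.y + c.size/2)**2)**0.5 - 0.3) < c.size
root.refine(near_body, max_depth=5)
```

    In a flow solver the indicator would instead be a solution-based quantity (e.g. an undivided gradient), and the tree walk also yields cell-to-cell connectivity.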

  13. Adaptive Multigrid Solution of Stokes' Equation on CELL Processor

    NASA Astrophysics Data System (ADS)

    Elgersma, M. R.; Yuen, D. A.; Pratt, S. G.

    2006-12-01

    We are developing an adaptive multigrid solver for treating nonlinear elliptic partial-differential equations, needed for mantle convection problems. Since multigrid is being used for the complete solution, not just as a preconditioner, spatial difference operators are kept nearly diagonally dominant by increasing density of the coarsest grid in regions where coefficients have rapid spatial variation. At each time step, the unstructured coarse grid is refined in regions where coefficients associated with the differential operators or boundary conditions have rapid spatial variation, and coarsened in regions where there is more gradual spatial variation. For three-dimensional problems, the boundary is two-dimensional, and regions where coefficients change rapidly are often near two-dimensional surfaces, so the coarsest grid is only fine near two-dimensional subsets of the three-dimensional space. Coarse grid density drops off exponentially with distance from boundary surfaces and rapid-coefficient-change surfaces. This unstructured coarse grid results in the number of coarse grid voxels growing proportional to surface area, rather than proportional to volume. This results in significant computational savings for the coarse-grid solution. This coarse-grid solution is then refined for the fine-grid solution, and multigrid methods have memory usage and runtime proportional to the number of fine-grid voxels. This adaptive multigrid algorithm is being implemented on the CELL processor, where each chip has eight floating point processors and each processor operates on four floating point numbers each clock cycle. Both the adaptive grid algorithm and the multigrid solver have very efficient parallel implementations, in order to take advantage of the CELL processor architecture.
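
    The core multigrid cycle, smooth, restrict the residual, solve a coarse correction, prolongate back, can be sketched for a 1-D Poisson problem (a uniform-grid toy; the abstract's unstructured adaptive coarse grids and CELL-specific parallelism are omitted, and all parameters are illustrative):

```python
import numpy as np

def v_cycle(u, f, n_smooth=3):
    """One multigrid V-cycle for -u'' = f on [0, 1] with u(0) = u(1) = 0,
    on a uniform grid of n intervals (n a power of two)."""
    n = u.size - 1
    h = 1.0 / n

    def smooth(u, sweeps):                # weighted-Jacobi smoother
        for _ in range(sweeps):
            u[1:-1] += 0.8 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
        return u

    u = smooth(u, n_smooth)
    if n > 2:
        r = np.zeros_like(u)              # residual r = f + u''
        r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
        rc = r[::2].copy()                # full-weighting restriction
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        e2 = v_cycle(np.zeros(n // 2 + 1), rc)          # coarse-grid correction
        u += np.interp(np.linspace(0.0, 1.0, n + 1),    # linear prolongation
                       np.linspace(0.0, 1.0, n // 2 + 1), e2)
        u = smooth(u, n_smooth)
    return u

# Solve -u'' = pi^2 sin(pi x), whose solution is sin(pi x)
n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(30):
    u = v_cycle(u, f)
```

    Using multigrid for the complete solution, as in the abstract, amounts to iterating such cycles to convergence rather than using one cycle as a preconditioner.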

  14. Ocular cytotoxicity evaluation of medical devices such as contact lens solutions and benefits of a rinse step in cleaning procedure.

    PubMed

    Dutot, Mélody; Vincent, Jacques; Martin-Brisac, Nicolas; Fabre, Isabelle; Grasmick, Christine; Rat, Patrice

    2013-01-01

    Contact lens care solutions are known to have toxic effects on the ocular surface. The ISO 10993-5 standard describes test methods to assess the cytotoxicity of medical devices, but it needs some improvements to discriminate among multipurpose contact lens care solutions. First, we evaluated the biological hazards associated with the use of ophthalmic solutions, running a collaborative study with the French medical agency to propose tools adapted to studying the ocular cytotoxicity of contact lens care solutions (human cell line, short incubation times, and no dilution of the test solutions). Then we took into account the potential risk posed by these ophthalmic solutions when adsorbed on contact lenses and released onto the ocular surface, highlighting the addition of a rinse step with unpreserved marine solution to the contact lens cleaning procedure to avoid side effects of contact lens care solutions.

  15. Procedures for Computing Transonic Flows for Control of Adaptive Wind Tunnels. Ph.D. Thesis - Technische Univ., Berlin, Mar. 1986

    NASA Technical Reports Server (NTRS)

    Rebstock, Rainer

    1987-01-01

    Numerical methods are developed for control of three dimensional adaptive test sections. The physical properties of the design problem occurring in the external field computation are analyzed, and a design procedure suited for solution of the problem is worked out. To do this, the desired wall shape is determined by stepwise modification of an initial contour. The necessary changes in geometry are determined with the aid of a panel procedure, or, with incident flow near the sonic range, with a transonic small perturbation (TSP) procedure. The designed wall shape, together with the wall deflections set during the tunnel run, are the input to a newly derived one-step formula which immediately yields the adapted wall contour. This is particularly important since the classical iterative adaptation scheme is shown to converge poorly for 3D flows. Experimental results obtained in the adaptive test section with eight flexible walls are presented to demonstrate the potential of the procedure. Finally, a method is described to minimize wall interference in 3D flows by adapting only the top and bottom wind tunnel walls.

  16. Solution adaptive grids applied to low Reynolds number flow

    NASA Astrophysics Data System (ADS)

    de With, G.; Holdø, A. E.; Huld, T. A.

    2003-08-01

    A numerical study has been undertaken to investigate the use of a solution adaptive grid for flow around a cylinder in the laminar flow regime. The main purpose of this work is twofold. The first aim is to investigate the suitability of a grid adaptation algorithm and the reduction in mesh size that can be obtained. Secondly, the uniform asymmetric flow structures are ideal for validating the mesh structures produced by mesh refinement and, consequently, the selected refinement criteria. The refinement variable used in this work is a product of the rate of strain and the mesh cell size, and contains two variables, Cm and Cstr, which determine the order of each term. By altering the order of either one of these terms, the refinement behaviour can be modified.

  17. Adjoint-based error estimation and mesh adaptation for the correction procedure via reconstruction method

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Wang, Z. J.

    2015-08-01

    Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction (CPR) formulation to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Super-convergent functional and error estimates for the output with the CPR method are obtained. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.
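
    For a linear problem, the adjoint-weighted residual reasoning behind such error estimates can be shown exactly in a few lines (a toy linear-algebra analogue of the idea, not the CPR discretization itself):

```python
import numpy as np

def adjoint_error_estimate(A, b, g, u_H):
    """Adjoint-weighted residual estimate of the error in an output
    functional J(u) = g @ u, given an approximate solution u_H of
    A u = b.  For linear problems the estimate is exact."""
    psi = np.linalg.solve(A.T, g)        # discrete adjoint solution
    r = b - A @ u_H                      # residual of the approximate solution
    return psi @ r                       # predicted output error J(u) - J(u_H)

# Demonstration on a random well-conditioned system
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8)) + 8.0 * np.eye(8)
b = rng.normal(size=8)
g = rng.normal(size=8)
u = np.linalg.solve(A, b)                # reference solution
u_H = u + 0.1 * rng.normal(size=8)       # surrogate "coarse" solution
est = adjoint_error_estimate(A, b, g, u_H)
```

    In mesh adaptation the elementwise contributions psi * r (rather than their sum) are what flag the cells to refine; for nonlinear discretizations the identity holds only to leading order, which is where dual consistency matters.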

  18. Adapting Assessment Procedures for Delivery via an Automated Format.

    ERIC Educational Resources Information Center

    Kelly, Karen L.; And Others

    The Office of Personnel Management (OPM) decided to explore alternative examining procedures for positions covered by the Administrative Careers with America (ACWA) examination. One requirement for new procedures was that they be automated for use with OPM's recently developed Microcomputer Assisted Rating System (MARS), a highly efficient system…

  19. Adaptive multigrid domain decomposition solutions for viscous interacting flows

    NASA Technical Reports Server (NTRS)

    Rubin, Stanley G.; Srinivasan, Kumar

    1992-01-01

    Several viscous incompressible flows with strong pressure interaction and/or axial flow reversal are considered with an adaptive multigrid domain decomposition procedure. Specific examples include the triple deck structure surrounding the trailing edge of a flat plate, the flow recirculation in a trough geometry, and the flow in a rearward facing step channel. For the latter case, there are multiple recirculation zones, of different character, for laminar and turbulent flow conditions. A pressure-based form of flux-vector splitting is applied to the Navier-Stokes equations, which are represented by an implicit lowest-order reduced Navier-Stokes (RNS) system and a purely diffusive, higher-order, deferred-corrector. A trapezoidal or box-like form of discretization insures that all mass conservation properties are satisfied at interfacial and outflow boundaries, even for this primitive-variable, non-staggered grid computation.

  20. Procedure for Adaptive Laboratory Evolution of Microorganisms Using a Chemostat.

    PubMed

    Jeong, Haeyoung; Lee, Sang J; Kim, Pil

    2016-01-01

    Natural evolution involves genetic diversity such as environmental change and a selection between small populations. Adaptive laboratory evolution (ALE) refers to the experimental situation in which evolution is observed using living organisms under controlled conditions and stressors; organisms are thereby artificially forced to make evolutionary changes. Microorganisms are subject to a variety of stressors in the environment and are capable of regulating certain stress-inducible proteins to increase their chances of survival. Naturally occurring spontaneous mutations bring about changes in a microorganism's genome that affect its chances of survival. Long-term exposure to chemostat culture provokes an accumulation of spontaneous mutations and renders the most adaptable strain dominant. Compared to the colony transfer and serial transfer methods, chemostat culture entails the highest number of cell divisions and, therefore, the highest number of diverse populations. Although chemostat culture for ALE requires more complicated culture devices, it is less labor intensive once the operation begins. Comparative genomic and transcriptome analyses of the adapted strain provide evolutionary clues as to how the stressors contribute to mutations that overcome the stress. The goal of the current paper is to bring about accelerated evolution of microorganisms under controlled laboratory conditions. PMID:27684991

  2. Unstructured viscous flow solution using adaptive hybrid grids

    NASA Technical Reports Server (NTRS)

    Galle, Martin

    1995-01-01

    A three dimensional finite volume scheme based on hybrid grids containing both tetrahedral and hexahedral cells is presented. The application to hybrid grids offers the possibility to combine the flexibility of tetrahedral meshes with the accuracy of hexahedral grids. An algorithm to compute a dual mesh for the entire computational domain was developed. The dual mesh technique guarantees conservation in the whole flow field even at interfaces between hexahedral and tetrahedral domains and enables the employment of an accurate upwind flow solver. The hybrid mesh can be adapted to the solution by dividing cells in areas of insufficient resolution. The method is tested on different viscous and inviscid cases for hypersonic, transonic and subsonic flows.

  3. A Solution Adaptive Technique Using Tetrahedral Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2000-01-01

    An adaptive unstructured grid refinement technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid. Simple functions of flow quantities are employed to detect dominant features of the flowfield. The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.

  4. Cooperative solutions coupling a geometry engine and adaptive solver codes

    NASA Technical Reports Server (NTRS)

    Dickens, Thomas P.

    1995-01-01

    Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.

  5. A mineral separation procedure using hot Clerici solution

    USGS Publications Warehouse

    Rosenblum, Sam

    1974-01-01

    Careful boiling of Clerici solution in a Pyrex test tube in an oil bath is used to float minerals with densities up to 5.0 in order to obtain purified concentrates of monazite (density 5.1) for analysis. The "sink" and "float" fractions are trapped in solidified Clerici salts on rapid chilling, and the fractions are washed into separate filter papers with warm water. The hazardous nature of Clerici solution requires unusual care in handling.

  6. Element-by-element Solution Procedures for Nonlinear Structural Analysis

    NASA Technical Reports Server (NTRS)

    Hughes, T. J. R.; Winget, J. M.; Levit, I.

    1984-01-01

    Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in nonlinear structural mechanics. Architectural and data base advantages of the present algorithms over traditional direct elimination schemes are noted. Results of calculations suggest considerable potential for the methods described.

  7. [Adaptive procedures for measuring arterial blood flow velocity in retinal vessels using indicator technique].

    PubMed

    Vilser, W; Schack, B; Bareshova, E; Senff, I; Bräuer-Burchardt, C; Münch, K; Strobel, J

    1995-10-01

    There are highly significant differences (up to 800%) between the arterial blood velocity measurements obtained with the indicator and laser-Doppler techniques. A new measuring procedure for the analysis of indicator dilution curves was developed based on an indicator model and experimental results. The use of this new measuring procedure reduces the mean systematic error between the indicator and laser-Doppler techniques to values around 10%. With the introduction of adaptive measuring arrays for the creation of indicator dilution curves and the application of adaptive algorithms for centering and spectral normalization of the dilution curves, improved reproducibility can be expected.

  8. Adaptive correction procedure for TVL1 image deblurring under impulse noise

    NASA Astrophysics Data System (ADS)

    Bai, Minru; Zhang, Xiongjun; Shao, Qianqian

    2016-08-01

    For the problem of image restoration of observed images corrupted by blur and impulse noise, the widely used TVL1 model may deviate from both the data-acquisition model and the prior model, especially for high noise levels. In order to seek a solution of high recovery quality beyond the reach of the TVL1 model, we propose an adaptive correction procedure for TVL1 image deblurring under impulse noise. Then, a proximal alternating direction method of multipliers (ADMM) is presented to solve the corrected TVL1 model and its convergence is also established under very mild conditions. It is verified by numerical experiments that our proposed approach outperforms the TVL1 model in terms of signal-to-noise ratio (SNR) values and visual quality, especially for high noise levels: it can handle salt-and-pepper noise as high as 90% and random-valued noise as high as 70%. In addition, a comparison with a state-of-the-art method, the two-phase method, demonstrates the superiority of the proposed approach.
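
    The L1 terms in such models are typically handled inside ADMM through the soft-thresholding proximal operator, which has a closed form; a minimal sketch (the paper's corrected model and full proximal ADMM are not reproduced here):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1: the closed-form subproblem
    solution that makes ADMM splitting efficient for L1 terms."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Small residuals are zeroed; large, impulse-like ones are merely shrunk,
# which is why L1 data fidelity is robust to salt-and-pepper outliers.
prox = soft_threshold(np.array([3.0, -0.2, 0.5, -4.0]), 0.5)
```

    Within ADMM this operator is applied to the data-fidelity splitting variable at each iteration, alternating with a TV-regularized quadratic subproblem and a multiplier update.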

  9. Development of generalized block correction procedures for the solution of discretized Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Kelkar, Kanchan M.; Patankar, Suhas V.

    1987-01-01

    Effort is directed towards developing a solution method which combines advantages of both the iterative and the direct methods. It involves iterative solution on the fine grid, convergence of which is enhanced by a direct solution for correction quantities on a coarse grid. The proposed block correction procedure was applied to compute recirculating flow in a driven cavity.

  10. Measurement of Actinides in Molybdenum-99 Solution Analytical Procedure

    SciTech Connect

    Soderquist, Chuck Z.; Weaver, Jamie L.

    2015-11-01

    This document is a companion report to a previous report, PNNL 24519, Measurement of Actinides in Molybdenum-99 Solution, A Brief Review of the Literature, August 2015. In this companion report, we report a fast, accurate, newly developed analytical method for measurement of trace alpha-emitting actinide elements in commercial high-activity molybdenum-99 solution. Molybdenum-99 is widely used to produce 99mTc for medical imaging. Because it is used as a radiopharmaceutical, its purity must be proven to be extremely high, particularly for the alpha emitting actinides. The sample of 99Mo solution is measured into a vessel (such as a polyethylene centrifuge tube) and acidified with dilute nitric acid. A gadolinium carrier is added (50 µg). Tracers and spikes are added as necessary. Then the solution is made strongly basic with ammonium hydroxide, which causes the gadolinium carrier to precipitate as hydrous Gd(OH)3. The precipitate of Gd(OH)3 carries all of the actinide elements. The suspension of gadolinium hydroxide is then passed through a membrane filter to make a counting mount suitable for direct alpha spectrometry. The high-activity 99Mo and 99mTc pass through the membrane filter and are separated from the alpha emitters. The gadolinium hydroxide, carrying any trace actinide elements that might be present in the sample, forms a thin, uniform cake on the surface of the membrane filter. The filter cake is first washed with dilute ammonium hydroxide to push the last traces of molybdate through, then with water. The filter is then mounted on a stainless steel counting disk. Finally, the alpha emitting actinide elements are measured by alpha spectrometry.

  11. The block adaptive multigrid method applied to the solution of the Euler equations

    NASA Technical Reports Server (NTRS)

    Pantelelis, Nikos

    1993-01-01

    In the present study, a scheme capable of solving complex nonlinear systems of equations very quickly and robustly is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on the prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (an 18-fold acceleration of the solution) using one fourth of the volumes of a global grid, with the same solution accuracy, for two test cases.

  12. Three-dimensional Navier-Stokes calculations using solution-adapted grids

    NASA Technical Reports Server (NTRS)

    Henderson, T. L.; Huang, W.; Lee, K. D.; Choo, Y. K.

    1993-01-01

    A three-dimensional solution-adaptive grid generation technique is presented. The adaptation technique redistributes grid points to improve the accuracy of a flow solution without increasing the number of grid points. It is applicable to structured grids with a multiblock topology. The method uses a numerical mapping and potential theory to modify the initial grid distribution based on the properties of the flow solution on the initial grid. The technique is demonstrated with two examples - a transonic finite wing and a supersonic blunt fin. The advantages are shown by comparing flow solutions on the adapted grids with those on the initial grids.

  13. A Procedure for Controlling General Test Overlap in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Chen, Shu-Ying

    2010-01-01

    To date, exposure control procedures that are designed to control test overlap in computerized adaptive tests (CATs) are based on the assumption of item sharing between pairs of examinees. However, in practice, examinees may obtain test information from more than one previous test taker. This larger scope of information sharing needs to be…

  14. Solution of Reactive Compressible Flows Using an Adaptive Wavelet Method

    NASA Astrophysics Data System (ADS)

    Zikoski, Zachary; Paolucci, Samuel; Powers, Joseph

    2008-11-01

    This work presents numerical simulations of reactive compressible flow, including detailed multicomponent transport, using an adaptive wavelet algorithm. The algorithm allows for dynamic grid adaptation which enhances our ability to fully resolve all physically relevant scales. The thermodynamic properties, equation of state, and multicomponent transport properties are provided by CHEMKIN and TRANSPORT libraries. Results for viscous detonation in a H2:O2:Ar mixture, and other problems in multiple dimensions, are included.

  15. A structured multi-block solution-adaptive mesh algorithm with mesh quality assessment

    NASA Technical Reports Server (NTRS)

    Ingram, Clint L.; Laflin, Kelly R.; Mcrae, D. Scott

    1995-01-01

The dynamic solution adaptive grid algorithm, DSAGA3D, is extended to automatically adapt 2-D structured multi-block grids, including adaption of the block boundaries. The extension is general, requiring only input data concerning block structure, connectivity, and boundary conditions. Imbedded grid singular points are permitted, but must be prevented from moving in space. Solutions for workshop cases 1 and 2 are obtained on multi-block grids and illustrate both increased resolution of and alignment with the solution. A mesh quality assessment criterion is proposed to determine how well a given mesh resolves and aligns with the solution obtained upon it. The criterion is used to evaluate the grid quality for solutions of workshop case 6 obtained on both static and dynamically adapted grids. The results indicate that this criterion shows promise as a means of evaluating resolution.

  16. An iterative transformation procedure for numerical solution of flutter and similar characteristic-value problems

    NASA Technical Reports Server (NTRS)

    Gossard, Myron L

    1952-01-01

    An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.
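
Wielandt's iterative transformation is the ancestor of modern deflation: once the dominant characteristic value is found, the problem is transformed so that iteration converges to the next one. A minimal sketch of the idea for a symmetric matrix, using power iteration with deflation (the 3x3 matrix and iteration count are illustrative, not from the report):

```python
import numpy as np

def power_iteration(A, iters=500, seed=0):
    """Dominant eigenpair of A by power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v, v  # Rayleigh quotient and eigenvector

def deflate(A, lam, v):
    """Remove the known eigenpair (lam, v) so that power iteration
    on the deflated matrix converges to the next eigenvalue."""
    v = v / np.linalg.norm(v)
    return A - lam * np.outer(v, v)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # eigenvalues 3 + sqrt(3), 3, 3 - sqrt(3)
lam1, v1 = power_iteration(A)
lam2, _ = power_iteration(deflate(A, lam1, v1))
```

For flutter problems the matrices are nonsymmetric, and the full Wielandt transformation uses a separate left vector in the deflation term, but the principle is the same.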

  17. A Two Stage Solution Procedure for Production Planning System with Advance Demand Information

    NASA Astrophysics Data System (ADS)

    Ueno, Nobuyuki; Kadomoto, Kiyotaka; Hasuike, Takashi; Okuhara, Koji

We model the ‘Naiji System’, a cooperation technique unique to manufacturers and suppliers in Japan. We propose a two stage solution procedure for a production planning problem with advance demand information, which is called ‘Naiji’. Under demand uncertainty, this model is formulated as a nonlinear stochastic programming problem which minimizes the sum of production cost and inventory holding cost subject to a probabilistic constraint and some linear production constraints. Exploiting the convexity and the special structure of the correlation matrix in the problem, where inventory for different periods is not independent, we propose a two stage solution procedure comprising a Mass Customization Production Planning & Management System (MCPS) and a Variable Mesh Neighborhood Search (VMNS) based on meta-heuristics. It is shown that the proposed solution procedure obtains a near-optimal solution efficiently and is practical for making a good master production schedule at the suppliers.

  18. The use of solution adaptive grids in solving partial differential equations

    NASA Technical Reports Server (NTRS)

    Anderson, D. A.; Rai, M. M.

    1982-01-01

    The grid point distribution used in solving a partial differential equation using a numerical method has a substantial influence on the quality of the solution. An adaptive grid which adjusts as the solution changes provides the best results when the number of grid points available for use during the calculation is fixed. Basic concepts used in generating and applying adaptive grids are reviewed in this paper, and examples illustrating applications of these concepts are presented.
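
The central concept reviewed here is equidistribution: grid points are redistributed so that a weight function measuring solution activity is spread evenly across the cells. A 1D sketch (the arc-length-style weight and the tanh test profile are illustrative choices, not from the paper):

```python
import numpy as np

def equidistribute(x, u, alpha=10.0):
    """Redistribute grid points x so that the weight
    w = sqrt(1 + alpha*(du/dx)^2) integrates equally between nodes."""
    w = np.sqrt(1.0 + alpha * np.gradient(u, x) ** 2)
    # cumulative weight (trapezoidal rule), normalized to [0, 1]
    W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    W /= W[-1]
    # invert: place the nodes at equal increments of W
    return np.interp(np.linspace(0.0, 1.0, len(x)), W, x)

x = np.linspace(-1.0, 1.0, 41)
u = np.tanh(20.0 * x)          # steep internal layer at x = 0
x_new = equidistribute(x, u)
# spacing near the layer shrinks well below the uniform spacing of 0.05
```

The same number of points is used before and after adaptation, matching the fixed-point-budget setting described in the abstract.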

  19. Paradoxical results of adaptive false discovery rate procedures in neuroimaging studies.

    PubMed

    Reiss, Philip T; Schwartzman, Armin; Lu, Feihan; Huang, Lei; Proal, Erika

    2012-12-01

Adaptive false discovery rate (FDR) procedures, which offer greater power than the original FDR procedure of Benjamini and Hochberg, are often applied to statistical maps of the brain. When a large proportion of the null hypotheses are false, as in the case of widespread effects such as cortical thinning throughout much of the brain, adaptive FDR methods can surprisingly reject more null hypotheses than not accounting for multiple testing at all, i.e., using uncorrected p-values. A straightforward mathematical argument is presented to explain why this can occur with the q-value method of Storey and colleagues, and a simulation study shows that it can also occur, to a lesser extent, with a two-stage FDR procedure due to Benjamini and colleagues. We demonstrate the phenomenon with reference to a published data set documenting cortical thinning in attention deficit/hyperactivity disorder. The paper concludes with recommendations for how to proceed when adaptive FDR results of this kind are encountered in practice.
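
The mechanism is easy to reproduce: an adaptive procedure estimates the proportion pi0 of true nulls and effectively runs Benjamini-Hochberg at level alpha/pi0, so when most nulls are false the estimated pi0 is small and the rejection threshold becomes very liberal. A hedged sketch (a Storey-type adaptive variant, not the exact q-value implementation discussed in the paper; the p-value mixture is simulated):

```python
import numpy as np

def bh_reject(p, alpha=0.05):
    """Benjamini-Hochberg: reject the k smallest p-values, where k is
    the largest index with p_(k) <= k*alpha/m."""
    p = np.asarray(p)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * alpha
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

def adaptive_bh_reject(p, alpha=0.05, lam=0.5):
    """Storey-type adaptive BH: estimate pi0, then run BH at alpha/pi0."""
    p = np.asarray(p)
    pi0 = min(1.0, (np.sum(p > lam) + 1) / (len(p) * (1.0 - lam)))
    return bh_reject(p, alpha / pi0)

rng = np.random.default_rng(1)
# 90% false nulls (a widespread effect), 10% true nulls
p = np.concatenate([rng.beta(0.05, 5.0, 900), rng.uniform(0.0, 1.0, 100)])
n_bh = bh_reject(p).sum()
n_adaptive = adaptive_bh_reject(p).sum()
# n_adaptive >= n_bh: the small estimated pi0 inflates the working alpha
```

With pi0 estimated near 0.1, the adaptive working level is roughly ten times alpha, which is exactly the regime in which the paradoxical behavior described above appears.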

  20. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    SciTech Connect

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  1. An adaptive weighted ensemble procedure for efficient computation of free energies and first passage rates.

    PubMed

    Bhatt, Divesh; Bahar, Ivet

    2012-09-14

We introduce an adaptive weighted-ensemble procedure (aWEP) for efficient and accurate evaluation of first-passage rates between states for two-state systems. The basic idea that distinguishes aWEP from conventional weighted-ensemble (WE) methodology is the division of the configuration space into smaller regions and equilibration of the trajectories within each region upon adaptive partitioning of the regions themselves into small grids. The equilibrated conditional/transition probabilities between each pair of regions lead to the determination of populations of the regions and the first-passage times between regions, which in turn are combined to evaluate the first passage times for the forward and backward transitions between the two states. The application of the procedure to a non-trivial coarse-grained model of a 70-residue calcium binding domain of calmodulin is shown to efficiently yield information on the equilibrium probabilities of the two states as well as their first passage times. Notably, the new procedure is significantly more efficient than the canonical implementation of the WE procedure, and this improvement becomes even more significant at low temperatures.
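
The weight bookkeeping at the heart of any WE-style scheme can be sketched in a few lines: within one region, walkers are resampled to a target count while the total probability weight is conserved. This is one simple statistically exact resampler; the published aWEP splitting and merging rules differ in detail:

```python
import numpy as np

def resample_bin(weights, target, rng):
    """Resample the walkers in one bin down (or up) to `target`
    walkers, conserving total weight: draw walkers with probability
    proportional to weight, each new walker carrying total/target."""
    weights = np.asarray(weights, dtype=float)
    total = weights.sum()
    idx = rng.choice(len(weights), size=target, p=weights / total)
    return idx, np.full(target, total / target)

rng = np.random.default_rng(0)
w = np.array([0.5, 0.3, 0.1, 0.05, 0.05])   # walker weights in one bin
idx, new_w = resample_bin(w, target=4, rng=rng)
# the bin's total probability weight is unchanged by resampling
```

Conserving the weight exactly is what makes the region populations, and hence the first-passage times assembled from them, unbiased.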

  2. Numerical Simulations of STOVL Hot Gas Ingestion in Ground Proximity Using a Multigrid Solution Procedure

    NASA Technical Reports Server (NTRS)

    Wang, Gang

    2003-01-01

    A multi grid solution procedure for the numerical simulation of turbulent flows in complex geometries has been developed. A Full Multigrid-Full Approximation Scheme (FMG-FAS) is incorporated into the continuity and momentum equations, while the scalars are decoupled from the multi grid V-cycle. A standard kappa-Epsilon turbulence model with wall functions has been used to close the governing equations. The numerical solution is accomplished by solving for the Cartesian velocity components either with a traditional grid staggering arrangement or with a multiple velocity grid staggering arrangement. The two solution methodologies are evaluated for relative computational efficiency. The solution procedure with traditional staggering arrangement is subsequently applied to calculate the flow and temperature fields around a model Short Take-off and Vertical Landing (STOVL) aircraft hovering in ground proximity.

  3. An adaptive nonlinear solution scheme for reservoir simulation

    SciTech Connect

    Lett, G.S.

    1996-12-31

Numerical reservoir simulation involves solving large, nonlinear systems of PDE with strongly discontinuous coefficients. Because of the large demands on computer memory and CPU, most users must perform simulations on very coarse grids. The average properties of the fluids and rocks must be estimated on these grids. These coarse grid "effective" properties are costly to determine, and risky to use, since their optimal values depend on the fluid flow being simulated. Thus, they must be found by trial-and-error techniques, and the more coarse the grid, the poorer the results. This paper describes a numerical reservoir simulator which accepts fine scale properties and automatically generates multiple levels of coarse grid rock and fluid properties. The fine grid properties and the coarse grid simulation results are used to estimate discretization errors with multilevel error expansions. These expansions are local, and identify areas requiring local grid refinement. These refinements are added adaptively by the simulator, and the resulting composite grid equations are solved by a nonlinear Fast Adaptive Composite (FAC) Grid method, with a damped Newton algorithm being used on each local grid. The nonsymmetric linear systems of equations resulting from Newton's method are in turn solved by a preconditioned Conjugate Gradients-like algorithm. The scheme is demonstrated by performing fine and coarse grid simulations of several multiphase reservoirs from around the world.
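
The damped Newton component is standard and easy to sketch: take the full Newton step only if it reduces the residual norm, otherwise halve it until it does. A toy two-equation system (not a reservoir model) illustrates:

```python
import numpy as np

def damped_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method with simple backtracking damping: halve the
    step until the residual norm decreases."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        step = np.linalg.solve(J(x), -r)
        lam = 1.0
        while (np.linalg.norm(F(x + lam * step)) >= np.linalg.norm(r)
               and lam > 1e-6):
            lam *= 0.5          # damp until the step reduces the residual
        x = x + lam * step
    return x

# toy nonlinear system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]],
                        [v[1], v[0]]])
root = damped_newton(F, J, np.array([2.0, 0.3]))
```

In the paper's setting the linear solve inside each step is replaced by the preconditioned Conjugate Gradients-like iteration, and the damping guards against the strongly discontinuous coefficients.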

  4. Prism Adaptation and Aftereffect: Specifying the Properties of a Procedural Memory System

    PubMed Central

    Fernández-Ruiz, Juan; Díaz, Rosalinda

    1999-01-01

    Prism adaptation, a form of procedural learning, is a phenomenon in which the motor system adapts to new visuospatial coordinates imposed by prisms that displace the visual field. Once the prisms are withdrawn, the degree and strength of the adaptation can be measured by the spatial deviation of the motor actions in the direction opposite to the visual displacement imposed by the prisms, a phenomenon known as aftereffect. This study was designed to define the variables that affect the acquisition and retention of the aftereffect. Subjects were required to throw balls to a target in front of them before, during, and after lateral displacement of the visual field with prismatic spectacles. The diopters of the prisms and the number of throws were varied among different groups of subjects. The results show that the adaptation process is dependent on the number of interactions between the visual and motor system, and not on the time spent wearing the prisms. The results also show that the magnitude of the aftereffect is highly correlated with the magnitude of the adaptation, regardless of the diopters of the prisms or the number of throws. Finally, the results suggest that persistence of the aftereffect depends on the number of throws after the adaptation is complete. On the basis of these results, we propose that the system underlying this kind of learning stores at least two different parameters, the contents (measured as the magnitude of displacement) and the persistence (measured as the number of throws to return to the baseline) of the learned information. PMID:10355523

  5. Multi-threaded adaptive extrapolation procedure for Feynman loop integrals in the physical region

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Assaf, R.

    2013-08-01

    Feynman loop integrals appear in higher order corrections of interaction cross section calculations in perturbative quantum field theory. The integrals are computationally intensive especially in view of singularities which may occur within the integration domain. For the treatment of threshold and infrared singularities we developed techniques using iterated (repeated) adaptive integration and extrapolation. In this paper we describe a shared memory parallelization and its application to one- and two-loop problems, by multi-threading in the outer integrations of the iterated integral. The implementation is layered over OpenMP and retains the adaptive procedure of the sequential method exactly. We give performance results for loop integrals associated with various types of diagrams including one-loop box, pentagon, two-loop self-energy and two-loop vertex diagrams.
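
The extrapolation component can be illustrated in its simplest form: Richardson extrapolation of a sequence of approximations with a known error expansion. The authors use more general nonlinear extrapolation for singular integrands; this Romberg-style sketch with an illustrative integrand only conveys the idea:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def richardson(seq, ratio=4.0):
    """Collapse a sequence T(h), T(h/2), ... whose error expands in
    even powers of h: each sweep cancels the current leading term."""
    seq = list(seq)
    while len(seq) > 1:
        seq = [(ratio * b - a) / (ratio - 1.0) for a, b in zip(seq, seq[1:])]
        ratio *= 4.0            # next error term is h^4, then h^6, ...
    return seq[0]

approx = [trapezoid(np.cos, 0.0, 1.0, 2 ** k) for k in range(2, 7)]
extrapolated = richardson(approx)   # converges toward sin(1)
```

In the loop-integral setting, the sequence being extrapolated comes from iterated adaptive integration with a regularization parameter driven toward the singular limit, and each sequence member is expensive, which is what makes the outer-level multi-threading worthwhile.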

  6. A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania.

    PubMed

    Bradford, Kathryn; Abrahams, Leslie; Hegglin, Miriam; Klima, Kelly

    2015-10-01

With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at-risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare data sets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.

  8. A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania

    NASA Astrophysics Data System (ADS)

    Klima, K.; Abrahams, L.; Bradford, K.; Hegglin, M.

    2015-12-01

With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at-risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare datasets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.

  9. An adaptive procedure for the numerical parameters of a particle simulation

    NASA Astrophysics Data System (ADS)

    Galitzine, Cyril; Boyd, Iain D.

    2015-01-01

In this article, a computational procedure that automatically determines the optimum time step, cell weight and species weights for steady-state multi-species DSMC (direct simulation Monte Carlo) simulations is presented. The time step is required to satisfy the basic requirements of the DSMC method while the weight and relative weights fields are chosen so as to obtain a user-specified average number of particles in all cells of the domain. The procedure allows efficient DSMC simulations to be conducted with minimal user input and can be integrated into existing DSMC codes. The adaptive method is used to simulate a test case consisting of two counterflowing jets at a Knudsen number of 0.015. Large accuracy gains for sampled number densities and velocities over a standard simulation approach for the same number of particles are observed.
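
The weight-adaptation idea, choosing per-cell statistical weights so that every cell holds a user-specified average number of particles, can be sketched with a toy relaxation loop. The update rule and relaxation factor here are illustrative, not the published procedure:

```python
import numpy as np

def adapt_weights(base_counts, weights, n_target, relax=0.5):
    """Under-relaxed update of per-cell statistical weights. A cell
    with weight W holds base_counts / W simulated particles, so
    raising W where counts exceed n_target (each particle then
    represents more real molecules) drives all cells toward n_target."""
    counts = base_counts / weights
    return weights * (counts / n_target) ** relax

base_counts = np.array([400.0, 150.0, 25.0])  # counts at unit weight
weights = np.ones(3)
for _ in range(25):
    weights = adapt_weights(base_counts, weights, n_target=100.0)
# converged weights leave every cell with n_target particles
```

The under-relaxation (exponent 0.5) avoids the oscillation a direct reassignment would cause when counts are themselves noisy samples.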

  10. Parallelization of an Adaptive Multigrid Algorithm for Fast Solution of Finite Element Structural Problems

    SciTech Connect

    Crane, N K; Parsons, I D; Hjelmstad, K D

    2002-03-21

Adaptive mesh refinement selectively subdivides the elements of a coarse user supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundred processors.
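
The multigrid component can be illustrated by a textbook two-grid cycle for the 1D Poisson equation: smooth, restrict the residual, solve the coarse problem, prolongate the correction, and smooth again. This is a serial sketch; the paper's contribution is the parallel, adaptive, elasticity setting:

```python
import numpy as np

def laplacian_matrix(n, h):
    """Standard 3-point discretization of -u'' with zero Dirichlet BCs."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def two_grid(u, f, n_smooth=3, omega=2.0 / 3.0):
    """One two-grid cycle for -u'' = f on (0, 1); u and f hold the
    n interior values (n odd), with h = 1/(n+1)."""
    n = len(u)
    h = 1.0 / (n + 1)
    A = laplacian_matrix(n, h)

    def jacobi(v, sweeps):
        for _ in range(sweeps):          # weighted Jacobi smoothing
            v = v + omega * (h**2 / 2.0) * (f - A @ v)
        return v

    u = jacobi(u, n_smooth)              # pre-smooth
    r = f - A @ u                        # fine-grid residual
    m = (n - 1) // 2
    # full-weighting restriction to the coarse grid
    rc = 0.25 * (r[0:2*m - 1:2] + 2.0 * r[1:2*m:2] + r[2:2*m + 1:2])
    ec = np.linalg.solve(laplacian_matrix(m, 2.0 * h), rc)  # coarse solve
    # linear-interpolation prolongation back to the fine grid
    ecp = np.concatenate(([0.0], ec, [0.0]))
    e = np.zeros(n)
    e[1:2*m:2] = ec
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])
    return jacobi(u + e, n_smooth)       # correct and post-smooth

n = 63
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(12):
    u = two_grid(u, f)
```

Recursing on the coarse solve instead of solving it directly gives the V-cycle; the coarse levels are exactly the small fixed-size data sets the abstract identifies as the first parallel challenge.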

  11. An adaptive clinical trials procedure for a sensitive subgroup examined in the multiple sclerosis context.

    PubMed

    Riddell, Corinne A; Zhao, Yinshan; Petkau, John

    2016-08-01

The biomarker-adaptive threshold design (BATD) allows researchers to simultaneously study the efficacy of treatment in the overall group and to investigate the relationship between a hypothesized predictive biomarker and the treatment effect on the primary outcome. It was originally developed for survival outcomes for Phase III clinical trials where the biomarker of interest is measured on a continuous scale. In this paper, generalizations of the BATD to accommodate count biomarkers and outcomes are developed and then studied in the multiple sclerosis (MS) context where the number of relapses is a commonly used outcome. Through simulation studies, we find that the BATD has increased power compared with a traditional fixed procedure under varying scenarios for which there exists a sensitive patient subgroup. As an illustration, we apply the procedure for two hypothesized markers, baseline enhancing lesion count and disease duration at baseline, using data from a previously completed trial. MS duration appears to be a predictive marker for this dataset, and the procedure indicates that the treatment effect is strongest for patients who have had MS for less than 7.8 years. The procedure holds promise of enhanced statistical power when the treatment effect is greatest in a sensitive patient subgroup.

  12. Application of a solution adaptive grid scheme, SAGE, to complex three-dimensional flows

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1991-01-01

    A new three-dimensional (3D) adaptive grid code based on the algebraic, solution-adaptive scheme of Nakahashi and Deiwert is developed and applied to a variety of problems. The new computer code, SAGE, is an extension of the same-named two-dimensional (2D) solution-adaptive program that has already proven to be a powerful tool in computational fluid dynamics applications. The new code has been applied to a range of complex three-dimensional, supersonic and hypersonic flows. Examples discussed are a tandem-slot fuel injector, the hypersonic forebody of the Aeroassist Flight Experiment (AFE), the 3D base flow behind the AFE, the supersonic flow around a 3D swept ramp and a generic, hypersonic, 3D nozzle-plume flow. The associated adapted grids and the solution enhancements resulting from the grid adaption are presented for these cases. Three-dimensional adaption is more complex than its 2D counterpart, and the complexities unique to the 3D problems are discussed.

  13. An adaptive gating approach for x-ray dose reduction during cardiac interventional procedures

    SciTech Connect

    Abdel-Malek, A.; Yassa, F.; Bloomer, J.

    1994-03-01

    The increasing number of cardiac interventional procedures has resulted in a tremendous increase in the absorbed x-ray dose by radiologists as well as patients. A new method is presented for x-ray dose reduction which utilizes adaptive tube pulse-rate scheduling in pulsed fluoroscopic systems. In the proposed system, pulse-rate scheduling depends on the heart muscle activity phase determined through continuous guided segmentation of the patient's electrocardiogram (ECG). Displaying images generated at the proposed adaptive nonuniform rate is visually unacceptable; therefore, a frame-filling approach is devised to ensure a 30 frame/sec display rate. The authors adopted two approaches for the frame-filling portion of the system depending on the imaging mode used in the procedure. During cine-mode imaging (high x-ray dose), collected image frame-to-frame pixel motion is estimated using a pel-recursive algorithm followed by motion-based pixel interpolation to estimate the frames necessary to increase the rate to 30 frames/sec. The other frame-filling approach is adopted during fluoro-mode imaging (low x-ray dose), characterized by low signal-to-noise ratio images. This approach consists of simply holding the last collected frame for as many frames as necessary to maintain the real-time display rate.

  14. Combined LAURA-UPS solution procedure for chemically-reacting flows. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Wood, William A.

    1994-01-01

    A new procedure seeks to combine the thin-layer Navier-Stokes solver LAURA with the parabolized Navier-Stokes solver UPS for the aerothermodynamic solution of chemically-reacting air flowfields. The interface protocol is presented and the method is applied to two slender, blunted shapes. Both axisymmetric and three dimensional solutions are included with surface pressure and heat transfer comparisons between the present method and previously published results. The case of Mach 25 flow over an axisymmetric six degree sphere-cone with a noncatalytic wall is considered to 100 nose radii. A stability bound on the marching step size was observed with this case and is attributed to chemistry effects resulting from the noncatalytic wall boundary condition. A second case with Mach 28 flow over a sphere-cone-cylinder-flare configuration is computed at both two and five degree angles of attack with a fully-catalytic wall. Surface pressures are seen to be within five percent with the present method compared to the baseline LAURA solution and heat transfers are within 10 percent. The effect of grid resolution is investigated and the nonequilibrium results are compared with a perfect gas solution, showing that while the surface pressure is relatively unchanged by the inclusion of reacting chemistry the nonequilibrium heating is 25 percent higher. The procedure demonstrates significant, order of magnitude reductions in solution time and required memory for the three dimensional case over an all thin-layer Navier-Stokes solution.

  15. Multigrid-based grid-adaptive solution of the Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Michelsen, Jess

    A finite volume scheme for solution of the incompressible Navier-Stokes equations in two dimensions and axisymmetry is described. Solutions are obtained on nonorthogonal, solution adaptive BFC grids, based on the Brackbill-Saltzman generator. Adaptivity is achieved by the use of a single control function based on the local kinetic energy production. Nonstaggered allocation of pressure and Cartesian velocity components avoids the introduction of curvature terms associated with the use of a grid-direction vector-base. A special interpolation of the pressure correction equation in the SIMPLE algorithm ensures firm coupling between velocity and pressure field. Steady-state solutions are accelerated by a full approximation multigrid scheme working on the decoupled grid-flow problem, while an algebraic multigrid scheme is employed for the pressure correction equation.

  16. qPR: An adaptive partial-report procedure based on Bayesian inference

    PubMed Central

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-01-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6–8 cue delays or 600–800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations. PMID:27580045
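
The adaptive logic can be sketched with a simplified decay model: maintain a posterior over the parameters on a grid, and on each trial present the cue delay that maximizes the expected information gain. The parameter ranges, guess rate, trial count, and simulated observer below are all illustrative assumptions, not the qPR implementation:

```python
import numpy as np

# decay model p(t) = b + (a - b) * exp(-t / tau); b is a known guess rate
b = 0.1
a_grid = np.linspace(0.5, 1.0, 21)
tau_grid = np.linspace(0.05, 1.0, 20)
A, TAU = np.meshgrid(a_grid, tau_grid, indexing="ij")
delays = np.linspace(0.0, 1.0, 11)       # candidate cue delays (seconds)

def entropy2(p):
    """Binary entropy, elementwise."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def p_correct(t):
    return b + (A - b) * np.exp(-t / TAU)

def best_delay(prior):
    """Delay maximizing the expected information gain, i.e. the mutual
    information between the next response and the parameters."""
    gains = []
    for t in delays:
        p = p_correct(t)
        p_bar = np.sum(prior * p)        # predictive probability correct
        gains.append(entropy2(p_bar) - np.sum(prior * entropy2(p)))
    return delays[int(np.argmax(gains))]

def update(prior, t, correct):
    """Bayesian posterior update after one trial at delay t."""
    like = p_correct(t) if correct else 1.0 - p_correct(t)
    post = prior * like
    return post / post.sum()

# simulate an observer with true a = 0.9, tau = 0.3
rng = np.random.default_rng(0)
true_p = lambda t: 0.1 + (0.9 - 0.1) * np.exp(-t / 0.3)
prior = np.full(A.shape, 1.0 / A.size)
for _ in range(300):
    t = best_delay(prior)
    prior = update(prior, t, rng.random() < true_p(t))
a_hat, tau_hat = np.sum(prior * A), np.sum(prior * TAU)
```

The actual qPR method uses a three-parameter decay function and stops on a trial-count or precision criterion, but the select-observe-update loop is the same.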

  17. Global solution for a kinetic chemotaxis model with internal dynamics and its fast adaptation limit

    NASA Astrophysics Data System (ADS)

    Liao, Jie

    2015-12-01

A nonlinear kinetic chemotaxis model with internal dynamics incorporating signal transduction and adaptation is considered. This paper is concerned with: (i) the global solution for this model, and, (ii) its fast adaptation limit to Othmer-Dunbar-Alt type model. This limit gives some insight to the molecular origin of the chemotaxis behaviour. First, by using the Schauder fixed point theorem, the global existence of weak solution is proved based on detailed a priori estimates, under quite general assumptions. However, the Schauder theorem does not provide uniqueness, so additional analysis is developed to establish uniqueness. Next, the fast adaptation limit of this model is derived by extracting a weak convergence subsequence in measure space. For this limit, the first difficulty is to show the concentration effect on the internal state. Another difficulty is the strong compactness argument on the chemical potential, which is essential for passing the nonlinear kinetic equation to the weak limit.

  18. Parallel iterative procedures for approximate solutions of wave propagation by finite element and finite difference methods

    SciTech Connect

    Kim, S.

    1994-12-31

Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods in a Lagrangian framework, an efficient way of choosing the algorithm parameter is indicated, along with the convergence of the algorithm. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.
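
The classical 1D prototype of domain decomposition is the alternating Schwarz method: solve on overlapping subdomains in turn, exchanging boundary values through the overlap. A sketch for a Poisson model problem (sequential here, and an elliptic stand-in for the paper's wave-propagation setting):

```python
import numpy as np

def solve_dirichlet(f_sub, h, ul, ur):
    """Direct 3-point solve of -u'' = f on a subinterval with
    prescribed end values ul, ur."""
    n = len(f_sub)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f_sub.copy()
    rhs[0] += ul / h**2
    rhs[-1] += ur / h**2
    return np.linalg.solve(A, rhs)

n = 99                                  # interior points on (0, 1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)        # exact solution sin(pi x)
u = np.zeros(n)
lo, hi = 39, 59                         # indices bounding the overlap
for _ in range(30):                     # alternating Schwarz sweeps
    # left subdomain: indices 0..hi, right boundary value from current u
    u[:hi + 1] = solve_dirichlet(f[:hi + 1], h, 0.0, u[hi + 1])
    # right subdomain: indices lo.., left boundary value from current u
    u[lo:] = solve_dirichlet(f[lo:], h, u[lo - 1], 0.0)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

With an additive (Jacobi-like) variant, both subdomain solves use the previous iterate's boundary values and can run in parallel, which is the structure the parallel procedures above exploit; the convergence rate then depends on the overlap width, the "algorithm parameter" question the abstract addresses.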

  19. Preliminary tests of a possible outdoor light adaptation solution for a fly inspired visual sensor: a biomimetic solution - biomed 2011.

    PubMed

    Dean, Brian K; Wright, Cameron H G; Barrett, Steven F

    2011-01-01

    Two previous papers, presented at RMBS in 2009 and 2010, introduced a fly inspired vision sensor that could adapt to indoor light conditions by mimicking the light adaptation process of the common housefly, Musca domestica. A new system has been designed that should allow the sensor to adapt to outdoor light conditions, which will enable the sensor's use in applications such as unmanned aerial vehicle (UAV) obstacle avoidance, UAV landing support, target tracking, wheelchair guidance, large structure monitoring, and many other outdoor applications. A sensor of this type is especially suited for these applications due to features of hyperacuity (an ability to achieve movement resolution beyond the theoretical limit), extreme sensitivity to motion, and (through software simulation) image edge extraction, motion detection, and orientation and location of a line. Many of these qualities are beyond the ability of traditional computer vision sensors such as charge coupled device (CCD) arrays. To achieve outdoor light adaptation, a variety of design obstacles have to be overcome, such as infrared interference, dynamic range expansion, and light saturation. The newly designed system overcomes the latter two design obstacles by mimicking the fly's solution of logarithmic compression followed by removal of the average background light intensity. This paper presents the new design and the preliminary tests that were conducted to determine its effectiveness. PMID:21525612

  1. A rapid perturbation procedure for determining nonlinear flow solutions: Application to transonic turbomachinery flows

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Elliott, J. P.; Spreiter, J. R.

    1981-01-01

    Perturbation procedures and associated computational codes for determining nonlinear flow solutions were developed to establish a method for minimizing computational requirements associated with parametric studies of transonic flows in turbomachines. The procedure that was developed and evaluated was found to be capable of determining highly accurate approximations to families of strongly nonlinear solutions which are either continuous or discontinuous, and which represent variations in some arbitrary parameter. Coordinate straining is employed to account for the movement of discontinuities and maxima of high gradient regions due to the perturbation. The development and results reported are for the single parameter perturbation problem. Flows past both isolated airfoils and compressor cascades involving a wide variety of flow and geometry parameter changes are reported. Attention is focused in particular on transonic flows which are strongly supercritical and exhibit large surface shock movement over the parametric range studied; and on subsonic flows which display large pressure variations in the stagnation and peak suction pressure regions. Comparisons with the corresponding 'exact' nonlinear solutions indicate a remarkable accuracy and range of validity of such a procedure.
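    The coordinate-straining idea in this abstract can be written compactly. The sketch below is a generic Lighthill-type strained-coordinate expansion; the symbols are my own illustrative choices, not the report's notation:

```latex
% Single-parameter perturbation with strained coordinates (illustrative):
% q is a flow quantity, \varepsilon the parameter perturbation, and
% s the strained coordinate in which discontinuities stay fixed.
\begin{align*}
  q(x;\varepsilon) &\simeq q_0(s) + \varepsilon\, q_1(s), \\
  x &= s + \varepsilon\, x_1(s).
\end{align*}
% The straining x_1(s) is chosen so the shock location is the same in s
% for base and perturbed solutions, keeping q_1 uniformly valid near it.
```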

  2. Construction and solution of an adaptive image-restoration model for removing blur and mixed noise

    NASA Astrophysics Data System (ADS)

    Wang, Youquan; Cui, Lihong; Cen, Yigang; Sun, Jianjun

    2016-03-01

    We establish a practical regularized least-squares model with adaptive regularization for dealing with blur and mixed noise in images. This model has some advantages, such as good adaptability for edge restoration and noise suppression due to the application of a priori spatial information obtained from a polluted image. We further focus on finding an important feature of image restoration using an adaptive restoration model with different regularization parameters in polluted images. A more important observation is that the gradient of an image varies regularly from one regularization parameter to another under certain conditions. Then, a modified graduated nonconvexity approach combined with a median filter version of a spatial information indicator is proposed to seek the solution of our adaptive image-restoration model by applying variable splitting and weighted penalty techniques. Numerical experiments show that the method is robust and effective for dealing with various blur and mixed noise levels in images.

  3. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  4. Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
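    The recursive subdivision of a single Cartesian root cell can be sketched compactly. The quadtree below is a minimal illustration (not the actual code of this paper): a cell is subdivided wherever its box straddles a circular "body" boundary, mimicking the cut-cell refinement described above.

```python
import math

class Cell:
    """One node of a quadtree grown by recursive subdivision of a root cell."""
    def __init__(self, x, y, size, depth):
        self.x, self.y, self.size, self.depth = x, y, size, depth
        self.children = []

    def crosses_body(self, cx=0.5, cy=0.5, r=0.3):
        # The cell is "cut" if the circle boundary passes through its box:
        # nearest point of the box lies inside the circle, farthest corner outside.
        nx = min(max(cx, self.x), self.x + self.size)
        ny = min(max(cy, self.y), self.y + self.size)
        near = math.hypot(nx - cx, ny - cy)
        far = max(math.hypot(px - cx, py - cy)
                  for px in (self.x, self.x + self.size)
                  for py in (self.y, self.y + self.size))
        return near < r < far

    def refine(self, max_depth):
        if self.depth >= max_depth or not self.crosses_body():
            return
        h = self.size / 2
        self.children = [Cell(self.x + i * h, self.y + j * h, h, self.depth + 1)
                         for i in (0, 1) for j in (0, 1)]
        for c in self.children:
            c.refine(max_depth)

def leaves(cell):
    """Collect the leaf cells, i.e. the actual computational grid."""
    return [cell] if not cell.children else [l for c in cell.children for l in leaves(c)]

root = Cell(0.0, 0.0, 1.0, 0)   # one Cartesian cell covers the whole domain
root.refine(max_depth=5)
print(len(leaves(root)))        # refinement clusters along the body boundary
```

    The tree structure gives cell-to-cell connectivity for free, which is the property the abstract credits to the binary-tree storage.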

  5. Individual Differences and Test Administration Procedures: A Comparison of Fixed-Item, Computerized-Adaptive, and Self-Adapted Testing.

    ERIC Educational Resources Information Center

    Vispoel, Walter P.; And Others

    1994-01-01

    Vocabulary fixed-item (FIT), computerized-adaptive (CAT), and self-adapted (SAT) tests were compared with 121 college students. CAT was more precise and efficient than SAT, which was more precise and efficient than FIT. SAT also yielded higher ability estimates for individuals with lower verbal self-concepts. (SLD)

  6. Boundedness of the solutions for certain classes of fractional differential equations with application to adaptive systems.

    PubMed

    Aguila-Camacho, Norelys; Duarte-Mermoud, Manuel A

    2016-01-01

    This paper presents the analysis of three classes of fractional differential equations appearing in the field of fractional adaptive systems, for the case when the fractional order is in the interval α ∈(0,1] and the Caputo definition for fractional derivatives is used. The boundedness of the solutions is proved for all three cases, and the convergence to zero of the mean value of one of the variables is also proved. Applications of the obtained results to fractional adaptive schemes in the context of identification and control problems are presented at the end of the paper, including numerical simulations which support the analytical results.
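    A toy numerical companion to this abstract (illustrative only; the paper's results are analytical): a Grünwald-Letnikov discretization of the Caputo fractional ODE D^alpha x = -k x with alpha in (0, 1], whose trajectory stays bounded and decays, consistent with the boundedness results described. All parameter values are my own choices.

```python
import numpy as np

def caputo_decay(alpha=0.8, k=1.0, x0=1.0, h=0.01, n=500):
    """Grünwald-Letnikov stepping of the Caputo equation D^alpha x = -k x."""
    w = np.empty(n + 1)          # GL weights: w_0 = 1,
    w[0] = 1.0                   # w_j = w_{j-1} * (1 - (alpha + 1) / j)
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    y = np.zeros(n + 1)          # y_n = x_n - x0 handles the Caputo shift
    x = np.empty(n + 1)
    x[0] = x0
    for m in range(1, n + 1):
        y[m] = h ** alpha * (-k * x[m - 1]) - w[1:m + 1] @ y[m - 1::-1]
        x[m] = x0 + y[m]
    return x

x = caputo_decay()
print(x[0], x[-1])   # decays from 1 toward 0, staying bounded
```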

  7. Solution-Adaptive Program for Computing 2D/Axi Viscous Flow

    NASA Technical Reports Server (NTRS)

    Wood, William A.

    2003-01-01

    A computer program solves the Navier-Stokes equations governing the flow of a viscous, compressible fluid in an axisymmetric or two-dimensional (2D) setting. To obtain solutions more accurate than those generated by prior such programs that utilize regular and/or fixed computational meshes, this program utilizes unstructured (that is, irregular triangular) computational meshes that are automatically adapted to solutions. The adaptation can refine regions of high gradient change or can be driven by a novel residual-minimization technique. Starting from an initial mesh and a corresponding data structure, the adaptation of the mesh is controlled by use of a minimization functional. Other improvements over prior such programs include the following: (1) Boundary conditions are imposed weakly; that is, following initial specification of solution values at boundary nodes, these values are relaxed in time by means of the same formulations as those used for interior nodes. (2) Eigenvalues are limited in order to suppress expansion shocks. (3) An upwind fluctuation-splitting distribution scheme applied to the inviscid flux requires fewer operations and produces less artificial dissipation than does a finite-volume scheme, leading to greater accuracy of solutions.
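    Item (2), eigenvalue limiting to suppress expansion shocks, is commonly implemented as a Harten-style entropy fix. The sketch below shows that standard form; the program's exact limiter is not specified in the abstract, so this is an assumed implementation.

```python
def limit_eigenvalue(lam, eps=0.1):
    """Replace |lam| by a smooth positive value when |lam| < eps.

    Near a sonic point an upwind scheme's wave speed can vanish, admitting
    entropy-violating expansion shocks; the quadratic blend keeps the
    effective speed bounded away from zero.
    """
    a = abs(lam)
    if a >= eps:
        return a
    return (a * a + eps * eps) / (2.0 * eps)

print(limit_eigenvalue(0.5))   # unchanged: 0.5
print(limit_eigenvalue(0.0))   # lifted to eps/2 = 0.05
```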

  8. A procedure for weighted summation of the derivatives of reflection coefficients in adaptive Schur filter with application to fault detection in rolling element bearings

    NASA Astrophysics Data System (ADS)

    Makowski, Ryszard; Zimroz, Radoslaw

    2013-07-01

    A procedure for feature extraction using an adaptive Schur filter for damage detection in rolling element bearings is proposed in the paper. Damaged bearings produce impact signals (shocks) related to a local change (loss) of stiffness in the inner/outer race-rolling element pairs. If significant disturbances do not occur (i.e. the signal-to-noise ratio is sufficient), diagnostics is not very complicated and usually envelope analysis is used. Unfortunately, in most industrial examples, these impulsive contributions to vibration are completely masked by noise or other high-energy sources. Moreover, impulses may have time-varying amplitudes caused by the transmission path, load, and noise properties changing in time. Thus, in order to extract the time-varying signal of interest, the solution should be an adaptive one. The proposed approach is based on the normalized exact least-square time-variant lattice filter (adaptive Schur filter). It is characterized by an extremely fast start-up performance, excellent convergence behavior, and fast parameter-tracking capability, making this approach interesting. The adaptive Schur filter consists of P sections estimating, among other quantities, time-varying reflection coefficients (RCs). In this paper it is proposed to use the RCs and their derivatives as diagnostic features. However, it is not convenient to analyze P signals for P sections simultaneously, so a weighted sum of the derivatives of the RCs can be used instead. The key question is how to find the weight values for this summation procedure. The original contributions are: the application of the Schur filter to bearing vibration processing, the proposal of several features that can be used for detection, and the aforementioned procedure of weighted summation of the signals from the sections of the Schur filter. The signal processing method is well adapted to the analysis of non-stationary time series, so it is very promising for the diagnostics of machines working under time-varying load/speed conditions.
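    The weighted-summation idea can be illustrated with a toy example. In the sketch below, the reflection-coefficient array, the kurtosis-based weights, and all parameters are my own illustrative stand-ins, not the paper's derivation:

```python
import numpy as np

def weighted_rc_feature(rc):
    """Combine per-section RC derivatives into one feature signal.

    rc: array of shape (P, N) of reflection coefficients from P lattice
    sections. Sections whose derivative is more impulsive (higher kurtosis,
    an illustrative weighting choice) get larger weights.
    """
    d = np.diff(rc, axis=1)                      # RC derivatives per section
    mu = d.mean(axis=1, keepdims=True)
    sig = d.std(axis=1, keepdims=True) + 1e-12
    kurt = np.mean(((d - mu) / sig) ** 4, axis=1)
    w = kurt / kurt.sum()                        # normalized section weights
    return w @ d                                 # one combined feature signal

rng = np.random.default_rng(0)
rc = rng.standard_normal((4, 1000)) * 0.1        # stand-in for Schur filter output
rc[2, ::100] += 0.8                              # periodic impacts in one section
feat = weighted_rc_feature(rc)
print(feat.shape)  # (999,)
```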

  9. A new solution procedure for a nonlinear infinite beam equation of motion

    NASA Astrophysics Data System (ADS)

    Jang, T. S.

    2016-10-01

    The goal of this paper is a purely theoretical question, which is nonetheless fundamental in computational partial differential equations: can a linear solution structure for the equation of motion of an infinite nonlinear beam be directly manipulated to construct its nonlinear solution? Here, the equation of motion is modeled mathematically as a fourth-order nonlinear partial differential equation. To answer the question, a pseudo-parameter is first introduced to modify the equation of motion. An integral formalism for the modified equation is then found, which is taken as the linear solution structure. It enables us to formulate a nonlinear integral equation of the second kind, equivalent to the original equation of motion. The fixed-point approach, applied to this integral equation, yields a new iterative solution procedure for constructing the nonlinear solution of the original beam equation of motion, whose iterative process requires only simple regular numerical integration; i.e., it is fairly simple as well as straightforward to apply. A mathematical analysis of both the convergence and the uniqueness of the iterative procedure is carried out by proving the contractive character of a nonlinear operator. It follows, therefore, that the procedure is a useful nonlinear strategy for integrating the equation of motion of a nonlinear infinite beam, whereby the preceding question may be answered. In addition, it is worth noticing that the pseudo-parameter introduced here plays a double role: first, it connects the original beam equation of motion with the integral equation; second, it is related to the convergence of the proposed iterative method.
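    The fixed-point strategy described here can be mimicked on a toy nonlinear integral equation of the second kind (an assumed form, not the beam equation): Picard iteration with plain trapezoidal quadrature, converging because the kernel is small enough to make the operator a contraction.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)
w = np.full_like(x, x[1] - x[0])                 # trapezoid quadrature weights
w[0] *= 0.5
w[-1] *= 0.5
f = np.cos(np.pi * x)
K = 0.2 * np.exp(-np.abs(x[:, None] - x[None, :]))  # small kernel => contraction

def T(u):
    """Nonlinear integral operator: T(u)(x) = f(x) + int K(x,t) sin(u(t)) dt."""
    return f + (K * np.sin(u)[None, :]) @ w

u = np.zeros_like(x)
for _ in range(60):                              # Picard (fixed-point) iteration
    u_new = T(u)
    if np.max(np.abs(u_new - u)) < 1e-13:
        break
    u = u_new

residual = np.max(np.abs(u - T(u)))
print(residual)   # converged fixed point: tiny residual
```

    As in the paper, each iteration costs only one regular numerical integration, and the Banach contraction argument guarantees both convergence and uniqueness of the limit.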

  10. A Solution Adaptive Structured/Unstructured Overset Grid Flow Solver with Applications to Helicopter Rotor Flows

    NASA Technical Reports Server (NTRS)

    Duque, Earl P. N.; Biswas, Rupak; Strawn, Roger C.

    1995-01-01

    This paper summarizes a method that solves both the three dimensional thin-layer Navier-Stokes equations and the Euler equations using overset structured and solution adaptive unstructured grids with applications to helicopter rotor flowfields. The overset structured grids use an implicit finite-difference method to solve the thin-layer Navier-Stokes/Euler equations while the unstructured grid uses an explicit finite-volume method to solve the Euler equations. Solutions on a helicopter rotor in hover show the ability to accurately convect the rotor wake. However, isotropic subdivision of the tetrahedral mesh rapidly increases the overall problem size.

  11. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Solution of the Euler Equations

    SciTech Connect

    Anderson, R W; Elliott, N S; Pember, R B

    2003-02-14

    A new method that combines staggered grid arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the methods are driven by the need to reconcile traditional AMR techniques with the staggered variables and moving, deforming meshes associated with Lagrange based ALE schemes. We develop interlevel solution transfer operators and interlevel boundary conditions first in the case of purely Lagrangian hydrodynamics, and then extend these ideas into an ALE method by developing adaptive extensions of elliptic mesh relaxation techniques. Conservation properties of the method are analyzed, and a series of test problem calculations are presented which demonstrate the utility and efficiency of the method.

  12. Multigrid iteration solution procedure for solving three-dimensional sets of coupled equations

    SciTech Connect

    Vondy, D.R.

    1984-08-01

    An iterative solution procedure was coded in Fortran to apply the multigrid iteration scheme to sets of coupled equations for three-dimensional problems. The incentive for this effort was to make available an implemented procedure that may be readily used as an alternative to overrelaxation, of special interest in applications where the latter is ineffective. The multigrid process was found to be effective, although not competitive with simple overrelaxation for small, simple problems. Absolute error level evaluation was used to support the assessment of the methods. A code source listing is presented to allow ready application when the computer memory size is adequate, avoiding data transfer from auxiliary storage. Included are the capability for one-dimensional rebalance and a driver program illustrating use requirements. Feedback of additional experience from applications is anticipated.
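    The multigrid scheme the abstract refers to can be illustrated in one dimension. The sketch below is a generic two-grid cycle for the 1-D Poisson problem (weighted-Jacobi smoothing, full-weighting restriction, linear interpolation, exact coarse solve), not the Fortran code described:

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted-Jacobi sweeps for -u'' = f with homogeneous Dirichlet ends."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    u = smooth(u, f, h)                           # pre-smooth
    r = residual(u, f, h)
    nc = (u.size + 1) // 2
    rc = np.zeros(nc)                             # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    hc = 2 * h
    A = (np.diag(np.full(nc, 2.0)) - np.diag(np.ones(nc - 1), 1)
         - np.diag(np.ones(nc - 1), -1)) / hc ** 2
    A[0, :] = A[-1, :] = 0.0                      # Dirichlet boundary rows
    A[0, 0] = A[-1, -1] = 1.0
    ec = np.linalg.solve(A, rc)                   # exact coarse-grid correction
    e = np.zeros_like(u)                          # prolongation: copy + interpolate
    e[::2] = ec
    e[1:-1:2] = 0.5 * (e[:-2:2] + e[2::2])
    return smooth(u + e, f, h)                    # post-smooth

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)                # exact solution u = sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)   # error reaches the discretization level
```

    Recursing on the coarse solve instead of inverting directly turns this two-grid cycle into the usual multigrid V-cycle.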

  13. An Adaptive Landscape Classification Procedure using Geoinformatics and Artificial Neural Networks

    SciTech Connect

    Coleman, Andre Michael

    2008-06-01

    The Adaptive Landscape Classification Procedure (ALCP), which links the advanced geospatial analysis capabilities of Geographic Information Systems (GISs) with Artificial Neural Networks (ANNs), particularly Self-Organizing Maps (SOMs), is proposed as a method for establishing and reducing complex data relationships. Its adaptive and evolutionary capability is evaluated for situations where varying types of data can be combined to address different prediction and/or management needs such as hydrologic response, water quality, aquatic habitat, groundwater recharge, land use, instrumentation placement, and forecast scenarios. The research presented here documents favorable results of a procedure that aims to be a powerful and flexible spatial data classifier, fusing the strengths of geoinformatics and the intelligence of SOMs to provide data patterns and spatial information for environmental managers and researchers. This research shows how evaluation and analysis of spatial and/or temporal patterns in the landscape can provide insight into complex ecological, hydrological, climatic, and other natural and anthropogenic-influenced processes. Certainly, environmental management and research within heterogeneous watersheds provide challenges for consistent evaluation and understanding of system functions. For instance, watersheds over a range of scales are likely to exhibit varying levels of diversity in their characteristics of climate, hydrology, physiography, ecology, and anthropogenic influence. Furthermore, it has become evident that understanding and analyzing these diverse systems can be difficult not only because of varying natural characteristics, but also because of the availability, quality, and variability of spatial and temporal data. Developments in geospatial technologies, however, are providing a wide range of relevant data, and in many cases, at a high temporal and spatial resolution. Such data resources can take the form of high

  14. Wavelet multiresolution analyses adapted for the fast solution of boundary value ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Jawerth, Bjoern; Sweldens, Wim

    1993-01-01

    We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.

  15. Rocket injector anomalies study. Volume 1: Description of the mathematical model and solution procedure

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Singhal, A. K.; Tam, L. T.

    1984-01-01

    The capability of simulating three-dimensional two-phase reactive flows with combustion in liquid-fuelled rocket engines is demonstrated. This was accomplished by modifying an existing three-dimensional computer program (REFLAN3D) with an Eulerian-Lagrangian approach to simulate two-phase spray flow, evaporation, and combustion. The modified code is referred to as REFLAN3D-SPRAY. The mathematical formulation of the fluid flow, heat transfer, combustion, and two-phase flow interaction, as well as the numerical solution procedure, boundary conditions, and their treatment, are described.

  16. Calculation procedures for potential and viscous flow solutions for engine inlets

    NASA Technical Reports Server (NTRS)

    Albers, J. A.; Stockman, N. O.

    1973-01-01

    The method and basic elements of computer solutions for both potential flow and viscous flow calculations for engine inlets are described. The procedure is applicable to subsonic conventional (CTOL), short-haul (STOL), and vertical takeoff (VTOL) aircraft engine nacelles operating in a compressible viscous flow. The calculated results compare well with measured surface pressure distributions for a number of model inlets. The paper discusses the uses of the program in both the design and analysis of engine inlets, with several examples given for VTOL lift fans, acoustic splitters, and for STOL engine nacelles. Several test support applications are also given.

  17. A discontinuous Petrov-Galerkin methodology for adaptive solutions to the incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Roberts, Nathan V.; Demkowicz, Leszek; Moser, Robert

    2015-11-01

    The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18,20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates; the inf-sup constants governing the convergence are mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.

  18. A Discontinuous Petrov-Galerkin Methodology for Adaptive Solutions to the Incompressible Navier-Stokes Equations

    SciTech Connect

    Roberts, Nathan V.; Demkowicz, Leszek; Moser, Robert

    2015-11-15

    The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates; the inf-sup constants governing the convergence are mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.

  19. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…
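    One of the estimators in the study, expected a posteriori (EAP), is easy to sketch for a 2PL model with a standard-normal prior; the item parameters and responses below are invented for illustration:

```python
import numpy as np

def eap_estimate(responses, a, b, grid=None):
    """EAP ability estimate under a 2PL model with a N(0, 1) prior.

    responses: 0/1 vector; a, b: 2PL discrimination/difficulty arrays.
    The posterior mean is taken over a fixed quadrature grid of theta values.
    """
    if grid is None:
        grid = np.linspace(-4.0, 4.0, 81)
    resp = np.asarray(responses)
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (grid[None, :] - b[:, None])))
    like = np.prod(np.where(resp[:, None] == 1, p, 1.0 - p), axis=0)
    post = like * np.exp(-grid ** 2 / 2.0)        # unnormalized posterior
    return float(np.sum(grid * post) / np.sum(post))

a = np.array([1.2, 0.8, 1.5, 1.0])                # invented discriminations
b = np.array([-0.5, 0.0, 0.5, 1.0])               # invented difficulties
theta = eap_estimate([1, 1, 0, 0], a, b)
print(theta)                                      # a moderate ability estimate
```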

  20. Mission to Mars: Adaptive Identifier for the Solution of Inverse Optical Metrology Tasks

    NASA Astrophysics Data System (ADS)

    Krapivin, Vladimir F.; Varotsos, Costas A.; Christodoulakis, John

    2016-06-01

    A human mission to Mars requires the solution of many problems, mainly linked to the safety of life, the reliable operational control of drinking water, and health care. The availability of liquid fuels is also an important issue, since the existing tools cannot fully provide the quantities of liquid fuel required for the mission's return journey. This paper presents the development of new methods and technology for reliable, operational, highly available chemical analysis of liquid solutions of various types. This technology is based on the employment of optical sensors (such as multi-channel spectrophotometers or spectroellipsometers and microwave radiometers) and the development of a database of spectral images for typical liquid solutions that could be objects of life on Mars. This database exploits the adaptive recognition of optical images of liquids using specific algorithms based on spectral analysis, cluster analysis, and methods for solving inverse optical metrology tasks.

  1. Chitosan-based hydrogel for dye removal from aqueous solutions: Optimization of the preparation procedure

    NASA Astrophysics Data System (ADS)

    Gioiella, Lucia; Altobelli, Rosaria; de Luna, Martina Salzano; Filippone, Giovanni

    2016-05-01

    The efficacy of chitosan-based hydrogels in the removal of dyes from aqueous solutions has been investigated as a function of different parameters. Hydrogels were obtained by gelation of chitosan with a non-toxic gelling agent based on an aqueous basic solution. The preparation procedure has been optimized in terms of chitosan concentration in the starting solution, gelling agent concentration, and chitosan-to-gelling agent ratio. The goal is to properly select the material- and process-related parameters in order to optimize the performance of the chitosan-based dye adsorbent. First, the influence of such factors on the gelling process has been studied from a kinetic point of view. Then, the effects on the adsorption capacity and kinetics of the chitosan hydrogels obtained under different conditions have been investigated. A common food dye (Indigo Carmine) has been used for this purpose. Notably, although the disk-shaped hydrogels are in bulk form, their adsorption capacity is comparable to that reported in the literature for films and beads. In addition, the bulk samples can be easily separated from the liquid phase after the adsorption process, which is highly attractive from a practical point of view. Compression tests reveal that the samples do not break up even after relatively large compressive strains. The obtained results suggest that fine tuning of the process parameters allows the production of mechanically resistant and highly adsorbing chitosan-based hydrogels.

  2. Adaptive resolution simulation of an atomistic DNA molecule in MARTINI salt solution

    NASA Astrophysics Data System (ADS)

    Zavadlav, J.; Podgornik, R.; Melo, M. N.; Marrink, S. J.; Praprotnik, M.

    2016-07-01

    We present a dual-resolution model of a deoxyribonucleic acid (DNA) molecule in a bathing solution, where we concurrently couple atomistic bundled water and ions with the coarse-grained MARTINI model of the solvent. We use our fine-grained salt solution model as the solvent in the inner shell surrounding the DNA molecule, whereas the solvent in the outer shell is modeled by the coarse-grained model. The solvent entities can exchange between the two domains and adapt their resolution accordingly. We critically assess the performance of our multiscale model in adaptive resolution simulations of an infinitely long DNA molecule, focusing on the structural characteristics of the solvent around DNA. Our analysis shows that the adaptive resolution scheme does not produce any noticeable artifacts in comparison to a reference system simulated in full detail. The effect of using a bundled-SPC model, required for multiscaling, compared to the standard free SPC model is also evaluated. Our multiscale approach opens the way for large-scale applications of DNA and other biomolecules which require a large solvent reservoir to avoid boundary effects.
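    The core of an adaptive-resolution coupling is a smooth weighting function that switches solvent molecules between atomistic (1) and coarse-grained (0) representations across a hybrid shell. A generic sketch of such a function (the cos^2 ramp is a common choice; the radii here are illustrative, not the paper's values):

```python
import math

def resolution_weight(r, r_at=2.0, d_hy=1.0):
    """1 in the atomistic core, 0 in the CG region, cos^2 ramp in between.

    r: distance from the center of the atomistic (inner) shell;
    r_at: radius of the atomistic core; d_hy: width of the hybrid shell.
    """
    if r <= r_at:
        return 1.0
    if r >= r_at + d_hy:
        return 0.0
    return math.cos(math.pi * (r - r_at) / (2.0 * d_hy)) ** 2

print(resolution_weight(1.0), resolution_weight(2.5), resolution_weight(4.0))
```

    Forces on a molecule in the hybrid shell are then blended with this weight, so a molecule changes resolution smoothly as it diffuses between the two domains.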

  3. Generic Procedure for Coupling the PHREEQC Geochemical Modeling Framework with Flow and Solute Transport Simulators

    NASA Astrophysics Data System (ADS)

    Wissmeier, L. C.; Barry, D. A.

    2009-12-01

    Computer simulations of water availability and quality play an important role in state-of-the-art water resources management. However, many of the most utilized software programs focus either on physical flow and transport phenomena (e.g., MODFLOW, MT3DMS, FEFLOW, HYDRUS) or on geochemical reactions (e.g., MINTEQ, PHREEQC, CHESS, ORCHESTRA). In recent years, several couplings between both genres of programs evolved in order to consider interactions between flow and biogeochemical reactivity (e.g., HP1, PHWAT). Software coupling procedures can be categorized as ‘close couplings’, where programs pass information via the memory stack at runtime, and ‘remote couplings’, where the information is exchanged at each time step via input/output files. The former generally involves modifications of software codes, and therefore expert programming skills are required. We present a generic recipe for remotely coupling the PHREEQC geochemical modeling framework and flow and solute transport (FST) simulators. The iterative scheme relies on operator splitting with continuous re-initialization of PHREEQC and the FST of choice at each time step. Since PHREEQC calculates the geochemistry of aqueous solutions in contact with soil minerals, the procedure is primarily designed for couplings to FSTs for liquid phase flow in natural environments. It requires the accessibility of initial conditions and numerical parameters such as time and space discretization in the input text file for the FST, and control of the FST via commands to the operating system (batch on Windows; bash/shell on Unix/Linux). The coupling procedure is based on PHREEQC’s capability to save the state of a simulation with all solid, liquid and gaseous species as a PHREEQC input file by making use of the dump file option in the TRANSPORT keyword. The output from one reaction calculation step is therefore reused as input for the following reaction step where changes in element amounts due to advection
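    The remote-coupling recipe is, at heart, an operator-splitting loop. The toy sketch below keeps that structure (a transport step, then a reaction step) but replaces the PHREEQC call with analytic first-order decay; in the real procedure the reaction step would write a PHREEQC input file, run the executable, and read back the dump file.

```python
import numpy as np

def step(c, dt, dx, v=1.0, k=0.5):
    """One split time step: 1-D upwind advection, then 'geochemistry'."""
    # transport half: explicit upwind advection (stable for CFL = v*dt/dx <= 1)
    c = c.copy()
    c[1:] -= v * dt / dx * (c[1:] - c[:-1])
    # reaction half: in the real coupling this is the remote PHREEQC call
    # (write input file -> run PHREEQC -> parse dump); here, first-order decay.
    return c * np.exp(-k * dt)

n, dx, dt = 100, 0.01, 0.005          # CFL = 0.5
c = np.zeros(n)
c[0] = 1.0                            # inlet boundary, re-imposed each step
for _ in range(100):
    c = step(c, dt, dx)
    c[0] = 1.0
print(c.max(), c.min())               # concentrations stay bounded in [0, 1]
```

    The continuous re-initialization the abstract describes corresponds to carrying the full chemical state across the file boundary at every step, so neither code needs to know the other's internals.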

  4. Patched based methods for adaptive mesh refinement solutions of partial differential equations

    SciTech Connect

    Saltzman, J.

    1997-09-02

    This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs, with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes; these schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundaries, and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods, and elliptically constrained flow solution methods with diffusion.
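
    The core step of a patch-based adaptive algorithm can be illustrated in one dimension: flag cells where the solution jumps sharply, then group contiguous flagged cells into patches that would each receive a finer grid. This sketch is illustrative only (the lecture notes treat square grids, clustering heuristics, and hierarchies in much more depth).

```python
# Illustrative 1D sketch of patch-based refinement: flag cells with a
# large solution jump, then cluster contiguous flagged cells into
# patches.  Threshold and data are hypothetical.

def flag_cells(u, threshold):
    """Flag cell i when the jump to a neighbour exceeds the threshold."""
    n = len(u)
    return [any(abs(u[j] - u[i]) > threshold
                for j in (i - 1, i + 1) if 0 <= j < n)
            for i in range(n)]

def build_patches(flags):
    """Group contiguous flagged cells into (start, end) index patches."""
    patches, start = [], None
    for i, f in enumerate(flags):
        if f and start is None:
            start = i
        elif not f and start is not None:
            patches.append((start, i - 1))
            start = None
    if start is not None:
        patches.append((start, len(flags) - 1))
    return patches

u = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0]
patches = build_patches(flag_cells(u, 0.5))  # patches around the two jumps
```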

  5. An Adaptive QoS Routing Solution for MANET Based Multimedia Communications in Emergency Cases

    NASA Astrophysics Data System (ADS)

    Ramrekha, Tipu Arvind; Politis, Christos

    A Mobile Ad hoc Network (MANET) is a wireless network without any fixed, central routing authority; it relies entirely on collaborating nodes forwarding packets from source to destination. This paper describes the design, implementation and performance evaluation of CHAMELEON, an adaptive Quality of Service (QoS) routing solution with improved delay and jitter performance, enabling multimedia communication for MANETs in extreme emergency situations, such as forest fires and terrorist attacks, as defined in the PEACE project. CHAMELEON is designed to adapt its routing behaviour to the size of a MANET. The reactive Ad hoc On-Demand Distance Vector (AODV) and proactive Optimized Link State Routing (OLSR) protocols are deemed appropriate for CHAMELEON through their performance evaluation in terms of delay and jitter for different MANET sizes in a building-fire emergency scenario. CHAMELEON is then implemented in NS-2 and evaluated similarly. The paper concludes with a summary of the findings so far and intended future work.
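
    The adaptive idea, reduced to its simplest form, is a protocol selector keyed to network size: reactive routing (AODV) for small MANETs, proactive routing (OLSR) for larger ones. The fixed threshold below is purely hypothetical; the paper derives the switching behaviour from delay and jitter evaluations, not from a constant.

```python
# Minimal, hypothetical sketch of size-adaptive protocol selection in
# the spirit of CHAMELEON.  The threshold value is an assumption.

SIZE_THRESHOLD = 30  # hypothetical node count at which behaviour switches

def select_protocol(num_nodes):
    """Reactive AODV for small networks, proactive OLSR for large ones."""
    return "AODV" if num_nodes <= SIZE_THRESHOLD else "OLSR"
```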

  6. A solution procedure for two- and three-dimensional unsteady viscous flows

    NASA Astrophysics Data System (ADS)

    Weinberg, B. C.; McDonald, H.; Shamroth, S. J.

    1985-01-01

    An efficient computational procedure for solving three-dimensional unsteady turbulent flows is described. The consistently split Linearized Block Implicit (LBI) scheme is used in conjunction with the QR operator scheme to solve an approximate form of the Navier-Stokes equations in generalized nonorthogonal coordinates employing physical velocity components. As a demonstration calculation the turbulent oscillating flow over a flat plate corresponding to the experiment of Karlsson is considered in both two and three dimensions. New inflow boundary conditions are proposed which yield physically plausible solutions near the upstream boundary. The results obtained agree both qualitatively and quantitatively with Karlsson's data and shed new light on the controversy concerning the interpretation of the skin friction phase angle as a function of reduced frequency.

  7. Adaptive-mesh-based algorithm for fluorescence molecular tomography using an analytical solution.

    PubMed

    Wang, Daifa; Song, Xiaolei; Bai, Jing

    2007-07-23

    Fluorescence molecular tomography (FMT) has become an important method for in vivo imaging of small animals and has been widely used in studies of tumor genesis, cancer detection, metastasis, drug discovery, and gene therapy. In this study, an algorithm for FMT is proposed to obtain accurate and fast reconstruction by combining an adaptive mesh refinement technique with an analytical solution of the diffusion equation. Numerical studies have been performed on a parallel-plate FMT system with matching fluid. The reconstructions obtained show that the algorithm is efficient in computation time while maintaining image quality.

  8. Two-Dimensional Fully Adaptive Solutions of Solid-Solid Alloying Reactions

    NASA Astrophysics Data System (ADS)

    Smooke, M. D.; Koszykowski, M. L.

    1986-01-01

    Solid-solid alloying reactions occur in a variety of pyrotechnical applications. They arise when a mixture of powders composed of appropriate oxidizing and reducing agents is heated. The large quantity of heat evolved produces a self-propagating reaction front that is often very narrow with sharp changes in both the temperature and the concentrations of the reacting species. Solution of problems of this type with an equispaced or mildly nonuniform grid can be extremely inefficient. In this paper we develop a two-dimensional fully adaptive method for solving problems of this class. The method adaptively adjusts the number of grid points needed to equidistribute a positive weight function over a given mesh interval in each direction at each time level. We monitor the solution from one time level to another to ensure that the local error per unit step associated with the time differencing method is below some specified tolerance. The method is applied to several examples involving exothermic, diffusion-controlled, self-propagating reactions in packed bed reactors.
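
    The equidistribution principle named above can be sketched directly: place grid points so that each interval carries the same integral of a positive weight function, which concentrates points where the weight (e.g., a solution-gradient monitor) is large. The weight choice and the time-level monitoring below are simplified stand-ins for the paper's method.

```python
# Hedged sketch of grid equidistribution: invert the cumulative integral
# of a positive weight function so every interval holds an equal share.

def equidistribute(weight, n_points, n_fine=1000):
    """Return n_points grid locations in [0, 1] equidistributing `weight`."""
    xs = [i / n_fine for i in range(n_fine + 1)]
    # cumulative trapezoidal integral of the weight on the fine sample
    cum = [0.0]
    for i in range(1, len(xs)):
        cum.append(cum[-1] + 0.5 * (weight(xs[i]) + weight(xs[i - 1]))
                   * (xs[i] - xs[i - 1]))
    total = cum[-1]
    grid, j = [], 0
    for k in range(n_points):
        target = total * k / (n_points - 1)
        while j < n_fine and cum[j + 1] < target:
            j += 1
        # linear interpolation inside fine cell j
        span = cum[j + 1] - cum[j]
        frac = 0.0 if span == 0.0 else (target - cum[j]) / span
        grid.append(xs[j] + frac * (xs[j + 1] - xs[j]))
    return grid

# With a constant weight the equidistributed grid is simply uniform.
uniform = equidistribute(lambda x: 1.0, 5)
```

    A weight proportional to 1 plus the local gradient magnitude would instead cluster points inside the narrow reaction front.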

  9. Evaluation of solution procedures for material and/or geometrically nonlinear structural analysis by the direct stiffness method.

    NASA Technical Reports Server (NTRS)

    Stricklin, J. A.; Haisler, W. E.; Von Riesemann, W. A.

    1972-01-01

    This paper presents an assessment of the solution procedures available for the analysis of inelastic and/or large-deflection structural behavior. A literature survey is given which summarizes the contributions of other researchers to the analysis of structural problems exhibiting material nonlinearities and combined geometric-material nonlinearities. Attention is focused on evaluating the available computation and solution techniques. Each of the solution techniques is developed from a common equation of equilibrium in terms of pseudo forces. The solution procedures are applied to circular plates and shells of revolution in an attempt to compare and evaluate each with respect to computational accuracy, economy, and efficiency. Based on the numerical studies, observations and comments are made with regard to the accuracy and economy of each solution technique.

  10. Applications of an adaptive unstructured solution algorithm to the analysis of high speed flows

    NASA Technical Reports Server (NTRS)

    Thareja, R. R.; Prabhu, R. K.; Morgan, K.; Peraire, J.; Peiro, J.

    1990-01-01

    An upwind cell-centered scheme for the solution of steady laminar viscous high-speed flows is implemented on unstructured two-dimensional meshes. The first-order implementation employs Roe's (1981) approximate Riemann solver, and a higher-order extension is produced by using linear reconstruction with limiting. The procedure is applied to the solution of inviscid subsonic flow over an airfoil, inviscid supersonic flow past a cylinder, and viscous hypersonic flow past a double ellipse. A detailed study is then made of a hypersonic laminar viscous flow on a 24-deg compression corner. It is shown that good agreement is achieved with previous predictions using finite-difference and finite-volume schemes. However, these predictions do not agree with experimental observations. With refinement of the structured grid at the leading edge, good agreement with experimental observations for the distributions of wall pressure, heating rate and skin friction is obtained.
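
    The "linear reconstruction with limiting" used for the higher-order extension can be illustrated in one dimension with the common minmod limiter (the paper itself works on unstructured 2D meshes; this 1D version only shows the idea): a limited slope is built in each cell, and interface values are extrapolated from the cell average.

```python
# Hedged 1D sketch of limited linear (MUSCL-type) reconstruction with
# the minmod limiter.  Data values are illustrative.

def minmod(a, b):
    """Return the smaller-magnitude argument when signs agree, else 0."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def reconstruct(u):
    """Limited left/right interface values for each interior cell."""
    faces = []
    for i in range(1, len(u) - 1):
        slope = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
        faces.append((u[i] - 0.5 * slope, u[i] + 0.5 * slope))
    return faces

faces = reconstruct([0.0, 1.0, 2.0, 2.0])
```

    The limiter drops to first order at extrema and discontinuities (slope 0), which is what keeps the scheme from oscillating near shocks.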

  11. Variational solution of Poisson's equation using plane waves in adaptive coordinates.

    PubMed

    Pérez-Jordá, José M

    2014-11-01

    A procedure for solving Poisson's equation using plane waves in adaptive coordinates (u) is described. The method, based on Gygi's work, writes a trial potential ξ as the product of a preselected Coulomb weight μ times a plane-wave expansion depending on u. Then, the Coulomb potential generated by a given density ρ is obtained by variationally optimizing ξ, so that the error in the Coulomb energy is second-order with respect to the error in ξ. The Coulomb weight μ is chosen to provide to each ξ the typical long-range tail of a Coulomb potential, so that calculations on atoms and molecules are made possible without having to resort to the supercell approximation. As a proof of concept, the method is tested on the helium atom and the H_{2} and H_{3}^{+} molecules, where Hartree-Fock energies with better than milli-Hartree accuracy require only a moderate number of plane waves.

  12. Adaptive kernel independent component analysis and UV spectrometry applied to characterize the procedure for processing prepared rhubarb roots.

    PubMed

    Wang, Guoqing; Hou, Zhenyu; Peng, Yang; Wang, Yanjun; Sun, Xiaoli; Sun, Yu-an

    2011-11-01

    By determining the number of absorptive chemical components (ACCs) in mixtures using median absolute deviation (MAD) analysis and extracting the spectral profiles of the ACCs using kernel independent component analysis (KICA), an adaptive KICA (AKICA) algorithm was proposed. The proposed AKICA algorithm was used to characterize the procedure for processing prepared rhubarb roots by resolution of the measured mixed raw UV spectra of rhubarb samples collected at different steaming intervals. The results show that the spectral features of ACCs in the mixtures can be directly estimated without chemical and physical pre-separation or other prior information. The estimated three independent components (ICs) represent different chemical components in the mixtures, which are mainly polysaccharides (IC1), tannin (IC2), and anthraquinone glycosides (IC3). The variations of the relative concentrations of the ICs can account for the chemical and physical changes during the processing procedure: IC1 increases significantly before the first 5 h, and is nearly invariant after 6 h; IC2 shows no significant change, or decreases slightly, during the processing procedure; IC3 decreases significantly before the first 5 h and decreases slightly after 6 h. The changes in IC1 can explain why the colour became black and darkened during the processing procedure, and the changes in IC3 can explain why the processing procedure can reduce the bitter and dry taste of the rhubarb roots. The endpoint of the processing procedure can be determined as 5-6 h, when the increasing or decreasing trends of the estimated ICs are insignificant. The AKICA-UV method provides an alternative approach for the characterization of the processing procedure of prepared rhubarb roots, and provides a novel way to determine the endpoint of a traditional Chinese medicine (TCM) processing procedure by inspection of the change trends of the ICs.
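
    The robust statistic at the heart of the component-count step, the median absolute deviation, is small enough to show in full. Only the statistic itself is sketched here; the rule for counting ACCs and the KICA unmixing are beyond this fragment.

```python
# The median absolute deviation (MAD), the robust spread measure used
# to separate significant components from noise.  Pure-stdlib sketch.

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def mad(values):
    """Median absolute deviation from the median."""
    m = median(values)
    return median([abs(v - m) for v in values])

# Unlike the standard deviation, the MAD is barely moved by one extreme
# value, which is what makes it useful for thresholding decisions.
score = mad([1.0, 1.0, 2.0, 2.0, 100.0])
```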

  13. Solution-based adaptive parallel patterning by laser-induced local plasmonic surface defunctionalization.

    PubMed

    Kang, Bongchul; Kim, Jongsu; Yang, Minyang

    2012-12-17

    An adaptive mass-fabrication method based on laser-induced local plasmonic surface defunctionalization is suggested to realize solution-based, high-resolution, parallel self-patterning on transparent substrates. After a non-patterned functional monolayer is locally deactivated by laser-induced metallic plasma species, various micro/nano metal structures can be simultaneously fabricated by parallel self-selective deposition of metal nanoparticles on a specific region. This method makes eco-friendly and cost-effective production of high-resolution patterns possible. Moreover, it can actively accommodate design changes, because key patterning specifications such as resolution (subwavelength to ~100 μm), thickness (100 nm to 6 μm), type (dot or line), and shape are easily controlled over broad ranges.

  14. Error norms for the adaptive solution of the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Forester, C. K.

    1982-01-01

    The adaptive solution of the Navier-Stokes equations depends upon the successful interaction of three key elements: (1) the ability to flexibly select grid length scales in composite grids, (2) the ability to efficiently control residual error in composite grids, and (3) the ability to define reliable, convenient error norms to guide the grid adjustment and optimize the residual levels relative to the local truncation errors. An initial investigation was conducted to explore how to approach developing these key elements. Conventional error assessment methods were defined and defect and deferred correction methods were surveyed. The one dimensional potential equation was used as a multigrid test bed to investigate how to achieve successful interaction of these three key elements.

  15. A Minimax Sequential Procedure in the Context of Computerized Adaptive Mastery Testing.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    The purpose of this paper is to derive optimal rules for variable-length mastery tests in case three mastery classification decisions (nonmastery, partial mastery, and mastery) are distinguished. In a variable-length or adaptive mastery test, the decision is to classify a subject as a master, a partial master, a nonmaster, or continuing sampling…

  16. Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure

    USGS Publications Warehouse

    Salehi, M.; Smith, D.R.

    2005-01-01

    Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.
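
    The two-stage idea can be sketched as follows: draw an initial simple random sample, and draw a second-stage sample only when the first stage meets the condition (here, detecting the rare event, y > 0). No neighbourhoods and hence no edge units are involved. The stage sizes and condition below are illustrative, not the paper's, and Murthy's estimator is not shown.

```python
# Hedged sketch of neighborhood-free two-stage sequential sampling.
# Stage 2 is drawn only if stage 1 observed the condition of interest.

import random

def two_stage_sample(population, n1, n2, condition, seed=0):
    rng = random.Random(seed)
    units = list(range(len(population)))
    stage1 = rng.sample(units, n1)                     # simple random sample
    remaining = [u for u in units if u not in stage1]
    # Adaptive step: sample further only when stage 1 met the condition.
    if any(condition(population[u]) for u in stage1):
        stage2 = rng.sample(remaining, min(n2, len(remaining)))
    else:
        stage2 = []
    return stage1, stage2

pop = [0, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # 1 marks the rare event
s1, s2 = two_stage_sample(pop, n1=4, n2=3, condition=lambda y: y > 0)
```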

  17. Multidisciplinary Procedures for Designing Housing Adaptations for People with Mobility Disabilities.

    PubMed

    Sukkay, Sasicha

    2016-01-01

    According to a 2013 statistic published by the Thai with Disability Foundation, five percent of Thailand's population are disabled people. Six hundred thousand of them have a mobility disability, and the number is increasing every year. To support them, the Thai government has implemented a number of disability laws and policies. One of the policies is to improve disabled people's quality of life by adapting their houses to facilitate their activities. However, the policy has not been fully realized yet: there is still no specific guideline for housing adaptation for people with disabilities. This study is an attempt to address the lack of standardized criteria for such adaptation by developing a number of effective ones. Our development had three objectives: first, to identify the body functioning of a group of people with mobility disability according to the International Classification of Functioning (ICF) concept; second, to perform post-occupancy evaluation of this group and their houses; and third, with the collected data, to have a group of multidisciplinary experts cooperatively develop criteria for housing adaptation. The major findings were that room dimensions and furniture materials had a real impact on accessibility, and that the toilet and bedroom were the most difficult areas to access. PMID:27534326

  18. Impact of Metal Nanoform Colloidal Solution on the Adaptive Potential of Plants

    NASA Astrophysics Data System (ADS)

    Taran, Nataliya; Batsmanova, Ludmila; Kovalenko, Mariia; Okanenko, Alexander

    2016-02-01

    Nanoparticles are a known cause of oxidative stress and can thereby induce an anti-stress response; the latter property was the purpose of our study. The effects of two concentrations (120 and 240 mg/l) of a colloidal solution of nanoform biogenic metals (Ag, Cu, Fe, Zn, Mn) on the antioxidant enzymes superoxide dismutase and catalase, on the level of the factor of antioxidant state, and on the content of thiobarbituric acid reactive substances (TBARSs) in soybean plants were studied in a field experiment. It was found that oxidative processes developed in the variant with metal nanoparticle pre-sowing seed treatment at a concentration of 120 mg/l, as evidenced by a 12 % increase in the content of TBARS in photosynthetic tissues. Pre-sowing treatment at double the concentration (240 mg/l) resulted in a decrease in oxidative processes (19 %), and pre-sowing treatment combined with vegetative treatment also contributed to the reduction of TBARS (10 %). Increased activity of superoxide dismutase (SOD) was observed in the variant with increased TBARS content; SOD activity remained at the control level in the two other variants. Catalase activity decreased in all variants. The factor of antioxidant activity was highest (0.3) in the variant with double nanoparticle treatment (pre-sowing and vegetative) at a concentration of 120 mg/l. Thus, the studied nanometal colloidal solution, when used in small doses at certain time intervals, can be considered a low-level stress factor which, according to the hormesis principle, promotes an adaptive response reaction.

  19. Dissociating proportion congruent and conflict adaptation effects in a Simon-Stroop procedure.

    PubMed

    Torres-Quesada, Maryem; Funes, Maria Jesús; Lupiáñez, Juan

    2013-02-01

    Proportion congruent and conflict adaptation are two well known effects associated with cognitive control. A critical open question is whether they reflect the same or separate cognitive control mechanisms. In this experiment, in a training phase we introduced a proportion congruency manipulation for one conflict type (i.e. Simon), whereas in pre-training and post-training phases two conflict types (e.g. Simon and Spatial Stroop) were displayed with the same incongruent-to-congruent ratio. The results supported the sustained nature of the proportion congruent effect, as it transferred from the training to the post-training phase. Furthermore, this transfer generalized to both conflict types. By contrast, the conflict adaptation effect was specific to conflict type, as it was only observed when the same conflict type (either Simon or Stroop) was presented on two consecutive trials (no effect was observed on conflict type alternation trials). Results are interpreted as supporting the reactive and proactive control mechanisms distinction.

  20. Detecting DIF for Polytomously Scored Items: An Adaptation of the SIBTEST Procedure. Research Report.

    ERIC Educational Resources Information Center

    Chang, Hua-Hua; And Others

    Recently, R. Shealy and W. Stout (1993) proposed a procedure for detecting differential item functioning (DIF) called SIBTEST. Current versions of SIBTEST can only be used for dichotomously scored items, but this paper presents an extension to handle polytomous items. The paper presents: (1) a discussion of an appropriate definition of DIF for…

  1. Bayesian Procedures for Identifying Aberrant Response-Time Patterns in Adaptive Testing

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Guo, Fanmin

    2008-01-01

    In order to identify aberrant response-time patterns on educational and psychological tests, it is important to be able to separate the speed at which the test taker operates from the time the items require. A lognormal model for response times with this feature was used to derive a Bayesian procedure for detecting aberrant response times.…
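
    A minimal sketch in the spirit of that lognormal model: with item time-intensity beta, person speed tau, and item discrimination alpha, log response times are modelled as normal with mean beta − tau and standard deviation 1/alpha, so a standardized residual far from zero suggests an aberrant time. The parameter values below are hypothetical, and the paper's procedure is fully Bayesian rather than this simple z-flag.

```python
# Hedged sketch of a lognormal response-time residual check.  Model form
# per the lognormal RT model; parameters and cutoff are assumptions.

import math

def rt_residual(t, beta, tau, alpha):
    """Standardized log response-time residual under the lognormal model."""
    return alpha * (math.log(t) - (beta - tau))

def is_aberrant(t, beta, tau, alpha, cutoff=1.96):
    """Flag a response time whose residual is implausibly large."""
    return abs(rt_residual(t, beta, tau, alpha)) > cutoff

# A time exactly at the model's expected log-time gives residual ~0.
z = rt_residual(math.e ** 4.0, beta=4.0, tau=0.0, alpha=2.0)
```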

  2. EEG-Based BCI System Using Adaptive Features Extraction and Classification Procedures

    PubMed Central

    Mondini, Valeria; Mangia, Anna Lisa; Cappello, Angelo

    2016-01-01

    Motor imagery is a common control strategy in EEG-based brain-computer interfaces (BCIs). However, voluntary control of sensorimotor (SMR) rhythms by imagining a movement can be skilful and unintuitive and usually requires a varying amount of user training. To boost the training process, a whole class of BCI systems have been proposed, providing feedback as early as possible while continuously adapting the underlying classifier model. The present work describes a cue-paced, EEG-based BCI system using motor imagery that falls within the category of the previously mentioned ones. Specifically, our adaptive strategy includes a simple scheme based on a common spatial pattern (CSP) method and support vector machine (SVM) classification. The system's efficacy was proved by online testing on 10 healthy participants. In addition, we suggest some features we implemented to improve a system's “flexibility” and “customizability,” namely, (i) a flexible training session, (ii) an unbalancing in the training conditions, and (iii) the use of adaptive thresholds when giving feedback.

  6. Test procedure for anion exchange testing with Argonne 10-L solutions

    SciTech Connect

    Compton, J.A.

    1995-05-17

    Four anion exchange resins will be tested to confirm that they will sorb and release plutonium from/to the appropriate solutions in the presence of other cations. Certain cations need to be removed from the test solutions to minimize adverse behavior in other processing equipment. The ion exchange resins will be tested using old laboratory solutions from Argonne National Laboratory; results will be compared to results from other similar processes for application to all plutonium solutions stored in the Plutonium Finishing Plant.

  7. Designing experimental setup and procedures for studying alpha-particle-induced adaptive response in zebrafish embryos in vivo

    NASA Astrophysics Data System (ADS)

    Choi, V. W. Y.; Lam, R. K. K.; Chong, E. Y. W.; Cheng, S. H.; Yu, K. N.

    2010-03-01

    The present work was devoted to designing the experimental setup and the associated procedures for alpha-particle-induced adaptive response in zebrafish embryos in vivo. Thin PADC films with a thickness of 16 μm were fabricated and employed as support substrates for holding dechorionated zebrafish embryos for alpha-particle irradiation from the bottom through the films. Embryos were collected within 15 min when the light photoperiod began, and were then incubated and dechorionated at 4 h post fertilization (hpf). They were then irradiated at 5 hpf by alpha particles using a planar 241Am source with an activity of 0.1151 μCi for 24 s (priming dose), and subsequently at 10 hpf using the same source for 240 s (challenging dose). The levels of apoptosis in irradiated zebrafish embryos at 24 hpf were quantified through staining with the vital dye acridine orange, followed by counting the stained cells under a fluorescent microscope. The results revealed the presence of the adaptive response in zebrafish embryos in vivo, and demonstrated the feasibility of the adopted experimental setup and procedures.
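
    As a back-of-the-envelope check of the quoted source strength: 0.1151 μCi corresponds to about 4.26 × 10³ disintegrations per second, so the 24 s priming exposure involves roughly 10⁵ decays. (Only a fraction of the emitted alpha particles actually traverse the 16 μm PADC film and reach the embryos; that geometry factor is not estimated here.)

```python
# Unit-conversion check of the activity quoted in the abstract.
# 1 Ci = 3.7e10 Bq, so 1 uCi = 3.7e4 Bq.

UCI_TO_BQ = 3.7e10 * 1e-6            # becquerel per microcurie
activity_bq = 0.1151 * UCI_TO_BQ     # ~4.26e3 decays per second
decays_priming = activity_bq * 24.0  # ~1.0e5 decays during the 24 s dose
```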

  8. Flexible design of two-stage adaptive procedures for phase III clinical trials.

    PubMed

    Koyama, Tatsuki

    2007-07-01

    The recent popularity of two-stage adaptive designs has fueled a number of proposals for their use in phase III clinical trials. Many of these designs assign certain restrictive functional forms to the design elements of stage 2, such as sample size, critical value and conditional power functions. We propose a more flexible method of design without imposing any particular functional forms on these design elements. Our methodology permits specification of a design based on either conditional or unconditional characteristics, and allows accommodation of sample size limit. Furthermore, we show how to compute the P value, confidence interval and a reasonable point estimate for any design that can be placed under the proposed framework. PMID:17307399
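
    A small sketch of the conditional-power quantity such two-stage designs build on: given an interim z-statistic from n1 of n planned observations and an assumed standardized drift per observation, compute the probability of crossing the final critical value. This is the standard Brownian-motion formula, shown only for orientation; the paper's framework is more general and does not impose this functional form.

```python
# Hedged sketch of conditional power for a two-stage design.  Inputs
# (interim z, sample sizes, drift) are illustrative.

import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(z1, n1, n, theta, z_crit=1.96):
    """P(final Z > z_crit | interim z1), assuming drift theta per observation."""
    rem = n - n1                         # observations still to come
    needed = z_crit * math.sqrt(n) - z1 * math.sqrt(n1)
    return 1.0 - norm_cdf((needed - theta * rem) / math.sqrt(rem))

# With no assumed drift, a promising interim z still leaves modest power.
cp = conditional_power(z1=2.0, n1=50, n=100, theta=0.0)
```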

  9. Adaptation of the Unterzaucher procedure for determination of oxygen-18 in organic substances

    SciTech Connect

    Santrock, J.; Hayes, J.M.

    1987-01-01

    A method for the preparation of carbon dioxide from complex organic material for oxygen isotopic analysis is described. A commercial elemental analyzer has been modified so that oxygen contained in the organic material is quantitatively converted to carbon dioxide by the Schuetze-Unterzaucher technique, chromatographically purified, and transferred to a sample container for subsequent analysis by isotope ratio mass spectrometry. The organic sample is pyrolyzed, the products of pyrolysis are equilibrated with elemental carbon at 1060 °C to produce CO, and the CO is oxidized to CO2 by I2O5. The details of these processes are considered, and a quantitative model is developed to allow correction for contamination of the carbon dioxide oxygen pool by an oxygen blank, by oxygen from previous samples (memory), and by oxygen from iodine pentoxide. Procedures for determination of the parameters used in the mathematical correction and routine application of the model to isotopic analysis are outlined. At natural abundance, the standard deviation for determination of the fractional abundance of oxygen-18 in a sample of organic material is 2 × 10⁻⁷ (equivalent to 0.1%). The detection limit for ¹⁸O as a tracer in biological materials is better than 1 atom excess per 10⁶ atoms total O. Analyses of independently established standards show that results obtained by the mathematical correction procedure are accurate and allow determination of the abundances of ¹⁸O in the sucrose standards prepared by Hardcastle and Friedman.

  10. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored in the Common Data File (CDF) format and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs, and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  11. A formal protocol test procedure for the Survivable Adaptable Fiber Optic Embedded Network (SAFENET)

    NASA Astrophysics Data System (ADS)

    High, Wayne

    1993-03-01

    This thesis focuses upon a new method for verifying the correct operation of a complex, high speed fiber optic communication network. These networks are of growing importance to the military because of their increased connectivity, survivability, and reconfigurability. With the introduction and increased dependence on sophisticated software and protocols, it is essential that their operation be correct. Because of the speed and complexity of fiber optic networks being designed today, they are becoming increasingly difficult to test. Previously, testing was accomplished by application of conformance test methods which had little connection with an implementation's specification. The major goal of conformance testing is to ensure that the implementation of a profile is consistent with its specification. Formal specification is needed to ensure that the implementation performs its intended operations while exhibiting desirable behaviors. The new conformance test method presented is based upon the System of Communicating Machine model, which uses a formal protocol specification to generate a test sequence. The major contribution of this thesis is the application of the System of Communicating Machine model to formal profile specifications of the Survivable Adaptable Fiber Optic Embedded Network (SAFENET) standard, which results in the derivation of test sequences for a SAFENET profile. The results of applying this new method to SAFENET's OSI and Lightweight profiles are presented.
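
The transition-tour idea behind such conformance tests can be illustrated with a deliberately tiny machine: enumerate every transition of a finite-state specification and build one input sequence that exercises each transition from the initial state. Everything below (states, inputs, outputs) is invented for illustration and bears no relation to an actual SAFENET profile, which would be derived from the formal specification.

```python
from collections import deque

# Toy protocol specification: (state, input) -> (next_state, expected_output)
spec = {
    ("idle", "connect"):    ("open", "ack"),
    ("open", "send"):       ("open", "data_ok"),
    ("open", "disconnect"): ("idle", "fin"),
}

def shortest_path(src, dst):
    """BFS over the specification graph; returns an input word driving src -> dst."""
    frontier, seen = deque([(src, [])]), {src}
    while frontier:
        state, word = frontier.popleft()
        if state == dst:
            return word
        for (s, inp), (nxt, _) in spec.items():
            if s == state and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, word + [inp]))
    raise ValueError("unreachable state")

# Transition tour: walk to each untested transition's source state, then fire it.
tour, state = [], "idle"
for (src, inp), (nxt, _) in spec.items():
    tour += shortest_path(state, src) + [inp]
    state = nxt
```

Replaying `tour` against an implementation and checking each observed output against `spec` is the essence of a conformance test sequence; real derivation methods also handle state verification, which this sketch omits.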

  12. PHYCAA+: an optimized, adaptive procedure for measuring and controlling physiological noise in BOLD fMRI.

    PubMed

    Churchill, Nathan W; Strother, Stephen C

    2013-11-15

    The presence of physiological noise in functional MRI can greatly limit the sensitivity and accuracy of BOLD signal measurements, and produce significant false positives. There are two main types of physiological confounds: (1) high-variance signal in non-neuronal tissues of the brain including vascular tracts, sinuses and ventricles, and (2) physiological noise components which extend into gray matter tissue. These physiological effects may also be partially coupled with stimuli (and thus the BOLD response). To address these issues, we have developed PHYCAA+, a significantly improved version of the PHYCAA algorithm (Churchill et al., 2011) that (1) down-weights the variance of voxels in probable non-neuronal tissue, and (2) identifies the multivariate physiological noise subspace in gray matter that is linked to non-neuronal tissue. This model estimates physiological noise directly from EPI data, without requiring external measures of heartbeat and respiration, or manual selection of physiological components. The PHYCAA+ model significantly improves the prediction accuracy and reproducibility of single-subject analyses, compared to PHYCAA and a number of commonly-used physiological correction algorithms. Individual subject denoising with PHYCAA+ is independently validated by showing that it consistently increased between-subject activation overlap, and minimized false-positive signal in non-gray-matter loci. The results are demonstrated for both block and fast single-event task designs, applied to standard univariate and adaptive multivariate analysis models.

  13. Dispensing an enzyme-conjugated solution into an ELISA plate by adapting ink-jet printers.

    PubMed

    Lonini, Luca; Accoto, Dino; Petroni, Silvia; Guglielmelli, Eugenio

    2008-04-24

    The rapid and precise delivery of small volumes of bio-fluids (from picoliters to nanoliters) is a key feature of modern bioanalytical assays. Commercial ink-jet printers are low-cost systems which enable the dispensing of tiny droplets at a rate which may exceed 10^4 Hz per nozzle. Currently, the main ejection technologies are piezoelectric and bubble-jet. We adapted two commercial printers, respectively a piezoelectric and a bubble-jet one, for the deposition of immunoglobulins into an ELISA plate. The objective was to perform a comparative evaluation of the two classes of ink-jet technologies in terms of required hardware modifications and possible damage to the dispensed molecules. The hardware of the two printers was modified to dispense an enzyme conjugate solution, containing polyclonal rabbit anti-human IgG labelled with HRP, in 7 wells of an ELISA plate. Moreover, the ELISA assay was used to assess the functional activity of the biomolecules after ejection. ELISA is a common and well-assessed technique to detect the presence of particular antigens or antibodies in a sample. We employed an ELISA diagnostic kit for the qualitative screening of anti-ENA antibodies to verify the ability of the dispensed immunoglobulins to bind the primary antibodies in the wells. Experimental tests showed that the dispensing of immunoglobulins using the piezoelectric printer does not cause any detectable difference in the outcome of the ELISA test if compared to manual dispensing using micropipettes. On the contrary, the thermal printhead was not able to reliably dispense the bio-fluid, which may mean that a surfactant is required to modify the wetting properties of the liquid.

  14. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture.

    PubMed

    Kreitler, Jason; Stoms, David M; Davis, Frank W

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
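
The gap between an exact optimizer and a greedy heuristic in a budget-limited selection problem can be seen on a toy instance. The parcel utilities, costs, and budget below are invented; brute-force enumeration stands in for the integer-programming solver, which is only practical here because the instance is tiny.

```python
import itertools

# Hypothetical instance: each parcel has a conservation utility score and an
# acquisition cost; the total budget is limited.
utility = [9, 7, 6, 5, 4]
cost    = [6, 5, 4, 3, 2]
budget  = 10

def exact_best(utility, cost, budget):
    """Enumerate all feasible subsets (fine for small n); stands in for the
    integer-programming solution."""
    best_u, best_set = 0, ()
    n = len(utility)
    for r in range(n + 1):
        for s in itertools.combinations(range(n), r):
            c = sum(cost[i] for i in s)
            u = sum(utility[i] for i in s)
            if c <= budget and u > best_u:
                best_u, best_set = u, s
    return best_u, best_set

def greedy(utility, cost, budget):
    """Pick parcels in order of utility-per-cost until the budget is spent."""
    order = sorted(range(len(utility)),
                   key=lambda i: utility[i] / cost[i], reverse=True)
    total_u, spent, chosen = 0, 0, []
    for i in order:
        if spent + cost[i] <= budget:
            chosen.append(i)
            spent += cost[i]
            total_u += utility[i]
    return total_u, chosen

opt_u, _ = exact_best(utility, cost, budget)
grd_u, _ = greedy(utility, cost, budget)
gap = (opt_u - grd_u) / opt_u        # relative loss from using the heuristic
```

On this instance the greedy heuristic leaves a few percent of utility on the table, the same qualitative effect as the up-to-12% gains from optimization reported in the study.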

  15. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    PubMed Central

    Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management. PMID:25538868

  16. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    USGS Publications Warehouse

    Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.

  17. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann; Usab, William J., Jr.

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
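
The error-equidistribution principle mentioned above (adaptation parameters chosen so the estimated error is equally distributed over the domain) can be sketched in one dimension: place nodes so each cell carries an equal share of the integral of an error monitor. The monitor function below is an invented stand-in for a flow-based error estimate, not the paper's adaptation parameter.

```python
import math

# Illustrative error monitor with a steep feature at x = 0.5.
def monitor(x):
    return 1.0 + 50.0 * math.exp(-100.0 * (x - 0.5) ** 2)

def equidistribute(n_cells, n_quad=2000):
    """Choose nodes on [0, 1] so every cell holds an equal share of the
    integral of the monitor (trapezoidal rule on a fine background grid)."""
    xs = [i / n_quad for i in range(n_quad + 1)]
    cum = [0.0]
    for i in range(n_quad):
        cum.append(cum[-1] + 0.5 * (monitor(xs[i]) + monitor(xs[i + 1])) / n_quad)
    total = cum[-1]
    # invert the cumulative map at equally spaced levels
    nodes, j = [0.0], 0
    for k in range(1, n_cells):
        target = total * k / n_cells
        while cum[j + 1] < target:
            j += 1
        frac = (target - cum[j]) / (cum[j + 1] - cum[j])
        nodes.append(xs[j] + frac / n_quad)   # interpolate inside background cell
    nodes.append(1.0)
    return nodes

nodes = equidistribute(20)
widths = [b - a for a, b in zip(nodes, nodes[1:])]
# cells near the steep feature at x = 0.5 come out much smaller than the rest
```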

  18. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.

  19. Computational procedure for finite difference solution of one-dimensional heat conduction problems reduces computer time

    NASA Technical Reports Server (NTRS)

    Iida, H. T.

    1966-01-01

    Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
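
For context, a minimal explicit finite-difference update for one-dimensional heat conduction looks like the following. This is the generic textbook scheme, not the operation-saving procedure of the report, whose details the abstract does not give.

```python
# Explicit finite-difference step for u_t = alpha * u_xx on [0, 1]
# with fixed-temperature ends.
alpha, nx, dx = 1.0, 21, 1.0 / 20
dt = 0.4 * dx * dx / alpha        # satisfies the stability limit dt <= dx^2 / (2*alpha)
u = [0.0] * nx
u[0], u[-1] = 1.0, 1.0            # hot boundaries, cold interior

for _ in range(500):
    un = u[:]                     # previous time level
    for i in range(1, nx - 1):
        u[i] = un[i] + alpha * dt / dx**2 * (un[i + 1] - 2 * un[i] + un[i - 1])

# after many steps the profile relaxes toward the steady state u = 1 everywhere
```

Each interior node costs a fixed handful of arithmetic operations per step, which is why reducing the number of mesh points or steps, as the report's procedure does for receding-surface ablation problems, translates directly into computer-time savings.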

  20. A numerical procedure to compute the stabilising solution of game theoretic Riccati equations of stochastic control

    NASA Astrophysics Data System (ADS)

    Dragan, Vasile; Ivanov, Ivan

    2011-04-01

    In this article, the problem of the numerical computation of the stabilising solution of the game theoretic algebraic Riccati equation is investigated. The Riccati equation under consideration occurs in connection with the solution of the H ∞ control problem for a class of stochastic systems affected by state-dependent and control-dependent white noise and subjected to Markovian jumping. The stabilising solution of the considered game theoretic Riccati equation is obtained as the limit of a sequence of approximations constructed from the stabilising solutions of a sequence of algebraic Riccati equations of stochastic control with definite sign of the quadratic part. The proposed algorithm extends to this general framework the method proposed in Lanzon, Feng, Anderson, and Rotkowitz (Lanzon, A., Feng, Y., Anderson, B.D.O., and Rotkowitz, M. (2008), 'Computing the Positive Stabilizing Solution to Algebraic Riccati Equations with an Indefinite Quadratic Term via a Recursive Method', IEEE Transactions on Automatic Control, 53, pp. 2280-2291). In the proof of convergence of the proposed algorithm, concepts associated with the generalised Lyapunov operators, such as stability, stabilisability and detectability, are widely used. The efficiency of the proposed algorithm is demonstrated by several numerical experiments.
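
The structure of such algorithms, a sequence of simpler solves converging to the stabilising solution, can be illustrated in the deterministic special case by Kleinman's Newton iteration for a standard algebraic Riccati equation, where each step reduces to a Lyapunov equation. The matrices below are invented, and this sketch omits the paper's indefinite quadratic term, multiplicative noise, and Markovian jumps.

```python
import numpy as np

def lyap(Ak, C):
    """Solve Ak.T @ X + X @ Ak = C via the Kronecker/vec identity."""
    n = Ak.shape[0]
    M = np.kron(np.eye(n), Ak.T) + np.kron(Ak.T, np.eye(n))
    x = np.linalg.solve(M, C.reshape(-1, order="F"))
    return x.reshape((n, n), order="F")

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # open-loop stable, so K = 0 is stabilising
B = np.array([[0.0], [1.0]])
Q = np.eye(2)

# Kleinman-Newton: each iterate solves a Lyapunov equation, and the sequence
# converges to the stabilising solution of A'X + XA - XBB'X + Q = 0.
K = np.zeros((1, 2))
for _ in range(25):
    X = lyap(A - B @ K, -(Q + K.T @ K))
    K = B.T @ X                            # updated stabilising feedback gain

residual = A.T @ X + X @ A - X @ B @ B.T @ X + Q
```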

  1. A coupled multi-block solution procedure for spray combustion in complex geometries

    NASA Technical Reports Server (NTRS)

    Chen, Kuo-Huey; Shuen, Jian-Shun

    1993-01-01

    Turbulent spray-combusting flow in complex geometries is presently treated by a coupled implicit procedure that employs finite-rate chemistry and real gas properties for combustion, as well as the stochastic separated model for spray and a multiblock treatment for complex geometries. Illustrative numerical tests conducted encompass a steady-state nonreacting backward-facing step flow, a premixed single-phase combustion flow, and spray combustion flow in a gas turbine combustor.

  2. A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.
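
The recursive-subdivision idea (a single root cell refined where the geometry demands it) can be sketched with a toy quadtree that refines any cell crossing a circular body. The circle test and depth limit are illustrative choices; the paper itself clips cut cells with polygon algorithms and stores the hierarchy in a binary tree rather than this quadtree sketch.

```python
import math

class Cell:
    def __init__(self, x, y, half):
        self.x, self.y, self.half = x, y, half   # centre and half-width
        self.children = []

    def straddles_circle(self, r=0.3):
        # nearest and farthest distance from the origin to this square cell
        dx = max(0.0, abs(self.x) - self.half)
        dy = max(0.0, abs(self.y) - self.half)
        near = math.hypot(dx, dy)
        far = math.hypot(abs(self.x) + self.half, abs(self.y) + self.half)
        return near < r < far                    # cell crosses the circle boundary

    def refine(self, max_depth, depth=0):
        if depth < max_depth and self.straddles_circle():
            h = self.half / 2
            self.children = [Cell(self.x + sx * h, self.y + sy * h, h)
                             for sx in (-1, 1) for sy in (-1, 1)]
            for c in self.children:
                c.refine(max_depth, depth + 1)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

root = Cell(0.0, 0.0, 1.0)      # root cell covering [-1, 1]^2
root.refine(max_depth=5)
leaf_cells = root.leaves()      # refinement concentrates along the circle
```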

  3. A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1994-01-01

    A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.

  4. A Procedure to Construct Exact Solutions of Nonlinear Fractional Differential Equations

    PubMed Central

    Güner, Özkan; Cevikel, Adem C.

    2014-01-01

    We use the fractional transformation to convert nonlinear partial fractional differential equations into nonlinear ordinary differential equations. The Exp-function method is extended to solve fractional partial differential equations in the sense of the modified Riemann-Liouville derivative. We apply the Exp-function method to the time fractional Sharma-Tasso-Olver equation, the space fractional Burgers equation, and the time fractional fmKdV equation. As a result, we obtain some new exact solutions. PMID:24737972

  5. A procedure to construct exact solutions of nonlinear fractional differential equations.

    PubMed

    Güner, Özkan; Cevikel, Adem C

    2014-01-01

    We use the fractional transformation to convert nonlinear partial fractional differential equations into nonlinear ordinary differential equations. The Exp-function method is extended to solve fractional partial differential equations in the sense of the modified Riemann-Liouville derivative. We apply the Exp-function method to the time fractional Sharma-Tasso-Olver equation, the space fractional Burgers equation, and the time fractional fmKdV equation. As a result, we obtain some new exact solutions.

  6. Decision making in offshore emergencies: Are standard operating procedures the solution?

    SciTech Connect

    Skriver, J.; Flin, R.H.

    1996-12-31

    Emergency situations on offshore installations can have devastating effects as seen in the Piper Alpha disaster in 1988. Offshore installations in the North Sea can be situated more than 100 miles from the coast and it is therefore imperative that personnel have the ability and facilities to deal with an emergency on their own. The offshore installation manager (OIM) is responsible for handling an incident which is likely to be characterized by time pressure, high risk, ambiguous information, unclear goals, and constantly changing conditions. To help the OIM, standard operating procedures (SOPs) have been introduced by the operating companies, which provide a set of rules to apply in a given crisis. At present there is a trend towards creating SOPs for every predictable offshore crisis. A constant increase in the number of procedures, generated to cover every eventuality, may obviate the need for OIM decision making but also create a problem if a novel emergency is encountered. Moreover, there may be dangers associated with too great a reliance on SOPs as they may become hard and fast rules that must be followed blindly. It is therefore of interest to identify how SOPs are utilized and what knowledge underpins their use. This research is based on interviews with 10 experienced OIMs on UKCS platforms from one major operator. It suggests that experienced OIMs have a repertoire of standard responses which they can apply in a crisis. This intimate knowledge of the emergency procedures has been developed through regular exercises, onshore simulator training, and involvement in the maintenance and improvement of the safety management systems on their installations. That is, decision making in offshore emergencies appears to be based on sound foundations, not on blind application of rules. A comparison is drawn between the decision making of the OIM and that of other emergency commanders, with particular reference to current theories of naturalistic decision making.

  7. New connection method for isolating and disinfecting intraluminal path during peritoneal dialysis solution-exchange procedures.

    PubMed

    Grabowy, R S; Kelley, R; Richter, S G; Bousquet, G G; Carr, K L

    1998-01-01

    Microbiological data have been collected on the performance of a new method of isolating and disinfecting the intraluminal path at the connect/disconnect site of a peritoneal dialysis (PD)-exchange pathway. High-temperature moist-heat (HTMH) disinfection is accomplished by a new device that uses microwave energy to heat the solution contained in the pressure-tight inner lumen of PD connector pairs between the transfer-set connector-clamp and the bag-connector break-away seal. An 85 degrees C (S.D. = 2.4 degrees C, n = 10) rise in solution temperature is seen in 12 seconds, thus yielding temperatures under pressure well over 100 degrees C with starting temperatures of 25 degrees C. Connector pairs were prepared by inoculation of a solution suspension containing at least 10^6 colony-forming units (CFU) of a test micro-organism. Approximately 0.4 mL of solution was contained within the mated connector pair. Using standard D-value determination methods, data were obtained for surviving organisms versus five exposure times and a positive control to obtain a population reduction curve. Four micro-organisms (S. epidermidis, P. aeruginosa, C. albicans, and A. niger) recognized to be among the most prevalent or problematic in causing peritonitis were tested. After microwave heating, the treated solution was aseptically withdrawn from the connector pair using a needle and syringe, plated in growth media, and incubated. Population counts of CFUs after incubation were used to establish survival curves. Results showed a tenfold population reduction in less than 3 seconds for all organisms tested. A 30-second cycle time safely achieves a > 10^8 population reduction for bacteria and yeast organisms, and a > 10^7 population reduction for fungi. One potential benefit of using this new intraluminal disinfection method is that it may help reduce peritonitis resulting from the even more problematic pathogens such as the gram-negative bacteria and fungal organisms. PMID:10649714
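
The D-value determination mentioned above fits log10 survival counts against exposure time; the D-value (the time for a tenfold population reduction) is minus the reciprocal of the slope. The counts below are idealised synthetic values chosen to give D = 1.5 s, consistent in spirit with the reported tenfold reduction in under 3 seconds; they are not the study's actual data.

```python
import math

# Synthetic survival curve: five exposure times plus the positive control at t = 0.
times  = [0.0, 3.0, 6.0, 9.0, 12.0, 15.0]     # seconds
counts = [1e6, 1e4, 1e2, 1.0, 1e-2, 1e-4]     # CFU (idealised, exactly log-linear)

# Least-squares slope of log10(CFU) versus time.
logs = [math.log10(c) for c in counts]
n = len(times)
tbar = sum(times) / n
lbar = sum(logs) / n
slope = sum((t - tbar) * (l - lbar) for t, l in zip(times, logs)) / \
        sum((t - tbar) ** 2 for t in times)

D = -1.0 / slope    # seconds per tenfold reduction
```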

  8. Adaptive Filtering for Large Space Structures: A Closed-Form Solution

    NASA Technical Reports Server (NTRS)

    Rauch, H. E.; Schaechter, D. B.

    1985-01-01

    In a previous paper Schaechter proposes using an extended Kalman filter to estimate adaptively the (slowly varying) frequencies and damping ratios of a large space structure. The time varying gains for estimating the frequencies and damping ratios can be determined in closed form so it is not necessary to integrate the matrix Riccati equations. After certain approximations, the time varying adaptive gain can be written as the product of a constant matrix times a matrix derived from the components of the estimated state vector. This is an important savings of computer resources and allows the adaptive filter to be implemented with approximately the same effort as the nonadaptive filter. The success of this new approach for adaptive filtering was demonstrated using synthetic data from a two mode system.

  9. A Simple Procedure for Constructing 5'-Amino-Terminated Oligodeoxynucleotides in Aqueous Solution

    NASA Technical Reports Server (NTRS)

    Bruick, Richard K.; Koppitz, Marcus; Joyce, Gerald F.; Orgel, Leslie E.

    1997-01-01

    A rapid method for the synthesis of oligodeoxynucleotides (ODNs) terminated by 5'-amino-5'-deoxythymidine is described. A 3'-phosphorylated ODN (the donor) is incubated in aqueous solution with 5'-amino-5'-deoxythymidine in the presence of N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide hydrochloride (EDC), extending the donor by one residue via a phosphoramidate bond. Template-directed ligation of the extended donor and an acceptor ODN, followed by acid hydrolysis, yields the acceptor ODN extended by a single 5'-amino-5'-deoxythymidine residue at its 5' terminus.

  10. Finding the Genomic Basis of Local Adaptation: Pitfalls, Practical Solutions, and Future Directions.

    PubMed

    Hoban, Sean; Kelley, Joanna L; Lotterhos, Katie E; Antolin, Michael F; Bradburd, Gideon; Lowry, David B; Poss, Mary L; Reed, Laura K; Storfer, Andrew; Whitlock, Michael C

    2016-10-01

    Uncovering the genetic and evolutionary basis of local adaptation is a major focus of evolutionary biology. The recent development of cost-effective methods for obtaining high-quality genome-scale data makes it possible to identify some of the loci responsible for adaptive differences among populations. Two basic approaches for identifying putatively locally adaptive loci have been developed and are broadly used: one that identifies loci with unusually high genetic differentiation among populations (differentiation outlier methods) and one that searches for correlations between local population allele frequencies and local environments (genetic-environment association methods). Here, we review the promises and challenges of these genome scan methods, including correcting for the confounding influence of a species' demographic history, biases caused by missing aspects of the genome, matching scales of environmental data with population structure, and other statistical considerations. In each case, we make suggestions for best practices for maximizing the accuracy and efficiency of genome scans to detect the underlying genetic basis of local adaptation. With attention to their current limitations, genome scan methods can be an important tool in finding the genetic basis of adaptive evolutionary change. PMID:27622873
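
A differentiation-outlier scan of the kind described can be caricatured in a few lines: compute a per-locus Fst from two populations' allele frequencies and flag loci in the upper tail of the distribution. The frequencies and threshold below are invented, the Fst estimator is the simplest variance-based form, and a real scan must correct for demographic history, as the abstract stresses.

```python
# Toy two-population differentiation-outlier scan (invented allele frequencies).
p1 = [0.50, 0.48, 0.52, 0.51, 0.10, 0.49]   # allele frequency, population 1
p2 = [0.52, 0.50, 0.49, 0.50, 0.90, 0.51]   # allele frequency, population 2

def fst(a, b):
    """Simple variance-based Fst for one locus across two populations."""
    pbar = (a + b) / 2
    if pbar in (0.0, 1.0):
        return 0.0                           # monomorphic overall
    var = ((a - pbar) ** 2 + (b - pbar) ** 2) / 2
    return var / (pbar * (1 - pbar))

values = [fst(a, b) for a, b in zip(p1, p2)]
cutoff = sorted(values)[-2]                  # crude empirical "upper tail" threshold
outliers = [i for i, v in enumerate(values) if v >= cutoff and v > 0.1]
```

Only the locus with strongly divergent frequencies (index 4) is flagged; the background loci, whose small differences mimic drift, fall well below the threshold.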

  11. An adaptive computation mesh for the solution of singular perturbation problems

    NASA Technical Reports Server (NTRS)

    Brackbill, J. U.; Saltzman, J.

    1980-01-01

    In singular perturbation problems, control of zone size variation can affect the effort required to obtain accurate, numerical solutions of finite difference equations. The mesh is generated by the solution of potential equations. Numerical results for a singular perturbation problem in two dimensions are presented. The mesh was used in calculations of resistive magnetohydrodynamic flow in two dimensions.

  12. The role of interactions in a world implementing adaptation and mitigation solutions to climate change.

    PubMed

    Warren, Rachel

    2011-01-13

    The papers in this volume discuss projections of climate change impacts upon humans and ecosystems under a global mean temperature rise of 4°C above preindustrial levels. Like most studies, they are mainly single-sector or single-region-based assessments. Even the multi-sector or multi-region approaches generally consider impacts in sectors and regions independently, ignoring interactions. Extreme weather and adaptation processes are often poorly represented and losses of ecosystem services induced by climate change or human adaptation are generally omitted. This paper addresses this gap by reviewing some potential interactions in a 4°C world, and also makes a comparison with a 2°C world. In a 4°C world, major shifts in agricultural land use and increased drought are projected, and an increased human population might increasingly be concentrated in areas remaining wet enough for economic prosperity. Ecosystem services that enable prosperity would be declining, with carbon cycle feedbacks and fire causing forest losses. There is an urgent need for integrated assessments considering the synergy of impacts and limits to adaptation in multiple sectors and regions in a 4°C world. By contrast, a 2°C world is projected to experience about one-half of the climate change impacts, with concomitantly smaller challenges for adaptation. Ecosystem services, including the carbon sink provided by the Earth's forests, would be expected to be largely preserved, with much less potential for interaction processes to increase challenges to adaptation. However, demands for land and water for biofuel cropping could reduce the availability of these resources for agricultural and natural systems. Hence, a whole system approach to mitigation and adaptation, considering interactions, potential human and species migration, allocation of land and water resources and ecosystem services, will be important in either a 2°C or a 4°C world.

  13. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    SciTech Connect

    Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D

    2012-10-01

    Accurate predictive simulations of complex real-world applications require numerical approximations that, first, mitigate the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting, and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.
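    The hierarchical-surplus adaptation that drives such wavelet collocation methods can be illustrated in one dimension. The sketch below is illustrative only, not the authors' MdMrA method: it refines a piecewise-linear hierarchical interpolant only where the local surplus exceeds a tolerance, so nodes cluster near steep gradients.

```python
import numpy as np

def adaptive_hierarchical_interp(f, max_level=12, tol=1e-3):
    """Adaptive piecewise-linear hierarchical interpolation on [0, 1].

    An interval is split only where the hierarchical surplus (the gap
    between f at the midpoint and the linear interpolant of the parent
    nodes) exceeds tol, so nodes concentrate at sharp features."""
    nodes = {0.0: f(0.0), 1.0: f(1.0)}
    intervals = [(0.0, 1.0)]
    for _ in range(max_level):
        refined = []
        for a, b in intervals:
            m = 0.5 * (a + b)
            surplus = f(m) - 0.5 * (nodes[a] + nodes[b])
            nodes[m] = f(m)
            if abs(surplus) > tol:        # keep refining near steep gradients
                refined += [(a, m), (m, b)]
        if not refined:
            break
        intervals = refined
    xs = np.array(sorted(nodes))
    return xs, np.array([nodes[x] for x in xs])
```

    For a steep profile such as tanh(50(x - 0.4)), the grid concentrates around the transition while staying coarse elsewhere, using far fewer points than the 2^12 + 1 nodes of a uniform level-12 grid.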

  14. CTEPP STANDARD OPERATING PROCEDURE FOR PREPARATION OF SURROGATE RECOVERY STANDARD AND INTERNAL STANDARD SOLUTIONS FOR NEUTRAL TARGET ANALYTES (SOP-5.25)

    EPA Science Inventory

    This standard operating procedure describes the method used for preparing internal standard, surrogate recovery standard and calibration standard solutions for neutral analytes used for gas chromatography/mass spectrometry analysis.

  15. AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov-Poisson equation

    NASA Astrophysics Data System (ADS)

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin

    2016-07-01

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov-Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  16. AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation

    DOE PAGES

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin

    2016-04-19

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Here, simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
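    The weighted least-squares idea behind a generalized finite difference (GFD) operator can be sketched in one dimension. This is an illustrative toy, not the AP-Cloud implementation; the inverse-distance weighting and the quadratic Taylor basis are simplifying assumptions.

```python
import numpy as np

def gfd_derivatives(x0, f0, xs, fs):
    """Estimate f'(x0) and f''(x0) from scattered samples by a weighted
    least-squares fit of a local quadratic Taylor model.

    xs, fs are neighbour locations and values; f0 = f(x0)."""
    h = xs - x0
    w = 1.0 / (np.abs(h) + 1e-12)             # closer neighbours weigh more
    A = np.column_stack([h, 0.5 * h**2]) * w[:, None]
    rhs = (fs - f0) * w
    coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coef                                # [f'(x0), f''(x0)]
```

    Because the stencil is simply whatever neighbours happen to be nearby, the same formula applies on unstructured particle clouds, which is what frees such methods from constraints on the geometric shape of the domain.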

  17. Convergent Aeronautics Solutions (CAS) Showcase Presentation on Mission Adaptive Digital Composite Aerostructure Technologies (MADCAT)

    NASA Technical Reports Server (NTRS)

    Swei, Sean; Cheung, Kenneth

    2016-01-01

    This project aims to develop a novel aerostructure concept that takes advantage of emerging digital composite materials and manufacturing methods to build high stiffness-to-density-ratio, ultra-light structures that can provide mission-adaptive and aerodynamically efficient future N+3/N+4 air vehicles.

  18. Deconvolution of post-adaptive optics images of faint circumstellar environments by means of the inexact Bregman procedure

    NASA Astrophysics Data System (ADS)

    Benfenati, A.; La Camera, A.; Carbillet, M.

    2016-02-01

    Aims: High-dynamic range images of astrophysical objects present some difficulties in their restoration because of the presence of very bright point-wise sources surrounded by faint and smooth structures. We propose a method that enables the restoration of this kind of image by taking such sources into account and, at the same time, improving the contrast in the final image. Moreover, the proposed approach can help to detect the position of the bright sources. Methods: The classical variational scheme in the presence of Poisson noise aims to find the minimum of a functional composed of the generalized Kullback-Leibler function and a regularization functional: the latter function is employed to preserve some characteristic in the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. This proposed scheme allows us to control the level of inexactness arising in the computed solution and to employ an overestimation of the regularization parameter (which balances the trade-off between the Kullback-Leibler function and the Bregman distance). This aspect is fundamental, since the estimation of this kind of parameter is very difficult in the presence of Poisson noise. Results: The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources' position is exactly known, this scheme provides us with very satisfactory results. In the case of inexact knowledge of the sources' position, it can in addition give some useful information on the true positions. Finally, the inexact Bregman scheme can also be used when information about the binary star's position concerns a connected region instead of isolated pixels.
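    As context for the data-fidelity term, the classical Richardson-Lucy iteration is the standard multiplicative scheme that decreases the generalized Kullback-Leibler divergence under Poisson noise; the inexact Bregman procedure adds a regularization term on top of exactly this data model. The 1-D sketch below is illustrative, not the authors' algorithm.

```python
import numpy as np

def richardson_lucy_1d(data, psf, n_iter=300):
    """Classical Richardson-Lucy deconvolution: a multiplicative update
    that decreases the generalized Kullback-Leibler divergence between
    the data and the re-blurred estimate (Poisson likelihood), while
    keeping the estimate nonnegative."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]                       # adjoint of the blur
    x = np.full_like(data, data.mean())          # positive flat start
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode='same')
        ratio = data / np.maximum(blurred, 1e-12)
        x = x * np.convolve(ratio, psf_mirror, mode='same')
    return x
```

    On noiseless data the iteration drives the re-blurred estimate toward the data while never producing negative intensities, which matters when faint structure sits next to bright point-wise sources.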

  19. Triangle based adaptive stencils for the solution of hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Durlofsky, Louis J.; Engquist, Bjorn; Osher, Stanley

    1992-01-01

    A triangle based total variation diminishing (TVD) scheme for the numerical approximation of hyperbolic conservation laws in two space dimensions is constructed. The novelty of the scheme lies in the nature of the preprocessing of the cell averaged data, which is accomplished via a nearest neighbor linear interpolation followed by a slope limiting procedure. Two such limiting procedures are suggested. The resulting method is considerably simpler than other triangle based non-oscillatory approximations which, like this scheme, approximate the flux up to second order accuracy. Numerical results for linear advection and Burgers' equation are presented.
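    The 1-D analogue of such slope limiting can be sketched with the minmod limiter (an illustrative stand-in; the paper works with triangle-based stencils in two dimensions):

```python
import numpy as np

def minmod(a, b):
    """minmod limiter: zero when the one-sided slopes disagree in sign,
    otherwise the one of smaller magnitude."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u, dx):
    """TVD slopes for a piecewise-linear reconstruction from cell
    averages u; returns one slope per interior cell."""
    backward = (u[1:-1] - u[:-2]) / dx
    forward = (u[2:] - u[1:-1]) / dx
    return minmod(backward, forward)
```

    At a discontinuity the two one-sided slopes disagree (or one vanishes), so the limiter flattens the reconstruction and no new extrema are created, which is the essence of the TVD property.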

  20. Embedded pitch adapters: A high-yield interconnection solution for strip sensors

    NASA Astrophysics Data System (ADS)

    Ullán, M.; Allport, P. P.; Baca, M.; Broughton, J.; Chisholm, A.; Nikolopoulos, K.; Pyatt, S.; Thomas, J. P.; Wilson, J. A.; Kierstead, J.; Kuczewski, P.; Lynn, D.; Hommels, L. B. A.; Fleta, C.; Fernandez-Tejero, J.; Quirion, D.; Bloch, I.; Díez, S.; Gregor, I. M.; Lohwasser, K.; Poley, L.; Tackmann, K.; Hauser, M.; Jakobs, K.; Kuehn, S.; Mahboubi, K.; Mori, R.; Parzefall, U.; Clark, A.; Ferrere, D.; Gonzalez Sevilla, S.; Ashby, J.; Blue, A.; Bates, R.; Buttar, C.; Doherty, F.; McMullen, T.; McEwan, F.; O'Shea, V.; Kamada, S.; Yamamura, K.; Ikegami, Y.; Nakamura, K.; Takubo, Y.; Unno, Y.; Takashima, R.; Chilingarov, A.; Fox, H.; Affolder, A. A.; Casse, G.; Dervan, P.; Forshaw, D.; Greenall, A.; Wonsak, S.; Wormald, M.; Cindro, V.; Kramberger, G.; Mandić, I.; Mikuž, M.; Gorelov, I.; Hoeferkamp, M.; Palni, P.; Seidel, S.; Taylor, A.; Toms, K.; Wang, R.; Hessey, N. P.; Valencic, N.; Hanagaki, K.; Dolezal, Z.; Kodys, P.; Bohm, J.; Mikestikova, M.; Bevan, A.; Beck, G.; Milke, C.; Domingo, M.; Fadeyev, V.; Galloway, Z.; Hibbard-Lubow, D.; Liang, Z.; Sadrozinski, H. F.-W.; Seiden, A.; To, K.; French, R.; Hodgson, P.; Marin-Reyes, H.; Parker, K.; Jinnouchi, O.; Hara, K.; Bernabeu, J.; Civera, J. V.; Garcia, C.; Lacasta, C.; Marti i Garcia, S.; Rodriguez, D.; Santoyo, D.; Solaz, C.; Soldevila, U.

    2016-09-01

    A proposal to fabricate large area strip sensors with integrated, or embedded, pitch adapters is presented for the End-cap part of the Inner Tracker in the ATLAS experiment. To implement the embedded pitch adapters, a second metal layer is used in the sensor fabrication, for signal routing to the ASICs. Sensors with different embedded pitch adapters have been fabricated in order to optimize the design and technology. Inter-strip capacitance, noise, pick-up, cross-talk, signal efficiency, and fabrication yield have been taken into account in their design and fabrication. Inter-strip capacitance tests taking into account all channel neighbors reveal the important differences between the various designs considered. These tests have been correlated with noise figures obtained in full assembled modules, showing that the tests performed on the bare sensors are a valid tool to estimate the final noise in the full module. The full modules have been subjected to test beam experiments in order to evaluate the incidence of cross-talk, pick-up, and signal loss. The detailed analysis shows no indication of cross-talk or pick-up as no additional hits can be observed in any channel not being hit by the beam above 170 mV threshold, and the signal in those channels is always below 1% of the signal recorded in the channel being hit, above 100 mV threshold. First results on irradiated mini-sensors with embedded pitch adapters do not show any change in the interstrip capacitance measurements with only the first neighbors connected.

  1. Cerebellar cathodal tDCS interferes with recalibration and spatial realignment during prism adaptation procedure in healthy subjects.

    PubMed

    Panico, Francesco; Sagliano, Laura; Grossi, Dario; Trojano, Luigi

    2016-06-01

    The aim of this study is to clarify the specific role of the cerebellum during the prism adaptation procedure (PAP), considering its involvement in early prism exposure (i.e., in the recalibration process) and in the post-exposure phase (i.e., in the after-effect, related to spatial realignment). For this purpose we interfered with cerebellar activity by means of cathodal transcranial direct current stimulation (tDCS), while young healthy individuals were asked to perform a pointing task on a touch screen before, during and after wearing base-left prism glasses. The distance from the target dot in each trial (in terms of pixels) on the horizontal and vertical axes was recorded and served as an index of accuracy. Results on the horizontal axis, which was shifted by the prism glasses, revealed that participants who received cathodal stimulation showed increased rightward deviation from the actual position of the target while wearing prisms and a larger leftward deviation from the target after prism removal. Results on the vertical axis, on which no shift was induced, revealed a general trend in the two groups to improve accuracy through the different phases of the task, and a trend, more visible in cathodal stimulated participants, to worsen accuracy from the first to the last movements in each phase. Data on the horizontal axis allow us to confirm that the cerebellum is involved in all stages of PAP, contributing to the early strategic recalibration process as well as to spatial realignment. On the vertical axis, the improving performance across the different stages of the task and the worsening accuracy within each task phase can be ascribed, respectively, to a learning process and to task-related fatigue. PMID:27031676

  2. An adaptive-mesh finite-difference solution method for the Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Luchini, Paolo

    1987-02-01

    An adjustable variable-spacing grid is presented which permits the addition or deletion of single points during iterative solutions of the Navier-Stokes equations by finite difference methods. The grid is designed for application to two-dimensional steady-flow problems which can be described by partial differential equations whose second derivatives are constrained to the Laplacian operator. An explicit Navier-Stokes equations solution technique defined for use with the grid incorporates a hybrid form of the convective terms. Three methods are developed for automatic modifications of the mesh during calculations.

  3. Catheter for Cleaning Surgical Optics During Surgical Procedures: A Possible Solution for Residue Buildup and Fogging in Video Surgery.

    PubMed

    de Abreu, Igor Renato Louro Bruno; Abrão, Fernando Conrado; Silva, Alessandra Rodrigues; Corrêa, Larissa Teresa Cirera; Younes, Riad Nain

    2015-05-01

    Currently, there is a tendency to perform surgical procedures via laparoscopic or thoracoscopic access. However, even with the impressive technological advancement in surgical materials, such as improvement in quality of monitors, light sources, and optical fibers, surgeons have to face simple problems that can greatly hinder surgery by video. One is the formation of "fog" or residue buildup on the lens, causing decreased visibility. Intracavitary techniques for cleaning surgical optics and preventing fog formation have been described; however, some of these techniques employ the use of expensive and complex devices designed solely for this purpose. Moreover, these techniques allow the cleaning of surgical optics when they become dirty, which does not prevent the accumulation of residue in the optics. To solve this problem we have designed a device that allows cleaning the optics without surgical stops and prevents fogging and residue accumulation. The objective of this study is to evaluate through experimental testing the effectiveness of a simple device that prevents the accumulation of residue and fogging of optics used in surgical procedures performed through thoracoscopic or laparoscopic access. Ex-vivo experiments were performed simulating the conditions of residue presence in surgical optics during a video surgery. The experiment consists of immersing the optics and catheter set connected to the IV line with crystalloid solution in three types of materials: blood, blood plus fat solution, and 200 mL of distilled water and 1 vial of methylene blue. The optics coupled to the device were immersed in 200 mL of each type of residue, repeating each immersion 10 times for each distinct residue for both thirty- and zero-degree optics, totaling 420 experiments. A success rate of 98.1% was observed after the experiments; in these cases the device was able to clean and prevent the residue accumulation in the optics.

  4. A scalable and adaptable solution framework within components of the CCSM

    SciTech Connect

    Evans, Katherine J; Rouson, Damian; Salinger, Andy; Taylor, Mark; White III, James B; Weijer, Wilbert

    2009-01-01

    A framework for a fully implicit solution method is implemented into (1) the High Order Methods Modeling Environment (HOMME), which is a spectral element dynamical core option in the Community Atmosphere Model (CAM), and (2) the Parallel Ocean Program (POP) model of the global ocean. Both of these models are components of the Community Climate System Model (CCSM). HOMME is a development version of CAM and provides a scalable alternative when run with an explicit time integrator. However, it suffers from the typical time-step size limit needed to maintain stability. POP uses a time-split semi-implicit time integrator that allows larger time steps but less accuracy when used with scale-interacting physics. A fully implicit solution framework allows larger time step sizes and additional climate analysis capability, such as model steady state and spin-up efficiency gains, without a loss in scalability. This framework is implemented into HOMME and POP using a new Fortran interface to the Trilinos solver library, ForTrilinos, which leverages several new capabilities in the current Fortran standard to maximize robustness and speed. The ForTrilinos solution template was also designed for interchangeability; other solution methods and capability improvements can be more easily implemented into the models as they are developed without severely interacting with the code structure. The utility of this approach is illustrated with a test case for each of the climate component models.

  5. Copper-adapted Suillus luteus, a symbiotic solution for pines colonizing Cu mine spoils.

    PubMed

    Adriaensen, K; Vrålstad, T; Noben, J-P; Vangronsveld, J; Colpaert, J V

    2005-11-01

    Natural populations thriving in heavy-metal-contaminated ecosystems are often subjected to selective pressures for increased resistance to toxic metals. In the present study we describe a population of the ectomycorrhizal fungus Suillus luteus that colonized a toxic Cu mine spoil in Norway. We hypothesized that this population had developed adaptive Cu tolerance and was able to protect pine trees against Cu toxicity. We also tested for the existence of cotolerance to Cu and Zn in S. luteus. Isolates from Cu-polluted, Zn-polluted, and nonpolluted sites were grown in vitro on Cu- or Zn-supplemented medium. The Cu mine isolates exhibited high Cu tolerance, whereas the Zn-tolerant isolates were shown to be Cu sensitive, and vice versa. This indicates that the evolution of metal-specific tolerance mechanisms is strongly triggered by pollution in the local environment. Cotolerance does not occur in the S. luteus isolates studied. In a dose-response experiment, the Cu sensitivity of nonmycorrhizal Pinus sylvestris seedlings was compared to the sensitivity of mycorrhizal seedlings colonized either by a Cu-sensitive or Cu-tolerant S. luteus isolate. In nonmycorrhizal plants and plants colonized by the Cu-sensitive isolate, root growth and nutrient uptake were strongly inhibited under Cu stress conditions. In contrast, plants colonized by the Cu-tolerant isolate were hardly affected. The Cu-adapted S. luteus isolate provided excellent insurance against Cu toxicity in pine seedlings exposed to elevated Cu levels. Such a metal-adapted Suillus-Pinus combination might be suitable for large-scale land reclamation at phytotoxic metalliferous and industrial sites. PMID:16269769

  6. Open cascades as simple solutions to providing ultrasensitivity and adaptation in cellular signaling

    NASA Astrophysics Data System (ADS)

    Srividhya, Jeyaraman; Li, Yongfeng; Pomerening, Joseph R.

    2011-08-01

    Cell signaling is achieved predominantly by reversible phosphorylation-dephosphorylation reaction cascades. Up until now, circuits conferring adaptation have all required the presence of a cascade with some type of closed topology: negative-feedback loop with a buffering node, or incoherent feed-forward loop with a proportioner node. In this paper, using Goldbeter-Koshland-type expressions, we propose a differential equation model to describe a generic, open signaling cascade that elicits an adaptation response. This is accomplished by coupling N phosphorylation-dephosphorylation cycles unidirectionally, without any explicit feedback loops. Using this model, we show that as the length of the cascade grows, the steady states of the downstream cycles reach a limiting value. In other words, our model indicates that there is a minimum number of cycles required to achieve a maximum in sensitivity and amplitude in the response of a signaling cascade. We also describe for the first time that the phenomenon of ultrasensitivity can be further subdivided into three sub-regimes, separated by sharp stimulus threshold values: OFF, OFF-ON-OFF, and ON. In the OFF-ON-OFF regime, an interesting property emerges. In the presence of a basal amount of activity, the temporal evolution of early cycles yields damped peak responses. On the other hand, the downstream cycles switch rapidly to a higher activity state for an extended period of time, prior to settling to an OFF state (OFF-ON-OFF). This response arises from the changing dynamics between a feed-forward activation module and dephosphorylation reactions. In conclusion, our model gives the new perspective that open signaling cascades embedded in complex biochemical circuits may possess the ability to show a switch-like adaptation response, without the need for any explicit feedback circuitry.
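    A minimal sketch of such an open cascade, assuming illustrative rate constants and a small Michaelis constant K that puts the cycles in the zero-order (ultrasensitive) regime; this is not the paper's parameterization:

```python
import numpy as np

def cascade_steady(n_cycles, stim, K=0.01, k_act=1.0, k_deact=0.3,
                   dt=0.002, t_end=100.0):
    """Forward-Euler integration of n unidirectionally coupled
    phosphorylation-dephosphorylation cycles with Goldbeter-Koshland
    kinetics; y[i] is the active fraction of cycle i, driven only by
    the active form of the cycle upstream (open topology, no feedback)."""
    y = np.zeros(n_cycles)
    for _ in range(int(t_end / dt)):
        x = np.concatenate(([stim], y[:-1]))   # upstream activities
        dydt = (k_act * x * (1.0 - y) / (1.0 - y + K)
                - k_deact * y / (y + K))
        y = y + dt * dydt
    return y
```

    With these rates the balance point sits at stimulus k_deact / k_act = 0.3, so a stimulus well above it switches every cycle nearly fully ON, while a stimulus below it leaves the whole cascade OFF, with downstream cycles even more sharply off than the first.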

  7. Adaptively-refined overlapping grids for the numerical solution of systems of hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.

    1995-01-01

    Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods have been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.

  8. Angularly Adaptive P1 - Double P0 Flux-Limited Diffusion Solutions of Non-Equilibrium Grey Radiative Transfer Problems

    SciTech Connect

    Brantley, P S

    2006-08-08

    The double spherical harmonics angular approximation in the lowest order, i.e. double P{sub 0} (DP{sub 0}), is developed for the solution of time-dependent non-equilibrium grey radiative transfer problems in planar geometry. Although the DP{sub 0} diffusion approximation is expected to be less accurate than the P{sub 1} diffusion approximation at and near thermodynamic equilibrium, the DP{sub 0} angular approximation can more accurately capture the complicated angular dependence near a non-equilibrium radiation wave front. In addition, the DP{sub 0} approximation should be more accurate in non-equilibrium optically thin regions where the positive and negative angular domains are largely decoupled. We develop an adaptive angular technique that locally uses either the DP{sub 0} or P{sub 1} flux-limited diffusion approximation depending on the degree to which the radiation and material fields are in thermodynamic equilibrium. Numerical results are presented for two test problems due to Su and Olson and to Ganapol and Pomraning for which semi-analytic transport solutions exist. These numerical results demonstrate that the adaptive P{sub 1}-DP{sub 0} diffusion approximation can yield improvements in accuracy over the standard P{sub 1} diffusion approximation, both without and with flux-limiting, for non-equilibrium grey radiative transfer.

  9. Angularly Adaptive P1-Double P0 Flux-Limited Diffusion Solutions of Non-Equilibrium Grey Radiative Transfer Problems

    SciTech Connect

    Brantley, P S

    2005-12-13

    The double spherical harmonics angular approximation in the lowest order, i.e. double P{sub 0} (DP{sub 0}), is developed for the solution of time-dependent non-equilibrium grey radiative transfer problems in planar geometry. Although the DP{sub 0} diffusion approximation is expected to be less accurate than the P{sub 1} diffusion approximation at and near thermodynamic equilibrium, the DP{sub 0} angular approximation can more accurately capture the complicated angular dependence near a non-equilibrium radiation wave front. In addition, the DP{sub 0} approximation should be more accurate in non-equilibrium optically thin regions where the positive and negative angular domains are largely decoupled. We develop an adaptive angular technique that locally uses either the DP{sub 0} or P{sub 1} flux-limited diffusion approximation depending on the degree to which the radiation and material fields are in thermodynamic equilibrium. Numerical results are presented for two test problems due to Su and Olson and to Ganapol and Pomraning for which semi-analytic transport solutions exist. These numerical results demonstrate that the adaptive P{sub 1}-DP{sub 0} diffusion approximation can yield improvements in accuracy over the standard P{sub 1} diffusion approximation, both without and with flux-limiting, for non-equilibrium grey radiative transfer.

  10. Adaptive Finite-Element Solution of the Nonlinear Poisson-Boltzmann Equation: A Charged Spherical Particle at Various Distances from a Charged Cylindrical Pore in a Charged Planar Surface

    PubMed

    Bowen; Sharif

    1997-03-15

    A Galerkin finite-element approach combined with an error estimator and automatic mesh refinement has been used to provide a flexible numerical solution of the Poisson-Boltzmann equation. A Newton sequence technique was used to solve the nonlinear equations arising from the finite-element discretization procedure. Errors arising from the finite-element solution due to mesh refinement were calculated using the Zienkiewicz-Zhu error estimator, and an automatic remeshing strategy was adopted to achieve a solution satisfying a preset quality. Examples of the performance of the error estimator in adaptive mesh refinement are presented. The adaptive finite-element scheme presented in this study has proved to be an effective technique in minimizing errors in finite-element solutions for a given problem, in particular those with complex geometries. As an example, numerical solutions are presented for the case of a charged spherical particle at various distances from a charged cylindrical pore in a charged planar surface. Such a scheme provides a quantification of the significance of electrostatic interactions for an important industrial technology: membrane separation processes.
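    The Newton-sequence idea can be sketched on a 1-D finite-difference analogue of the problem. This is illustrative only: a dense solve on a line stands in for the paper's adaptive finite-element discretization of the full geometry.

```python
import numpy as np

def solve_pb_1d(phi0=2.0, n=101, tol=1e-10, max_newton=50):
    """Newton's method for a finite-difference discretization of the
    1-D nonlinear Poisson-Boltzmann problem phi'' = sinh(phi) on [0, 1]
    with phi(0) = phi0, phi(1) = 0 (dimensionless potential)."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    phi = phi0 * (1.0 - x)                        # linear initial guess
    for _ in range(max_newton):
        interior = phi[1:-1]
        # residual of the discrete nonlinear equations
        F = (phi[:-2] - 2.0 * interior + phi[2:]) / h**2 - np.sinh(interior)
        # tridiagonal Jacobian: linearize sinh around the current iterate
        J = (np.diag(-2.0 / h**2 - np.cosh(interior))
             + np.diag(np.full(n - 3, 1.0 / h**2), 1)
             + np.diag(np.full(n - 3, 1.0 / h**2), -1))
        step = np.linalg.solve(J, -F)
        phi[1:-1] += step
        if np.max(np.abs(step)) < tol:
            break
    return x, phi
```

    Each Newton step solves a linearized problem, exactly as in the paper's Newton sequence; in practice the linear solve would use the sparse, adaptively refined finite-element system rather than a dense matrix.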

  11. Self-adaptive difference method for the effective solution of computationally complex problems of boundary layer theory

    NASA Technical Reports Server (NTRS)

    Schoenauer, W.; Daeubler, H. G.; Glotz, G.; Gruening, J.

    1986-01-01

    An implicit difference procedure for the solution of equations for a chemically reacting hypersonic boundary layer is described. Difference forms of arbitrary error order in the x and y coordinate plane were used to derive estimates for discretization error. Computational complexity and time were minimized by the use of this difference method and the iteration of the nonlinear boundary layer equations was regulated by discretization error. Velocity and temperature profiles are presented for Mach 20.14 and Mach 18.5; variables are velocity profiles, temperature profiles, mass flow factor, Stanton number, and friction drag coefficient; three figures include numeric data.

  12. A local anisotropic adaptive algorithm for the solution of low-Mach transient combustion problems

    NASA Astrophysics Data System (ADS)

    Carpio, Jaime; Prieto, Juan Luis; Vera, Marcos

    2016-02-01

    A novel numerical algorithm for the simulation of transient combustion problems at low Mach and moderately high Reynolds numbers is presented. These problems are often characterized by the existence of a large disparity of length and time scales, resulting in the development of directional flow features, such as slender jets, boundary layers, mixing layers, or flame fronts. This makes local anisotropic adaptive techniques quite advantageous computationally. In this work we propose a local anisotropic refinement algorithm using, for the spatial discretization, unstructured triangular elements in a finite element framework. For the time integration, the problem is formulated in the context of semi-Lagrangian schemes, introducing the semi-Lagrange-Galerkin (SLG) technique as a better alternative to the classical semi-Lagrangian (SL) interpolation. The good performance of the numerical algorithm is illustrated by solving a canonical laminar combustion problem: the flame/vortex interaction. First, a premixed methane-air flame/vortex interaction with simplified transport and chemistry description (Test I) is considered. Results are found to be in excellent agreement with those in the literature, proving the superior performance of the SLG scheme when compared with the classical SL technique, and the advantage of using anisotropic adaptation instead of uniform meshes or isotropic mesh refinement. As a more realistic example, we then conduct simulations of non-premixed hydrogen-air flame/vortex interactions (Test II) using a more complex combustion model which involves state-of-the-art transport and chemical kinetics. In addition to the analysis of the numerical features, this second example allows us to perform a satisfactory comparison with experimental visualizations taken from the literature.
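    The semi-Lagrangian building block that the SLG scheme refines can be sketched for 1-D linear advection; here pointwise linear interpolation stands in for the Galerkin projection on the finite element mesh.

```python
import numpy as np

def semi_lagrangian_step(u, x, a, dt):
    """One semi-Lagrangian step for u_t + a u_x = 0 on a uniform
    periodic grid: trace each node back along its characteristic to
    the departure point and interpolate the old solution there.
    Stable even when a*dt spans many cells (no CFL restriction)."""
    period = x[-1] - x[0] + (x[1] - x[0])
    feet = (x - a * dt - x[0]) % period + x[0]   # departure points
    return np.interp(feet, x, u, period=period)
```

    Because the update only evaluates the old solution at departure points, the time step is not limited by the mesh spacing, which is what makes semi-Lagrangian formulations attractive for flows with a large disparity of time scales.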

  13. A cellular automaton model adapted to sandboxes to simulate the transport of solutes

    NASA Astrophysics Data System (ADS)

    Lora, Boris; Donado, Leonardo; Castro, Eduardo; Bayuelo, Alfredo

    2016-04-01

    The increasing use of groundwater for human consumption and the rising contamination levels of these water sources make it imperative to understand more deeply how contaminants are transported by water, in particular through a heterogeneous porous medium. Accordingly, the present research aims to design a model that simulates the transport of solutes through a heterogeneous porous medium using cellular automata. Cellular automata (CA) are a class of spatially (pixels) and temporally discrete mathematical systems characterized by local interaction (neighborhoods). The pixel size and the CA neighborhood were chosen so as to reproduce the solute behavior accurately (Ilachinski, 2001). For the design and corresponding validation of the CA model, different conservative tracer tests were performed using a sandbox packed heterogeneously with coarse sand (size #20, grain diameter 0.85 to 0.6 mm) and clay. We used uranine and a saline NaCl solution as tracers, taking snapshots every 20 seconds. A calibration curve (pixel intensity vs. concentration) was used to obtain concentration maps. The sandbox was constructed of acrylic (0.8 cm thick) with dimensions of 70 x 45 x 4 cm. It had a grid of 35 transversal holes, each 4 mm in diameter, with a uniform separation of 10 cm. To validate the CA model, a metric was used consisting of the fraction of correctly predicted pixels over the total per image throughout the entire test run. The CA model shows that calibrated pixels and neighborhoods usually reach prediction rates above 60%. This suggests that the CA model could be useful in further research on the transport of contaminants in hydrogeology.
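    A minimal CA update rule of the kind described can be sketched as follows, assuming a hypothetical permeability field and periodic boundaries for brevity (the paper's calibrated neighborhood rules are not public in this listing):

```python
import numpy as np

def ca_step(c, k, frac=0.2):
    """One cellular-automaton update: every cell sends a fraction of its
    solute to each von Neumann neighbour, scaled by that neighbour's
    permeability k in [0, 1] (clay ~ 0, coarse sand ~ 1). Mass is
    conserved exactly; boundaries are periodic for brevity."""
    new = c.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        # permeability of the cell that receives the flux in this direction
        k_recv = np.roll(k, (-shift[0], -shift[1]), axis=(0, 1))
        flux = frac * c * k_recv
        new -= flux                                   # leaves the sender...
        new += np.roll(flux, shift, axis=(0, 1))      # ...arrives next door
    return new
```

    Dropping a tracer pulse into a sand field containing a low-permeability block shows the plume spreading around the clay region, the qualitative behaviour the sandbox experiments are used to calibrate.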

  14. Numerical simulation for horizontal subsurface flow constructed wetlands: A short review including geothermal effects and solution bounding in biodegradation procedures

    NASA Astrophysics Data System (ADS)

    Liolios, K.; Tsihrintzis, V.; Angelidis, P.; Georgiev, K.; Georgiev, I.

    2016-10-01

    Current developments in modeling groundwater flow and contaminant transport and removal in the porous media of Horizontal Subsurface Flow Constructed Wetlands (HSF CWs) are first briefly reviewed. The two usual environmental engineering approaches, the black-box and the process-based one, are briefly presented. Next, recent research results obtained with these two approaches are discussed as application examples, with emphasis on the evaluation of the optimal design and operation parameters of HSF CWs. For the black-box approach, the use of Artificial Neural Networks to formulate models that predict the removal performance of HSF CWs is discussed. A novel mathematical proof is presented concerning the dependence of the first-order removal coefficient on temperature and hydraulic residence time. For the process-based approach, an application example is first discussed concerning procedures to evaluate the optimal range of values of the removal coefficient as a function of either temperature or hydraulic residence time. This evaluation is based on simulating available experimental results from pilot-scale units operated at Democritus University of Thrace, Xanthi, Greece. In a second example, a novel enlargement of the system of partial differential equations is presented in order to include geothermal effects. Finally, in a third example, parameter uncertainty in the biodegradation procedures is considered, and a novel approach is presented that provides upper and lower solution bounds for the practical draft design of HSF CWs.
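
    The first-order removal model referred to in the abstract is commonly written, in the constructed-wetlands literature, as plug-flow decay over the hydraulic residence time with a modified-Arrhenius temperature correction. The sketch below uses that standard form as an assumption (it is not necessarily the paper's exact formulation), with hypothetical calibration values for k20 and theta.

```python
import math

def removal_coefficient(k20, theta, T):
    """Modified-Arrhenius temperature correction: k_T = k_20 * theta^(T - 20)."""
    return k20 * theta ** (T - 20.0)

def outlet_concentration(c_in, k, hrt_days):
    """First-order plug-flow decay over the hydraulic residence time (days)."""
    return c_in * math.exp(-k * hrt_days)

# Hypothetical values: k20 = 0.3 d^-1, theta = 1.06, water at 15 C, HRT = 4 days.
k = removal_coefficient(k20=0.3, theta=1.06, T=15.0)
c_out = outlet_concentration(c_in=100.0, k=k, hrt_days=4.0)
```

    Uncertainty bounds of the kind the third example discusses would correspond to evaluating this model over a range of k values rather than a single calibrated one.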

  15. Non chemical control of helminths in ruminants: adapting solutions for changing worms in a changing world.

    PubMed

    Hoste, H; Torres-Acosta, J F J

    2011-08-01

    Infections with gastrointestinal nematodes (GIN) remain a major threat to ruminant production, health and welfare associated with outdoor breeding. The control of these helminth parasites has relied on the strategic or tactical use of chemical anthelmintic (AH) drugs. However, the expanding development and diffusion of anthelmintic resistance in nematode populations imposes the need to explore and validate novel solutions (or to re-discover old knowledge) for a more sustainable control of GIN. The different solutions refer to three main principles of action. The first is to limit contact between the hosts and the infective larvae in the field through grazing management methods. These methods have been described since the 1970s and, at present, benefit from innovations based on computer models. Several biological control agents have also been studied over the last three decades as potential tools to reduce infective larvae in the field. The second principle aims at improving the host response against GIN infections, relying on genetic selection between or within breeds of sheep or goats, crossbreeding of resistant and susceptible breeds, and/or the manipulation of nutrition. These approaches may benefit from a better understanding of the potential underlying mechanisms, particularly with regard to the host immune response against the worms. The third principle is the control of GIN based on non-conventional AH materials (plant or mineral compounds). Worldwide studies show that non-conventional AH materials can eliminate worms and/or negatively affect the parasite's biology. The recent developments and pros and cons of these various options are discussed. Last, some results are presented which illustrate how the integration of these different solutions can be efficient and applicable in different production systems and/or epidemiological conditions. The integration of different control tools seems to be a prerequisite for the sustainable control of GIN.

  16. Practical Study and Solutions Adapted For The Road Noise In The Algiers City

    NASA Astrophysics Data System (ADS)

    Iddir, R.; Boukhaloua, N.; Saadi, T.

    Now that the city spreads over a large area, the development of the road network has logically followed this movement, generating a considerable impact on the environment. The environment is an open system resulting from the interaction between man and nature, and it is affected on all sides by the different means of transport and by their increasing demand for mobility. The development of the contemporary city has created environmental problems, among them road noise. Road noise is a complex phenomenon, essentially because of its human sensory effects; its impact on the environment is considerable and directly affects quality of life, mainly in densely populated zones. Noise pollution has reached its paroxysm: the road network of Algiers was not designed to satisfy requirements regarding noise pollution. Soundproofing arrangements should therefore be adopted to meet these new requirements for acoustic comfort. All these elements led to a process aimed at attenuating the nuisance caused by road traffic, through actions essentially targeting the vehicles, the structure of the road, and the immediate environment of the road-structure system. From these results, we note that the noise nuisance situation in this heavy-traffic zone is disturbing, especially for the residents' health.

  17. Numerical solution of multi-dimensional compressible reactive flow using a parallel wavelet adaptive multi-resolution method

    NASA Astrophysics Data System (ADS)

    Grenga, Temistocle

    The aim of this research is to further develop a dynamically adaptive algorithm based on wavelets that can efficiently solve multi-dimensional compressible reactive flow problems. This work demonstrates the great potential of the method for performing direct numerical simulation (DNS) of combustion with detailed chemistry and multi-component diffusion. In particular, it addresses the performance obtained using a massively parallel implementation and demonstrates important savings in memory storage and computational time over conventional methods. In addition, fully resolved simulations of challenging three-dimensional problems involving mixing and combustion processes are performed. These problems are particularly challenging due to their strong multiscale character. Such solutions require combining advanced numerical techniques with modern computational resources.
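
    The core idea of wavelet-based adaptivity is that small wavelet (detail) coefficients mark regions where the solution is smooth and the grid can stay coarse, while large coefficients flag features needing refinement. The toy sketch below illustrates this with a one-level Haar transform of a 1-D field containing a sharp front near x = 0.65; the transform, tolerance, and field are illustrative assumptions, not the dissertation's algorithm.

```python
def haar_level(samples):
    """One level of the Haar transform: returns (averages, details)."""
    avgs, dets = [], []
    for a, b in zip(samples[0::2], samples[1::2]):
        avgs.append(0.5 * (a + b))
        dets.append(0.5 * (a - b))
    return avgs, dets

def flag_refinement(samples, tol):
    """Coarse-grid indices where the detail magnitude exceeds tol (refine there)."""
    _, dets = haar_level(samples)
    return [i for i, d in enumerate(dets) if abs(d) > tol]

# A field that is flat except for a sharp front near x = 0.65.
xs = [i / 16.0 for i in range(16)]
field = [0.0 if x < 0.65 else 1.0 for x in xs]
flags = flag_refinement(field, tol=0.1)
```

    Only the pair straddling the front is flagged; everywhere else the detail coefficients vanish, which is the mechanism behind the memory savings the abstract reports.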

  18. An adaptive high-dimensional stochastic model representation technique for the solution of stochastic partial differential equations

    SciTech Connect

    Ma Xiang; Zabaras, Nicholas

    2010-05-20

    A computational methodology is developed to address the solution of high-dimensional stochastic problems. It utilizes the high-dimensional model representation (HDMR) technique in the stochastic space to represent the model output as a finite hierarchical correlated function expansion in terms of the stochastic inputs, starting from lower-order and proceeding to higher-order component functions. HDMR is efficient at capturing the high-dimensional input-output relationship, such that the behavior of many physical systems can be modeled to good accuracy by only the first few lower-order terms. An adaptive version of HDMR is also developed to automatically detect the important dimensions and construct higher-order terms using only those dimensions. The newly developed adaptive sparse grid collocation (ASGC) method is incorporated into HDMR to solve the resulting sub-problems. By integrating HDMR and ASGC, it is computationally possible to construct a low-dimensional stochastic reduced-order model of the high-dimensional stochastic problem and easily perform various statistical analyses on the output. Several numerical examples involving elementary mathematical functions and fluid mechanics problems are considered to illustrate the proposed method. The cases examined show that the method provides accurate results for stochastic dimensionality as high as 500, even with large input variability. The efficiency of the proposed method is examined by comparison with Monte Carlo (MC) simulation.
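
    The first level of the hierarchical expansion can be made concrete with a first-order cut-HDMR sketch: the model output is approximated by the value at a reference (cut) point plus one-dimensional corrections obtained by varying one input at a time. This is an illustrative simplification (the paper's adaptive version also builds higher-order terms and uses sparse-grid collocation for the sub-problems); the functions and reference point are assumptions.

```python
def hdmr_first_order(f, x_ref):
    """First-order cut-HDMR surrogate: f(x) ~ f0 + sum_i [f(x_i varied alone) - f0]."""
    f0 = f(x_ref)
    def surrogate(x):
        total = f0
        for i in range(len(x_ref)):
            xi = list(x_ref)
            xi[i] = x[i]          # vary only the i-th input from the cut point
            total += f(xi) - f0
        return total
    return surrogate

# For an additive model the first-order expansion is exact.
def additive(x):
    return 1.0 + 2.0 * x[0] + x[1] ** 2 + 3.0 * x[2]

g = hdmr_first_order(additive, x_ref=[0.0, 0.0, 0.0])
```

    Higher-order component functions would correct the residual when inputs interact; the adaptive scheme in the paper adds them only along dimensions detected as important.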

  19. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    PubMed

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. It uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by examples of toolbox use. Finally, guidelines and recommendations for parameter configurations are given.
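
    The flavor of maximum-likelihood psychometric estimation can be shown with a toy grid search: a logistic psychometric function, a simulated observer, and a per-trial log-likelihood update over candidate thresholds. Everything here (the parameter grid, the fixed slope, the simulated observer) is an illustrative assumption, not the UML Toolbox's actual algorithm, which also adapts stimulus placement and estimates slope and lapse rate.

```python
import math
import random

def psychometric(x, alpha, beta, gamma=0.5, lam=0.02):
    """Logistic psychometric function with guess rate gamma and lapse rate lam."""
    p = 1.0 / (1.0 + math.exp(-beta * (x - alpha)))
    return gamma + (1.0 - gamma - lam) * p

alphas = [i * 0.5 for i in range(-8, 9)]        # candidate thresholds
loglik = {a: 0.0 for a in alphas}

random.seed(1)
true_alpha, beta = 1.0, 2.0
for _ in range(200):
    x = random.uniform(-4.0, 4.0)               # stimulus level for this trial
    correct = random.random() < psychometric(x, true_alpha, beta)
    for a in alphas:                            # accumulate log-likelihood per candidate
        p = psychometric(x, a, beta)
        loglik[a] += math.log(p if correct else 1.0 - p)

alpha_hat = max(loglik, key=loglik.get)         # maximum-likelihood threshold
```

    The UML procedure improves on this brute-force version by choosing each stimulus level to be maximally informative, which is what reduces the trial count.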

  20. Modeling Pb (II) adsorption from aqueous solution by ostrich bone ash using adaptive neural-based fuzzy inference system.

    PubMed

    Amiri, Mohammad J; Abedi-Koupai, Jahangir; Eslamian, Sayed S; Mousavi, Sayed F; Hasheminejad, Hasti

    2013-01-01

    To evaluate the performance of the Adaptive Neural-Based Fuzzy Inference System (ANFIS) model in estimating the efficiency of Pb(II) ion removal from aqueous solution by ostrich bone ash, a batch experiment was conducted. Five operational parameters, including adsorbent dosage (Cs), initial concentration of Pb(II) ions (Co), initial pH, temperature (T) and contact time (t), were taken as the input data, and the adsorption efficiency (AE) of bone ash as the output. Based on 31 different structures, 5 ANFIS models were tested against the measured adsorption efficiency to assess the accuracy of each model. The results showed that ANFIS5, which used all input parameters, was the most accurate (RMSE = 2.65 and R² = 0.95) and ANFIS1, which used only the contact-time input, was the worst (RMSE = 14.56 and R² = 0.46). In ranking the models, ANFIS4, ANFIS3 and ANFIS2 ranked second, third and fourth, respectively. The sensitivity analysis revealed that the estimated AE is most sensitive to the contact time, followed by pH, initial concentration of Pb(II) ions, adsorbent dosage, and temperature. The results showed that all ANFIS models overestimated the AE. In general, this study confirmed the capabilities of the ANFIS model as an effective tool for estimating AE. PMID:23383640
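
    The RMSE and R² figures used to rank the five models follow the standard definitions, shown below for reference. The observed/predicted values are made-up numbers for illustration, not the study's measurements.

```python
import math

def rmse(obs, pred):
    """Root-mean-square error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Hypothetical adsorption efficiencies (%): measured vs. model output.
obs  = [70.0, 80.0, 90.0, 95.0]
pred = [68.0, 83.0, 88.0, 96.0]
```

    A model ranking like the abstract's simply sorts candidate models by ascending RMSE (or descending R²) on the same measured data.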

  1. A full automatic device for sampling small solution volumes in photometric titration procedure based on multicommuted flow system.

    PubMed

    Borges, Sivanildo S; Vieira, Gláucia P; Reis, Boaventura F

    2007-01-01

    In this work, an automatic device for delivering titrant solution into a titration chamber is proposed, with the ability to determine the dispensed volume of solution with good precision, independent of both elapsed time and flow rate. A glass tube maintained in the vertical position was employed as a container for the titrant solution. Electronic devices were coupled to the glass tube in order to control its filling with titrant solution, as well as the stepwise delivery of solution into the titration chamber. Detection of the titration end point was performed using a photometer designed with a green LED (λ = 545 nm) and a phototransistor. The titration flow system comprised three-way solenoid valves, assembled so that the steps of loading the solution container and running the titration were carried out automatically. The device for determining the solution volume was designed using an infrared LED (λ = 930 nm) and a photodiode. When the solution volume delivered by the proposed device was within the range of 5 to 105 µL, a linear relationship (R = 0.999) between the delivered volume and the generated potential difference was achieved. The usefulness of the proposed device was demonstrated by performing a photometric titration of hydrochloric acid solution with a standardized sodium hydroxide solution, using phenolphthalein as an external indicator. The results achieved presented a relative standard deviation of 1.5%. PMID:18317510

  2. Electronic excitation spectra of molecules in solution calculated using the symmetry-adapted cluster-configuration interaction method in the polarizable continuum model with perturbative approach.

    PubMed

    Fukuda, Ryoichi; Ehara, Masahiro; Cammi, Roberto

    2014-02-14

    A perturbative approximation of the state-specific polarizable continuum model (PCM) symmetry-adapted cluster-configuration interaction (SAC-CI) method is proposed for efficient calculations of the electronic excitations and absorption spectra of molecules in solution. This first-order PCM SAC-CI method treats the solvent effects on the energies of excited states up to first order using the zeroth-order wavefunctions, thereby avoiding the costly iterative procedure of self-consistent reaction field calculations. The first-order PCM SAC-CI calculations reproduce well the results obtained by the iterative method for various types of excitations of molecules in polar and nonpolar solvents. The first-order contribution is significant for the excitation energies. The results obtained by the zeroth-order PCM SAC-CI, which uses the fixed ground-state reaction field for the excited-state calculations, deviate from the iterative results by about 0.1 eV, and in many cases the zeroth-order PCM SAC-CI cannot predict even the direction of solvent shifts in n-hexane. The first-order PCM SAC-CI is applied to the solvatochromism of (2,2′-bipyridine)tetracarbonyltungsten [W(CO)4(bpy), bpy = 2,2′-bipyridine] and bis(pentacarbonyltungsten)pyrazine [(OC)5W(pyz)W(CO)5, pyz = pyrazine]. The SAC-CI calculations reveal the detailed character of the excited states and the mechanisms of the solvent shifts. The energies of the metal-to-ligand charge transfer states are highly sensitive to the solvent. The first-order PCM SAC-CI reproduces well the observed absorption spectra of the tungsten carbonyl complexes in several solvents.

  3. Electronic excitation spectra of molecules in solution calculated using the symmetry-adapted cluster-configuration interaction method in the polarizable continuum model with perturbative approach

    NASA Astrophysics Data System (ADS)

    Fukuda, Ryoichi; Ehara, Masahiro; Cammi, Roberto

    2014-02-01

    A perturbative approximation of the state-specific polarizable continuum model (PCM) symmetry-adapted cluster-configuration interaction (SAC-CI) method is proposed for efficient calculations of the electronic excitations and absorption spectra of molecules in solution. This first-order PCM SAC-CI method treats the solvent effects on the energies of excited states up to first order using the zeroth-order wavefunctions, thereby avoiding the costly iterative procedure of self-consistent reaction field calculations. The first-order PCM SAC-CI calculations reproduce well the results obtained by the iterative method for various types of excitations of molecules in polar and nonpolar solvents. The first-order contribution is significant for the excitation energies. The results obtained by the zeroth-order PCM SAC-CI, which uses the fixed ground-state reaction field for the excited-state calculations, deviate from the iterative results by about 0.1 eV, and in many cases the zeroth-order PCM SAC-CI cannot predict even the direction of solvent shifts in n-hexane. The first-order PCM SAC-CI is applied to the solvatochromism of (2,2′-bipyridine)tetracarbonyltungsten [W(CO)4(bpy), bpy = 2,2′-bipyridine] and bis(pentacarbonyltungsten)pyrazine [(OC)5W(pyz)W(CO)5, pyz = pyrazine]. The SAC-CI calculations reveal the detailed character of the excited states and the mechanisms of the solvent shifts. The energies of the metal-to-ligand charge transfer states are highly sensitive to the solvent. The first-order PCM SAC-CI reproduces well the observed absorption spectra of the tungsten carbonyl complexes in several solvents.

  4. Electronic excitation spectra of molecules in solution calculated using the symmetry-adapted cluster-configuration interaction method in the polarizable continuum model with perturbative approach

    SciTech Connect

    Fukuda, Ryoichi Ehara, Masahiro; Cammi, Roberto

    2014-02-14

    A perturbative approximation of the state-specific polarizable continuum model (PCM) symmetry-adapted cluster-configuration interaction (SAC-CI) method is proposed for efficient calculations of the electronic excitations and absorption spectra of molecules in solution. This first-order PCM SAC-CI method treats the solvent effects on the energies of excited states up to first order using the zeroth-order wavefunctions, thereby avoiding the costly iterative procedure of self-consistent reaction field calculations. The first-order PCM SAC-CI calculations reproduce well the results obtained by the iterative method for various types of excitations of molecules in polar and nonpolar solvents. The first-order contribution is significant for the excitation energies. The results obtained by the zeroth-order PCM SAC-CI, which uses the fixed ground-state reaction field for the excited-state calculations, deviate from the iterative results by about 0.1 eV, and in many cases the zeroth-order PCM SAC-CI cannot predict even the direction of solvent shifts in n-hexane. The first-order PCM SAC-CI is applied to the solvatochromism of (2,2′-bipyridine)tetracarbonyltungsten [W(CO)4(bpy), bpy = 2,2′-bipyridine] and bis(pentacarbonyltungsten)pyrazine [(OC)5W(pyz)W(CO)5, pyz = pyrazine]. The SAC-CI calculations reveal the detailed character of the excited states and the mechanisms of the solvent shifts. The energies of the metal-to-ligand charge transfer states are highly sensitive to the solvent. The first-order PCM SAC-CI reproduces well the observed absorption spectra of the tungsten carbonyl complexes in several solvents.

  5. Measuring acuity of the approximate number system reliably and validly: the evaluation of an adaptive test procedure

    PubMed Central

    Lindskog, Marcus; Winman, Anders; Juslin, Peter; Poom, Leo

    2013-01-01

    Two studies investigated the reliability and predictive validity of commonly used measures and models of Approximate Number System (ANS) acuity. Study 1 investigated reliability by both an empirical approach and a simulation of the maximum obtainable reliability under ideal conditions. Results showed that common measures of the Weber fraction (w) are reliable only when a substantial number of trials is used, even under ideal conditions. Study 2 compared different purported measures of ANS acuity with respect to convergent and predictive validity in a within-subjects design and evaluated an adaptive test using the ZEST algorithm. Results showed that the adaptive measure can reduce the number of trials needed to reach acceptable reliability. Only direct tests with non-symbolic numerosity discriminations of stimuli presented simultaneously were related to arithmetic fluency. This correlation remained when controlling for general cognitive ability and perceptual speed. Further, the purported indirect measure of ANS acuity in terms of the Numeric Distance Effect (NDE) was not reliable and showed no sign of predictive validity. The non-symbolic NDE for reaction time was significantly related to direct w estimates in a direction contrary to that expected. Easier stimuli were found to be more reliable, but only harder (7:8 ratio) stimuli contributed to predictive validity. PMID:23964256

  6. Measuring acuity of the approximate number system reliably and validly: the evaluation of an adaptive test procedure.

    PubMed

    Lindskog, Marcus; Winman, Anders; Juslin, Peter; Poom, Leo

    2013-01-01

    Two studies investigated the reliability and predictive validity of commonly used measures and models of Approximate Number System (ANS) acuity. Study 1 investigated reliability by both an empirical approach and a simulation of the maximum obtainable reliability under ideal conditions. Results showed that common measures of the Weber fraction (w) are reliable only when a substantial number of trials is used, even under ideal conditions. Study 2 compared different purported measures of ANS acuity with respect to convergent and predictive validity in a within-subjects design and evaluated an adaptive test using the ZEST algorithm. Results showed that the adaptive measure can reduce the number of trials needed to reach acceptable reliability. Only direct tests with non-symbolic numerosity discriminations of stimuli presented simultaneously were related to arithmetic fluency. This correlation remained when controlling for general cognitive ability and perceptual speed. Further, the purported indirect measure of ANS acuity in terms of the Numeric Distance Effect (NDE) was not reliable and showed no sign of predictive validity. The non-symbolic NDE for reaction time was significantly related to direct w estimates in a direction contrary to that expected. Easier stimuli were found to be more reliable, but only harder (7:8 ratio) stimuli contributed to predictive validity. PMID:23964256

  7. Function Allocation in Complex Socio-Technical Systems: Procedure usage in nuclear power and the Context Analysis Method for Identifying Design Solutions (CAMIDS) Model

    NASA Astrophysics Data System (ADS)

    Schmitt, Kara Anne

    This research aims to show that strict adherence to procedures and rigid compliance with process in the US nuclear industry may not prevent incidents or increase safety. According to the Institute of Nuclear Power Operations, the nuclear power industry has seen a recent rise in events, and this research claims that a contributing factor to this rise is organizational and cultural, rooted in people's overreliance on procedures and policy. Understanding the proper balance of function allocation, automation and human decision-making is imperative to creating a nuclear power plant that is safe, efficient, and reliable. This research claims that new generations of operators are less engaged and think less because they have been instructed to follow procedures to a fault. According to operators, they were once expected to know the plant and its interrelations, but organizationally more importance is now placed on following procedure and policy. Literature reviews were performed, experts were questioned, and a model for context analysis was developed. The Context Analysis Method for Identifying Design Solutions (CAMIDS) Model was created, verified and validated through both peer review and application in real-world scenarios in active nuclear power plant simulators. These experiments supported the claim that strict adherence and rigid compliance to procedures may not increase safety, by studying the industry's propensity for following incorrect procedures and the cases where this directly affects the safety or security of the plant. The findings of this research indicate that younger generations of operators rely heavily on procedures, and that organizational pressure for required compliance may lead to incidents within the plant, because operators feel pressured into following rules and policy above performing the correct actions in a timely manner. The findings support computer-based procedures, efficient alarm systems, and skill-of-the-craft matrices.

  8. A solid-phase extraction procedure for the clean-up of thiram from aqueous solutions containing high concentrations of humic substances.

    PubMed

    Filipe, O M S; Vidal, M M; Duarte, A C; Santos, E B H

    2007-05-15

    A simple solid-phase extraction (SPE) procedure with an octadecyl-bonded silica phase (C18) was developed for the clean-up of the fungicide thiram from aqueous solutions containing high concentrations of humic substances, for future studies of thiram adsorption onto solid humic substances or soils. Suspensions of humic acids and soil in aqueous 0.01 M CaCl2 solution were prepared and used as samples. These extracts were spiked with thiram and immediately applied to a C18-SPE cartridge. Thiram was eluted with chloroform and its concentration measured by spectrophotometry at 283 nm. Non-spiked aqueous extracts (blanks) and a control sample of thiram in 0.01 M CaCl2 aqueous solution were also prepared and submitted to the same SPE procedure. The results show that humic substances are extensively retained by the C18 cartridge but are not eluted with CHCl3. Recoveries of 100-104% were obtained for thiram in the presence of humic substances. The SPE procedure described in this work is an efficient clean-up step that removes the interference of humic-substance absorbance and can be coupled to any spectrophotometric or HPLC-UV method usually used for thiram analysis in food extracts.

  9. Tetrahedral and Hexahedral Mesh Adaptation for CFD Problems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents two unstructured mesh adaptation schemes for problems in computational fluid dynamics. The procedures allow localized grid refinement and coarsening to efficiently capture aerodynamic flow features of interest. The first procedure is for purely tetrahedral grids; unfortunately, repeated anisotropic adaptation may significantly deteriorate the quality of the mesh. Hexahedral elements, on the other hand, can be subdivided anisotropically without mesh quality problems. Furthermore, hexahedral meshes yield more accurate solutions than their tetrahedral counterparts for the same number of edges. Both the tetrahedral and hexahedral mesh adaptation procedures use edge-based data structures that facilitate efficient subdivision by allowing individual edges to be marked for refinement or coarsening. However, for hexahedral adaptation, pyramids, prisms, and tetrahedra are used as buffer elements between refined and unrefined regions to eliminate hanging vertices. Computational results indicate that the hexahedral adaptation procedure is a viable alternative to adaptive tetrahedral schemes.

  10. Peak procedure performance in young adult and aged rats: acquisition and adaptation to a changing temporal criterion.

    PubMed

    Lejeune, H; Ferrara, A; Soffíe, M; Bronchart, M; Wearden, J H

    1998-08-01

    Twenty-four-month-old and 4-month-old rats were trained on a peak-interval procedure, where the time of reinforcement was varied twice between 20 and 40 sec. Peak times from the old rats were consistently longer than the reinforcement time, whereas those from younger animals tracked the 20- and 40-sec durations more closely. Different measures of performance suggested that the old rats were either (1) systematically misremembering the time of reinforcement or (2) using an internal clock with a substantially greater latency to start and stop timing than the younger animals. Old rats also adjusted more slowly to the first transition from 20 to 40 sec than did the younger ones, but not to later transitions. Correlations between measures derived from within-trial patterns of responding conformed in general to detailed predictions derived from scalar expectancy theory. However, some correlation values more closely resembled those derived from a study of peak-interval performance in humans and a theoretical model developed by Cheng and Westwood (1993), than those obtained in previous work with animals, for reasons that are at present unclear.

  11. The Research of Solution to the Problems of Complex Task Scheduling Based on Self-adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Li; He, Yongxiang; Xue, Haidong; Chen, Leichen

    Traditional genetic algorithms (GAs) suffer from premature convergence when dealing with scheduling problems. To adapt the crossover and mutation operators self-adaptively, this paper proposes a self-adaptive GA targeting multitask scheduling optimization under limited resources. The experimental results show that the proposed algorithm outperforms the traditional GA in its evolutionary ability to deal with complex task scheduling optimization.
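
    A common way to make operators self-adaptive, in the spirit the abstract describes, is to tie crossover and mutation rates to population fitness (Srinivas-Patnaik style): fit individuals are disturbed less, weak ones more, which counteracts premature convergence. The sketch below applies this idea to a toy one-max problem; the objective, rate formulas, and all constants are illustrative assumptions, not the paper's scheduling model.

```python
import random

random.seed(7)
N, L, GENS = 30, 20, 60               # population size, chromosome length, generations

def fitness(ind):
    return sum(ind)                   # one-max: count of 1-bits

def select(pop):
    a, b = random.choice(pop), random.choice(pop)
    return a if fitness(a) >= fitness(b) else b    # binary tournament

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for _ in range(GENS):
    fits = [fitness(p) for p in pop]
    f_max, f_avg = max(fits), sum(fits) / N
    spread = max(f_max - f_avg, 1e-9)
    nxt = []
    while len(nxt) < N:
        p1, p2 = select(pop), select(pop)
        f_pair = max(fitness(p1), fitness(p2))
        # Self-adaptive crossover: fitter pairs recombine less; weak pairs always cross.
        pc = (f_max - f_pair) / spread if f_pair >= f_avg else 1.0
        child = p1[:]
        if random.random() < min(1.0, pc):
            cut = random.randint(1, L - 1)
            child = p1[:cut] + p2[cut:]
        # Self-adaptive mutation: rate shrinks toward zero for the current best.
        f_c = fitness(child)
        pm = min(2.0 / L, (0.5 / L) * (f_max - f_c) / spread) if f_c >= f_avg else 2.0 / L
        child = [1 - g if random.random() < pm else g for g in child]
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
```

    Because the best individuals receive near-zero crossover and mutation rates, the scheme behaves like soft elitism while still exploring aggressively with below-average individuals.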

  12. On development of a finite dynamic element and solution of associated eigenproblem by a block Lanczos procedure

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.; Lawson, C. L.; Ahmad, A. R.

    1992-01-01

    The paper first presents the details of the development of a new six-noded plane triangular finite dynamic element. A block Lanczos algorithm is developed next for the accurate and efficient solution of the quadratic matrix eigenvalue problem associated with the finite dynamic element formulation. The resulting computer program fully exploits the matrix sparsity inherent in such a discretization and proves most efficient for the extraction of the usually required first few roots and vectors, including repeated ones. Most importantly, the cost of the present eigenproblem solution is shown to be comparable to that of the corresponding finite element analysis, rendering the associated dynamic element method rather attractive owing to the superior convergence characteristics of such elements presented herein.

  13. Adaptive Mesh Enrichment for the Poisson-Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Dyshlovenko, Pavel

    2001-09-01

    An adaptive mesh enrichment procedure for a finite-element solution of the two-dimensional Poisson-Boltzmann equation is described. The mesh adaptation is performed by subdividing the cells using information obtained in the previous step of the solution and next rearranging the mesh to be a Delaunay triangulation. The procedure allows the gradual improvement of the quality of the solution and adjustment of the geometry of the problem. The performance of the proposed approach is illustrated by applying it to the problem of two identical colloidal particles in a symmetric electrolyte.
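
    The enrich-then-resolve loop the abstract describes can be illustrated with a 1-D analogue: intervals whose solution jump exceeds a tolerance are bisected, concentrating nodes where the potential varies rapidly. This is a deliberate simplification (the paper works in 2-D with cell subdivision and Delaunay retriangulation); the screened-potential stand-in and tolerance are assumptions.

```python
import math

def refine(nodes, u, tol):
    """Bisect every interval whose solution jump exceeds tol."""
    new_nodes = [nodes[0]]
    for x0, x1, u0, u1 in zip(nodes, nodes[1:], u, u[1:]):
        if abs(u1 - u0) > tol:
            new_nodes.append(0.5 * (x0 + x1))   # insert the interval midpoint
        new_nodes.append(x1)
    return new_nodes

def potential(x):
    return math.exp(-5.0 * x)       # stand-in for a rapidly decaying screened potential

nodes = [i / 4.0 for i in range(5)]           # coarse uniform mesh on [0, 1]
for _ in range(3):                            # three enrichment passes
    u = [potential(x) for x in nodes]
    nodes = refine(nodes, u, tol=0.1)
```

    After three passes the mesh is dense near x = 0, where the potential varies fastest, and untouched in the flat tail, which is the qualitative behavior adaptive enrichment is meant to deliver near charged surfaces.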

  14. Symmetry-adapted cluster and symmetry-adapted cluster-configuration interaction method in the polarizable continuum model: Theory of the solvent effect on the electronic excitation of molecules in solution

    NASA Astrophysics Data System (ADS)

    Cammi, Roberto; Fukuda, Ryoichi; Ehara, Masahiro; Nakatsuji, Hiroshi

    2010-07-01

    In this paper we present the theory and implementation of the symmetry-adapted cluster (SAC) and symmetry-adapted cluster-configuration interaction (SAC-CI) method including the solvent effect, using the polarizable continuum model (PCM). The PCM and SAC/SAC-CI were consistently combined in terms of the energy-functional formalism. The excitation energies were calculated by means of the state-specific approach, whose advantage over the linear-response approach has been shown. The single-point energy calculation and its analytical energy derivatives are presented and implemented, where the free energy and its derivatives are evaluated because of the presence of solute-solvent interactions. We have applied this method to the electronic excitation of s-trans-acrolein and methylenecyclopropene in solution. The molecular geometries in the ground and excited states were optimized in vacuum and in solution, and both the vertical and adiabatic excitations were studied. The PCM-SAC/SAC-CI reproduced the known trend of the solvent effect on the vertical excitation energies, but the shift values were underestimated. The excited-state geometry in planar and nonplanar conformations was investigated. The importance of using state-specific methods for the solvent effect on the optimized excited-state geometry was shown. The mechanism of the solvent effect is discussed in terms of the Mulliken charges and the electronic dipole moment.

  15. Adaptive superposition of finite element meshes in linear and nonlinear dynamic analysis

    NASA Astrophysics Data System (ADS)

    Yue, Zhihua

    2005-11-01

    The numerical analysis of transient phenomena in solids, for instance, wave propagation and structural dynamics, is a very important and active area of study in engineering. Despite the current evolutionary state of modern computer hardware, practical analysis of large scale, nonlinear transient problems requires the use of adaptive methods where computational resources are locally allocated according to the interpolation requirements of the solution form. Adaptive analysis of transient problems involves obtaining solutions at many different time steps, each of which requires a sequence of adaptive meshes. Therefore, the execution speed of the adaptive algorithm is of paramount importance. In addition, transient problems require that the solution must be passed from one adaptive mesh to the next adaptive mesh with a bare minimum of solution-transfer error since this form of error compromises the initial conditions used for the next time step. A new adaptive finite element procedure (s-adaptive) is developed in this study for modeling transient phenomena in both linear elastic solids and nonlinear elastic solids caused by progressive damage. The adaptive procedure automatically updates the time step size and the spatial mesh discretization in transient analysis, achieving the accuracy and the efficiency requirements simultaneously. The novel feature of the s-adaptive procedure is the original use of finite element mesh superposition to produce spatial refinement in transient problems. The use of mesh superposition enables the s-adaptive procedure to completely avoid the need for cumbersome multipoint constraint algorithms and mesh generators, which makes the s-adaptive procedure extremely fast. Moreover, the use of mesh superposition enables the s-adaptive procedure to minimize the solution-transfer error. In a series of different solid mechanics problem types including 2-D and 3-D linear elastic quasi-static problems, 2-D material nonlinear quasi-static problems

  16. The importance of fixation procedures on DNA template and its suitability for solution-phase polymerase chain reaction and PCR in situ hybridization.

    PubMed

    O'Leary, J J; Browne, G; Landers, R J; Crowley, M; Healy, I B; Street, J T; Pollock, A M; Murphy, J; Johnson, M I; Lewis, F A

    1994-04-01

    Conventional solution-phase polymerase chain reaction (PCR) and in situ PCR/PCR in situ hybridization are powerful tools for retrospective analysis of fixed paraffin wax-embedded material. Amplification failure using these techniques is now encountered in some centres using archival fixed tissues. Such 'failures' may not only be due to absent target DNA sequences in the tissues, but may be a direct effect of the type of fixative, fixation time and/or fixation temperature used. The type of nucleic acid extraction procedure applied will also influence amplification results. This is particularly true with in situ PCR/PCR in situ hybridization. To examine these effects in solution-phase PCR, the beta-globin gene was amplified in 100 mg pieces of tonsillar tissue fixed in formal saline, 10% formalin, neutral buffered formaldehyde, Carnoy's, Bouin's, buffered formaldehyde sublimate, Zenker's, Helly's and glutaraldehyde, at fixation temperatures of 0 to 4 degrees C, room temperature and 37 degrees C, and for fixation periods of 6, 24, 48 and 72 hours and 1 week. DNA extraction procedures used were simple boiling and 5 days' proteinase K digestion at 37 degrees C. Amplified product was visible, primarily though variably, from tissue fixed in neutral buffered formaldehyde and Carnoy's, whereas fixation in mercuric chloride-based fixatives produced consistently negative results. Room temperature and 37 degrees C fixation appeared most conducive to yielding amplifiable DNA template. Fixation times of 24 and 48 hours in neutral buffered formaldehyde and Carnoy's again favoured amplification.(ABSTRACT TRUNCATED AT 250 WORDS)

  17. Development of a numerical procedure for mixed mode K-solutions and fatigue crack growth in FCC single crystal superalloys

    NASA Astrophysics Data System (ADS)

    Ranjan, Srikant

    2005-11-01

    Fatigue-induced failures in aircraft gas turbine and rocket engine turbopump blades and vanes are a pervasive problem. Turbine blades and vanes represent perhaps the most demanding structural applications due to the combination of high operating temperature, corrosive environment, high monotonic and cyclic stresses, long expected component lifetimes and the enormous consequence of structural failure. Single crystal nickel-base superalloy turbine blades are being utilized in rocket engine turbopumps and jet engines because of their superior creep, stress rupture, melt resistance, and thermomechanical fatigue capabilities over polycrystalline alloys. These materials have orthotropic properties making the position of the crystal lattice relative to the part geometry a significant factor in the overall analysis. Computation of stress intensity factors (SIFs) and the ability to model the fatigue crack growth rate of single crystal cracks subject to mixed-mode loading conditions are important parts of developing a mechanistically based life prediction for these complex alloys. A general numerical procedure has been developed to calculate SIFs for a crack in a general anisotropic linear elastic material subject to mixed-mode loading conditions, using three-dimensional finite element analysis (FEA). The procedure does not require an a priori assumption of plane stress or plane strain conditions. The SIFs KI, KII, and KIII are shown to be a complex function of the coupled 3D crack tip displacement field. A comprehensive study of variation of SIFs as a function of crystallographic orientation, crack length, and mode-mixity ratios is presented, based on the 3D elastic orthotropic finite element modeling of tensile and Brazilian Disc (BD) specimens in specific crystal orientations. Variation of SIF through the thickness of the specimens is also analyzed. The resolved shear stress intensity coefficient or effective SIF, Krss, can be computed as a function of crack tip SIFs and the

  18. A two-loop sparse matrix numerical integration procedure for the solution of differential/algebraic equations: Application to multibody systems

    NASA Astrophysics Data System (ADS)

    Shabana, Ahmed A.; Hussein, Bassam A.

    2009-11-01

    In this paper, a two-loop implicit sparse matrix numerical integration (TLISMNI) procedure for the solution of constrained rigid and flexible multibody system differential and algebraic equations is proposed. The proposed method ensures that the kinematic constraint equations are satisfied at the position, velocity and acceleration levels. In this method, a sparse Lagrangian augmented form of the equations of motion that ensures that the constraints are satisfied at the acceleration level is first used to solve for all the accelerations and Lagrange multipliers. The independent coordinates and velocities are then identified and integrated using HHT or Newmark formulas, expressed in this paper in terms of the independent accelerations only. The constraint equations at the position level are then used within an iterative Newton-Raphson procedure to determine the dependent coordinates. The dependent velocities are determined by solving a linear system of algebraic equations. In order to effectively exploit efficient sparse matrix techniques and have minimum storage requirements, a two-loop iterative method is proposed. Equally important, the proposed method avoids the use of numerical differentiation, which is commonly associated with the use of implicit integration methods in multibody system algorithms. Numerical examples are presented in order to demonstrate the use of the new integration procedure.
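The coordinate-partitioning steps can be illustrated on a single constraint, here a planar pendulum of length L with independent coordinate x and dependent coordinate y. This is an illustrative example, not the paper's test case, and it shows only the Newmark update and the position-level Newton-Raphson step:

```python
def newmark(q, qd, qdd_old, qdd_new, h, beta=0.25, gamma=0.5):
    """Newmark position/velocity update for an independent coordinate,
    written in terms of the accelerations only."""
    q_new = q + h * qd + h * h * ((0.5 - beta) * qdd_old + beta * qdd_new)
    qd_new = qd + h * ((1.0 - gamma) * qdd_old + gamma * qdd_new)
    return q_new, qd_new

def dependent_coordinate(x, y0, L, tol=1e-12, max_iter=50):
    """Newton-Raphson on the position-level constraint
    C(x, y) = x^2 + y^2 - L^2 = 0 for the dependent coordinate y,
    given the already-integrated independent coordinate x."""
    y = y0
    for _ in range(max_iter):
        C = x * x + y * y - L * L
        if abs(C) < tol:
            break
        y -= C / (2.0 * y)        # Jacobian dC/dy = 2y
    return y
```

The dependent velocity would then follow from one linear solve of the velocity-level constraint, as the abstract states.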

  19. The solution-adaptive numerical simulation of the 3D viscous flow in the serpentine coolant passage of a radial inflow turbine blade

    NASA Astrophysics Data System (ADS)

    Dawes, W. N.

    1992-06-01

    This paper describes the application of a solution-adaptive, three-dimensional Navier-Stokes solver to the problem of the flow in turbine internal coolant passages. First, the variation of Nusselt number in a cylindrical, multi-ribbed duct is predicted and found to be in acceptable agreement with experimental data. Then the flow is computed in the serpentine coolant passage of a radial inflow turbine, including modeling the internal baffles and pin fins. The aerodynamics of the passage, particularly that associated with the pin fins, is found to be complex. The predicted heat transfer coefficients allow zones of poor coolant penetration and potential hot spots to be identified.

  20. Discontinuous finite element solution of the radiation diffusion equation on arbitrary polygonal meshes and locally adapted quadrilateral grids

    SciTech Connect

    Ragusa, Jean C.

    2015-01-01

    In this paper, we propose a piece-wise linear discontinuous (PWLD) finite element discretization of the diffusion equation for arbitrary polygonal meshes. It is based on the standard diffusion form and uses the symmetric interior penalty technique, which yields a symmetric positive definite linear system matrix. A preconditioned conjugate gradient algorithm is employed to solve the linear system. Piece-wise linear approximations also allow a straightforward implementation of local mesh adaptation by allowing unrefined cells to be interpreted as polygons with an increased number of vertices. Several test cases, taken from the literature on the discretization of the radiation diffusion equation, are presented: random, sinusoidal, Shestakov, and Z meshes are used. The last numerical example demonstrates the application of the PWLD discretization to adaptive mesh refinement.
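A minimal Jacobi-preconditioned conjugate gradient solver of the kind described (dense storage for brevity; the paper of course works with the sparse SPD matrix produced by the symmetric interior penalty discretization) might look like:

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive definite
    system A x = b with a diagonal (Jacobi) preconditioner.  Plain-Python
    sketch; A is a dense list-of-lists matrix."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual b - A x for x = 0
    Minv = [1.0 / A[i][i] for i in range(n)]   # Jacobi preconditioner
    z = [Minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

Symmetry and positive definiteness of the system matrix, guaranteed by the interior penalty form, are exactly what make this solver applicable.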

  1. An adaptive quantum mechanics/molecular mechanics method for the infrared spectrum of water: incorporation of the quantum effect between solute and solvent.

    PubMed

    Watanabe, Hiroshi C; Banno, Misa; Sakurai, Minoru

    2016-03-14

    Quantum effects in solute-solvent interactions, such as the many-body effect and the dipole-induced dipole, are known to be critical factors influencing the infrared spectra of species in the liquid phase. For accurate spectrum evaluation, the surrounding solvent molecules, in addition to the solute of interest, should be treated using a quantum mechanical method. However, conventional quantum mechanics/molecular mechanics (QM/MM) methods cannot handle free QM solvent molecules during molecular dynamics (MD) simulation because of the diffusion problem. To deal with this problem, we have previously proposed an adaptive QM/MM "size-consistent multipartitioning (SCMP) method". In the present study, as the first application of the SCMP method, we demonstrate the reproduction of the infrared spectrum of liquid-phase water, and evaluate the quantum effect in comparison with conventional QM/MM simulations.

  2. Combined procedure of vascularized bone marrow transplantation and mesenchymal stem cells graft - an effective solution for rapid hematopoietic reconstitution and prevention of graft-versus-host disease.

    PubMed

    Coliţă, Andrei; Coliţă, Anca; Zamfirescu, Dragos; Lupu, Anca Roxana

    2012-09-01

    Hematopoietic stem cell transplantation (HSCT) is a standard therapeutic option for several diseases. The success of the procedure depends on the quality and quantity of transplanted cells and on the stromal capacity to create an optimal microenvironment that supports survival and development of the hematopoietic elements. Conditions associated with stromal dysfunction lead to slower/insufficient engraftment and/or immune reconstitution. A possible solution to this problem is to realize a combined graft of hematopoietic stem cells along with the medullary stroma in the form of a vascularized bone marrow transplant (VBMT). Another major drawback of HSCT is the risk of graft-versus-host disease (GVHD). Recently, mesenchymal stromal cells (MSC) have demonstrated the capacity to down-regulate alloreactive T-cells and to enhance engraftment. Cotransplantation of MSC could be a therapeutic option for better engraftment and GVHD prevention. PMID:22677297

  3. Multiple solution of systems of linear algebraic equations by an iterative method with the adaptive recalculation of the preconditioner

    NASA Astrophysics Data System (ADS)

    Akhunov, R. R.; Gazizov, T. R.; Kuksenko, S. P.

    2016-08-01

    The mean time needed to solve a series of systems of linear algebraic equations (SLAEs) as a function of the number of SLAEs is investigated. It is proved that this function has an extremum point. An algorithm is developed for adaptively determining when the preconditioner matrix should be recalculated as a series of SLAEs is solved. A numerical experiment is carried out in which the proposed algorithm is used to repeatedly solve series of SLAEs when computing 100 capacitance matrices for two different structures: a microstrip line with varying thickness and a modal filter with a varying gap between its conductors. The resulting speedups turned out to be close to optimal.
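One simple way to realize such adaptive recalculation is sketched below, with hypothetical `solve`/`factorize` callbacks and an illustrative iteration-growth threshold rather than the paper's optimality criterion:

```python
def solve_series(matrices, solve, factorize, growth=2.0):
    """Solve a series of related linear systems, reusing one preconditioner
    across systems and recomputing it only when the iteration count grows
    past `growth` times the count observed right after the last
    recalculation.  `factorize(A)` builds a preconditioner; `solve(A, M)`
    returns (solution, iterations).  Illustrative heuristic only."""
    M = factorize(matrices[0])
    base_iters = None
    solutions = []
    for A in matrices:
        x, iters = solve(A, M)
        if base_iters is None:
            base_iters = iters
        elif iters > growth * base_iters:      # preconditioner has gone stale
            M = factorize(A)
            x, iters = solve(A, M)
            base_iters = iters
        solutions.append(x)
    return solutions
```

The trade-off mirrors the extremum the paper analyzes: refactorizing too often wastes setup time, too rarely inflates iteration counts.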

  4. Evaluation of total effective dose due to certain environmentally placed naturally occurring radioactive materials using a procedural adaptation of RESRAD code.

    PubMed

    Beauvais, Z S; Thompson, K H; Kearfott, K J

    2009-07-01

    Due to a recent upward trend in the price of uranium and subsequent increased interest in uranium mining, accurate modeling of baseline dose from environmental sources of radioactivity is of increasing interest. The residual radioactivity model and code (RESRAD) is a program used to model environmental movement and calculate the dose due to inhalation and ingestion of, and exposure to, radioactive materials following a placement. This paper presents a novel use of RESRAD for the calculation of dose from non-enhanced, or ancient, naturally occurring radioactive material (NORM). In order to use RESRAD to calculate the total effective dose (TED) due to ancient NORM, a procedural adaptation was developed to negate the effects of time-progressive distribution of radioactive materials. A dose due to United States' average concentrations of uranium, actinium, and thorium series radionuclides was then calculated. For adults exposed in a residential setting and assumed to eat significant amounts of food grown in NORM-concentrated areas, the annual dose due to national average NORM concentrations was 0.935 mSv y⁻¹. A set of environmental dose factors was calculated for simple estimation of dose from uranium, thorium, and actinium series radionuclides for various age groups and exposure scenarios as a function of elemental uranium and thorium activity concentrations in groundwater and soil. The values of these factors for uranium were lowest for an adult exposed in an industrial setting: 0.00476 µSv kg Bq⁻¹ y⁻¹ for soil and 0.00596 µSv m³ Bq⁻¹ y⁻¹ for water (assuming a 1:1 ²³⁴U:²³⁸U activity ratio in water). The uranium factors were highest for infants exposed in a residential setting and assumed to ingest food grown onsite: 34.8 µSv kg Bq⁻¹ y⁻¹ in soil and 13.0 µSv m³ Bq⁻¹ y⁻¹ in water. PMID:19509509
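Because the dose estimate is linear in the tabulated factors, a back-of-the-envelope calculation is straightforward. The constants below are the adult industrial-setting uranium values quoted in the abstract; the function name is ours, and the scenario assumptions (occupancy, 1:1 234U:238U activity ratio) are as stated there:

```python
# Environmental dose factors for uranium-series radionuclides quoted in the
# abstract (adult, industrial exposure scenario).
F_SOIL_ADULT_INDUSTRIAL = 0.00476    # microSv per (Bq/kg in soil) per year
F_WATER_ADULT_INDUSTRIAL = 0.00596   # microSv per (Bq/m^3 in water) per year

def annual_dose_uSv(c_soil_bq_per_kg, c_water_bq_per_m3,
                    f_soil=F_SOIL_ADULT_INDUSTRIAL,
                    f_water=F_WATER_ADULT_INDUSTRIAL):
    """Annual effective dose (microSv/y) from uranium activity concentrations
    in soil and groundwater, as a linear combination of the dose factors."""
    return f_soil * c_soil_bq_per_kg + f_water * c_water_bq_per_m3
```

Swapping in the infant residential factors (34.8 and 13.0) from the same table gives the upper-bound scenario.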

  5. The impact of head movements on EEG and contact impedance: an adaptive filtering solution for motion artifact reduction.

    PubMed

    Mihajlovic, Vojkan; Patki, Shrishail; Grundlehner, Bernard

    2014-01-01

    Designing and developing a comfortable and convenient EEG system for daily usage that can provide a reliable and robust EEG signal encompasses a number of challenges. Among them, the most ambitious is the reduction of artifacts due to body movements. This paper studies the effect of head movement artifacts on the EEG signal and on the dry electrode-tissue impedance (ETI), monitored continuously using imec's wireless EEG headset. We show that motion artifacts have a huge impact on the EEG spectral content in the frequency range below 20 Hz. Coherence and spectral analysis revealed that ETI is not capable of describing disturbances at very low frequencies (below 2 Hz). Therefore, we devised a motion artifact reduction (MAR) method that uses a combination of band-pass filtering and multi-channel adaptive filtering (AF), suitable for real-time MAR. This method was capable of substantially reducing artifacts produced by head movements.
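The adaptive-filtering stage can be sketched as a basic LMS noise canceller driven by a motion-correlated reference channel (such as the ETI measurement). This single-channel, single-stage sketch is an illustration of the AF principle, not imec's implementation, and omits the band-pass stage:

```python
def lms_cancel(signal, reference, n_taps=4, mu=0.05):
    """LMS adaptive noise canceller: estimate the artifact component of
    `signal` from a correlated `reference` channel and subtract it,
    returning the cleaned signal sample by sample."""
    w = [0.0] * n_taps
    out = []
    for n in range(len(signal)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        est = sum(wk * xk for wk, xk in zip(w, x))    # artifact estimate
        e = signal[n] - est                           # cleaned sample
        w = [wk + 2.0 * mu * e * xk for wk, xk in zip(w, x)]  # LMS update
        out.append(e)
    return out
```

With a reference that actually tracks head motion, the filter converges so that the residual retains the EEG while the movement-correlated component is removed.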

  6. Electronic excitation of molecules in solution calculated using the symmetry-adapted cluster–configuration interaction method in the polarizable continuum model

    SciTech Connect

    Fukuda, Ryoichi; Ehara, Masahiro

    2015-12-31

    The effects of the solvent environment are specific to the electronic states; therefore, a computational scheme for solvent effects consistent with the electronic states is necessary to discuss electronic excitation of molecules in solution. The PCM (polarizable continuum model) SAC (symmetry-adapted cluster) and SAC-CI (configuration interaction) methods are developed for such purposes. The PCM SAC-CI adopts the state-specific (SS) solvation scheme, where solvent effects are self-consistently considered for every ground and excited state. For efficient computations of many excited states, we develop a perturbative approximation for the PCM SAC-CI method, called the corrected linear response (cLR) scheme. Our test calculations show that cLR PCM SAC-CI is a very good approximation to the SS PCM SAC-CI method for polar and nonpolar solvents.

  7. A framework for constructing adaptive and reconfigurable systems

    SciTech Connect

    Poirot, Pierre-Etienne; Nogiec, Jerzy; Ren, Shangping (IIT, Chicago)

    2007-05-01

    This paper presents a software approach to augmenting existing real-time systems with self-adaptation capabilities. In this approach, based on the control loop paradigm commonly used in industrial control, self-adaptation is decomposed into observing system events, inferring necessary changes based on a system's functional model, and activating appropriate adaptation procedures. The solution adopts an architectural decomposition that emphasizes independence and separation of concerns. It encapsulates observation, modeling and correction into separate modules to allow for easier customization of the adaptive behavior and flexibility in selecting implementation technologies.
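The observe/infer/activate decomposition can be sketched as a tiny control loop with the three concerns in separate, swappable components. All names here are illustrative, not the framework's actual API:

```python
class AdaptiveLoop:
    """Minimal control-loop skeleton in the spirit of the paper's
    decomposition: an observer reports system events, a model infers the
    necessary changes, and correctors apply the adaptation procedures."""

    def __init__(self, observe, infer, correctors):
        self.observe = observe          # () -> list of event names
        self.infer = infer              # events -> list of action names
        self.correctors = correctors    # action name -> callable

    def step(self):
        """One pass of the loop: observe, infer, activate."""
        events = self.observe()
        actions = self.infer(events)
        for name in actions:
            self.correctors[name]()     # activate adaptation procedure
        return actions
```

Because the three roles are encapsulated separately, any one of them (for example the inference model) can be customized without touching the others, which is the separation of concerns the paper emphasizes.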

  8. Halotolerance in Methanosarcina spp.: Role of Nε-Acetyl-β-Lysine, α-Glutamate, Glycine Betaine, and K⁺ as Compatible Solutes for Osmotic Adaptation

    PubMed Central

    Sowers, K. R.; Gunsalus, R. P.

    1995-01-01

    The methanogenic Archaea, like the Bacteria and Eucarya, possess several osmoregulatory strategies that enable them to adapt to osmotic changes in their environment. The physiological responses of Methanosarcina species to different osmotic pressures were studied in extracellular osmolalities ranging from 0.3 to 2.0 osmol/kg. Regardless of the isolation source, the maximum rate of growth for species from freshwater, sewage, and marine sources occurred in extracellular osmolalities between 0.62 and 1.0 osmol/kg and decreased to minimal detectable growth as the solute concentration approached 2.0 osmol/kg. The steady-state water-accessible volume of Methanosarcina thermophila showed a disproportionate decrease of 30% between 0.3 and 0.6 osmol/kg and then a linear decrease of 22% as the solute concentration in the media increased from 0.6 to 2.0 osmol/kg. The total intracellular K⁺ ion concentration in M. thermophila increased from 0.12 to 0.5 mol/kg as the medium osmolality was raised from 0.3 to 1.0 osmol/kg and then remained above 0.4 mol/kg as extracellular osmolality was increased to 2.0 osmol/kg. Concurrent with K⁺ accumulation, M. thermophila synthesized and accumulated α-glutamate as the predominant intracellular osmoprotectant in media containing up to 1.0 osmol of solute per kg. At medium osmolalities greater than 1.0 osmol/kg, the α-glutamate concentration leveled off and the zwitterionic β-amino acid Nε-acetyl-β-lysine was synthesized, accumulating to an intracellular concentration exceeding 1.1 osmol/kg at an osmolality of 2.0 osmol/kg. When glycine betaine was added to culture medium, it caused partial repression of de novo α-glutamate and Nε-acetyl-β-lysine synthesis and was accumulated by the cell as the predominant compatible solute. The distribution and concentration of compatible solutes in eight strains representing five Methanosarcina spp. were similar to those found in M

  9. Adapting protein solubility by glycosylation. N-glycosylation mutants of Coprinus cinereus peroxidase in salt and organic solutions.

    PubMed

    Tams, J W; Vind, J; Welinder, K G

    1999-07-13

    Protein solubility is a fundamental parameter in biology and biotechnology. In the present study we have constructed and analyzed five mutants of Coprinus cinereus peroxidase (CIP) with 0, 1, 2, 4 and 6 N-glycosylation sites. All mutants contain Manₓ(GlcNAc)₂ glycans. The peroxidase activity was the same for wild-type CIP and all the glycosylation mutants when measured with the large substrate 2,2'-azino-bis(3-ethylbenzthiazoline-6-sulfonic acid). The solubility of the five CIP mutants showed a linear dependence on the number of carbohydrate residues attached to the protein in buffered solutions of both ammonium sulfate (AMS) and acetone, increasing in AMS and decreasing in acetone. Moreover, the change in free energy of solvation appears to be constant, though with opposite signs in these solvents, giving ΔΔG°(sol) = -0.32 ± 0.05 kJ/mol per carbohydrate residue in 2.0 M AMS, a value previously obtained by comparing ordinary and deglycosylated horseradish peroxidase, and 0.37 ± 0.10 kJ/mol in 60 v/v% acetone.
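Interpreting the quoted free-energy change per residue as a transfer free energy, the implied multiplicative change in solubility follows from ΔΔG° = -RT ln(S₂/S₁). A quick check, assuming T = 298 K (the temperature is our assumption, not stated in the abstract):

```python
import math

R = 8.314e-3   # gas constant, kJ/(mol K)
T = 298.0      # K, assumed room temperature

def solubility_ratio(ddG_kJ_per_mol):
    """Solubility ratio per added carbohydrate residue implied by the
    solvation free-energy change: S2/S1 = exp(-ddG / (R T))."""
    return math.exp(-ddG_kJ_per_mol / (R * T))

# -0.32 kJ/mol per residue in 2.0 M ammonium sulfate -> roughly 14% more soluble
# +0.37 kJ/mol per residue in 60 v/v% acetone        -> roughly 14% less soluble
```

The opposite signs in the two solvents thus translate into nearly symmetric, compounding percentage changes per glycan residue.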

  10. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static systems; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and the perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  11. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface-fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.

  12. Computationally efficient solution to the Cahn-Hilliard equation: Adaptive implicit time schemes, mesh sensitivity analysis and the 3D isoperimetric problem

    NASA Astrophysics Data System (ADS)

    Wodo, Olga; Ganapathysubramanian, Baskar

    2011-07-01

    We present an efficient numerical framework for analyzing spinodal decomposition described by the Cahn-Hilliard equation. We focus on the analysis of various implicit time schemes for two- and three-dimensional problems. We demonstrate that significant computational gains can be obtained by applying embedded, higher-order Runge-Kutta methods in a time-adaptive setting. This allows accessing time scales that vary by five orders of magnitude. In addition, we also formulate a set of test problems that isolate each of the sub-processes involved in spinodal decomposition: interface creation and bulk phase coarsening. We analyze the error fluctuations using these test problems on the split form of the Cahn-Hilliard equation solved using the finite element method with basis functions of different orders. Any scheme that ensures at least four elements per interface satisfactorily captures both sub-processes. Our findings show that linear basis functions have superior error-to-cost properties. This strategy - coupled with a domain decomposition based parallel implementation - lets us notably improve the efficiency of a numerical Cahn-Hilliard solver and opens new avenues for its practical applications, especially when three-dimensional problems are considered. We use this framework to address the isoperimetric problem of identifying local solutions in the periodic cube in three dimensions. The framework is able to generate all five hypothesized candidates for the local solution of the periodic isoperimetric problem in 3D: sphere, cylinder, lamella, doubly periodic surface with genus two (Lawson surface) and triply periodic minimal surface (P Schwarz surface).
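The time-adaptive mechanism can be sketched with the simplest embedded pair (Euler inside Heun). The paper uses higher-order embedded Runge-Kutta pairs, but the error-controlled step-size logic that lets the integrator span widely varying time scales is the same; the safety factor 0.9 is a common convention, not the paper's choice:

```python
def integrate_adaptive(f, y0, t0, t1, tol=1e-6, h=1e-2):
    """Adaptive explicit time stepping for dy/dt = f(t, y) with an embedded
    pair: the gap between the 1st-order (Euler) and 2nd-order (Heun)
    solutions estimates the local error and drives the step size."""
    t, y = t0, y0
    while t < t1:
        h = min(h, t1 - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1                    # embedded Euler (1st order)
        y_high = y + 0.5 * h * (k1 + k2)      # Heun (2nd order)
        err = abs(y_high - y_low)             # local error estimate
        if err <= tol or h < 1e-12:
            t, y = t + h, y_high              # accept the higher-order step
        h *= 0.9 * (tol / err) ** 0.5 if err > 0 else 2.0
    return y
```

Rejected steps simply shrink `h` and retry, so fast transients (interface creation) are resolved with small steps while slow coarsening proceeds with large ones.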

  13. Adaptive management

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  14. A comparison of the structures of lean and rich axisymmetric laminar Bunsen flames: application of local rectangular refinement solution-adaptive gridding

    NASA Astrophysics Data System (ADS)

    Bennett, Beth Anne V.; Fielding, Joseph; Mauro, Richard J.; Long, Marshall B.; Smooke, Mitchell D.

    1999-12-01

    Axisymmetric laminar methane-air Bunsen flames are computed for two equivalence ratios: lean (Φ = 0.776), in which the traditional Bunsen cone forms above the burner; and rich (Φ = 1.243), in which the premixed Bunsen cone is accompanied by a diffusion flame halo located further downstream. Because the extremely large gradients at premixed flame fronts greatly exceed those in diffusion flames, their resolution requires a more sophisticated adaptive numerical method than those ordinarily applied to diffusion flames. The local rectangular refinement (LRR) solution-adaptive gridding method produces robust unstructured rectangular grids, utilizes multiple-scale finite-difference discretizations, and incorporates Newton's method to solve elliptic partial differential equation systems simultaneously. The LRR method is applied to the vorticity-velocity formulation of the fully elliptic governing equations, in conjunction with detailed chemistry, multicomponent transport and an optically thin radiation model. The computed lean flame is lifted above the burner, and this liftoff is verified experimentally. For both lean and rich flames, grid spacing greatly influences the Bunsen cone's position, which only stabilizes with adequate refinement. In the rich configuration, the oxygen-free region above the Bunsen cone inhibits the complete decay of CH4, thus indirectly initiating the diffusion flame halo where CO oxidizes to CO2. In general, the results computed by the LRR method agree quite well with those obtained on equivalently refined conventional grids, yet the former require less than half the computational resources.

  15. A Comparison of Item Selection Procedures Using Different Ability Estimation Methods in Computerized Adaptive Testing Based on the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Ho, Tsung-Han

    2010-01-01

    Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…

  16. Mesh adaptation on the sphere using optimal transport and the numerical solution of a Monge-Ampère type equation

    NASA Astrophysics Data System (ADS)

    Weller, Hilary; Browne, Philip; Budd, Chris; Cullen, Mike

    2016-03-01

    An equation of Monge-Ampère type has, for the first time, been solved numerically on the surface of the sphere in order to generate optimally transported (OT) meshes, equidistributed with respect to a monitor function. Optimal transport generates meshes that keep the same connectivity as the original mesh, making them suitable for r-adaptive simulations, in which the equations of motion can be solved in a moving frame of reference in order to avoid mapping the solution between old and new meshes and to avoid load balancing problems on parallel computers. The semi-implicit solution of the Monge-Ampère type equation involves a new linearisation of the Hessian term, and exponential maps are used to map from old to new meshes on the sphere. The determinant of the Hessian is evaluated as the change in volume between old and new mesh cells, rather than using numerical approximations to the gradients. OT meshes are generated to compare with centroidal Voronoi tessellations on the sphere and are found to have advantages and disadvantages; OT equidistribution is more accurate, the number of iterations to convergence is independent of the mesh size, face skewness is reduced and the connectivity does not change. However, anisotropy is higher and the OT meshes are non-orthogonal. It is shown that optimal transport on the sphere leads to meshes that do not tangle. However, tangling can be introduced by numerical errors in calculating the gradient of the mesh potential. Methods for alleviating this problem are explored. Finally, OT meshes are generated using observed precipitation as a monitor function, in order to demonstrate the potential power of the technique.

  17. Dynamic Load Balancing for Adaptive Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Saini, Subhash (Technical Monitor)

    1998-01-01

    Dynamic mesh adaptation on unstructured grids is a powerful tool for computing unsteady three-dimensional problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture phenomena of interest, such procedures make standard computational methods more cost effective. Highly refined meshes are required to accurately capture shock waves, contact discontinuities, vortices, and shear layers in fluid flow problems. Adaptive meshes have also proved to be useful in several other areas of computational science and engineering like computer vision and graphics, semiconductor device modeling, and structural mechanics. Local mesh adaptation provides the opportunity to obtain solutions that are comparable to those obtained on globally-refined grids but at a much lower cost. Additional information is contained in the original extended abstract.

  18. Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
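    The undivided second-difference sensor mentioned above is simple to sketch in one dimension: flag points where the unscaled second difference of a solution variable exceeds a threshold. This is an illustrative stand-in, not OVERFLOW's implementation, and the threshold value is an arbitrary assumption.

```python
def flag_cells(u, threshold):
    """Flag interior points whose undivided second difference exceeds threshold.
    (Undivided: no division by dx**2, so the sensor responds to local solution
    jumps rather than to smooth curvature on a fine grid.)"""
    return [i for i in range(1, len(u) - 1)
            if abs(u[i + 1] - 2.0 * u[i] + u[i - 1]) > threshold]

# A flat region meeting a linear ramp: only the kink at index 5 is flagged.
u = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0]
print(flag_cells(u, 0.1))  # [5]
```

In the actual adaption process, flagged regions would seed the next finer level of Cartesian grids.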

  19. Self-adaptive Solution Strategies

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1984-01-01

    The development of enhancements to current-generation nonlinear finite element algorithms of the incremental Newton-Raphson type is overviewed. Work is introduced on alternative formulations that lead to improved algorithms which avoid the need for global-level updating and inversion. To quantify the enhanced Newton-Raphson scheme and the new alternative algorithm, results from several benchmarks are presented.

  20. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.

  1. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    PubMed Central

    Cao, Youfang; Liang, Jie

    2013-01-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively.
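    For concreteness, the birth-death process named above can be simulated with a plain (unbiased) Gillespie SSA, the baseline on which importance-sampling schemes such as ABSIS impose their reaction-selection biases. The rate constants and horizon below are assumed for illustration.

```python
import random

def ssa_birth_death(k_birth, k_death, x0, t_end, rng):
    """Plain Gillespie SSA for the birth-death process: X -> X+1 at rate
    k_birth and X -> X-1 at rate k_death * X. ABSIS-style methods bias these
    propensities to reach rare states; this is the unbiased baseline."""
    t, x = 0.0, x0
    while t < t_end:
        a1 = k_birth
        a2 = k_death * x
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)                    # time to next reaction
        if t >= t_end:
            break
        x += 1 if rng.random() * a0 < a1 else -1    # pick reaction by propensity
    return x

rng = random.Random(0)
# The stationary distribution is Poisson with mean k_birth / k_death = 20.
samples = [ssa_birth_death(10.0, 0.5, 0, 50.0, rng) for _ in range(200)]
print(sum(samples) / len(samples))
```

Estimating the probability of a state far above the mean with this unbiased sampler requires enormous numbers of runs, which is precisely the problem the adaptive biasing addresses.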

  2. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.
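    Stripped to its core, the EM adaptation idea is an EM update that pulls a generic mixture prior toward new data. The sketch below runs one EM iteration on a one-dimensional Gaussian mixture with a shared fixed variance; it omits the paper's Bayesian hyper-prior regularisation and image-patch machinery, and all numbers are illustrative.

```python
import math

def em_step(data, weights, means, var):
    """One EM iteration adapting Gaussian mixture weights and means to new
    data, with a shared fixed variance (a simplified stand-in for the
    hyper-prior-regularised adaptation described above)."""
    K = len(means)
    # E-step: responsibilities of each component for each point
    # (the shared normalising constant cancels, so it is omitted).
    resp = []
    for x in data:
        p = [w * math.exp(-(x - m) ** 2 / (2 * var)) for w, m in zip(weights, means)]
        s = sum(p)
        resp.append([pi / s for pi in p])
    # M-step: re-estimate weights and means from the responsibilities.
    n_k = [sum(r[k] for r in resp) for k in range(K)]
    new_w = [n / len(data) for n in n_k]
    new_m = [sum(r[k] * x for r, x in zip(resp, data)) / n_k[k] for k in range(K)]
    return new_w, new_m

# A generic prior centred at 0 and 5; data clustered near 1 and 4 pulls the means.
data = [0.9, 1.1, 1.0, 3.9, 4.1, 4.0]
w, m = em_step(data, [0.5, 0.5], [0.0, 5.0], 1.0)
print(w, m)
```

In the paper's setting the "data" are patches of the noisy image itself, so the adapted prior becomes image-specific.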

  3. CTEPP STANDARD OPERATING PROCEDURE FOR PREPARATION OF SURROGATE RECOVERY STANDARD AND INTERNAL STANDARD SOLUTIONS FOR POLAR TARGET ANALYTES (SOP-5.26)

    EPA Science Inventory

    This SOP describes the method used for preparing surrogate recovery standard and internal standard solutions for the analysis of polar target analytes. It also describes the method for preparing calibration standard solutions for polar analytes used for gas chromatography/mass sp...

  4. AEST: Adaptive Eigenvalue Stability Code

    NASA Astrophysics Data System (ADS)

    Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.

    2002-11-01

    An adaptive eigenvalue linear stability code is developed. The aim is, on one hand, to include non-ideal MHD effects in the global MHD stability calculation for both low and high n modes and, on the other hand, to resolve the numerical difficulty involving the MHD singularity on rational surfaces at marginal stability. Our code follows, in part, the philosophy of DCON by abandoning relaxation methods based on radial finite element expansion in favor of an efficient shooting procedure with adaptive gridding. The δW criterion is replaced by the shooting procedure and a subsequent matrix eigenvalue problem. Since the technique of expanding a general solution into a summation of independent solutions is employed, the rank of the matrices involved is only a few hundred. This makes it easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as plasma rotation effects, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue ω problem, as in the GS2 code, will be employed in the future. A comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be employed to study, as an application, transport barrier physics in tokamak discharges.

  5. Adaptive antennas

    NASA Astrophysics Data System (ADS)

    Barton, P.

    1987-04-01

    The basic principles of adaptive antennas are outlined in terms of the Wiener-Hopf expression for maximizing signal to noise ratio in an arbitrary noise environment; the analogy with generalized matched filter theory provides a useful aid to understanding. For many applications, there is insufficient information to achieve the above solution and thus non-optimum constrained null steering algorithms are also described, together with a summary of methods for preventing wanted signals from being nulled by the adaptive system. The three generic approaches to adaptive weight control are discussed: correlation steepest descent, weight perturbation, and direct solutions based on sample matrix inversion. The tradeoffs between hardware complexity and performance in terms of null depth and convergence rate are outlined. The sidelobe canceller technique is described. Performance variation with jammer power and angular distribution is summarized and the key performance limitations identified. The configuration and performance characteristics of both multiple beam and phase scan array antennas are covered, with a brief discussion of performance factors.
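    The direct-solution approach via sample matrix inversion can be sketched in a few lines: estimate the covariance R from array snapshots, then form the weights proportional to R⁻¹s for steering vector s. The array geometry, jammer scenario, and unity-gain normalisation below are illustrative assumptions, not a specific system from the text.

```python
import numpy as np

def smi_weights(snapshots, steering):
    """Sample-matrix-inversion beamformer: w = R^-1 s / (s^H R^-1 s), with R
    estimated from array snapshots (rows = elements, columns = time samples)."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    Rinv_s = np.linalg.solve(R, steering)
    return Rinv_s / (steering.conj() @ Rinv_s)

rng = np.random.default_rng(0)
n_el, n_snap = 4, 500

def steer(theta):
    # Steering vector for a half-wavelength-spaced line array of n_el elements.
    k = np.arange(n_el)
    return np.exp(1j * np.pi * k * np.sin(theta))

s_want, s_jam = steer(0.0), steer(0.5)
# Snapshots: a strong jammer plus unit-power noise (no desired signal present).
jam = 10.0 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
noise = (rng.standard_normal((n_el, n_snap))
         + 1j * rng.standard_normal((n_el, n_snap))) / np.sqrt(2)
X = np.outer(s_jam, jam) + noise
w = smi_weights(X, s_want)
print(abs(w.conj() @ s_want), abs(w.conj() @ s_jam))
```

The response toward the wanted direction is held at unity while the jammer direction is nulled, illustrating the null-depth behaviour the abstract discusses.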

  6. Genetic algorithms in adaptive fuzzy control

    NASA Technical Reports Server (NTRS)

    Karr, C. Lucas; Harper, Tony R.

    1992-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust fuzzy membership functions in response to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific computer-simulated chemical system is used to demonstrate the ideas presented.
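    A minimal real-coded GA conveys the search procedure described above. Here the chromosome is a single scalar standing in for one fuzzy membership-function parameter, and the fitness function is a made-up performance score; the Bureau of Mines systems encode full rule bases rather than one number.

```python
import random

def ga_optimize(fitness, lo, hi, pop_size=20, generations=40, rng=None):
    """Minimal real-coded GA: keep the fitter half, make children by blend
    crossover plus Gaussian mutation, and iterate. In an adaptive FLC the
    chromosome would encode fuzzy membership function parameters."""
    rng = rng or random.Random()
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = 0.5 * (a + b)                       # blend crossover
            child += rng.gauss(0.0, 0.05 * (hi - lo))   # mutation
            children.append(min(hi, max(lo, child)))    # clip to bounds
        pop = elite + children
    return max(pop, key=fitness)

# Recover the membership-width value that maximises a peaked (made-up) score.
best = ga_optimize(lambda w: -(w - 0.3) ** 2, 0.0, 1.0, rng=random.Random(1))
print(best)
```

In the adaptive control loop described above, the fitness would come from the analysis element's measure of controller performance on the changed environment.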

  7. Adaptation to hot environmental conditions: an exploration of the performance basis, procedures and future directions to optimise opportunities for elite athletes.

    PubMed

    Guy, Joshua H; Deakin, Glen B; Edwards, Andrew M; Miller, Catherine M; Pyne, David B

    2015-03-01

    Extreme environmental conditions present athletes with diverse challenges; however, not all sporting events are limited by thermoregulatory parameters. The purpose of this leading article is to identify specific instances where hot environmental conditions either compromise or augment performance and, where heat acclimation appears justified, evaluate the effectiveness of pre-event acclimation processes. To identify events likely to be receptive to pre-competition heat adaptation protocols, we clustered and quantified the magnitude of difference in performance of elite athletes competing in International Association of Athletics Federations (IAAF) World Championships (1999-2011) in hot environments (>25 °C) with those in cooler temperate conditions (<25 °C). Athletes in endurance events performed worse in hot conditions (~3 % reduction in performance, Cohen's d > 0.8; large impairment), while in contrast, performance in short-duration sprint events was augmented in the heat compared with temperate conditions (~1 % improvement, Cohen's d > 0.8; large performance gain). As endurance events were identified as compromised by the heat, we evaluated common short-term heat acclimation (≤7 days, STHA) and medium-term heat acclimation (8-14 days, MTHA) protocols. This process identified beneficial effects of heat acclimation on performance using both STHA (2.4 ± 3.5 %) and MTHA protocols (10.2 ± 14.0 %). These effects were differentially greater for MTHA, which also demonstrated larger reductions in both endpoint exercise heart rate (STHA: -3.5 ± 1.8 % vs MTHA: -7.0 ± 1.9 %) and endpoint core temperature (STHA: -0.7 ± 0.7 % vs -0.8 ± 0.3 %). It appears that worthwhile acclimation is achievable for endurance athletes via both short- and medium-length protocols but more is gained using MTHA. Conversely, it is also conceivable that heat acclimation may be counterproductive for sprinters. As high-performance athletes are often time-poor, shorter duration protocols may
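    The effect-size thresholds quoted above follow the usual Cohen's d conventions (d > 0.8 read as "large"). A sketch of the pooled-standard-deviation variant, on made-up race times; the abstract does not state which variant of d was used.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation (one common convention)."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Illustrative (made-up) 10 km times in minutes: hot vs temperate conditions.
hot = [31.2, 30.8, 31.5, 31.0, 30.9]
cool = [30.1, 30.3, 29.9, 30.2, 30.0]
print(cohens_d(hot, cool))  # > 0.8 would be read as a 'large' impairment
```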

  8. Heparin and penicillamine-hypotaurine-epinephrine (PHE) solution during bovine in vitro fertilization procedures impair the quality of spermatozoa but improve normal oocyte fecundation and early embryonic development.

    PubMed

    Gonçalves, F S; Barretto, L S S; Arruda, R P; Perri, S H V; Mingoti, G Z

    2014-01-01

    The presence of heparin and a mixture of penicillamine, hypotaurine, and epinephrine (PHE) solution in the in vitro fertilization (IVF) media seem to be a prerequisite when bovine spermatozoa are capacitated in vitro, in order to stimulate sperm motility and acrosome reaction. The present study was designed to determine the effect of the addition of heparin and PHE during IVF on the quality and penetrability of spermatozoa into bovine oocytes and on subsequent embryo development. Sperm quality, evaluated by the integrity of plasma and acrosomal membranes and mitochondrial function, was diminished (P<0.05) in the presence of heparin and PHE. Oocyte penetration and normal pronuclear formation rates, as well as the percentage of zygotes presenting more than two pronuclei, was higher (P<0.05) in the presence of heparin and PHE. No differences were observed in cleavage rates between treatment and control (P>0.05). However, the developmental rate to the blastocyst stage was increased in the presence of heparin and PHE (P>0.05). The quality of embryos that reached the blastocyst stage was evaluated by counting the inner cell mass (ICM) and trophectoderm (TE) cell numbers and total number of cells; the percentage of ICM and TE cells was unaffected (P>0.05) in the presence of heparin and PHE (P<0.05). In conclusion, this study demonstrated that while the supplementation of IVF media with heparin and PHE solution impairs spermatozoa quality, it plays an important role in sperm capacitation, improving pronuclear formation, and early embryonic development. PMID:23949783

  10. Adaptive mesh generation for viscous flows using Delaunay triangulation

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1990-01-01

    A method for generating an unstructured triangular mesh in two dimensions, suitable for computing high Reynolds number flows over arbitrary configurations is presented. The method is based on a Delaunay triangulation, which is performed in a locally stretched space, in order to obtain very high aspect ratio triangles in the boundary layer and the wake regions. It is shown how the method can be coupled with an unstructured Navier-Stokes solver to produce a solution adaptive mesh generation procedure for viscous flows.
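    The locally stretched space changes which triangles pass Delaunay's empty-circumcircle test, which is the mechanism behind the high-aspect-ratio boundary-layer triangles described above. The point set and stretching factor below are illustrative assumptions; the predicate itself is the standard incircle determinant.

```python
def incircle(a, b, c, d):
    """Positive iff d lies strictly inside the circumcircle of the
    counter-clockwise triangle abc (the empty-circumcircle test that
    defines a Delaunay triangulation)."""
    rows = [(p[0] - d[0], p[1] - d[1],
             (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2) for p in (a, b, c)]
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = rows
    return (ax * (by * cz - bz * cy)
            - ay * (bx * cz - bz * cx)
            + az * (bx * cy - by * cx))

# Thin, boundary-layer-like points in physical space.
a, b, c, d = (0.0, 0.0), (1.0, 0.0), (0.9, 0.1), (0.2, 0.12)
print(incircle(a, b, c, d) > 0)   # True: d is inside, so physical-space
                                  # Delaunay would flip the stretched triangle away
# The same points with the wall-normal coordinate stretched 10x.
s = 10.0
A, B, C, D = [(x, s * y) for (x, y) in (a, b, c, d)]
print(incircle(A, B, C, D) > 0)   # False: in stretched space abc is kept,
                                  # giving a high-aspect-ratio triangle physically
```

Performing the triangulation in the stretched space and mapping back is what yields the very high aspect ratio triangles in the boundary layer and wake.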

  11. Adaptive mesh generation for viscous flows using Delaunay triangulation

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1988-01-01

    A method for generating an unstructured triangular mesh in two dimensions, suitable for computing high Reynolds number flows over arbitrary configurations is presented. The method is based on a Delaunay triangulation, which is performed in a locally stretched space, in order to obtain very high aspect ratio triangles in the boundary layer and the wake regions. It is shown how the method can be coupled with an unstructured Navier-Stokes solver to produce a solution adaptive mesh generation procedure for viscous flows.

  12. On the dynamics of some grid adaption schemes

    NASA Technical Reports Server (NTRS)

    Sweby, Peter K.; Yee, Helen C.

    1994-01-01

    The dynamics of a one-parameter family of mesh equidistribution schemes coupled with finite difference discretisations of linear and nonlinear convection-diffusion model equations is studied numerically. It is shown that, when time marched to steady state, the grid adaption not only influences the stability and convergence rate of the overall scheme, but can also introduce spurious dynamics to the numerical solution procedure.

  13. Near-Body Grid Adaption for Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.

  14. Adaptive Batch Mode Active Learning.

    PubMed

    Chakraborty, Shayok; Balasubramanian, Vineeth; Panchanathan, Sethuraman

    2015-08-01

    Active learning techniques have gained popularity to reduce human effort in labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar and representative instances to be selected for manual annotation. More recently, there have been attempts toward a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. Real-world applications require adaptive approaches for batch selection in active learning, depending on the complexity of the data stream in question. However, the existing work in this field has primarily focused on static or heuristic batch size selection. In this paper, we propose two novel optimization-based frameworks for adaptive batch mode active learning (BMAL), where the batch size as well as the selection criteria are combined in a single formulation. We exploit gradient-descent-based optimization strategies as well as properties of submodular functions to derive the adaptive BMAL algorithms. The solution procedures have the same computational complexity as existing state-of-the-art static BMAL techniques. Our empirical results on the widely used VidTIMIT and the mobile biometric (MOBIO) data sets portray the efficacy of the proposed frameworks and also certify the potential of these approaches in being used for real-world biometric recognition applications.
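    The flavor of batch selection, combining uncertainty with diversity, can be sketched with a greedy rule. This is a simplified stand-in for the paper's optimisation-based BMAL formulations (which also adapt the batch size); the scoring function and weighting are assumptions for illustration.

```python
import math

def select_batch(probs, features, batch_size, diversity_weight=1.0):
    """Greedy batch selection: score each unlabeled point by predictive
    entropy (uncertainty) plus distance to the closest already-selected
    point (diversity), and pick the best point batch_size times."""
    def entropy(p):
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    def dist(u, v):
        return math.sqrt(sum((ui - vi) ** 2 for ui, vi in zip(u, v)))
    selected = []
    while len(selected) < batch_size:
        best, best_score = None, float("-inf")
        for i in range(len(probs)):
            if i in selected:
                continue
            div = min((dist(features[i], features[j]) for j in selected), default=0.0)
            score = entropy(probs[i]) + diversity_weight * div
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

# Two near-duplicate uncertain points and one distant, more confident point:
probs = [[0.5, 0.5], [0.52, 0.48], [0.9, 0.1]]
features = [[0.0, 0.0], [0.05, 0.0], [3.0, 0.0]]
print(select_batch(probs, features, 2))  # picks 0, then the distant 2, not the duplicate 1
```

The diversity term is what keeps the batch from filling up with redundant near-duplicates, the failure mode that motivates batch-mode formulations.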

  15. Developing Competency in Payroll Procedures

    ERIC Educational Resources Information Center

    Jackson, Allen L.

    1975-01-01

    The author describes a sequence of units that provides for competency in payroll procedures. The units could be the basis for a four to six week minicourse and are adaptable, so that the student, upon completion, will be able to apply his learning to any payroll procedures system. (Author/AJ)

  16. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low order time integrators while accurate solutions require high order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the used method of integration operates inside of its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense-output techniques to compute the solution at off-step points.
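    The h-adaptivity part of such a scheme follows a textbook error-controller pattern: accept the step when the local error estimate is within tolerance, and rescale the stepsize by (tol/err)^(1/(p+1)) for a method of order p. The sketch below shows only this stepsize rule, not the paper's order-selection logic; the safety factor and growth cap are conventional assumed values.

```python
def controller_step(h, err, tol, p, safety=0.9, max_growth=2.0):
    """Elementary step-size controller for a method of order p: accept when
    the local error estimate is within tol, and rescale h by
    (tol / err) ** (1 / (p + 1)), clipped to a safe range."""
    accept = err <= tol
    factor = safety * (tol / err) ** (1.0 / (p + 1)) if err > 0 else max_growth
    factor = min(max_growth, max(0.1, factor))
    return accept, h * factor

# Error twice the tolerance with a BDF2 step: reject and shrink the step.
print(controller_step(0.1, 2e-6, 1e-6, p=2))
# Error well under tolerance: accept and grow the step (capped at 2x).
print(controller_step(0.1, 1e-9, 1e-6, p=2))
```

A rejected step would be retried with the smaller h; the quarantine mechanism mentioned above additionally guards against immediately re-growing the step after a rejection.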

  17. Adaptive unstructured meshing for thermal stress analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Dechaumphai, Pramote

    1992-01-01

    An adaptive unstructured meshing technique for mechanical and thermal stress analysis of built-up structures has been developed. A triangular membrane finite element and a new plate bending element are evaluated on a panel with a circular cutout and a frame-stiffened panel. The adaptive unstructured meshing technique, without a priori knowledge of the solution to the problem, generates clustered elements only where needed. Improved solution accuracy is obtained at a reduced problem size and analysis time compared with the results produced by the standard finite element procedure.

  18. Time domain and frequency domain design techniques for model reference adaptive control systems

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1971-01-01

    Some problems associated with the design of model-reference adaptive control systems are considered and solutions to these problems are advanced. The stability of the adapted system is a primary consideration in the development of both the time-domain and the frequency-domain design techniques. Consequently, the use of Liapunov's direct method forms an integral part of the derivation of the design procedures. The application of sensitivity coefficients to the design of model-reference adaptive control systems is considered. An application of the design techniques is also presented.
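    A gradient-type (MIT-rule) adaptation law, a close relative of the Lyapunov-derived laws such reports develop, can be demonstrated on a first-order plant with an unknown gain. All gains, rates, and the square-wave reference below are illustrative assumptions, not values from the report.

```python
def mrac_mit(k_plant=2.0, k_model=1.0, gamma=0.5, dt=0.001, t_end=200.0):
    """Model-reference adaptive control of the plant y' = -y + k_plant * u
    with control u = theta * r and the gradient (MIT-rule) adaptation law
    theta' = -gamma * e * ym, where e = y - ym is the model-following error.
    theta should converge so that theta * k_plant matches k_model."""
    y = ym = theta = 0.0
    t = 0.0
    while t < t_end:
        r = 1.0 if int(t / 20) % 2 == 0 else -1.0   # square-wave reference
        u = theta * r
        e = y - ym
        y += dt * (-y + k_plant * u)                # plant (forward Euler)
        ym += dt * (-ym + k_model * r)              # reference model
        theta += dt * (-gamma * e * ym)             # MIT adaptation rule
        t += dt
    return theta

theta = mrac_mit()
print(theta, theta * 2.0)  # theta * k_plant should approach k_model = 1
```

Lyapunov-based laws of the kind derived via the direct method have the same structure but come with a stability guarantee, which the plain MIT rule lacks for large adaptation gains.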

  19. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  20. Validation of a simplified field-adapted procedure for routine determinations of methyl mercury at trace levels in natural water samples using species-specific isotope dilution mass spectrometry.

    PubMed

    Lambertsson, Lars; Björn, Erik

    2004-12-01

    A field-adapted procedure based on species-specific isotope dilution (SSID) methodology for trace-level determinations of methyl mercury (CH(3)Hg(+)) in mire, fresh and sea water samples was developed, validated and applied in a field study. In the field study, mire water samples were filtered, standardised volumetrically with isotopically enriched CH(3) (200)Hg(+), and frozen on dry ice. The samples were derivatised in the laboratory without further pre-treatment using sodium tetraethyl borate (NaB(C(2)H(5))(4)) and the ethylated methyl mercury was purge-trapped on Tenax columns. The analyte was thermo-desorbed onto a GC-ICP-MS system for analysis. Investigations preceding field application of the method showed that when using SSID, for all tested matrices, identical results were obtained between samples that were freeze-preserved or analysed unpreserved. For DOC-rich samples (mire water) additional experiments showed no difference in CH(3)Hg(+) concentration between samples that were derivatised without pre-treatment or after liquid extraction. Extractions of samples for matrix-analyte separation prior to derivatisation are therefore not necessary. No formation of CH(3)Hg(+) was observed during sample storage and treatment when spiking samples with (198)Hg(2+). Total uncertainty budgets for the field application of the method showed that for analyte concentrations higher than 1.5 pg g(-1) (as Hg) the relative expanded uncertainty (REU) was approximately 5% and dominated by the uncertainty in the isotope standard concentration. Below 0.5 pg g(-1) (as Hg), the REU was >10% and dominated by variations in the field blank. The uncertainty of the method is sufficiently low to accurately determine CH(3)Hg(+) concentrations at trace levels. The detection limit was determined to be 4 fg g(-1) (as Hg) based on replicate analyses of laboratory blanks. The described procedure is reliable, considerably faster and simplified compared to non-SSID methods and thereby very

  1. Structured adaptive grid generation using algebraic methods

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.

    1993-01-01

    The accuracy of the numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively large, computationally expensive grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach where a function which contains a measure of grid smoothness, orthogonality and volume variation is minimized by using a variational principle. This approach provides a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm where the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring the points in the large error regions to attract other points and points in the low error region to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three-step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial step, is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and last step is to reevaluate the flow property by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration
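    The three-step process above can be sketched in one dimension: build a weighting mesh from the equidistribution law, redistribute the points algebraically, then re-evaluate the solution at the new locations by a search/interpolate scheme. This is an illustration of the idea, not the authors' algorithm; the gradient-based weight and cycle count are assumptions.

```python
def adapt_grid(x, u, n_cycles=3):
    """1D sketch of the three-step algebraic adaption: (1) weighting mesh
    from solution gradients, (2) redistribute points by equidistributing the
    weight, (3) re-evaluate the solution by linear search/interpolation."""
    def interp(xq, xs, us):
        # Linear search/interpolate, assuming xs is increasing.
        for i in range(len(xs) - 1):
            if xs[i] <= xq <= xs[i + 1]:
                t = (xq - xs[i]) / (xs[i + 1] - xs[i])
                return (1 - t) * us[i] + t * us[i + 1]
        return us[-1]
    for _ in range(n_cycles):
        # Step 1: adaptive weighting mesh via the equidistribution law.
        w = [1.0 + abs(u[i + 1] - u[i]) / (x[i + 1] - x[i]) for i in range(len(x) - 1)]
        cum = [0.0]
        for i, wi in enumerate(w):
            cum.append(cum[-1] + wi * (x[i + 1] - x[i]))
        # Step 2: redistribute points so each cell carries equal weight.
        new_x = [interp(cum[-1] * k / (len(x) - 1), cum, x) for k in range(len(x))]
        # Step 3: re-evaluate the flow property at the new locations.
        new_u = [interp(xi, x, u) for xi in new_x]
        x, u = new_x, new_u
    return x, u

# A steep front at x = 0.5 attracts grid points over a few adaption cycles.
n = 21
x0 = [i / (n - 1) for i in range(n)]
u0 = [0.0 if xi < 0.5 else 1.0 for xi in x0]
x1, u1 = adapt_grid(x0, u0)
print(min(x1[i + 1] - x1[i] for i in range(n - 1)))  # much smaller than 1/20
```

Because the weight acts as an attractor in high-error regions, points cluster at the front while the boundary points stay fixed.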

  2. Adapted Canoeing for the Handicapped.

    ERIC Educational Resources Information Center

    Frith, Greg H.; Warren, L. D.

    1984-01-01

    Safety as well as instructional recommendations are offered for adapting canoeing as a recreational activity for handicapped students. Major steps of the instructional program feature orientation to the water and canoe, entry and exit techniques, and mobility procedures. (CL)

  3. Adaptive Sampling Designs.

    ERIC Educational Resources Information Center

    Flournoy, Nancy

    Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…

  4. Prism Adaptation in Schizophrenia

    ERIC Educational Resources Information Center

    Bigelow, Nirav O.; Turner, Beth M.; Andreasen, Nancy C.; Paulsen, Jane S.; O'Leary, Daniel S.; Ho, Beng-Choon

    2006-01-01

    The prism adaptation test examines procedural learning (PL) in which performance facilitation occurs with practice on tasks without the need for conscious awareness. Dynamic interactions between frontostriatal cortices, basal ganglia, and the cerebellum have been shown to play key roles in PL. Disruptions within these neural networks have also…

  5. Adaptive Physical Education.

    ERIC Educational Resources Information Center

    Muller, Robert M.

    GRADES OR AGES: Elementary grades. SUBJECT MATTER: Adaptive physical education. ORGANIZATION AND PHYSICAL APPEARANCE: The aims and objectives of the program and the screening procedure are described. Common postural deviations are identified and a number of congenital and other defects described. Details of the modified program are given. There is…

  6. Excretion patterns of solute and different-sized particle passage markers in foregut-fermenting proboscis monkey (Nasalis larvatus) do not indicate an adaptation for rumination.

    PubMed

    Matsuda, Ikki; Sha, John C M; Ortmann, Sylvia; Schwarm, Angela; Grandl, Florian; Caton, Judith; Jens, Warner; Kreuzer, Michael; Marlena, Diana; Hagen, Katharina B; Clauss, Marcus

    2015-10-01

    Behavioral observations, and fecal particles smaller than in other primates, indicate that free-ranging proboscis monkeys (Nasalis larvatus) have a strategy of facultative merycism (rumination). In functional ruminants (ruminants and camelids), rumination is facilitated by a particle-sorting mechanism in the forestomach that selectively retains larger particles and subjects them to repeated mastication. Using a set of a solute and three particle markers of different sizes (<2, 5 and 8 mm), we characterized digesta passage kinetics and measured mean retention times (MRTs) in four captive proboscis monkeys (6-18 kg) and compared the marker excretion patterns to those in domestic cattle. In addition, we evaluated various methods of calculating and displaying passage characteristics. The mean ± SD dry matter intake was 98 ± 22 g kg^-0.75 d^-1, 68 ± 7% of which was browse. Accounting for sampling intervals in MRT calculation yielded results that were not affected by the sampling frequency. Displaying marker excretion patterns using fecal marker concentrations (rather than amounts) facilitated comparisons with reactor theory outputs and indicated that both proboscis monkey and cattle digestive tracts behave as a series of very few tank reactors. However, the separation of the solute and particle markers, and of the different-sized particle markers, evident in cattle, did not occur in proboscis monkeys, in which all markers moved together at MRTs of approximately 40 h. The results indicate that the digestive physiology of proboscis monkeys does not show the typical characteristics of ruminants, which may explain why merycism is only a facultative strategy in this species. PMID:26004169

  7. Constructed wetland as a low cost and sustainable solution for wastewater treatment adapted to rural settlements: the Chorfech wastewater treatment pilot plant.

    PubMed

    Ghrabi, Ahmed; Bousselmi, Latifa; Masi, Fabio; Regelsberger, Martin

    2011-01-01

    The paper presents the detailed design and some preliminary results of a wastewater treatment pilot plant (WWTPP), a multistage constructed wetland (CW) located at the rural settlement of 'Chorfech 24' (Tunisia). The WWTPP implemented at Chorfech 24 is mainly designed as a demonstration of sustainable water management solutions (low-cost wastewater treatment), in order to prove the efficiency of these solutions under real Tunisian conditions and ultimately allow the further spreading of the demonstrated techniques. The pilot activity also aims to build experience with the implemented techniques and to improve them where necessary, so they can be recommended for wide application in rural settlements in Tunisia and similar situations worldwide. The WWTPP at Chorfech 24 (a rural settlement of 50 houses and 350 inhabitants) consists of one Imhoff tank for pre-treatment followed by three stages in series: a horizontal subsurface flow CW as first stage, a vertical subsurface flow CW as second stage, and a third, horizontal flow CW. The sludge of the Imhoff tank is treated in a sludge composting bed. The performance of the individual components, as well as of the whole treatment system, is presented based on 3 months of monitoring. The results shown in this paper concern carbon, nitrogen and phosphorus removal as well as the reduction of micro-organisms. The mean overall removal rates of the Chorfech WWTPP during the monitored period were 97% for total suspended solids and biochemical oxygen demand (BOD5), 95% for chemical oxygen demand, 71% for total nitrogen and 82% for P-PO4. The removal of E. coli by the whole system is 2.5 log units.

  8. Minimising the error in eigenvalue calculations involving the Boltzmann transport equation using goal-based adaptivity on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Goffin, Mark A.; Baker, Christopher M. J.; Buchan, Andrew G.; Pain, Christopher C.; Eaton, Matthew D.; Smith, Paul N.

    2013-06-01

    This article presents a method for goal-based anisotropic adaptive methods for the finite element method applied to the Boltzmann transport equation. The neutron multiplication factor, k, is used as the goal of the adaptive procedure. The anisotropic adaptive algorithm requires error measures for k with directional dependence. General error estimators are derived for any given functional of the flux and applied to k to acquire the driving force for the adaptive procedure. The error estimators require the solution of an appropriately formed dual equation. Forward and dual error indicators are calculated by weighting the Hessian of each solution with the dual and forward residual respectively. The Hessian is used as an approximation of the interpolation error in the solution which gives rise to the directional dependence. The two indicators are combined to form a single error metric that is used to adapt the finite element mesh. The residual is approximated using a novel technique arising from the sub-grid scale finite element discretisation. Two adaptive routes are demonstrated: (i) a single mesh is used to solve all energy groups, and (ii) a different mesh is used to solve each energy group. The second method aims to capture the benefit from representing the flux from each energy group on a specifically optimised mesh. The k goal-based adaptive method was applied to three examples which illustrate the superior accuracy in criticality problems that can be obtained.

  9. Minimising the error in eigenvalue calculations involving the Boltzmann transport equation using goal-based adaptivity on unstructured meshes

    SciTech Connect

    Goffin, Mark A.; Baker, Christopher M.J.; Buchan, Andrew G.; Pain, Christopher C.; Eaton, Matthew D.; Smith, Paul N.

    2013-06-01

    This article presents a method for goal-based anisotropic adaptive methods for the finite element method applied to the Boltzmann transport equation. The neutron multiplication factor, k_eff, is used as the goal of the adaptive procedure. The anisotropic adaptive algorithm requires error measures for k_eff with directional dependence. General error estimators are derived for any given functional of the flux and applied to k_eff to acquire the driving force for the adaptive procedure. The error estimators require the solution of an appropriately formed dual equation. Forward and dual error indicators are calculated by weighting the Hessian of each solution with the dual and forward residual respectively. The Hessian is used as an approximation of the interpolation error in the solution which gives rise to the directional dependence. The two indicators are combined to form a single error metric that is used to adapt the finite element mesh. The residual is approximated using a novel technique arising from the sub-grid scale finite element discretisation. Two adaptive routes are demonstrated: (i) a single mesh is used to solve all energy groups, and (ii) a different mesh is used to solve each energy group. The second method aims to capture the benefit from representing the flux from each energy group on a specifically optimised mesh. The k_eff goal-based adaptive method was applied to three examples which illustrate the superior accuracy in criticality problems that can be obtained.

  10. An efficient computational scheme for electronic excitation spectra of molecules in solution using the symmetry-adapted cluster–configuration interaction method: The accuracy of excitation energies and intuitive charge-transfer indices

    SciTech Connect

    Fukuda, Ryoichi; Ehara, Masahiro

    2014-10-21

    Solvent effects on electronic excitation spectra are considerable in many situations; therefore, we propose an efficient and reliable computational scheme that is based on the symmetry-adapted cluster-configuration interaction (SAC-CI) method and the polarizable continuum model (PCM) for describing electronic excitations in solution. The new scheme combines the recently proposed first-order PCM SAC-CI method with the PTE (perturbation theory at the energy level) PCM SAC scheme. This is essentially equivalent to the usual SAC and SAC-CI computations using the PCM Hartree-Fock orbitals and integrals, except for additional correction terms that represent solute-solvent interactions. The test calculations demonstrate that the present method is a very good approximation of the more costly iterative PCM SAC-CI method for the excitation energies of closed-shell molecules in their equilibrium geometry. The method provides very accurate values of electric dipole moments but is insufficient for describing the charge-transfer (CT) indices in polar solvent. The present method accurately reproduces the absorption spectra, and their solvatochromism, of push-pull type 2,2′-bithiophene molecules. Significant solvent and substituent effects on these molecules are intuitively visualized using the CT indices. The present method is the simplest theoretically consistent extension of the SAC-CI method to include the PCM environment, and it is therefore useful for theoretical and computational spectroscopy.

  11. Adaptive Management

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...

  12. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
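As a sketch of the GA component alone, a minimal real-coded genetic algorithm can locate a near-optimum value of a single controller parameter. The operators and parameter values below are generic textbook choices, not the Bureau of Mines design, and the fitness function is a toy stand-in for a control-performance measure.

```python
import random

def genetic_search(fitness, bounds, pop_size=30, generations=60, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    and Gaussian mutation over a single real-valued gene."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # tournament selection (size 3) of two parents
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            a = rng.random()
            child = a * p1 + (1 - a) * p2              # blend crossover
            child += rng.gauss(0.0, 0.02 * (hi - lo))  # Gaussian mutation
            new_pop.append(min(hi, max(lo, child)))    # clip to bounds
        pop = new_pop
    return max(pop, key=fitness)

# Toy task: find the controller gain maximizing a fitness peaked at 2.5.
best = genetic_search(lambda k: -(k - 2.5) ** 2, bounds=(0.0, 10.0))
```

In an adaptive controller of the kind described above, the gene would instead encode fuzzy membership-function parameters, and the fitness would be measured on the controlled process.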

  13. Adaptive process control using fuzzy logic and genetic algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  14. Topology and grid adaption for high-speed flow computations

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Tiwari, Surendra N.

    1989-01-01

    This study investigates the effects of grid topology and grid adaptation on numerical solutions of the Navier-Stokes equations. In the first part of this study, a general procedure is presented for the computation of high-speed flow over complex three-dimensional configurations. The flow field is simulated on the surface of a Butler wing in a uniform stream. Results are presented for a Mach number of 3.5 and a Reynolds number of 2,000,000. O-type and H-type grids have been used for this study, and the results are compared with each other and with other theoretical and experimental results. The results demonstrate that while the H-type grid is suitable for the leading and trailing edges, a more accurate solution can be obtained for the middle part of the wing with an O-type grid. In the second part of this study, methods of grid adaption are reviewed and a method is developed with the capability of adapting to several variables. This method is based on a variational approach and is an algebraic method. Also, the method has been formulated in such a way that there is no need for any matrix inversion. The method is used in conjunction with the calculation of hypersonic flow over a blunt-nose body. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.

  15. Dental Procedures.

    PubMed

    Ramponi, Denise R

    2016-01-01

    Dental problems are a common complaint in emergency departments in the United States. A wide variety of dental issues are addressed in emergency department visits, such as dental caries, loose teeth, dental trauma, gingival infections, and dry socket syndrome. A review of the most common dental blocks and dental procedures will give the practitioner the opportunity to make the patient more comfortable and reduce the amount of analgesia the patient will need upon discharge. Familiarity with the dental equipment, tooth, and mouth anatomy will help prepare the practitioner to perform these dental procedures. PMID:27482994

  16. Fireplace adapters

    SciTech Connect

    Hunt, R.L.

    1983-12-27

    An adapter is disclosed for use with a fireplace. The stove pipe of a stove standing in a room to be heated may be connected to the flue of the chimney so that products of combustion from the stove may be safely exhausted through the flue and outwardly of the chimney. The adapter may be easily installed within the fireplace by removing the damper plate and fitting the adapter to the damper frame. Each of a pair of bolts has a portion which hooks over a portion of the damper frame and a threaded end depending from the hook portion and extending through a hole in the adapter. Nuts are threaded on the bolts and are adapted to force the adapter into a tight fit with the adapter frame.

  17. Adaptive Assessment for Nonacademic Secondary Reading.

    ERIC Educational Resources Information Center

    Hittleman, Daniel R.

    Adaptive assessment procedures are a means of determining the quality of a reader's performance in a variety of reading situations and on a variety of written materials. Such procedures are consistent with the idea that there are functional competencies which change with the reading task. Adaptive assessment takes into account that a lack of…

  18. Interdisciplinarity in Adapted Physical Activity

    ERIC Educational Resources Information Center

    Bouffard, Marcel; Spencer-Cavaliere, Nancy

    2016-01-01

    It is commonly accepted that inquiry in adapted physical activity involves the use of different disciplines to address questions. It is often advanced today that complex problems of the kind frequently encountered in adapted physical activity require a combination of disciplines for their solution. At the present time, individual research…

  19. Adaptive management: Chapter 1

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.; Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  20. Adaptive explicit and implicit finite element methods for transient thermal analysis

    NASA Technical Reports Server (NTRS)

    Probert, E. J.; Hassan, O.; Morgan, K.; Peraire, J.

    1992-01-01

    The application of adaptive finite element methods to the solution of transient heat conduction problems in two dimensions is investigated. The computational domain is represented by an unstructured assembly of linear triangular elements and the mesh adaptation is achieved by local regeneration of the grid, using an error estimation procedure coupled to an automatic triangular mesh generator. Two alternative solution procedures are considered. In the first procedure, the solution is advanced by explicit timestepping, with domain decomposition being used to improve the computational efficiency of the method. In the second procedure, an algorithm for constructing continuous lines which pass only once through each node of the mesh is employed. The lines are used as the basis of a fully implicit method, in which the equation system is solved by line relaxation using a block tridiagonal equation solver. The numerical performance of the two procedures is compared for the analysis of a problem involving a moving heat source applied to a convectively cooled cylindrical leading edge.
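The explicit and implicit alternatives compared above can be illustrated on a 1-D model heat-conduction problem (a sketch, not the paper's finite element formulation): forward Euler is cheap per step but restricted by a stability limit, while backward Euler requires a linear solve per step but tolerates large time steps.

```python
import numpy as np

def explicit_step(u, r):
    """One forward-Euler step of u_t = u_xx with fixed boundary values;
    stable only for r = dt/dx**2 <= 0.5."""
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return un

def implicit_step(u, r):
    """One backward-Euler step: solve (I - r*L) u_new = u on the
    interior nodes; unconditionally stable."""
    n = len(u) - 2
    A = (np.diag((1 + 2 * r) * np.ones(n))
         + np.diag(-r * np.ones(n - 1), 1)
         + np.diag(-r * np.ones(n - 1), -1))
    rhs = u[1:-1].copy()
    rhs[0] += r * u[0]        # boundary contributions
    rhs[-1] += r * u[-1]
    un = u.copy()
    un[1:-1] = np.linalg.solve(A, rhs)
    return un

u0 = np.zeros(11); u0[5] = 1.0   # heat pulse in the middle
ue = explicit_step(u0, 0.4)      # within the stability limit
ui = implicit_step(u0, 5.0)      # far beyond it, yet still stable
```

A production code would use a tridiagonal (Thomas) solver rather than a dense solve, which is the role of the block tridiagonal line relaxation in the paper.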

  1. 3D Structured Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Banks, D. W.; Hafez, M. M.

    1996-01-01

    Grid adaptation for structured meshes is the art of using information from an existing, but poorly resolved, solution to automatically redistribute the grid points in such a way as to improve the resolution in regions of high error, and thus the quality of the solution. This involves: (1) generating a grid via some standard algorithm, (2) calculating a solution on this grid, (3) adapting the grid to this solution, (4) recalculating the solution on this adapted grid, and (5) repeating steps 3 and 4 to satisfaction. Steps 3 and 4 can be repeated until some 'optimal' grid is converged to, but typically this is not worth the effort and just two or three repeat calculations are necessary. They may also be repeated every 5-10 time steps for unsteady calculations.
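The five-step loop above can be sketched in miniature. In this toy (not the authors' code), an analytic profile stands in for the flow solver, and a 1-D equidistribution step stands in for the adaptation; both choices are illustrative assumptions.

```python
import numpy as np

def solve(x):
    """Stand-in 'flow solver': evaluate a steep model profile on grid x."""
    return np.tanh(25 * (x - 0.5))

def adapt(x, u):
    """Redistribute points by equidistributing the weight 1 + scaled |du/dx|."""
    w = 1.0 + 5.0 * np.abs(np.gradient(u, x))
    s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    return np.interp(np.linspace(0.0, s[-1], len(x)), s, x)

x = np.linspace(0.0, 1.0, 31)   # step 1: generate an initial grid
u = solve(x)                    # step 2: solve on this grid
for _ in range(3):              # steps 3-5: adapt, re-solve, repeat
    x = adapt(x, u)
    u = solve(x)
```

After two or three passes the spacing near the steep front is far below the uniform spacing, matching the observation that a few repeat calculations usually suffice.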

  2. Investigations in adaptive processing of multispectral data

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Horwitz, H. M.

    1973-01-01

    Adaptive data processing procedures are applied to the problem of classifying objects in a scene scanned by multispectral sensor. These procedures show a performance improvement over standard nonadaptive techniques. Some sources of error in classification are identified and those correctable by adaptive processing are discussed. Experiments in adaptation of signature means by decision-directed methods are described. Some of these methods assume correlation between the trajectories of different signature means; for others this assumption is not made.

  3. Adaptive SPECT

    PubMed Central

    Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.

    2008-01-01

    Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
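The ideal linear (Hotelling) observer mentioned above has a compact sample-based form: the detection template is the inverse of the average covariance applied to the mean signal difference. The toy 1-D "images" below are synthetic Gaussian data, not SPECT projections.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_samp = 16, 2000
signal = np.zeros(n_pix); signal[6:10] = 0.5        # known signal profile
bg = rng.normal(1.0, 0.3, size=(n_samp, n_pix))     # signal-absent samples
present = bg + signal                                # signal-present samples

# Hotelling template: w = S^-1 (mean difference), S = average covariance
S = 0.5 * (np.cov(bg.T) + np.cov(present.T))
w = np.linalg.solve(S, present.mean(0) - bg.mean(0))

# Detectability: separation of the scalar test statistic t = w . g
t0, t1 = bg @ w, present @ w
snr2 = (t1.mean() - t0.mean()) ** 2 / (0.5 * (t0.var() + t1.var()))
```

In an adaptive system, the scout image would update the signal and covariance estimates, and the configuration maximizing this figure of merit would be selected.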

  4. Adaptive Computing.

    ERIC Educational Resources Information Center

    Harrell, William

    1999-01-01

    Provides information on various adaptive technology resources available to people with disabilities. (Contains 19 references, an annotated list of 129 websites, and 12 additional print resources.) (JOW)

  5. Contour adaptation.

    PubMed

    Anstis, Stuart

    2013-01-01

    It is known that adaptation to a disk that flickers between black and white at 3-8 Hz on a gray surround renders invisible a congruent gray test disk viewed afterwards. This is contrast adaptation. We now report that adapting simply to the flickering circular outline of the disk can have the same effect. We call this "contour adaptation." This adaptation does not transfer interocularly, and apparently applies only to luminance, not color. One can adapt selectively to only some of the contours in a display, making only those contours temporarily invisible. For instance, a plaid comprises a vertical grating superimposed on a horizontal grating. If one first adapts to appropriate flickering vertical lines, the vertical component of the plaid disappears and it looks like a horizontal grating. Also, we simulated a Cornsweet (1970) edge, and we selectively adapted out the subjective and objective contours of a Kanizsa (1976) subjective square. By temporarily removing edges, contour adaptation offers a new technique to study the role of visual edges, and it demonstrates how brightness information is concentrated in edges and propagates from them as it fills in surfaces.

  6. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. Unfortunately, an efficient parallel implementation is difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive large-scale numerical computations in a message-passing environment. First, we present an efficient parallel implementation of a tetrahedral mesh adaption scheme. Extremely promising parallel performance is achieved for various refinement and coarsening strategies on a realistic-sized domain. Next we describe PLUM, a novel method for dynamically balancing the processor workloads in adaptive grid computations. This research includes interfacing the parallel mesh adaption procedure based on actual flow solutions to a data remapping module, and incorporating an efficient parallel mesh repartitioner. A significant runtime improvement is achieved by observing that data movement for a refinement step should be performed after the edge-marking phase but before the actual subdivision. We also present optimal and heuristic remapping cost metrics that can accurately predict the total overhead for data redistribution. Several experiments are performed to verify the effectiveness of PLUM on sequences of dynamically adapted unstructured grids. Portability is demonstrated by presenting results on the two vastly different architectures of the SP2 and the Origin2000. Additionally, we evaluate the performance of five state-of-the-art partitioning algorithms that can be used within PLUM. It is shown that for certain classes of unsteady adaption, globally repartitioning the computational mesh produces higher quality results than diffusive repartitioning schemes.
We also demonstrate that a coarse starting mesh produces high quality load balancing, at

  7. Climate Literacy and Adaptation Solutions for Society

    NASA Astrophysics Data System (ADS)

    Sohl, L. E.; Chandler, M. A.

    2011-12-01

    Many climate literacy programs and resources are targeted specifically at children and young adults, as part of the concerted effort to improve STEM education in the U.S. This work is extremely important in building a future society that is well prepared to adopt policies promoting climate change resilience. What these climate literacy efforts seldom do, however, is reach the older adult population that is making economic decisions right now (or not, as the case may be) on matters that can be impacted by climate change. The result is a lack of appreciation of "climate intelligence" - information that could be incorporated into the decision-making process, to maximize opportunities, minimize risk, and create a climate-resilient economy. A National Climate Service, akin to the National Weather Service, would help provide legitimacy to the need for climate intelligence, and would certainly also be the first stop for both governments and private sector concerns seeking climate information for operational purposes. However, broader collaboration between the scientific and business communities is also needed, so that they become co-creators of knowledge that is beneficial and informative to all. The stakeholder-driven research that is the focus of NOAA's RISA (Regional Integrated Sciences and Assessments) projects is one example of how such collaborations can be developed.

  8. Adaptive triangular mesh generation

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Eiseman, P. R.

    1984-01-01

    A general adaptive grid algorithm is developed on triangular grids. The adaptivity is provided by a combination of node addition, dynamic node connectivity and a simple node movement strategy. While the local restructuring process and the node addition mechanism take place in the physical plane, the nodes are displaced on a monitor surface, constructed from the salient features of the physical problem. An approximation to mean curvature detects changes in the direction of the monitor surface, and provides the pulling force on the nodes. Solutions to the axisymmetric Grad-Shafranov equation demonstrate the capturing, by triangles, of the plasma-vacuum interface in a free-boundary equilibrium configuration.

  9. Climate adaptation

    NASA Astrophysics Data System (ADS)

    Kinzig, Ann P.

    2015-03-01

    This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.

  10. The development and application of the self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.

    1993-01-01

    The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme, the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can yield an improved solution. These are complex issues that need to be explored within the context of each specific problem.

  11. Adaptive Force Control in Compliant Motion

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1994-01-01

    This paper addresses the problem of controlling a manipulator in compliant motion while in contact with an environment having an unknown stiffness. Two classes of solutions are discussed: adaptive admittance control and adaptive compliance control. In both admittance and compliance control schemes, compensator adaptation is used to ensure a stable and uniform system performance.

  12. Procedural knowledge

    SciTech Connect

    Georgeff, M.P.; Lansky, A.L.

    1986-10-01

    Much of commonsense knowledge about the real world is in the form of procedures or sequences of actions for achieving particular goals. In this paper, a formalism is presented for representing such knowledge using the notion of process. A declarative semantics for the representation is given, which allows a user to state facts about the effects of doing things in the problem domain of interest. An operational semantics is also provided, which shows how this knowledge can be used to achieve particular goals or to form intentions regarding their achievement. Given both semantics, our formalism additionally serves as an executable specification language suitable for constructing complex systems. A system based on this formalism is described, and examples involving control of an autonomous robot and fault diagnosis for NASA's space shuttle are provided.

  13. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown, and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user-specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high-lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
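
    The adjoint-weighted residual mechanism behind such functional error estimates can be sketched on a 1-D Poisson model problem (an illustrative reconstruction of ours, not the paper's Euler implementation). The adjoint solution weights the coarse-grid residual to correct the functional; for a linear problem the correction is exact:

```python
import numpy as np

def poisson_matrix(n):
    """1-D Poisson operator -u'' on a uniform interior grid with n points."""
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return A, h

# Fine-grid "truth" problem: -u'' = f, functional J(u) = h * sum(g * u)
nf = 63
Af, hf = poisson_matrix(nf)
xf = np.linspace(hf, 1 - hf, nf)
f = np.sin(np.pi * xf)
g = np.ones(nf)

uf = np.linalg.solve(Af, f)           # fine solution
Jf = hf * g @ uf                      # fine-grid functional value

# Coarse solution, prolonged onto the fine grid by linear interpolation
nc = 15
Ac, hc = poisson_matrix(nc)
xc = np.linspace(hc, 1 - hc, nc)
uc = np.linalg.solve(Ac, np.sin(np.pi * xc))
uc_on_f = np.interp(xf, xc, uc)

Jc = hf * g @ uc_on_f                 # functional from the coarse solution

# Adjoint problem A^T psi = h g, then delta J ~ psi^T (f - A u_c)
psi = np.linalg.solve(Af.T, hf * g)
Jc_corrected = Jc + psi @ (f - Af @ uc_on_f)
```

    For nonlinear equations (such as the Euler equations of the paper) the correction is only approximate, which is why the remaining uncertainty drives the mesh adaptation.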

  14. Application of Sequential Interval Estimation to Adaptive Mastery Testing

    ERIC Educational Resources Information Center

    Chang, Yuan-chin Ivan

    2005-01-01

    In this paper, we apply sequential one-sided confidence interval estimation procedures with beta-protection to adaptive mastery testing. The procedures of fixed-width and fixed proportional accuracy confidence interval estimation can be viewed as extensions of one-sided confidence interval procedures. It can be shown that the adaptive mastery…

  15. Adaptive Algebraic Multigrid Methods

    SciTech Connect

    Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J

    2004-04-09

    Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.
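
    The fixed-hierarchy baseline that adaptive AMG extends can be sketched as a geometric two-grid cycle for the 1-D Poisson equation (our own toy construction, not the paper's adaptive algorithm): smooth, restrict the residual, solve the error equation on the coarse grid, prolong the correction, and smooth again.

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0/3.0):
    """Weighted-Jacobi smoothing for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid_cycle(u, f, h):
    """Pre-smooth, solve the residual equation exactly on a 2h grid,
    prolong the correction, post-smooth."""
    u = jacobi(u, f, h, 3)
    r = residual(u, f, h)
    # full-weighting restriction to the coarse grid (every other point)
    nc = (u.size + 1) // 2
    rc = np.zeros(nc)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    # exact coarse solve of the error equation -e'' = r
    hc = 2 * h
    Ac = (np.diag(2.0 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
          - np.diag(np.ones(nc - 3), -1)) / hc**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])
    # linear prolongation back to the fine grid, then correct
    u += np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)
    return jacobi(u, f, h, 3)

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.sin(np.pi * x)
u = np.zeros(n)
norms = []
for _ in range(5):
    u = two_grid_cycle(u, f, h)
    norms.append(np.linalg.norm(residual(u, f, h)))
```

    The residual norm drops by orders of magnitude per cycle. AMG replaces the geometric restriction/prolongation with operators built from the matrix itself, and the adaptive variant above additionally learns the near-null space instead of assuming it.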

  16. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, which designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales; 2) multiresolution presentation of heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multiscale nature of the methodology enables not only computational efficiency and accuracy, but also describes subsurface processes in close correspondence with their physical interpretation. The methodology inherently supports a mesh-free procedure, avoids classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines with only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed that resolves all time scales within each global time step, meaning that the algorithm uses smaller time steps only in lines where solution changes are intense. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across…

  17. Toothbrush Adaptations.

    ERIC Educational Resources Information Center

    Exceptional Parent, 1987

    1987-01-01

    Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)

  18. Knowledge Retrieval Solutions.

    ERIC Educational Resources Information Center

    Khan, Kamran

    1998-01-01

    Excalibur RetrievalWare offers true knowledge retrieval solutions. Its fundamental technologies, Adaptive Pattern Recognition Processing and Semantic Networks, have capabilities for knowledge discovery and knowledge management of full-text, structured and visual information. The software delivers a combination of accuracy, extensibility,…

  19. An efficient method-of-lines simulation procedure for organic semiconductor devices.

    PubMed

    Rogel-Salazar, J; Bradley, D D C; Cash, J R; Demello, J C

    2009-03-14

    We describe an adaptive grid method-of-lines (MOL) solution procedure for modelling charge transport and recombination in organic semiconductor devices. The procedure offers an efficient, robust and versatile means of simulating semiconductor devices that allows for much simpler coding of the underlying equations than alternative simulation procedures. The MOL technique is especially well suited to modelling the extremely stiff (and hence difficult to solve) equations that arise during the simulation of organic (and some inorganic) semiconductor devices. It also has wider applications in other areas, including reaction kinetics, combustion, and aero- and fluid dynamics, where its ease of implementation also makes it an attractive choice. The MOL procedure we use converts the underlying semiconductor equations into a series of coupled ordinary differential equations (ODEs) that can be integrated forward in time using an appropriate ODE solver. The time integration is periodically interrupted, the numerical solution is interpolated onto a new grid that is better matched to the solution profile, and the time integration is then resumed on the new grid. The efficacy of the simulation procedure is assessed by considering a single-layer device structure, for which exact analytical solutions are available for the electric potential, the charge distributions and the current-voltage characteristics. Two separate state-of-the-art ODE solvers are tested: the single-step Runge-Kutta solver Radau5 and the multi-step solver ode15s, which is included as part of the Matlab ODE suite. In both cases, the numerical solutions show excellent agreement with the exact analytical solutions, yielding results that are accurate to one part in 10^4. The single-step Radau5 solver, however, is found to provide faster convergence since its efficiency is not compromised by the periodic interruption of the time integration when the grid is updated.
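
    The interrupt-and-regrid cycle described above can be sketched for the 1-D heat equation, a minimal stand-in for the drift-diffusion system (the monitor function, grid sizes and chunk lengths below are illustrative choices of ours, not the paper's):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u, x):
    """Semi-discrete heat equation u_t = u_xx on a (possibly nonuniform) grid."""
    du = np.zeros_like(u)
    dxl = x[1:-1] - x[:-2]
    dxr = x[2:] - x[1:-1]
    du[1:-1] = 2 * (dxl * u[2:] - (dxl + dxr) * u[1:-1] + dxr * u[:-2]) \
               / (dxl * dxr * (dxl + dxr))
    return du  # u[0], u[-1] stay fixed (homogeneous Dirichlet)

def regrid(x, u, n):
    """Equidistribute grid points against a gradient-based monitor function."""
    w = np.abs(np.gradient(u, x)) + 0.1
    s = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    xi = np.interp(np.linspace(0.0, s[-1], n), s, x)
    return xi, np.interp(xi, x, u)

x = np.linspace(0.0, 1.0, 41)
u = np.sin(np.pi * x)
t, dt = 0.0, 0.05
for _ in range(4):          # integrate a chunk (stiff BDF solver), then regrid
    sol = solve_ivp(rhs, (t, t + dt), u, args=(x,), method="BDF",
                    rtol=1e-8, atol=1e-10)
    u, t = sol.y[:, -1], t + dt
    x, u = regrid(x, u, 41)

exact = np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
```

    The analytic decay of the sine mode plays the role of the paper's exact single-layer solutions: the periodically regridded MOL result tracks it closely despite the repeated interpolation.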

  20. Solutions For Smart Metering Under Harsh Environmental Conditions

    NASA Astrophysics Data System (ADS)

    Kunicina, N.; Zabasta, A.; Kondratjevs, K.; Asmanis, G.

    2015-02-01

    The described case study concerns the application of wireless sensor networks to the smart control of power supply substations. The solution proposed for metering is based on the modular principle and has been tested in the intersystem communication paradigm using selectable interface modules (IEEE 802.3, ISM radio interface, GSM/GPRS). The modularity of the solution yields 7 % savings in maintenance costs. The developed solution can be applied to the control of different critical infrastructure networks using adapted modules. The proposed smart metering is suitable for outdoor installation, indoor industrial installations, and operation under electromagnetic pollution and temperature and humidity stress. The results of tests have shown good electromagnetic compatibility of the prototype meter with other electronic devices. The metering procedure is exemplified by the operation of a testing company's workers under harsh environmental conditions.

  1. Lattice model for water-solute mixtures

    NASA Astrophysics Data System (ADS)

    Furlan, A. P.; Almarza, N. G.; Barbosa, M. C.

    2016-10-01

    A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute/solvent interaction is controlled by tuning the energy interactions between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert, and hydrophobic interactions. Extensive Monte Carlo simulations were carried out, and the behavior of the pure components and the excess properties of the mixtures have been studied. The pure components, water (solvent) and solute, have quite similar phase diagrams, presenting gas, low-density liquid, and high-density liquid phases. In the case of the solute, the regions of coexistence are substantially reduced when compared with both the water and the standard ALG models. A numerical procedure has been developed in order to obtain series of results at constant pressure from simulations of the lattice gas model in the grand canonical ensemble. The excess properties of the mixtures, volume and enthalpy as a function of the solute fraction, have been studied for different interaction parameters of the model. Our model is able to reproduce qualitatively well the excess volume and enthalpy for different aqueous solutions. For the hydrophilic case, we show that the model is able to reproduce the excess volume and enthalpy of mixtures of small alcohols and amines. The inert case reproduces the behavior of large alcohols such as propanol, butanol, and pentanol. For the last case (hydrophobic), the excess properties reproduce the behavior of ionic liquids in aqueous solution.
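
    The grand canonical Metropolis machinery underlying such simulations can be illustrated with a bare-bones 2-D lattice gas (a generic sketch, not the ALG model itself: no orientational patches, and every parameter value below is invented for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)

def gc_lattice_gas(mu, eps=-1.0, T=2.0, L=16, sweeps=300):
    """Grand canonical Metropolis for a 2-D lattice gas with
    H = eps * sum_<ij> n_i n_j - mu * sum_i n_i (insertion/deletion moves)."""
    n = rng.integers(0, 2, size=(L, L))
    beta = 1.0 / T
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, 2)
            nb = (n[(i + 1) % L, j] + n[(i - 1) % L, j]
                  + n[i, (j + 1) % L] + n[i, (j - 1) % L])
            dn = 1 - 2 * n[i, j]              # flip the site occupancy
            dE = dn * (eps * nb - mu)         # energy change of the flip
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                n[i, j] += dn
    return n.mean()                           # mean occupancy (density)

rho_low = gc_lattice_gas(mu=-4.0)             # low chemical potential
rho_high = gc_lattice_gas(mu=0.0)             # high chemical potential
```

    Sweeping the chemical potential in this way is the basic ingredient of the constant-pressure series mentioned in the abstract: pressure follows from the grand potential, and the density responds monotonically to mu above the critical temperature.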

  2. Adaptation of the pseudo-metal-oxide-semiconductor field effect transistor technique to ultrathin silicon-on-insulator wafers characterization: Improved set-up, measurement procedure, parameter extraction, and modeling

    NASA Astrophysics Data System (ADS)

    Van Den Daele, W.; Malaquin, C.; Baumel, N.; Kononchuk, O.; Cristoloveanu, S.

    2013-10-01

    This paper revisits and adapts the pseudo-MOSFET (Ψ-MOSFET) characterization technique for advanced fully depleted silicon-on-insulator (FDSOI) wafers. We review the current challenges for the standard Ψ-MOSFET set-up on an ultra-thin body (12 nm) over an ultra-thin buried oxide (25 nm BOX) and propose a novel set-up enabling the technique on FDSOI structures. This novel configuration embeds 4 probes with large tip radius (100-200 μm) and low pressure to avoid oxide damage. Compared with previous 4-point probe measurements, we introduce a simplified and faster methodology together with an adapted Y-function. The models for parameter extraction are revisited and calibrated through systematic measurements of SOI wafers with variable film thickness. We propose an in-depth analysis of the FDSOI structure through comparison of experimental data, TCAD (Technology Computer Aided Design) simulations, and analytical modeling. TCAD simulations are used to unify previously reported thickness-dependent analytical models by analyzing the BOX/substrate potential and the electric field in ultrathin films. Our updated analytical models are used to explain the results and to extract correct electrical parameters such as low-field electron and hole mobility, subthreshold slope, and film/BOX interface trap density.
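
    The Y-function at the heart of such extraction methodologies can be demonstrated on synthetic drain-current data (toy device values of ours, not the paper's measurements). Because Y = I_D/sqrt(g_m) cancels first-order mobility attenuation, a straight-line fit returns the threshold voltage and the gain factor directly:

```python
import numpy as np

# Synthetic linear-region drain current with mobility attenuation theta:
#   I_D = k (V_G - V_T) V_D / (1 + theta (V_G - V_T)),  k = (W/L) C_ox mu0
k, VT, theta, VD = 2e-4, 0.35, 0.5, 0.05   # hypothetical device values
VG = np.linspace(0.6, 2.0, 50)
ID = k * (VG - VT) * VD / (1 + theta * (VG - VT))

gm = np.gradient(ID, VG)                   # transconductance dI_D/dV_G
Y = ID / np.sqrt(gm)                       # Y = sqrt(k V_D) * (V_G - V_T)

slope, intercept = np.polyfit(VG, Y, 1)    # linear fit of the Y-function
VT_extracted = -intercept / slope          # x-intercept gives V_T
k_extracted = slope**2 / VD                # slope gives the gain factor k
```

    The attenuation factor theta drops out algebraically, which is exactly why Y-function fits are preferred over direct I-V fits for low-field mobility extraction.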

  3. Adaptive Development

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The goal of this research is to develop and demonstrate innovative adaptive seal technologies that can lead to dramatic improvements in engine performance, life, range, and emissions, and enhance operability for next generation gas turbine engines. This work is concentrated on the development of self-adaptive clearance control systems for gas turbine engines. Researchers have targeted the high-pressure turbine (HPT) blade tip seal location for the following reasons: Current active clearance control (ACC) systems (e.g., thermal case-cooling schemes) cannot respond to blade tip clearance changes due to mechanical, thermal, and aerodynamic loads. As such, they are prone to wear due to the required tight running clearances during operation. Blade tip seal wear (increased clearances) reduces engine efficiency, performance, and service life. Adaptive sealing technology research has inherent impact on all envisioned 21st century propulsion systems (e.g., distributed vectored, hybrid and electric drive propulsion concepts).

  4. Organization of Distributed Adaptive Learning

    ERIC Educational Resources Information Center

    Vengerov, Alexander

    2009-01-01

    The growing sensitivity of various systems and parts of industry, society, and even everyday individual life leads to the increased volume of changes and needs for adaptation and learning. This creates a new situation where learning from being purely academic knowledge transfer procedure is becoming a ubiquitous always-on essential part of all…

  5. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that computes local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.

  6. School Solutions for Cyberbullying

    ERIC Educational Resources Information Center

    Sutton, Susan

    2009-01-01

    This article offers solutions and steps to prevent cyberbullying. Schools can improve their ability to handle cyberbullying by educating staff members, students, and parents and by implementing rules and procedures for how to handle possible incidents. Among the steps is to include a section about cyberbullying and expectations in the student…

  7. Adaptive Units of Learning and Educational Videogames

    ERIC Educational Resources Information Center

    Moreno-Ger, Pablo; Thomas, Pilar Sancho; Martinez-Ortiz, Ivan; Sierra, Jose Luis; Fernandez-Manjon, Baltasar

    2007-01-01

    In this paper, we propose three different ways of using IMS Learning Design to support online adaptive learning modules that include educational videogames. The first approach relies on IMS LD to support adaptation procedures where the educational games are considered as Learning Objects. These games can be included instead of traditional content…

  8. Connector adapter

    NASA Technical Reports Server (NTRS)

    Hacker, Scott C. (Inventor); Dean, Richard J. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)

    2007-01-01

    An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.

  9. Adaptive VFH

    NASA Astrophysics Data System (ADS)

    Odriozola, Iñigo; Lazkano, Elena; Sierra, Basi

    2011-10-01

    This paper investigates the improvement of the Vector Field Histogram (VFH) local planning algorithm for mobile robot systems. The Adaptive Vector Field Histogram (AVFH) algorithm has been developed to improve the effectiveness of the traditional VFH path planning algorithm by overcoming the side effects of using static parameters. The new algorithm permits the adaptation of planning parameters for the different types of areas in an environment. Genetic Algorithms are used to fit the best VFH parameters to each type of sector and, afterwards, every section in the map is labelled with the sector type that best represents it. The Player/Stage simulation platform was chosen for testing and for validating the new algorithm's adequacy. Even though there is still much work to be carried out, the developed algorithm showed good navigation properties and turned out to be smoother and more effective than the traditional VFH algorithm.
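
    The core of VFH, building a polar obstacle-density histogram and steering into the widest free valley, can be sketched as follows (a deliberately simplified sketch of ours; the real algorithm adds smoothing, hysteresis, and the per-sector parameters that AVFH adapts):

```python
import numpy as np

def vfh_steering(angles, ranges, n_sectors=36, r_max=3.0, threshold=0.5):
    """Pick a steering direction from a polar obstacle-density histogram.

    Closer obstacles contribute more density; the robot steers toward the
    center of the widest contiguous run of sectors below the threshold.
    """
    hist = np.zeros(n_sectors)
    sector = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    density = np.clip((r_max - ranges) / r_max, 0.0, 1.0)  # 0 free, 1 blocked
    np.add.at(hist, sector, density)

    free = hist < threshold
    # widest contiguous run of free sectors (no wraparound, for brevity)
    best_len, best_start, run_start = 0, None, None
    for i, is_free in enumerate(np.append(free, False)):
        if is_free and run_start is None:
            run_start = i
        elif not is_free and run_start is not None:
            if i - run_start > best_len:
                best_len, best_start = i - run_start, run_start
            run_start = None
    if best_start is None:
        return None                                  # fully blocked
    mid = best_start + best_len / 2.0
    return mid / n_sectors * 2 * np.pi - np.pi       # sector center -> angle

# A wall of close obstacles straight ahead, free space elsewhere
angles = np.linspace(-np.pi, np.pi, 360, endpoint=False)
ranges = np.full(360, 3.0)
ranges[np.abs(angles) < 0.3] = 0.5                   # obstacle ahead
heading = vfh_steering(angles, ranges)
```

    AVFH's contribution is precisely that `n_sectors`, `threshold`, and similar constants stop being global: a genetic algorithm fits them per sector type.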

  10. Adaptive sampler

    DOEpatents

    Watson, B.L.; Aeby, I.

    1980-08-26

    An adaptive data compression device for compressing data having variable frequency content is described. The device includes a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable-rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable-rate memory clock.

  11. Adaptive sampler

    DOEpatents

    Watson, Bobby L.; Aeby, Ian

    1982-01-01

    An adaptive data compression device for compressing data having variable frequency content, including a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
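
    The claimed behavior, clocking data into memory faster when its frequency content is rich, can be mimicked in software (an illustrative sketch only; the patent describes hardware digital filters, for which the block FFT here is a stand-in, and every parameter is our own):

```python
import numpy as np

def adaptive_compress(signal, fs, block=256, f_cut=50.0, keep_ratio=0.1):
    """Store each block at a rate matched to its frequency content.

    Blocks whose spectral energy above f_cut exceeds keep_ratio of the
    total are stored at full rate; quiet blocks are decimated 8:1, mimicking
    a variable-rate memory clock.
    """
    stored = []
    for i in range(0, len(signal) - block + 1, block):
        b = signal[i:i + block]
        spec = np.abs(np.fft.rfft(b))**2
        freqs = np.fft.rfftfreq(block, 1.0 / fs)
        high = spec[freqs > f_cut].sum() / (spec.sum() + 1e-30)
        step = 1 if high > keep_ratio else 8   # variable "clock rate"
        stored.append((i, step, b[::step].copy()))
    return stored

fs = 1000.0
t = np.arange(2048) / fs
sig = np.sin(2 * np.pi * 5 * t)                        # slow background
sig[1024:1280] += np.sin(2 * np.pi * 200 * t[1024:1280])  # fast transient

blocks = adaptive_compress(sig, fs)
total = sum(len(b) for _, _, b in blocks)              # stored sample count
```

    Only the one block containing the 200 Hz transient is kept at full rate; the seven quiet blocks shrink 8:1, so 2048 samples compress to 480.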

  12. Unstructured adaptive mesh computations of rotorcraft high-speed impulsive noise

    NASA Technical Reports Server (NTRS)

    Strawn, Roger; Garceau, Michael; Biswas, Rupak

    1993-01-01

    A new method is developed for modeling helicopter high-speed impulsive (HSI) noise. The aerodynamics and acoustics near the rotor blade tip are computed by solving the Euler equations on an unstructured grid. A stationary Kirchhoff surface integral is then used to propagate these acoustic signals to the far field. The near-field Euler solver uses a solution-adaptive grid scheme to improve the resolution of the acoustic signal. Grid points are locally added and/or deleted from the mesh at each adaptive step. An important part of this procedure is the choice of an appropriate error indicator. The error indicator is computed from the flow field solution and determines the regions for mesh coarsening and refinement. Computed results for HSI noise compare favorably with experimental data for three different hovering rotor cases.
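
    The indicator-driven refinement step can be caricatured in one dimension (our own simplification: a gradient-jump indicator and cell bisection without coarsening, not the paper's Euler-solver indicator):

```python
import numpy as np

def refine_flags(x, u, frac=0.3):
    """Flag cells whose gradient-jump error indicator is in the top
    `frac` fraction of the indicator's range."""
    indicator = np.abs(np.diff(u))                 # solution jump per cell
    cutoff = indicator.min() + frac * (indicator.max() - indicator.min())
    return indicator > cutoff

def adapt(x, u_of_x, cycles=3):
    """Repeatedly evaluate the solution, flag cells, and bisect them."""
    for _ in range(cycles):
        u = u_of_x(x)
        flags = refine_flags(x, u)
        mids = 0.5 * (x[:-1] + x[1:])[flags]       # bisect flagged cells
        x = np.sort(np.concatenate([x, mids]))
    return x

# A steep front near x = 0.5 (acoustic-pulse stand-in) attracts the points
u_of_x = lambda x: np.tanh(50 * (x - 0.5))
x0 = np.linspace(0.0, 1.0, 21)
x = adapt(x0, u_of_x)
near = np.sum(np.abs(x - 0.5) < 0.1) / x.size      # fraction of points at front
```

    Most of the added points cluster inside the front, which is the behavior the paper relies on to keep the acoustic signal resolved between the blade tip and the Kirchhoff surface.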

  13. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  14. Improved procedures for in vitro skin irritation testing of sticky and greasy natural botanicals.

    PubMed

    Molinari, J; Eskes, C; Andres, E; Remoué, N; Sá-Rocha, V M; Hurtado, S P; Barrichello, C

    2013-02-01

    Skin irritation evaluation is an important endpoint for the safety assessment of cosmetic ingredients required by various regulatory authorities for notification and/or import of test substances. The present study was undertaken to investigate possible protocol adaptations of the currently validated in vitro skin irritation test methods based on reconstructed human epidermis (RhE) for the testing of plant extracts and natural botanicals. Due to their specific physico-chemical properties, such as lipophilicity, a sticky or buttery texture, and waxy/creamy foam characteristics, normal washing procedures can lead to incomplete removal of these materials and/or to mechanical damage to the tissues, resulting in an impaired prediction of the true skin irritation potential of the materials. For this reason, different refined washing procedures were evaluated for their ability to ensure appropriate removal of greasy and sticky substances while not altering the normal responses of the validated RhE test method. Amongst the different procedures evaluated, the use of a 0.1% SDS solution in PBS to remove the sticky and greasy test material prior to the normal washing procedures was found to be the most suitable adaptation: it ensured efficient removal of greasy and sticky in-house controls without affecting the results of the negative control. The predictive capacity of the refined 0.1% SDS washing procedure was investigated using twelve oily and viscous compounds with known skin irritation effects supported by raw and/or peer-reviewed in vivo data. The normal washing procedure resulted in 8 out of 10 correctly predicted compounds, as compared to 9 out of 10 with the refined washing procedures, showing an increase in the predictive ability of the assay. The refined washing procedure correctly identified all in vivo skin irritant materials, showing the same sensitivity as the normal washing procedures, and further increased the specificity of the assay from 5 to 6 correct…

  15. [Adaptive clinical study methodologies in drug development].

    PubMed

    Antal, János

    2015-11-29

    The evolution of drug development in human clinical-phase studies prompts an overview of those technologies and procedures which are labelled adaptive clinical trials. The most relevant procedural and operational aspects are discussed in this overview from a clinical and methodological point of view.

  16. Evaluating Content Alignment in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wise, Steven L.; Kingsbury, G. Gage; Webb, Norman L.

    2015-01-01

    The alignment between a test and the content domain it measures represents key evidence for the validation of test score inferences. Although procedures have been developed for evaluating the content alignment of linear tests, these procedures are not readily applicable to computerized adaptive tests (CATs), which require large item pools and do…

  17. Adaptive sequential testing for multiple comparisons.

    PubMed

    Gao, Ping; Liu, Lingyun; Mehta, Cyrus

    2014-01-01

    We propose a Markov process theory-based adaptive sequential testing procedure for multiple comparisons. The procedure can be used for confirmatory trials involving multiple comparisons, including dose selection or population enrichment. Dose or subpopulation selection and sample size modification can be made at any interim analysis. Type I error control is exact. PMID:24926848

  18. Ritz Procedure for COSMIC/NASTRAN

    NASA Technical Reports Server (NTRS)

    Citerley, R. L.; Woytowitz, P. J.

    1985-01-01

    An analysis procedure has been developed and incorporated into COSMIC/NASTRAN that permits large dynamic degree of freedom models to be processed accurately with little or no extra effort required by the user. The method employs existing capabilities without the need for approximate Guyan reduction techniques. Comparisons to existing solution procedures presently within NASTRAN are discussed.

  19. Multiple Comparison Procedures when Population Variances Differ.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Lee, JaeShin

    A review of the literature on multiple comparison procedures suggests several alternative approaches for comparing means when population variances differ. These include: (1) the approach of P. A. Games and J. F. Howell (1976); (2) C. W. Dunnett's C confidence interval (1980); and (3) Dunnett's T3 solution (1980). These procedures control the…

  20. The benefits of using customized procedure packs.

    PubMed

    Baines, R; Colquhoun, G; Jones, N; Bateman, R

    2001-01-01

    Discrete item purchasing is the traditional approach for hospitals to obtain consumable supplies for theatre procedures. Although most items are relatively low cost, the management and co-ordination of the supply chain, raising orders, controlling stock, picking and delivering to each operating theatre can be complex and costly. Customized procedure packs provide a solution. PMID:11892113

  1. An Adaptive TVD Limiter

    NASA Astrophysics Data System (ADS)

    Jeng, Yih Nen; Payne, Uon Jan

    1995-05-01

    An adaptive TVD limiter, based on a limiter approximating the upper boundary of the TVD range and on that of the third-order upwind TVD scheme, is developed in this work. The limiter switches to the compressive limiter near a discontinuity, to the third-order TVD scheme's limiter in smooth regions, and to a weighted average of the two in the transition region between smooth and high-gradient solutions. Numerical experiments show that the proposed scheme works very well for one-dimensional scalar equation problems but becomes less effective in one- and two-dimensional Euler equation problems. Further study is required for the two-dimensional scalar equation problems.
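
    The switching idea can be sketched as follows (a hedged reconstruction: the paper's exact blending and sensor differ; here superbee stands in for the compressive limiter and the kappa = 1/3 Koren-type limiter for the third-order one, with a curvature sensor of our own choosing):

```python
import numpy as np

def superbee(r):
    """Compressive limiter: upper boundary of the second-order TVD region."""
    return np.maximum(0.0, np.maximum(np.minimum(2 * r, 1.0), np.minimum(r, 2.0)))

def third_order(r):
    """Limiter of the third-order upwind-biased (kappa = 1/3) scheme,
    clipped to the TVD region (Koren-type)."""
    return np.maximum(0.0, np.minimum(np.minimum(2 * r, (1.0 + 2.0 * r) / 3.0), 2.0))

def adaptive_limiter(r, smooth):
    """smooth -> 1: third-order limiter; smooth -> 0 (near a discontinuity):
    compressive limiter; in between: weighted average of the two."""
    w = np.clip(smooth, 0.0, 1.0)
    return w * third_order(r) + (1.0 - w) * superbee(r)

def limited_slopes(u, eps=1e-12):
    """Apply the adaptive limiter along a 1-D profile using a simple
    curvature-based smoothness sensor."""
    du = np.diff(u)
    r = du[:-1] / (du[1:] + eps)                 # consecutive gradient ratio
    curv = np.abs(du[1:] - du[:-1]) / (np.abs(du[1:]) + np.abs(du[:-1]) + eps)
    return adaptive_limiter(r, 1.0 - curv)
```

    For r = 1 (locally linear data) both branches return 1, so the blend is seamless in smooth regions and only differs where the gradient ratio departs from unity.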

  2. Reentry vehicle adaptive telemetry

    SciTech Connect

    Kidner, R.E.

    1993-09-01

    In RF telemetry (TM), the allowable RF bandwidth limits the amount of data in the telemetered data set. Typically the data set is less than ideal to accommodate all aspects of a test. In the case of diagnostic data, the compromise often leaves insufficient diagnostic data when problems occur. As a solution, intelligence was designed into a TM, allowing it to adapt to changing data requirements. To minimize the computational requirements for an intelligent TM, a fuzzy logic inference engine was developed. This inference engine was simulated on a PC and then loaded into a TM hardware package for final testing.

  3. Reentry vehicle adaptive telemetry

    NASA Astrophysics Data System (ADS)

    Kidner, R. E.

    1993-09-01

    In RF telemetry (TM), the allowable RF bandwidth limits the amount of data in the telemetered data set. Typically the data set is less than ideal to accommodate all aspects of a test. In the case of diagnostic data, the compromise often leaves insufficient diagnostic data when problems occur. As a solution, intelligence was designed into a TM, allowing it to adapt to changing data requirements. To minimize the computational requirements for an intelligent TM, a fuzzy logic inference engine was developed. This inference engine was simulated on a PC and then loaded into a TM hardware package for final testing.
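
    A fuzzy inference engine of the kind described can be tiny. The sketch below (all membership functions, rules, and units are invented for illustration, not taken from the telemetry design) shows the fuzzify / evaluate-rules / defuzzify pipeline that keeps the computational load low:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_priority(vibration, temperature):
    """Tiny Mamdani-style engine: decide how much telemetry bandwidth to
    give a diagnostic channel (hypothetical sets and rules, arbitrary units)."""
    # fuzzify the crisp inputs
    vib_high = tri(vibration, 0.3, 1.0, 1.7)
    vib_low = tri(vibration, -0.7, 0.0, 0.7)
    temp_high = tri(temperature, 50.0, 100.0, 150.0)
    temp_low = tri(temperature, -50.0, 0.0, 100.0)

    # rule strengths (min = AND, max = OR)
    r_urgent = max(vib_high, temp_high)   # either alarm -> high priority
    r_idle = min(vib_low, temp_low)       # all quiet -> low priority

    # defuzzify: weighted average of singleton outputs (0 = idle, 1 = urgent)
    total = r_urgent + r_idle
    return 0.5 if total == 0 else r_urgent / total
```

    Only comparisons, a handful of multiplies and one divide per decision: that is the property that makes fuzzy inference attractive for a resource-limited telemetry package.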

  4. Adaptation strategies for high order discontinuous Galerkin methods based on Tau-estimation

    NASA Astrophysics Data System (ADS)

    Kompenhans, Moritz; Rubio, Gonzalo; Ferrer, Esteban; Valero, Eusebio

    2016-02-01

    In this paper, three p-adaptation strategies based on the minimization of the truncation error are presented for high order discontinuous Galerkin methods. The truncation error is approximated by means of a τ-estimation procedure and enables the identification of mesh regions that require adaptation. Three adaptation strategies are developed, termed a posteriori, quasi-a priori and quasi-a priori corrected. All strategies require fine solutions, which are obtained by enriching the polynomial order; but while the former needs time-converged solutions, the last two rely on non-converged solutions, leading to faster computations. In addition, the high order method permits spatial decoupling of the estimated errors and enables anisotropic p-adaptation. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. It is shown that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain a speedup of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
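
    The driving idea, estimating the local error from an enriched representation and raising the polynomial order only where needed, can be sketched per element (a crude stand-in of ours using Legendre fits against a fine sampling; the paper's τ-estimation works on the DG residual instead):

```python
import numpy as np
from numpy.polynomial import legendre

def element_interp_error(f, a, b, p):
    """Estimate the error on element [a, b] at polynomial order p by
    comparing a degree-p Legendre least-squares fit against a much finer
    sampling of f (the stand-in for the enriched 'fine' solution)."""
    x = np.linspace(a, b, 50)
    xi = 2 * (x - a) / (b - a) - 1                  # map element to [-1, 1]
    coef = legendre.legfit(xi, f(x), p)
    return np.max(np.abs(legendre.legval(xi, coef) - f(x)))

def p_adapt(f, edges, tol, p0=2, p_max=12):
    """Raise the local order until the estimated error drops below tol."""
    orders = []
    for a, b in zip(edges[:-1], edges[1:]):
        p = p0
        while p < p_max and element_interp_error(f, a, b, p) > tol:
            p += 1
        orders.append(p)
    return orders

# A boundary-layer-like profile needs high order only near x = 0
f = lambda x: np.tanh(20 * x)
orders = p_adapt(f, edges=np.linspace(0.0, 1.0, 5), tol=1e-3)
```

    The first element, which contains the steep layer, ends up with a much higher order than the nearly constant trailing elements: the anisotropic analogue of this per-element decision is what produces the reported speedups.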

  5. Pipe Cleaning Operating Procedures

    SciTech Connect

    Clark, D.; Wu, J.; /Fermilab

    1991-01-24

    This cleaning procedure outlines the steps involved in cleaning the high purity argon lines associated with the DO calorimeters. The procedure is broken down into 7 cycles: system setup, initial flush, wash, first rinse, second rinse, final rinse and drying. The system setup involves preparing the pump cart, line to be cleaned, distilled water, and interconnecting hoses and fittings. The initial flush is an off-line flush of the pump cart and its plumbing in order to preclude contaminating the line. The wash cycle circulates the detergent solution (Micro) at 180 degrees Fahrenheit through the line to be cleaned. The first rinse is then intended to rid the line of the majority of detergent and only needs to run for 30 minutes and at ambient temperature. The second rinse (if necessary) should eliminate the remaining soap residue. The final rinse is then intended to be a check that there is no remaining soap or other foreign particles in the line, particularly metal 'chips.' The final rinse should be run at 180 degrees Fahrenheit for at least 90 minutes. The filters should be changed after each cycle, paying particular attention to the wash cycle and the final rinse cycle return filters. These filters, which should be bagged and labeled, prove that the pipeline is clean. Only distilled water should be used for all cycles, especially rinsing. The level in the tank need not be excessive, merely enough to cover the heater float switch. The final rinse, however, may require a full 50 gallons. Note that most of the details of the procedure are included in the initial flush description. This section should be referred to if problems arise in the wash or rinse cycles.

  6. Adaptive sampling for noisy problems

    SciTech Connect

    Cantu-Paz, E

    2004-03-26

    The usual approach to deal with noise present in many real-world optimization problems is to take an arbitrary number of samples of the objective function and use the sample average as an estimate of the true objective value. The number of samples is typically chosen arbitrarily and remains constant for the entire optimization process. This paper studies an adaptive sampling technique that varies the number of samples based on the uncertainty of deciding between two individuals. Experiments demonstrate the effect of adaptive sampling on the final solution quality reached by a genetic algorithm and the computational cost required to find the solution. The results suggest that the adaptive technique can effectively eliminate the need to set the sample size a priori, but in many cases it requires high computational costs.
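    A minimal sketch of this kind of adaptive sampling, under an assumed sequential rule: keep sampling both candidates until the difference of sample means exceeds a z-scaled standard error, or a budget is exhausted. The noisy objective, z-value, and budget below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_fitness(x):
        # true objective f(x) = -x^2, observed with unit Gaussian noise (illustrative)
        return -x**2 + rng.normal(0.0, 1.0)

    def adaptive_compare(a, b, z=1.96, n0=5, n_max=500):
        """Sample two candidates until the difference of sample means is
        statistically decidable (|diff| > z * standard error) or the budget runs out."""
        sa = [noisy_fitness(a) for _ in range(n0)]
        sb = [noisy_fitness(b) for _ in range(n0)]
        while len(sa) < n_max:
            da, db = np.array(sa), np.array(sb)
            se = np.sqrt(da.var(ddof=1)/len(da) + db.var(ddof=1)/len(db))
            if abs(da.mean() - db.mean()) > z*se:
                break                          # decision is confident enough
            sa.append(noisy_fitness(a))        # still uncertain: sample once more
            sb.append(noisy_fitness(b))
        return (a if np.mean(sa) > np.mean(sb) else b), len(sa)

    # clearly different candidates are decided with few samples;
    # similar candidates would consume more of the budget
    winner, n_used = adaptive_compare(0.1, 2.0)
    ```

    The sample size is thus driven by the uncertainty of each pairwise decision rather than fixed a priori, mirroring the trade-off the abstract describes.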

  7. Collected radiochemical and geochemical procedures

    SciTech Connect

    Kleinberg, J

    1990-05-01

    This revision of LA-1721, 4th Ed., Collected Radiochemical Procedures, reflects the activities of two groups in the Isotope and Nuclear Chemistry Division of the Los Alamos National Laboratory: INC-11, Nuclear and radiochemistry; and INC-7, Isotope Geochemistry. The procedures fall into five categories: I. Separation of Radionuclides from Uranium, Fission-Product Solutions, and Nuclear Debris; II. Separation of Products from Irradiated Targets; III. Preparation of Samples for Mass Spectrometric Analysis; IV. Dissolution Procedures; and V. Geochemical Procedures. With one exception, the first category of procedures is ordered by the positions of the elements in the Periodic Table, with separate parts on the Representative Elements (the A groups); the d-Transition Elements (the B groups and the Transition Triads); and the Lanthanides (Rare Earths) and Actinides (the 4f- and 5f-Transition Elements). The members of Group IIIB-- scandium, yttrium, and lanthanum--are included with the lanthanides, elements they resemble closely in chemistry and with which they occur in nature. The procedures dealing with the isolation of products from irradiated targets are arranged by target element.

  8. Adaptive Hybrid Mesh Refinement for Multiphysics Applications

    SciTech Connect

    Khamayseh, Ahmed K; de Almeida, Valmor F

    2007-01-01

    The accuracy and convergence of computational solutions of mesh-based methods are strongly dependent on the quality of the mesh used. We have developed methods for optimizing meshes that are composed of elements of arbitrary polygonal and polyhedral type. We present in this research the development of r-h hybrid adaptive meshing technology tailored to application areas relevant to multi-physics modeling and simulation. Solution-based adaptation methods are used to reposition mesh nodes (r-adaptation) or to refine the mesh cells (h-adaptation) to minimize solution error. The numerical methods perform either r-adaptive mesh optimization or h-adaptive mesh refinement on the initial isotropic or anisotropic meshes to maximize the equidistribution of a weighted geometric and/or solution function. We have successfully introduced r-h adaptivity to a least-squares method with spherical harmonics basis functions for the solution of the spherical shallow atmosphere model used in climate forecasting. In addition, this technology covers a wide range of disciplines in the computational sciences, most notably time-dependent multi-physics, multi-scale modeling and simulation.

  9. Analyzing Hedges in Verbal Communication: An Adaptation-Based Approach

    ERIC Educational Resources Information Center

    Wang, Yuling

    2010-01-01

    Based on Adaptation Theory, the article analyzes the production process of hedges. This process consists of continuously making choices in linguistic forms and communicative strategies, choices made to adapt to the contextual correlates. Moreover, the adaptation process is dynamic, intentional, and bidirectional.

  10. Structured programming: Principles, notation, procedure

    NASA Technical Reports Server (NTRS)

    JOST

    1978-01-01

    Structured programs are best represented using a notation which gives a clear representation of the block encapsulation. In this report, a set of symbols is suggested that can be used until binding directives are published. Structured programming also permits a new procedure for design and testing. Programs can be designed top down; that is, they can start at the highest program plane and penetrate to the lowest plane by step-wise refinement. The testing methodology is adapted to this procedure: first, the highest program plane is tested, with the still-unfinished programs of the next lower plane represented by so-called dummies; these are gradually replaced by the real programs.

  11. Adaptive evolution of molecular phenotypes

    NASA Astrophysics Data System (ADS)

    Held, Torsten; Nourmohammad, Armita; Lässig, Michael

    2014-09-01

    Molecular phenotypes link genomic information with organismic functions, fitness, and evolution. Quantitative traits are complex phenotypes that depend on multiple genomic loci. In this paper, we study the adaptive evolution of a quantitative trait under time-dependent selection, which arises from environmental changes or through fitness interactions with other co-evolving phenotypes. We analyze a model of trait evolution under mutations and genetic drift in a single-peak fitness seascape. The fitness peak performs a constrained random walk in the trait amplitude, which determines the time-dependent trait optimum in a given population. We derive analytical expressions for the distribution of the time-dependent trait divergence between populations and of the trait diversity within populations. Based on this solution, we develop a method to infer adaptive evolution of quantitative traits. Specifically, we show that the ratio of the average trait divergence and the diversity is a universal function of evolutionary time, which predicts the stabilizing strength and the driving rate of the fitness seascape. From an information-theoretic point of view, this function measures the macro-evolutionary entropy in a population ensemble, which determines the predictability of the evolutionary process. Our solution also quantifies two key characteristics of adapting populations: the cumulative fitness flux, which measures the total amount of adaptation, and the adaptive load, which is the fitness cost due to a population's lag behind the fitness peak.

  12. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of the fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the
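    The flag-and-refine cycle at the heart of AMR can be illustrated in one dimension: estimate where the solution varies rapidly, then bisect only those cells. This is a toy sketch, not the structured/unstructured AMR machinery discussed in the record; the tanh profile, the jump tolerance, and single-level bisection are all assumptions.

    ```python
    import numpy as np

    # coarse 1D grid with a sharp front: refine only where the solution varies fast
    x = np.linspace(0.0, 1.0, 41)
    u = np.tanh(50*(x - 0.5))                 # smooth everywhere except near x = 0.5

    # flag cells whose solution jump between neighboring points exceeds a tolerance
    jump = np.abs(np.diff(u))
    flagged = jump > 0.1

    # h-refinement: bisect flagged cells by inserting their midpoints
    midpoints = 0.5*(x[:-1] + x[1:])[flagged]
    x_refined = np.sort(np.concatenate([x, midpoints]))
    ```

    Only a handful of cells around the front are refined, which is precisely the economy over a globally fine grid that the passage describes; real AMR codes repeat this cycle through several levels and manage the resulting grid hierarchy dynamically.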

  13. 20 CFR 655.1293 - Special procedures.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... FOREIGN WORKERS IN THE UNITED STATES Labor Certification Process for Temporary Agricultural Employment in... sheepherders in the Western States (and adaptation of such procedures to occupations in the range production of... employers for the certification of employment of nonimmigrant workers in agricultural......

  14. 20 CFR 655.1293 - Special procedures.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... FOREIGN WORKERS IN THE UNITED STATES Labor Certification Process for Temporary Agricultural Employment in... sheepherders in the Western States (and adaptation of such procedures to occupations in the range production of... employers for the certification of employment of nonimmigrant workers in agricultural......

  15. 20 CFR 655.102 - Special procedures.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... flexibility in carrying out the Secretary's responsibilities under the Immigration and Nationality Act (INA... FOREIGN WORKERS IN THE UNITED STATES Labor Certification Process for Temporary Agricultural Employment in... Western States (and adaptation of such procedures to occupations in the range production of......

  16. 20 CFR 655.102 - Special procedures.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... flexibility in carrying out the Secretary's responsibilities under the Immigration and Nationality Act (INA... FOREIGN WORKERS IN THE UNITED STATES Labor Certification Process for Temporary Agricultural Employment in... Western States (and adaptation of such procedures to occupations in the range production of......

  17. Perceptual adaptation helps us identify faces.

    PubMed

    Rhodes, Gillian; Watson, Tamara L; Jeffery, Linda; Clifford, Colin W G

    2010-05-12

    Adaptation is a fundamental property of perceptual processing. In low-level vision, it can calibrate perception to current inputs, increasing coding efficiency and enhancing discrimination around the adapted level. Adaptation also occurs in high-level vision, as illustrated by face aftereffects. However, the functional consequences of face adaptation remain uncertain. Here we investigated whether adaptation can enhance identification performance for faces from an adapted, relative to an unadapted, population. Five minutes of adaptation to an average Asian or Caucasian face reduced identification thresholds for faces from the adapted relative to the unadapted race. We replicated this interaction in two studies, using different participants, faces and adapting procedures. These results suggest that adaptation has a functional role in high-level, as well as low-level, visual processing. We suggest that adaptation to the average of a population may reduce responses to common properties shared by all members of the population, effectively orthogonalizing identity vectors in a multi-dimensional face space and freeing neural resources to code distinctive properties, which are useful for identification.

  18. Adaptive wavelets and relativistic magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo

    2016-03-01

    We present a method for integrating the relativistic magnetohydrodynamics equations using iterated interpolating wavelets. These provide an adaptive implementation for simulations in multiple dimensions. The wavelet coefficients provide a measure of the local approximation error for the solution; they place collocation points in locations naturally adapted to the flow while providing the expected conservation. We present demanding 1D and 2D tests, including the Kelvin-Helmholtz instability and the Rayleigh-Taylor instability. Finally, we consider an outgoing blast wave that models a GRB outflow.
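    How interpolating-wavelet coefficients measure local approximation error can be sketched with one level of the transform: predict odd-indexed samples by linear interpolation from their even-indexed neighbors, and treat the prediction residual as the detail coefficient. The profile, the single transform level, and the threshold below are illustrative assumptions, not the authors' scheme.

    ```python
    import numpy as np

    x = np.linspace(0.0, 1.0, 257)
    u = np.tanh(60*(x - 0.5))                 # localized steep feature

    # one level of an interpolating-wavelet transform: predict odd samples
    # from their even neighbors; the residual is the detail coefficient
    even, odd = u[::2], u[1::2]
    prediction = 0.5*(even[:-1] + even[1:])   # linear interpolating prediction
    details = odd - prediction

    # retain collocation points only where the local interpolation error is large
    eps = 1e-3
    keep = np.abs(details) > eps
    ```

    Points are kept only near the steep feature, so the collocation set adapts naturally to the flow, which is the mechanism the abstract relies on for adaptivity.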

  19. Linearly-Constrained Adaptive Signal Processing Methods

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.

    1988-01-01

    In adaptive least-squares estimation problems, a desired signal d(n) is estimated using a linear combination of L observation samples x1(n), x2(n), ..., xL(n), denoted by the vector X(n). The estimate is formed as the inner product of this vector with a corresponding L-dimensional weight vector W. One particular weight vector of interest is Wopt, which minimizes the mean-square difference between d(n) and the estimate. In this context, the term 'mean-square difference' is a quadratic measure such as a statistical expectation or time average. The specific value of W which achieves the minimum is given by the product of the inverse data covariance matrix and the cross-correlation between the data vector and the desired signal. The latter is often referred to as the P-vector. For those cases in which time samples of both the desired and data vector signals are available, a variety of adaptive methods have been proposed which will guarantee that an iterative weight vector Wa(n) converges (in some sense) to the optimal solution. Two which have been extensively studied are the recursive least-squares (RLS) method and the LMS gradient approximation approach. There are several problems of interest in the communication and radar environment in which the optimal least-squares weight set is of interest and in which time samples of the desired signal are not available. Examples can be found in array processing, in which only the direction of arrival of the desired signal is known, and in single-channel filtering, where the spectrum of the desired response is known a priori. One approach to these problems which has been suggested is the P-vector algorithm, which is an LMS-like approximate gradient method. Although it is easy to derive the mean and variance of the weights which result with this algorithm, there has never been an identification of the corresponding underlying error surface which the procedure searches. The purpose of this paper is to suggest an alternative
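    The LMS gradient-approximation update mentioned above can be sketched in a simple system-identification setting; the filter length, step size, and noise level are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L = 4
    w_true = np.array([0.5, -0.3, 0.2, 0.1])      # unknown system (illustrative)

    w = np.zeros(L)                                # adaptive weight vector W_a(n)
    mu = 0.01                                      # LMS step size
    x = rng.normal(size=5000)                      # white observation signal

    for n in range(L, len(x)):
        X = x[n-L:n][::-1]                         # data vector X(n)
        d = w_true @ X + 0.01*rng.normal()         # desired signal d(n)
        e = d - w @ X                              # estimation error
        w += mu * e * X                            # LMS stochastic-gradient update
    ```

    For white input the weights converge in the mean toward the Wiener solution (here the known w_true), with a steady-state misadjustment controlled by the step size mu.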

  20. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2003-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65-deg delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  1. Toward unsupervised adaptation of LDA for brain-computer interfaces.

    PubMed

    Vidaurre, C; Kawanabe, M; von Bünau, P; Blankertz, B; Müller, K R

    2011-03-01

    There is a step of significant difficulty experienced by brain-computer interface (BCI) users when going from the calibration recording to the feedback application. This effect has been previously studied and a supervised adaptation solution has been proposed. In this paper, we suggest a simple unsupervised adaptation method of the linear discriminant analysis (LDA) classifier that effectively solves this problem by counteracting the harmful effect of nonclass-related nonstationarities in electroencephalography (EEG) during BCI sessions performed with motor imagery tasks. For this, we first introduce three types of adaptation procedures and investigate them in an offline study with 19 datasets. Then, we select one of the proposed methods and analyze it further. The chosen classifier is offline tested in data from 80 healthy users and four high spinal cord injury patients. Finally, for the first time in BCI literature, we apply this unsupervised classifier in online experiments. Additionally, we show that its performance is significantly better than the state-of-the-art supervised approach.
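    One simple unsupervised adaptation in the spirit described above is to re-center the LDA bias using a running, label-free estimate of the global feature mean, which counteracts a nonclass-related shift between calibration and feedback. The synthetic two-class data, the shift, and the learning rate below are assumptions for illustration; this is a sketch, not the authors' exact estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # calibration data: two classes with shared (identity-like) covariance
    mu0, mu1 = np.array([-1.5, 0.0]), np.array([1.5, 0.0])
    X0 = rng.normal(size=(200, 2)) + mu0
    X1 = rng.normal(size=(200, 2)) + mu1

    Sw = np.cov(np.vstack([X0 - X0.mean(0), X1 - X1.mean(0)]).T)
    w = np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))     # LDA weight vector
    mean_est = 0.5*(X0.mean(0) + X1.mean(0))             # pooled mean at calibration
    b = -w @ mean_est

    # feedback session: a nonclass-related shift of the feature distribution
    shift = np.array([1.0, 0.5])
    eta = 0.02                                           # adaptation rate
    for t in range(300):
        label = t % 2                                    # label never used below
        xt = rng.normal(size=2) + (mu1 if label else mu0) + shift
        mean_est = (1 - eta)*mean_est + eta*xt           # unsupervised running mean
        b = -w @ mean_est                                # re-center the bias only

    # accuracy on shifted test data with the adapted bias
    Xtest = np.vstack([rng.normal(size=(200, 2)) + mu0 + shift,
                       rng.normal(size=(200, 2)) + mu1 + shift])
    ytest = np.r_[np.zeros(200), np.ones(200)]
    acc = np.mean((Xtest @ w + b > 0) == (ytest == 1))
    ```

    Only the bias is adapted and no labels are consumed, which keeps the update cheap and safe against mislabeled feedback; the weight vector itself is left at its calibration value.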

  2. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2001-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65-deg delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  3. Adapting agriculture to climate change.

    PubMed

    Howden, S Mark; Soussana, Jean-François; Tubiello, Francesco N; Chhetri, Netra; Dunlop, Michael; Meinke, Holger

    2007-12-11

    The strong trends in climate change already evident, the likelihood of further changes occurring, and the increasing scale of potential climate impacts give urgency to addressing agricultural adaptation more coherently. There are many potential adaptation options available for marginal change of existing agricultural systems, often variations of existing climate risk management. We show that implementation of these options is likely to have substantial benefits under moderate climate change for some cropping systems. However, there are limits to their effectiveness under more severe climate changes. Hence, more systemic changes in resource allocation need to be considered, such as targeted diversification of production systems and livelihoods. We argue that achieving increased adaptation action will necessitate integration of climate change-related issues with other risk factors, such as climate variability and market risk, and with other policy domains, such as sustainable development. Dealing with the many barriers to effective adaptation will require a comprehensive and dynamic policy approach covering a range of scales and issues, for example, from the understanding by farmers of change in risk profiles to the establishment of efficient markets that facilitate response strategies. Science, too, has to adapt. Multidisciplinary problems require multidisciplinary solutions, i.e., a focus on integrated rather than disciplinary science and a strengthening of the interface with decision makers. A crucial component of this approach is the implementation of adaptation assessment frameworks that are relevant, robust, and easily operated by all stakeholders, practitioners, policymakers, and scientists.

  4. Adapting agriculture to climate change

    PubMed Central

    Howden, S. Mark; Soussana, Jean-François; Tubiello, Francesco N.; Chhetri, Netra; Dunlop, Michael; Meinke, Holger

    2007-01-01

    The strong trends in climate change already evident, the likelihood of further changes occurring, and the increasing scale of potential climate impacts give urgency to addressing agricultural adaptation more coherently. There are many potential adaptation options available for marginal change of existing agricultural systems, often variations of existing climate risk management. We show that implementation of these options is likely to have substantial benefits under moderate climate change for some cropping systems. However, there are limits to their effectiveness under more severe climate changes. Hence, more systemic changes in resource allocation need to be considered, such as targeted diversification of production systems and livelihoods. We argue that achieving increased adaptation action will necessitate integration of climate change-related issues with other risk factors, such as climate variability and market risk, and with other policy domains, such as sustainable development. Dealing with the many barriers to effective adaptation will require a comprehensive and dynamic policy approach covering a range of scales and issues, for example, from the understanding by farmers of change in risk profiles to the establishment of efficient markets that facilitate response strategies. Science, too, has to adapt. Multidisciplinary problems require multidisciplinary solutions, i.e., a focus on integrated rather than disciplinary science and a strengthening of the interface with decision makers. A crucial component of this approach is the implementation of adaptation assessment frameworks that are relevant, robust, and easily operated by all stakeholders, practitioners, policymakers, and scientists. PMID:18077402

  5. Electromarking solution

    DOEpatents

    Bullock, Jonathan S.; Harper, William L.; Peck, Charles G.

    1976-06-22

    This invention is directed to an aqueous halogen-free electromarking solution which possesses the capacity for marking a broad spectrum of metals and alloys selected from different classes. The aqueous solution comprises basically the nitrate salt of an amphoteric metal, a chelating agent, and a corrosion-inhibiting agent.

  6. Advances in adaptive structures at Jet Propulsion Laboratory

    NASA Technical Reports Server (NTRS)

    Wada, Ben K.; Garba, John A.

    1993-01-01

    Future proposed NASA missions with the need for large deployable or erectable precision structures will require solutions to many technical problems. The Jet Propulsion Laboratory (JPL) is developing new technologies in Adaptive Structures to meet these challenges. The technology requirements, approaches to meet the requirements using Adaptive Structures, and the recent JPL research results in Adaptive Structures are described.

  7. Designing Flightdeck Procedures

    NASA Technical Reports Server (NTRS)

    Barshi, Immanuel; Mauro, Robert; Degani, Asaf; Loukopoulou, Loukia

    2016-01-01

    The primary goal of this document is to provide guidance on how to design, implement, and evaluate flight deck procedures. It provides a process for developing procedures that meet clear and specific requirements. This document provides a brief overview of: 1) the requirements for procedures, 2) a process for the design of procedures, and 3) a process for the design of checklists. The brief overview is followed by amplified procedures that follow the above steps and provide details for the proper design, implementation and evaluation of good flight deck procedures and checklists.

  8. Computerized procedures system

    DOEpatents

    Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.

    2010-10-12

    An online data driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges and revisions are version controlled. The procedures run on a server that is platform independent of the user workstations that the server interfaces with and the user interface supports diverse procedural views.

  9. Adaptive Confidence Bands for Nonparametric Regression Functions

    PubMed Central

    Cai, T. Tony; Low, Mark; Ma, Zongming

    2014-01-01

    A new formulation for the construction of adaptive confidence bands in non-parametric function estimation problems is proposed. Confidence bands are constructed which have size that adapts to the smoothness of the function while guaranteeing that both the relative excess mass of the function lying outside the band and the measure of the set of points where the function lies outside the band are small. It is shown that the bands adapt over a maximum range of Lipschitz classes. The adaptive confidence band can be easily implemented in standard statistical software with wavelet support. Numerical performance of the procedure is investigated using both simulated and real datasets. The numerical results agree well with the theoretical analysis. The procedure can be easily modified and used for other nonparametric function estimation models. PMID:26269661

  10. Research in digital adaptive flight controllers

    NASA Technical Reports Server (NTRS)

    Kaufman, H.

    1976-01-01

    A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Both explicit controllers, which directly utilize parameter identification, and implicit controllers, which do not require identification, were considered. Extensive analytical and simulation efforts resulted in the recommendation of two explicit digital adaptive flight controllers. Weighted least-squares estimation procedures were interfaced with control logic developed using either optimal regulator theory or single-stage performance indices.
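    The parameter-identification step that explicit adaptive controllers rely on can be sketched with recursive least squares (RLS) on a simple first-order plant; the plant model, forgetting factor, and noise level are illustrative assumptions, not the report's flight-control design.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # identify parameters of y(k) = a*y(k-1) + b*u(k) + noise (illustrative plant)
    a_true, b_true = 0.8, 0.5
    theta = np.zeros(2)                  # parameter estimate [a, b]
    P = 1000.0*np.eye(2)                 # covariance of the estimate (large = uncertain)
    lam = 0.99                           # forgetting factor for slow parameter drift

    y_prev = 0.0
    for k in range(500):
        u = rng.normal()                              # exciting input
        y = a_true*y_prev + b_true*u + 0.01*rng.normal()
        phi = np.array([y_prev, u])                   # regressor vector
        K = P @ phi / (lam + phi @ P @ phi)           # RLS gain
        theta += K * (y - phi @ theta)                # update the estimate
        P = (P - np.outer(K, phi @ P)) / lam          # update the covariance
        y_prev = y
    ```

    An explicit adaptive controller would feed the identified theta into its control-law synthesis (e.g., a regulator design) at each step; the forgetting factor lets the estimator track slowly varying flight conditions.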

  11. Auto-adaptive finite element meshes

    NASA Technical Reports Server (NTRS)

    Richter, Roland; Leyland, Penelope

    1995-01-01

    Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which hence enable structural as well as geometrical optimization. The methods described are applied to a number of the ICASE test cases, which are particularly interesting for unsteady flow simulations.

  12. An adaptive level set method

    SciTech Connect

    Milne, R.B.

    1995-12-01

    This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.

  13. ADAPTATION AND ADAPTABILITY, THE BELLEFAIRE FOLLOWUP STUDY.

    ERIC Educational Resources Information Center

    ALLERHAND, MELVIN E.; AND OTHERS

    A research team studied influences, adaptation, and adaptability in 50 poorly adapting boys at Bellefaire, a regional child care center for emotionally disturbed children. The team attempted to gauge the success of the residential treatment center in terms of the psychological patterns and role performances of the boys during individual casework…

  14. The adaptive deep brain stimulation challenge.

    PubMed

    Arlotti, Mattia; Rosa, Manuela; Marceglia, Sara; Barbieri, Sergio; Priori, Alberto

    2016-07-01

    Sub-optimal clinical outcomes of conventional deep brain stimulation (cDBS) in treating Parkinson's Disease (PD) have boosted the development of new solutions to improve DBS therapy. Adaptive DBS (aDBS), consisting of closed-loop, real-time changing of stimulation parameters according to the patient's clinical state, promises to achieve this goal and is attracting increasing interest in overcoming all of the challenges posed by its development and adoption. In the design, implementation, and application of aDBS, the choice of the control variable and of the control algorithm represents the core challenge. The proposed approaches, in fact, differ in the choice of the control variable and control policy, in the system design and its technological limits, in the patient's target symptom, and in the surgical procedure needed. Here, we review the current proposals for aDBS systems, focusing on the choice of the control variable and its advantages and drawbacks, thus providing a general overview of the possible pathways for the clinical translation of aDBS with its benefits, limitations and unsolved issues. PMID:27079257

  15. Public Sector Impasse Procedures.

    ERIC Educational Resources Information Center

    Vadakin, James C.

    The subject of collective bargaining negotiation impasse procedures in the public sector, which includes public school systems, is a broad one. In this speech, the author introduces the various procedures, explains how they are used, and lists their advantages and disadvantages. Procedures discussed are mediation, fact-finding, arbitration,…

  16. 49 CFR 572.142 - Head assembly and test procedure.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 7 2010-10-01 2010-10-01 false Head assembly and test procedure. 572.142 Section...-year-Old Child Crash Test Dummy, Alpha Version § 572.142 Head assembly and test procedure. (a) The head assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter...

  17. 49 CFR 572.142 - Head assembly and test procedure.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 7 2013-10-01 2013-10-01 false Head assembly and test procedure. 572.142 Section...-year-Old Child Crash Test Dummy, Alpha Version § 572.142 Head assembly and test procedure. (a) The head assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter...

  18. 49 CFR 572.142 - Head assembly and test procedure.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 7 2011-10-01 2011-10-01 false Head assembly and test procedure. 572.142 Section...-year-Old Child Crash Test Dummy, Alpha Version § 572.142 Head assembly and test procedure. (a) The head assembly (refer to § 572.140(a)(1)(i)) for this test consists of the head (drawing 210-1000), adapter...

  19. Evaluation of the CATSIB DIF Procedure in a Pretest Setting

    ERIC Educational Resources Information Center

    Nandakumar, Ratna; Roussos, Louis

    2004-01-01

    A new procedure, CATSIB, for assessing differential item functioning (DIF) on computerized adaptive tests (CATs) is proposed. CATSIB, a modified SIBTEST procedure, matches test takers on estimated ability and controls for impact-induced Type I error inflation by employing a CAT version of the SIBTEST "regression correction." The performance of…
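The matching-and-comparison idea behind SIBTEST-style DIF statistics can be illustrated with a toy sketch; the real CATSIB adds the regression correction and CAT-based ability estimates, which are not reproduced here, and the function name, stratum count, and synthetic data below are all illustrative.

```python
import math
import random

def beta_uni(abil_ref, abil_foc, item_ref, item_foc, n_strata=5):
    """Crude SIBTEST-style statistic: stratify examinees on the matching
    ability estimate, then average the between-group difference in
    proportion-correct on the studied item, weighted by stratum size."""
    lo = min(abil_ref + abil_foc)
    hi = max(abil_ref + abil_foc) + 1e-9
    width = (hi - lo) / n_strata
    beta = total = 0
    for k in range(n_strata):
        a, b = lo + k * width, lo + (k + 1) * width
        r = [u for s, u in zip(abil_ref, item_ref) if a <= s < b]
        f = [u for s, u in zip(abil_foc, item_foc) if a <= s < b]
        if r and f:
            w = len(r) + len(f)
            beta += w * (sum(r) / len(r) - sum(f) / len(f))
            total += w
    return beta / total if total else 0.0

# Synthetic no-DIF data: both groups answer via the same logistic rule,
# so the statistic should come out near zero.
rng = random.Random(0)
abil = [rng.gauss(0, 1) for _ in range(2000)]
item = [1 if rng.random() < 1 / (1 + math.exp(-a)) else 0 for a in abil]
b_hat = beta_uni(abil[:1000], abil[1000:], item[:1000], item[1000:])
```

A positive value would indicate the reference group outperforming ability-matched focal examinees on the studied item.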

  20. Developing policies and procedures.

    PubMed

    Randolph, Susan A

    2006-11-01

    The development of policies and procedures is an integral part of the occupational health nurse's role. Policies and procedures serve as the foundation for the occupational health service and are based on its vision, mission, culture, and values. The design and layout selected for the policies and procedures should be simple, consistent, and easy to use. The same format should be used for all existing and new policies and procedures. Policies and procedures should be reviewed periodically based on a specified time frame (i.e., annually). However, some policies may require a more frequent review if they involve rapidly changing external standards, ethical issues, or emerging exposures. PMID:17124968

  1. Inverse solutions for electric and potential field imaging

    NASA Astrophysics Data System (ADS)

    Johnson, Christopher R.; MacLeod, Robert S.

    1993-08-01

    One of the fundamental problems in theoretical electrocardiography can be characterized by an inverse problem. In this paper, we present new methods for achieving better estimates of heart surface potential distributions in terms of torso potentials through an inverse procedure. First, an adaptive meshing algorithm is described which minimizes the error in the forward problem due to spatial discretization. We have found that since the inverse problem relies directly on the accuracy of the forward solution, adaptive meshing produces a more accurate inverse transfer matrix. Second, we introduce a new local regularization procedure. This method works by breaking the global transfer matrix into sub-matrices and performing regularization only on those sub-matrices which have large condition numbers. Furthermore, the regularization parameters are specifically 'tuned' for each sub-matrix using an a priori scheme based on the L-curve method. This local regularization method provides substantial increases in accuracy when compared to global regularization schemes. Finally, we present specific examples of the implementation of these schemes using models derived from magnetic resonance imaging data from a human subject.
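The condition-number-gated sub-matrix idea can be sketched as follows; the helper name is hypothetical, and a fixed Tikhonov parameter stands in for the per-sub-matrix L-curve tuning described in the abstract.

```python
import numpy as np

def solve_block_tikhonov(A_blocks, b_blocks, cond_max=1e6, lam=1e-3):
    """Solve each sub-system A_k x_k = b_k, applying Tikhonov
    regularization only to ill-conditioned sub-matrices."""
    xs = []
    for A, b in zip(A_blocks, b_blocks):
        if np.linalg.cond(A) > cond_max:
            n = A.shape[1]
            # Regularized normal equations: (A^T A + lam^2 I) x = A^T b
            xs.append(np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b))
        else:
            xs.append(np.linalg.lstsq(A, b, rcond=None)[0])
    return xs

rng = np.random.default_rng(1)
x_true = np.array([1.0, -2.0, 0.5])
A_good = rng.standard_normal((8, 3))                      # well conditioned
c = rng.standard_normal(8)
A_bad = np.column_stack([c, c + 1e-12 * rng.standard_normal(8),
                         rng.standard_normal(8)])         # nearly rank deficient
blocks = [A_good, A_bad]
xs = solve_block_tikhonov(blocks, [A @ x_true for A in blocks])
```

The well-conditioned block is solved exactly by least squares, while the near-singular block gets a stabilized (biased but finite) solution.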

  2. Modified sham feeding of sweet solutions in women with and without bulimia nervosa.

    PubMed

    Klein, D A; Schebendach, J E; Brown, A J; Smith, G P; Walsh, B T

    2009-01-01

    Although it is possible that binge eating in humans is due to increased responsiveness of orosensory excitatory controls of eating, there is no direct evidence for this because food ingested during a test meal stimulates both orosensory excitatory and postingestive inhibitory controls. To overcome this problem, we adapted the modified sham feeding technique (MSF) to measure the orosensory excitatory control of intake of a series of sweetened solutions. Previously published data showed the feasibility of a "sip-and-spit" procedure in nine healthy control women using solutions flavored with cherry Kool Aid and sweetened with sucrose (0-20%). The current study extended this technique to measure the intake of artificially sweetened solutions in women with bulimia nervosa (BN) and in women with no history of eating disorders. Ten healthy women and 11 women with BN were randomly presented with cherry Kool Aid solutions sweetened with five concentrations of aspartame (0, 0.01, 0.03, 0.08 and 0.28%) in a closed opaque container fitted with a straw. They were instructed to sip as much as they wanted of the solution during 1-minute trials and to spit the fluid out into another opaque container. Across all subjects, presence of sweetener increased intake (p<0.001). Women with BN sipped 40.5-53.1% more of all solutions than controls (p=0.03 for total intake across all solutions). Self-report ratings of liking, wanting and sweetness of solutions did not differ between groups. These results support the feasibility of a MSF procedure using artificially sweetened solutions, and the hypothesis that the orosensory stimulation of MSF provokes larger intake in women with BN than controls.

  3. Clause Elimination Procedures for CNF Formulas

    NASA Astrophysics Data System (ADS)

    Heule, Marijn; Järvisalo, Matti; Biere, Armin

    We develop and analyze clause elimination procedures, a specific family of simplification techniques for conjunctive normal form (CNF) formulas. Extending known procedures such as tautology, subsumption, and blocked clause elimination, we introduce novel elimination procedures based on hidden and asymmetric variants of these techniques. We analyze the resulting nine (including five new) clause elimination procedures from various perspectives: size reduction, BCP-preservance, confluence, and logical equivalence. For the variants not preserving logical equivalence, we show how to reconstruct solutions to original CNFs from satisfying assignments to simplified CNFs. We also identify a clause elimination procedure that does a transitive reduction of the binary implication graph underlying any CNF formula purely on the CNF level.
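Two of the classical procedures the paper extends, tautology and subsumption elimination, are easy to state on a clause-set representation (DIMACS-style integer literals); this minimal sketch ignores efficiency concerns such as occurrence lists.

```python
def is_tautology(clause):
    """A clause (set of DIMACS-style integer literals) is a tautology if
    it contains both a literal and its negation."""
    return any(-lit in clause for lit in clause)

def eliminate_tautologies(cnf):
    """Drop tautological clauses; every assignment satisfies them, so
    satisfiability (and all solutions) are preserved."""
    return [c for c in cnf if not is_tautology(c)]

def eliminate_subsumed(cnf):
    """Drop any clause that strictly contains another clause: the smaller
    clause is logically stronger, so the superset is redundant."""
    return [c for i, c in enumerate(cnf)
            if not any(d < c for j, d in enumerate(cnf) if j != i)]

demo = [{1, -2}, {2, -2, 3}, {-1, 3}]
result = eliminate_tautologies(demo)  # the middle clause is a tautology
```

Both procedures preserve logical equivalence; the hidden and asymmetric variants introduced in the paper do not, which is why the authors discuss solution reconstruction.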

  4. Line relaxation methods for the solution of 2D and 3D compressible flows

    NASA Technical Reports Server (NTRS)

    Hassan, O.; Probert, E. J.; Morgan, K.; Peraire, J.

    1993-01-01

    An implicit finite element based algorithm for the compressible Navier-Stokes equations is outlined, and the solution of the resulting equation by a line relaxation on general meshes of triangles or tetrahedra is described. The problem of generating and adapting unstructured meshes for viscous flows is reexamined, and an approach for both 2D and 3D simulations is proposed. An efficient approach appears to be the use of an implicit/explicit procedure, with the implicit treatment being restricted to those regions of the mesh where viscous effects are known to be dominant. Numerical examples demonstrating the computational performance of the proposed techniques are given.

  5. Bias-free procedure for the measurement of the minimum resolvable temperature difference and minimum resolvable contrast

    NASA Astrophysics Data System (ADS)

    Bijl, Piet; Valeton, J. Mathieu

    1999-10-01

    The characterization of electro-optical system performance by means of the standard minimum resolvable temperature difference (MRTD) or the minimum resolvable contrast (MRC) has a number of serious disadvantages. One of the problems is that they depend on the subjective decision criterion of the observer. We present an improved measurement procedure in which the results are free from observer bias. In an adaptive two-alternative forced-choice procedure, both the standard four-bar pattern and a five-bar reference pattern of the same size and contrast are presented consecutively in random order. The observer decides which of the two presentations contains the four-bar pattern. Misjudgments are made if the bars cannot be resolved or are distorted by sampling. The procedure converges to the contrast at which 75% of the observer responses are correct. The reliability of the responses is tested statistically. Curves cut off near the Nyquist frequency, so that it is not necessary to artificially set a frequency limit for sampling array cameras. The procedure enables better and easier measurement, yields more stable results than the standard procedure, and avoids disputes between different measuring teams. The presented procedure is a 'quick fix' solution for some of the problems with the MRTD and MRC, and is recommended as long as bar patterns are used as the stimulus. A new and fundamentally better method to characterize electro-optical system performance, called the triangle orientation discrimination threshold, was recently proposed by Bijl and Valeton (1998).
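An adaptive forced-choice procedure of this general kind can be simulated with a weighted up-down staircase; the observer model, starting contrast, and step sizes below are hypothetical, and the original measurement procedure differs in detail.

```python
import math
import random

def run_staircase(p_correct, start=1.0, step=0.05, trials=400, seed=2):
    """Weighted up-down staircase (Kaernbach-style): step down by `step`
    after a correct response, up by 3*step after an error. Drift is zero
    where 0.75*step = 0.25*(3*step), i.e. at 75% correct responses."""
    rng = random.Random(seed)
    c, track = start, []
    for _ in range(trials):
        correct = rng.random() < p_correct(c)
        c = max(0.0, c - step if correct else c + 3 * step)
        track.append(c)
    # Average the second half of the track as the threshold estimate.
    return sum(track[trials // 2:]) / (trials - trials // 2)

# Hypothetical observer: 2AFC psychometric function (chance level 50%)
# whose 75%-correct point sits at contrast 0.5.
def observer(c):
    return 0.5 + 0.5 / (1 + math.exp(-(c - 0.5) / 0.05))

threshold = run_staircase(observer)
```

With these settings the track should settle near the observer's 75%-correct contrast of 0.5.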

  6. Polyelectrolyte Solutions

    NASA Astrophysics Data System (ADS)

    Colby, Ralph H.

    2008-03-01

    Pierre-Gilles de Gennes once described polyelectrolytes as the "least understood form of condensed matter". In this talk, I will describe the state of the polyelectrolyte field before and after de Gennes' seminal contributions published 1976-1980. De Gennes clearly explained why electrostatic interactions only stretch the polyelectrolyte chains on intermediate scales in semidilute solution (between the electrostatic blob size and the correlation length) and why the scattering function has a peak corresponding to the correlation length (the distance to the next chain). Despite many other ideas being suggested since then, the simple de Gennes scaling picture of polyelectrolyte conformation in solution has stood the test of time. How that model is used today, including consequences for dynamics in polyelectrolyte solutions, and what questions remain, will clarify the importance of de Gennes' ideas.

  7. Control of microorganisms in flowing nutrient solutions.

    PubMed

    Evans, R D

    1994-11-01

    Controlling microorganisms in flowing nutrient solutions involves different techniques when targeting the nutrient solution, hardware surfaces in contact with the solution, or the active root zone. This review presents basic principles and applications of a number of treatment techniques, including disinfection by chemicals, ultrafiltration, ultrasonics, and heat treatment, with emphasis on UV irradiation and ozone treatment. Procedures for control of specific pathogens by nutrient solution conditioning also are reviewed.

  8. Organic compatible solutes of halotolerant and halophilic microorganisms

    PubMed Central

    Roberts, Mary F

    2005-01-01

    Microorganisms that adapt to moderate and high salt environments use a variety of solutes, organic and inorganic, to counter external osmotic pressure. The organic solutes can be zwitterionic, noncharged, or anionic (along with an inorganic cation such as K+). The range of solutes, their diverse biosynthetic pathways, and physical properties of the solutes that affect molecular stability are reviewed. PMID:16176595

  9. Organic compatible solutes of halotolerant and halophilic microorganisms.

    PubMed

    Roberts, Mary F

    2005-01-01

    Microorganisms that adapt to moderate and high salt environments use a variety of solutes, organic and inorganic, to counter external osmotic pressure. The organic solutes can be zwitterionic, noncharged, or anionic (along with an inorganic cation such as K(+)). The range of solutes, their diverse biosynthetic pathways, and physical properties of the solutes that affect molecular stability are reviewed.

  10. An adaptive gridless methodology in one dimension

    SciTech Connect

    Snyder, N.T.; Hailey, C.E.

    1996-09-01

    Gridless numerical analysis offers great potential for accurately solving for flow about complex geometries or moving boundary problems. Because gridless methods do not require point connection, the mesh cannot twist or distort. The gridless method utilizes a Taylor series about each point to obtain the unknown derivative terms from the current field variable estimates. The governing equation is then numerically integrated to determine the field variables for the next iteration. Effects of point spacing and Taylor series order on accuracy are studied, and they follow similar trends of traditional numerical techniques. Introducing adaption by point movement using a spring analogy allows the solution method to track a moving boundary. The adaptive gridless method models linear, nonlinear, steady, and transient problems. Comparison with known analytic solutions is given for these examples. Although point movement adaption does not provide a significant increase in accuracy, it helps capture important features and provides an improved solution.
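The core idea, recovering derivatives at a point from a Taylor fit over scattered neighbours without any mesh connectivity, reduces in its simplest first-order 1D form to a least-squares slope; the paper's higher-order expansions and spring-analogy adaption are not reproduced in this sketch.

```python
def gridless_derivative(x0, f0, xs, fs):
    """First-order Taylor/least-squares estimate of f'(x0) from scattered
    neighbours: minimize sum_j (f_j - f0 - d*(x_j - x0))^2 over d."""
    num = sum((xj - x0) * (fj - f0) for xj, fj in zip(xs, fs))
    den = sum((xj - x0) ** 2 for xj in xs)
    return num / den

# f(x) = x^2, so f'(1) = 2; the neighbours are deliberately ungridded.
xs = [0.83, 0.97, 1.08, 1.19]
fs = [x * x for x in xs]
d = gridless_derivative(1.0, 1.0, xs, fs)
```

As with the Taylor-series study described in the abstract, accuracy improves as the neighbour spacing shrinks or the expansion order grows.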

  11. Definition and use of Solution-focused Sustainability Assessment: A novel approach to generate, explore and decide on sustainable solutions for wicked problems.

    PubMed

    Zijp, Michiel C; Posthuma, Leo; Wintersen, Arjen; Devilee, Jeroen; Swartjes, Frank A

    2016-05-01

    This paper introduces Solution-focused Sustainability Assessment (SfSA), provides practical guidance formatted as a versatile process framework, and illustrates its utility for solving a wicked environmental management problem. Society faces complex and increasingly wicked environmental problems for which sustainable solutions are sought. Wicked problems are multi-faceted, and deriving a management solution requires an approach that is participative, iterative, innovative, and transparent in its definition of sustainability and translation to sustainability metrics. We suggest adding a solution-focused approach. The SfSA framework is collated from elements from risk assessment, risk governance, adaptive management and sustainability assessment frameworks, expanded with the 'solution-focused' paradigm as recently proposed in the context of risk assessment. The main innovation of this approach is the broad exploration of solutions upfront in assessment projects. The case study concerns the sustainable management of slightly contaminated sediments continuously formed in ditches in rural, agricultural areas. This problem is wicked, as disposal of contaminated sediment on adjacent land is potentially hazardous to humans, ecosystems and agricultural products. Non-removal would however reduce drainage capacity followed by increased risks of flooding, while contaminated sediment removal followed by offsite treatment implies high budget costs and soil subsidence. Application of the steps in the SfSA-framework served in solving this problem. Important elements were early exploration of a wide 'solution-space', stakeholder involvement from the onset of the assessment, clear agreements on the risk and sustainability metrics of the problem and on the interpretation and decision procedures, and adaptive management. Application of the key elements of the SfSA approach eventually resulted in adoption of a novel sediment management policy. The stakeholder

  12. Definition and use of Solution-focused Sustainability Assessment: A novel approach to generate, explore and decide on sustainable solutions for wicked problems.

    PubMed

    Zijp, Michiel C; Posthuma, Leo; Wintersen, Arjen; Devilee, Jeroen; Swartjes, Frank A

    2016-05-01

    This paper introduces Solution-focused Sustainability Assessment (SfSA), provides practical guidance formatted as a versatile process framework, and illustrates its utility for solving a wicked environmental management problem. Society faces complex and increasingly wicked environmental problems for which sustainable solutions are sought. Wicked problems are multi-faceted, and deriving a management solution requires an approach that is participative, iterative, innovative, and transparent in its definition of sustainability and translation to sustainability metrics. We suggest adding a solution-focused approach. The SfSA framework is collated from elements from risk assessment, risk governance, adaptive management and sustainability assessment frameworks, expanded with the 'solution-focused' paradigm as recently proposed in the context of risk assessment. The main innovation of this approach is the broad exploration of solutions upfront in assessment projects. The case study concerns the sustainable management of slightly contaminated sediments continuously formed in ditches in rural, agricultural areas. This problem is wicked, as disposal of contaminated sediment on adjacent land is potentially hazardous to humans, ecosystems and agricultural products. Non-removal would however reduce drainage capacity followed by increased risks of flooding, while contaminated sediment removal followed by offsite treatment implies high budget costs and soil subsidence. Application of the steps in the SfSA-framework served in solving this problem. Important elements were early exploration of a wide 'solution-space', stakeholder involvement from the onset of the assessment, clear agreements on the risk and sustainability metrics of the problem and on the interpretation and decision procedures, and adaptive management. Application of the key elements of the SfSA approach eventually resulted in adoption of a novel sediment management policy. The stakeholder

  13. Sound Solutions

    ERIC Educational Resources Information Center

    Starkman, Neal

    2007-01-01

    Poor classroom acoustics are impairing students' hearing and their ability to learn. However, technology has come up with a solution: tools that focus voices in a way that minimizes intrusive ambient noise and gets to the intended receiver--not merely amplifying the sound, but also clarifying and directing it. One provider of classroom audio…

  14. Polymer solutions

    DOEpatents

    Krawczyk, Gerhard Erich; Miller, Kevin Michael

    2011-07-26

    There is provided a method of making a polymer solution comprising polymerizing one or more monomer in a solvent, wherein said monomer comprises one or more ethylenically unsaturated monomer that is a multi-functional Michael donor, and wherein said solvent comprises 40% or more by weight, based on the weight of said solvent, one or more multi-functional Michael donor.

  15. Candidate CDTI procedures study

    NASA Technical Reports Server (NTRS)

    Ace, R. E.

    1981-01-01

    A concept with potential for increasing airspace capacity by involving the pilot in the separation control loop is discussed. Some candidate options are presented. Both enroute and terminal area procedures are considered and, in many cases, a technologically advanced Air Traffic Control structure is assumed. Minimum display characteristics recommended for each of the described procedures are presented. Recommended sequencing of the operational testing of each of the candidate procedures is presented.

  16. Adapting Aquatic Circuit Training for Special Populations.

    ERIC Educational Resources Information Center

    Thome, Kathleen

    1980-01-01

    The author discusses how land activities can be adapted to water so that individuals with handicapping conditions can participate in circuit training activities. An initial section lists such organizational procedures as providing vocal and/or visual cues for activities, having assistants accompany the performers throughout the circuit, and…

  17. Computerized Adaptive Testing with Item Cloning.

    ERIC Educational Resources Information Center

    Glas, Cees A. W.; van der Linden, Wim J.

    2003-01-01

    Developed a multilevel item response (IRT) model that allows for differences between the distributions of item parameters of families of item clones. Results from simulation studies based on an item pool from the Law School Admission Test illustrate the accuracy of the item pool calibration and adaptive testing procedures based on the model. (SLD)

  18. Procedural pediatric dermatology.

    PubMed

    Metz, Brandie J

    2013-04-01

    Due to many factors, including parental anxiety, a child's inability to understand the necessity of a procedure and a child's unwillingness to cooperate, it can be much more challenging to perform dermatologic procedures in children. This article reviews pre-procedural preparation of patients and parents, techniques for minimizing injection-related pain and optimal timing of surgical intervention. The risks and benefits of general anesthesia in the setting of pediatric dermatologic procedures are discussed. Additionally, the surgical approach to a few specific types of birthmarks is addressed.

  19. An adaptive grid with directional control

    NASA Technical Reports Server (NTRS)

    Brackbill, J. U.

    1993-01-01

    An adaptive grid generator for adaptive node movement is here derived by combining a variational formulation of Winslow's (1981) variable-diffusion method with a directional control functional. By applying harmonic-function theory, it becomes possible to define conditions under which there exist unique solutions of the resulting elliptic equations. The results obtained for the grid generator's application to the complex problem posed by the fluid instability-driven magnetic field reconnection demonstrate one-tenth the computational cost of either a Eulerian grid or an adaptive grid without directional control.

  20. Quadtree-adaptive tsunami modelling

    NASA Astrophysics Data System (ADS)

    Popinet, Stéphane

    2011-09-01

    The well-balanced, positivity-preserving scheme of Audusse et al. (SIAM J Sci Comput 25(6):2050-2065, 2004), for the solution of the Saint-Venant equations with wetting and drying, is generalised to an adaptive quadtree spatial discretisation. The scheme is validated using an analytical solution for the oscillation of a fluid in a parabolic container, as well as the classic Monai tsunami laboratory benchmark. An efficient database system able to dynamically reconstruct a multiscale bathymetry based on extremely large datasets is also described. This combination of methods is successfully applied to the adaptive modelling of the 2004 Indian ocean tsunami. Adaptivity is shown to significantly decrease the exponent of the power law describing computational cost as a function of spatial resolution. The new exponent is directly related to the fractal dimension of the geometrical structures characterising tsunami propagation. The implementation of the method as well as the data and scripts necessary to reproduce the results presented are freely available as part of the open-source Gerris Flow Solver framework.
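The quadtree discretisation at the heart of such adaptive schemes can be sketched as a recursive cell split driven by a refinement criterion; the "shoreline" criterion and depth limit below are hypothetical stand-ins for the solution-based criteria a real solver such as Gerris would use.

```python
def refine(cell, needs_refining, max_level=4):
    """Recursively split a square cell (x, y, size, level) into four
    children wherever the criterion flags it; returns the leaf cells."""
    x, y, s, lvl = cell
    if lvl >= max_level or not needs_refining(cell):
        return [cell]
    h = s / 2
    leaves = []
    for cx, cy in ((x, y), (x + h, y), (x, y + h), (x + h, y + h)):
        leaves += refine((cx, cy, h, lvl + 1), needs_refining, max_level)
    return leaves

# Hypothetical criterion: refine cells whose centre is near a
# "shoreline" at x = 0.5, relative to their own size.
near_shore = lambda c: abs((c[0] + c[2] / 2) - 0.5) < c[2]
leaves = refine((0.0, 0.0, 1.0, 0), near_shore)
```

Resolution concentrates along the feature while coarse cells cover the rest of the domain, which is exactly what drives the cost-scaling improvement reported in the abstract.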

  1. An Arbitrary Lagrangian-Eulerian Method with Local Adaptive Mesh Refinement for Modeling Compressible Flow

    NASA Astrophysics Data System (ADS)

    Anderson, Robert; Pember, Richard; Elliott, Noah

    2001-11-01

    We present a method, ALE-AMR, for modeling unsteady compressible flow that combines a staggered grid arbitrary Lagrangian-Eulerian (ALE) scheme with structured local adaptive mesh refinement (AMR). The ALE method is a three step scheme on a staggered grid of quadrilateral cells: Lagrangian advance, mesh relaxation, and remap. The AMR scheme uses a mesh hierarchy that is dynamic in time and is composed of nested structured grids of varying resolution. The integration algorithm on the hierarchy is a recursive procedure in which the coarse grids are advanced a single time step, the fine grids are advanced to the same time, and the coarse and fine grid solutions are synchronized. The novel details of ALE-AMR are primarily motivated by the need to reconcile and extend AMR techniques typically employed for stationary rectangular meshes with cell-centered quantities to the moving quadrilateral meshes with staggered quantities used in the ALE scheme. Solutions of several test problems are discussed.
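The recursive advance on the grid hierarchy can be sketched as follows; the Grid class and its sync_with hook are hypothetical stand-ins for the actual Lagrangian advance, remap, and coarse-fine synchronization steps.

```python
class Grid:
    """Toy stand-in for one level of the AMR hierarchy."""
    def __init__(self):
        self.time = 0.0
        self.steps = 0
    def step(self, dt):
        self.time += dt
        self.steps += 1
    def sync_with(self, coarser):
        pass  # a real code averages fine data onto the coarse grid here

def advance(level, dt, levels, refine_ratio=2):
    """Advance `level` one step, recursively advance the finer level
    refine_ratio substeps to the same time, then synchronize."""
    levels[level].step(dt)
    if level + 1 < len(levels):
        for _ in range(refine_ratio):
            advance(level + 1, dt / refine_ratio, levels)
        levels[level + 1].sync_with(levels[level])

levels = [Grid(), Grid(), Grid()]
advance(0, 1.0, levels)   # all levels reach t = 1.0 together
```

With a refinement ratio of 2, each finer level takes twice as many (half-sized) steps, so all levels arrive at the same physical time before synchronization.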

  2. Application of ameliorative and adaptive approaches to revegetation of historic high altitude mining waste

    SciTech Connect

    Bellitto, M.W.; Williams, H.T.; Ward, J.N.

    1999-07-01

    High altitude, historic, gold and silver tailings deposits, which included a more recent cyanide heap leach operation, were decommissioned, detoxified, re-contoured and revegetated. Detoxification of the heap included rinsing with hydrogen peroxide, lime and ferric chloride, followed by evaporation and land application of remaining solution. Grading included the removal of solution ponds, construction of a geosynthetic/clay lined pond, heap removal and site drainage development. Ameliorative and adaptive revegetation methodologies were utilized. Revegetation was complicated by limited access, lack of topsoil, low pH and elevated metals concentrations in the tailings, and a harsh climate. Water quality sampling results for the first year following revegetation indicate reclamation activities have contributed to a decrease in metals and sediment loading to surface waters downgradient of the site. Procedures, methodologies and results, following the first year of vegetation growth, are provided.

  3. Comparability of naturalistic and controlled observation assessment of adaptive behavior.

    PubMed

    Millham, J; Chilcutt, J; Atkinson, B L

    1978-07-01

    The comparability of retrospective naturalistic and controlled observation assessment of adaptive behavior was evaluated. The number, degree, and direction of discrepancies were evaluated with respect to level of retardation of the client, rater differences, behavior domain sampled, and prior observational base for the ratings. Generally poor comparability between the procedures was found and questions were raised concerning the types of generalizability that can be made from adaptive behavior assessment obtained under the two procedures.

  4. Development of a novel adaptive model to represent global ionosphere information from combining space geodetic measurement systems

    NASA Astrophysics Data System (ADS)

    Erdogan, Eren; Durmaz, Murat; Liang, Wenjing; Kappelsberger, Maria; Dettmering, Denise; Limberger, Marco; Schmidt, Michael; Seitz, Florian

    2015-04-01

    This project focuses on the development of a novel near real-time data adaptive filtering framework for global modeling of the vertical total electron content (VTEC). Ionospheric data can be acquired from various space geodetic observation techniques such as GNSS, altimetry, DORIS and radio occultation. The project aims to model the temporal and spatial variations of the ionosphere by a combination of these techniques in an adaptive data assimilation framework, which utilizes appropriate basis functions to represent the VTEC. The measurements naturally have inhomogeneous data distribution both in time and space. Therefore, integrating the aforementioned observation techniques into data adaptive basis selection methods (e.g. Multivariate Adaptive Regression B-Splines) with recursive filtering (e.g. Kalman filtering) to model the daily global ionosphere may deliver important improvements over classical estimation methods. Since ionospheric inverse problems are ill-posed, a suitable regularization procedure might stabilize the solution. In this contribution we present first results related to the selected evaluation procedure. Comparisons are made with respect to applicability, efficiency, accuracy, and numerical effort.
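The recursive-filtering ingredient can be illustrated with a minimal scalar Kalman filter, the kind of update such an assimilation framework would apply to each basis-function coefficient; the random-walk state model and noise settings here are arbitrary choices for the sketch.

```python
import random

def kalman_1d(zs, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter with a random-walk state model:
    process variance q, measurement variance r. Returns the filtered
    estimate after each measurement."""
    x, p, out = x0, p0, []
    for z in zs:
        p = p + q                  # predict: the state may drift
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the innovation z - x
        p = (1 - k) * p
        out.append(x)
    return out

rng = random.Random(3)
truth = 10.0
zs = [truth + rng.gauss(0, 0.5) for _ in range(200)]   # noisy observations
est = kalman_1d(zs)[-1]
```

Because the filter is recursive, each new observation updates the estimate in constant time, which is what makes near real-time operation feasible.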

  5. Efficient solution procedures for systems with local non-linearities

    NASA Astrophysics Data System (ADS)

    Ibrahimbegovic, Adnan; Wilson, Edward L.

    1992-06-01

    This paper presents several methods for enhancing computational efficiency in both static and dynamic analysis of structural systems with localized nonlinear behavior. A significant reduction of computational effort with respect to brute-force nonlinear analysis is achieved in all cases at insignificant (or no) loss of accuracy. The presented methodologies are easily incorporated into a standard computer program for linear analysis.

  6. Rapid, generalized adaptation to asynchronous audiovisual speech.

    PubMed

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-01

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity.

  7. Rapid, generalized adaptation to asynchronous audiovisual speech

    PubMed Central

    Van der Burg, Erik; Goodbourn, Patrick T.

    2015-01-01

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. PMID:25716790

  8. Expressing Adaptation Strategies Using Adaptation Patterns

    ERIC Educational Resources Information Center

    Zemirline, N.; Bourda, Y.; Reynaud, C.

    2012-01-01

    Today, there is a real challenge to enable personalized access to information. Several systems have been proposed to address this challenge including Adaptive Hypermedia Systems (AHSs). However, the specification of adaptation strategies remains a difficult task for creators of such systems. In this paper, we consider the problem of the definition…

  9. Enucleation Procedure Manual.

    ERIC Educational Resources Information Center

    Davis, Kevin; Poston, George

    This manual provides information on the enucleation procedure (removal of the eyes for organ banks). An introductory section focuses on the anatomy of the eye and defines each of the parts. Diagrams of the eye are provided. A list of enucleation materials follows. Other sections present outlines of (1) a sterile procedure; (2) preparation for eye…

  10. Useful Procedures of Inquiry.

    ERIC Educational Resources Information Center

    Handy, Rollo; Harwood, E. C.

    This book discusses and analyzes the many different procedures of inquiry, both old and new, which have been used in an attempt to solve the problems men encounter. Section A examines some outmoded procedures of inquiry, describes scientific inquiry, and presents the Dewey-Bentley view of scientific method. Sections B and C, which comprise the…

  11. A wavelet packet adaptive filtering algorithm for enhancing manatee vocalizations.

    PubMed

    Gur, M Berke; Niezrecki, Christopher

    2011-04-01

    Approximately a quarter of all West Indian manatee (Trichechus manatus latirostris) mortalities are attributed to collisions with watercraft. A boater warning system based on the passive acoustic detection of manatee vocalizations is one possible solution to reduce manatee-watercraft collisions. The success of such a warning system depends on effective enhancement of the vocalization signals in the presence of high levels of background noise, in particular, noise emitted from watercraft. Recent research has indicated that wavelet domain pre-processing of the noisy vocalizations is capable of significantly improving the detection ranges of passive acoustic vocalization detectors. In this paper, an adaptive denoising procedure, implemented on the wavelet packet transform coefficients obtained from the noisy vocalization signals, is investigated. The proposed denoising algorithm is shown to improve the manatee detection ranges by a factor ranging from two (minimum) to sixteen (maximum) compared to high-pass filtering alone, when evaluated using real manatee vocalization and background noise signals of varying signal-to-noise ratios (SNR). Furthermore, the proposed method is also shown to outperform a previously suggested feedback adaptive line enhancer (FALE) filter by an average of 3.4 dB in noise suppression and 0.6 dB in waveform preservation.
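    The wavelet-domain shrinkage idea behind such denoisers can be sketched with a single-level Haar transform and soft thresholding of the detail coefficients. This is a minimal illustration only, not the paper's wavelet packet algorithm or its adaptive threshold selection:

```python
import math

def haar_decompose(signal):
    """One level of the orthonormal Haar transform (even-length input)."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert haar_decompose exactly."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)])
    return out

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; small (noise-like) ones vanish."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, t):
    """Threshold only the detail band, keeping the coarse waveform."""
    approx, detail = haar_decompose(signal)
    return haar_reconstruct(approx, soft_threshold(detail, t))
```

    Thresholding the detail band suppresses broadband noise while preserving the coarse waveform; wavelet packet methods apply the same shrinkage over a richer tree of subbands.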

  13. The Adaptive Analysis of Visual Cognition using Genetic Algorithms

    PubMed Central

    Cook, Robert G.; Qadri, Muhammad A. J.

    2014-01-01

    Two experiments used a novel, open-ended, and adaptive test procedure to examine visual cognition in animals. Using a genetic algorithm, a pigeon was tested repeatedly from a variety of different initial conditions for its solution to an intermediate brightness search task. On each trial, the animal had to accurately locate and peck a target element of intermediate brightness from among a variable number of surrounding darker and lighter distractor elements. Displays were generated from six parametric variables, or genes (distractor number, element size, shape, spacing, target brightness, distractor brightness). Display composition changed over time, or evolved, as a function of the bird’s differential accuracy within the population of values for each gene. Testing three randomized initial conditions and one set of controlled initial conditions, element size and number of distractors were identified as the most important factors controlling search accuracy, with distractor brightness, element shape, and spacing making secondary contributions. The resulting changes in this multidimensional stimulus space suggested the existence of a set of conditions that the bird repeatedly converged upon regardless of initial conditions. This psychological “attractor” represents the cumulative action of the cognitive operations used by the pigeon in solving and performing this search task. The results are discussed regarding their implications for visual cognition in pigeons and the usefulness of adaptive, subject-driven experimentation for investigating human and animal cognition more generally. PMID:24000905
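    The mechanism of letting differential fitness drive gene values toward a stable region can be sketched with a toy genetic algorithm. This is a generic illustration, not the study's display-generation code; the fitness surface and gene ranges below are hypothetical:

```python
import random

def evolve(fitness, gene_ranges, pop_size=30, generations=40, seed=1):
    """Toy genetic algorithm: truncation selection, blend crossover,
    Gaussian mutation of one gene per child; tracks the best-ever individual."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in gene_ranges] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]   # blend crossover
            i = rng.randrange(len(child))                   # mutate one gene
            lo, hi = gene_ranges[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.05 * (hi - lo))))
            children.append(child)
        pop = children
        best = max(pop + [best], key=fitness)
    return best

# the gene values converge toward the peak of a simple fitness surface at (2, -1)
best = evolve(lambda g: -((g[0] - 2.0) ** 2 + (g[1] + 1.0) ** 2),
              [(-5.0, 5.0), (-5.0, 5.0)])
```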

  14. Adaptive Assessment of Young Children with Visual Impairment

    ERIC Educational Resources Information Center

    Ruiter, Selma; Nakken, Han; Janssen, Marleen; Van Der Meulen, Bieuwe; Looijestijn, Paul

    2011-01-01

    The aim of this study was to assess the effect of adaptations for children with low vision of the Bayley Scales, a standardized developmental instrument widely used to assess development in young children. Low vision adaptations were made to the procedures, item instructions and play material of the Dutch version of the Bayley Scales of Infant…

  15. Parallel Adaptive Multi-Mechanics Simulations using Diablo

    SciTech Connect

    Parsons, D; Solberg, J

    2004-12-03

    Coupled multi-mechanics simulations (such as thermal-stress and fluid-structure interaction problems) are of substantial interest to engineering analysts. In addition, adaptive mesh refinement techniques present an attractive alternative to current mesh generation procedures and provide quantitative error bounds that can be used for model verification. This paper discusses spatially adaptive multi-mechanics implicit simulations using the Diablo computer code. (U)

  16. Minimally invasive procedures

    PubMed Central

    Baltayiannis, Nikolaos; Michail, Chandrinos; Lazaridis, George; Anagnostopoulos, Dimitrios; Baka, Sofia; Mpoukovinas, Ioannis; Karavasilis, Vasilis; Lampaki, Sofia; Papaiwannou, Antonis; Karavergou, Anastasia; Kioumis, Ioannis; Pitsiou, Georgia; Katsikogiannis, Nikolaos; Tsakiridis, Kosmas; Rapti, Aggeliki; Trakada, Georgia; Zissimopoulos, Athanasios; Zarogoulidis, Konstantinos

    2015-01-01

    Minimally invasive procedures, which include laparoscopic surgery, use state-of-the-art technology to reduce the damage to human tissue when performing surgery. Minimally invasive procedures require small “ports” from which the surgeon inserts thin tubes called trocars. Carbon dioxide gas may be used to inflate the area, creating a space between the internal organs and the skin. Then a miniature camera (usually a laparoscope or endoscope) is placed through one of the trocars so the surgical team can view the procedure as a magnified image on video monitors in the operating room. Specialized equipment is inserted through the trocars based on the type of surgery. There are some advanced minimally invasive surgical procedures that can be performed almost exclusively through a single point of entry—meaning only one small incision, like the “uniport” video-assisted thoracoscopic surgery (VATS). Not only do these procedures usually provide equivalent outcomes to traditional “open” surgery (which sometimes requires a large incision), but minimally invasive procedures (using small incisions) may offer significant benefits as well: (I) faster recovery; (II) shorter hospital stays; (III) less scarring and (IV) less pain. In our current mini review we will present the minimally invasive procedures for thoracic surgery. PMID:25861610

  17. Robust numerical methods for conservation laws using a biased averaging procedure

    NASA Astrophysics Data System (ADS)

    Choi, Hwajeong

    In this thesis, we introduce a new biased averaging procedure (BAP) and use it in developing high resolution schemes for conservation laws. Systems of conservation laws arise in a variety of physical problems, such as the Euler equations of compressible flow, magnetohydrodynamics, multicomponent flows, blast waves and the flow of glaciers. Many modern shock capturing schemes are based on solution reconstructions by high order polynomial interpolations, and time evolution by the solutions of Riemann problems. Due to the existence of discontinuities in the solution, the interpolating polynomial has to be carefully constructed to avoid possible oscillations near discontinuities. The BAP is a more general and simpler way to approximate higher order derivatives of given data without introducing oscillations, compared to limiters and essentially non-oscillatory interpolations. For the solution of a system of conservation laws, we present a finite volume method which employs a flux splitting and uses componentwise reconstruction of the upwind fluxes. A high order piecewise polynomial constructed by using the BAP is used to approximate the components of the upwind fluxes. This scheme requires neither characteristic decomposition nor a Riemann solver, offering easy implementation and a relatively small computational cost. More importantly, the BAP extends naturally to unstructured grids, as demonstrated through a cell-centered finite volume method with adaptive mesh refinement. A number of numerical experiments from various applications demonstrate the robustness and accuracy of this approach and show its potential for other practical applications.
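    The BAP itself is not detailed in the abstract. For context, here is the kind of limiter-based reconstruction it is compared against: a standard minmod-limited, second-order upwind finite-volume step for linear advection (a textbook scheme, not the author's method):

```python
def minmod(a, b):
    """Pick the smaller-magnitude slope when the signs agree, else zero,
    so the reconstruction introduces no new extrema near discontinuities."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def advect_step(u, c):
    """One finite-volume step for u_t + u_x = 0 on a periodic grid with
    CFL number c in [0, 1], using minmod-limited linear reconstruction
    and upwind (left-biased) interface values."""
    n = len(u)
    slopes = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    iface = [u[i] + 0.5 * (1.0 - c) * slopes[i] for i in range(n)]  # right face of cell i
    return [u[i] - c * (iface[i] - iface[i - 1]) for i in range(n)]
```

    With c = 1 the step shifts the profile exactly one cell; for c < 1 the limited slopes keep the update free of spurious oscillations around jumps.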

  18. Response-Adaptive Allocation for Circular Data.

    PubMed

    Biswas, Atanu; Dutta, Somak; Laha, Arnab Kumar; Bakshi, Partho K

    2015-01-01

    Response-adaptive designs are used in phase III clinical trials to allocate a larger proportion of patients to the better treatment. Circular data arise naturally in many clinical trial settings, e.g., some measurements in ophthalmologic studies, degrees of rotation of the hand or waist, etc. There is no available work on response-adaptive designs for circular data. With reference to a dataset on cataract surgery, we provide some response-adaptive designs where the responses are of circular nature and propose some test statistics for treatment comparison under the adaptive data allocation procedure. A detailed simulation study and the analysis of the dataset, including redesigning the cataract surgery data, are carried out.
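    The designs in the paper handle circular responses; for intuition, here is the classic binary-outcome randomized play-the-winner rule, the prototypical response-adaptive allocation (an illustration with hypothetical success rates, not the authors' circular-data design):

```python
import random

def rpw_trial(p_success, n_patients, seed=7):
    """Randomized play-the-winner urn for two treatments: each success adds a
    ball for the same arm, each failure a ball for the other arm, so later
    patients are steered toward the better-performing treatment."""
    rng = random.Random(seed)
    urn = [0, 1]                  # one ball per treatment arm to start
    counts = [0, 0]
    for _ in range(n_patients):
        arm = rng.choice(urn)     # draw a ball to allocate the next patient
        counts[arm] += 1
        success = rng.random() < p_success[arm]
        urn.append(arm if success else 1 - arm)
    return counts

# arm 0 succeeds 80% of the time, arm 1 only 30%; allocation tilts to arm 0
counts = rpw_trial([0.8, 0.3], 500)
```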

  19. Application of the Flood-IMPAT procedure in the Valle d'Aosta Region, Italy

    NASA Astrophysics Data System (ADS)

    Minucci, Guido; Mendoza, Marina Tamara; Molinari, Daniela; Atun, Funda; Menoni, Scira; Ballio, Francesco

    2016-04-01

    Flood Risk Management Plans (FRMPs), established by the European "Floods" Directive (Directive 2007/60/EU) to make Member States address all aspects of flood risk management while taking into account the costs and benefits of proposed mitigation tools, must be reviewed under the same law every six years. This is aimed at continuously increasing the effectiveness of risk management, on the basis of the most advanced knowledge of flood risk and the most (economically) feasible solutions, also taking into consideration the achievements of the previous management cycle. Within this context, the Flood-IMPAT (Integrated Meso-scale Procedure to Assess Territorial flood risk) procedure has been developed, aiming at overcoming the limits of the risk maps produced by the Po River Basin Authority and adopted for the first version of the Po River FRMP. The procedure allows the estimation of flood risk at the meso-scale and is characterized by three main peculiarities. First is its feasibility for the entire Italian territory. Second is the possibility to express risk in monetary terms (i.e. expected damage), at least for those categories of damage for which suitable models are available. Finally, independent modules compose the procedure: each module allows the estimation of a certain type of damage (i.e. direct, indirect, intangible) in a certain sector (e.g. residential, industrial, agriculture, environment, etc.) separately, guaranteeing flexibility in the implementation. This paper shows the application of the Flood-IMPAT procedure and recent advancements aiming at increasing its reliability and usability. Through a further implementation of the procedure in the Dora Baltea River Basin (North of Italy), it was possible to test the sensitivity of the risk estimates supplied by Flood-IMPAT with respect to different damage models and different approaches for the estimation of assets at risk. Risk estimates were also compared with observed damage data in the investigated areas.

  20. Adaptive computing for people with disabilities.

    PubMed

    Merrow, S L; Corbett, C D

    1994-01-01

    Adaptive computing is a relatively new area, and little has been written in the nursing literature on the topic. "Adaptive computing" refers to the professional services and the technology (both hardware and software) that make computing technology accessible for persons with disabilities. Nurses in many settings such as schools, industry, rehabilitation facilities, and the community, can use knowledge of adaptive computing as they counsel, advise, and advocate for people with disabilities. Nurses with an awareness and knowledge of adaptive computing will be better able to promote high-level wellness for individuals with disabilities, thus maximizing their potential for an active, fulfilling life. People with different types of disabilities, including visual, mobility, hearing, learning, and communication disorders and acquired brain injuries, may benefit from computer adaptations. Disabled people encounter barriers to computing in six major areas: 1) the environment, 2) data entry, 3) information output, 4) technical documentation, 5) support, and 6) training. After a discussion of these barriers, the criteria for selecting appropriate adaptations and selected examples of adaptations are presented. Several case studies illustrate the evaluation process and the development of adaptive computer solutions. PMID:8082064

  1. Alpha-Stratified Multistage Computerized Adaptive Testing with beta Blocking.

    ERIC Educational Resources Information Center

    Chang, Hua-Hua; Qian, Jiahe; Yang, Zhiliang

    2001-01-01

    Proposed a refinement, based on the stratification of items developed by D. Weiss (1973), of the computerized adaptive testing item selection procedure of H. Chang and Z. Ying (1999). Simulation studies using an item bank from the Graduate Record Examination show the benefits of the new procedure. (SLD)

  2. Balancing Flexible Constraints and Measurement Precision in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Moyer, Eric L.; Galindo, Jennifer L.; Dodd, Barbara G.

    2012-01-01

    Managing test specifications--both multiple nonstatistical constraints and flexibly defined constraints--has become an important part of designing item selection procedures for computerized adaptive tests (CATs) in achievement testing. This study compared the effectiveness of three procedures: constrained CAT, flexible modified constrained CAT,…

  3. Short Nuss bar procedure

    PubMed Central

    2016-01-01

    The Nuss procedure is now the preferred operation for surgical correction of pectus excavatum (PE). It is a minimally invasive technique, whereby one to three curved metal bars are inserted behind the sternum in order to push it into a normal position. The bars are left in situ for three years and then removed. This procedure significantly improves quality of life and, in most cases, also improves cardiac performance. Previously, the modified Ravitch procedure was used with resection of cartilage and the use of posterior support. This article details the new modified Nuss procedure, which requires the use of shorter bars than specified by the original technique. This technique facilitates the operation as the bar may be guided manually through the chest wall and no additional stabilizing sutures are necessary. PMID:27747185

  4. Dynamic alarm response procedures

    SciTech Connect

    Martin, J.; Gordon, P.; Fitch, K.

    2006-07-01

    The Dynamic Alarm Response Procedure (DARP) system provides a robust, Web-based alternative to existing hard-copy alarm response procedures. This paperless system improves performance by eliminating time wasted looking up paper procedures by number, looking up plant process values and equipment and component status at graphical displays or panels, and maintaining the procedures. Because it is a Web-based system, it is platform independent. DARPs can be served from any Web server that supports CGI scripting, such as Apache{sup R}, IIS{sup R}, TclHTTPD, and others. DARP pages can be viewed in any Web browser that supports Javascript and Scalable Vector Graphics (SVG), such as Netscape{sup R}, Microsoft Internet Explorer{sup R}, Mozilla Firefox{sup R}, Opera{sup R}, and others. (authors)

  5. Cardiac ablation procedures

    MedlinePlus

    ... the heart. During the procedure, small wires called electrodes are placed inside your heart to measure your ... is in place, your doctor will place small electrodes in different areas of your heart. These electrodes ...

  6. Common Interventional Radiology Procedures

    MedlinePlus

    ... of common interventional techniques is below. Common Interventional Radiology Procedures Angiography An X-ray exam of the ... into the vertebra. ...

  7. Using a solutions approach.

    PubMed

    Kimberley, Mike

    2004-06-01

    Companies today are placing an even greater emphasis on keeping all recordable employee injuries to a minimum. A reduction in hand and finger injuries, along with their associated medical and indemnity costs, can have a positive impact on the company's bottom line. Safety actually can provide revenue when the safety program extends beyond the confines of specific product applications. Conducting a careful and complete analysis of all of the critical issues in a company's production process and the procedures in its safety program will allow the organization to identify opportunities for cutting costs while enhancing worker comfort and safety. Identifying business solutions--and not just product applications--will provide organizations with additional cost saving opportunities. Tighter controls, standardization, SKU reduction, productivity improvements, and recycling are just a few of the potential solutions that can be applied. Partnering with a reputable glove manufacturer that offers a critical safety program analysis has the potential to provide numerous, long-term advantages. A business solutions approach can provide potential productivity improvements, injury reductions, standardization of best practices, and SKU reductions, all of which result in a safer work environment. PMID:15232914

  8. Costing imaging procedures.

    PubMed

    Bretland, P M

    1988-01-01

    The existing National Health Service financial system makes comprehensive costing of any service very difficult. A method of costing using modern commercial methods has been devised, classifying costs into variable, semi-variable and fixed and using the principle of overhead absorption for expenditure not readily allocated to individual procedures. It proved possible to establish a cost spectrum over the financial year 1984-85. The cheapest examinations were plain radiographs outside normal working hours, followed by plain radiographs, ultrasound, special procedures, fluoroscopy, nuclear medicine, angiography and angiographic interventional procedures in normal working hours. This differs from some published figures, particularly those in the Körner report. There was some overlap between fluoroscopic interventional and the cheaper nuclear medicine procedures, and between some of the more expensive nuclear medicine procedures and the cheaper angiographic ones. Only angiographic and the few more expensive nuclear medicine procedures exceed the cost of the inpatient day. The total cost of the imaging service to the district was about 4% of total hospital expenditure. It is shown that where more procedures are undertaken, the semi-variable and fixed (including capital) elements of the cost decrease (and vice versa) so that careful study is required to assess the value of proposed economies. The method is initially time-consuming and requires a computer system with 512 Kb of memory, but once the basic costing system is established in a department, detailed financial monitoring should become practicable. The necessity for a standard comprehensive costing procedure of this nature, based on sound cost accounting principles, appears inescapable, particularly in view of its potential application to management budgeting. PMID:3349241
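    The reported behaviour of cost per procedure (the fixed and semi-variable elements fall as more procedures are undertaken, and vice versa) follows directly from overhead absorption. A sketch with hypothetical figures, not the study's actual costs:

```python
def unit_cost(n_procedures, variable_per_procedure, fixed_overhead):
    """Absorption costing: each procedure carries its variable cost plus an
    equal share of the fixed overhead, so unit cost falls as volume rises."""
    return variable_per_procedure + fixed_overhead / n_procedures

# hypothetical figures: 5.00 variable cost per procedure, 20,000 fixed overhead
low_volume = unit_cost(1000, 5.0, 20000.0)    # 25.00 per procedure
high_volume = unit_cost(2000, 5.0, 20000.0)   # 15.00 per procedure
```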

  9. Safety referral procedures clarified.

    PubMed

    2014-12-01

    Two types of referrals are available for the purpose of harmonising pharmacovigilance decisions across the EU: the urgent procedure and the "normal" procedure. In both cases, the Pharmacovigilance Risk Assessment Committee (PRAC) issues a recommendation that the marketing authorisation committees concerned must take into account when formulating their opinions. If Member States disagree in their decisions, a final referral is available, although it lacks transparency. The European Commission's final decision is binding on all Member States. PMID:25629154

  11. Visual Contrast Sensitivity Functions Obtained from Untrained Observers Using Tracking and Staircase Procedures. Final Report.

    ERIC Educational Resources Information Center

    Geri, George A.; Hubbard, David C.

    Two adaptive psychophysical procedures (tracking and "yes-no" staircase) for obtaining human visual contrast sensitivity functions (CSF) were evaluated. The procedures were chosen based on their proven validity and the desire to evaluate the practical effects of stimulus transients, since tracking procedures traditionally employ gradual stimulus…
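    A "yes-no" staircase of the transformed up-down family can be sketched as follows (a generic two-down/one-up rule; the study's exact parameters are not given in the abstract):

```python
def staircase(respond, start, step, n_trials):
    """Transformed up-down ("two-down/one-up") staircase: lower the stimulus
    level after two consecutive correct responses, raise it after any error.
    The track converges near the 70.7%-correct point of the psychometric function."""
    level, streak, levels = start, 0, []
    for _ in range(n_trials):
        levels.append(level)
        if respond(level):
            streak += 1
            if streak == 2:               # two correct in a row: make it harder
                level, streak = level - step, 0
        else:                             # error: make it easier
            level, streak = level + step, 0
    return levels

# deterministic observer who is correct whenever the level is at least 5:
# the track descends from 10, then oscillates around the threshold
levels = staircase(lambda level: level >= 5, start=10, step=1, n_trials=40)
```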

  12. A Comparison of Exposure Control Procedures in CATs Using the 3PL Model

    ERIC Educational Resources Information Center

    Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G.

    2013-01-01

    This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…

  13. Solution Leaching

    NASA Astrophysics Data System (ADS)

    Chun, Tiejun; Zhu, Deqing; Pan, Jian; He, Zhen

    2014-06-01

    Recovery of alumina from magnetic separation tailings of red mud has been investigated by Na2CO3 solution leaching. X-ray diffraction (XRD) results show that most of the alumina is present as 12CaO·7Al2O3 and CaO·Al2O3 in the magnetic separation tailings. The shrinking core model was employed to describe the leaching kinetics. The results show that the calculated activation energy of 8.31 kJ/mol is characteristic for an internal diffusion-controlled process. The kinetic equation can be used to describe the leaching process. The effects of Na2CO3 concentration, liquid-to-solid ratio, and particle size on recovery of Al2O3 were examined.
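    The two relations behind the reported kinetics can be written out directly: the internal-diffusion-controlled shrinking-core expression, and the Arrhenius law from which the 8.31 kJ/mol activation energy is interpreted. Only the activation energy below comes from the abstract; the temperatures are illustrative:

```python
import math

R_GAS = 8.314  # J/(mol*K)

def arrhenius_ratio(ea_j_per_mol, t1_k, t2_k):
    """k(T2)/k(T1) from the Arrhenius law k = A*exp(-Ea/(R*T))."""
    return math.exp(-ea_j_per_mol / R_GAS * (1.0 / t2_k - 1.0 / t1_k))

def diffusion_model(x):
    """Shrinking-core expression for internal diffusion control:
    g(x) = 1 - (2/3)x - (1-x)^(2/3), with g(x) = k*t for leached fraction x."""
    return 1.0 - 2.0 * x / 3.0 - (1.0 - x) ** (2.0 / 3.0)

# with Ea = 8.31 kJ/mol, warming the leach from 25 C to 75 C speeds it up by
# only ~60% -- the weak temperature dependence typical of diffusion control
speedup = arrhenius_ratio(8310.0, 298.0, 348.0)
```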

  14. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  15. Post-processing procedure for industrial quantum key distribution systems

    NASA Astrophysics Data System (ADS)

    Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey

    2016-08-01

    We present algorithmic solutions aimed at the post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
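    Of the steps listed, privacy amplification is commonly realised by two-universal hashing. A minimal sketch with a random Toeplitz matrix over GF(2) follows; this is a standard construction, not necessarily the authors' implementation, and the key below is a made-up example:

```python
import random

def toeplitz_hash(key_bits, out_len, seed=11):
    """Privacy amplification sketch: multiply the sifted key by a random
    Toeplitz matrix over GF(2) (AND for products, XOR for sums).
    Entry (i, j) of the matrix is diag[i - j + n - 1], so one random vector
    of length n + out_len - 1 defines the whole matrix."""
    rng = random.Random(seed)
    n = len(key_bits)
    diag = [rng.randrange(2) for _ in range(n + out_len - 1)]
    out = []
    for i in range(out_len):
        acc = 0
        for j in range(n):
            acc ^= diag[i - j + n - 1] & key_bits[j]
        out.append(acc)
    return out

# compress a 16-bit sifted key into an 8-bit (ideally private) final key
sifted = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
final_key = toeplitz_hash(sifted, 8)
```

    Shortening the key by more than the adversary's estimated information (from the parameter estimation step) is what makes the surviving bits private.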

  16. Constrained adaptation for feedback cancellation in hearing aids.

    PubMed

    Kates, J M

    1999-08-01

    In feedback cancellation in hearing aids, an adaptive filter is used to model the feedback path. The output of the adaptive filter is subtracted from the microphone signal to cancel the acoustic and mechanical feedback picked up by the microphone, thus allowing more gain in the hearing aid. In general, the feedback-cancellation filter adapts on the hearing-aid input signal, and signal cancellation and coloration artifacts can occur for a narrow-band input. In this paper, two procedures for LMS adaptation with a constraint on the magnitude of the adaptive weight vector are derived. The constraints greatly reduce the probability that the adaptive filter will cancel a narrow-band input. Simulation results are used to demonstrate the efficacy of the constrained adaptation. PMID:10462806
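    The idea of constraining the magnitude of the adaptive weight vector can be sketched with an LMS filter whose weights are rescaled onto a norm ball after each update. The rescaling rule and the two-tap "feedback path" below are illustrative assumptions, not the paper's derived procedures:

```python
import math
import random

def constrained_lms(x, d, n_taps, mu, max_norm):
    """LMS adaptive filter with a norm constraint: after each weight update,
    the weight vector is projected back onto the ball ||w|| <= max_norm."""
    w = [0.0] * n_taps
    errs = []
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]                     # most recent sample first
        y = sum(wi * ui for wi, ui in zip(w, u))      # adaptive filter output
        e = d[k] - y                                  # cancellation error
        errs.append(e)
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        if norm > max_norm:                           # project onto the constraint set
            w = [wi * max_norm / norm for wi in w]
    return w, errs

# identify a hypothetical two-tap feedback path h = [0.5, -0.25]
rng = random.Random(3)
x = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
d = [0.0, 0.0] + [0.5 * x[k - 1] - 0.25 * x[k - 2] for k in range(2, 2000)]
w, errs = constrained_lms(x, d, n_taps=2, mu=0.1, max_norm=0.6)
```

    Because the true path lies inside the constraint ball, adaptation still converges; for a narrow-band input the constraint is what prevents the weights from growing to cancel the signal itself.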

  17. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed, and substantial implementation work has been carried out over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and departs significantly from standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
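    The constant-modulus property that such blind methods exploit can be seen in a scalar toy example: adapt a gain w so that y = w·u recovers unit modulus, using only the output and no reference signal. This is an illustrative reduction, not the report's Lagrange-based algorithms:

```python
import random

def cma_gain(received, mu=0.05, w0=0.5):
    """Blind constant-modulus adaptation of a single gain: drive |y| toward 1
    by descending the cost (y^2 - 1)^2 / 4 with stochastic gradient steps."""
    w = w0
    for u in received:
        y = w * u
        w -= mu * (y * y - 1.0) * y * u   # gradient of (y^2 - 1)^2 / 4 w.r.t. w
    return w

# a hypothetical channel attenuates +/-1 symbols by 0.5; CMA recovers gain 2
rng = random.Random(5)
symbols = [rng.choice([-1.0, 1.0]) for _ in range(500)]
received = [0.5 * s for s in symbols]
w = cma_gain(received)
```

    No training sequence is ever consulted; the modulus constraint alone identifies the inverse channel gain (up to sign), which is exactly what makes the prediction "blind".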

  18. Improved solution methods for an inverse problem related to a population balance model in chemical engineering

    NASA Astrophysics Data System (ADS)

    Groh, Andreas; Krebs, Jochen

    2012-08-01

    In this paper, a population balance equation, originating from applications in chemical engineering, is considered and novel solution techniques for a related inverse problem are presented. This problem consists in the determination of the breakage rate and the daughter drop distribution of an evolving drop size distribution from time-dependent measurements under the assumption of self-similarity. We analyze two established solution methods for this ill-posed problem and improve the two procedures by adapting suitable data fitting and inversion algorithms to the specific situation. In addition, we introduce a novel technique that, compared to the former, does not require certain a priori information. The improved stability properties of the resulting algorithms are substantiated with numerical examples.
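    The abstract does not give the authors' inversion algorithms; the generic stabilisation tool for such ill-posed problems is Tikhonov regularisation, sketched here for a small dense system:

```python
def tikhonov_solve(A, b, alpha):
    """Solve min ||Ax - b||^2 + alpha*||x||^2 via the regularized normal
    equations (A^T A + alpha*I) x = A^T b. Plain Gaussian elimination without
    pivoting suffices for the symmetric positive definite matrices built here."""
    m, n = len(A), len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) + (alpha if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    for i in range(n):                          # forward elimination
        for j in range(i + 1, n):
            f = M[j][i] / M[i][i]
            M[j] = [mj - f * mi for mj, mi in zip(M[j], M[i])]
            rhs[j] -= f * rhs[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):              # back substitution
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

    Increasing alpha trades fidelity to the (noisy) data for stability of the reconstruction, the same trade-off any inversion scheme for this problem must manage.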

  19. Assessing institutional capacities to adapt to climate change - integrating psychological dimensions in the Adaptive Capacity Wheel

    NASA Astrophysics Data System (ADS)

    Grothmann, T.; Grecksch, K.; Winges, M.; Siebenhüner, B.

    2013-03-01

    Several case studies show that "soft social factors" (e.g. institutions, perceptions, social capital) strongly affect social capacities to adapt to climate change. Many soft social factors can probably be changed faster than "hard social factors" (e.g. economic and technological development) and are therefore particularly important for building social capacities. However, there are almost no methodologies for the systematic assessment of soft social factors. Gupta et al. (2010) have developed the Adaptive Capacity Wheel (ACW) for assessing the adaptive capacity of institutions. The ACW differentiates 22 criteria to assess six dimensions: variety, learning capacity, room for autonomous change, leadership, availability of resources, fair governance. To include important psychological factors we extended the ACW by two dimensions: "adaptation motivation" refers to actors' motivation to realise, support and/or promote adaptation to climate change. "Adaptation belief" refers to actors' perceptions of realisability and effectiveness of adaptation measures. We applied the extended ACW to assess adaptive capacities of four sectors - water management, flood/coastal protection, civil protection and regional planning - in North Western Germany. The assessments of adaptation motivation and belief provided a clear added value. The results also revealed some methodological problems in applying the ACW (e.g. overlap of dimensions), for which we propose methodological solutions.

  20. Assessing institutional capacities to adapt to climate change: integrating psychological dimensions in the Adaptive Capacity Wheel

    NASA Astrophysics Data System (ADS)

    Grothmann, T.; Grecksch, K.; Winges, M.; Siebenhüner, B.

    2013-12-01

    Several case studies show that social factors like institutions, perceptions and social capital strongly affect social capacities to adapt to climate change. Together with economic and technological development they are important for building social capacities. However, there are almost no methodologies for the systematic assessment of social factors. After reviewing existing methodologies we identify the Adaptive Capacity Wheel (ACW) by Gupta et al. (2010), developed for assessing the adaptive capacity of institutions, as the most comprehensive and operationalised framework to assess social factors. The ACW differentiates 22 criteria to assess six dimensions: variety, learning capacity, room for autonomous change, leadership, availability of resources, and fair governance. To include important psychological factors we extended the ACW by two dimensions: "adaptation motivation" refers to actors' motivation to realise, support and/or promote adaptation to climate change; "adaptation belief" refers to actors' perceptions of the realisability and effectiveness of adaptation measures. We applied the extended ACW to assess adaptive capacities of four sectors - water management, flood/coastal protection, civil protection and regional planning - in northwestern Germany. The assessments of adaptation motivation and belief provided a clear added value. The results also revealed some methodological problems in applying the ACW (e.g. overlap of dimensions), for which we propose methodological solutions.

  1. Adaptive mesh refinement techniques for electrical impedance tomography.

    PubMed

    Molinari, M; Cox, S J; Blott, B H; Daniell, G J

    2001-02-01

    Adaptive mesh refinement techniques can be applied to increase the efficiency of electrical impedance tomography reconstruction algorithms by reducing computational and storage cost as well as providing problem-dependent solution structures. A self-adaptive refinement algorithm based on an a posteriori error estimate has been developed and its results are shown in comparison with uniform mesh refinement for a simple head model.
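
    The refine-where-the-error-is-large loop behind such algorithms is easy to sketch in one dimension. The toy below (not the authors' EIT code) uses the linear-interpolation error at element midpoints as an a posteriori indicator and bisects only the offending elements; the function, tolerance, and initial mesh are illustrative.

```python
import numpy as np

def adapt(f, a, b, tol, max_iter=30):
    """Bisect only the elements whose a posteriori error indicator
    (linear-interpolation error at the midpoint) exceeds tol."""
    x = np.linspace(a, b, 5)
    for _ in range(max_iter):
        mids = 0.5 * (x[:-1] + x[1:])
        err = np.abs(f(mids) - 0.5 * (f(x[:-1]) + f(x[1:])))
        bad = err > tol
        if not bad.any():
            break
        x = np.sort(np.concatenate([x, mids[bad]]))
    return x

f = lambda x: np.tanh(50 * (x - 0.5))     # sharp internal layer at x = 0.5
mesh = adapt(f, 0.0, 1.0, tol=1e-3)
print(len(mesh))                          # nodes cluster near the layer
```

    The payoff is the one claimed in the abstract: resolution is spent only where the solution structure demands it, instead of uniformly.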

  2. Construction of a Computerized Adaptive Testing Version of the Quebec Adaptive Behavior Scale.

    ERIC Educational Resources Information Center

    Tasse, Marc J.; And Others

    Multilog (Thissen, 1991) was used to estimate parameters of 225 items from the Quebec Adaptive Behavior Scale (QABS). A database containing actual data from 2,439 subjects was used for the parameterization procedures. The two-parameter-logistic model was used in estimating item parameters and in the testing strategy. MicroCAT (Assessment Systems…
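
    The two-parameter logistic model named in this record has a compact closed form; a minimal sketch follows, with made-up item parameters rather than QABS estimates.

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability of endorsing an
    item, given ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of an item at ability theta; an adaptive test
    administers the item that is most informative at the current estimate."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# Hypothetical item parameters, not QABS estimates.
print(p_correct(0.0, 1.5, 0.0))   # 0.5: at theta == b the odds are even
```

    Selecting the item with maximal information at the provisional ability estimate is the basic step of a computerized adaptive testing strategy.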

  3. Cerebrospinal fluid composition modifications after neuroendoscopic procedures.

    PubMed

    Salvador, L; Valero, R; Carrero, E; Caral, L; Fernández, S; Marín, J L; Ferrer, E; Fábregas, N

    2007-02-01

    Normal saline solution is currently used as the ventricular irrigation fluid during neuroendoscopic procedures. The aim of this study is to determine the alterations in cerebrospinal fluid (CSF) composition after neuroendoscopic interventions. Twenty-nine patients who underwent a neuroendoscopic procedure under general anaesthesia were studied. Temperature inside the cerebral ventricle was measured, and samples of CSF were taken to determine oxygen and carbon dioxide partial pressures, pH, base excess, ionised calcium, standard bicarbonate, glucose, sodium, potassium, magnesium, total calcium, proteins, chloride and osmolality before initiating the irrigation and after the neuronavigation. Patient demographics, neuronavigation time, total fluid volume used, temperature of the irrigation solution, and complications that appeared in the first 24 hours were recorded. Mean age of the patients was 42+/-18 years. The mean neuronavigation time was 21.5+/-15.4 minutes. The mean amount of saline solution used for irrigation was 919.6+/-994.7 mL. All the values studied in the CSF, except osmolality, showed significant variations. There was a significant correlation between the CSF variation of pH, oxygen and carbon dioxide partial pressures, base excess, standard bicarbonate, glucose and total calcium and the total volume of irrigation solution, but not the neuronavigation time. A cut-off point of 500 mL of irrigation solution (sensitivity 0.7; specificity 0.87) was associated with a CSF pH decrease greater than 0.2. The use of saline as the irrigation solution during neuroendoscopic procedures produces important changes in CSF composition.
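
    A sensitivity/specificity pair for a volume cutoff, as reported here, comes from a 2x2 classification table. A minimal sketch with invented data follows; the study's patient-level values are not given in the abstract.

```python
def sens_spec(volumes, ph_dropped, cutoff):
    """Sensitivity and specificity of a volume cutoff for predicting an event."""
    pairs = list(zip(volumes, ph_dropped))
    tp = sum(1 for v, d in pairs if v > cutoff and d)
    fn = sum(1 for v, d in pairs if v <= cutoff and d)
    tn = sum(1 for v, d in pairs if v <= cutoff and not d)
    fp = sum(1 for v, d in pairs if v > cutoff and not d)
    return tp / (tp + fn), tn / (tn + fp)

# Invented example: irrigation volumes (mL) and whether CSF pH fell by > 0.2.
volumes = [200, 300, 400, 450, 600, 800, 1200, 1500]
ph_dropped = [False, False, False, True, True, False, True, True]
print(sens_spec(volumes, ph_dropped, 500))   # (0.75, 0.75)
```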

  4. Cerebrospinal fluid composition modifications after neuroendoscopic procedures.

    PubMed

    Salvador, L; Valero, R; Carrero, E; Caral, L; Fernández, S; Marín, J L; Ferrer, E; Fábregas, N

    2007-02-01

    Normal saline solution is currently used as the ventricular irrigation fluid during neuroendoscopic procedures. The aim of this study is to determine the alterations in cerebrospinal fluid (CSF) composition after neuroendoscopic interventions. Twenty-nine patients who underwent a neuroendoscopic procedure under general anaesthesia were studied. Temperature inside the cerebral ventricle was measured, and samples of CSF were taken to determine oxygen and carbon dioxide partial pressures, pH, base excess, ionised calcium, standard bicarbonate, glucose, sodium, potassium, magnesium, total calcium, proteins, chloride and osmolality before initiating the irrigation and after the neuronavigation. Patient demographics, neuronavigation time, total fluid volume used, temperature of the irrigation solution, and complications that appeared in the first 24 hours were recorded. Mean age of the patients was 42+/-18 years. The mean neuronavigation time was 21.5+/-15.4 minutes. The mean amount of saline solution used for irrigation was 919.6+/-994.7 mL. All the values studied in the CSF, except osmolality, showed significant variations. There was a significant correlation between the CSF variation of pH, oxygen and carbon dioxide partial pressures, base excess, standard bicarbonate, glucose and total calcium and the total volume of irrigation solution, but not the neuronavigation time. A cut-off point of 500 mL of irrigation solution (sensitivity 0.7; specificity 0.87) was associated with a CSF pH decrease greater than 0.2. The use of saline as the irrigation solution during neuroendoscopic procedures produces important changes in CSF composition. PMID:17546545

  5. Mobile Energy Laboratory Procedures

    SciTech Connect

    Armstrong, P.R.; Batishko, C.R.; Dittmer, A.L.; Hadley, D.L.; Stoops, J.L.

    1993-09-01

    Pacific Northwest Laboratory (PNL) has been tasked to plan and implement a framework for measuring and analyzing the efficiency of on-site energy conversion, distribution, and end-use application on federal facilities as part of its overall technical support to the US Department of Energy (DOE) Federal Energy Management Program (FEMP). The Mobile Energy Laboratory (MEL) Procedures establish guidelines for specific activities performed by PNL staff. PNL provided sophisticated energy monitoring, auditing, and analysis equipment for on-site evaluation of energy use efficiency. Specially trained engineers and technicians were provided to conduct tests in a safe and efficient manner with the assistance of host facility staff and contractors. Reports were produced to describe test procedures, results, and suggested courses of action. These reports may be used to justify changes in operating procedures, maintenance efforts, system designs, or energy-using equipment. The MEL capabilities can subsequently be used to assess the results of energy conservation projects. These procedures recognize the need for centralized MEL administration, test procedure development, operator training, and technical oversight. This need is evidenced by increasing requests for MEL use and the economies available by having trained, full-time MEL operators and near-continuous MEL operation. DOE will assign new equipment and upgrade existing equipment as new capabilities are developed. The equipment and trained technicians will be made available to federal agencies that provide funding for the direct costs associated with MEL use.

  6. Organizational Adaptation and Higher Education.

    ERIC Educational Resources Information Center

    Cameron, Kim S.

    1984-01-01

    Organizational adaptation and types of adaptation needed in academe in the future are reviewed and major conceptual approaches to organizational adaptation are presented. The probable environment that institutions will face in the future that will require adaptation is discussed. (MLW)

  7. Hybrid Surface Mesh Adaptation for Climate Modeling

    SciTech Connect

    Ahmed Khamayseh; Valmor de Almeida; Glen Hansen

    2008-10-01

    Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called “mesh motion” (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.
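
    The "mesh motion" (r adaptation) half of the proposed rh scheme can be illustrated in one dimension by equidistributing a monitor function: nodes slide toward regions where the monitor is large while the node count stays fixed. This is a generic sketch, not the paper's surface-mesh algorithm, and the monitor function is hypothetical.

```python
import numpy as np

def r_adapt(x, monitor, n_iter=50):
    """Reposition mesh nodes so each cell carries roughly equal monitor
    'mass' (1D analogue of r adaptation; endpoints stay fixed)."""
    for _ in range(n_iter):
        m = monitor(0.5 * (x[:-1] + x[1:]))       # cell-centred monitor values
        mass = np.concatenate([[0.0], np.cumsum(m * np.diff(x))])
        targets = np.linspace(0.0, mass[-1], len(x))
        x = np.interp(targets, mass, x)           # invert the cumulative mass
    return x

monitor = lambda s: 1.0 + 20.0 * np.exp(-200.0 * (s - 0.5) ** 2)
x = r_adapt(np.linspace(0.0, 1.0, 21), monitor)
print(np.min(np.diff(x)))   # much smaller than the uniform spacing of 0.05
```

    Element subdivision (h adaptation) would instead add nodes; combining the two with a mobility function is the rh idea described above.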

  8. Hybrid Surface Mesh Adaptation for Climate Modeling

    SciTech Connect

    Khamayseh, Ahmed K; de Almeida, Valmor F; Hansen, Glen

    2008-01-01

    Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called "mesh motion" (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.

  9. Parallel object-oriented adaptive mesh refinement

    SciTech Connect

    Balsara, D.; Quinlan, D.J.

    1997-04-01

    In this paper we study adaptive mesh refinement (AMR) for elliptic and hyperbolic systems. We use the Asynchronous Fast Adaptive Composite Grid Method (AFACX), a parallel algorithm based upon the Fast Adaptive Composite Grid Method (FAC), as a test case of an adaptive elliptic solver. For our hyperbolic system example we use TVD and ENO schemes for solving the Euler and MHD equations. We use the structured grid load balancer MLB as a tool for obtaining a load-balanced distribution in a parallel environment. Parallel adaptive mesh refinement poses difficulties in expressing the basic single grid solver, whether elliptic or hyperbolic, in a fashion that parallelizes seamlessly. It also requires that these basic solvers work together within the adaptive mesh refinement algorithm, which uses the single grid solvers as one part of its adaptive solution process. We show that use of AMR++, an object-oriented library within the OVERTURE Framework, simplifies the development of AMR applications. Parallel support is provided and abstracted through the use of the P++ parallel array class.
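
    A generic flavour of the grid hierarchy that AMR builds on (this is a toy, not AMR++/OVERTURE code): start from one coarse Cartesian cell and recursively subdivide any cell flagged by a refinement criterion, here "the cell straddles a circular feature", storing the hierarchy as a quadtree. The geometry and depth are arbitrary.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    """A square Cartesian cell in a quadtree; (x, y) is the lower-left corner."""
    x: float
    y: float
    size: float
    children: list = field(default_factory=list)

def straddles(c, cx, cy, r):
    """True if the cell crosses the circle boundary (a refinement flag)."""
    nx = min(max(cx, c.x), c.x + c.size)            # nearest point of the cell
    ny = min(max(cy, c.y), c.y + c.size)
    d_min = ((cx - nx) ** 2 + (cy - ny) ** 2) ** 0.5
    d_max = max(((cx - c.x - i * c.size) ** 2 +
                 (cy - c.y - j * c.size) ** 2) ** 0.5
                for i in (0, 1) for j in (0, 1))    # farthest corner
    return d_min < r < d_max

def refine(c, cx, cy, r, depth):
    """Recursively subdivide flagged cells, down to a given depth."""
    if depth == 0 or not straddles(c, cx, cy, r):
        return
    h = c.size / 2
    c.children = [Cell(c.x + i * h, c.y + j * h, h)
                  for i in (0, 1) for j in (0, 1)]
    for child in c.children:
        refine(child, cx, cy, r, depth - 1)

def leaves(c):
    return [c] if not c.children else [x for ch in c.children for x in leaves(ch)]

root = Cell(0.0, 0.0, 1.0)
refine(root, 0.5, 0.5, 0.3, depth=4)
print(len(leaves(root)))   # each split replaces 1 leaf with 4
```

    Real AMR frameworks add the hard parts this sketch omits: composite-grid solvers over the hierarchy, inter-level transfer, and parallel load balancing.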

  10. Human heat adaptation.

    PubMed

    Taylor, Nigel A S

    2014-01-01

    In this overview, human morphological and functional adaptations during naturally and artificially induced heat adaptation are explored. Through discussions of adaptation theory and practice, a theoretical basis is constructed for evaluating heat adaptation. It will be argued that some adaptations are specific to the treatment used, while others are generalized. Regarding ethnic differences in heat tolerance, the case is put that reported differences in heat tolerance are not due to natural selection, but can be explained on the basis of variations in adaptation opportunity. These concepts are expanded to illustrate how traditional heat adaptation and acclimatization represent forms of habituation, and thermal clamping (controlled hyperthermia) is proposed as a superior model for mechanistic research. Indeed, this technique has led to questioning the perceived wisdom of body-fluid changes, such as the expansion and subsequent decay of plasma volume, and sudomotor function, including sweat habituation and redistribution. Throughout, this contribution was aimed at taking another step toward understanding the phenomenon of heat adaptation and stimulating future research. In this regard, research questions are posed concerning the influence that variations in morphological configuration may exert upon adaptation, the determinants of postexercise plasma volume recovery, and the physiological mechanisms that modify the cholinergic sensitivity of sweat glands, and changes in basal metabolic rate and body core temperature following adaptation.

  11. S Solution

    NASA Astrophysics Data System (ADS)

    Dezhi, Zeng; Gang, Tian; Junying, Hu; Zhi, Zhang; Taihe, Shi; Wanying, Liu; Qiang, Lu; Shaobo, Feng

    2014-11-01

    During the drilling process, if oil and gas overflow containing H2S enters the drilling fluid, the performance of drill pipes declines significantly within a short time. In this paper, S135 drill pipe specimens were immersed in a saturated H2S solution at room temperature for 6, 12, 18, and 24 h, respectively. The tensile and impact properties of the S135 drill pipe were determined before and after immersion for comparison. In addition, S135 specimens were immersed for 3 days at 80 °C to determine the changes in fatigue performance. The test results indicated that the yield strength of the S135 material fluctuated as immersion time increased, while the tensile strength varied only slightly. However, the plasticity index of S135 decreased significantly with increasing immersion time, and the impact energy also fluctuated. After the 3-day immersion at 80 °C, the fatigue properties of S135 steel deteriorated: fatigue life differed by one order of magnitude under the same stress conditions, and fatigue strength decreased by about 10%. The study can guide the safe management of S135 drill pipe under working conditions with oil and gas overflow containing H2S, reduce drilling tool failures, and provide technical support for drilling safety.

  12. Life's Solution

    NASA Astrophysics Data System (ADS)

    Morris, Simon Conway

    2004-11-01

    Life's Solution builds a persuasive case for the predictability of evolutionary outcomes. The case rests on a remarkable compilation of examples of convergent evolution, in which two or more lineages have independently evolved similar structures and functions. The examples range from the aerodynamics of hovering moths and hummingbirds to the use of silk by spiders and some insects to capture prey. Going against the grain of Darwinian orthodoxy, this book is a must read for anyone grappling with the meaning of evolution and our place in the Universe. Simon Conway Morris is the Ad Hominem Professor in the Earth Science Department at the University of Cambridge and a Fellow of St. John's College and the Royal Society. His research focuses on the study of constraints on evolution, and the historical processes that lead to the emergence of complexity, especially with respect to the construction of the major animal body plans in the Cambrian explosion. His previous books include The Crucible of Creation (Getty Center for Education in the Arts, 1999) and, as co-author, Solnhofen (Cambridge, 1990). Hb ISBN (2003) 0-521-82704-3

  13. Life's Solution

    NASA Astrophysics Data System (ADS)

    Morris, Simon Conway

    2003-09-01

    Life's Solution builds a persuasive case for the predictability of evolutionary outcomes. The case rests on a remarkable compilation of examples of convergent evolution, in which two or more lineages have independently evolved similar structures and functions. The examples range from the aerodynamics of hovering moths and hummingbirds to the use of silk by spiders and some insects to capture prey. Going against the grain of Darwinian orthodoxy, this book is a must read for anyone grappling with the meaning of evolution and our place in the Universe. Simon Conway Morris is the Ad Hominem Professor in the Earth Science Department at the University of Cambridge and a Fellow of St. John's College and the Royal Society. His research focuses on the study of constraints on evolution, and the historical processes that lead to the emergence of complexity, especially with respect to the construction of the major animal body plans in the Cambrian explosion. His previous books include The Crucible of Creation (Getty Center for Education in the Arts, 1999) and, as co-author, Solnhofen (Cambridge, 1990). Hb ISBN (2003) 0-521-82704-3

  14. Percutaneous urinary procedures - discharge

    MedlinePlus

    ... x 4-inch gauze sponges, tape, connecting tube, hydrogen peroxide, and warm water (plus a clean container ... cotton swab soaked with a solution of half hydrogen peroxide and half warm water. Pat it dry ...

  15. Alloy solution hardening with solute pairs

    DOEpatents

    Mitchell, John W.

    1976-08-24

    Solution hardened alloys are formed by using at least two solutes which form associated solute pairs in the solvent metal lattice. Copper containing equal atomic percentages of aluminum and palladium is an example.

  16. Environmental Test Screening Procedure

    NASA Technical Reports Server (NTRS)

    Zeidler, Janet

    2000-01-01

    This procedure describes the methods to be used for environmental stress screening (ESS) of the Lightning Mapper Sensor (LMS) lens assembly. Unless otherwise specified, the procedures shall be completed in the order listed, prior to performance of the Acceptance Test Procedure (ATP). The first unit, S/N 001, will be subjected to the Qualification Vibration Levels, while the remainder will be tested at the Operational Level. Prior to ESS, all units will undergo Pre-ESS Functional Testing that includes measuring the on-axis and plus or minus 0.95 full field Modulation Transfer Function and Back Focal Length. Next, all units will undergo ESS testing, and then Acceptance testing per PR 460.

  17. Epoxy impregnation procedure for hardened-cement samples. Progress report

    SciTech Connect

    Struble, L.; Stutzman, P.

    1988-05-01

    A method was previously developed for epoxy impregnation of hydrated cementitious materials for microscopical examination without drying the samples, by sequentially replacing the pore solution with ethanol and then the ethanol with epoxy. During subsequent application of the procedure, many specimens were poorly cured. Studies were carried out to identify the cause of these problems and to modify the procedure for more reliable impregnation. Contamination with low levels (4%) of water or ethanol was found to prevent proper curing. Modifications in the procedure to prevent contamination, including monitoring the replacement of pore solution by ethanol, were shown to provide consistent and reliable impregnation.

  18. Antiseptic skin agents for percutaneous procedures.

    PubMed

    Lepor, Norman E; Madyoon, Hooman

    2009-01-01

    Infections associated with percutaneously implanted devices, such as pacemakers, internal cardiac defibrillators, and endovascular prostheses, create difficult and complex clinical scenarios because management can entail complete device removal, antibiotic therapy, and prolonged hospitalization. A source for pathogens is often thought to be the skin surface, making skin preparation at the time of the procedure a critical part of minimizing implantation of infected devices and prostheses. The most common skin preparation agents used today include products containing iodophors or chlorhexidine gluconate. Agents are further classified by whether they are aqueous-based or alcohol-based solutions. Traditional aqueous-based iodophors, such as povidone-iodine, are one of the few products that can be safely used on mucous membrane surfaces. Alcohol-based solutions are quick, sustained, and durable, with broader-spectrum antimicrobial activity. These agents seem ideal for percutaneous procedures associated with prosthesis implantation, when it is critical to minimize skin colony counts to prevent hardware infection.

  19. Arianespace streamlines launch procedures

    NASA Astrophysics Data System (ADS)

    Lenorovitch, Jeffrey M.

    1992-06-01

    Ariane has entered a new operational phase in which launch procedures have been enhanced to reduce the length of launch campaigns, lower mission costs, and increase operational availability/flexibility of the three-stage vehicle. The V50 mission utilized the first vehicle from a 50-launcher production lot ordered by Arianespace, and was the initial flight with a stretched third stage that enhances Ariane's performance. New operational procedures were introduced gradually over more than a year, starting with the V42 launch in January 1991.

  20. Mini-Bentall procedure

    PubMed Central

    2015-01-01

    An important goal in cardiovascular and thoracic surgery is reducing surgical trauma to achieve faster recovery for our patients. Mini-Bentall procedure encompasses aortic root and ascending aortic replacement with re-implantation of coronary buttons, performed via a mini-sternotomy. The skin incision extends from the angle of Louis to the third intercostal space, usually measuring 5-7 cm in length. Through this incision, it is possible to perform isolated aortic root surgery and/or hemi-arch replacement. The present illustrated article describes the technical details on how I perform a Mini-Bentall procedure with hemi-arch replacement. PMID:25870816

  1. Technology transfer for adaptation

    NASA Astrophysics Data System (ADS)

    Biagini, Bonizella; Kuhl, Laura; Gallagher, Kelly Sims; Ortiz, Claudia

    2014-09-01

    Technology alone will not be able to solve adaptation challenges, but it is likely to play an important role. As a result of the role of technology in adaptation and the importance of international collaboration for climate change, technology transfer for adaptation is a critical but understudied issue. Through an analysis of Global Environment Facility-managed adaptation projects, we find there is significantly more technology transfer occurring in adaptation projects than might be expected given the pessimistic rhetoric surrounding technology transfer for adaptation. Most projects focused on demonstration and early deployment/niche formation for existing technologies rather than earlier stages of innovation, which is understandable considering the pilot nature of the projects. Key challenges for the transfer process, including technology selection and appropriateness under climate change, markets and access to technology, and diffusion strategies are discussed in more detail.

  2. Origins of adaptive immunity.

    PubMed

    Liongue, Clifford; John, Liza B; Ward, Alister

    2011-01-01

    Adaptive immunity, involving distinctive antibody- and cell-mediated responses to specific antigens based on "memory" of previous exposure, is a hallmark of higher vertebrates. It has been argued that adaptive immunity arose rapidly, as articulated in the "big bang theory" surrounding its origins, which stresses the importance of coincident whole-genome duplications. Through a close examination of the key molecules and molecular processes underpinning adaptive immunity, this review suggests a less-extreme model, in which adaptive immunity emerged as part of a longer evolutionary journey. Clearly, whole-genome duplications provided additional raw genetic materials that were vital to the emergence of adaptive immunity, but a variety of other genetic events were also required to generate some of the key molecules, whereas others were preexisting and simply co-opted into adaptive immunity.

  3. Adaptation and visual coding

    PubMed Central

    Webster, Michael A.

    2011-01-01

    Visual coding is a highly dynamic process, continuously adapting to the current viewing context. The perceptual changes that result from adaptation to recently viewed stimuli remain a powerful and popular tool for analyzing sensory mechanisms and plasticity. Over the last decade, the footprints of this adaptation have been tracked to both higher and lower levels of the visual pathway and over a wider range of timescales, revealing that visual processing is much more adaptable than previously thought. This work has also revealed that the pattern of aftereffects is similar across many stimulus dimensions, pointing to common coding principles in which adaptation plays a central role. However, why visual coding adapts has yet to be fully answered. PMID:21602298

  4. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high-aspect-ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  5. Origins of adaptive immunity.

    PubMed

    Liongue, Clifford; John, Liza B; Ward, Alister

    2011-01-01

    Adaptive immunity, involving distinctive antibody- and cell-mediated responses to specific antigens based on "memory" of previous exposure, is a hallmark of higher vertebrates. It has been argued that adaptive immunity arose rapidly, as articulated in the "big bang theory" surrounding its origins, which stresses the importance of coincident whole-genome duplications. Through a close examination of the key molecules and molecular processes underpinning adaptive immunity, this review suggests a less-extreme model, in which adaptive immunity emerged as part of a longer evolutionary journey. Clearly, whole-genome duplications provided additional raw genetic materials that were vital to the emergence of adaptive immunity, but a variety of other genetic events were also required to generate some of the key molecules, whereas others were preexisting and simply co-opted into adaptive immunity. PMID:21395512

  6. Gravitational adaptation of animals

    NASA Technical Reports Server (NTRS)

    Smith, A. H.; Burton, R. R.

    1982-01-01

    The effect of gravitational adaptation is studied in a group of five Leghorn cocks which had become physiologically adapted to 2 G after 162 days of centrifugation. After this period of adaptation, they are periodically exposed to a 2 G field, accompanied by five previously unexposed hatch-mates, and the degree of retained acceleration adaptation is estimated from the decrease in lymphocyte frequency after 24 hr at 2 G. Results show that the previously adapted birds exhibit an 84% greater lymphopenia than the unexposed birds, and that the lymphocyte frequency does not decrease to a level below that found at the end of 162 days at 2 G. In addition, the capacity for adaptation to chronic acceleration is found to be highly heritable. An acceleration tolerant strain of birds shows lesser mortality during chronic acceleration, particularly in intermediate fields, although the result of acceleration selection is largely quantitative (a greater number of survivors) rather than qualitative (behavioral or physiological changes).

  7. Toddler test or procedure preparation

    MedlinePlus

    Preparing toddler for test/procedure; Test/procedure preparation - toddler; Preparing for a medical test or procedure - toddler ... Before the test, know that your child will probably cry. Even if you prepare, your child may feel some discomfort or ...

  8. Preschooler test or procedure preparation

    MedlinePlus

    Preparing preschoolers for test/procedure; Test/procedure preparation - preschooler ... Preparing children for medical tests can reduce their distress. It can also make them less likely to cry and resist the procedure. Research shows that ...

  9. Experimental adaptive Bayesian tomography

    NASA Astrophysics Data System (ADS)

    Kravtsov, K. S.; Straupe, S. S.; Radchenko, I. V.; Houlsby, N. M. T.; Huszár, F.; Kulik, S. P.

    2013-06-01

    We report an experimental realization of an adaptive quantum state tomography protocol. Our method takes advantage of a Bayesian approach to statistical inference and is naturally tailored for adaptive strategies. For pure states, we observe close to N^-1 scaling of infidelity with the overall number N of registered events, while the best nonadaptive protocols allow only N^-1/2 scaling. Experiments are performed for polarization qubits, but the approach is readily adapted to any dimension.
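
    The core of an adaptive Bayesian protocol is the loop "update the posterior, then choose the next measurement using it". A minimal classical caricature for a single phase-like parameter follows: a grid posterior with a cos^2 likelihood, and the adaptive rule "measure where the expected outcome is least certain". The true value, grid size, and adaptive rule are illustrative, not the experiment's.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 1.2                                  # unknown phase to estimate
grid = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)
post = np.full(grid.size, 1.0 / grid.size)        # uniform prior

def mean_angle(p):
    """Circular mean of a distribution over the angle grid."""
    return np.angle(np.sum(p * np.exp(1j * grid))) % (2 * np.pi)

for _ in range(300):
    # adaptive choice: offset by pi/2 so the outcome probability is near 1/2
    phi = (mean_angle(post) + np.pi / 2) % (2 * np.pi)
    p_plus = np.cos(0.5 * (theta_true - phi)) ** 2
    outcome = rng.random() < p_plus               # simulate one detection event
    like = np.cos(0.5 * (grid - phi)) ** 2
    post = post * (like if outcome else 1.0 - like)
    post /= post.sum()                            # Bayesian update

print(mean_angle(post))   # posterior mean settles near theta_true
```

    The adaptivity is in the choice of phi each round; a nonadaptive protocol would fix the measurement settings in advance.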

  10. Adaptive Pairing Reversible Watermarking.

    PubMed

    Dragoi, Ioan-Catalin; Coltuc, Dinu

    2016-05-01

    This letter revisits the pairwise reversible watermarking scheme of Ou et al., 2013. An adaptive pixel pairing that considers only pixels with similar prediction errors is introduced. This adaptive approach provides an increased number of pixel pairs where both pixels are embedded and decreases the number of shifted pixels. The adaptive pairwise reversible watermarking outperforms the state-of-the-art low embedding bit-rate schemes proposed so far.
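
    The reversible-watermarking substrate that pairing schemes build on can be shown with Tian's classic difference expansion: one payload bit is hidden in a pixel pair, and both the bit and the original pair are recovered exactly. This sketch shows only the per-pair transform, not the adaptive pairing or the overflow handling of the letter.

```python
def embed(x, y, bit):
    """Tian's difference expansion: hide one bit in a pixel pair by
    expanding the difference x - y (overflow control omitted)."""
    l = (x + y) // 2          # pair average (floor)
    h = 2 * (x - y) + bit     # expanded difference carries the payload bit
    return l + (h + 1) // 2, l - h // 2

def extract(x2, y2):
    """Recover the payload bit and the original pair exactly."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1
    h = h2 >> 1               # arithmetic shift == floor division by 2
    return l + (h + 1) // 2, l - h // 2, bit

x2, y2 = embed(100, 98, 1)
print((x2, y2), extract(x2, y2))   # (102, 97) (100, 98, 1)
```

    Pairing pixels with similar prediction errors, as the letter proposes, keeps the expanded differences small, so fewer pixels overflow and must be shifted instead of embedded.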

  11. Adaptation as organism design

    PubMed Central

    Gardner, Andy

    2009-01-01

    The problem of adaptation is to explain the apparent design of organisms. Darwin solved this problem with the theory of natural selection. However, population geneticists, whose responsibility it is to formalize evolutionary theory, have long neglected the link between natural selection and organismal design. Here, I review the major historical developments in theory of organismal adaptation, clarifying what adaptation is and what it is not, and I point out future avenues for research. PMID:19793739

  12. Digital adaptive sampling.

    NASA Technical Reports Server (NTRS)

    Breazeale, G. J.; Jones, L. E.

    1971-01-01

Discussion of digital adaptive sampling, which is consistently better than fixed sampling in noise-free cases. Adaptive sampling is shown to be feasible and merits further study. Note that adaptive sampling is a class of variable-rate sampling in which the variability depends on system signals. Digital rather than analog laws should be studied, because cases can arise in which the analog signals are not even available. Implementation remains an extremely important open problem.
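One possible variable-rate law of the kind described, sketched under assumed gains and bounds (the paper's specific sampling laws are not reproduced): shrink the next sampling interval when the local slope estimate is large.

```python
import math

def adaptive_sample(f, t0, t1, dt_min=0.01, dt_max=0.2, gain=0.05):
    """Variable-rate sampling: the next interval is inversely
    proportional to the local slope estimate, clipped to
    [dt_min, dt_max]. The law and gains are illustrative."""
    ts = [t0]
    t = t0
    prev = f(t)
    dt = dt_max
    while t < t1:
        t = min(t + dt, t1)      # never overshoot the end time
        cur = f(t)
        slope = abs(cur - prev) / dt
        dt = max(dt_min, min(dt_max, gain / (slope + 1e-9)))
        prev = cur
        ts.append(t)
    return ts

ts = adaptive_sample(math.sin, 0.0, 2 * math.pi)
```

The sampler places its points densely where the signal changes quickly and coasts at the maximum interval where it is nearly flat.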

  13. An Adaptively-Refined, Cartesian, Cell-Based Scheme for the Euler and Navier-Stokes Equations. Ph.D. Thesis - Michigan Univ.

    NASA Technical Reports Server (NTRS)

    Coirier, William John

    1994-01-01

A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other and to experimental results and/or theory for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a
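The recursive-subdivision idea can be sketched with a quadtree refined toward a circular body; the cut-cell test below (nearest/farthest distance to the circle) is an illustrative stand-in for the thesis' polygon clipping and binary-tree storage:

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    x0: float
    y0: float
    size: float
    children: list = field(default_factory=list)

def is_cut(c, cx=0.5, cy=0.5, r=0.3):
    """A cell is 'cut' if the circle boundary passes through it: the
    nearest point of the cell lies inside the circle, the farthest
    corner lies outside."""
    nx = min(max(cx, c.x0), c.x0 + c.size)
    ny = min(max(cy, c.y0), c.y0 + c.size)
    dmin = ((nx - cx) ** 2 + (ny - cy) ** 2) ** 0.5
    corners = [(c.x0 + dx, c.y0 + dy)
               for dx in (0.0, c.size) for dy in (0.0, c.size)]
    dmax = max(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in corners)
    return dmin < r < dmax

def refine(cell, max_depth, depth=0):
    """Recursively subdivide every cell that the body boundary cuts."""
    if depth >= max_depth or not is_cut(cell):
        return
    h = cell.size / 2.0
    for dx in (0.0, h):
        for dy in (0.0, h):
            child = Cell(cell.x0 + dx, cell.y0 + dy, h)
            cell.children.append(child)
            refine(child, max_depth, depth + 1)

def leaves(cell):
    if not cell.children:
        return [cell]
    return [leaf for ch in cell.children for leaf in leaves(ch)]

root = Cell(0.0, 0.0, 1.0)      # single cell encompassing the domain
refine(root, max_depth=3)
```

Only cells straddling the body boundary are subdivided, so the finest cells cluster along the circle exactly as the abstract describes for cut cells.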

  14. Evaluation of truncation error and adaptive grid generation for the transonic full potential flow calculations

    NASA Technical Reports Server (NTRS)

    Nakamura, S.

    1983-01-01

    The effects of truncation error on the numerical solution of transonic flows using the full potential equation are studied. The effects of adapting grid point distributions to various solution aspects including shock waves is also discussed. A conclusion is that a rapid change of grid spacing is damaging to the accuracy of the flow solution. Therefore, in a solution adaptive grid application an optimal grid is obtained as a tradeoff between the amount of grid refinement and the rate of grid stretching.
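The refinement-versus-stretching tradeoff can be illustrated by 1-D equidistribution of an assumed weight function (standing in for a solution-based indicator such as shock strength), then inspecting the resulting stretching ratio:

```python
import numpy as np

def adapted_grid(n=50, shock=0.5, width=0.05):
    """Equidistribute a refinement weight that is large near `shock`.
    The weight function is an illustrative assumption."""
    x = np.linspace(0.0, 1.0, 2001)
    weight = 1.0 + 5.0 * np.exp(-((x - shock) / width) ** 2)
    cdf = np.cumsum(weight)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])
    # invert the cumulative weight: equal cdf increments -> grid points
    return np.interp(np.linspace(0.0, 1.0, n + 1), cdf, x)

grid = adapted_grid()
h = np.diff(grid)
# ratio of neighbouring spacings: the "stretching" the paper warns about
max_stretch = float(np.maximum(h[1:] / h[:-1], h[:-1] / h[1:]).max())
```

Monitoring `max_stretch` is one way to enforce the tradeoff the abstract concludes with: more aggressive clustering buys resolution at the cost of faster spacing changes.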

  15. Adaptation of Selenastrum capricornutum (Chlorophyceae) to copper

    USGS Publications Warehouse

    Kuwabara, J.S.; Leland, H.V.

    1986-01-01

Selenastrum capricornutum Printz, growing in a chemically defined medium, was used as a model for studying adaptation of algae to a toxic metal (copper) ion. Cells exhibited lag-phase adaptation to 0.8 µM total Cu (10^-12 M free ion concentration) after 20 generations of Cu exposure. Selenastrum adapted to the same concentration when Cu was gradually introduced over an 8-h period using a specially designed apparatus that provided a transient increase in exposure concentration. Cu adaptation was not attributable to media conditioning by algal exudates. Duration of lag phase was a more sensitive index of copper toxicity to Selenastrum than was growth rate or stationary-phase cell density under the experimental conditions used. Chemical speciation of the Cu dosing solution influenced the duration of lag phase even when media formulations were identical after dosing. Selenastrum initially exposed to Cu in a CuCl2 injection solution exhibited a lag phase of 3.9 d, but this was reduced to 1.5 d when a CuEDTA solution was used to achieve the same total Cu and EDTA concentrations. Physical and chemical processes that accelerated the rate of increase in cupric ion concentration generally increased the duration of lag phase. © 1986.

  16. Quantifying the Adaptive Cycle.

    PubMed

    Angeler, David G; Allen, Craig R; Garmestani, Ahjond S; Gunderson, Lance H; Hjerne, Olle; Winder, Monika

    2015-01-01

The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994-2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems. PMID:26716453

  17. Quantifying the adaptive cycle

    USGS Publications Warehouse

    Angeler, David G.; Allen, Craig R.; Garmestani, Ahjond S.; Gunderson, Lance H.; Hjerne, Olle; Winder, Monika

    2015-01-01

The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994–2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.

  18. Human adaptation to smog

    SciTech Connect

Evans, G.W.; Jacobs, S.V.; Frager, N.B.

    1982-10-01

This study examined the health effects of human adaptation to photochemical smog. A group of recent arrivals to the Los Angeles air basin were compared to long-term residents of the basin. Evidence for adaptation included greater irritation and respiratory problems among the recent arrivals, and desensitization among the long-term residents in their judgments of the severity of the smog problem for their health. There was no evidence for biochemical adaptation as measured by hemoglobin response to oxidant challenge. The results were discussed in terms of psychological adaptation to chronic environmental stressors.

  19. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  20. Quantifying the Adaptive Cycle

    PubMed Central

    Angeler, David G.; Allen, Craig R.; Garmestani, Ahjond S.; Gunderson, Lance H.; Hjerne, Olle; Winder, Monika

    2015-01-01

The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994–2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems. PMID:26716453

  1. Decentralized adaptive control

    NASA Technical Reports Server (NTRS)

    Oh, B. J.; Jamshidi, M.; Seraji, H.

    1988-01-01

A decentralized adaptive control is proposed to stabilize and track the nonlinear, interconnected subsystems with unknown parameters. The adaptation of the controller gain is derived by using model reference adaptive control theory based on Lyapunov's direct method. The adaptive gains consist of sigma, proportional, and integral combinations of the measured and reference values of the corresponding subsystem. The proposed control is applied to the joint control of a two-link robot manipulator, and its performance in computer simulation agrees with theoretical expectations.
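For a scalar plant, the model-reference adaptation idea can be sketched with Lyapunov-rule gain updates; this is a simplification of the paper's decentralized sigma/proportional/integral law, with all numbers chosen for illustration:

```python
# Scalar MRAC sketch: plant x' = a*x + b*u with (a, b) unknown to the
# controller; stable reference model xm' = am*xm + bm*r. Lyapunov-rule
# updates drive the adaptive gains so that x tracks xm.
a, b = 1.0, 1.0
am, bm = -2.0, 2.0
gamma = 5.0                  # adaptation rate (illustrative)
dt, steps = 0.001, 20000     # forward-Euler simulation, 20 s
x = xm = 0.0
kx = kr = 0.0                # adaptive feedback / feedforward gains
errs = []
for i in range(steps):
    t = i * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0     # square-wave reference
    u = kx * x + kr * r
    e = x - xm
    kx -= gamma * e * x * dt                 # Lyapunov-based updates
    kr -= gamma * e * r * dt
    x += (a * x + b * u) * dt
    xm += (am * xm + bm * r) * dt
    errs.append(abs(e))

late_error = sum(errs[-2000:]) / 2000.0      # mean |e| over last 2 s
```

The square-wave reference keeps the system persistently excited, so the gains approach the matching values kx = (am - a)/b and kr = bm/b and the tracking error shrinks.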

  2. Impact of adaptation time on contrast sensitivity

    NASA Astrophysics Data System (ADS)

    Apelt, Dörte; Strasburger, Hans; Klein, Jan; Preim, Bernhard

    2010-02-01

    For softcopy-reading of mammograms, a room illuminance of 10 lx is recommended in standard procedures. Room illuminance affects both the maximal monitor contrast and the global luminance adaptation of the visual system. A radiologist observer has to adapt to low luminance levels, when entering the reading room. Since the observer's sensitivity to low-contrast patterns depends on adaptation state and processes, it would be expected that the contrast sensitivity is lower at the beginning of a reading session. We investigated the effect of an initial time of dark adaptation on the contrast sensitivity. A study with eight observers was conducted in the context of mammographic softcopy-reading. Using Gabor patterns with varying spatial frequency, orientation, and contrast level as stimuli in an orientation discrimination task, the intra-observer contrast sensitivity was determined for foveal vision. Before performing the discrimination task, the observers adapted for two minutes to an average illuminance of 450 lx. Thereafter, contrast thresholds were repeatedly measured at 10 lx room illuminance over a course of 15 minutes. The results show no significant variations in contrast sensitivity during the 15 minutes period. Thus, it can be concluded that taking an initial adaptation time does not affect the perception of lowcontrast objects in mammographic images presented in the typical softcopy-reading environment. Therefore, the reading performance would not be negatively influenced when the observer started immediately with reading of mammograms. The results can be used to optimize the workflow in the radiology reading room.

  3. PROCESS OF ELIMINATING HYDROGEN PEROXIDE IN SOLUTIONS CONTAINING PLUTONIUM VALUES

    DOEpatents

    Barrick, J.G.; Fries, B.A.

    1960-09-27

    A procedure is given for peroxide precipitation processes for separating and recovering plutonium values contained in an aqueous solution. When plutonium peroxide is precipitated from an aqueous solution, the supernatant contains appreciable quantities of plutonium and peroxide. It is desirable to process this solution further to recover plutonium contained therein, but the presence of the peroxide introduces difficulties; residual hydrogen peroxide contained in the supernatant solution is eliminated by adding a nitrite or a sulfite to this solution.

  4. Simulating Laboratory Procedures.

    ERIC Educational Resources Information Center

    Baker, J. E.; And Others

    1986-01-01

    Describes the use of computer assisted instruction in a medical microbiology course. Presents examples of how computer assisted instruction can present case histories in which the laboratory procedures are simulated. Discusses an authoring system used to prepare computer simulations and provides one example of a case history dealing with fractured…

  5. Least Squares Procedures.

    ERIC Educational Resources Information Center

    Hester, Yvette

    Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least square…
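The "line of best fit" mentioned above can be computed directly; a minimal example with made-up data:

```python
import numpy as np

# Ordinary least squares fit of y ~ intercept + slope * x.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
A = np.column_stack([np.ones_like(x), x])     # design matrix [1 | x]
(intercept, slope), *_ = np.linalg.lstsq(A, y, rcond=None)
```

The same coefficients fall out of the normal equations A^T A c = A^T y, which is the "curve fitting" machinery underlying the regression line.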

  6. Advanced intrarenal ureteroscopic procedures.

    PubMed

    Monga, Manoj; Beeman, William W

    2004-02-01

The role of flexible ureteroscopy in the management of intrarenal pathology has undergone a dramatic evolution, powered by improvements in flexible ureteroscope design, deflection, and image quality; the diversification of small, disposable instrumentation; and the use of holmium laser lithotripsy. This article reviews the application of flexible ureteroscopy for advanced intrarenal procedures.

  7. Visual Screening: A Procedure.

    ERIC Educational Resources Information Center

    Williams, Robert T.

    Vision is a complex process involving three phases: physical (acuity), physiological (integrative), and psychological (perceptual). Although these phases cannot be considered discrete, they provide the basis for the visual screening procedure used by the Reading Services of Colorado State University and described in this document. Ten tests are…

  8. Student Loan Collection Procedures.

    ERIC Educational Resources Information Center

    National Association of College and University Business Officers, Washington, DC.

    This manual on the collection of student loans is intended for the use of business officers and loan collection personnel of colleges and universities of all sizes. The introductory chapter is an overview of sound collection practices and procedures. It discusses the making of a loan, in-school servicing of the accounts, the exit interview, the…

  9. PLATO Courseware Development Procedures.

    ERIC Educational Resources Information Center

    Mahler, William A.; And Others

    This is an exploratory study of methods for the preparation of computer curriculum materials. It deals with courseware development procedures for the PLATO IV computer-based education system, and draws on interviews with over 100 persons engaged in courseware production. The report presents a five stage model of development: (1) planning, (2)…

  10. Parliamentary Procedure Made Easy.

    ERIC Educational Resources Information Center

    Hayden, Ellen T.

    Based on the newly revised "Robert's Rules of Order," these self-contained learning activities will help students successfully and actively participate in school, social, civic, political, or professional organizations. There are 13 lessons. Topics studied include the what, why, and history of parliamentary procedure; characteristics of the ideal…

  11. Grievance Procedure Problems.

    ERIC Educational Resources Information Center

    Green, Gary J.

    This paper presents two actual problems involving grievance procedures. Both problems involve pending litigation and one of them involves pending arbitration. The first problem occurred in a wealthy Minnesota school district and involved a seniority list. Because of changes in the financial basis for supporting public schools, it became necessary…

  12. Procedures and Policies Manual

    ERIC Educational Resources Information Center

    Davis, Jane M.

    2006-01-01

    This document was developed by the Middle Tennessee State University James E. Walker Library Collection Management Department to provide policies and procedural guidelines for the cataloging and processing of bibliographic materials. This document includes policies for cataloging monographs, serials, government documents, machine-readable data…

  13. Educational Accounting Procedures.

    ERIC Educational Resources Information Center

    Tidwell, Sam B.

    This chapter of "Principles of School Business Management" reviews the functions, procedures, and reports with which school business officials must be familiar in order to interpret and make decisions regarding the school district's financial position. Among the accounting functions discussed are financial management, internal auditing, annual…

  14. Terrestrial photovoltaic measurement procedures

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Procedures for obtaining cell and array current-voltage measurements both outdoors in natural sunlight and indoors in simulated sunlight are presented. A description of the necessary apparatus and equipment is given for the calibration and use of reference solar cells. Some comments relating to concentration cell measurements, and a revised terrestrial solar spectrum for use in theoretical calculations, are included.

  15. Attractor mechanism as a distillation procedure

    SciTech Connect

    Levay, Peter; Szalay, Szilard

    2010-07-15

    In a recent paper it was shown that for double extremal static spherical symmetric BPS black hole solutions in the STU model the well-known process of moduli stabilization at the horizon can be recast in a form of a distillation procedure of a three-qubit entangled state of a Greenberger-Horne-Zeilinger type. By studying the full flow in moduli space in this paper we investigate this distillation procedure in more detail. We introduce a three-qubit state with amplitudes depending on the conserved charges, the warp factor, and the moduli. We show that for the recently discovered non-BPS solutions it is possible to see how the distillation procedure unfolds itself as we approach the horizon. For the non-BPS seed solutions at the asymptotically Minkowski region we are starting with a three-qubit state having seven nonequal nonvanishing amplitudes and finally at the horizon we get a Greenberger-Horne-Zeilinger state with merely four nonvanishing ones with equal magnitudes. The magnitude of the surviving nonvanishing amplitudes is proportional to the macroscopic black hole entropy. A systematic study of such attractor states shows that their properties reflect the structure of the fake superpotential. We also demonstrate that when starting with the very special values for the moduli corresponding to flat directions the uniform structure at the horizon deteriorates due to errors generalizing the usual bit flips acting on the qubits of the attractor states.

  16. Enhanced quantum procedures that resolve difficult problems

    NASA Astrophysics Data System (ADS)

    Klauder, John R.

    2015-06-01

A careful study of the classical/quantum connection with the aid of coherent states offers new insights into various technical problems. This analysis includes both canonical as well as closely related affine quantization procedures. The new tools are applied to several examples including: (1) A quantum formulation that is invariant under arbitrary classical canonical transformations of coordinates; (2) A toy model that for all positive energy solutions has singularities which are removed at the classical level when the correct quantum corrections are applied; (3) A fairly simple model field theory with non-trivial classical behavior that, when conventionally quantized, becomes trivial, but nevertheless finds a proper solution using the enhanced procedures; (4) A model of scalar field theories with non-trivial classical behavior that, when conventionally quantized, becomes trivial, but nevertheless finds a proper solution using the enhanced procedures; (5) A viable formulation of the kinematics of quantum gravity that respects the strict positivity of the spatial metric in both its classical and quantum versions; and (6) A proposal for a non-trivial quantization of φ⁴₄ that is ripe for study by Monte Carlo computational methods. All of these examples use fairly general arguments that can be understood by a broad audience.

  17. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

An adaptive spectral method was developed for the efficient solution of time-dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
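The refine-where-the-estimator-flags pattern can be sketched in 1-D, using midpoint interpolation error as a stand-in estimator (the paper's spectral error estimators are more sophisticated):

```python
import numpy as np

def refine_by_estimator(xs, f, tol):
    """One refinement pass: split every interval whose local error
    indicator (midpoint interpolation error) exceeds tol."""
    new = [xs[0]]
    for a, b in zip(xs[:-1], xs[1:]):
        mid = 0.5 * (a + b)
        indicator = abs(f(mid) - 0.5 * (f(a) + f(b)))
        if indicator > tol:
            new.append(mid)        # refine: insert the midpoint
        new.append(b)
    return np.array(new)

f = lambda x: np.tanh(20.0 * (x - 0.5))   # sharp interior layer
xs = np.linspace(0.0, 1.0, 11)            # coarse initial mesh
for _ in range(5):
    xs = refine_by_estimator(xs, f, tol=1e-2)
```

Cells accumulate around the steep layer at x = 0.5 while the flat regions keep the coarse spacing, mirroring how sharp gradients attract resolution as they develop.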

  18. Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)

    2000-01-01

    This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.
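The adaptive quadrature step itself follows the familiar pattern of recursive subdivision until a local error estimate meets tolerance; a standard adaptive Simpson routine is one such procedure (not the scheme's actual quadrature):

```python
import math

def adaptive_simpson(f, a, b, tol=1e-8):
    """Recursive adaptive Simpson quadrature: subdivide an interval
    until the local Richardson error estimate is below tolerance."""
    def simpson(a, b, fa, fm, fb):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def rec(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(a, m, fa, flm, fm)
        right = simpson(m, b, fm, frm, fb)
        if abs(left + right - whole) <= 15.0 * tol:
            # accept, with Richardson extrapolation correction
            return left + right + (left + right - whole) / 15.0
        return (rec(a, m, fa, flm, fm, left, 0.5 * tol) +
                rec(m, b, fm, frm, fb, right, 0.5 * tol))

    fa, fb = f(a), f(b)
    fm = f(0.5 * (a + b))
    return rec(a, b, fa, fm, fb, simpson(a, b, fa, fm, fb), tol)

val = adaptive_simpson(math.sin, 0.0, math.pi)   # exact integral is 2
```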

  19. Physiologic adaptation to space - Space adaptation syndrome

    NASA Technical Reports Server (NTRS)

    Vanderploeg, J. M.

    1985-01-01

    The adaptive changes of the neurovestibular system to microgravity, which result in space motion sickness (SMS), are studied. A list of symptoms, which range from vomiting to drowsiness, is provided. The two patterns of symptom development, rapid and gradual, and the duration of the symptoms are described. The concept of sensory conflict and rearrangements to explain SMS is being investigated.

  20. Neural Adaptation Effects in Conceptual Processing

    PubMed Central

    Marino, Barbara F. M.; Borghi, Anna M.; Gemmi, Luca; Cacciari, Cristina; Riggio, Lucia

    2015-01-01

    We investigated the conceptual processing of nouns referring to objects characterized by a highly typical color and orientation. We used a go/no-go task in which we asked participants to categorize each noun as referring or not to natural entities (e.g., animals) after a selective adaptation of color-edge neurons in the posterior LV4 region of the visual cortex was induced by means of a McCollough effect procedure. This manipulation affected categorization: the green-vertical adaptation led to slower responses than the green-horizontal adaptation, regardless of the specific color and orientation of the to-be-categorized noun. This result suggests that the conceptual processing of natural entities may entail the activation of modality-specific neural channels with weights proportional to the reliability of the signals produced by these channels during actual perception. This finding is discussed with reference to the debate about the grounded cognition view. PMID:26264031

  1. [Surgical procedures for bone neoplasms in children].

    PubMed

    Woźniak, W

    1991-01-01

The treatment of 40 patients with bone tumors is presented. The primary tumors were located in the following sites: femur (14), tibia (8), fibula (4), humerus (4), scapula (1), clavicle (2), pelvis (5), hand (1). The histologic types were: osteosarcoma (18), Ewing's sarcoma (14), chondrosarcoma (2), fibrosarcoma (1), synovial sarcoma (1), chondroblastoma (4). For the most frequent malignant bone tumors, osteosarcoma and Ewing's sarcoma, a unified management was adopted. The treatment was initiated with multidrug chemotherapy and followed by surgery or radiotherapy (Ewing's sarcoma) of the primary site. Surgery was performed in 30 cases: 19 mutilating operations because of broad local invasion, and 11 conservative (limb-salvage) procedures. Satisfactory oncological and functional effects can be achieved with limb-salvage procedures in cases of localized, especially semimalignant, bone tumors. PMID:1369876

  2. Adaptive Peer Sampling with Newscast

    NASA Astrophysics Data System (ADS)

    Tölgyesi, Norbert; Jelasity, Márk

The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates, which can result in very poor random sampling, especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters: without involving failure detectors, nodes passively monitor local protocol events and use them as feedback in a local control loop that self-tunes the protocol parameters. The proposed solution is evaluated by simulation experiments.
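The gossip exchange underlying Newscast can be sketched as follows; this simplification (synchronous rounds, no message loss, ages as freshness, illustrative constants) omits the adaptive parameter self-tuning the paper proposes:

```python
import random

random.seed(42)
N, C, ROUNDS = 50, 8, 40

# views[i]: list of (peer_id, age); start from a ring so the overlay
# begins highly non-random
views = {i: [((i + k + 1) % N, 0) for k in range(C)] for i in range(N)}

def gossip_round():
    for i in range(N):
        j = random.choice(views[i])[0]      # pick a gossip partner
        # both sides merge the two views plus fresh entries for each
        # other, then keep the C freshest distinct peers
        merged = views[i] + views[j] + [(i, 0), (j, 0)]
        for node in (i, j):
            best = {}
            for p, age in merged:
                if p != node and (p not in best or age < best[p]):
                    best[p] = age
            freshest = sorted(best.items(), key=lambda kv: kv[1])[:C]
            views[node] = [(p, age + 1) for p, age in freshest]

for _ in range(ROUNDS):
    gossip_round()
```

After a few dozen rounds each node's view contains peers far outside its initial ring neighbourhood, which is the random-sampling behaviour the service is meant to provide.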

  3. Coherent optical adaptive techniques.

    PubMed

    Bridges, W B; Brunner, P T; Lazzara, S P; Nussmeier, T A; O'Meara, T R; Sanguinet, J A; Brown, W P

    1974-02-01

    The theory of multidither adaptive optical radar phased arrays is briefly reviewed as an introduction to the experimental results obtained with seven-element linear and three-element triangular array systems operating at 0.6328 microm. Atmospheric turbulence compensation and adaptive tracking capabilities are demonstrated.

  4. Research, Adaptation, & Change.

    ERIC Educational Resources Information Center

    Morris, Lee A., Ed.; And Others

    Research adaptation is an endeavor that implies solid collaboration among school practitioners and university and college researchers. This volume addresses the broad issues of research as an educational endeavor, adaptation as a necessary function associated with applying research findings to school situations, and change as an inevitable…

  5. Uncertainty in adaptive capacity

    NASA Astrophysics Data System (ADS)

    Adger, W. Neil; Vincent, Katharine

    2005-03-01

    The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, and literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. To cite this article: W.N. Adger, K. Vincent, C. R. Geoscience 337 (2005).

  6. [Postvagotomy adaptation syndrome].

    PubMed

    Shapovalov, V A

    1998-01-01

    Experiments established that the changes in the indices of the organism's natural resistance and in peritoneal cavity cytology have a compensatory-adaptive character during the onset and progression of the denervation-adaptation syndrome, which may be assessed as eustress. Vagotomy and operative trauma cause qualitatively different reactions of the organism.

  7. Adaptive Sampling Proxy Application

    2012-10-22

    ASPA is an implementation of an adaptive sampling algorithm [1-3], which is used to reduce the computational expense of computer simulations that couple disparate physical scales. The purpose of ASPA is to encapsulate the algorithms required for adaptive sampling independently from any specific application, so that alternative algorithms and programming models for exascale computers can be investigated more easily.
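
    The core idea, independent of ASPA's actual interfaces, can be illustrated with a toy surrogate database in Python (the class name and tolerance below are hypothetical, for exposition only): queries close to a previously computed fine-scale result reuse it, and only genuinely new queries pay for the expensive model.

```python
def expensive_fine_scale_model(x):
    # stand-in for a costly lower-scale simulation
    return x * x

class AdaptiveSampler:
    """Answer queries from a database of past evaluations when a stored
    point lies within `tol` of the query; otherwise fall back to the
    expensive model and remember the result (illustrative, not ASPA's API)."""
    def __init__(self, tol=0.05):
        self.tol = tol
        self.db = []          # list of (x, f(x)) pairs
        self.model_calls = 0

    def evaluate(self, x):
        for xs, fs in self.db:
            if abs(x - xs) <= self.tol:
                return fs            # cheap surrogate answer
        self.model_calls += 1
        fx = expensive_fine_scale_model(x)
        self.db.append((x, fx))
        return fx

sampler = AdaptiveSampler(tol=0.05)
queries = [0.10, 0.11, 0.13, 0.50, 0.52, 0.90]
results = [sampler.evaluate(q) for q in queries]
# only queries far from every stored point trigger the fine-scale model
```

    Production adaptive-sampling schemes replace the nearest-point lookup with interpolation and error control, but the cost structure is the same: the database absorbs repeated queries so the fine-scale model runs rarely.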

  8. Water Resource Adaptation Program

    EPA Science Inventory

    The Water Resource Adaptation Program (WRAP) contributes to the U.S. Environmental Protection Agency’s (U.S. EPA) efforts to provide water resource managers and decision makers with the tools needed to adapt water resources to demographic and economic development, and future clim...

  9. Retinal Imaging: Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Goncharov, A. S.; Iroshnikov, N. G.; Larichev, Andrey V.

    This chapter describes several factors influencing the performance of ophthalmic diagnostic systems with adaptive optics compensation of human eye aberration. Particular attention is paid to speckle modulation, temporal behavior of aberrations, and anisoplanatic effects. The implementation of a fundus camera with adaptive optics is considered.

  10. Adaptive management for a turbulent future.

    PubMed

    Allen, Craig R; Fontaine, Joseph J; Pope, Kevin L; Garmestani, Ahjond S

    2011-05-01

    The challenges that face humanity today differ from the past because as the scale of human influence has increased, our biggest challenges have become global in nature, and formerly local problems that could be addressed by shifting populations or switching resources, now aggregate (i.e., "scale up") limiting potential management options. Adaptive management is an approach to natural resource management that emphasizes learning through management based on the philosophy that knowledge is incomplete and much of what we think we know is actually wrong. Adaptive management has explicit structure, including careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. It is evident that adaptive management has matured, but it has also reached a crossroads. Practitioners and scientists have developed adaptive management and structured decision making techniques, and mathematicians have developed methods to reduce the uncertainties encountered in resource management, yet there continues to be misapplication of the method and misunderstanding of its purpose. Ironically, the confusion over the term "adaptive management" may stem from the flexibility inherent in the approach, which has resulted in multiple interpretations of "adaptive management" that fall along a continuum of complexity and a priori design. Adaptive management is not a panacea for the navigation of 'wicked problems' as it does not produce easy answers, and is only appropriate in a subset of natural resource management problems where both uncertainty and controllability are high. Nonetheless, the conceptual underpinnings of adaptive management are simple; there will always be inherent uncertainty and unpredictability in the dynamics and behavior of complex social-ecological systems, but management decisions must still be made, and whenever possible, we should incorporate

  11. Adaptive management for a turbulent future

    USGS Publications Warehouse

    Allen, C.R.; Fontaine, J.J.; Pope, K.L.; Garmestani, A.S.

    2011-01-01

    The challenges that face humanity today differ from the past because as the scale of human influence has increased, our biggest challenges have become global in nature, and formerly local problems that could be addressed by shifting populations or switching resources, now aggregate (i.e., "scale up") limiting potential management options. Adaptive management is an approach to natural resource management that emphasizes learning through management based on the philosophy that knowledge is incomplete and much of what we think we know is actually wrong. Adaptive management has explicit structure, including careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. It is evident that adaptive management has matured, but it has also reached a crossroads. Practitioners and scientists have developed adaptive management and structured decision making techniques, and mathematicians have developed methods to reduce the uncertainties encountered in resource management, yet there continues to be misapplication of the method and misunderstanding of its purpose. Ironically, the confusion over the term "adaptive management" may stem from the flexibility inherent in the approach, which has resulted in multiple interpretations of "adaptive management" that fall along a continuum of complexity and a priori design. Adaptive management is not a panacea for the navigation of 'wicked problems' as it does not produce easy answers, and is only appropriate in a subset of natural resource management problems where both uncertainty and controllability are high. Nonetheless, the conceptual underpinnings of adaptive management are simple; there will always be inherent uncertainty and unpredictability in the dynamics and behavior of complex social-ecological systems, but management decisions must still be made, and whenever possible, we should incorporate

  12. Adaptive Management for a Turbulent Future

    USGS Publications Warehouse

    Allen, Craig R.; Fontaine, Joseph J.; Pope, Kevin L.; Garmestani, Ahjond S.

    2011-01-01

    The challenges that face humanity today differ from the past because as the scale of human influence has increased, our biggest challenges have become global in nature, and formerly local problems that could be addressed by shifting populations or switching resources, now aggregate (i.e., "scale up") limiting potential management options. Adaptive management is an approach to natural resource management that emphasizes learning through management based on the philosophy that knowledge is incomplete and much of what we think we know is actually wrong. Adaptive management has explicit structure, including careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. It is evident that adaptive management has matured, but it has also reached a crossroads. Practitioners and scientists have developed adaptive management and structured decision making techniques, and mathematicians have developed methods to reduce the uncertainties encountered in resource management, yet there continues to be misapplication of the method and misunderstanding of its purpose. Ironically, the confusion over the term "adaptive management" may stem from the flexibility inherent in the approach, which has resulted in multiple interpretations of "adaptive management" that fall along a continuum of complexity and a priori design. Adaptive management is not a panacea for the navigation of 'wicked problems' as it does not produce easy answers, and is only appropriate in a subset of natural resource management problems where both uncertainty and controllability are high. Nonetheless, the conceptual underpinnings of adaptive management are simple; there will always be inherent uncertainty and unpredictability in the dynamics and behavior of complex social-ecological systems, but management decisions must still be made, and whenever possible, we should incorporate

  13. Robust adaptive control of MEMS triaxial gyroscope using fuzzy compensator.

    PubMed

    Fei, Juntao; Zhou, Jian

    2012-12-01

    In this paper, a robust adaptive control strategy using a fuzzy compensator for MEMS triaxial gyroscope, which has system nonlinearities, including model uncertainties and external disturbances, is proposed. A fuzzy logic controller that could compensate for the model uncertainties and external disturbances is incorporated into the adaptive control scheme in the Lyapunov framework. The proposed adaptive fuzzy controller can guarantee the convergence and asymptotical stability of the closed-loop system. The proposed adaptive fuzzy control strategy does not depend on accurate mathematical models, which simplifies the design procedure. The innovative development of intelligent control methods incorporated with conventional control for the MEMS gyroscope is derived with the strict theoretical proof of the Lyapunov stability. Numerical simulations are investigated to verify the effectiveness of the proposed adaptive fuzzy control scheme and demonstrate the satisfactory tracking performance and robustness against model uncertainties and external disturbances compared with conventional adaptive control method.
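
    As a much-simplified illustration of online gain adaptation (a scalar gradient/MIT-rule sketch, not the paper's Lyapunov-based fuzzy design; all plant numbers below are invented): the controller gain is adjusted continuously so that the closed loop tracks a reference model despite an unknown plant gain.

```python
def simulate(steps=3000, dt=0.01, gamma=5.0):
    a_true = 2.0          # unknown plant gain: x' = a_true * u
    a_ref = 1.0           # reference model: xm' = -a_ref*xm + a_ref*r
    r = 1.0               # constant reference command
    x = xm = 0.0
    k = 0.1               # adaptive feedback gain estimate
    for _ in range(steps):
        u = k * (r - x)                 # proportional control
        x += dt * (a_true * u)          # plant (forward Euler)
        xm += dt * (-a_ref * xm + a_ref * r)  # reference model
        e = x - xm                      # tracking error vs. the model
        k -= dt * gamma * e * (r - x)   # gradient-style gain adaptation
    return x, xm, k

x, xm, k = simulate()
```

    The fuzzy compensator in the paper plays a role analogous to an extra adaptive term that absorbs model uncertainties and external disturbances, with stability established in the Lyapunov framework rather than by the heuristic gradient rule used here.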

  14. Vortical Flow Prediction using an Adaptive Unstructured Grid Method. Chapter 11

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2009-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  15. Assessing Children's Implicit Attitudes Using the Affect Misattribution Procedure

    ERIC Educational Resources Information Center

    Williams, Amanda; Steele, Jennifer R.; Lipman, Corey

    2016-01-01

    In the current research, we examined whether the Affect Misattribution Procedure (AMP) could be successfully adapted as an implicit measure of children's attitudes. We tested this possibility in 3 studies with 5- to 10-year-old children. In Study 1, we found evidence that children misattribute affect elicited by attitudinally positive (e.g., cute…

  16. On the Adaptive Control of the False Discovery Rate in Multiple Testing with Independent Statistics.

    ERIC Educational Resources Information Center

    Benjamini, Yoav; Hochberg, Yosef

    2000-01-01

    Presents an adaptive approach to multiple significance testing based on the procedure of Y. Benjamini and Y. Hochberg (1995) that first estimates the number of true null hypotheses and then uses that estimate in the Benjamini and Hochberg procedure. Uses the new procedure in examples from educational and behavioral studies and shows its control of…
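
    The mechanics can be sketched directly: classical Benjamini-Hochberg compares the ordered p-values p(i) with i·q/m, and the adaptive variant replaces m with an estimate m0 of the number of true null hypotheses, gaining power when m0 < m. The sketch below takes m0 as given rather than implementing the paper's estimator, and the p-values are invented for illustration.

```python
def bh_rejections(pvals, q, m0=None):
    """Step-up Benjamini-Hochberg procedure. If an estimate m0 of the
    number of true nulls is supplied, the adaptive variant compares
    p_(i) with i*q/m0 instead of i*q/m (a sketch of the idea, not the
    exact estimator of the 2000 paper)."""
    m = len(pvals)
    m0 = m if m0 is None else m0
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank * q / m0:
            k = rank                     # largest rank passing the test
    return sorted(order[:k])             # indices of rejected hypotheses

pvals = [0.001, 0.008, 0.032, 0.038, 0.040, 0.06, 0.2, 0.5, 0.7, 0.9]
plain = bh_rejections(pvals, q=0.05)           # classical BH
adaptive = bh_rejections(pvals, q=0.05, m0=6)  # with an estimated m0 < m
```

    With these numbers the classical procedure rejects only the two smallest p-values, while the adaptive version, using the smaller denominator m0 = 6, rejects five, illustrating the power gain the paper demonstrates on educational and behavioral data.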

  17. Adaptive optical interconnects: the ADDAPT project

    NASA Astrophysics Data System (ADS)

    Henker, Ronny; Pliva, Jan; Khafaji, Mahdi; Ellinger, Frank; Toifl, Thomas; Offrein, Bert; Cevrero, Alessandro; Oezkaya, Ilter; Seifried, Marc; Ledentsov, Nikolay; Kropp, Joerg-R.; Shchukin, Vitaly; Zoldak, Martin; Halmo, Leos; Turkiewicz, Jaroslaw; Meredith, Wyn; Eddie, Iain; Georgiades, Michael; Charalambides, Savvas; Duis, Jeroen; van Leeuwen, Pieter

    2015-09-01

    Existing optical networks are driven by dynamic user and application demands but operate statically at their maximum performance. Thus, optical links do not offer much adaptability and are not very energy-efficient. In this paper a novel approach of implementing performance and power adaptivity from system down to optical device, electrical circuit and transistor level is proposed. Depending on the actual data load, the number of activated link paths and individual device parameters like bandwidth, clock rate, modulation format and gain are adapted to enable lowering the components supply power. This enables flexible energy-efficient optical transmission links which pave the way for massive reductions of CO2 emission and operating costs in data center and high performance computing applications. Within the FP7 research project Adaptive Data and Power Aware Transceivers for Optical Communications (ADDAPT) dynamic high-speed energy-efficient transceiver subsystems are developed for short-range optical interconnects taking up new adaptive technologies and methods. The research of eight partners from industry, research and education spanning seven European countries includes the investigation of several adaptive control types and algorithms, the development of a full transceiver system, the design and fabrication of optical components and integrated circuits as well as the development of high-speed, low loss packaging solutions. This paper describes and discusses the idea of ADDAPT and provides an overview about the latest research results in this field.

  18. Procedural Quantum Programming

    NASA Astrophysics Data System (ADS)

    Ömer, Bernhard

    2002-09-01

    While classical computing science has developed a variety of methods and programming languages around the concept of the universal computer, the typical description of quantum algorithms still uses a purely mathematical, non-constructive formalism which makes no distinction between a hydrogen atom and a quantum computer. This paper investigates how the concept of procedural programming languages, the most widely used classical formalism for describing and implementing algorithms, can be adapted to the field of quantum computing, and how non-classical features like the reversibility of unitary transformations, the non-observability of quantum states or the lack of copy and erase operations can be reflected semantically. It introduces the key concepts of procedural quantum programming (hybrid target architecture, operator hierarchy, quantum data types, memory management, etc.) and presents the experimental language QCL, which implements these principles.
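
    One of the non-classical features mentioned here, reversibility, is easy to illustrate with a plain state-vector calculation (ordinary Python, no quantum library): every quantum operation is unitary and therefore invertible, and the Hadamard gate H happens to be its own inverse.

```python
import math

# 2x2 Hadamard gate as a plain nested list
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    # matrix-vector product: the only way a quantum program changes state
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

ket0 = [1.0, 0.0]                  # |0>
superposed = apply(H, ket0)        # (|0> + |1>) / sqrt(2)
restored = apply(H, superposed)    # H is self-inverse: back to |0>
```

    A procedural quantum language must make this invertibility explicit, which is why QCL-style operator hierarchies distinguish reversible operators from classical, irreversible procedures.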

  19. Standards of neurosurgical procedures.

    PubMed

    Steiger, H J

    2001-01-01

    Written specifications with regard to procedures performed, equipment used, and training of the involved personnel are widely used in the industry and aviation to guarantee constant quality. Similar systems are progressively being introduced to medicine. We have made an effort to standardize surgical procedures by introducing step-by-step guidelines and checklists. The current experience shows that a system of written standards is applicable to neurosurgery and that the use of checklists contributes to the prevention of forgetting essential details. Written standards and checklists are also a useful training tool within a university hospital and facilitate communication of essentials to the residents. Comparison with aviation suggests that standardization leads to a remarkable but nonetheless limited reduction of adverse incidents. PMID:11840739

  20. Multi-model predictive control based on LMI: from the adaptation of the state-space model to the analytic description of the control law

    NASA Astrophysics Data System (ADS)

    Falugi, P.; Olaru, S.; Dumur, D.

    2010-08-01

    This article proposes an explicit robust predictive control solution based on linear matrix inequalities (LMIs). The considered predictive control strategy uses different local descriptions of the system dynamics and uncertainties and thus allows the handling of less conservative input constraints. The computed control law guarantees constraint satisfaction and asymptotic stability. The technique is effective for a class of nonlinear systems embedded into polytopic models. A detailed discussion of the procedures which adapt the partition of the state space is presented. For the practical implementation the construction of suitable (explicit) descriptions of the control law are described upon concrete algorithms.

  1. A computerized procedure for estimating nutrient intake.

    PubMed

    Williamson, M; Azen, C; Acosta, P

    1976-11-01

    A procedure was devised for computing intake in terms of calories, total protein, phenylalanine, carbohydrate, and fat. The procedure used a magnetic tape containing 3,122 numbered food items. The nutrient composition of each food was reported for 100 g of the edible portion of the food. In addition, diet diaries were prepared in which the foods eaten during the preceding 24-hr period, the code for each food corresponding to the number for the same item on the magnetic tape, and the number of units of each food eaten were recorded. A computer program was then written that calculated the amounts of intake per day for each nutrient. Application of the procedure for 42 consecutive days on the daily diet records of 43 adult carriers of the phenylalanine hydroxylase enzyme formed the data base used to determine if aspartame significantly increased levels of phenylalanine in the blood. Adaptations of the procedure permit calculations of intake for periods from 1 to 30 days and analyses of additional nutrients including calcium, phosphorus, iron, vitamin A, thiamine, riboflavin, niacin, and ascorbic acid.
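
    The arithmetic of the procedure is straightforward to reconstruct: each diary entry scales a per-100 g composition record by the amount eaten, and the products are summed per nutrient. The food codes and composition values below are invented for illustration (the actual tape held 3,122 items).

```python
# Hypothetical per-100 g composition table, mirroring the tape-based
# food file described above (values are illustrative, not real data).
FOOD_TABLE = {
    101: {"kcal": 52.0,  "protein_g": 0.3, "phe_mg": 6.0},    # "apple"
    102: {"kcal": 265.0, "protein_g": 9.0, "phe_mg": 430.0},  # "bread"
}

def daily_intake(diary):
    """diary: list of (food_code, grams_eaten) pairs from one 24-hr record.
    Scales each per-100 g entry by the amount actually eaten and sums."""
    totals = {"kcal": 0.0, "protein_g": 0.0, "phe_mg": 0.0}
    for code, grams in diary:
        for nutrient, per100 in FOOD_TABLE[code].items():
            totals[nutrient] += per100 * grams / 100.0
    return totals

day = daily_intake([(101, 150), (102, 60)])  # 150 g apple, 60 g bread
```

    Extending the record schema with further fields (calcium, iron, vitamins, and so on) is all the described adaptations require; the summation loop is unchanged.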

  2. Practical procedures: oxygen therapy.

    PubMed

    Olive, Sandra

    Knowing when to start patients on oxygen therapy can save lives, but ongoing assessment and evaluation must be carried out to ensure the treatment is safe and effective. This article outlines when oxygen therapy should be used and the procedures to follow. It also describes the delivery methods applicable to different patient groups, along with the appropriate target saturation ranges, and details relevant nurse competencies.

  3. The Superintendent and Grievance Procedures.

    ERIC Educational Resources Information Center

    Kleinmann, Jack H.

    Grievance adjustment between teachers and administrators is viewed as a misunderstood process. The problem is treated under four main headings: (1) Purposes and characteristics of an effective grievance procedure, (2) status of grievance procedures in education, (3) relationship of grievance procedures to professional negotiation procedures, and…

  4. Vectorizable algorithms for adaptive schemes for rapid analysis of SSME flows

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley

    1987-01-01

    An initial study into vectorizable algorithms for use in adaptive schemes for various types of boundary value problems is described. The focus is on two key aspects of adaptive computational methods which are crucial in the use of such methods (for complex flow simulations such as those in the Space Shuttle Main Engine): the adaptive scheme itself and the applicability of element-by-element matrix computations in a vectorizable format for rapid calculations in adaptive mesh procedures.

  5. 46 CFR 153.1065 - Sodium chlorate solutions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 5 2012-10-01 2012-10-01 false Sodium chlorate solutions. 153.1065 Section 153.1065... Procedures § 153.1065 Sodium chlorate solutions. (a) No person may load sodium chlorate solutions into a... before loading. (b) The person in charge of cargo transfer shall make sure that spills of sodium...

  6. 46 CFR 153.1065 - Sodium chlorate solutions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 5 2011-10-01 2011-10-01 false Sodium chlorate solutions. 153.1065 Section 153.1065... Procedures § 153.1065 Sodium chlorate solutions. (a) No person may load sodium chlorate solutions into a... before loading. (b) The person in charge of cargo transfer shall make sure that spills of sodium...

  7. Favorite Demonstrations: Exothermic Crystallization from a Supersaturated Solution.

    ERIC Educational Resources Information Center

    Kauffman, George B.; And Others

    1986-01-01

    The use of sodium acetate solution to show supersaturation is a favorite among lecture demonstrations. However, careful adjustment of the solute-to-water ratio must be made to attain the most spectacular effect--complete solidification of the solution. Procedures to accomplish this are provided and discussed. (JN)

  8. 46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 5 2013-10-01 2013-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is...

  9. 46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 5 2011-10-01 2011-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is...

  10. 46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 5 2014-10-01 2014-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is...

  11. 46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 5 2012-10-01 2012-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is...

  12. Solution-Focused Therapy: Toward the Identification of Therapeutic Tasks.

    ERIC Educational Resources Information Center

    Molnar, Alex; de Shazer, Steve

    1987-01-01

    Notes that brief therapy has often been regarded as "problem solving therapy." Discusses development of a solution-focused approach to clinical practice and describes solution-focused therapeutic tasks and interventions. Outlines some of clinical procedures and interventions possible when a solution-focused approach is used. (Author/NB)

  13. 46 CFR 153.1035 - Acetone cyanohydrin or lactonitrile solutions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Acetone cyanohydrin or lactonitrile solutions. 153.1035... Special Cargo Procedures § 153.1035 Acetone cyanohydrin or lactonitrile solutions. No person may operate a tankship carrying a cargo of acetone cyanohydrin or lactonitrile solutions, unless that cargo is...

  14. An adaptive pseudospectral method for discontinuous problems

    NASA Technical Reports Server (NTRS)

    Augenbaum, Jeffrey M.

    1988-01-01

    The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep gradient solutions of hyperbolic PDEs by approximation in suitably chosen coordinate systems. Continuous, high gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.
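
    The clustering effect of such a mapping can be seen with a short computation (the tanh-type map below is one common illustrative choice, not necessarily the paper's adaptively chosen family): collocation points are generated on a computational interval and pushed through a map whose small slope near the gradient location concentrates physical grid points there.

```python
import math

def cheb_points(n):
    # Chebyshev-Gauss-Lobatto collocation points on [-1, 1]
    return [math.cos(math.pi * k / n) for k in range(n + 1)]

def cluster_map(s, alpha):
    # Illustrative map with small slope near s = 0, so physical points
    # cluster where a steep gradient is assumed to sit; endpoints map
    # to themselves.
    return math.atanh(s * math.tanh(alpha)) / alpha

n, alpha = 16, 3.0
s = cheb_points(n)                        # computational grid
x = [cluster_map(si, alpha) for si in s]  # physical grid

# spacing around the centre is much finer after mapping
mid = n // 2
spacing_mapped = abs(x[mid - 1] - x[mid + 1])
spacing_plain = abs(s[mid - 1] - s[mid + 1])
```

    Differentiation is then performed on the uniform-quality computational grid, while accuracy is measured in the physical coordinate, which is how spectral accuracy is retained with far fewer points.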

  15. Logarithmic Adaptive Quantization Projection for Audio Watermarking

    NASA Astrophysics Data System (ADS)

    Zhao, Xuemin; Guo, Yuhong; Liu, Jian; Yan, Yonghong; Fu, Qiang

    In this paper, a logarithmic adaptive quantization projection (LAQP) algorithm for digital watermarking is proposed. Conventional quantization index modulation uses a fixed quantization step in the watermark embedding procedure, which leads to poor fidelity. Moreover, the conventional methods are sensitive to value-metric scaling attack. The LAQP method combines the quantization projection scheme with a perceptual model. In comparison to some conventional quantization methods with a perceptual model, LAQP only needs to calculate the perceptual model in the embedding procedure, avoiding the decoding errors introduced by differences between the perceptual models used in the embedding and decoding procedures. Experimental results show that the proposed watermarking scheme maintains better fidelity and is robust against common signal processing attacks. More importantly, the proposed scheme is invariant to value-metric scaling attack.
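
    For context, basic quantization index modulation with a fixed step can be sketched in a few lines (an illustrative scalar version, not the LAQP scheme itself); the final line hints at why a fixed step is fragile under the scaling attack mentioned above.

```python
def qim_embed(x, bit, step):
    # Quantize x to one of two interleaved lattices: multiples of `step`
    # for bit 0, shifted by step/2 for bit 1.
    offset = 0.0 if bit == 0 else step / 2.0
    return round((x - offset) / step) * step + offset

def qim_decode(y, step):
    # Decode by picking whichever lattice y lies closer to.
    d0 = abs(y - qim_embed(y, 0, step))
    d1 = abs(y - qim_embed(y, 1, step))
    return 0 if d0 <= d1 else 1

step = 0.5
marked = qim_embed(3.1, 1, step)      # host value 3.1 carries bit 1
recovered = qim_decode(marked, step)

# Amplitude (value-metric) scaling moves values off both lattices, so a
# fixed-step decoder can misread bits; LAQP's adaptive logarithmic
# quantization is designed to remove exactly this sensitivity.
attacked = qim_decode(marked * 1.3, step)
```

    In the logarithmic variant the effective step scales with the signal, so uniform amplitude scaling maps lattice points to lattice points and the embedded bits survive.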

  16. An adaptive lidar

    NASA Astrophysics Data System (ADS)

    Oshlakov, V. G.; Andreev, M. I.; Malykh, D. D.

    2009-09-01

    Using the polarization characteristics of a target and its underlying surface, one can change the target contrast range. As the target one can use compact and discrete structures with different electromagnetic-wave reflection characteristics. An important problem solved by the adaptive polarization lidar is to detect and identify different targets, based on their polarization characteristics, against the background of an underlying surface whose polarization characteristics are unknown. Another important problem of the adaptive polarization lidar is the search for objects whose polarization characteristics are unknown against the background of an underlying surface whose polarization characteristics are known. The adaptive polarization lidar also makes it possible to determine the presence of impurities in sea water. The characteristics of the adaptive polarization lidar undergo variations, i.e., the polarization characteristics of the sensing signal and of the receiver are varied depending on the problem to be solved. One version of the construction of the adaptive polarization lidar is considered. A numerical experiment demonstrated the increase of contrast in the adaptive lidar when sensing hydrosols against the background of Rayleigh scattering from clear water, and likewise when sensing dry haze and dense haze at two wavelengths against the background of Rayleigh scattering from the clear atmosphere. The most effective wavelength was chosen.

  17. Assessment of Three “WHO” Patient Safety Solutions: Where Do We Stand and What Can We Do?

    PubMed Central

    Banihashemi, Sheida; Hatam, Nahid; Zand, Farid; Kharazmi, Erfan; Nasimi, Soheila; Askarian, Mehrdad

    2015-01-01

    Background: Most medical errors are preventable. The aim of this study was to compare the current execution of the 3 patient safety solutions with WHO suggested actions and standards. Methods: Data collection forms and direct observation were used to determine the status of implementation of existing protocols, resources, and tools. Results: In the field of patient hand-over, there was no standardized approach. In the field of the performance of correct procedure at the correct body site, there were no safety checklists, guideline, and educational content for informing the patients and their families about the procedure. In the field of hand hygiene (HH), although availability of necessary resources was acceptable, availability of promotional HH posters and reminders was substandard. Conclusions: There are some limitations of resources, protocols, and standard checklists in all three areas. We designed some tools that will help both wards to improve patient safety by the implementation of adapted WHO suggested actions. PMID:26900434

  18. Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1981-01-01

    A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pre/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.
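
    The cutback logic at the heart of such self-adaptive schemes can be reduced to a scalar sketch (the hardening-spring residual below is invented for illustration; the actual algorithm's constraint surfaces are far richer): each load increment is solved by Newton-Raphson iteration, and an increment whose corrector fails to converge is halved and retried.

```python
def incremental_newton(resid, dresid, load, steps=4, tol=1e-10, max_iter=20):
    # Advance the load in increments; correct each increment with
    # Newton-Raphson; halve the increment if the corrector stalls.
    u, lam = 0.0, 0.0
    dlam = load / steps
    while lam < load - 1e-12:
        target = lam + min(dlam, load - lam)
        u_trial, converged = u, False
        for _ in range(max_iter):
            r = resid(u_trial) - target
            if abs(r) < tol:
                converged = True
                break
            u_trial -= r / dresid(u_trial)      # Newton correction
        if converged:
            u, lam = u_trial, target            # accept the increment
        else:
            dlam *= 0.5                         # self-adaptive cutback
    return u

# hardening spring R(u) = u + u**3 loaded to R = 10 (exact solution u = 2)
u = incremental_newton(lambda v: v + v**3, lambda v: 1 + 3 * v * v, load=10.0)
```

    The constraint-surface and energy-scaling phases described above refine where each corrected iterate may land; the cutback shown here is the simplest member of that family of self-adaptive restructurings.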

  19. [Adaptive optics for ophthalmology].

    PubMed

    Saleh, M

    2016-04-01

    Adaptive optics is a technology enhancing the visual performance of an optical system by correcting its optical aberrations. Adaptive optics have already enabled several breakthroughs in the field of visual sciences, such as improvement of visual acuity in normal and diseased eyes beyond physiologic limits, and the correction of presbyopia. Adaptive optics technology also provides high-resolution, in vivo imaging of the retina that may eventually help to detect the onset of retinal conditions at an early stage and provide better assessment of treatment efficacy.

  20. Adaptive network countermeasures.

    SciTech Connect

    McClelland-Bane, Randy; Van Randwyk, Jamie A.; Carathimas, Anthony G.; Thomas, Eric D.

    2003-10-01

    This report describes the results of a two-year LDRD funded by the Differentiating Technologies investment area. The project investigated the use of countermeasures in protecting computer networks as well as how current countermeasures could be changed in order to adapt with both evolving networks and evolving attackers. The work involved collaboration between Sandia employees and students in the Sandia - California Center for Cyber Defenders (CCD) program. We include an explanation of the need for adaptive countermeasures, a description of the architecture we designed to provide adaptive countermeasures, and evaluations of the system.