#### Sample records for adaptive numerical integration

1. Numerical Integration

ERIC Educational Resources Information Center

Sozio, Gerry

2009-01-01

Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
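The three rules named in this record are easy to state as code. As an illustrative aside (not part of the article, which derives them by other means), here are minimal composite implementations applied to ∫₀^π sin x dx = 2:

```python
import math

def midpoint(f, a, b, n):
    """Composite midpoint rule with n subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# Integrate sin(x) on [0, pi]; the exact value is 2.
for rule in (midpoint, trapezoid, simpson):
    print(rule.__name__, rule(math.sin, 0.0, math.pi, 16))
```

With n = 16 the midpoint and trapezoidal errors are of order h², while Simpson's rule is already accurate to about 10⁻⁵.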

2. Adaptive Numerical Integration for Item Response Theory. Research Report. ETS RR-07-06

ERIC Educational Resources Information Center

Antal, Tamás; Oranje, Andreas

2007-01-01

Well-known numerical integration methods are applied to item response theory (IRT) with special emphasis on the estimation of the latent regression model of NAEP [National Assessment of Educational Progress]. An argument is made that the Gauss-Hermite rule enhanced with Cholesky decomposition and normal approximation of the response likelihood is…
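As a side illustration of the Gauss-Hermite rule this record refers to (only the univariate building block; the Cholesky decomposition handles the multivariate case and is not shown), a short sketch with NumPy's `hermgauss`:

```python
import numpy as np

# Gauss-Hermite nodes/weights for integrals of the form ∫ f(x) exp(-x^2) dx.
nodes, weights = np.polynomial.hermite.hermgauss(20)

# Expectation of x^2 under a standard normal: substituting x = sqrt(2)*t gives
# ∫ x^2 N(x; 0, 1) dx = (1/sqrt(pi)) * Σ w_i * (sqrt(2)*t_i)^2, which equals 1.
ex2 = (weights * (np.sqrt(2.0) * nodes) ** 2).sum() / np.sqrt(np.pi)
print(ex2)  # essentially exact: the integrand is a low-degree polynomial
```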

3. Integrated numerical modeling of a landslide early warning system in a context of adaptation to future climatic pressures

Khabarov, Nikolay; Huggel, Christian; Obersteiner, Michael; Ramírez, Juan Manuel

2010-05-01

Mountain regions are typically characterized by rugged terrain which is susceptible to different types of landslides during high-intensity precipitation. Landslides account for billions of dollars of damage and many casualties, and are expected to increase in frequency in the future due to a projected increase of precipitation intensity. Early warning systems (EWS) are thought to be a primary tool for related disaster risk reduction and climate change adaptation to extreme climatic events and hydro-meteorological hazards, including landslides. An EWS for hazards such as landslides consist of different components, including environmental monitoring instruments (e.g. rainfall or flow sensors), physical or empirical process models to support decision-making (warnings, evacuation), data and voice communication, organization and logistics-related procedures, and population response. Considering this broad range, EWS are highly complex systems, and it is therefore difficult to understand the effect of the different components and changing conditions on the overall performance, ultimately being expressed as human lives saved or structural damage reduced. In this contribution we present a further development of our approach to assess a landslide EWS in an integral way, both at the system and component level. We utilize a numerical model using 6 hour rainfall data as basic input. A threshold function based on a rainfall-intensity/duration relation was applied as a decision criterion for evacuation. Damage to infrastructure and human lives was defined as a linear function of landslide magnitude, with the magnitude modelled using a power function of landslide frequency. Correct evacuation was assessed with a ‘true' reference rainfall dataset versus a dataset of artificially reduced quality imitating the observation system component. Performance of the EWS using these rainfall datasets was expressed in monetary terms (i.e. damage related to false and correct evacuation). We

4. Theory of axially symmetric cusped focusing: numerical evaluation of a Bessoid integral by an adaptive contour algorithm

Kirk, N. P.; Connor, J. N. L.; Curtis, P. R.; Hobbs, C. A.

2000-07-01

A numerical procedure for the evaluation of the Bessoid canonical integral J({x,y}) is described. J({x,y}) is defined, for x and y real, by J({x,y}) = ∫₀^∞ t J₀(yt) exp[i(t⁴ + xt²)] dt, where J₀(·) is a Bessel function of order zero. J({x,y}) plays an important role in the description of cusped focusing when there is axial symmetry present. It arises in the diffraction theory of aberrations, in the design of optical instruments and of highly directional microwave antennas and in the theory of image formation for high-resolution electron microscopes. The numerical procedure replaces the integration path along the real t axis with a more convenient contour in the complex t plane, thereby rendering the oscillatory integrand more amenable to numerical quadrature. The computations use a modified version of the CUSPINT computer code (Kirk et al 2000 Comput. Phys. Commun. at press), which evaluates the cuspoid canonical integrals and their first-order partial derivatives. Plots and tables of J({x,y}) and its zeros are presented for the grid -8.0≤x≤8.0 and -8.0≤y≤8.0. Some useful series expansions of J({x,y}) are also derived.
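The contour-deformation idea (moving the path off the real axis so the oscillatory factor becomes exponentially decaying) can be illustrated on a much simpler Fresnel-type integral than the Bessoid integral itself; this sketch is mine, not the authors' CUSPINT code:

```python
import cmath, math

def simpson_complex(f, a, b, n):
    """Composite Simpson's rule for a complex-valued integrand; n even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# ∫_0^∞ exp(i t^2) dt oscillates forever on the real axis.  Rotating the
# path to t = e^{i pi/4} s turns the integrand into a decaying Gaussian:
#   e^{i pi/4} ∫_0^∞ exp(-s^2) ds = e^{i pi/4} sqrt(pi)/2.
rot = cmath.exp(1j * math.pi / 4)
val = simpson_complex(lambda s: rot * cmath.exp(1j * (rot * s) ** 2), 0.0, 6.0, 200)
exact = rot * math.sqrt(math.pi) / 2
print(val, exact)
```

On the rotated contour an ordinary quadrature rule converges rapidly; on the real axis the same rule would never settle.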

PubMed Central

2015-01-01

Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (bromocyclohexane) and the more complex biomolecule thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083

6. Adaptive Numerical Algorithms in Space Weather Modeling

NASA Technical Reports Server (NTRS)

Toth, Gabor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

2010-01-01

Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical…

7. Adaptive numerical algorithms in space weather modeling

Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

2012-02-01

Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit

8. Automatic numerical integration methods for Feynman integrals through 3-loop

de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Olagbemi, O.

2015-05-01

We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infrared) or UV (ultraviolet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities.
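QUADPACK's adaptive QAGS strategy is conveniently reachable from Python, since `scipy.integrate.quad` wraps it; a small sketch (my example, not the authors') of the boundary-singularity handling mentioned above:

```python
from scipy.integrate import quad

# scipy.integrate.quad wraps QUADPACK's adaptive QAGS algorithm, which handles
# integrable endpoint singularities by adaptive subdivision plus extrapolation.
# ∫_0^1 x^{-1/2} dx = 2, despite the singularity at x = 0.
val, err = quad(lambda x: x ** -0.5, 0.0, 1.0)
print(val, err)  # val is close to 2.0, with a small reported error estimate
```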

9. Cuba: Multidimensional numerical integration library

Hahn, Thomas

2016-08-01

The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. Although the four algorithms work by very different methods, all can integrate vector integrands and share similar Fortran, C/C++, and Mathematica interfaces. Their invocation is nearly identical, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability that quantifies the reliability of the error estimate.

10. Recursive adaptive frame integration limited

Rafailov, Michael K.

2006-05-01

Recursive Frame Integration Limited was proposed as a way to improve frame integration performance and mitigate issues related to the high data rate needed for conventional frame integration. The technique applies two thresholds - one tuned for optimum probability of detection, the other to manage the required false alarm rate - and allows a non-linear integration process that, along with Signal-to-Noise Ratio (SNR) gain, provides system designers more capability where cost, weight, or power considerations limit system data rate, processing, or memory capability. However, Recursive Frame Integration Limited may have performance issues when single-frame SNR is very low. Recursive Adaptive Frame Integration Limited is proposed as a means to improve limited integration performance with very low single-frame SNR. It combines the benefits of nonlinear recursive limited frame integration and adaptive thresholds with a kind of conventional frame integration.
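A schematic reading of the two-threshold idea (my illustrative interpretation, not the author's exact algorithm):

```python
def recursive_limited_integration(samples, t1, t2):
    """Two-threshold recursive frame integration, sketched: samples below the
    detection-tuned threshold t1 are discarded (the non-linear limiting step),
    the remainder accumulates recursively, and an alarm fires once the
    accumulated value crosses the false-alarm-tuned threshold t2."""
    acc = 0.0
    for frame, x in enumerate(samples, start=1):
        if x > t1:           # per-frame detection threshold
            acc += x - t1    # limited (clipped) contribution
        if acc > t2:         # false-alarm threshold on the integrated value
            return frame     # frame index at which the alarm fires
    return None

target = [1.4, 0.2, 1.6, 1.5, 0.1, 1.7]    # weak target, SNR ~ 1 per frame
noise  = [0.9, -0.3, 1.1, 0.4, -0.8, 0.6]  # noise-only pixel
print(recursive_limited_integration(target, 1.0, 1.0))  # fires at frame 4
print(recursive_limited_integration(noise, 1.0, 1.0))   # stays silent (None)
```

The target pixel integrates past the second threshold after a few frames, while isolated noise excursions are clipped away and never accumulate.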

SciTech Connect

Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.; Calvin, Justus A.; Fann, George I.; Fosso-Tande, Jacob; Galindo, Diego; Hammond, Jeff R.; Hartman-Baker, Rebecca; Hill, Judith C.; Jia, Jun; Kottmann, Jakob S.; Yvonne Ou, M-J.; Pei, Junchen; Ratcliff, Laura E.; Reuter, Matthew G.; Richie-Halford, Adam C.; Romero, Nichols A.; Sekino, Hideo; Shelton, William A.; Sundahl, Bryan E.; Thornton, W. Scott; Valeev, Edward F.; Vázquez-Mayagoitia, Álvaro; Vence, Nicholas; Yanai, Takeshi; Yokoi, Yukina

2016-01-01

MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.

13. Adaptive Encoding for Numerical Data Compression.

ERIC Educational Resources Information Center

Yokoo, Hidetoshi

1994-01-01

Discusses the adaptive compression of computer files of numerical data whose statistical properties are not given in advance. A new lossless coding method for this purpose, which utilizes Adelson-Velskii and Landis (AVL) trees, is proposed. The method is effective for any word length. Its application to the lossless compression of gray-scale images…

14. Adaptive Urban Dispersion Integrated Model

SciTech Connect

Wissink, A; Chand, K; Kosovic, B; Chan, S; Berger, M; Chow, F K

2005-11-03

Numerical simulations represent a unique predictive tool for understanding the three-dimensional flow fields and associated concentration distributions from contaminant releases in complex urban settings (Britter and Hanna 2003). Utilization of the most accurate urban models, based on fully three-dimensional computational fluid dynamics (CFD) that solve the Navier-Stokes equations with incorporated turbulence models, presents many challenges. We address two in this work: first, a fast but accurate way to incorporate the complex urban terrain, buildings, and other structures to enforce proper boundary conditions in the flow solution; second, ways to achieve a level of computational efficiency that allows the models to be run in an automated fashion such that they may be used for emergency response and event reconstruction applications. We have developed a new integrated urban dispersion modeling capability based on FEM3MP (Gresho and Chan 1998, Chan and Stevens 2000), a CFD model from Lawrence Livermore National Lab. The integrated capability incorporates fast embedded boundary mesh generation for geometrically complex problems and full three-dimensional Cartesian adaptive mesh refinement (AMR). Parallel AMR and embedded boundary gridding support are provided through the SAMRAI library (Wissink et al. 2001, Hornung and Kohn 2002). Embedded boundary mesh generation has been demonstrated to be an automatic, fast, and efficient approach for problem setup. It has been used for a variety of geometrically complex applications, including urban applications (Pullen et al. 2005). The key technology we introduce in this work is the application of AMR, which allows the application of high-resolution modeling to certain important features, such as individual buildings and high-resolution terrain (including important vegetative and land-use features). It also allows the urban scale model to be readily interfaced with coarser resolution meso or regional scale models. This talk…

15. Numerical design of an adaptive aileron

Amendola, Gianluca; Dimino, Ignazio; Concilio, Antonio; Magnifico, Marco; Pecora, Rosario

2016-04-01

16. GRChombo: Numerical relativity with adaptive mesh refinement

Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran

2015-12-01

In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block structured Berger-Rigoutsos grid generation. The code supports non-trivial ‘many-boxes-in-many-boxes’ mesh hierarchies and massive parallelism through the message passing interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.

17. Numerical Integration: One Step at a Time

ERIC Educational Resources Information Center

Yang, Yajun; Gordon, Sheldon P.

2016-01-01

This article looks at the effects that adding a single extra subdivision has on the level of accuracy of some common numerical integration routines. Instead of automatically doubling the number of subdivisions for a numerical integration rule, we investigate what happens with a systematic method of judiciously selecting one extra subdivision for…
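The effect of adding one subdivision at a time, rather than doubling, is easy to see numerically; a small sketch (mine, and it does not reproduce the article's method of choosing where the extra subdivision goes):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

# Error of the trapezoidal rule for ∫_0^1 e^x dx = e - 1 as subdivisions are
# added: going from n to n+1 shrinks the error only by roughly (n/(n+1))^2,
# whereas doubling n shrinks it by about a factor of 4.
exact = math.e - 1.0
for n in (8, 9, 16):
    print(n, abs(trapezoid(math.exp, 0.0, 1.0, n) - exact))
```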

18. Numerical integration using Wang Landau sampling

Li, Y. W.; Wüst, T.; Landau, D. P.; Lin, H. Q.

2007-09-01

We report a new application of Wang-Landau sampling to numerical integration that is straightforward to implement. It is applicable to a wide variety of integrals without restrictions and is readily generalized to higher-dimensional problems. The feasibility of the method results from a reinterpretation of the density of states in statistical physics to an appropriate measure for numerical integration. The properties of this algorithm as a new kind of Monte Carlo integration scheme are investigated with some simple integrals, and a potential application of the method is illustrated by the evaluation of integrals arising in perturbation theory of quantum many-body systems.

19. Adaptive Through-Thickness Integration Strategy for Shell Elements

Burchitz, I. A.; Meinders, T.; Huétink, J.

2007-05-01

Reliable numerical prediction of springback in sheet metal forming is essential for the automotive industry. There are numerous factors that influence the accuracy of springback prediction by using the finite element method. One of the reasons is the through-thickness numerical integration of shell elements. It is known that even for simple problems the traditional integration schemes may require up to 50 integration points to achieve a high accuracy of springback analysis. An adaptive through-thickness integration strategy can be a good alternative. The strategy defines abscissas and weights depending on the integrand's properties and, thus, can adapt itself to improve the accuracy of integration. A concept of the adaptive through-thickness integration strategy for shell elements is presented. It is tested using a simple problem of bending of a beam under tension. Results show that for a similar set of material and process parameters the adaptive Simpson's rule with 7 integration points performs better than the traditional trapezoidal rule with 50 points. The adaptive through-thickness integration strategy for shell elements can improve the accuracy of springback prediction at minimal costs.
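Although the paper's strategy places abscissas through the shell thickness according to the integrand's properties, the general flavor of "adaptivity plus Simpson's rule" can be sketched with the classical recursive adaptive Simpson scheme (an illustration, not the authors' algorithm):

```python
import math

def _simpson(f, a, b):
    """One-panel Simpson estimate on [a, b]."""
    c = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(c) + f(b))

def adaptive_simpson(f, a, b, tol=1e-8):
    """Recursive adaptive Simpson quadrature: subdivide only where the
    two-panel estimate disagrees with the one-panel estimate."""
    c = (a + b) / 2
    whole = _simpson(f, a, b)
    left, right = _simpson(f, a, c), _simpson(f, c, b)
    if abs(left + right - whole) < 15 * tol:
        return left + right + (left + right - whole) / 15  # Richardson correction
    return adaptive_simpson(f, a, c, tol / 2) + adaptive_simpson(f, c, b, tol / 2)

# A mildly awkward integrand: ∫_0^1 sqrt(x) dx = 2/3.  The recursion places
# most of its points near x = 0, where the integrand is hardest.
print(adaptive_simpson(math.sqrt, 0.0, 1.0))
```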

20. An Integrative Theory of Numerical Development

ERIC Educational Resources Information Center

Siegler, Robert; Lortie-Forgues, Hugues

2014-01-01

Understanding of numerical development is growing rapidly, but the volume and diversity of findings can make it difficult to perceive any coherence in the process. The integrative theory of numerical development posits that a coherent theme is present, however--progressive broadening of the set of numbers whose magnitudes can be accurately…

1. Orientation of the earth by numerical integration

NASA Technical Reports Server (NTRS)

Fajemirokun, F. A.; Hotter, F. D.; Mueller, I. I.

1976-01-01

A fundamental problem is the determination of the orientation of the earth in the celestial coordinate system. Classical reductions for precession and nutation can be expected to be consistent with present-day observations; however, corrections to the classical theory are difficult to model because of the large number of coefficients involved. Consequently, a portion of the research has been devoted to numerically integrating the Eulerian equations of motion for a rigid earth and considering the six initial conditions of the integration as unknowns. Comparison of the three adjusted Eulerian angles from the numerical integration over 1000 days indicates agreement with classical theory to within 0.003 seconds of arc.
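Numerically integrating the Eulerian equations of motion for a rigid body is, at heart, an initial-value ODE integration; a toy sketch with illustrative (non-geophysical) moments of inertia and a fixed-step RK4 integrator, checking conservation of rotational kinetic energy:

```python
# Euler's equations for torque-free rigid-body rotation:
#   I1 w1' = (I2 - I3) w2 w3,  and cyclic permutations.
I = (1.0, 2.0, 3.0)  # illustrative principal moments of inertia

def deriv(w):
    w1, w2, w3 = w
    return ((I[1] - I[2]) * w2 * w3 / I[0],
            (I[2] - I[0]) * w3 * w1 / I[1],
            (I[0] - I[1]) * w1 * w2 / I[2])

def rk4_step(w, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(w)
    k2 = deriv(tuple(x + h / 2 * k for x, k in zip(w, k1)))
    k3 = deriv(tuple(x + h / 2 * k for x, k in zip(w, k2)))
    k4 = deriv(tuple(x + h * k for x, k in zip(w, k3)))
    return tuple(x + h / 6 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(w, k1, k2, k3, k4))

w = (1.0, 0.1, 0.1)
energy0 = sum(Ii * wi ** 2 for Ii, wi in zip(I, w)) / 2
for _ in range(10000):
    w = rk4_step(w, 0.001)
energy = sum(Ii * wi ** 2 for Ii, wi in zip(I, w)) / 2
print(energy0, energy)  # kinetic energy is conserved to high accuracy
```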

2. Adaptive numerical methods for partial differential equations

SciTech Connect

Colella, P.

1995-07-01

This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.

3. Space-time adaptive numerical methods for geophysical applications.

PubMed

Castro, C E; Käser, M; Toro, E F

2009-11-28

In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen locally adaptive such that the solution is evolved explicitly in time by an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost.

4. Fibonacci numerical integration on a sphere

Hannay, J. H.; Nye, J. F.

2004-12-01

For elementary numerical integration on a sphere, there is a distinct advantage in using an oblique array of integration sampling points based on a chosen pair of successive Fibonacci numbers. The pattern has a familiar appearance of intersecting spirals, avoiding the local anisotropy of a conventional latitude-longitude array. Besides the oblique Fibonacci array, the prescription we give is also based on a non-uniform scaling used for one-dimensional numerical integration, and indeed achieves the same order of accuracy as for one dimension: error ∼ N⁻⁶ for N points. This benefit of Fibonacci is not shared by domains of integration with boundaries (e.g., a square, for which it was originally proposed); with non-uniform scaling the error goes as N⁻³, with or without Fibonacci. For experimental measurements over a sphere our prescription is realized by a non-uniform Fibonacci array of weighted sampling points.
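The Fibonacci (golden-angle) spiral point set is easy to generate; the sketch below uses equal weights rather than the paper's non-uniform scaling, so it only illustrates the sampling pattern, not the full N⁻⁶ accuracy:

```python
import math

def fibonacci_sphere_average(f, n):
    """Average f over the unit sphere using a Fibonacci (golden-angle)
    spiral of n equal-weight sample points."""
    golden = (1 + math.sqrt(5)) / 2
    total = 0.0
    for i in range(n):
        z = (2 * i + 1) / n - 1         # uniform in (-1, 1): equal-area bands
        r = math.sqrt(1 - z * z)
        phi = 2 * math.pi * i / golden  # golden-ratio azimuthal offsets
        total += f(r * math.cos(phi), r * math.sin(phi), z)
    return total / n

# The mean of z^2 over the sphere is exactly 1/3.
print(fibonacci_sphere_average(lambda x, y, z: z * z, 1000))
```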

5. Highly Parallel, High-Precision Numerical Integration

SciTech Connect

Bailey, David H.; Borwein, Jonathan M.

2005-04-22

This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
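High-precision schemes of this kind are typically built on tanh-sinh (double-exponential) quadrature, whose trapezoidal sums converge rapidly even in the presence of endpoint singularities; a double-precision sketch (the scheme in the paper runs at hundreds or thousands of digits, which this does not attempt):

```python
import math

def tanh_sinh(f, n=25, h=0.12):
    """tanh-sinh quadrature on (-1, 1): the substitution x = tanh((pi/2) sinh t)
    pushes the endpoints to infinity double-exponentially, so the plain
    trapezoidal rule in t converges very quickly even when f blows up at ±1.
    n and h are kept modest here so that 1 - x^2 stays representable in
    double precision."""
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2
        total += w * f(x)
    return h * total

# ∫_{-1}^{1} 1/sqrt(1 - x^2) dx = pi, despite singularities at both endpoints.
print(tanh_sinh(lambda x: 1.0 / math.sqrt(1.0 - x * x)))
```

Halving h roughly doubles the number of correct digits, which is what makes the rule attractive for arbitrary-precision work.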

6. Adaptive integral robust control and application to electromechanical servo systems.

PubMed

Deng, Wenxiang; Yao, Jianyong

2017-03-01

This paper proposes a continuous adaptive integral robust control with robust integral of the sign of the error (RISE) feedback for a class of uncertain nonlinear systems, in which the RISE feedback gain is adapted online to ensure robustness against disturbances without prior knowledge of the bound of the additive disturbances. In addition, an adaptive compensation integrated with the proposed adaptive RISE feedback term is also constructed to further reduce design conservatism when the system is also subject to parametric uncertainties. Lyapunov analysis reveals that the proposed controllers guarantee the tracking errors converge asymptotically to zero with continuous control efforts. To illustrate the high-performance nature of the developed controllers, numerical simulations are provided. Finally, an application case of an actual electromechanical servo system driven by a motor is also studied, with some specific design considerations, and comparative experimental results are obtained to verify the effectiveness of the proposed controllers.

7. Adapting Inspection Data for Computer Numerical Control

NASA Technical Reports Server (NTRS)

Hutchison, E. E.

1986-01-01

Machining time for repetitive tasks reduced. Program converts measurements of stub post locations by coordinate-measuring machine into form used by numerical-control computer. Work time thus reduced by 10 to 15 minutes for each post. Since there are 600 such posts on each injector, time saved per injector is 100 to 150 hours. With modifications this approach applicable to machining of many precise holes on large machine frames and similar objects.

8. Numerical integration routines for near-earth operations

NASA Technical Reports Server (NTRS)

Powers, W. F.

1973-01-01

Two general purpose numerical integration schemes were built into the NASA-JSC computer system. The state-of-the-art of numerical integration, the particular integrators built into the JSC computer system, and the use of the new integration packages are described. Background information about numerical integration and the variable-order, variable-stepsize Adams numerical integration technique is discussed. Results concerning the PEACE parameter optimization program are given along with recommendations and conclusions.

9. Efficient numerical evaluation of Feynman integrals

Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran

2016-03-01

Feynman loop integrals are a key ingredient for the calculation of higher order radiation effects, and are responsible for reliable and accurate theoretical prediction. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both Euclidean and physical kinematic regions in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated in less than half a minute with good accuracy, which makes the direct numerical approach viable for precise investigation of higher order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)
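The general quasi-Monte Carlo idea can be sketched with a 2-D Halton sequence; this pure-Python illustration shows only the principle, not the paper's specific QMC rule or its CUDA implementation:

```python
import math

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in a given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# Quasi-Monte Carlo estimate of ∫_0^1 ∫_0^1 exp(x + y) dx dy = (e - 1)^2 using
# a 2-D Halton sequence (bases 2 and 3).  For smooth integrands the QMC error
# decays roughly like (log N)^d / N, much faster than plain Monte Carlo's
# N^{-1/2}.
n = 4096
est = sum(math.exp(halton(i, 2) + halton(i, 3)) for i in range(1, n + 1)) / n
print(est, (math.e - 1) ** 2)
```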

10. Numerical integration of diffraction integrals for a circular aperture

Cooper, I. J.; Sheppard, C. J. R.; Sharma, M.

It is possible to obtain an accurate irradiance distribution for the diffracted wave field from an aperture by the numerical evaluation of the two-dimensional diffraction integrals using a product-integration method in which Simpson's 1/3 rule is applied twice. The calculations can be done quickly using a standard PC by utilizing matrix operations on complex numbers with Matlab. The diffracted wave field can be calculated from the plane of the aperture to the far field without introducing many of the standard approximations that are used to give Fresnel or Fraunhofer diffraction. The numerical method is used to compare the diffracted irradiance distribution from a circular aperture as predicted by Kirchhoff, Rayleigh-Sommerfeld 1 and Rayleigh-Sommerfeld 2 diffraction integrals.
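The double application of Simpson's 1/3 rule described above amounts to a product-integration rule: the 1D Simpson weights are applied along each axis and combined as an outer product. A minimal sketch (illustrative function names, not the authors' Matlab code) is:

```python
import numpy as np

def simpson2d(f, ax, bx, ay, by, nx=64, ny=64):
    """Approximate a 2D integral by applying Simpson's 1/3 rule along
    each axis (a product-integration rule).  nx, ny must be even."""
    x = np.linspace(ax, bx, nx + 1)
    y = np.linspace(ay, by, ny + 1)

    def w(n):
        # Simpson weights 1, 4, 2, 4, ..., 2, 4, 1 along one axis
        wts = np.ones(n + 1)
        wts[1:-1:2] = 4.0
        wts[2:-1:2] = 2.0
        return wts

    W = np.outer(w(nx), w(ny))          # product of the 1D weight vectors
    X, Y = np.meshgrid(x, y, indexing="ij")
    hx = (bx - ax) / nx
    hy = (by - ay) / ny
    return (hx * hy / 9.0) * np.sum(W * f(X, Y))

# Check on a separable integrand with a known answer:
# integral of x*y over the unit square is 1/4
val = simpson2d(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0)
```

The same weight matrix works unchanged for complex-valued integrands, which is what the diffraction integrals require.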

11. Adaptive Control of Event Integration

ERIC Educational Resources Information Center

Akyurek, Elkan G.; Toffanin, Paolo; Hommel, Bernhard

2008-01-01

Identifying 2 target stimuli in a rapid stream of visual symbols is much easier if the 2nd target appears immediately after the 1st target (i.e., at Lag 1) than if distractor stimuli intervene. As this phenomenon comes with a strong tendency to confuse the order of the targets, it seems to be due to the integration of both targets into the same…

12. Numerical Integral of Resistance Coefficients in Diffusion

Zhang, Q. S.

2017-01-01

The resistance coefficients in the screened Coulomb potential of stellar plasma are evaluated to high accuracy. I have analyzed the possible singularities in the integral of the scattering angle. There are possible singularities in the case of an attractive potential, which may cause problems for the numerical integral. To avoid them, I use a proper scheme, e.g., splitting the range into many subintervals whose widths are determined by the variation of the integrand, to calculate the scattering angle. The collision integrals are calculated using Romberg's method, so the accuracy is high (∼10^-12). The results of collision integrals and their derivatives for -7 ≤ ψ ≤ 5 are listed. Using Hermite polynomial interpolation of these data, the collision integrals can be obtained with an accuracy of 10^-10. For very weakly coupled plasma (ψ ≥ 4.5), analytical fittings for the collision integrals are available with an accuracy of 10^-11. I have compared the final resistance coefficients with other works and found that, for a repulsive potential, the results are basically the same as others'; for an attractive potential, the results in cases of intermediate and strong coupling show significant differences. The resulting resistance coefficients are tested in the solar model. Compared with the widely used models of Cox et al. and Thoul et al., the resistance coefficients in the screened Coulomb potential lead to a slightly weaker effect in the solar model, which is contrary to the expectation of attempts to solve the solar abundance problem.
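Romberg's method, used above for the collision integrals, builds a table of trapezoid-rule estimates and accelerates it with Richardson extrapolation. A generic sketch (not the paper's implementation) is:

```python
import numpy as np

def romberg(f, a, b, max_k=12, tol=1e-12):
    """Romberg integration: trapezoid estimates refined by Richardson
    extrapolation.  Returns the integral of f over [a, b]."""
    R = np.zeros((max_k, max_k))
    h = b - a
    R[0, 0] = 0.5 * h * (f(a) + f(b))
    for k in range(1, max_k):
        h *= 0.5
        # trapezoid refinement: only the new midpoints are evaluated
        xs = a + h * np.arange(1, 2**k, 2)
        R[k, 0] = 0.5 * R[k - 1, 0] + h * np.sum(f(xs))
        # Richardson extrapolation across the row
        for j in range(1, k + 1):
            R[k, j] = R[k, j - 1] + (R[k, j - 1] - R[k - 1, j - 1]) / (4**j - 1)
        if abs(R[k, k] - R[k - 1, k - 1]) < tol:
            return R[k, k]
    return R[max_k - 1, max_k - 1]

val = romberg(np.sin, 0.0, np.pi)   # exact value is 2
```

For smooth integrands the diagonal entries converge extremely fast, which is how accuracies near 10^-12 are achievable with modest numbers of function evaluations.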

13. Protecting genome integrity during CRISPR immune adaptation.

PubMed

Wright, Addison V; Doudna, Jennifer A

2016-10-01

Bacterial CRISPR-Cas systems include genomic arrays of short repeats flanking foreign DNA sequences and provide adaptive immunity against viruses. Integration of foreign DNA must occur specifically to avoid damaging the genome or the CRISPR array, but surprisingly promiscuous activity occurs in vitro. Here we reconstituted full-site DNA integration and show that the Streptococcus pyogenes type II-A Cas1-Cas2 integrase maintains specificity in part through limitations on the second integration step. At non-CRISPR sites, integration stalls at the half-site intermediate, thereby enabling reaction reversal. S. pyogenes Cas1-Cas2 is highly specific for the leader-proximal repeat and recognizes the repeat's palindromic ends, thus fitting a model of independent recognition by distal Cas1 active sites. These findings suggest that DNA-insertion sites are less common than suggested by previous work, thereby preventing toxicity during CRISPR immune adaptation and maintaining host genome integrity.

14. Self-Adaptive Filon's Integration Method and Its Application to Computing Synthetic Seismograms

Zhang, Hai-Ming; Chen, Xiao-Fei

2001-03-01

Based on the principle of the self-adaptive Simpson integration method, and by incorporating the 'fifth-order' Filon's integration algorithm [Bull. Seism. Soc. Am. 73 (1983) 913], we have proposed a simple and efficient numerical integration method, i.e., the self-adaptive Filon's integration method (SAFIM), for computing synthetic seismograms at large epicentral distances. With numerical examples, we have demonstrated that the SAFIM is not only accurate but also very efficient. This new integration method is expected to be very useful in seismology, as well as in computing similar oscillatory integrals in other branches of physics.
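The self-adaptive Simpson principle that SAFIM builds on bisects subintervals until a local error estimate meets its share of the tolerance. A standard recursive sketch of that principle (not the SAFIM code itself, which replaces the Simpson panel with Filon's oscillatory rule) is:

```python
import math

def adaptive_simpson(f, a, b, tol=1e-10):
    """Self-adaptive Simpson integration: bisect any subinterval whose
    local error estimate exceeds its share of the tolerance."""
    def simpson(fa, fm, fb, h):
        return h / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, m - a)
        right = simpson(fm, frm, fb, b - m)
        # standard error estimate: (S_left + S_right - S_whole) / 15
        if abs(left + right - whole) < 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, 0.5 * tol)
                + recurse(m, b, fm, frm, fb, right, 0.5 * tol))

    fa, fb, fm = f(a), f(b), f(0.5 * (a + b))
    whole = simpson(fa, fm, fb, b - a)
    return recurse(a, b, fa, fm, fb, whole, tol)

val = adaptive_simpson(math.sin, 0.0, math.pi)   # exact value is 2
```

The recursion concentrates function evaluations where the integrand varies quickly, which is what makes the combined adaptive-plus-Filon approach efficient for oscillatory seismogram integrals.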

15. Numerical integration of asymptotic solutions of ordinary differential equations

NASA Technical Reports Server (NTRS)

Thurston, Gaylen A.

1989-01-01

Classical asymptotic analysis of ordinary differential equations derives approximate solutions that are numerically stable. However, the analysis also leads to tedious expansions in powers of the relevant parameter for a particular problem. The expansions are replaced with integrals that can be evaluated by numerical integration. The resulting numerical solutions retain the linear independence that is the main advantage of asymptotic solutions. Examples, including the Falkner-Skan equation from laminar boundary layer theory, illustrate the method of asymptotic analysis with numerical integration.

16. A Stable Adaptive Numerical Scheme for Hyperbolic Conservation Laws.

DTIC Science & Technology

1983-05-01

Bradley J. Lucier, Mathematics Research Center, University of Wisconsin-Madison, 610 Walnut Street, Madison, Wisconsin 53706, May 1983 (received April 5, 1983). Technical Summary Report #2517. Approved for public release; distribution unlimited. Sponsored by the U.S. Army Research Office and the National Science Foundation. Abstract: A new

SciTech Connect

Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

2010-01-01

18. Efficient numerical integration of neutrino oscillations in matter

Casas, F.; D'Olivo, J. C.; Oteo, J. A.

2016-12-01

A special-purpose solver based on the Magnus expansion, well suited to integrating the linear three-neutrino oscillation equations in matter, is proposed. The computations are sped up by up to two orders of magnitude with respect to a general-purpose numerical integrator, which could smooth the way for massive numerical integration concomitant with experimental data analyses. Detailed illustrations of the numerical procedure and computer time costs are provided.
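The abstract does not spell out the scheme; as a hedged illustration of a Magnus-type integrator, here is the common second-order "exponential midpoint" rule for i dψ/dt = H(t)ψ, with a toy two-state Hamiltonian (the matrices and numbers are illustrative, not physical neutrino parameters):

```python
import numpy as np
from scipy.linalg import expm

def magnus2_step(psi, H_mid, h):
    """One step of the second-order Magnus (exponential midpoint) method
    for i dpsi/dt = H(t) psi:  psi <- exp(-i h H(t + h/2)) psi.
    The propagator is unitary, so total probability is preserved."""
    return expm(-1j * h * H_mid) @ psi

# Toy two-flavour Hamiltonian with a time-varying "matter" term
def H(t):
    mixing = np.array([[0.0, 0.5], [0.5, 1.0]])
    matter = np.array([[1.0 + 0.3 * np.sin(t), 0.0], [0.0, 0.0]])
    return mixing + matter

psi = np.array([1.0 + 0j, 0.0 + 0j])
h, t = 0.01, 0.0
for _ in range(1000):
    psi = magnus2_step(psi, H(t + 0.5 * h), h)
    t += h

survival = abs(psi[0]) ** 2                     # "survival probability"
total = abs(psi[0]) ** 2 + abs(psi[1]) ** 2     # stays 1 to machine precision
```

Exact unitarity at every step is the structural property that makes Magnus-type solvers attractive for oscillation equations, independent of step size.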

19. Adaptive Mesh Refinement and Adaptive Time Integration for Electrical Wave Propagation on the Purkinje System.

PubMed

Ying, Wenjun; Henriquez, Craig S

2015-01-01

An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, via an operator-splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reaction part of the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise-linear finite element method. The adaptive algorithm automatically recognizes when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local changes of membrane properties. Numerical examples demonstrating the efficiency and accuracy of the adaptive algorithm are presented.
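The operator-splitting idea described above can be sketched in one dimension. In this simplified version both half-steps are implicit, and the reaction term is taken linear so its implicit update has a closed form (a drastic simplification of the paper's nonlinear membrane model):

```python
import numpy as np

def split_step(u, D, k, dx, dt):
    """One operator-splitting step for u_t = D u_xx - k u on a 1D cable
    with zero-flux ends: an implicit (backward Euler) diffusion solve,
    then the implicit reaction update; both are unconditionally stable."""
    n = len(u)
    # Assemble (I - dt*D*L) with L the 1D Laplacian (Neumann ends)
    r = dt * D / dx**2
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1.0 + 2.0 * r)
    A[0, 0] = A[-1, -1] = 1.0 + r          # zero-flux boundaries
    idx = np.arange(n - 1)
    A[idx, idx + 1] = -r
    A[idx + 1, idx] = -r
    u = np.linalg.solve(A, u)
    # implicit reaction step: (u_new - u)/dt = -k u_new
    return u / (1.0 + dt * k)

u = np.zeros(50)
u[:5] = 1.0                                # local stimulus
for _ in range(200):
    u = split_step(u, D=1.0, k=0.1, dx=0.1, dt=0.05)
```

With zero-flux boundaries the implicit diffusion step conserves the total of u exactly, and the reaction step then applies a uniform decay, so the split solution remains bounded and positive for any timestep.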

20. Adaptive Mesh Refinement and Adaptive Time Integration for Electrical Wave Propagation on the Purkinje System

PubMed Central

Ying, Wenjun; Henriquez, Craig S.

2015-01-01

An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, via an operator-splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reaction part of the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise-linear finite element method. The adaptive algorithm automatically recognizes when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local changes of membrane properties. Numerical examples demonstrating the efficiency and accuracy of the adaptive algorithm are presented. PMID:26581455

1. Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes

Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.

2015-12-01

Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume - finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability.

Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.

PubMed

Tseng, Chien-Hao; Lin, Sheng-Fuu; Jwo, Dah-Jing

2016-07-26

This paper presents a sensor fusion method based on the combination of a cubature Kalman filter (CKF) and a fuzzy logic adaptive system (FLAS) for integrated navigation systems, such as GPS/INS (Global Positioning System/inertial navigation system) integration. The third-degree spherical-radial cubature rule applied in the CKF avoids numerical instability in the system model. In navigation integration, the performance of nonlinear-filter-based estimation of the position and velocity states may severely degrade because of modeling errors due to dynamics uncertainties of the vehicle. To resolve the shortcoming of selecting the process noise covariance through personal experience or numerical simulation, a scheme called the fuzzy adaptive cubature Kalman filter (FACKF) is presented, in which the FLAS adjusts the weighting factor of the process noise covariance matrix. The FLAS is incorporated into the CKF framework as a mechanism for timely tuning of the process noise covariance matrix based on a degree-of-divergence (DOD) parameter. The proposed FACKF algorithm shows promising accuracy improvement compared to the extended Kalman filter (EKF), unscented Kalman filter (UKF), and CKF approaches.
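The third-degree spherical-radial cubature rule referenced above uses 2n equally weighted points placed at mean ± √n times the Cholesky columns of the covariance. A minimal sketch of point generation and the Gaussian propagation step (not the authors' FACKF code) is:

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature rule: 2n equally weighted
    points at mean +/- sqrt(n) * (Cholesky columns of cov)."""
    n = len(mean)
    S = np.linalg.cholesky(cov)
    offsets = np.sqrt(n) * np.hstack([S, -S])      # n x 2n
    return mean[:, None] + offsets                  # each column is a point

def cubature_transform(f, mean, cov):
    """Propagate a Gaussian through a nonlinear map f via the rule."""
    pts = cubature_points(mean, cov)
    fpts = np.apply_along_axis(f, 0, pts)
    m = fpts.mean(axis=1)
    d = fpts - m[:, None]
    P = d @ d.T / pts.shape[1]
    return m, P

# For a linear map the rule is exact: mean -> A @ mean, cov -> A cov A^T
A = np.array([[1.0, 0.1], [0.0, 1.0]])
m, P = cubature_transform(lambda x: A @ x, np.array([1.0, 2.0]), np.eye(2))
```

Unlike the unscented transform, the cubature rule has no tunable spread parameters and all weights are positive, which is the source of its numerical robustness.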

PubMed Central

Tseng, Chien-Hao; Lin, Sheng-Fuu; Jwo, Dah-Jing

2016-01-01

This paper presents a sensor fusion method based on the combination of a cubature Kalman filter (CKF) and a fuzzy logic adaptive system (FLAS) for integrated navigation systems, such as GPS/INS (Global Positioning System/inertial navigation system) integration. The third-degree spherical-radial cubature rule applied in the CKF avoids numerical instability in the system model. In navigation integration, the performance of nonlinear-filter-based estimation of the position and velocity states may severely degrade because of modeling errors due to dynamics uncertainties of the vehicle. To resolve the shortcoming of selecting the process noise covariance through personal experience or numerical simulation, a scheme called the fuzzy adaptive cubature Kalman filter (FACKF) is presented, in which the FLAS adjusts the weighting factor of the process noise covariance matrix. The FLAS is incorporated into the CKF framework as a mechanism for timely tuning of the process noise covariance matrix based on a degree-of-divergence (DOD) parameter. The proposed FACKF algorithm shows promising accuracy improvement compared to the extended Kalman filter (EKF), unscented Kalman filter (UKF), and CKF approaches. PMID:27472336

4. Adaptive numerical competency in a food-hoarding songbird.

PubMed

Hunt, Simon; Low, Jason; Burns, K C

2008-10-22

Most animals can distinguish between small quantities (fewer than four) innately. Many animals can also distinguish between larger quantities after extensive training. However, the adaptive significance of numerical discrimination in wild animals is almost completely unknown. We conducted a series of experiments to test whether a food-hoarding songbird, the New Zealand robin Petroica australis, uses numerical judgements when retrieving and pilfering cached food. Different numbers of mealworms were presented sequentially to wild birds in a pair of artificial cache sites, which were then obscured from view. Robins frequently chose the site containing more prey, and the accuracy of their number discriminations declined linearly with the total number of prey concealed, remaining above chance in trials containing up to 12 prey items. A series of complementary experiments showed that these results could not be explained by time, volume, orientation, order, or sensory confounds. Lastly, a violation-of-expectancy experiment, in which birds were allowed to retrieve a fraction of the prey they were originally offered, showed that birds searched for longer when they expected to retrieve more prey. The overall results indicate that New Zealand robins use a sophisticated numerical sense to retrieve and pilfer stored food, providing a critical link in understanding the evolution of numerical competency.

5. Orthogonal Metal Cutting Simulation Using Advanced Constitutive Equations with Damage and Fully Adaptive Numerical Procedure

Saanouni, Kkemais; Labergère, Carl; Issa, Mazen; Rassineux, Alain

2010-06-01

This work proposes a complete adaptive numerical methodology for 2D machining simulation that uses 'advanced' elastoplastic constitutive equations coupling thermal effects, large elasto-viscoplasticity with mixed nonlinear hardening, ductile damage, and contact with friction. Fully coupled (strong coupling) thermo-elasto-visco-plastic-damage constitutive equations based on state variables under large plastic deformation, developed for metal forming simulation, are presented. The relevant numerical aspects concerning the local integration scheme, the global resolution strategy, and the adaptive remeshing facility are briefly discussed. Applications are made to orthogonal metal cutting with chip formation and segmentation at high velocity. The interactions between hardening, plasticity, ductile damage, and thermal effects, and their influence on adiabatic shear band formation including the formation of cracks, are investigated.

6. Error Estimates for Numerical Integration Rules

ERIC Educational Resources Information Center

Mercer, Peter R.

2005-01-01

The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.

7. Modern industrial simulation tools: Kernel-level integration of high performance parallel processing, object-oriented numerics, and adaptive finite element analysis. Final report, July 16, 1993--September 30, 1997

SciTech Connect

Deb, M.K.; Kennon, S.R.

1998-04-01

A cooperative R&D effort between industry and the US government, this project, under the HPPP (High Performance Parallel Processing) initiative of the Dept. of Energy, started the investigation into parallel object-oriented (OO) numerics. The basic goal was to research and utilize emerging technologies to create a physics-independent computational kernel for applications using the adaptive finite element method. The industrial team included Computational Mechanics Co., Inc. (COMCO) of Austin, TX (the primary contractor), Scientific Computing Associates, Inc. (SCA) of New Haven, CT, Texaco, and CONVEX. Sandia National Laboratory (Albuquerque, NM) was the technology partner from the government side. COMCO had responsibility for the main kernel design and development, SCA had the lead in parallel solver technology, and guidance on OO technologies was Sandia's main contribution to this venture. CONVEX and Texaco supported the partnership with hardware resources and application knowledge, respectively. A minimum of fifty-percent cost-sharing was provided by the industry partnership during this project. This report describes the R&D activities and provides some details about the prototype kernel and example applications.

8. Numerical study of Taylor bubbles with adaptive unstructured meshes

Xie, Zhihua; Pavlidis, Dimitrios; Percival, James; Pain, Chris; Matar, Omar; Hasan, Abbas; Azzopardi, Barry

2014-11-01

The Taylor bubble is a single long bubble which nearly fills the entire cross section of a liquid-filled circular tube. This type of bubble flow regime often occurs in gas-liquid slug flows in many industrial applications, including oil-and-gas production, chemical and nuclear reactors, and heat exchangers. The objective of this study is to investigate the fluid dynamics of Taylor bubbles rising in a vertical pipe filled with oils of extremely high viscosity (mimicking the "heavy oils" found in the oil-and-gas industry). A modelling and simulation framework is presented here which can modify and adapt anisotropic unstructured meshes to better represent the underlying physics of bubble rise and reduce the computational effort without sacrificing accuracy. The numerical framework consists of a mixed control-volume and finite-element formulation, a "volume of fluid"-type method for the interface capturing based on a compressive control volume advection method, and a force-balanced algorithm for the surface tension implementation. Numerical examples of some benchmark tests and the dynamics of Taylor bubbles are presented to show the capability of this method. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.

9. Numerical integration of ordinary differential equations of various orders

NASA Technical Reports Server (NTRS)

Gear, C. W.

1969-01-01

This report describes techniques for the numerical integration of differential equations of various orders. Modified multistep predictor-corrector methods for general initial-value problems are discussed, and new methods are introduced.

10. On the numeric integration of dynamic attitude equations

NASA Technical Reports Server (NTRS)

Crouch, P. E.; Yan, Y.; Grossman, Robert

1992-01-01

We describe new types of numerical integration algorithms developed by the authors. The main aim of the algorithms is to numerically integrate differential equations which evolve on geometric objects, such as the rotation group. The algorithms provide iterates which lie on the prescribed geometric object, either exactly, or to some prescribed accuracy, independent of the order of the algorithm. This paper describes applications of these algorithms to the evolution of the attitude of a rigid body.
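One standard way to keep iterates exactly on the rotation group, in the spirit of the algorithms described above (shown here as a generic Lie-Euler step, not the authors' specific method), is to update the attitude matrix by a closed-form matrix exponential on so(3):

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix such that hat(w) @ v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    """Rodrigues' formula: closed-form matrix exponential on so(3)."""
    th = np.linalg.norm(w)
    K = hat(w / th) if th > 1e-12 else np.zeros((3, 3))
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def lie_euler_attitude(R, omega, h):
    """One Lie-Euler step R <- R exp(h hat(omega)); the iterate stays
    exactly on the rotation group regardless of step size."""
    return R @ expm_so3(h * omega)

R = np.eye(3)
omega = np.array([0.3, -0.2, 0.7])          # body angular velocity
for _ in range(500):
    R = lie_euler_attitude(R, omega, 0.01)

orthogonality_defect = np.linalg.norm(R.T @ R - np.eye(3))
```

Because each factor is exactly orthogonal, orthogonality of the product is preserved to machine precision, independent of the order of the time-stepping scheme.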

11. Canonical algorithms for numerical integration of charged particle motion equations

Efimov, I. N.; Morozov, E. A.; Morozova, A. R.

2017-02-01

A technique for numerically integrating the equations of charged particle motion in a magnetic field is considered. It is based on canonical transformations of the phase space in Hamiltonian mechanics, which make the integration process stable against the accumulation of counting errors. The integration algorithms involve the minimum possible amount of arithmetic and can be used to design accelerators and electron- and ion-optics devices.
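The abstract does not give the algorithm itself; as a well-known example of a structure-preserving update for charged-particle motion in a magnetic field, here is the standard Boris push (a widely used scheme, not necessarily the one in this paper):

```python
import numpy as np

def boris_step(x, v, E, B, qm, dt):
    """Standard Boris push for dx/dt = v, dv/dt = qm*(E + v x B).
    The magnetic update is a pure rotation, so with E = 0 the speed
    |v| is conserved exactly, keeping long orbits numerically stable."""
    v_minus = v + 0.5 * dt * qm * E
    t = 0.5 * dt * qm * B
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * dt * qm * E
    return x + dt * v_new, v_new

x = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])               # uniform field along z
for _ in range(10000):
    x, v = boris_step(x, v, np.zeros(3), B, qm=1.0, dt=0.05)

speed = np.linalg.norm(v)
```

The exact energy behaviour over long gyro-orbits, rather than per-step accuracy, is what such structure-preserving schemes share with the canonical-transformation approach of the abstract.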

12. Adaptive integral dynamic surface control of a hypersonic flight vehicle

Aslam Butt, Waseem; Yan, Lin; Amezquita S., Kendrick

2015-07-01

In this article, nonlinear adaptive dynamic surface air speed and flight path angle control designs are presented for the longitudinal dynamics of a flexible hypersonic flight vehicle. The tracking performance of the control design is enhanced by a novel integral term that avoids a large initial control signal. To ensure feasibility, the design scheme incorporates magnitude and rate constraints on the actuator commands. The uncertain nonlinear functions are approximated by an efficient use of neural networks to reduce the computational load. A detailed stability analysis shows that all closed-loop signals are uniformly ultimately bounded and the ? tracking performance is guaranteed. The robustness of the design scheme is verified through numerical simulations of the flexible flight vehicle model.

13. Extremely Fast Numerical Integration of Ocean Surface Wave Dynamics

DTIC Science & Technology

2007-09-30

The KP equation is a natural two-space-dimension extension of the KdV equation; the periodic KP solutions include directional spreading in the wave field ... the nonlinear preprocessor in the new approach for obtaining numerical solutions to nonlinear wave equations ... analytical study and extremely fast numerical integration of the extended nonlinear Schroedinger equation for fully three-dimensional wave motion

14. A chaos detectable and time step-size adaptive numerical scheme for nonlinear dynamical systems

Chen, Yung-Wei; Liu, Chein-Shan; Chang, Jiang-Ren

2007-02-01

The first step in investigating the dynamics of a continuous-time system described by ordinary differential equations is to integrate them to obtain trajectories. In this paper, we convert the group-preserving scheme (GPS) developed by Liu [International Journal of Non-Linear Mechanics 36 (2001) 1047-1068] into a time step-size adaptive scheme, x_{k+1} = x_k + h f(x_k, t_k), where x ∈ R^n are the system variables we are concerned with and f(x, t) ∈ R^n is a time-varying vector field. The scheme has a form similar to the Euler scheme, x_{k+1} = x_k + Δt f(x_k, t_k), but our step size h adapts automatically. Very interestingly, the ratio h/Δt, which we call the adaptive factor, can forecast the appearance of chaos when the considered dynamical system becomes chaotic. Numerical examples of the Duffing equation, the Lorenz equation and the Rossler equation, which may exhibit chaotic behavior under certain parameter values, are used to demonstrate these phenomena. Two other non-chaotic examples are included to compare the performance of the GPS and the adaptive scheme.

15. Integration of hp-Adaptivity and a Two Grid Solver. II. Electromagnetic Problems

DTIC Science & Technology

2005-01-01

for lower order FE spaces. More precisely, let T be a grid, M the associated lowest-order Nédélec subspaces of H_D(curl; Ω) of the first kind [24], and W... Nédélec, Mixed finite elements in IR3, Numer. Math., 35 (1980), pp. 315-341. [25] D. Pardo and L. Demkowicz, Integration of hp-adaptivity with a two

16. An Adaptive Cauchy Differential Evolution Algorithm for Global Numerical Optimization

PubMed Central

Choi, Tae Jong; Ahn, Chang Wook; An, Jinung

2013-01-01

Appropriate adaptation of control parameters, such as the scaling factor (F), crossover rate (CR), and population size (NP), is one of the major problems in the Differential Evolution (DE) literature. A well-designed adaptive or self-adaptive parameter control method can greatly improve the performance of DE. Although there are many suggestions for adapting the control parameters, properly adapting them to a given problem remains a challenging task. In this paper, we present an adaptive parameter control DE algorithm in which each individual has its own control parameters. The control parameters of each individual are adapted based on the average parameter values of successfully evolved individuals, using the Cauchy distribution. Through this, each individual's control parameters are assigned either near the average parameter value or far from it, which might be a better parameter value for the next generation. The experimental results show that the proposed algorithm is more robust than the standard DE algorithm and several state-of-the-art adaptive DE algorithms in solving various unimodal and multimodal problems. PMID:23935445
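The adaptation idea above, each individual's F and CR resampled from a Cauchy distribution centred on the average parameters of the previously successful individuals, can be sketched as follows (a simplified reading of the paper, with illustrative constants such as the 0.1 Cauchy scale):

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_de(f, bounds, np_size=30, gens=200):
    """DE/rand/1/bin in which each individual's F and CR are resampled
    from Cauchy distributions centred on the mean parameter values of
    the individuals that improved in the previous generation."""
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (np_size, dim))
    fit = np.array([f(x) for x in pop])
    F, CR = np.full(np_size, 0.5), np.full(np_size, 0.9)
    for _ in range(gens):
        ok_F, ok_CR = [], []
        for i in range(np_size):
            a, b, c = pop[rng.choice(np_size, 3, replace=False)]
            mutant = np.clip(a + F[i] * (b - c), lo, hi)
            cross = rng.random(dim) < CR[i]
            cross[rng.integers(dim)] = True       # at least one gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, ft
                ok_F.append(F[i]); ok_CR.append(CR[i])
        # Cauchy resampling around the mean of the successful parameters
        mF = np.mean(ok_F) if ok_F else 0.5
        mCR = np.mean(ok_CR) if ok_CR else 0.9
        F = np.clip(mF + 0.1 * rng.standard_cauchy(np_size), 0.05, 1.0)
        CR = np.clip(mCR + 0.1 * rng.standard_cauchy(np_size), 0.0, 1.0)
    return pop[fit.argmin()], fit.min()

best, best_f = cauchy_de(lambda x: np.sum(x**2), [(-5, 5)] * 5)
```

The heavy tails of the Cauchy distribution are what occasionally place parameters far from the successful average, as the abstract describes.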

17. An adaptive Cauchy differential evolution algorithm for global numerical optimization.

PubMed

Choi, Tae Jong; Ahn, Chang Wook; An, Jinung

2013-01-01

Appropriate adaptation of control parameters, such as the scaling factor (F), crossover rate (CR), and population size (NP), is one of the major problems in the Differential Evolution (DE) literature. A well-designed adaptive or self-adaptive parameter control method can greatly improve the performance of DE. Although there are many suggestions for adapting the control parameters, properly adapting them to a given problem remains a challenging task. In this paper, we present an adaptive parameter control DE algorithm in which each individual has its own control parameters. The control parameters of each individual are adapted based on the average parameter values of successfully evolved individuals, using the Cauchy distribution. Through this, each individual's control parameters are assigned either near the average parameter value or far from it, which might be a better parameter value for the next generation. The experimental results show that the proposed algorithm is more robust than the standard DE algorithm and several state-of-the-art adaptive DE algorithms in solving various unimodal and multimodal problems.

18. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

NASA Technical Reports Server (NTRS)

Abrams, D.; Williams, C.

1999-01-01

We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speedup in comparison to the fastest known classical deterministic algorithms and a quadratic speedup in comparison to classical Monte Carlo methods.

19. Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals

NASA Technical Reports Server (NTRS)

Fink, Patricia W.; Wilton, D. R.; Khayat, Michael A.

2007-01-01

Simple and efficient numerical procedures for evaluating the gradient of Newton-type potentials are presented. Convergences of both normal and tangential components of the gradient are examined. The convergence of the vector potential is also examined, and it is shown that the scheme for handling near-hypersingular integrals also is effective for the nearly singular potential terms.

20. Monograph - The Numerical Integration of Ordinary Differential Equations.

ERIC Educational Resources Information Center

Hull, T. E.

The materials presented in this monograph are intended to be included in a course on ordinary differential equations at the upper division level in a college mathematics program. These materials provide an introduction to the numerical integration of ordinary differential equations, and they can be used to supplement a regular text on this…

1. Integrated product definition representation for agile numerical control applications

SciTech Connect

Simons, W.R. Jr.; Brooks, S.L.; Kirk, W.J. III; Brown, C.W.

1994-11-01

Realization of agile manufacturing capabilities for a virtual enterprise requires the integration of technology, management, and work force into a coordinated, interdependent system. This paper is focused on technology enabling tools for agile manufacturing within a virtual enterprise specifically relating to Numerical Control (N/C) manufacturing activities and product definition requirements for these activities.

2. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

SciTech Connect

Masalma, Yahya; Jiao, Yu

2010-10-01

We implemented a scalable parallel quasi-Monte Carlo scheme for high-dimensional numerical integration over tera-scale data points. The algorithm uses Sobol' quasi-random sequences to generate samples; the Sobol' sequence avoids clustering effects and produces low-discrepancy samples that cover the entire integration domain. The performance of the algorithm was tested, and the results demonstrate the scalability and accuracy of the implementation. The algorithm could be used in applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to improve performance; if the mixed model is used, attention should be paid to scalability and accuracy.
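A single-process sketch of Sobol'-based quasi-Monte Carlo integration (using SciPy's generator rather than the report's parallel implementation):

```python
import numpy as np
from scipy.stats import qmc

# Quasi-Monte Carlo integration of f over the unit hypercube using a
# Sobol' low-discrepancy sequence.  Powers of two preserve the
# balance properties of the sequence, hence random_base2.
dim = 6
sampler = qmc.Sobol(d=dim, scramble=False)
points = sampler.random_base2(m=14)        # 2^14 Sobol' points in [0,1)^6

# Test integrand with a known integral: prod_i (2 x_i) integrates to 1
f = np.prod(2.0 * points, axis=1)
estimate = f.mean()
```

In a parallel setting, disjoint blocks of the same sequence can be assigned to different ranks, which is essentially the decomposition the report describes for its MPI implementation.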

3. Integration of numerical analysis tools for automated numerical optimization of a transportation package design

SciTech Connect

Witkowski, W.R.; Eldred, M.S.; Harding, D.C.

1994-09-01

The use of state-of-the-art numerical analysis tools to determine the optimal design of a radioactive material (RAM) transportation container is investigated. The design of a RAM package's components involves a complex coupling of structural, thermal, and radiation-shielding analyses, and the final design must adhere to very strict design constraints. The current technique used by cask designers is uncoupled: each component is designed separately with respect to its driving constraint. With numerical optimization schemes, the complex couplings can be considered directly, and the performance of the integrated package can be maximized with respect to the analysis conditions, which can lead to more efficient package designs. Thermal and structural accident conditions are analyzed in the shape optimization of a simplified cask design. In this paper, details of the integration of numerical analysis tools, development of a process model, nonsmoothness difficulties in the optimization of the cask, and preliminary results are discussed.

4. Ensemble-type numerical uncertainty information from single model integrations

SciTech Connect

Rauser, Florian; Marotzke, Jochem; Korn, Peter

2015-07-01

We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.

5. Integrating Adaptive Games in Student-Centered Virtual Learning Environments

ERIC Educational Resources Information Center

del Blanco, Angel; Torrente, Javier; Moreno-Ger, Pablo; Fernandez-Manjon, Baltasar

2010-01-01

The increasing adoption of e-Learning technology is facing new challenges, such as how to produce student-centered systems that can be adapted to each student's needs. In this context, educational video games are proposed as an ideal medium to facilitate adaptation and tracking of students' performance for assessment purposes, but integrating the…

6. Correcting numerical integration errors caused by small aliasing errors

SciTech Connect

Smallwood, D.O.

1997-11-01

Small sampling errors can have a large effect on numerically integrated waveforms. An example is the integration of acceleration to compute velocity and displacement waveforms. These large integration errors complicate checking the suitability of the acceleration waveform for reproduction on shakers. For waveforms typically used for shaker reproduction, the errors become significant when the frequency content of the waveform spans a large frequency range. It is shown that these errors are essentially independent of the numerical integration method used, and are caused by small aliasing errors from the frequency components near the Nyquist frequency. A method to repair the integrated waveforms is presented. The method involves using a model of the acceleration error, and fitting this model to the acceleration, velocity, and displacement waveforms to force the waveforms to fit the assumed initial and final values. The correction is then subtracted from the acceleration before integration. The method is effective where the errors are isolated to a small section of the time history. It is shown that the common method to repair these errors using a high pass filter is sometimes ineffective for this class of problem.
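The frequency dependence of such integration errors can be illustrated with cumulative trapezoidal integration of a sampled acceleration signal (a generic sketch with illustrative parameter values, not taken from the paper; it shows the related effect that any sampled-data integrator handles components near the Nyquist frequency poorly):

```python
import numpy as np

fs, T = 1000.0, 2.0                     # sample rate [Hz] and duration [s]
t = np.arange(0.0, T, 1.0 / fs)

def trap_integrate(a, dt):
    """Cumulative trapezoidal integration of a sampled signal."""
    return np.concatenate(([0.0], np.cumsum((a[1:] + a[:-1]) * 0.5 * dt)))

# Low-frequency component: the recovered velocity is essentially exact.
w1 = 2 * np.pi * 5.0                    # 5 Hz
v1 = trap_integrate(np.cos(w1 * t), 1.0 / fs)
err_low = np.abs(v1 - np.sin(w1 * t) / w1).max()

# Component near the Nyquist frequency (fs/2 = 500 Hz): large errors.
w2 = 2 * np.pi * 450.0                  # 450 Hz
v2 = trap_integrate(np.cos(w2 * t), 1.0 / fs)
exact2 = np.sin(w2 * t) / w2
rel_err_high = np.abs(v2 - exact2).max() / np.abs(exact2).max()
```

At 5 Hz the integrated velocity matches the analytic result to a few parts per million, while at 450 Hz the integrator attenuates the signal severely, consistent with the abstract's point that errors become significant when the frequency content of the waveform spans a large range.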

7. Stability of numerical integration techniques for transient rotor dynamics

NASA Technical Reports Server (NTRS)

Kascak, A. F.

1977-01-01

A finite element model of a rotor bearing system was analyzed to determine the stability limits of the forward, backward, and centered Euler; Runge-Kutta; Milne; and Adams numerical integration techniques. The analysis concludes that the highest frequency mode determines the maximum time step for a stable solution. Thus, the number of mass elements should be minimized. Increasing the damping can sometimes cause numerical instability. For a uniform shaft, with 10 mass elements, operating at approximately the first critical speed, the maximum time step for the Runge-Kutta, Milne, and Adams methods is that which corresponds to approximately 1 degree of shaft movement. This is independent of rotor dimensions.
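The conclusion that the highest-frequency mode determines the maximum stable time step can be reproduced with the simplest member of this family, the forward Euler method, on a scalar decay equation (an illustrative sketch, not the rotor model itself):

```python
def forward_euler_decay(lam, h, steps):
    """Integrate y' = -lam*y from y(0) = 1 with forward Euler;
    return |y| after `steps` steps."""
    y = 1.0
    for _ in range(steps):
        y += h * (-lam * y)   # amplification factor per step: 1 - h*lam
    return abs(y)

lam = 100.0                                                # stiffest mode
decaying = forward_euler_decay(lam, h=0.015, steps=200)    # h < 2/lam = 0.02
blowing_up = forward_euler_decay(lam, h=0.025, steps=200)  # h > 2/lam
```

For y' = -lam*y, forward Euler is stable only when h < 2/lam, so the stiffest (highest-frequency) mode dictates the maximum time step for the whole system, which is why the abstract recommends minimizing the number of mass elements.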

8. Microwave Breast Imaging System Prototype with Integrated Numerical Characterization

PubMed Central

Haynes, Mark; Stang, John; Moghaddam, Mahta

2012-01-01

The increasing number of experimental microwave breast imaging systems and the need to properly model them have motivated our development of an integrated numerical characterization technique. We use Ansoft HFSS and a formalism we developed previously to numerically characterize an S-parameter-based breast imaging system and link it to an inverse scattering algorithm. We show successful reconstructions of simple test objects using synthetic and experimental data. We demonstrate the sensitivity of image reconstructions to knowledge of the background dielectric properties and show the limits of the current model. PMID:22481906

9. Numerical solution of nonlinear Hammerstein fuzzy functional integral equations

Enkov, Svetoslav; Georgieva, Atanaska; Nikolla, Renato

2016-12-01

In this work we investigate the nonlinear Hammerstein fuzzy functional integral equation. Our aim is to provide an efficient iterative method of successive approximations, based on an optimal quadrature formula for classes of fuzzy-number-valued functions of Lipschitz type, to approximate the solution. We prove the convergence of the method by Banach's fixed point theorem and investigate the numerical stability of the presented method with respect to the choice of the first iteration. Finally, illustrative numerical experiments demonstrate the accuracy and the convergence of the proposed method.
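The method of successive approximations combined with a quadrature rule can be sketched on a crisp (non-fuzzy) linear Fredholm analogue whose exact solution is known; the kernel, forcing term, and grid below are illustrative choices, not the paper's:

```python
import numpy as np

# Linear Fredholm equation u(t) = 1 + \int_0^1 t*s*u(s) ds,
# whose exact solution is u(t) = 1 + 0.75*t.
n = 201
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]
u = np.ones(n)                          # first iterate u_0(t) = 1
for _ in range(50):                     # successive approximations
    g = t * u                           # integrand s*u(s) on the grid
    integral = np.sum((g[1:] + g[:-1]) * 0.5) * dt   # trapezoidal rule
    u = 1.0 + t * integral
max_err = np.abs(u - (1.0 + 0.75 * t)).max()
```

The iteration is a contraction here (the Banach fixed-point condition of the abstract), so it converges geometrically; the residual error is set by the quadrature rule, not the iteration.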

10. Numerical simulation of an adaptive optics system with laser propagation in the atmosphere.

PubMed

Yan, H X; Li, S S; Zhang, D L; Chen, S

2000-06-20

A comprehensive model of laser propagation in the atmosphere with a complete adaptive optics (AO) system for phase compensation is presented, and a corresponding computer program is compiled. A direct wave-front gradient control method is used to reconstruct the wave-front phase. With the long-exposure Strehl ratio as the evaluation parameter, a numerical simulation of an AO system in a stationary state with the atmospheric propagation of a laser beam was conducted. It was found that for certain conditions the phase screen that describes turbulence in the atmosphere might not be isotropic. Numerical experiments show that the computational results in imaging of lenses by means of the fast Fourier transform (FFT) method agree well with those computed by means of an integration method. However, the computer time required for the FFT method is 1 order of magnitude less than that of the integration method. Phase tailoring of the calculated phase is presented as a means to solve the problem that variance of the calculated residual phase does not correspond to the correction effectiveness of an AO system. It is found for the first time to our knowledge that for a constant delay time of an AO system, when the lateral wind speed exceeds a threshold, the compensation effectiveness of an AO system is better than that of complete phase conjugation. This finding indicates that the better compensation capability of an AO system does not mean better correction effectiveness.

11. hp-Adaptive time integration based on the BDF for viscous flows

Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

2015-06-01

This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selection to control, respectively, the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators, while accurate solutions require high-order time integrators to keep the computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the chosen method of integration operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
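A readily available implementation of variable-step, variable-order BDF (orders 1 to 5) is SciPy's `solve_ivp(method="BDF")`; the stiff scalar test problem below is an illustrative stand-in for the incompressible Navier-Stokes setting of the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stiff linear test problem: y' = -1000*(y - sin(t)) + cos(t),
# whose exact solution with y(0) = 0 is y(t) = sin(t).
def rhs(t, y):
    return -1000.0 * (y - np.sin(t)) + np.cos(t)

# SciPy's BDF integrator adapts both the step size and the order (1-5)
# to meet the requested tolerances.
sol = solve_ivp(rhs, (0.0, 1.0), [0.0], method="BDF", rtol=1e-6, atol=1e-9)
error = abs(sol.y[0, -1] - np.sin(1.0))
```

An explicit method would need a step size of order 1/1000 throughout; the implicit BDF scheme takes far fewer, larger steps once the fast transient has decayed.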

12. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation.

PubMed

Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

2016-12-19

The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimizing the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter into a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. To verify the proposed algorithm, experiments with real data of Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation were conducted. The experimental results show that the proposed algorithm has multiple advantages compared to the other filtering algorithms.
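For background, the Kalman recursion that all of these variants build on can be sketched in its simplest scalar form (a generic textbook sketch, not the proposed adaptive H-infinity algorithm):

```python
import numpy as np

def kalman_1d(measurements, q=1e-5, r=0.1**2):
    """Scalar Kalman filter for a (nearly) constant state observed with
    measurement noise variance r; q is the process noise variance."""
    x, p = 0.0, 1.0               # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                    # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with the innovation z - x
        p *= (1.0 - k)            # posterior variance
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
true_value = 1.25
zs = true_value + 0.1 * rng.standard_normal(500)
est = kalman_1d(zs)
```

The ratio of q to r sets how quickly the filter trusts new data; adaptive variants such as those in the paper adjust these covariances online instead of fixing them.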

13. Designing Adaptive Low-Dissipative High Order Schemes for Long-Time Integrations. Chapter 1

NASA Technical Reports Server (NTRS)

Yee, Helen C.; Sjoegreen, B.; Mansour, Nagi N. (Technical Monitor)

2001-01-01

A general framework for the design of adaptive low-dissipative high order schemes is presented. It encompasses a rather complete treatment of the numerical approach based on four integrated design criteria: (1) For stability considerations, condition the governing equations before the application of the appropriate numerical scheme whenever it is possible; (2) For consistency, compatible schemes that possess stability properties, including physical and numerical boundary condition treatments, similar to those of the discrete analogue of the continuum are preferred; (3) For the minimization of numerical dissipation contamination, efficient and adaptive numerical dissipation control to further improve nonlinear stability and accuracy should be used; and (4) For practical considerations, the numerical approach should be efficient and applicable to general geometries, and an efficient and reliable dynamic grid adaptation should be used if necessary. These design criteria are, in general, very useful to a wide spectrum of flow simulations. However, the demand on the overall numerical approach for nonlinear stability and accuracy is much more stringent for long-time integration of complex multiscale viscous shock/shear/turbulence/acoustics interactions and numerical combustion. Robust classical numerical methods for less complex flow physics are not suitable or practical for such applications. The present approach is designed expressly to address such flow problems, especially unsteady flows. The minimization of employing very fine grids to overcome the production of spurious numerical solutions and/or instability due to under-resolved grids is also sought. The incremental studies to illustrate the performance of the approach are summarized. Extensive testing and full implementation of the approach is forthcoming. The results shown so far are very encouraging.

14. Path Integrals and Exotic Options:. Methods and Numerical Results

Bormetti, G.; Montagna, G.; Moreni, N.; Nicrosini, O.

2005-09-01

In the framework of the Black-Scholes-Merton model of financial derivatives, a path integral approach to option pricing is presented. A general formula to price path-dependent options on multidimensional and correlated underlying assets is obtained and implemented by means of various flexible and efficient algorithms. As an example, we detail the case of Asian call options. The numerical results are compared with those obtained with other procedures used in quantitative finance and found to be in good agreement. In particular, when pricing at-the-money (ATM) and out-of-the-money (OTM) options, the path integral approach exhibits competitive performance.
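For comparison, a standard Monte Carlo procedure for an arithmetic-average Asian call, one of the "other procedures used in quantitative finance" the abstract refers to, is straightforward to sketch; the parameters below are illustrative:

```python
import numpy as np

def asian_call_mc(s0, strike, r, sigma, T, n_steps=50, n_paths=100_000, seed=1):
    """Monte Carlo price of an arithmetic-average Asian call under
    risk-neutral geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # exact log-price increments of GBM over each time step
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    paths = s0 * np.exp(np.cumsum(increments, axis=1))
    payoffs = np.maximum(paths.mean(axis=1) - strike, 0.0)
    return float(np.exp(-r * T) * payoffs.mean())

price = asian_call_mc(s0=100.0, strike=100.0, r=0.05, sigma=0.2, T=1.0)
```

Averaging over the path lowers the effective volatility, so the ATM Asian call is worth roughly half the corresponding European call under these parameters.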

15. Adaptive robust controller based on integral sliding mode concept

Taleb, M.; Plestan, F.

2016-09-01

This paper proposes, for a class of uncertain nonlinear systems, an adaptive controller based on adaptive second-order sliding mode control and integral sliding mode control concepts. The adaptation strategy solves the problem of gain tuning and has the advantage of chattering reduction. Moreover, only limited information about the perturbation and uncertainties has to be known. The control is composed of two parts: an adaptive one whose objective is to reject the perturbation and system uncertainties, whereas the second one is chosen such that the nominal part of the system is stabilised at zero. To illustrate the effectiveness of the proposed approach, an application to an academic example is shown with simulation results.

16. New Numerical Integrators Based on Solvability and Splitting

DTIC Science & Technology

2007-11-02

Report dated 3 January 2005 on a Group Methods and Control Theory Workshop held 28 June 2004 - 1 July 2004; the original document contains color images. Topics covered include the Magnus expansion, mechanics, NMR spectroscopy, infrared divergences in QED, and control theory.

17. Adaptive Grid Generation for Numerical Solution of Partial Differential Equations.

DTIC Science & Technology

1983-12-01

Bibliography fragments: Thompson, J. F., "A Survey of Grid Generation Techniques in Computational Fluid Dynamics," AIAA Paper No. 83-0447; Ghia, K. N. and Ghia, U. (eds.), ASME FED, 5: 35-47 (1983); Thompson, J. F., Thames, F. C., and Mastin, C. W., "Automated Numerical Generation…"; Thompson, J. F. (ed.), Numerical Grid Generation, New York: North Holland, 1982; Thompson, J. F., and Mastin, C. W., "Grid Generation…"

18. Spiking neural network simulation: numerical integration with the Parker-Sochacki method.

PubMed

Stewart, Robert D; Bair, Wyeth

2009-08-01

Mathematical neuronal models are normally expressed using differential equations. The Parker-Sochacki method is a new technique for the numerical integration of differential equations applicable to many neuronal models. Using this method, the solution order can be adapted according to the local conditions at each time step, enabling adaptive error control without changing the integration timestep. The method has been limited to polynomial equations, but we present division and power operations that expand its scope. We apply the Parker-Sochacki method to the Izhikevich 'simple' model and a Hodgkin-Huxley type neuron, comparing the results with those obtained using the Runge-Kutta and Bulirsch-Stoer methods. Benchmark simulations demonstrate an improved speed/accuracy trade-off for the method relative to these established techniques.
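The core idea, generating the local power-series solution term by term and adapting the order rather than the step size, is easiest to see on y' = y, whose Maclaurin recurrence is trivial (real neuronal models require the polynomial restructuring and the division/power operations described in the paper):

```python
import math

def ps_step_exp(y0, t, tol=1e-12, max_order=40):
    """One Parker-Sochacki-style step for y' = y: build the local
    Maclaurin series term by term (a_k = a_{k-1}/k) and stop adding
    terms once they drop below `tol`, so the solution order adapts
    to local conditions instead of the integration time step."""
    term = y0             # running term a_k * t**k
    total = y0
    for k in range(1, max_order):
        term = term * t / k
        total += term
        if abs(term) < tol:
            break
    return total

approx = ps_step_exp(1.0, 0.5)   # series value of exp(0.5)
```

For smooth dynamics the series converges in a handful of terms, giving error control without shrinking the time step, which is the trade-off benchmarked against Runge-Kutta and Bulirsch-Stoer in the paper.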

19. Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals

NASA Technical Reports Server (NTRS)

Fink, Patrick W.; Wilton, Donald R.; Khayat, Michael A.

2007-01-01

Recently, significant progress has been made in the handling of singular and nearly-singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and handling of higher order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As is the case with the potentials, treatment of the near-hypersingular integrals has proven more challenging than treating the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links basis function, Green's function, and the integration domain (element shape), the approach fits poorly with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leaves remaining terms that are analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure efficiently handles both.

20. Experiences with an adaptive mesh refinement algorithm in numerical relativity.

Choptuik, M. W.

An implementation of the Berger/Oliger mesh refinement algorithm for a model problem in numerical relativity is described. The principles of operation of the method are reviewed and its use in conjunction with leap-frog schemes is considered. The performance of the algorithm is illustrated with results from a study of the Einstein/massless scalar field equations in spherical symmetry.

1. Wang-Landau integration --- The application of Wang-Landau sampling in numerical integration

Li, Ying Wai; Wuest, Thomas; Landau, David P.; Qing Lin, Hai

2007-03-01

Wang-Landau sampling was first introduced to simulate the density of states in energy space for various physical systems. This technique can be extended to numerical integration due to certain similarities in the nature of these two problems. It can be further applied to study quantum many-body systems. We report the feasibility of this application by discussing the correspondence between Wang-Landau integration and Wang-Landau sampling for the Ising model. Numerical results for 1D and 2D integrations are shown. In particular, the utilization of this algorithm in the periodic lattice Anderson model is discussed as an illustrative example.

2. Numerical integration of massive two-loop Mellin-Barnes integrals in Minkowskian regions

Dubovyk, I.; Gluza, J.; Riemann, T.; Usovitsch, J.

Mellin-Barnes (MB) techniques applied to integrals emerging in particle physics perturbative calculations are summarized. New versions of the AMBRE packages, which construct planar and nonplanar MB representations, are briefly discussed. The numerical package MBnumerics.m is presented for the first time; it can calculate multidimensional MB integrals in Minkowskian regions with high precision. Examples are given for massive vertex integrals which include threshold effects and several scale parameters.

3. Multivariate numerical integration via fluctuationlessness theorem: Case study

Baykara, N. A.; Gürvit, Ercan

2017-01-01

In this work we present the Fluctuationlessness theorem, recently conjectured and proven by M. Demiralp, and its application to the numerical integration of univariate functions by restructuring the Taylor expansion with an explicit remainder term. The Fluctuationlessness theorem is stated. An orthonormal basis set is then formed, and the necessary formulae for calculating the coefficients of the three-term recursion formula are constructed. For multivariate numerical integration, instead of dealing with a single formula for multiple remainder terms, a new approach, already introduced for bivariate functions, is taken into consideration. At every step of a multivariate integration one variable is considered while the others are held constant, which removes much of the complexity of the calculations. The trivariate case is examined and its generalization is explained step by step. Finally, implementations are carried out for some trivariate functions and the results are tabulated together with the execution times.

4. Singularity Preserving Numerical Methods for Boundary Integral Equations

NASA Technical Reports Server (NTRS)

Kaneko, Hideaki (Principal Investigator)

1996-01-01

In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with the Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of the degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and a singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.

5. System integration of pattern recognition, adaptive aided, upper limb prostheses

NASA Technical Reports Server (NTRS)

Lyman, J.; Freedy, A.; Solomonow, M.

1975-01-01

The requirements for successful integration of a computer aided control system for multi degree of freedom artificial arms are discussed. Specifications are established for a system which shares control between a human amputee and an automatic control subsystem. The approach integrates the following subsystems: (1) myoelectric pattern recognition, (2) adaptive computer aiding; (3) local reflex control; (4) prosthetic sensory feedback; and (5) externally energized arm with the functions of prehension, wrist rotation, elbow extension and flexion and humeral rotation.

6. Numerical implementation of the integral-transform solution to Lamb's point-load problem

Georgiadis, H. G.; Vamvatsikos, D.; Vardoulakis, I.

The present work describes a procedure for the numerical evaluation of the classical integral-transform solution of the transient elastodynamic point-load (axisymmetric) Lamb's problem. This solution involves integrals of rapidly oscillatory functions over semi-infinite intervals and inversion of one-sided (time) Laplace transforms. These features introduce difficulties for a numerical treatment and constitute a challenging problem in trying to obtain results for quantities (e.g. displacements) in the interior of the half-space. To deal with the oscillatory integrands, which in addition may take very large values (pseudo-pole behavior) at certain points, we follow the concept of Longman's method but using as accelerator in the summation procedure a modified Epsilon algorithm instead of the standard Euler's transformation. Also, an adaptive procedure using the Gauss 32-point rule is introduced to integrate in the vicinity of the pseudo-pole. The numerical Laplace-transform inversion is based on the robust Fourier-series technique of Dubner/Abate-Crump-Durbin. Extensive results are given for sub-surface displacements, whereas the limit-case results for the surface displacements compare very favorably with previous exact results.
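The series-acceleration step can be illustrated with the standard (unmodified) Wynn epsilon algorithm applied to a slowly converging alternating series; the paper uses a modified epsilon algorithm inside Longman's summation procedure:

```python
import math

def wynn_epsilon(s):
    """Wynn's epsilon algorithm: extrapolate a sequence of partial sums.
    Returns the entry of the highest even epsilon column reached (the
    odd columns are only intermediate quantities, not estimates)."""
    prev = [0.0] * (len(s) + 1)     # epsilon_{-1} column: all zeros
    cur = list(s)                   # epsilon_0 column: the partial sums
    best, col = cur[-1], 0
    while len(cur) > 1:
        nxt = [prev[i + 1] + 1.0 / (cur[i + 1] - cur[i])
               for i in range(len(cur) - 1)]
        prev, cur = cur, nxt
        col += 1
        if col % 2 == 0:            # even columns approximate the limit
            best = cur[-1]
    return best

# Alternating harmonic series: sum of (-1)^k/(k+1) = ln 2, very slow.
partials, total = [], 0.0
for k in range(10):
    total += (-1) ** k / (k + 1)
    partials.append(total)
accelerated = wynn_epsilon(partials)
```

Ten raw partial sums are still off in the second decimal place, while the extrapolated value agrees with ln 2 to several more digits, which is the kind of gain such accelerators provide when summing oscillatory integral contributions.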

7. Integrating Learning Styles into Adaptive E-Learning System

ERIC Educational Resources Information Center

Truong, Huong May

2015-01-01

This paper provides an overview and update on my PhD research project which focuses on integrating learning styles into adaptive e-learning system. The project, firstly, aims to develop a system to classify students' learning styles through their online learning behaviour. This will be followed by a study on the complex relationship between…

8. Adaptation disrupts motion integration in the primate dorsal stream

PubMed Central

Patterson, Carlyn A.; Wissig, Stephanie C.; Kohn, Adam

2014-01-01

Sensory systems adjust continuously to the environment. The effects of recent sensory experience—or adaptation—are typically assayed by recording in a relevant subcortical or cortical network. However, adaptation effects cannot be localized to a single, local network. Adjustments in one circuit or area will alter the input provided to others, with unclear consequences for computations implemented in the downstream circuit. Here we show that prolonged adaptation with drifting gratings, which alters responses in the early visual system, impedes the ability of area MT neurons to integrate motion signals in plaid stimuli. Perceptual experiments reveal a corresponding loss of plaid coherence. A simple computational model shows how the altered representation of motion signals in early cortex can derail integration in MT. Our results suggest that the effects of adaptation cascade through the visual system, derailing the downstream representation of distinct stimulus attributes. PMID:24507198

9. A wavelet-optimized, very high order adaptive grid and order numerical method

NASA Technical Reports Server (NTRS)

Jameson, Leland

1996-01-01

Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data followed by differentiation of this polynomial and finally evaluation of the polynomial at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or perhaps exponential polynomials. This paper begins with a comparison of such differencing operator construction. Next, the issue of proper grids for high order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid. That is, at each level of refinement the grid is a Chebyshev grid and this grid is refined locally based on wavelet analysis.
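The choice of a Chebyshev grid matters because high-order polynomial interpolation is only stable when the nodes cluster near the interval ends; a quick comparison on the classical Runge test function (illustrative parameters, not from the paper) shows the effect:

```python
import numpy as np

n = 32
f = lambda x: 1.0 / (1.0 + 16.0 * x**2)       # Runge-type test function
x_fine = np.linspace(-1.0, 1.0, 1001)
cheb = np.polynomial.chebyshev

# Degree-n interpolation on a Chebyshev (Gauss-Lobatto) grid ...
x_cheb = np.cos(np.pi * np.arange(n + 1) / n)
c_cheb = cheb.chebfit(x_cheb, f(x_cheb), n)
err_cheb = np.abs(cheb.chebval(x_fine, c_cheb) - f(x_fine)).max()

# ... versus the same degree on an equispaced grid (Runge phenomenon).
x_eq = np.linspace(-1.0, 1.0, n + 1)
c_eq = cheb.chebfit(x_eq, f(x_eq), n)
err_eq = np.abs(cheb.chebval(x_fine, c_eq) - f(x_fine)).max()
```

The Chebyshev-grid interpolant converges, while the equispaced one oscillates wildly near the endpoints, which is why the adaptive method described above refines within Chebyshev grids.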

10. Gauge Drift in Numerical Integrations of the Lagrange Planetary Equations

Murison, M. A.; Efroimsky, M.

2003-08-01

Efroimsky (2002) and Newman & Efroimsky (2003) recognized that the Lagrange and Delaunay planetary equations of celestial mechanics may be generalized to allow transformations analogous to the familiar gauge transformations in electrodynamics. As usually presented, the Lagrange equations, which are derived by the method of variation of parameters (invented by Euler and Lagrange for this very purpose), assume the Lagrange constraint, whereby a certain combination of parameter time derivatives is arbitrarily equated to zero. This particular constraint ensures an osculating orbit that is unique. The transformation of the description, as given by the (time-varying) osculating elements, into that given by the Cartesian coordinates and velocities is invertible. Relaxing the constraint enables one to substitute instead an arbitrary gauge function. This breaks the uniqueness and invertibility between the orbit instantaneously described by the orbital elements and the position and velocity components (i.e., many different orbits, precessing at different rates, can at a given instant share the same physical position and physical velocity through space). However, the orbit described by the (varying) orbital elements obeying a different gauge is no longer osculating. In numerical calculations that integrate the traditional Lagrange and Delaunay equations, even starting off in a certain (say, Lagrange's) gauge, some fraction of the numerical errors will, nevertheless, diffuse into violation of the chosen constraint. This results in an unintended ``gauge drift''. Geometrically, numerical errors cause the trajectory in phase space to leave the gauge-defined submanifold to which the motion was constrained, so that it is then moving on a different submanifold. The method of Lagrange multipliers can be utilized to return the motion to the original submanifold (e.g., Nacozy 1971, Murison 1989). Alternatively, the accumulated gauge drift may be compensated by a gauge transformation

11. Quantum Calisthenics: Gaussians, The Path Integral and Guided Numerical Approximations

SciTech Connect

Weinstein, Marvin; /SLAC

2009-02-12

It is apparent to anyone who thinks about it that, to a large degree, the basic concepts of Newtonian physics are quite intuitive, but quantum mechanics is not. My purpose in this talk is to introduce you to a new, much more intuitive way to understand how quantum mechanics works. I begin with an incredibly easy way to derive the time evolution of a Gaussian wave-packet for the case of free and harmonic motion without any need to know the eigenstates of the Hamiltonian. This discussion is completely analytic and I will later use it to relate the solution for the behavior of the Gaussian packet to the Feynman path-integral and stationary phase approximation. It will be clear that using the information about the evolution of the Gaussian in this way goes far beyond what the stationary phase approximation tells us. Next, I introduce the concept of the bucket brigade approach to dealing with problems that cannot be handled totally analytically. This approach combines the intuition obtained in the initial discussion, as well as the intuition obtained from the path-integral, with simple numerical tools. My goal is to show that, for any specific process, there is a simple Hilbert space interpretation of the stationary phase approximation. I will then argue that, from the point of view of numerical approximations, the trajectory obtained from my generalization of the stationary phase approximation specifies that subspace of the full Hilbert space that is needed to compute the time evolution of the particular state under the full Hamiltonian. The prescription I will give is totally non-perturbative and we will see, by the grace of Maple animations computed for the case of the anharmonic oscillator Hamiltonian, that this approach allows surprisingly accurate computations to be performed with very little work. I think of this approach to the path-integral as defining what I call a guided numerical approximation scheme. After the discussion of the anharmonic oscillator I will

12. The software package CAOS 7.0: enhanced numerical modelling of astronomical adaptive optics systems

Carbillet, Marcel; La Camera, Andrea; Folcher, Jean-Pierre; Perruchon-Monge, Ulysse; Sy, Adama

2016-07-01

The Software Package CAOS (acronym for Code for Adaptive Optics Systems) is a modular scientific package performing end-to-end numerical modelling of astronomical adaptive optics (AO) systems. It is IDL-based and developed within the eponymous CAOS Problem-Solving Environment, recently completely re-organized. In this paper we present version 7.0 of the Software Package CAOS, containing a number of enhancements and new modules, in particular for wide-field AO systems modelling.

13. Multistep integration formulas for the numerical integration of the satellite problem

NASA Technical Reports Server (NTRS)

Lundberg, J. B.; Tapley, B. D.

1981-01-01

The use of two Class 2/fixed mesh/fixed order/multistep integration packages of the PECE type for the numerical integration of the second order, nonlinear, ordinary differential equation of the satellite orbit problem is examined. These two methods are referred to as the general and the second sum formulations. The derivation of the basic equations which characterize each formulation and the role of the basic equations in the PECE algorithm are discussed. Possible starting procedures are examined which may be used to supply the initial set of values required by the fixed mesh/multistep integrators. The results of the general and second sum integrators are compared to the results of various fixed step and variable step integrators.
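The predict-evaluate-correct-evaluate (PECE) cycle referred to above can be illustrated with the simplest Adams pair. The sketch below is a generic fixed-mesh PECE integrator (second-order Adams-Bashforth predictor, trapezoidal Adams-Moulton corrector, and a crude one-step Euler starter standing in for the starting procedures the abstract discusses); it is not the Class 2 formulation of the paper, and the function names are illustrative.

```python
import numpy as np

def pece_step(f, t, y_prev, y, h):
    """One PECE step of the 2nd-order Adams-Bashforth/Adams-Moulton pair.

    Predict:  y* = y_n + h/2 * (3 f_n - f_{n-1})   (AB2)
    Evaluate: f* = f(t_{n+1}, y*)
    Correct:  y_{n+1} = y_n + h/2 * (f* + f_n)     (AM2, trapezoidal)
    """
    fn, fprev = f(t, y), f(t - h, y_prev)
    y_pred = y + h / 2.0 * (3.0 * fn - fprev)   # Predict
    f_pred = f(t + h, y_pred)                   # Evaluate
    return y + h / 2.0 * (f_pred + fn)          # Correct (final Evaluate happens next step)

def integrate(f, y0, t0, t1, n):
    """Fixed-mesh PECE integration of y' = f(t, y); a single Euler step
    supplies the extra starting value the multistep formula needs."""
    h = (t1 - t0) / n
    ys = [y0, y0 + h * f(t0, y0)]   # crude self-start
    for i in range(1, n):
        t = t0 + i * h
        ys.append(pece_step(f, t, ys[i - 1], ys[i], h))
    return ys[-1]
```

On the linear test problem y' = -y the pair shows its expected second-order global accuracy even with the low-order starter.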

14. Comparison of four stable numerical methods for Abel's integral equation

NASA Technical Reports Server (NTRS)

Murio, Diego A.; Mejia, Carlos E.

1991-01-01

The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data, on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction) are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.

15. Distributed adaptive simulation through standards-based integration of simulators and adaptive learning systems.

PubMed

Bergeron, Bryan; Cline, Andrew; Shipley, Jaime

2012-01-01

We have developed a distributed, standards-based architecture that enables simulation and simulator designers to leverage adaptive learning systems. Our approach, which incorporates an electronic competency record, an open source LMS, and open source microcontroller hardware, is a low-cost, pragmatic option for integrating simulators with traditional courseware.

16. Robust and adaptive techniques for numerical simulation of nonlinear partial differential equations of fractional order

2017-03-01

In this paper, some nonlinear space-fractional order reaction-diffusion equations (SFORDE) on a finite but large spatial domain x ∈ [0, L], x = x(x, y, z) and t ∈ [0, T] are considered. The standard reaction-diffusion system with boundary conditions is generalized by replacing the second-order spatial derivatives with Riemann-Liouville space-fractional derivatives of order α, for 0 < α < 2. A Fourier spectral method is introduced as a better alternative to existing low-order schemes for the integration of fractional-in-space reaction-diffusion problems, in conjunction with an adaptive exponential time differencing method, and is used to solve a range of one-, two- and three-component SFORDE numerically to obtain patterns in one and two dimensions, with a straightforward extension to three spatial dimensions, in sub-diffusive (0 < α < 1) and super-diffusive (1 < α < 2) scenarios. Computer simulations of SFORDE give enough evidence that pattern formation in a fractional medium at certain parameter values is practically the same as in the standard reaction-diffusion case. With application to models in biology and physics, different spatiotemporal dynamics are observed and displayed.
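The core of the spectral treatment is that the Riesz space-fractional operator is diagonal in Fourier space, with symbol |k|^α. A minimal 1D sketch of this idea follows, handling the linear fractional-diffusion part only (the reaction terms and adaptive time stepping of the paper are omitted); the function name is illustrative.

```python
import numpy as np

def fractional_diffusion_fft(u0, alpha, dt, nsteps, L=2 * np.pi):
    """Fourier-spectral integration of u_t = -(-Δ)^{α/2} u on a periodic
    interval of length L.  Because the operator is diagonal in Fourier
    space, the exponential time-differencing factor exp(-|k|^α dt) is
    exact for this linear part."""
    n = len(u0)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    symbol = np.abs(k) ** alpha                  # Fourier symbol of (-Δ)^{α/2}
    uhat = np.fft.fft(u0)
    for _ in range(nsteps):
        uhat = uhat * np.exp(-symbol * dt)       # exact linear propagation
    return np.real(np.fft.ifft(uhat))
```

A single Fourier mode sin(kx) then decays exactly as exp(-|k|^α t), which is a convenient check of the implementation.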

17. Integrated numerical prediction of atomization process of liquid hydrogen jet

Ishimoto, Jun; Ohira, Katsuhide; Okabayashi, Kazuki; Chitose, Keiko

2008-05-01

The 3-D structure of the liquid atomization behavior of a liquid hydrogen (LH2) jet flow through a pinhole nozzle is numerically investigated and visualized by a new type of integrated simulation technique. The present computational fluid dynamics (CFD) analysis focuses on the thermodynamic effect on the consecutive breakup of a cryogenic liquid column, the formation of a liquid film, and the generation of droplets in the outlet section of the pinhole nozzle. Utilizing the governing equations for a high-speed turbulent cryogenic jet flow through a pinhole nozzle based on the thermal nonequilibrium LES-VOF model in conjunction with the CSF model, an integrated parallel computation is performed to clarify the detailed atomization process of a high-speed LH2 jet flow through a pinhole nozzle and to acquire data which are difficult to confirm by experiment, such as atomization length, liquid core shape, droplet-size distribution, spray angle, droplet velocity profiles, and the thermal field surrounding the atomizing jet flow. According to the present computation, the cryogenic atomization rate and the LH2 droplets-gas two-phase flow characteristics are found to be controlled by the turbulence perturbation upstream of the pinhole nozzle, hydrodynamic instabilities at the gas-liquid interface, and shear stress between the liquid core and the periphery of the LH2 jet. Furthermore, calculation of the effect of cryogenic atomization on the jet thermal field shows that such atomization extensively enhances the thermal diffusion surrounding the LH2 jet flow.

18. Dissociating conflict adaptation from feature integration: a multiple regression approach.

PubMed

Notebaert, Wim; Verguts, Tom

2007-10-01

Congruency effects are typically smaller after incongruent than after congruent trials. One explanation is in terms of higher levels of cognitive control after detection of conflict (conflict adaptation; e.g., M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, & J. D. Cohen, 2001). An alternative explanation for these results is based on feature repetition and/or integration effects (e.g., B. Hommel, R. W. Proctor, & K.-P. Vu, 2004; U. Mayr, E. Awh, & P. Laurey, 2003). Previous attempts to dissociate feature integration from conflict adaptation focused on a particular subset of the data in which feature transitions were held constant (J. G. Kerns et al., 2004) or in which congruency transitions were held constant (C. Akcay & E. Hazeltine, in press), but this has a number of disadvantages. In this article, the authors present a multiple regression solution for this problem and discuss its possibilities and pitfalls.

19. Influence of gait loads on implant integration in rat tibiae: experimental and numerical analysis.

PubMed

Piccinini, Marco; Cugnoni, Joel; Botsis, John; Ammann, Patrick; Wiskott, Anselm

2014-10-17

Implanted rat bones play a key role in studies involving fracture healing, bone diseases or drug delivery, among other themes. In most of these studies implant integration also depends on the animal's daily activity and musculoskeletal loads, which affect the implants' mechanical environment. However, the tissue adaptation to the physiological loads is often filtered through control groups or not inspected. This work aims to investigate experimentally and numerically the effects of daily activity on the integration of implants inserted in the rat tibia, and to establish a physiological loading condition to analyse the peri-implant bone stresses during gait. Two titanium implants, single and double cortex crossing, are inserted in the rat tibia. The animals are caged under standard conditions and divided into three groups undergoing progressive integration periods. The results highlight a time-dependent increase in bone samples exhibiting significant cortical bone loss. The phenomenon is analysed through specimen-specific Finite Element models involving purpose-built musculoskeletal loads. Different boundary conditions replicating the post-surgery bone-implant interaction are adopted. The effects of the gait loads on implant integration are quantified and agree with the results of the experiments. The observed cortical bone loss can be considered as a transient state of integration due to bone disuse atrophy, initially triggered by a loss of bone-implant adhesion and subsequently by a cyclic opening of the interface.

20. Grid cell distortion and MODFLOW's integrated finite-difference numerical solution.

PubMed

Romero, Dave M; Silver, Steven E

2006-01-01

The ground water flow model MODFLOW inherently implements a nongeneralized integrated finite-difference (IFD) numerical scheme. The IFD numerical scheme allows for construction of finite-difference model grids with curvilinear (piecewise linear) rows. The resulting grid comprises model cells in the shape of trapezoids and is distorted in comparison to a traditional MODFLOW finite-difference grid. A version of MODFLOW-88 (herein referred to as MODFLOW IFD) with the code adapted to make the one-dimensional DELR and DELC arrays two dimensional, so that equivalent conductance between distorted grid cells can be calculated, is described. MODFLOW IFD is used to inspect the sensitivity of the numerical head and velocity solutions to the level of distortion in trapezoidal grid cells within a converging radial flow domain. A test problem designed for the analysis implements a grid oriented such that flow is parallel to columns with converging widths. The sensitivity analysis demonstrates MODFLOW IFD's capacity to numerically derive a head solution and resulting intercell volumetric flow when the internal calculation of equivalent conductance accounts for the distortion of the grid cells. The sensitivity of the velocity solution to grid cell distortion indicates criteria for distorted grid design. In the radial flow test problem described, the numerical head solution is not sensitive to grid cell distortion. The accuracy of the velocity solution is sensitive to cell distortion with error <1% if the angle between the nonparallel sides of trapezoidal cells is <12.5 degrees. The error of the velocity solution is related to the degree to which the spatial discretization of a curve is approximated with piecewise linear segments. Curvilinear finite-difference grid construction adds versatility to spatial discretization of the flow domain. MODFLOW-88's inherent IFD numerical scheme and the test problem results imply that more recent versions of MODFLOW 2000, with minor

1. Analysis of adaptive algorithms for an integrated communication network

NASA Technical Reports Server (NTRS)

Reed, Daniel A.; Barr, Matthew; Chong-Kwon, Kim

1985-01-01

Techniques were examined that trade communication bandwidth for decreased transmission delays. When the network is lightly used, these schemes attempt to use additional network resources to decrease communication delays. As the network utilization rises, the schemes degrade gracefully, still providing service but with minimal use of the network. Because the schemes use a combination of circuit and packet switching, they should respond to variations in the types and amounts of network traffic. Also, a combination of circuit and packet switching to support the widely varying traffic demands imposed on an integrated network was investigated. The packet switched component is best suited to bursty traffic where some delays in delivery are acceptable. The circuit switched component is reserved for traffic that must meet real time constraints. Selected packet routing algorithms that might be used in an integrated network were simulated. Integrated traffic places widely varying workload demands on a network. Adaptive algorithms were identified, ones that respond to both the transient and evolutionary changes that arise in integrated networks. A new algorithm was developed, hybrid weighted routing, that adapts to workload changes.

2. A well-balanced numerical scheme for shallow water simulation on adaptive grids

Zhang, H. J.; Zhou, J. Z.; Bi, S.; Li, Q. Q.; Fan, Y.

2014-04-01

The efficiency of solving two-dimensional shallow-water equations (SWEs) is vital for simulation of large-scale flood inundation. For flood flows over real topography, a local high-resolution method using adaptable grids is required in order to prevent loss of accuracy in the flow pattern while saving computational cost. This paper introduces an adaptive grid model that uses an adaptive criterion calculated on the basis of the water level. The grid adaptation is performed by manipulating the subdivision levels of the computational grids. As the flow features vary during shallow-wave propagation, the local grid density changes adaptively and the stored neighbor-relationship information updates correspondingly, achieving a balance between model accuracy and running efficiency. In this work, a well-balanced (WB) scheme for solving the SWEs is introduced. In the reconstruction of the Riemann states, the definition of the unique bottom elevation on grid interfaces is modified, and the numerical scheme is pre-balanced automatically. After validation against two idealized test cases, the proposed model is applied to simulate flood inundation due to a dam break of the Zhanghe Reservoir, Hubei province, China. The results show that the presented model is robust and well-balanced, has good computational efficiency and numerical stability, and thus has promising application prospects.
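The "pre-balanced" idea of redefining a unique bottom elevation at grid interfaces can be illustrated in one dimension with the classical hydrostatic reconstruction. The sketch below is not the paper's scheme: it uses a first-order Rusanov flux and reflective walls as stand-ins, with illustrative names; what it does share is the well-balanced property that a lake at rest (h + b = const, u = 0) is preserved exactly.

```python
import numpy as np

g = 9.81

def rusanov_flux(hL, uL, hR, uR):
    """Rusanov (local Lax-Friedrichs) numerical flux for the 1D SWEs."""
    fL = np.array([hL * uL, hL * uL**2 + 0.5 * g * hL**2])
    fR = np.array([hR * uR, hR * uR**2 + 0.5 * g * hR**2])
    smax = max(abs(uL) + np.sqrt(g * hL), abs(uR) + np.sqrt(g * hR))
    return 0.5 * (fL + fR) - 0.5 * smax * np.array([hR - hL, hR * uR - hL * uL])

def swe_step(h, hu, b, dx, dt):
    """One first-order step with hydrostatic reconstruction: the bottom
    elevation at each interface is redefined as max(b_i, b_{i+1}), which
    pre-balances the scheme.  Reflective ghost cells close the domain."""
    hp  = np.concatenate(([h[0]],  h,  [h[-1]]))
    hup = np.concatenate(([-hu[0]], hu, [-hu[-1]]))
    bp  = np.concatenate(([b[0]],  b,  [b[-1]]))
    hn, hun = hp.copy(), hup.copy()
    for i in range(len(hp) - 1):
        bstar = max(bp[i], bp[i + 1])                 # unique interface bottom elevation
        hL = max(0.0, hp[i]     + bp[i]     - bstar)  # reconstructed depths
        hR = max(0.0, hp[i + 1] + bp[i + 1] - bstar)
        uL = hup[i] / hp[i] if hp[i] > 0 else 0.0
        uR = hup[i + 1] / hp[i + 1] if hp[i + 1] > 0 else 0.0
        F = rusanov_flux(hL, uL, hR, uR)
        hn[i]      -= dt / dx * F[0]
        hun[i]     -= dt / dx * (F[1] + 0.5 * g * (hp[i]**2 - hL**2))
        hn[i + 1]  += dt / dx * F[0]
        hun[i + 1] += dt / dx * (F[1] + 0.5 * g * (hp[i + 1]**2 - hR**2))
    return hn[1:-1], hun[1:-1]
```

Running this over a bumpy bed with a flat free surface leaves the water motionless to machine precision, which is precisely the discrete steady state a well-balanced scheme must maintain.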

3. A numerical study of adaptive space and time discretisations for Gross-Pitaevskii equations.

PubMed

Thalhammer, Mechthild; Abhau, Jochen

2012-08-15

As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross-Pitaevskii equation arising in the description of Bose-Einstein condensates in dilute gases. The performance of the Fourier-pseudo spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross-Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0 < ε ≪ 1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both, the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values locally adaptive time discretisations facilitate to determine the time stepsizes sufficiently small in order that

4. A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations

PubMed Central

Thalhammer, Mechthild; Abhau, Jochen

2012-01-01

As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier-pseudo spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both, the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values locally adaptive time discretisations facilitate to determine the time stepsizes sufficiently small in order that the
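The time-splitting spectral approach these two records compare against adaptive finite elements alternates an exact pointwise solve of the potential/nonlinear part with an FFT-based solve of the kinetic part. A minimal 1D Strang-splitting sketch (ħ = m = 1; function and variable names are illustrative, and the adaptive stepsize control of the paper is omitted):

```python
import numpy as np

def gpe_strang_step(psi, k2, V, kappa, dt):
    """One Strang time-splitting step for the 1D Gross-Pitaevskii equation
    i ψ_t = -½ ψ_xx + V ψ + κ|ψ|² ψ.  The nonlinear/potential flow is exact
    pointwise (|ψ| is constant along it); the kinetic flow is exact in
    Fourier space (k2 holds the squared wavenumbers)."""
    psi = psi * np.exp(-0.5j * dt * (V + kappa * np.abs(psi) ** 2))  # half nonlinear step
    psi = np.fft.ifft(np.exp(-0.5j * dt * k2) * np.fft.fft(psi))     # full kinetic step
    psi = psi * np.exp(-0.5j * dt * (V + kappa * np.abs(psi) ** 2))  # half nonlinear step
    return psi
```

Each substep is unitary, so the scheme conserves the wave-function norm to rounding error, one of the structural properties that makes splitting methods attractive for this equation.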

5. On the Computation of Integral Curves in Adaptive Mesh Refinement Vector Fields

SciTech Connect

Deines, Eduard; Weber, Gunther H.; Garth, Christoph; Van Straalen, Brian; Borovikov, Sergey; Martin, Daniel F.; Joy, Kenneth I.

2011-06-27

Integral curves, such as streamlines, streaklines, pathlines, and timelines, are an essential tool in the analysis of vector field structures, offering straightforward and intuitive interpretation of visualization results. While such curves have a long-standing tradition in vector field visualization, their application to Adaptive Mesh Refinement (AMR) simulation results poses unique problems. AMR is a highly effective discretization method for a variety of physical simulation problems and has recently been applied to the study of vector fields in flow and magnetohydrodynamic applications. The cell-centered nature of AMR data and discontinuities in the vector field representation arising from AMR level boundaries complicate the application of numerical integration methods to compute integral curves. In this paper, we propose a novel approach to alleviate these problems and show its application to streamline visualization in an AMR model of the magnetic field of the solar system as well as to a simulation of two incompressible viscous vortex rings merging.
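Setting aside the AMR-specific complications, the numerical kernel behind such curves is an ordinary ODE stepper applied to dp/dt = v(p). A minimal RK4 streamline tracer over an analytically given field (the cell-centered interpolation and level-boundary handling that the paper addresses are exactly what is absent here; names are illustrative):

```python
import numpy as np

def streamline_rk4(v, p0, h, nsteps):
    """Trace an integral curve (streamline) dp/dt = v(p) with classical RK4.
    `v` is any callable returning the vector field at a point."""
    p = np.asarray(p0, dtype=float)
    pts = [p.copy()]
    for _ in range(nsteps):
        k1 = v(p)
        k2 = v(p + 0.5 * h * k1)
        k3 = v(p + 0.5 * h * k2)
        k4 = v(p + h * k3)
        p = p + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        pts.append(p.copy())
    return np.array(pts)
```

For the rigid-rotation field v = (-y, x) the exact integral curves are circles, so the traced points should stay on the unit circle to high accuracy.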

6. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation

PubMed Central

Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

2016-01-01

The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms. PMID:27999361
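The baseline on which both the adaptive Kalman and H-infinity variants build is the ordinary Kalman recursion. A scalar random-walk sketch (q and r are the process- and measurement-noise variances; the adaptive tuning, robust outlier control, and H-infinity minimax step of the paper are not included, and the function name is illustrative):

```python
import numpy as np

def kalman_1d(z, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed in white noise.
    Returns the sequence of state estimates and the final error variance."""
    x, p, xs = x0, p0, []
    for zk in z:
        p = p + q                # predict: state covariance grows by process noise
        k = p / (p + r)          # Kalman gain
        x = x + k * (zk - x)     # update with the measurement innovation
        p = (1.0 - k) * p        # posterior covariance shrinks
        xs.append(x)
    return np.array(xs), p
```

With q = 0 the state is a constant, and the filter converges to (essentially) the running mean of the measurements while its error variance falls toward zero.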

7. Shaping the Cities of Tomorrow: Integrating Local Urban Adaptation within an Environmental Framework

Georgescu, M.

2014-12-01

Contemporary methods focused on increasing urban sustainability are largely based on the reduction of greenhouse gas emissions. While these efforts are essential steps forward, continued characterization of urban sustainability solely within a biogeochemical framework, with neglect of the biophysical impact of the built environment, omits regional hydroclimatic forcing of the same order of magnitude as greenhouse gas emissions. Using a suite of continuous, multi-year and multi-member continental scale numerical simulations with the WRF model for the U.S., we examine hydroclimatic impacts for a variety of U.S. urban expansion scenarios (for the year 2100) and urban adaptation futures (cool roofs, green roofs, and a hypothetical hybrid approach integrating biophysical properties of both cool and green roofs), and compare those to experiments utilizing a contemporary urban extent. Widespread adoption of adaptation strategies exhibits regionally and seasonally dependent hydroclimatic impacts. For some regions and seasons, urban-induced warming in excess of 3°C can be completely offset by all adaptation approaches examined. For other regions, widespread adoption of some adaptation approaches leads to significant rainfall decline. Sustainable urban expansion therefore requires an integrated assessment that also incorporates biophysically induced urban impacts, and demands tradeoff assessment of various strategies aimed to ameliorate deleterious consequences of growth (e.g., urban heat island reduction).

8. Numerical integration for ab initio many-electron self energy calculations within the GW approximation

SciTech Connect

Liu, Fang; Lin, Lin; Vigil-Fowler, Derek; Lischner, Johannes; Kemper, Alexander F.; Sharifzadeh, Sahar; Jornada, Felipe H. da; Deslippe, Jack; Yang, Chao; and others

2015-04-01

We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than the standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
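Principal value integration over a subinterval can be illustrated with the classic singularity-subtraction trick, which the paper's piecewise-polynomial scheme refines: subtract f(x0)/(x - x0), integrate the now-smooth remainder numerically, and integrate the singular part analytically. A minimal sketch using the composite trapezoidal rule (the very baseline the abstract argues against for this problem; the function name is illustrative):

```python
import numpy as np

def principal_value(f, a, b, x0, n=2000):
    """Cauchy principal value of ∫_a^b f(x)/(x - x0) dx for a < x0 < b.
    The smooth part (f(x) - f(x0))/(x - x0) is integrated with the composite
    trapezoidal rule; the singular part integrates analytically to
    f(x0) * ln((b - x0)/(x0 - a))."""
    x = np.linspace(a, b, n + 1)
    d = x - x0
    g = np.empty_like(x)
    sing = np.abs(d) < 1e-12
    g[~sing] = (f(x[~sing]) - f(x0)) / d[~sing]
    # at a grid node coinciding with x0 the integrand tends to f'(x0):
    # approximate the limit with a central difference
    g[sing] = (f(x0 + 1e-7) - f(x0 - 1e-7)) / 2e-7
    h = (b - a) / n
    trap = h * (0.5 * g[0] + g[1:-1].sum() + 0.5 * g[-1])
    return trap + f(x0) * np.log((b - x0) / (x0 - a))
```

For polynomial f the principal value is known in closed form, which gives a direct check of the regularization.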

9. Carbon Dioxide Dispersion in the Combustion Integrated Rack Simulated Numerically

NASA Technical Reports Server (NTRS)

Wu, Ming-Shin; Ruff, Gary A.

2004-01-01

When discharged into an International Space Station (ISS) payload rack, a carbon dioxide (CO2) portable fire extinguisher (PFE) must extinguish a fire by decreasing the oxygen in the rack by 50 percent within 60 sec. The length of time needed for this oxygen reduction throughout the rack and the length of time that the CO2 concentration remains high enough to prevent the fire from reigniting is important when determining the effectiveness of the response and postfire procedures. Furthermore, in the absence of gravity, the local flow velocity can make the difference between a fire that spreads rapidly and one that self-extinguishes after ignition. A numerical simulation of the discharge of CO2 from PFE into the Combustion Integrated Rack (CIR) in microgravity was performed to obtain the local velocity and CO2 concentration. The complicated flow field around the PFE nozzle exits was modeled by sources of equivalent mass and momentum flux at a location downstream of the nozzle. The time for the concentration of CO2 to reach a level that would extinguish a fire anywhere in the rack was determined using the Fire Dynamics Simulator (FDS), a computational fluid dynamics code developed by the National Institute of Standards and Technology specifically to evaluate the development of a fire and smoke transport. The simulation shows that CO2, as well as any smoke and combustion gases produced by a fire, would be discharged into the ISS cabin through the resource utility panel at the bottom of the rack. These simulations will be validated by comparing the results with velocity and CO2 concentration measurements obtained during the fire suppression system verification tests conducted on the CIR in March 2003. Once these numerical simulations are validated, portions of the ISS labs and living areas will be modeled to determine the local flow conditions before, during, and after a fire event. These simulations can yield specific information about how long it takes for smoke and

10. Integrated modeling of the GMT laser tomography adaptive optics system

Piatrou, Piotr

2014-08-01

Laser Tomography Adaptive Optics (LTAO) is one of the adaptive optics systems planned for the Giant Magellan Telescope (GMT). End-to-end simulation tools that are able to cope with the complexity and computational burden of the AO systems to be installed on extremely large telescopes such as GMT prove to be an integral part of the GMT LTAO system development endeavors. SL95, the Fortran 95 Simulation Library, is one of the software tools successfully used for the LTAO system end-to-end simulations. The goal of the SL95 project is to provide a complete set of generic, richly parameterized mathematical models for key elements of segmented telescope wavefront control systems, including both active and adaptive optics, as well as models for atmospheric turbulence, extended light sources like Laser Guide Stars (LGS), light propagation engines, and closed-loop controllers. The library is implemented as a hierarchical collection of classes capable of mutual interaction, which allows one to assemble complex wavefront control system configurations with multiple interacting control channels. In this paper we demonstrate the SL95 capabilities by building an integrated end-to-end model of the GMT LTAO system with 7 control channels: LGS tomography with Adaptive Secondary and on-instrument deformable mirrors, tip-tilt and vibration control, LGS stabilization, LGS focus control, truth sensor-based dynamic noncommon path aberration rejection, pupil position control, and a SLODAR-like embedded turbulence profiler. The rich parameterization of the SL95 classes allows one to build detailed error budgets, propagating through the system multiple errors and perturbations such as turbulence-, telescope-, telescope misalignment-, segment phasing error-, and non-common path-induced aberrations, sensor noises, deformable mirror-to-sensor mis-registration, vibration, and temporal errors. We will present a short description of the SL95 architecture, as well as the sample GMT LTAO system simulation

11. Adaptive multi-stage integrators for optimal energy conservation in molecular simulations

Fernández-Pendás, Mario; Akhmatskaya, Elena; Sanz-Serna, J. M.

2016-12-01

We introduce a new Adaptive Integration Approach (AIA) to be used in a wide range of molecular simulations. Given a simulation problem and a step size, the method automatically chooses the optimal scheme out of an available family of numerical integrators. Although we focus on two-stage splitting integrators, the idea may be used with more general families. In each instance, the system-specific integrating scheme identified by our approach is optimal in the sense that it provides the best conservation of energy for harmonic forces. The AIA method has been implemented in the BCAM-modified GROMACS software package. Numerical tests in molecular dynamics and hybrid Monte Carlo simulations of constrained and unconstrained physical systems show that the method successfully realizes the fail-safe strategy. In all experiments, and for each of the criteria employed, the AIA is at least as good as, and often significantly outperforms, the standard Verlet scheme, as well as fixed-parameter, optimized two-stage integrators. In particular, for systems where harmonic forces play an important role, the sampling efficiency found in simulations using the AIA is up to 5 times better than that achieved with the other tested schemes.
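The two-stage splitting family referred to above is a one-parameter palindromic kick-drift composition; the AIA's job is to pick the free parameter for the given step size. A minimal sketch of one step of that family (lam = 0.25 recovers two half-steps of velocity Verlet; lam ≈ 0.19318 is the standard minimum-error choice for small steps; the step-size-dependent selection rule of the paper is not reproduced here):

```python
def two_stage_step(x, v, force, h, lam):
    """One step of the two-stage palindromic splitting integrator family:
    kick(lam*h) - drift(h/2) - kick((1-2*lam)*h) - drift(h/2) - kick(lam*h).
    The free parameter `lam` selects a member of the family."""
    v = v + lam * h * force(x)
    x = x + 0.5 * h * v
    v = v + (1.0 - 2.0 * lam) * h * force(x)
    x = x + 0.5 * h * v
    v = v + lam * h * force(x)
    return x, v
```

Like Verlet, every member of the family is symplectic and time-reversible, so for a harmonic oscillator the energy error stays bounded over long runs rather than drifting.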

12. Integrated Decision Support for Global Environmental Change Adaptation

Kumar, S.; Cantrell, S.; Higgins, G. J.; Marshall, J.; VanWijngaarden, F.

2011-12-01

Environmental changes happening now have caused concern in many parts of the world; particularly vulnerable are the countries and communities with limited resources and with natural environments that are more susceptible to climate change impacts. Global leaders are concerned about observed phenomena and events such as Amazon deforestation, shifting monsoon patterns affecting agriculture on the mountain slopes of Peru, floods in Pakistan, water shortages in the Middle East, droughts impacting water supplies and wildlife migration in Africa, and sea level rise impacts on low-lying coastal communities in Bangladesh. These environmental changes are likely to be exacerbated as temperatures rise, weather and climate patterns change, and sea level rise continues. Large populations and billions of dollars of infrastructure could be affected. At Northrop Grumman, we have developed an integrated decision support framework for providing necessary information to stakeholders and planners to adapt to the impacts of climate variability and change at the regional and local levels. This integrated approach takes into account assimilation and exploitation of large and disparate weather and climate data sets, regional downscaling (dynamic and statistical), uncertainty quantification and reduction, and a synthesis of scientific data with demographic and economic data to generate actionable information for stakeholders and decision makers. Utilizing a flexible service-oriented architecture and state-of-the-art visualization techniques, this information can be delivered via tailored GIS portals to meet a diverse set of user needs and expectations. This integrated approach can be applied to regional and local risk assessments, predictions and decadal projections, and proactive adaptation planning for vulnerable communities. In this paper we will describe this comprehensive decision support approach with selected applications and case studies to illustrate how this

13. A numerical study of 2D detonation waves with adaptive finite volume methods on unstructured grids

Hu, Guanghui

2017-02-01

In this paper, a framework of adaptive finite volume solutions for the reactive Euler equations on unstructured grids is proposed. The main ingredients of the algorithm include a second-order total variation diminishing Runge-Kutta method for temporal discretization, and a finite volume method with piecewise linear reconstruction of the conservative variables for spatial discretization, in which the least-squares method is employed for the reconstruction and a weighted essentially non-oscillatory (WENO) strategy is used to restrain potential numerical oscillations. To address the high demand on computational resources due to the stiffness of the system caused by the reaction term and the shock structure in the solutions, an h-adaptive method is introduced. OpenMP parallelization of the algorithm is also adopted to further improve the efficiency of the implementation. Several one- and two-dimensional benchmark tests on the ZND model are studied in detail, and numerical results successfully show the effectiveness of the proposed method.
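The ingredients named in this abstract (piecewise-linear reconstruction, an oscillation-restraining limiter, and a two-stage TVD Runge-Kutta method) can be sketched on scalar linear advection rather than the reactive Euler system. The minmod limiter below stands in for the WENO strategy, and the names are illustrative; none of this is the paper's solver.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smallest-magnitude one-sided slope, or zero
    at sign changes, which suppresses spurious oscillations."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_step(u, c, dx, dt):
    """One step of a second-order MUSCL finite-volume scheme for
    u_t + c u_x = 0 (c > 0, periodic grid): limited piecewise-linear
    reconstruction, upwind face values, and the two-stage TVD (SSP)
    Runge-Kutta method in time."""
    def rhs(u):
        s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
        flux = c * (u + 0.5 * s)                            # upwind (left-cell) face value
        return -(flux - np.roll(flux, 1)) / dx
    u1 = u + dt * rhs(u)                                    # TVD-RK2 stage 1 (Euler)
    return 0.5 * (u + u1 + dt * rhs(u1))                    # stage 2 (Heun average)
```

Advecting a sine wave once around a periodic domain both checks the accuracy and exercises the non-oscillatory property: the solution never overshoots the initial extrema.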

14. Optimizing aircraft performance with adaptive, integrated flight/propulsion control

NASA Technical Reports Server (NTRS)

Smith, R. H.; Chisholm, J. D.; Stewart, J. F.

1991-01-01

The Performance-Seeking Control (PSC) integrated flight/propulsion adaptive control algorithm presented was developed in order to optimize total aircraft performance during steady-state engine operation. The PSC multimode algorithm minimizes fuel consumption at cruise conditions, while maximizing excess thrust during aircraft accelerations, climbs, and dashes, and simultaneously extending engine service life through reduction of fan-driving turbine inlet temperature upon engagement of the extended-life mode. The engine models incorporated by the PSC are continually upgraded, using a Kalman filter to detect anomalous operations. The PSC algorithm will be flight-demonstrated by an F-15 at NASA-Dryden.

15. Replicated evolution of integrated plastic responses during early adaptive divergence.

PubMed

Parsons, Kevin J; Robinson, Beren W

2006-04-01

Colonization of a novel environment is expected to result in adaptive divergence from the ancestral population when selection favors a new phenotypic optimum. Local adaptation in the new environment occurs through the accumulation and integration of character states that positively affect fitness. The role played by plastic traits in adaptation to a novel environment has generally been ignored, except for variable environments. We propose that if conditions in a relatively stable but novel environment induce phenotypically plastic responses in many traits, and if genetic variation exists in the form of those responses, then selection may initially favor the accumulation and integration of functionally useful plastic responses. Early divergence between ancestral and colonist forms will then occur with respect to their plastic responses across the gradient bounded by ancestral and novel environmental conditions. To test this, we compared the magnitude, integration, and pattern of plastic character responses in external body form induced by shallow versus open water conditions between two sunfish ecomorphs that coexist in four postglacial lakes. The novel sunfish ecomorph is present in the deeper open water habitat, whereas the ancestral ecomorph inhabits the shallow waters along the lake margin. Plastic responses by open water ecomorphs were more correlated than those of their local shallow water ecomorph in two of the populations, whereas equal levels of correlated plastic character responses occurred between ecomorphs in the other two populations. Small but persistent differences occurred between ecomorph pairs in the pattern of their character responses, suggesting a recent divergence. Open water ecomorphs shared some similarities in the covariance among plastic responses to rearing environment. Replication in the form of correlated plastic responses among populations of open water ecomorphs suggests that plastic character states may evolve under selection

16. Applying integrals of motion to the numerical solution of differential equations

NASA Technical Reports Server (NTRS)

Jezewski, D. J.

1980-01-01

A method is developed for using the integrals of systems of nonlinear, ordinary, differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.
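The general idea of using a known integral of motion to control numerical error can be illustrated with a simple stand-in: after each forward-Euler step of a harmonic oscillator, the state is projected back onto the exact energy level. This is only a sketch of the principle, not the report's method, and the rescaling projection used here is an assumption:

```python
import math

def euler_step(state, dt):
    # harmonic oscillator x'' = -x, with first integral E = (x^2 + v^2)/2
    x, v = state
    return (x + dt * v, v - dt * x)

def project_to_energy(state, E0):
    """Rescale the state so the energy integral is exactly preserved.
    A simple stand-in for error control via integrals of motion: the
    known invariant corrects the numerical drift after each step."""
    x, v = state
    E = 0.5 * (x * x + v * v)
    s = math.sqrt(E0 / E)
    return (x * s, v * s)

E0 = 0.5                    # energy of the initial state (1, 0)
state = (1.0, 0.0)
for _ in range(1000):
    state = project_to_energy(euler_step(state, 0.01), E0)
# forward Euler alone inflates the energy; the projection holds it at E0
```

Without the projection, forward Euler multiplies the energy by (1 + dt^2) every step; with it, the invariant is restored to rounding accuracy.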

17. Applying integrals of motion to the numerical solution of differential equations

NASA Technical Reports Server (NTRS)

Jezewski, D. J.

1979-01-01

A method is developed for using the integrals of systems of nonlinear, ordinary differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.

18. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

NASA Technical Reports Server (NTRS)

Sjoegreen, B.; Yee, H. C.

2001-01-01

The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly dependent on the physical problem. To minimize the tuning of parameters and the physical problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions, and they can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method, to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these
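A Harten-style multiresolution smoothness sensor of the general kind described can be sketched as follows. The linear prediction stencil, tolerance, and test fields below are illustrative assumptions, not the paper's wavelet bases:

```python
def detail_coefficients(u):
    """Multiresolution details on a 1D array: the difference between each
    odd-indexed sample and its prediction by linear interpolation from the
    even-indexed (coarse) samples. Large details signal non-smooth
    behaviour (shocks/shears); small details signal smoothness."""
    return [u[2 * i + 1] - 0.5 * (u[2 * i] + u[2 * i + 2])
            for i in range((len(u) - 1) // 2)]

def sensor_flags(u, tol=1e-3):
    # flag coarse cells whose detail coefficient exceeds the tolerance;
    # dissipation would be applied only in flagged regions
    return [abs(d) > tol for d in detail_coefficients(u)]

# for a smooth field the details are O(h^2); a step discontinuity
# produces an O(1) detail that lights the sensor
N = 64
h = 1.0 / N
smooth = [(i * h) ** 2 for i in range(N + 1)]
step = [0.0 if i < N // 2 else 1.0 for i in range(N + 1)]
```

For the quadratic field the details are exactly -h^2 ≈ 2.4e-4, below the tolerance, so no dissipation would be added anywhere; only the cell straddling the jump in `step` is flagged.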

19. Integrating Numerical Computation into the Modeling Instruction Curriculum

ERIC Educational Resources Information Center

Caballero, Marcos D.; Burk, John B.; Aiken, John M.; Thoms, Brian D.; Douglas, Scott S.; Scanlon, Erin M.; Schatz, Michael F.

2014-01-01

Numerical computation (the use of a computer to solve, simulate, or visualize a physical problem) has fundamentally changed the way scientific research is done. Systems that are too difficult to solve in closed form are probed using computation. Experiments that are impossible to perform in the laboratory are studied numerically. Consequently, in…

20. Numerical Integration with GeoGebra in High School

ERIC Educational Resources Information Center

Herceg, Dorde; Herceg, Dragoslav

2010-01-01

The concept of definite integral is almost always introduced as the Riemann integral, which is defined in terms of the Riemann sum, and its geometric interpretation. This definition is hard to understand for high school students. With the aid of mathematical software for visualisation and computation of approximate integrals, the notion of…

1. Adaptive numerical simulation of pulsating planar flames for large Lewis and Zeldovich ranges

Roussel, Olivier; Schneider, Kai

2006-06-01

We study numerically the behaviour of pulsating planar flames in the thermo-diffusive approximation. The numerical scheme is based on a finite volume discretization with an adaptive multi-resolution technique for automatic grid adaptation. This allows an accurate and efficient computation of pulsating flames even for very large activation energies. Depending on the Lewis number and the Zeldovich number, we observe different behaviours, like stable or pulsating flames, the latter being either damped, periodic, or aperiodic. A bifurcation diagram in the Lewis-Zeldovich plane is computed and our results are compared with previous computations [Rogg B. The effect of Lewis number greater than unity on an unsteady propagating flame with one-step chemistry. In: Peters N, Warnatz J, editors, Numerical methods in laminar flame propagation, Notes on numerical fluid mechanics, vol. 6. Vieweg; 1982. p. 38-48.] and theoretical predictions [Joulin G, Clavin P. Linear stability analysis of nonadiabatic flames: diffusional-thermal model. Combust Flame 1979;35:139-53]. For Lewis numbers larger than 6 we find that the stability limit is again increasing towards larger Zeldovich numbers and not monotonically decreasing as predicted by the asymptotic theory. A study of the flame velocities for different Zeldovich numbers shows that the amplitude of the pulsations strongly varies with the Lewis number. A Fourier analysis yields information on their frequency.

2. Physiology driven adaptivity for the numerical solution of the bidomain equations.

PubMed

Whiteley, Jonathan P

2007-09-01

Previous work [Whiteley, J. P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006] derived a stable, semi-implicit numerical scheme for solving the bidomain equations. This scheme allows the timestep used when solving the bidomain equations numerically to be chosen by accuracy considerations rather than stability considerations. In this study we modify this scheme to allow an adaptive numerical solution in both time and space. The spatial mesh size is determined by the gradient of the transmembrane and extracellular potentials, while the timestep is determined by the values of: (i) the fast sodium current; and (ii) the calcium release current from the junctional sarcoplasmic reticulum to the myoplasm. For the two-dimensional simulations presented here, combining the numerical algorithm in the paper cited above with the adaptive algorithm presented here leads to an increase in computational efficiency by a factor of around 250 over previous work, together with significantly less computational memory being required. The speedup for three-dimensional simulations is likely to be more impressive.
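Magnitude-driven timestep selection of the kind described can be sketched in a few lines; the rate scale, bounds, and function name here are hypothetical, not the paper's criteria:

```python
def adaptive_dt(rate, dt_min=1e-3, dt_max=1.0, scale=0.1):
    """Choose the timestep from the magnitude of the fastest-varying
    current (in the paper: the fast sodium and SR calcium-release
    currents). Large rates force small steps; quiescent phases allow
    large ones. All parameter values here are illustrative."""
    dt = scale / (abs(rate) + 1e-12)   # small offset avoids division by zero
    return max(dt_min, min(dt_max, dt))

# during the action-potential upstroke (large sodium current) the step
# shrinks to dt_min; at rest (near-zero currents) it grows to dt_max
dt_upstroke = adaptive_dt(1000.0)
dt_rest = adaptive_dt(0.0)
```

The clipping to [dt_min, dt_max] keeps the solver from stalling on spikes or overstepping slow diastolic dynamics.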

3. Immune tolerance induction by integrating innate and adaptive immune regulators

PubMed Central

Suzuki, Jun; Ricordi, Camillo; Chen, Zhibin

2009-01-01

A diversity of immune tolerance mechanisms have evolved to protect normal tissues from immune damage. Immune regulatory cells are critical contributors to peripheral tolerance. These regulatory cells, exemplified by the CD4+Foxp3+ regulatory T (Treg) cells and a recently identified population named myeloid-derived suppressor cells (MDSCs), regulate immune responses and limit immune-mediated pathology. In a chronic inflammatory setting, such as allograft-directed immunity, there may be a dynamic “crosstalk” between the innate and adaptive immunomodulatory mechanisms for an integrated control of immune damage. CTLA4-B7-based interaction between the two branches may function as a molecular “bridge” to facilitate such “crosstalk”. Understanding the interplays among Treg cells, innate suppressors and pathogenic effector T (Teff) cells will be critical in the future to assist in the development of therapeutic strategies to enhance and synergize physiological immunosuppressive elements in the innate and adaptive immune system. Successful development of localized strategies of regulatory cell therapies could circumvent the requirement for very high numbers of cells and decrease the risks associated with systemic immunosuppression. To realize the potential of innate and adaptive immune regulators for the still-elusive goal of immune tolerance induction, adoptive cell therapies may also need to be coupled with agents enhancing endogenous tolerance mechanisms. PMID:19919733

PubMed Central

Konefal, Sarah; Elliot, Mick; Crespi, Bernard

2013-01-01

Adult neurogenesis in mammals is predominantly restricted to two brain regions, the dentate gyrus (DG) of the hippocampus and the olfactory bulb (OB), suggesting that these two brain regions uniquely share functions that mediate its adaptive significance. Benefits of adult neurogenesis across these two regions appear to converge on increased neuronal and structural plasticity that subserves coding of novel, complex, and fine-grained information, usually with contextual components that include spatial positioning. By contrast, costs of adult neurogenesis appear to center on potential for dysregulation resulting in higher risk of brain cancer or psychological dysfunctions, but such costs have yet to be quantified directly. The three main hypotheses for the proximate functions and adaptive significance of adult neurogenesis, pattern separation, memory consolidation, and olfactory spatial memory, are not mutually exclusive and can be reconciled into a simple general model amenable to targeted experimental and comparative tests. Comparative analysis of brain region sizes across two major social-ecological groups of primates, gregarious (mainly diurnal haplorhines, visually-oriented, and in large social groups) and solitary (mainly nocturnal, territorial, and highly reliant on olfaction, as in most rodents), suggests that solitary species, but not gregarious species, show positive associations of population densities and home range sizes with sizes of both the hippocampus and OB, implicating their functions in social-territorial systems mediated by olfactory cues. Integrated analyses of the adaptive significance of adult neurogenesis will benefit from experimental studies motivated and structured by ecologically and socially relevant selective contexts. PMID:23882188

5. Integrated Framework for an Urban Climate Adaptation Tool

Omitaomu, O.; Parish, E. S.; Nugent, P.; Mei, R.; Sylvester, L.; Ernst, K.; Absar, M.

2015-12-01

Cities have an opportunity to become more resilient to future climate change through investments made in urban infrastructure today. However, most cities lack access to the credible high-resolution climate change projection information needed to assess and address potential vulnerabilities from future climate variability. Therefore, we present an integrated framework for developing an urban climate adaptation tool (Urban-CAT). Urban-CAT consists of four modules. Firstly, it provides climate projections at different spatial resolutions for quantifying the urban landscape. Secondly, the projected data are combined with socio-economic data, using leading and lagging indicators, to assess landscape vulnerability to climate extremes (e.g., urban flooding). Thirdly, a neighborhood-scale modeling approach is presented for identifying candidate areas for adaptation strategies (e.g., green infrastructure as an adaptation strategy for urban flooding). Finally, all of these capabilities are made available as a web-based tool to support decision-making and communication at the neighborhood and city levels. In this paper, we present some of the methods that drive each of the modules and demonstrate some of the capabilities available to date, using the City of Knoxville, Tennessee, as a case study.

6. Adaptive Numerical Dissipation Control in High Order Schemes for Multi-D Non-Ideal MHD

NASA Technical Reports Server (NTRS)

Yee, H. C.; Sjoegreen, B.

2005-01-01

The required type and amount of numerical dissipation/filter to accurately resolve all relevant multiscales of complex MHD unsteady high-speed shock/shear/turbulence/combustion problems are not only physical problem dependent, but also vary from one flow region to another. In addition, proper and efficient control of the divergence of the magnetic field (Div(B)) numerical error for high order shock-capturing methods poses extra requirements for the considered type of CPU intensive computations. The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free from numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multiresolution wavelets (WAV) (for the above types of flow feature). These filters also provide a natural and efficient way for the minimization of Div(B) numerical error.

7. Damping of spurious numerical reflections off of coarse-fine adaptive mesh refinement grid boundaries

Chilton, Sven; Colella, Phillip

2010-11-01

Adaptive mesh refinement (AMR) is an efficient technique for solving systems of partial differential equations numerically. The underlying algorithm determines where and when a base spatial and temporal grid must be resolved further in order to achieve the desired precision and accuracy in the numerical solution. However, propagating wave solutions prove problematic for AMR. In systems with low degrees of dissipation (e.g. the Maxwell-Vlasov system) a wave traveling from a finely resolved region into a coarsely resolved region encounters a numerical impedance mismatch, resulting in spurious reflections off of the coarse-fine grid boundary. These reflected waves then become trapped inside the fine region. Here, we present a scheme for damping these spurious reflections. We demonstrate its application to the scalar wave equation and an implementation for Maxwell's Equations. We also discuss a possible extension to the Maxwell-Vlasov system.

8. Numerical Relations and Skill Level Constrain Co-Adaptive Behaviors of Agents in Sports Teams

PubMed Central

Silva, Pedro; Travassos, Bruno; Vilar, Luís; Aguiar, Paulo; Davids, Keith; Araújo, Duarte; Garganta, Júlio

2014-01-01

Similar to other complex systems in nature (e.g., a hunting pack, flocks of birds), sports teams have been modeled as social neurobiological systems in which interpersonal coordination tendencies of agents underpin team swarming behaviors. Swarming is seen as the result of agent co-adaptation to ecological constraints of performance environments by collectively perceiving specific possibilities for action (affordances for self and shared affordances). A major principle of invasion team sports assumed to promote effective performance is to outnumber the opposition (creation of numerical overloads) during different performance phases (attack and defense) in spatial regions adjacent to the ball. Such performance principles are assimilated by system agents through manipulation of numerical relations between teams during training in order to create artificially asymmetrical performance contexts to simulate overloaded and underloaded situations. Here we evaluated effects of different numerical relations differentiated by agent skill level, examining emergent inter-individual, intra- and inter-team coordination. Groups of association football players (national – NLP and regional-level – RLP) participated in small-sided and conditioned games in which numerical relations between system agents were manipulated (5v5, 5v4 and 5v3). Typical grouping tendencies in sports teams (major ranges, stretch indices, distances of team centers to goals and distances between the teams' opposing line-forces in specific team sectors) were recorded by plotting positional coordinates of individual agents through continuous GPS tracking. Results showed that creation of numerical asymmetries during training constrained agents' individual dominant regions, the underloaded teams' compactness and each team's relative position on-field, as well as distances between specific team sectors. We also observed how skill level impacted individual and team coordination tendencies. Data revealed

10. Experimental and numerical in-plane displacement fields for determining the J-integral on a PMMA cracked specimen

Hedan, S.; Valle, V.; Cottron, M.

2010-06-01

Contrary to the J-integral values calculated from the 2D numerical model, the J-integrals [1] calculated in the 3D numerical and 3D experimental cases are not very close to the J-integral used in the literature. We note a structural problem that allows three-dimensional effects surrounding the crack tip to be seen. The aim of this paper is to determine the zone where the J-integral formulation of the literature is sufficient to estimate the energy release rate (G) for the 3D cracked structure. For that, a numerical model based on the finite element method and an experimental setup are used. A grid method is adapted to experimentally determine the in-plane displacement fields around a crack tip in a Single-Edge-Notch (SEN) tensile polymer (PMMA) specimen. This indirect method, composed of experimental in-plane displacement fields and of two theoretical formulations, allows the experimental J-integral on the free surface to be determined and the results obtained by the 3D numerical simulations to be confirmed.

11. High integrity adaptive SMA components for gas turbine applications

Webster, John

2006-03-01

The use of Shape Memory Alloys (SMAs) is growing rapidly. They have been under serious development for aerospace applications for over 15 years, but are still restricted to niche areas and small-scale applications. Very few applications have found their way into service. Whilst they have been predominantly aimed at airframe applications, they also offer major advantages for adaptive gas turbine components. The harsh environment within a gas turbine, with its high loads, temperatures and vibration excitation, provides considerable challenges which must be met whilst still delivering high-integrity, lightweight, aerodynamic and efficient structures. A novel method has been developed which will deliver high-integrity, stiff mechanical components which can provide massive shape change capability without the need for conventional moving parts. The lead application is a shape-changing engine nozzle that provides noise reduction at take-off but withdraws at cruise to remove any performance penalty. The technology also promises to provide significant advantages for applications in a gas turbine such as shape-change aerofoils, heat exchanger controls, and intake shapes. The same mechanism should be directly applicable to other areas such as airframes, automotive and civil structures, where similar high-integrity requirements exist.

12. Robust numerical method for integration of point-vortex trajectories in two dimensions.

PubMed

Smith, Spencer A; Boghosian, Bruce M

2011-05-01

The venerable two-dimensional (2D) point-vortex model plays an important role as a simplified version of many disparate physical systems, including superfluids, Bose-Einstein condensates, certain plasma configurations, and inviscid turbulence. This system is also a veritable mathematical playground, touching upon many different disciplines from topology to dynamic systems theory. Point-vortex dynamics are described by a relatively simple system of nonlinear ordinary differential equations which can easily be integrated numerically using an appropriate adaptive time stepping method. As the separation between a pair of vortices relative to all other intervortex length scales decreases, however, the computational time required diverges. Accuracy is usually the most discouraging casualty when trying to account for such vortex motion, though the varying energy of this ostensibly Hamiltonian system is a potentially more serious problem. We solve these problems by a series of coordinate transformations: We first transform to action-angle coordinates, which, to lowest order, treat the close pair as a single vortex amongst all others with an internal degree of freedom. We next, and most importantly, apply Lie transform perturbation theory to remove the higher-order correction terms in succession. The overall transformation drastically increases the numerical efficiency and ensures that the total energy remains constant to high accuracy.
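The point-vortex equations themselves are straightforward to integrate while vortices remain well separated. The sketch below uses plain fixed-step RK4 (not the authors' adaptive stepping or Lie-transform regularization) on a co-rotating pair, whose separation is an exact invariant and so serves as an accuracy check:

```python
import math

def vortex_rhs(z, gammas):
    """Velocities of 2D point vortices: vortex j induces at vortex i a
    velocity of magnitude Gamma_j / (2*pi*r) perpendicular to the
    separation vector between them."""
    out = []
    for i in range(len(gammas)):
        xi, yi = z[i]
        u = v = 0.0
        for j in range(len(gammas)):
            if i == j:
                continue
            dx, dy = xi - z[j][0], yi - z[j][1]
            r2 = dx * dx + dy * dy
            u += -gammas[j] * dy / (2 * math.pi * r2)
            v += gammas[j] * dx / (2 * math.pi * r2)
        out.append((u, v))
    return out

def rk4_step(z, gammas, dt):
    # classical fixed-step RK4; an adaptive scheme would shrink dt
    # as a vortex pair approaches
    def add(a, b, c):
        return [(p[0] + c * q[0], p[1] + c * q[1]) for p, q in zip(a, b)]
    k1 = vortex_rhs(z, gammas)
    k2 = vortex_rhs(add(z, k1, dt / 2), gammas)
    k3 = vortex_rhs(add(z, k2, dt / 2), gammas)
    k4 = vortex_rhs(add(z, k3, dt), gammas)
    return [(p[0] + dt / 6 * (a[0] + 2 * b[0] + 2 * c[0] + d[0]),
             p[1] + dt / 6 * (a[1] + 2 * b[1] + 2 * c[1] + d[1]))
            for p, a, b, c, d in zip(z, k1, k2, k3, k4)]

# co-rotating pair of equal vortices: the separation is an invariant
z = [(-0.5, 0.0), (0.5, 0.0)]
gammas = [1.0, 1.0]
for _ in range(200):
    z = rk4_step(z, gammas, 0.01)
sep = math.hypot(z[0][0] - z[1][0], z[0][1] - z[1][1])
```

The difficulty the paper addresses begins exactly where this sketch ends: as `sep` shrinks relative to other intervortex distances, fixed (or even adaptive) stepping becomes prohibitively expensive.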

14. Cooperative drought adaptation: Integrating infrastructure development, conservation, and water transfers into adaptive policy pathways

Zeff, Harrison B.; Herman, Jonathan D.; Reed, Patrick M.; Characklis, Gregory W.

2016-09-01

A considerable fraction of urban water supply capacity serves primarily as a hedge against drought. Water utilities can reduce their dependence on firm capacity and forestall the development of new supplies using short-term drought management actions, such as conservation and transfers. Nevertheless, new supplies will often be needed, especially as demands rise due to population growth and economic development. Planning decisions regarding when and how to integrate new supply projects are fundamentally shaped by the way in which short-term adaptive drought management strategies are employed. To date, the challenges posed by long-term infrastructure sequencing and adaptive short-term drought management are treated independently, neglecting important feedbacks between planning and management actions. This work contributes a risk-based framework that uses continuously updating risk-of-failure (ROF) triggers to capture the feedbacks between short-term drought management actions (e.g., conservation and water transfers) and the selection and sequencing of a set of regional supply infrastructure options over the long term. Probabilistic regional water supply pathways are discovered for four water utilities in the "Research Triangle" region of North Carolina. Furthermore, this study distinguishes the status-quo planning path of independent action (encompassing utility-specific conservation and new supply infrastructure only) from two cooperative formulations: "weak" cooperation, which combines utility-specific conservation and infrastructure development with regional transfers, and "strong" cooperation, which also includes jointly developed regional infrastructure to support transfers. Results suggest that strong cooperation aids utilities in meeting their individual objectives at substantially lower costs and with less overall development. These benefits demonstrate how an adaptive, rule-based decision framework can coordinate integrated solutions that would not be
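A risk-of-failure trigger of the general kind described can be sketched with a toy Monte Carlo storage model; all numbers here (inflow distribution, demand, 20% failure level, 2% trigger threshold) are made-up illustrations, not the study's calibration:

```python
import random

def risk_of_failure(storage_frac, inflow_samples, demand,
                    weeks=52, failure_level=0.2):
    """Monte Carlo risk-of-failure (ROF): the fraction of simulated
    one-year inflow traces in which reservoir storage drops below 20%
    of capacity. A hypothetical stand-in for the utility-specific ROF
    triggers used in the paper."""
    failures = 0
    for inflows in inflow_samples:
        s = storage_frac
        for w in range(weeks):
            s = min(1.0, s + inflows[w] - demand)  # storage capped at full
            if s < failure_level:
                failures += 1
                break
    return failures / len(inflow_samples)

random.seed(0)
samples = [[random.uniform(0.0, 0.04) for _ in range(52)] for _ in range(500)]
rof = risk_of_failure(0.5, samples, demand=0.03)
# a short-term action (e.g., requesting a transfer) fires above 2% ROF
trigger_transfer = rof > 0.02
```

Because the continuously updated ROF reflects both current storage and the actions already taken, the same statistic can also sequence long-term infrastructure: a persistently high ROF despite transfers signals that new capacity is due.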

15. Adaptivity demonstration of inflatable rigidized integrated structures (IRIS)

Natori, M. C.; Higuchi, Ken; Sekine, Koji; Okazaki, Kakuma

1995-10-01

An inflatable rigidized integrated structure (IRIS), which is composed of membrane elements and cable networks, and whose structural accuracy is determined mainly by the cable networks, offers various forms of design adaptivity, since it is a high-performance deployable structure for future space applications. In order to retain some stiffness after deployment, the membrane materials are assumed to be rigidized in space, and sometimes the cable network is also rigidized. The concept can cover various structural elements and structure systems. The accuracy analysis of a reflector surface constrained by inside hard points and the manufacturing of a simple reflector model are introduced. Test results on rigidized cable columns, showing that many variations of IRIS are feasible, are also reported.

16. Cooperative Drought Adaptation: Integrating Infrastructure Development, Conservation, and Water Transfers into Adaptive Policy Pathways

Zeff, H. B.; Characklis, G. W.; Reed, P. M.; Herman, J. D.

2015-12-01

Water supply policies that integrate portfolios of short-term management decisions with long-term infrastructure development enable utilities to adapt to a range of future scenarios. An effective mix of short-term management actions can augment existing infrastructure, potentially forestalling new development. Likewise, coordinated expansion of infrastructure such as regional interconnections and shared treatment capacity can increase the effectiveness of some management actions like water transfers. Highly adaptable decision pathways that mix long-term infrastructure options and short-term management actions require decision triggers capable of incorporating the impact of these time-evolving decisions on growing water supply needs. Here, we adapt risk-based triggers to sequence a set of potential infrastructure options in combination with utility-specific conservation actions and inter-utility water transfers. Individual infrastructure pathways can be augmented with conservation or water transfers to reduce the cost of meeting utility objectives, but they can also include cooperatively developed, shared infrastructure that expands regional capacity to transfer water. This analysis explores the role of cooperation among four water utilities in the 'Research Triangle' region of North Carolina by formulating three distinct categories of adaptive policy pathways: independent action (utility-specific conservation and supply infrastructure only), weak cooperation (utility-specific conservation and infrastructure development with regional transfers), and strong cooperation (utility-specific conservation and jointly developed regional infrastructure that supports transfers). Results suggest that strong cooperation aids the utilities in meeting their individual objectives at substantially lower costs and with fewer irreversible infrastructure options.

17. Adaptive weld control for high-integrity welding applications

Adaptive, closed-loop weld control is necessary to maintain high-integrity, zero-defect welds. Conventional weld control techniques using weld parameter feedback control loops are sufficient to maintain set points, but fall short when confronted with unexpected variations in part/tooling temperature and mechanical structure, weldment material, arc skew angle, or calibration in weld parameter feedback measurement. Modern technology allows closed-loop control utilizing input from real-time weld monitoring sensors and inspection devices. Weld puddle parameters, bead profile parameters, and weld seam position are fed back into the weld control loop, which adapts to the weld condition variations and drives them back to a desired state, thereby preventing weld defects or perturbations. Parameters such as arc position relative to the weld seam, puddle symmetry, arc length, weld width, and bead shape can be extracted from sensor imagery and used in closed-loop active weld control. All weld bead and puddle measurements are available for real-time display and statistical process control analysis, after which the data is archived to permanent storage for later retrieval and analysis.

18. Adaptive multi-sensor integration for mine detection

SciTech Connect

Baker, J.E.

1997-05-01

State-of-the-art in multi-sensor integration (MSI) application involves extensive research and development time to understand and characterize the application domain; to determine and define the appropriate sensor suite; to analyze, characterize, and calibrate the individual sensor systems; to recognize and accommodate the various sensor interactions; and to develop and optimize robust merging code. Much of this process can benefit from adaptive learning, i.e., an output-based system can take raw sensor data and desired merged results as input and adaptively develop/determine an effective method of interpretation and merging. This approach significantly reduces the time required to apply MSI to a given application, increases the quality of the final result, and provides a quantitative measure for comparing competing MSI techniques and sensor suites. The ability to automatically develop and optimize MSI techniques for new sensor suites and operating environments makes this approach well suited to the detection of mines and mine-like targets. Perhaps more than any other, this application domain is characterized by diverse, innovative, and dynamic sensor suites, whose nature and interactions are not yet well established. This paper presents such an outcome-based multi-image analysis system. An empirical evaluation of its performance and of its robustness across applications, sensors, and domains is presented.

19. Adaptive broadening to improve spectral resolution in the numerical renormalization group

Lee, Seung-Sup B.; Weichselbaum, Andreas

2016-12-01

We propose an adaptive scheme of broadening the discrete spectral data from numerical renormalization group (NRG) calculations to improve the resolution of dynamical properties at finite energies. While the conventional scheme overbroadens narrow features at large frequency by broadening discrete weights with constant width in log-frequency, our scheme broadens each discrete contribution individually based on its sensitivity to a z-shift in the logarithmic discretization intervals. We demonstrate that the adaptive broadening better resolves various features in noninteracting and interacting models at comparable computational cost. The resolution enhancement is more significant for coarser discretization as typically required in multiband calculations. At low frequency below the energy scale of temperature, the discrete NRG data necessarily needs to be broadened on a linear scale. Here we provide a method that minimizes transition artifacts in between these broadening kernels.

20. Integrated Power Adapter: Isolated Converter with Integrated Passives and Low Material Stress

SciTech Connect

2010-09-01

ADEPT Project: CPES at Virginia Tech is developing an extremely efficient power converter that could be used in power adapters for small, lightweight laptops and other types of mobile electronic devices. Power adapters convert electrical energy into useable power for an electronic device, and they currently waste a lot of energy when they are plugged into an outlet to power up. CPES at Virginia Tech is integrating high-density capacitors, new magnetic materials, high-frequency integrated circuits, and a constant-flux transformer to create its efficient power converter. The high-density capacitors enable the power adapter to store more energy. The new magnetic materials also increase energy storage, and they can be precisely dispensed using a low-cost ink-jet printer which keeps costs down. The high-frequency integrated circuits can handle more power, and they can handle it more efficiently. And, the constant-flux transformer processes a consistent flow of electrical current, which makes the converter more efficient.

1. Approximate and exact numerical integration of the gas dynamic equations

NASA Technical Reports Server (NTRS)

Lewis, T. S.; Sirovich, L.

1979-01-01

A highly accurate approximation and a rapidly convergent numerical procedure are developed for two dimensional steady supersonic flow over an airfoil. Examples are given for a symmetric airfoil over a range of Mach numbers. Several interesting features are found in the calculation of the tail shock and the flow behind the airfoil.

2. A Diffusion Approximation and Numerical Methods for Adaptive Neuron Models with Stochastic Inputs.

PubMed

Rosenbaum, Robert

2016-01-01

Characterizing the spiking statistics of neurons receiving noisy synaptic input is a central problem in computational neuroscience. Monte Carlo approaches to this problem are computationally expensive and often fail to provide mechanistic insight. Thus, the field has seen the development of mathematical and numerical approaches, often relying on a Fokker-Planck formalism. These approaches force a compromise between biological realism, accuracy and computational efficiency. In this article we develop an extension of existing diffusion approximations to more accurately approximate the response of neurons with adaptation currents and noisy synaptic currents. The implementation refines existing numerical schemes for solving the associated Fokker-Planck equations to improve computational efficiency and accuracy. Computer code implementing the developed algorithms is made available to the public.

3. A Diffusion Approximation and Numerical Methods for Adaptive Neuron Models with Stochastic Inputs

PubMed Central

Rosenbaum, Robert

2016-01-01

Characterizing the spiking statistics of neurons receiving noisy synaptic input is a central problem in computational neuroscience. Monte Carlo approaches to this problem are computationally expensive and often fail to provide mechanistic insight. Thus, the field has seen the development of mathematical and numerical approaches, often relying on a Fokker-Planck formalism. These approaches force a compromise between biological realism, accuracy and computational efficiency. In this article we develop an extension of existing diffusion approximations to more accurately approximate the response of neurons with adaptation currents and noisy synaptic currents. The implementation refines existing numerical schemes for solving the associated Fokker-Planck equations to improve computational efficiency and accuracy. Computer code implementing the developed algorithms is made available to the public. PMID:27148036

Gonzalez-Pinto, S.; Perez-Rodriguez, S.

2009-09-01

The numerical integration of time-dependent PDEs, especially of Advection Diffusion Reaction type, for two and three spatial variables (in short, 2D and 3D problems) in the MoL framework is considered. The spatial discretization is made by using Finite Differences and the time integration is carried out by means of the L-stable, third order formula known as the two-stage Radau IIA method. The main point for the solution of the large dimensional ODEs is not to solve the stage values of the Radau method until convergence (because the convergence is very slow on the stiff components), but to perform only a very few iterations and take as the advancing solution the last stage value computed. The iterations are carried out by using the Approximate Matrix Factorization (AMF) coupled to a Newton-type iteration (SNI) as indicated in [5], which turns out to be an acceptably cheap iteration, similar to the Alternating Direction Implicit (ADI) method of Peaceman and Rachford (1955). Some stability results for the whole process (AMF)-(SNI) and a local error estimate for an adaptive time-integration are also given. Numerical results on two standard PDEs are presented and some conclusions about our method and other well-known solvers are drawn.
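The two-stage Radau IIA step named in this abstract can be sketched for a scalar linear test problem. This is a minimal illustration of the method itself (exact stage solve, advancing with the last, stiffly accurate stage), not the AMF/SNI iteration of the paper.

```python
# A single step of the two-stage Radau IIA method (L-stable, order 3)
# applied to the scalar test equation y' = lam*y.  The stage system
# (I - h*lam*A) Y = y_n * [1, 1]^T is linear here and solved exactly;
# the advancing value is the last stage, since the method is stiffly
# accurate (b equals the last row of A).
import math

A = [[5/12, -1/12],
     [3/4,   1/4]]   # Radau IIA Butcher matrix, c = (1/3, 1)

def radau2a_step(y, h, lam):
    # 2x2 system M Y = [y, y]^T with M = I - h*lam*A, via Cramer's rule
    m11 = 1 - h*lam*A[0][0]; m12 = -h*lam*A[0][1]
    m21 = -h*lam*A[1][0];    m22 = 1 - h*lam*A[1][1]
    det = m11*m22 - m12*m21
    return y * (m11 - m21) / det   # second stage value = y_{n+1}

lam, h = -2.0, 0.1
y = 1.0
for _ in range(10):                # integrate to t = 1
    y = radau2a_step(y, h, lam)
err = abs(y - math.exp(lam))       # small: the scheme is third order
```

For a very stiff value such as `lam = -1e6` the step remains strongly damped, reflecting the L-stability the abstract relies on.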

5. Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD

NASA Technical Reports Server (NTRS)

Yee, H. C.; Sjoegreen, B.

2004-01-01

The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multi-resolution wavelets (WAV) (for the above types of flow feature). These filter approaches also provide a natural and efficient way for the minimization of Div(B) numerical error. The filter scheme consists of spatially sixth order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme step, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type of base schemes. The ACM and wavelet filter schemes using the dissipative portion of a second-order shock-capturing scheme with sixth-order spatial central base scheme for both the inviscid and viscous MHD flux

6. Numerical integration of population models satisfying conservation laws: NSFD methods.

PubMed

Mickens, Ronald E

2007-10-01

Population models arising in ecology, epidemiology and mathematical biology may involve a conservation law, i.e. the total population is constant. In addition to these cases, other situations may occur for which the total population, asymptotically in time, approaches a constant value. Since it is rarely the situation that the equations of motion can be analytically solved to obtain exact solutions, it follows that numerical techniques are needed to provide solutions. However, numerical procedures are only valid if they can reproduce fundamental properties of the differential equations modeling the phenomena of interest. We show that for population models involving a dynamical conservation law, the use of nonstandard finite difference (NSFD) methods allows the construction of discretization schemes such that they are dynamically consistent (DC) with the original differential equations. The paper briefly discusses the NSFD methodology and the concept of DC, and illustrates their application to specific problems for population models.
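The conservation-preserving idea can be sketched for a standard SIR model. This is an illustrative NSFD scheme in the spirit of the paper (parameter values are made up), not one of its specific constructions: semi-implicit, nonlocal placement of the interaction terms makes the discrete total population exactly constant for any step size.

```python
# NSFD discretization of the SIR model
#   S' = -b*S*I,  I' = b*S*I - g*I,  R' = g*I.
# Each transfer term appears with the same discrete value in the two
# equations it couples, so S + I + R is conserved exactly at every step.

def nsfd_sir_step(S, I, R, h, b, g):
    S1 = S / (1 + h*b*I)              # solves S1 = S - h*b*S1*I
    I1 = (I + h*b*S1*I) / (1 + h*g)   # solves I1 = I + h*b*S1*I - h*g*I1
    R1 = R + h*g*I1
    return S1, I1, R1                 # S1 + I1 + R1 == S + I + R

S, I, R = 0.9, 0.1, 0.0
total0 = S + I + R
for _ in range(1000):
    S, I, R = nsfd_sir_step(S, I, R, h=0.5, b=0.5, g=0.2)
drift = abs((S + I + R) - total0)     # zero up to round-off
```

The scheme also preserves positivity, another dynamic-consistency property: every update is a product or sum of nonnegative quantities.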

7. Impact of numerical integration on gas curtain simulations

SciTech Connect

Rider, W.; Kamm, J.

2000-11-01

In recent years, we have presented a less than glowing experimental comparison of hydrodynamic codes with the gas curtain experiment (e.g., Kamm et al. 1999a). Here, we discuss the manner in which the details of the hydrodynamic integration techniques may conspire to produce poor results. This also includes some progress in improving the results and agreement with experimental results. Because our comparison was conducted on the details of the experimental images (i.e., their detailed structural information), our results do not conflict with previously published results of good agreement with Richtmyer-Meshkov instabilities based on the integral scale of mixing. New experimental and analysis techniques are also discussed.

8. Managing Climate Risk. Integrating Adaptation into World Bank Group Operations

SciTech Connect

Van Aalst, M.

2006-08-15

Climate change is already taking place, and further changes are inevitable. Developing countries, and particularly the poorest people in these countries, are most at risk. The impacts result not only from gradual changes in temperature and sea level but also, in particular, from increased climate variability and extremes, including more intense floods, droughts, and storms. These changes are already having major impacts on the economic performance of developing countries and on the lives and livelihoods of millions of poor people around the world. Climate change thus directly affects the World Bank Group's mission of eradicating poverty. It also puts at risk many projects in a wide range of sectors, including infrastructure, agriculture, human health, water resources, and environment. The risks include physical threats to the investments, potential underperformance, and the possibility that projects will indirectly contribute to rising vulnerability by, for example, triggering investment and settlement in high-risk areas. The way to address these concerns is not to separate climate change adaptation from other priorities but to integrate comprehensive climate risk management into development planning, programs, and projects. While there is a great need to heighten awareness of climate risk in Bank work, a large body of experience on climate risk management is already available, in analytical work, in country dialogues, and in a growing number of investment projects. This operational experience highlights the general ingredients for successful integration of climate risk management into the mainstream development agenda: getting the right sectoral departments and senior policy makers involved; incorporating risk management into economic planning; engaging a wide range of nongovernmental actors (businesses, nongovernmental organizations, communities, and so on); giving attention to regulatory issues; and choosing strategies that will pay off immediately under current

9. Numerical simulation of scattering of acoustic waves by inelastic bodies using hypersingular boundary integral equation

SciTech Connect

Daeva, S.G.; Setukha, A.V.

2015-03-10

A numerical method is proposed for solving a problem of diffraction of acoustic waves by a system of solid and thin objects, based on reducing the problem to a boundary integral equation in which the integral is understood in the sense of the Hadamard finite part. To solve this equation we applied a numerical scheme based on piecewise constant approximations and the collocation method. The constructed scheme differs from earlier known ones in that approximate analytical expressions for the coefficients of the resulting system of linear equations are obtained by separating out the main part of the kernel of the integral operator. The proposed numerical scheme is tested on the solution of the model problem of diffraction of an acoustic wave by an inelastic sphere.

10. Numerical Research of Airframe/Engine Integrative Hypersonic Vehicle

DTIC Science & Technology

2007-11-02

paper, an engineering method and a finite volume method based on the center of grid are developed for preliminary research of interested integrative...development of hypersonic technology, advanced experimental, analytical and computational methods are being exploited in the design of hypersonic...configurations to obtain excellent aerodynamic characteristics[5]. Due to the limitation of test capabilities to model all the impossible flight conditions

11. Simpson's Rule by Rectangles: A Numerical Approach to Integration.

ERIC Educational Resources Information Center

Powell, Martin

1985-01-01

Shows that Simpson's rule can be obtained as the average of three simple rectangular approximations and can therefore be introduced to students before they meet any calculus. In addition, the accuracy of the rule (which is exact for cubics) can be exploited to introduce the topic of integration. (JN)
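The rectangle-based construction can be sketched as follows. The weighting (L + 4M + R)/6 is the standard combination of left, midpoint, and right rectangles that yields Simpson's rule; the article's exact presentation may differ.

```python
# Simpson's rule on [a, b] assembled from three rectangle areas
# (left, midpoint, right) as the weighted average (L + 4M + R)/6.
# The combined rule is exact for polynomials up to degree three.

def simpson_from_rectangles(f, a, b):
    w = b - a
    L = w * f(a)             # left-endpoint rectangle
    M = w * f((a + b) / 2)   # midpoint rectangle
    R = w * f(b)             # right-endpoint rectangle
    return (L + 4 * M + R) / 6

approx = simpson_from_rectangles(lambda x: x**3, 0.0, 2.0)
exact = 2.0**4 / 4           # integral of x^3 over [0, 2] is 4
```

For a quartic the single-interval rule is no longer exact, which makes the degree-of-precision claim concrete for students.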

12. Spatially adaptive stochastic numerical methods for intrinsic fluctuations in reaction-diffusion systems

SciTech Connect

Atzberger, Paul J.

2010-05-01

Stochastic partial differential equations are introduced for the continuum concentration fields of reaction-diffusion systems. The stochastic partial differential equations account for fluctuations arising from the finite number of molecules which diffusively migrate and react. Spatially adaptive stochastic numerical methods are developed for approximation of the stochastic partial differential equations. The methods allow for adaptive meshes with multiple levels of resolution, Neumann and Dirichlet boundary conditions, and domains having geometries with curved boundaries. A key issue addressed by the methods is the formulation of consistent discretizations for the stochastic driving fields at coarse-refined interfaces of the mesh and at boundaries. Methods are also introduced for the efficient generation of the required stochastic driving fields on such meshes. As a demonstration of the methods, investigations are made of the role of fluctuations in a biological model for microorganism direction sensing based on concentration gradients. Also investigated is a mechanism for spatial pattern formation induced by fluctuations. The discretization approaches introduced for SPDEs have the potential to be widely applicable in the development of numerical methods for the study of spatially extended stochastic systems.

13. Numerical study of three-dimensional liquid jet breakup with adaptive unstructured meshes

Xie, Zhihua; Pavlidis, Dimitrios; Salinas, Pablo; Pain, Christopher; Matar, Omar

2016-11-01

Liquid jet breakup is an important fundamental multiphase flow, often found in many industrial engineering applications. The breakup process is very complex, involving jets, liquid films, ligaments, and small droplets, featuring tremendous complexity in interfacial topology and a large range of spatial scales. The objective of this study is to investigate the fluid dynamics of three-dimensional liquid jet breakup problems, such as liquid jet primary breakup and gas-sheared liquid jet breakup. An adaptive unstructured mesh modelling framework is employed here, which can modify and adapt unstructured meshes to optimally represent the underlying physics of multiphase problems and reduce computational effort without sacrificing accuracy. The numerical framework consists of a mixed control volume and finite element formulation, a 'volume of fluid' type method for the interface capturing based on a compressive control volume advection method and second-order finite element methods, and a force-balanced algorithm for the surface tension implementation. Numerical examples of some benchmark tests and the dynamics of liquid jet breakup with and without ambient gas are presented to demonstrate the capability of this method.

14. Integrated numerical methods for hypersonic aircraft cooling systems analysis

NASA Technical Reports Server (NTRS)

Petley, Dennis H.; Jones, Stuart C.; Dziedzic, William M.

1992-01-01

Numerical methods have been developed for the analysis of hypersonic aircraft cooling systems. A general purpose finite difference thermal analysis code is used to determine areas which must be cooled. Complex cooling networks of series and parallel flow can be analyzed using a finite difference computer program. Both internal fluid flow and heat transfer are analyzed, because increased heat flow causes a decrease in the flow of the coolant. The steady-state solution is obtained by a successive point iterative method. The transient analysis uses implicit forward-backward differencing. Several examples of the use of the program in studies of hypersonic aircraft and rockets are provided.

15. A Numerical Study of Mesh Adaptivity in Multiphase Flows with Non-Newtonian Fluids

Percival, James; Pavlidis, Dimitrios; Xie, Zhihua; Alberini, Federico; Simmons, Mark; Pain, Christopher; Matar, Omar

2014-11-01

We present an investigation into the computational efficiency benefits of dynamic mesh adaptivity in the numerical simulation of transient multiphase fluid flow problems involving Non-Newtonian fluids. Such fluids appear in a range of industrial applications, from printing inks to toothpastes and introduce new challenges for mesh adaptivity due to the additional ``memory'' of viscoelastic fluids. Nevertheless, the multiscale nature of these flows implies huge potential benefits for a successful implementation. The study is performed using the open source package Fluidity, which couples an unstructured mesh control volume finite element solver for the multiphase Navier-Stokes equations to a dynamic anisotropic mesh adaptivity algorithm, based on estimated solution interpolation error criteria, and conservative mesh-to-mesh interpolation routine. The code is applied to problems involving rheologies ranging from simple Newtonian to shear-thinning to viscoelastic materials and verified against experimental data for various industrial and microfluidic flows. This work was undertaken as part of the EPSRC MEMPHIS programme grant EP/K003976/1.

16. Numerical approximation of weakly singular integrals on a triangle

2016-10-01

In this paper, we propose product cubature rules based on polynomial approximation in order to evaluate integrals of the form I(F; y) = ∫_T K(x, y) F(x) ω(x) dx, where x = (x1, x2), y = (y1, y2), K is a "weakly" singular or a "nearly" singular kernel, the domain T is the triangle with vertices (0, 0), (0, 1), (1, 0), F is a given bivariate function defined on T, and ω is a proper weight function.
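As a hedged illustration of evaluating a weakly singular integrand on this triangle (not the product cubature rules proposed in the paper), the classical Duffy transformation maps T onto the unit square with a Jacobian that cancels a 1/r singularity at the vertex:

```python
# Duffy-transform cubature on the triangle T with vertices
# (0,0), (1,0), (0,1).  The map (x, y) = (u*(1-v), u*v) sends the unit
# square onto T with Jacobian u, which cancels a 1/r singularity at the
# origin, so a plain midpoint product rule converges fast.
import math

def duffy_integrate(f, n):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        for j in range(n):
            v = (j + 0.5) * h
            x, y = u * (1 - v), u * v
            total += f(x, y) * u * h * h   # u = Jacobian of the map
    return total

f = lambda x, y: 1.0 / math.hypot(x, y)    # weakly singular at (0,0)
approx = duffy_integrate(f, 200)
exact = math.sqrt(2) * math.asinh(1.0)     # closed form for this f on T
```

The closed-form value follows from the transformed integral reducing to ∫₀¹ dv/√(2v² − 2v + 1) = √2 asinh(1).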

17. Advantages of vertically adaptive coordinates in numerical models of stratified shelf seas

Gräwe, Ulf; Holtermann, Peter; Klingbeil, Knut; Burchard, Hans

2015-08-01

Shelf seas such as the North Sea and the Baltic Sea are characterised by spatially and temporally varying stratification that is highly relevant for their physical dynamics and the evolution of their ecosystems. Stratification may vary from unstably stratified (e.g., due to convective surface cooling) to strongly stratified with density jumps of up to 10 kg/m3 per m (e.g., in overflows into the Baltic Sea). Stratification has a direct impact on vertical turbulent transports (e.g., of nutrients) and influences the entrainment rate of ambient water into dense bottom currents, which in turn determine the stratification of, and oxygen supply to, e.g., the central Baltic Sea. Moreover, the suppression of the vertical diffusivity at the summer thermocline is one of the limiting factors for the vertical exchange of nutrients in the North Sea. Due to limitations of computational resources, and since the locations of such density jumps (either by salinity or temperature) are predicted by the model simulation itself, predefined vertical coordinates cannot always reliably resolve these features. Thus, all shelf sea models with a predefined vertical coordinate distribution are inherently subject to under-resolution of the density structure. To solve this problem, Burchard and Beckers (2004) and Hofmeister et al. (2010) developed the concept of vertically adaptive coordinates for ocean models, where zooming of vertical coordinates at locations of strong stratification (and shear) is imposed. This is achieved by solving a diffusion equation for the position of the coordinates (with the diffusivity being proportional to the stratification or shear frequencies). We will show for a coupled model system of the North Sea and the Baltic Sea (resolution ~1.8 km) how numerical mixing is substantially reduced and model results become significantly more realistic when vertically adaptive coordinates are applied. We additionally demonstrate that vertically adaptive coordinates perform well

18. Numerical Modeling of 3-D Dynamics of Ultrasound Contrast Agent Microbubbles Using the Boundary Integral Method

Calvisi, Michael; Manmi, Kawa; Wang, Qianxi

2014-11-01

Ultrasound contrast agents (UCAs) are microbubbles stabilized with a shell typically of lipid, polymer, or protein and are emerging as a unique tool for noninvasive therapies ranging from gene delivery to tumor ablation. The nonspherical dynamics of contrast agents are thought to play an important role in both diagnostic and therapeutic applications, for example, causing the emission of subharmonic frequency components and enhancing the uptake of therapeutic agents across cell membranes and tissue interfaces. A three-dimensional model for nonspherical contrast agent dynamics based on the boundary integral method is presented. The effects of the encapsulating shell are approximated by adapting Hoff's model for thin-shell, spherical contrast agents to the nonspherical case. A high-quality mesh of the bubble surface is maintained by implementing a hybrid approach of the Lagrangian method and elastic mesh technique. Numerical analyses for the dynamics of UCAs in an infinite liquid and near a rigid wall are performed in parameter regimes of clinical relevance. The results show that the presence of a coating significantly reduces the oscillation amplitude and period, increases the ultrasound pressure amplitude required to incite jetting, and reduces the jet width and velocity.

19. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

NASA Technical Reports Server (NTRS)

Bockelie, Michael J.; Eiseman, Peter R.

1990-01-01

A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

20. Fast-Fourier-transform based numerical integration method for the Rayleigh-Sommerfeld diffraction formula

Shen, Fabin; Wang, Anbo

2006-02-01

The numerical calculation of the Rayleigh-Sommerfeld diffraction integral is investigated. The implementation of a fast-Fourier-transform (FFT) based direct integration (FFT-DI) method is presented, and Simpson's rule is used to improve the calculation accuracy. The sampling interval, the size of the computation window, and their influence on numerical accuracy and on computational complexity are discussed for the FFT-DI and the FFT-based angular spectrum (FFT-AS) methods. The performance of the FFT-DI method is verified by numerical simulation and compared with that of the FFT-AS method.
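The core computational step of FFT-based direct integration, evaluating a sampled convolution with zero-padded FFTs, can be sketched in one dimension. The kernel below is an arbitrary stand-in, not the actual Rayleigh-Sommerfeld kernel, and the Simpson-weight refinement mentioned in the abstract is omitted.

```python
# On a sampled grid, a diffraction integral of convolution type becomes
# a discrete convolution of the field with the sampled kernel, which
# FFTs evaluate in O(N log N).  Zero-padding to N >= 2*len - 1 makes the
# circular FFT convolution agree with the direct (linear) double sum.
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(64)    # sampled "field"
g = rng.standard_normal(64)    # sampled "kernel" (illustrative stand-in)

# Direct linear convolution, O(N^2)
direct = np.array([sum(u[m] * g[n - m] for m in range(64) if 0 <= n - m < 64)
                   for n in range(127)])

# FFT-based evaluation with zero-padding to avoid wrap-around
N = 128
fft_conv = np.fft.irfft(np.fft.rfft(u, N) * np.fft.rfft(g, N), N)[:127]
max_err = np.max(np.abs(direct - fft_conv))
```

The agreement of the two results is exactly the equivalence that the FFT-DI method exploits to trade the quadratic direct sum for two FFTs and a pointwise product.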

1. Numerical simulation of diffusion MRI signals using an adaptive time-stepping method

Li, Jing-Rebecca; Calhoun, Donna; Poupon, Cyril; Le Bihan, Denis

2014-01-01

The effect on the MRI signal of water diffusion in biological tissues in the presence of applied magnetic field gradient pulses can be modelled by a multiple compartment Bloch-Torrey partial differential equation. We present a method for the numerical solution of this equation by coupling a standard Cartesian spatial discretization with an adaptive time discretization. The time discretization is done using the explicit Runge-Kutta-Chebyshev method, which is more efficient than the forward Euler time discretization for diffusive-type problems. We use this approach to simulate the diffusion MRI signal from the extra-cylindrical compartment in a tissue model of the brain gray matter consisting of cylindrical and spherical cells and illustrate the effect of cell membrane permeability.
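A generic adaptive time-stepping loop can be sketched as follows. This is step doubling with an explicit first-order method and Richardson extrapolation, purely illustrative of adaptive step control; it is not the Runge-Kutta-Chebyshev scheme used in the paper.

```python
# Adaptive time stepping by step doubling: each step is taken once with
# h and twice with h/2; the difference estimates the local error, and h
# is grown or shrunk to track a tolerance tol.
import math

def euler(f, y, t, h):
    return y + h * f(t, y)

def adaptive_integrate(f, y, t0, t1, h0, tol=1e-5):
    t, h = t0, h0
    while t < t1:
        h = min(h, t1 - t)
        big = euler(f, y, t, h)
        half = euler(f, euler(f, y, t, h / 2), t + h / 2, h / 2)
        err = abs(half - big)                 # local error estimate
        if err <= tol:                        # accept the step
            y, t = 2 * half - big, t + h      # Richardson extrapolation
        # grow/shrink h: safety factor 0.9, clipped to [0.2, 2.0]
        h *= min(2.0, max(0.2, 0.9 * math.sqrt(tol / max(err, 1e-16))))
    return y

y_end = adaptive_integrate(lambda t, y: -y, 1.0, 0.0, 2.0, h0=0.5)
```

For diffusive problems, methods like Runge-Kutta-Chebyshev wrap a similar accept/reject controller around stages chosen for an extended real stability interval rather than for accuracy alone.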

2. On the use of the line integral in the numerical treatment of conservative problems

Brugnano, Luigi; Iavernaro, Felice

2016-06-01

We sketch out the use of the line integral as a tool to devise numerical methods suitable for conservative and, in particular, Hamiltonian problems. The monograph [3] presents the fundamental theory on line integral methods and this short note aims at exploring some aspects and results emerging from their study.
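The conservation property at the heart of this family can be sketched directly: the s = 1 member (HBVM(1,1)) reduces to the implicit midpoint rule, which conserves quadratic Hamiltonians exactly. For the harmonic oscillator the implicit step is linear and solvable in closed form.

```python
# Implicit midpoint step for the harmonic oscillator q' = p, p' = -q,
# with Hamiltonian H = (p^2 + q^2)/2.  The step is the closed-form
# solution of q1 = q + h*(p + p1)/2, p1 = p - h*(q + q1)/2, and it
# preserves H exactly (the update is a rotation).

def midpoint_step(q, p, h):
    a = h / 2.0
    d = 1.0 + a * a
    return (((1 - a * a) * q + 2 * a * p) / d,
            ((1 - a * a) * p - 2 * a * q) / d)

q, p = 1.0, 0.0
H0 = 0.5 * (p * p + q * q)
for _ in range(10000):
    q, p = midpoint_step(q, p, 0.1)
drift = abs(0.5 * (p * p + q * q) - H0)   # energy drift: round-off only
```

Higher members of the HBVM family extend this exact conservation to polynomial Hamiltonians of higher degree via quadrature along the line integral.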

3. Adaptive Voltage Management Enabling Energy Efficiency in Nanoscale Integrated Circuits

Shapiro, Alexander E.

Battery-powered devices emphasize energy efficiency in modern sub-22 nm CMOS microprocessors, rendering classic power reduction solutions insufficient. Classical solutions that reduce power consumption in high performance integrated circuits are superseded by novel and enhanced power reduction techniques to enable the greater energy efficiency desired in modern microprocessors and emerging mobile platforms. Dynamic power consumption is reduced by operating over a wide range of supply voltages. This region of operation is enabled by a high speed and power efficient level shifter which translates low voltage digital signals to higher voltages (and vice versa), a key component that enables communication among circuits operating at different voltage levels. Additionally, optimizing the wide supply voltage range of signals propagating across long interconnect enables greater energy savings. A closed-form delay model supporting a wide voltage range is developed to enable this capability. The model supports an ultra-wide voltage range from nominal voltages to subthreshold voltages, and a wide range of repeater sizes. To mitigate the drawback of lower operating speed at reduced supply voltages, the high performance exhibited by MOS current mode logic technology is exploited. High performance and energy efficient circuits are enabled by combining this logic style with power efficient near threshold circuits. Many-core systems that operate at high frequencies and process highly parallel workloads benefit from this combination of MCML with NTC. Due to aggressive scaling, static power consumption can in some cases overshadow dynamic power. Techniques to lower leakage power have therefore become an important objective in modern microprocessors. To address this issue, an adaptive power gating technique is proposed. This technique utilizes high levels of granularity to save additional leakage power when a circuit is active as opposed to standard power gating that saves static

4. Numerical solution of a class of integral equations arising in two-dimensional aerodynamics

NASA Technical Reports Server (NTRS)

Fromme, J.; Golberg, M. A.

1978-01-01

We consider the numerical solution of a class of integral equations arising in the determination of the compressible flow about a thin airfoil in a ventilated wind tunnel. The integral equations are of the first kind with kernels having a Cauchy singularity. Using appropriately chosen Hilbert spaces, it is shown that the kernel gives rise to a mapping which is the sum of a unitary operator and a compact operator. This allows the problem to be studied in terms of an equivalent integral equation of the second kind. A convergent numerical algorithm for its solution is derived by using Galerkin's method. It is shown that this algorithm is numerically equivalent to Bland's collocation method, which is then used as the method of computation. Extensive numerical calculations are presented establishing the validity of the theory.

PubMed Central

Chaix, Cécile; Kovalsky, Stephen; Kosmider, Matthew; Barrett, Harrison H.; Furenlid, Lars R.

2015-01-01

AdaptiSPECT is a pre-clinical adaptive SPECT imaging system under final development at the Center for Gamma-ray Imaging. The system incorporates multiple adaptive features: an adaptive aperture, 16 detectors mounted on translational stages, and the ability to switch between a non-multiplexed and a multiplexed imaging configuration. In this paper, we review the design of AdaptiSPECT and its adaptive features. We then describe the on-going integration of the imaging system. PMID:26347197

6. Numerical quadrature methods for integrals of singular periodic functions and their application to singular and weakly singular integral equations

NASA Technical Reports Server (NTRS)

Sidi, A.; Israeli, M.

1986-01-01

High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
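The Euler-Maclaurin foundation of such rules rests on a fact that is easy to demonstrate (an illustrative sketch, not code from the paper): for a smooth periodic integrand, the plain trapezoidal rule over one period already converges spectrally fast, so corrections are only needed for the singular part of the integrand.

```python
import numpy as np

def periodic_trapezoid(f, a, b, n):
    """Trapezoidal rule over one period [a, b] with n equal panels.

    For periodic f the two endpoint samples coincide, so a single
    sum over n equispaced points suffices.
    """
    x = np.linspace(a, b, n, endpoint=False)
    h = (b - a) / n
    return h * np.sum(f(x))

# Smooth periodic test integrand: the integral of exp(cos x) over
# [0, 2*pi] equals 2*pi*I0(1), with I0 the modified Bessel function.
approx = periodic_trapezoid(lambda x: np.exp(np.cos(x)), 0.0, 2 * np.pi, 32)
exact = 2 * np.pi * np.i0(1.0)
```

With only 32 samples the error is already at roundoff level, which is the spectral accuracy that Euler-Maclaurin-based corrections exploit for the regular part of the integrand.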

7. Buckling of adaptive elastic bone-plate: theoretical and numerical investigation.

PubMed

Ramtani, S; Abdi, M

2005-06-01

During day-to-day activities, many bones in the axial and appendicular skeleton are subjected to repetitive, cyclic loading that often results directly in an increased risk of bone fracture. In clinical orthopedics, trabecular fatigue fractures are observed as compressive stress fractures in the proximal femur, vertebrae, calcaneus and tibia, and are often preceded by buckling and bending of microstructural elements (Müller et al. in J Biomechanics 31:150 1998; Gibson in J Biomechanics 18:317-328 1985; Gibson and Ashby in Cellular solids 1997; Lotz et al. in Osteoporos Int 5:252-261 1995; Carter and Hayes in Science 194:1174-1176 1976). However, the relative importance of bone density and architecture in the etiology of these fractures is poorly understood and consequently has not been investigated from a biomechanical point of view. In the present contribution, an attempt is made to formulate a bone-plate buckling theory using Cowin's concepts of adaptive elasticity (Cowin and Hegedus in J Elast 6:313-325 1976; Hegedus and Cowin in J Elast 6:337-352 1976). In particular, the buckling problem of a Kirchhoff-Love bone plate is investigated numerically by using the finite difference method and an iterative solving approach (Chen in Comput Methods Appl Mech Eng 167:91-99 1998; Hildebrand in Introduction to numerical analysis 1974; Richtmyer and Morton in Difference methods for initial-value problems 1967).

8. Parallel implementation of an adaptive and parameter-free N-body integrator

Pruett, C. David; Ingham, William H.; Herman, Ralph D.

2011-05-01

Previously, Pruett et al. (2003) [3] described an N-body integrator of arbitrarily high order M with an asymptotic operation count of O(MN). The algorithm's structure lends itself readily to data parallelization, which we document and demonstrate here in the integration of point-mass systems subject to Newtonian gravitation. High order is shown to benefit parallel efficiency. The resulting N-body integrator is robust, parameter-free, highly accurate, and adaptive in both time-step and order. Moreover, it exhibits linear speedup on distributed parallel processors, provided that each processor is assigned at least a handful of bodies. Program summary: Program title: PNB.f90. Catalogue identifier: AEIK_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIK_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 3052. No. of bytes in distributed program, including test data, etc.: 68 600. Distribution format: tar.gz. Programming language: Fortran 90 and OpenMPI. Computer: All shared or distributed memory parallel processors. Operating system: Unix/Linux. Has the code been vectorized or parallelized?: The code has been parallelized but has not been explicitly vectorized. RAM: Dependent upon N. Classification: 4.3, 4.12, 6.5. Nature of problem: High accuracy numerical evaluation of trajectories of N point masses each subject to Newtonian gravitation. Solution method: Parallel and adaptive extrapolation in time via power series of arbitrary degree. Running time: 5.1 s for the demo program supplied with the package.

9. Reliable and efficient a posteriori error estimation for adaptive IGA boundary element methods for weakly-singular integral equations

Feischl, Michael; Gantner, Gregor; Praetorius, Dirk

2015-06-01

We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first-kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence.

10. Reliable and efficient a posteriori error estimation for adaptive IGA boundary element methods for weakly-singular integral equations

PubMed Central

Feischl, Michael; Gantner, Gregor; Praetorius, Dirk

2015-01-01

We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first-kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence. PMID:26085698

11. Integration of the immune system: a complex adaptive supersystem

Crisman, Mark V.

2001-10-01

Immunity to pathogenic organisms is a complex process involving interacting factors within the immune system including circulating cells, tissues and soluble chemical mediators. Both the efficiency and adaptive responses of the immune system in a dynamic, often hostile, environment are essential for maintaining our health and homeostasis. This paper will present a brief review of one of nature's most elegant, complex adaptive systems.

12. Tensor Product Model Transformation Based Adaptive Integral-Sliding Mode Controller: Equivalent Control Method

PubMed Central

Zhao, Guoliang; Li, Hongxing

2013-01-01

This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of proposed controllers consists in having a dynamical adaptive control gain to establish a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model. PMID:24453897

13. Tensor product model transformation based adaptive integral-sliding mode controller: equivalent control method.

PubMed

Zhao, Guoliang; Sun, Kaibiao; Li, Hongxing

2013-01-01

This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of proposed controllers consists in having a dynamical adaptive control gain to establish a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model.

14. Developmental specialization of the left parietal cortex for the semantic representation of Arabic numerals: an fMR-adaptation study.

PubMed

Vogel, Stephan E; Goffin, Celia; Ansari, Daniel

2015-04-01

The way the human brain constructs representations of numerical symbols is poorly understood. While increasing evidence from neuroimaging studies has indicated that the intraparietal sulcus (IPS) becomes increasingly specialized for symbolic numerical magnitude representation over developmental time, the extent to which these changes are associated with age-related differences in symbolic numerical magnitude representation or with developmental changes in non-numerical processes, such as response selection, remains to be uncovered. To address these outstanding questions we investigated developmental changes in the cortical representation of symbolic numerical magnitude in 6- to 14-year-old children using a passive functional magnetic resonance imaging adaptation design, thereby mitigating the influence of response selection. A single-digit Arabic numeral was repeatedly presented on a computer screen and interspersed with the presentation of novel digits deviating as a function of numerical ratio (smaller/larger number). Results demonstrated a correlation between age and numerical ratio in the left IPS, suggesting an age-related increase in the extent to which numerical symbols are represented in the left IPS. Brain activation of the right IPS was modulated by numerical ratio but did not correlate with age, indicating hemispheric differences in IPS engagement during the development of symbolic numerical representation.

15. Numerical method to solve Cauchy type singular integral equation with error bounds

Setia, Amit; Sharma, Vaishali; Liu, Yucheng

2017-01-01

Cauchy type singular integral equations with index zero naturally occur in the field of aerodynamics. The literature on these equations is well developed, and Chebyshev polynomials are most frequently used to solve them. In this paper, a residual based Galerkin's method is proposed, using Legendre polynomials as basis functions, to solve the Cauchy singular integral equation of index zero. It converts the Cauchy singular integral equation into a system of equations which can be easily solved. Test examples are given to illustrate the proposed numerical method. Error bounds are derived as well as implemented in all the test examples.

16. A comparison of the efficiency of numerical methods for integrating chemical kinetic rate equations

NASA Technical Reports Server (NTRS)

1984-01-01

The efficiency of several algorithms used for the numerical integration of stiff ordinary differential equations was compared. The methods examined included two general purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes were applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code available for the integration of combustion kinetic rate equations. It is shown that an iterative solution of the algebraic energy conservation equation to compute the temperature can be more efficient than evaluating the temperature by integrating its time-derivative.
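For flavor, here is a hedged sketch of this kind of stiff-kinetics benchmark (the report's codes and test problems are as named above; the snippet instead uses SciPy's LSODA integrator, a descendant of the LSODE lineage, on the classic Robertson kinetics problem):

```python
import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, y):
    """Robertson's stiff chemical kinetics test problem."""
    y1, y2, y3 = y
    return [-0.04 * y1 + 1.0e4 * y2 * y3,
            0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 ** 2,
            3.0e7 * y2 ** 2]

# LSODA switches automatically between stiff (BDF) and non-stiff
# (Adams) methods, in the spirit of the LSODE family of codes.
sol = solve_ivp(robertson, (0.0, 1.0e4), [1.0, 0.0, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-10)

# Species fractions always sum to 1 (mass conservation), a useful
# sanity check on the integration.
mass = float(sol.y[:, -1].sum())
```

An explicit non-stiff method would need step sizes orders of magnitude smaller on this problem, which is why purpose-built stiff integrators dominate combustion kinetics.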

17. Direct numerical simulations of particle-laden density currents with adaptive, discontinuous finite elements

Parkinson, S. D.; Hill, J.; Piggott, M. D.; Allison, P. A.

2014-09-01

High-resolution direct numerical simulations (DNSs) are an important tool for the detailed analysis of turbidity current dynamics. Models that resolve the vertical structure and turbulence of the flow are typically based upon the Navier-Stokes equations. Two-dimensional simulations are known to produce unrealistic cohesive vortices that are not representative of the real three-dimensional physics. The effect of this phenomenon is particularly apparent in the later stages of flow propagation. The ideal solution to this problem is to run the simulation in three dimensions, but this is computationally expensive. This paper presents a novel finite-element (FE) DNS turbidity current model that has been built within Fluidity, an open source, general purpose, computational fluid dynamics code. The model is validated through re-creation of a lock release density current at a Grashof number of 5 × 10^6 in two and three dimensions. Validation of the model considers the flow energy budget, sedimentation rate, head speed, wall normal velocity profiles and the final deposit. Conservation of energy in particular is found to be a good metric for measuring model performance in capturing the range of dynamics on a range of meshes. FE models scale well over many thousands of processors and do not impose restrictions on domain shape, but they are computationally expensive. The use of adaptive mesh optimisation is shown to reduce the required element count by approximately two orders of magnitude in comparison with fixed, uniform mesh simulations. This leads to a substantial reduction in computational cost. The computational savings and flexibility afforded by adaptivity, along with the flexibility of FE methods, make this model well suited to simulating turbidity currents in complex domains.

18. Assessing institutional capacities to adapt to climate change - integrating psychological dimensions in the Adaptive Capacity Wheel

Grothmann, T.; Grecksch, K.; Winges, M.; Siebenhüner, B.

2013-03-01

19. Assessing institutional capacities to adapt to climate change: integrating psychological dimensions in the Adaptive Capacity Wheel

Grothmann, T.; Grecksch, K.; Winges, M.; Siebenhüner, B.

2013-12-01

20. Numerical modeling of acoustic timescale detonation initiation using the Adaptive Wavelet-Collocation Method

Regele, Jonathan D.

Multi-dimensional numerical modeling of detonation initiation is the primary goal of this thesis. The particular scenario under examination is initiating a detonation wave through acoustic timescale thermal power deposition. Physically this would correspond to igniting a reactive mixture with a laser pulse as opposed to a typical electric spark. Numerous spatial and temporal scales are involved, which makes these problems computationally challenging to solve. In order to model these problems, a shock capturing scheme is developed that utilizes the computational efficiency of the Adaptive Wavelet-Collocation Method (AWCM) to properly handle the multiple scales involved. With this technique, previous one-dimensional problems with unphysically small activation energies are revisited and simulated with the AWCM. The results demonstrate a qualitative agreement with previous work that used a uniform grid MacCormack scheme. Both sets of data show the basic sequence of events that are needed in order for a DDT process to occur. Instead of starting with a strong shock-coupled reaction zone as many other studies have done, the initial pulse is weak enough to allow the shock and the reaction zone to decouple. Reflected compression waves generated by the inertially confined reaction zone lead to localized reaction centers, which eventually explode and further accelerate the process. A shock-coupled reaction zone forms an initially overdriven detonation, which relaxes to a steady CJ wave. The one-dimensional problems are extended to two dimensions using a circular heat deposition in a channel. Two-dimensional results demonstrate the same sequence of events, which suggests that the concepts developed in the original one-dimensional work are applicable to multiple dimensions.

1. Frequency responses and resolving power of numerical integration of sampled data

Yaroslavsky, L. P.; Moreno, A.; Campos, J.

2005-04-01

Methods of numerical integration of sampled data are compared in terms of their frequency responses and resolving power. The trapezoidal, Simpson, and Simpson-3/8 methods, a method based on cubic spline data interpolation, and a Discrete Fourier Transform (DFT) based method are compared theoretically and by numerical experiments. Boundary effects associated with the DFT-based and spline-based methods are investigated, and an improved Discrete Cosine Transform based method is suggested and shown to be superior to all other methods, both in terms of approximation to the ideal continuous integrator and in the level of the boundary effects.
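A minimal illustration of such a comparison (an assumed test case, not the paper's experiments): integrating sampled sin(x) over [0, π], whose exact integral is 2, with the composite trapezoidal and Simpson rules shows the higher-order response of Simpson's rule directly.

```python
import numpy as np

# Odd number of samples (an even number of panels), as Simpson requires.
x = np.linspace(0.0, np.pi, 101)
y = np.sin(x)
h = x[1] - x[0]

# Composite trapezoidal rule: second-order accurate.
trap = h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

# Composite Simpson rule: fourth-order accurate.
simp = h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-2:2].sum() + y[-1])

err_trap = abs(trap - 2.0)  # on the order of 1e-4 for 100 panels
err_simp = abs(simp - 2.0)  # several orders of magnitude smaller
```

In frequency-response terms, both rules act as approximate 1/(iω) integrators; Simpson's response stays close to the ideal over a wider band, which is what the smaller error reflects.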

2. Numerical integration of the stochastic Landau-Lifshitz-Gilbert equation in generic time-discretization schemes.

PubMed

Romá, Federico; Cugliandolo, Leticia F; Lozano, Gustavo S

2014-08-01

We introduce a numerical method to integrate the stochastic Landau-Lifshitz-Gilbert equation in spherical coordinates for generic discretization schemes. This method conserves the magnetization modulus and ensures the approach to equilibrium under the expected conditions. We test the algorithm on a benchmark problem: the dynamics of a uniformly magnetized ellipsoid. We investigate the influence of various parameters, and in particular, we analyze the efficiency of the numerical integration, in terms of the number of steps needed to reach a chosen long time with a given accuracy.
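A hedged sketch of why a spherical-coordinate formulation conserves the magnetization modulus (illustrative only, not the authors' stochastic scheme): evolving the angles (θ, φ) and reconstructing m afterwards keeps |m| = 1 by construction, shown here for undamped precession about the z axis.

```python
import numpy as np

def precess(theta, phi, omega, dt, nsteps):
    """Undamped precession about z in spherical angles:
    d(theta)/dt = 0, d(phi)/dt = omega. Any update of (theta, phi),
    however inaccurate, leaves the modulus of m untouched."""
    for _ in range(nsteps):
        phi += omega * dt
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

m = precess(theta=0.3, phi=0.0, omega=2.0, dt=1e-3, nsteps=1000)
modulus = float(np.linalg.norm(m))  # 1 up to roundoff, by construction
```

A Cartesian integrator, by contrast, must either renormalize m after each step or use a carefully chosen discretization to avoid drift in the modulus.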

Bargatze, L. F.

2015-12-01

4. Feasibility study of the numerical integration of shell equations using the field method

NASA Technical Reports Server (NTRS)

Cohen, G. A.

1973-01-01

The field method is developed for arbitrary open branch domains subjected to general linear boundary conditions. Although closed branches are within the scope of the method, they are not treated here. The numerical feasibility of the method has been demonstrated by implementing it in a computer program for the linear static analysis of open branch shells of revolution under asymmetric loads. For such problems the field method eliminates the well-known numerical problem of long subintervals associated with the rapid growth of extraneous solutions. Also, the method appears to execute significantly faster than other numerical integration methods.

5. Numerical simulation of current sheet formation in a quasiseparatrix layer using adaptive mesh refinement

SciTech Connect

Effenberger, Frederic; Thust, Kay; Grauer, Rainer; Dreher, Juergen; Arnold, Lukas

2011-03-15

The formation of a thin current sheet in a magnetic quasiseparatrix layer (QSL) is investigated by means of numerical simulation using a simplified ideal, low-{beta}, MHD model. The initial configuration and driving boundary conditions are relevant to phenomena observed in the solar corona and were studied earlier by Aulanier et al. [Astron. Astrophys. 444, 961 (2005)]. In extension to that work, we use the technique of adaptive mesh refinement (AMR) to significantly enhance the local spatial resolution of the current sheet during its formation, which enables us to follow the evolution into a later stage. Our simulations are in good agreement with the results of Aulanier et al. up to the calculated time in that work. In a later phase, we observe a basically unarrested collapse of the sheet to length scales that are more than one order of magnitude smaller than those reported earlier. The current density attains correspondingly larger maximum values within the sheet. During this thinning process, which is finally limited by lack of resolution even in the AMR studies, the current sheet moves upward, following a global expansion of the magnetic structure during the quasistatic evolution. The sheet is locally one-dimensional and the plasma flow in its vicinity, when transformed into a comoving frame, qualitatively resembles a stagnation point flow. In conclusion, our simulations support the idea that extremely high current densities are generated in the vicinities of QSLs as a response to external perturbations, with no sign of saturation.

6. On the stability of numerical integration routines for ordinary differential equations.

NASA Technical Reports Server (NTRS)

Glover, K.; Willems, J. C.

1973-01-01

Numerical integration methods for the solution of initial value problems for ordinary vector differential equations may be modelled as discrete time feedback systems. The stability criteria discovered in modern control theory are applied to these systems and criteria involving the routine, the step size and the differential equation are derived. Linear multistep, Runge-Kutta, and predictor-corrector methods are all investigated.
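The feedback-system viewpoint is easy to make concrete (a toy sketch under assumed notation, not the paper's derivation): applied to the scalar test equation y' = λy, forward Euler gives the recursion y_{n+1} = (1 + hλ) y_n, so the step is stable exactly when the loop gain |1 + hλ| does not exceed 1.

```python
def euler_amplification(h, lam):
    """Per-step amplification factor of forward Euler on y' = lam * y."""
    return abs(1.0 + h * lam)

# For a fast decay rate lam = -100, stability requires h <= 0.02.
gain_stable = euler_amplification(0.01, -100.0)    # 0.0: stable step
gain_unstable = euler_amplification(0.05, -100.0)  # 4.0: errors grow
```

Multistep and predictor-corrector methods yield higher-order recursions, but the same idea applies: the roots of the characteristic (loop) polynomial must lie inside the unit circle.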

7. Some numerical methods for integrating systems of first-order ordinary differential equations

NASA Technical Reports Server (NTRS)

Clark, N. W.

1969-01-01

This report on numerical methods of integration includes the extrapolation methods of Bulirsch-Stoer and Neville. A comparison is made with the Runge-Kutta and Adams-Moulton methods, and circumstances are discussed under which the extrapolation method may be preferred.

8. Abstract Applets: A Method for Integrating Numerical Problem Solving into the Undergraduate Physics Curriculum

SciTech Connect

Peskin, Michael E

2003-02-13

In upper-division undergraduate physics courses, it is desirable to give numerical problem-solving exercises integrated naturally into weekly problem sets. I explain a method for doing this that makes use of the built-in class structure of the Java programming language. I also supply a Java class library that can assist instructors in writing programs of this type.

9. Integrating Adaptability into Special Operations Forces Intermediate Level Education

DTIC Science & Technology

2010-10-01

components of adaptability, as described in this report. In addition, we found that while some of the material covered by the ILE curriculum relates...

10. Numerical solution of optimal control problems using multiple-interval integral Gegenbauer pseudospectral methods

Tang, Xiaojun

2016-04-01

The main purpose of this work is to provide multiple-interval integral Gegenbauer pseudospectral methods for solving optimal control problems. The latest developed single-interval integral Gauss/(flipped Radau) pseudospectral methods can be viewed as special cases of the proposed methods. We present an exact and efficient approach to compute the mesh pseudospectral integration matrices for the Gegenbauer-Gauss and flipped Gegenbauer-Gauss-Radau points. Numerical results on benchmark optimal control problems confirm the ability of the proposed methods to obtain highly accurate solutions.

11. Adaptability and the integration of computer-based information processing into the dynamics of organizations.

PubMed

Kampfner, Roberto R

2006-07-01

The structure of a system influences its adaptability. An important result of adaptability theory is that subsystem independence increases adaptability [Conrad, M., 1983. Adaptability. Plenum Press, New York]. Adaptability is essential in systems that face an uncertain environment such as biological systems and organizations. Modern organizations are the product of human design. And so it is their structure and the effect that it has on their adaptability. In this paper we explore the potential effects of computer-based information processing on the adaptability of organizations. The integration of computer-based processes into the dynamics of the functions they support and the effect it has on subsystem independence are especially relevant to our analysis.

12. Numerical Modelling of Volcanic Ash Settling in Water Using Adaptive Unstructured Meshes

Jacobs, C. T.; Collins, G. S.; Piggott, M. D.; Kramer, S. C.; Wilson, C. R.

2011-12-01

particles. The numerically predicted settling velocities for both individual particles and plumes, as well as the instability behaviour, agree well with experimental observations. Building on this successful validation, we use results from a suite of simulations spanning a variety of characteristic particle sizes and inflow flux rates to test theoretical criteria for determining whether particles settle individually or collectively. This suggests that the relevant criterion for predicting the onset of plume formation must take into account the turbulent (rather than laminar) nature of plume settling. An important benefit of our unstructured adaptive mesh model over multi-phase models that use regular structured grids of uniform resolution is that it is able to focus numerical resolution in areas important to the dynamics while decreasing resolution where it is not needed. We show that this gives the same solution accuracy for reduced computational cost compared with uniform resolution. Moreover, the multi-scale capabilities of our model allows us to consider small-scale plume evolution in domains many times larger than is achievable in the laboratory.

13. Numerical Simulations of Optical Turbulence Using an Advanced Atmospheric Prediction Model: Implications for Adaptive Optics Design

Alliss, R.

2014-09-01

Optical turbulence (OT) acts to distort light in the atmosphere, degrading imagery from astronomical telescopes and reducing the data quality of optical imaging and communication links. Some of the degradation due to turbulence can be corrected by adaptive optics. However, the severity of optical turbulence, and thus the amount of correction required, is largely dependent upon the turbulence at the location of interest. Therefore, it is vital to understand the climatology of optical turbulence at such locations. In many cases, it is impractical and expensive to set up instrumentation to characterize the climatology of OT, so numerical simulations become a less expensive and convenient alternative. The strength of OT is characterized by the refractive index structure function Cn², which in turn is used to calculate atmospheric seeing parameters. While attempts have been made to characterize Cn² using empirical models, Cn² can be calculated more directly from Numerical Weather Prediction (NWP) simulations using pressure, temperature, thermal stability, vertical wind shear, turbulent Prandtl number, and turbulence kinetic energy (TKE). In this work we use the Weather Research and Forecast (WRF) NWP model to generate Cn² climatologies in the planetary boundary layer and free atmosphere, allowing for both point-to-point and ground-to-space seeing estimates of the Fried coherence length (r0) and other seeing parameters. Simulations are performed using a multi-node Linux cluster with the Intel chip architecture. The WRF model is configured to run at 1 km horizontal resolution and centered on the Mauna Loa Observatory (MLO) of the Big Island. The vertical resolution varies from 25 meters in the boundary layer to 500 meters in the stratosphere. The model top is 20 km. The Mellor-Yamada-Janjic (MYJ) TKE scheme has been modified to diagnose the turbulent Prandtl number as a function of the Richardson number, following observations by Kondo and others. This modification

14. Data rate management and real time operation: recursive adaptive frame integration of limited data

Rafailov, Michael K.

2006-08-01

Recursive Limited Frame Integration was proposed as a way to improve frame integration performance and mitigate issues related to the high data rate needed to support conventional frame integration. The technique uses two thresholds - one tuned for optimum probability of detection, the other to manage the required false alarm rate - and places the integration process between those thresholds. This configuration allows a non-linear integration process that, along with Signal-to-Noise Ratio (SNR) gain, provides system designers more capability where cost, weight, or power considerations limit system data rate, processing, or memory capability. However, Recursive Frame Integration Limited may have performance issues when single-frame SNR is very low. Recursive Adaptive Limited Frame Integration was proposed as a means to improve limited integration performance at very low single-frame SNR. It combines the benefits of non-linear recursive limited frame integration and adaptive thresholds with a kind of conventional frame integration. Adding a third threshold may help in managing real time operations. In this paper, Recursive Frame Integration is presented in the form of multiple parallel recursive integrations. Such an approach can help not only in data rate management but also in mitigating the low single-frame SNR issue for Recursive Integration, as well as in real time operations with frame integration.

15. Direct numerical simulations of particle-laden density currents with adaptive, discontinuous finite elements

Parkinson, S. D.; Hill, J.; Piggott, M. D.; Allison, P. A.

2014-05-01

High resolution direct numerical simulations (DNS) are an important tool for the detailed analysis of turbidity current dynamics. Models that resolve the vertical structure and turbulence of the flow are typically based upon the Navier-Stokes equations. Two-dimensional simulations are known to produce unrealistic cohesive vortices that are not representative of the real three-dimensional physics. The effect of this phenomenon is particularly apparent in the later stages of flow propagation. The ideal solution to this problem is to run the simulation in three dimensions, but this is computationally expensive. This paper presents a novel finite-element (FE) DNS turbidity current model that has been built within Fluidity, an open source, general purpose, computational fluid dynamics code. The model is validated through re-creation of a lock release density current at a Grashof number of 5 × 10^6 in two and three dimensions. Validation of the model considers the flow energy budget, sedimentation rate, head speed, wall normal velocity profiles and the final deposit. Conservation of energy in particular is found to be a good metric for measuring mesh performance in capturing the range of dynamics. FE models scale well over many thousands of processors and do not impose restrictions on domain shape, but they are computationally expensive. Use of discontinuous discretisations and adaptive unstructured meshing technologies, which reduce the required element count by approximately two orders of magnitude, results in high resolution DNS models of turbidity currents at a fraction of the cost of traditional FE models. The benefits of this technique will enable simulation of turbidity currents in complex and large domains where DNS modelling was previously unachievable.

16. Quantum simulations of nuclei and nuclear pasta with the multiresolution adaptive numerical environment for scientific simulations

Sagert, I.; Fann, G. I.; Fattoyev, F. J.; Postnikov, S.; Horowitz, C. J.

2016-05-01

Background: Neutron star and supernova matter at densities just below the nuclear matter saturation density is expected to form a lattice of exotic shapes. These so-called nuclear pasta phases are caused by Coulomb frustration. Their elastic and transport properties are believed to play an important role for thermal and magnetic field evolution, rotation, and oscillation of neutron stars. Furthermore, they can impact neutrino opacities in core-collapse supernovae. Purpose: In this work, we present proof-of-principle three-dimensional (3D) Skyrme Hartree-Fock (SHF) simulations of nuclear pasta with the Multi-resolution ADaptive Numerical Environment for Scientific Simulations (MADNESS). Methods: We perform benchmark studies of 16O, 208Pb, and 238U nuclear ground states and calculate binding energies via 3D SHF simulations. Results are compared with experimentally measured binding energies as well as with theoretically predicted values from an established SHF code. The nuclear pasta simulation is initialized in the so-called waffle geometry as obtained by the Indiana University Molecular Dynamics (IUMD) code. The size of the unit cell is 24 fm with an average density of about ρ = 0.05 fm^-3, a proton fraction of Yp = 0.3, and a temperature of T = 0 MeV. Results: Our calculations reproduce the binding energies and shapes of light and heavy nuclei with different geometries. For the pasta simulation, we find that the final geometry is very similar to the initial waffle state. We compare calculations with and without spin-orbit forces. We find that while subtle differences are present, the pasta phase remains in the waffle geometry. Conclusions: Within the MADNESS framework, we can successfully perform calculations of inhomogeneous nuclear matter. By using pasta configurations from IUMD it is possible to explore different geometries and test the impact of self-consistent calculations on the latter.

17. INCORPORATING CATASTROPHES INTO INTEGRATED ASSESSMENT: SCIENCE, IMPACTS, AND ADAPTATION

EPA Science Inventory

Incorporating potential catastrophic consequences into integrated assessment models of climate change has been a top priority of policymakers and modelers alike. We review the current state of scientific understanding regarding three frequently mentioned geophysical catastrophes,...

18. A Comparative Study of Acousto-Optic Time-Integrating Correlators for Adaptive Jamming Cancellation

DTIC Science & Technology

1997-10-01

This final report presents a comparative study of the space-integrating and time-integrating configurations of an acousto-optic correlator...systematically evaluate all existing acousto-optic correlator architectures and to determine which would be most suitable for adaptive jamming

19. Three Authentic Curriculum-Integration Approaches to Bird Adaptations That Incorporate Technology and Thinking Skills

ERIC Educational Resources Information Center

Rule, Audrey C.; Barrera, Manuel T., III

2008-01-01

Integration of subject areas with technology and thinking skills is a way to help teachers cope with today's overloaded curriculum and to help students see the connectedness of different curriculum areas. This study compares three authentic approaches to teaching a science unit on bird adaptations for habitat that integrate thinking skills and…

20. Determinants of International Students' Adaptation: Examining Effects of Integrative Motivation, Instrumental Motivation and Second Language Proficiency

ERIC Educational Resources Information Center

Yu, Baohua; Downing, Kevin

2012-01-01

This study examined the influence of integrative motivation, instrumental motivation and second language (L2) proficiency on socio-cultural/academic adaptation in a sample of two groups of international students studying Chinese in China. Results revealed that the non-Asian student group reported higher levels of integrative motivation,…

1. Examination of Numerical Integration Accuracy and Modeling for GRACE-FO and GRACE-II

2012-12-01

As technological advances throughout the field of satellite geodesy improve the accuracy of satellite measurements, numerical methods and algorithms must be able to keep pace. Currently, the Gravity Recovery and Climate Experiment's (GRACE) dual one-way microwave ranging system can determine changes in inter-satellite range to a precision of a few microns; however, with the advent of laser measurement systems, nanometer-precision ranging is a realistic possibility. With this increase in measurement accuracy, a reevaluation of the accuracy inherent in the linear multi-step numerical integration methods is necessary. Two areas of primary concern are the ability of the numerical integration methods to accurately predict the satellite's state in the presence of numerous small accelerations due to operation of the spacecraft attitude control thrusters, and due to small, point-mass anomalies on the surface of the Earth. This study attempts to quantify and minimize these numerical errors in an effort to improve the accuracy of modeling and propagation of these perturbations, helping to provide further insight into the behavior and evolution of the Earth's gravity field from the more capable gravity missions in the future.

2. Integrating climate change mitigation, adaptation, communication and education strategies in Matanzas Province, Cuba: A Citizen Science Approach

Rodriguez Bueno, R. A.; Byrne, J. M.

2015-12-01

The Environment Service Center of Matanzas (ESCM), Cuba, and the University of Lethbridge are collaborating on the development of climate mitigation and adaptation programs in Matanzas province. Tourism is the largest industry in Matanzas. Protecting that industry means protecting coastal zones and conservation areas of value to tourism. These same areas are critical to protecting the landscape from global environmental change: enhanced tropical cyclones, flooding, drought and a range of other environmental change impacts. Byrne (2014) adapted a multidisciplinary methodology for climate adaptation capacity definition for the population of Nicaragua. A wide array of adaptive capacity skills and resources were integrated with agricultural crop modeling to define regions of the country where adaptive capacity development was weakest and should be improved. In Matanzas province, we are developing a series of multidisciplinary mitigation and adaptation programs that build social science and science knowledge to expand capacity within the ESCM and the provincial population. We will be exploring increased risk due to combined watershed and tropical cyclone flooding, stresses on crops, and defining a range of possibilities in shifting from fossil fuels to renewable energy. The program will build ongoing interactions with thousands of Matanzas citizens through site visits carried out by numerous Cuban and visiting students participating in a four-month education semester with a number of Lethbridge and Matanzas faculty. These visits will also provide local citizens with better access to web-based interactions. We will evaluate mitigation and adaptive capacities in three municipalities and some rural areas across the province. Furthermore, we will explore better ways and means to communicate between the research and conservation staff and the larger population of the province.

3. Conservation properties of numerical integration methods for systems of ordinary differential equations

NASA Technical Reports Server (NTRS)

Rosenbaum, J. S.

1976-01-01

If a system of ordinary differential equations represents a property conserving system that can be expressed linearly (e.g., conservation of mass), it is then desirable that the numerical integration method used conserve the same quantity. It is shown that both linear multistep methods and Runge-Kutta methods are 'conservative' and that Newton-type methods used to solve the implicit equations preserve the inherent conservation of the numerical method. It is further shown that a method used by several authors is not conservative.
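
As an illustrative sketch (not from the report above), the conservation property is easy to demonstrate: for the kinetics A → B the total mass y1 + y2 has zero time derivative, and because every Runge-Kutta stage update is a linear combination of derivative evaluations, any RK scheme preserves this linear invariant to rounding error.

```python
import numpy as np

def rk4_step(f, y, h):
    # classical fourth-order Runge-Kutta step
    k1 = f(y)
    k2 = f(y + h / 2 * k1)
    k3 = f(y + h / 2 * k2)
    k4 = f(y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

k = 2.0
f = lambda y: np.array([-k * y[0], k * y[0]])   # A -> B: d(y1 + y2)/dt = 0
y = np.array([1.0, 0.0])                        # total mass = 1
for _ in range(1000):
    y = rk4_step(f, y, 0.01)
print(y, y.sum())   # the components evolve, but the sum stays 1
```

The same argument applies to linear multistep methods, since their updates are also linear combinations of past states and derivative values.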

4. Numerical implementation of the mixed potential integral equation for planar structures with ferrite layers arbitrarily magnetized

Mesa, F.; Medina, F.

2006-12-01

This work presents a new implementation of the mixed potential integral equation (MPIE) for planar structures that can include ferrite layers arbitrarily magnetized. The implementation of the MPIE here reported is carried out in the space domain. Thus it will combine the well-known numerical advantages of working with potentials as well as the flexibility for analyzing nonrectangular shape conductors with the additional ability of including anisotropic layers of arbitrarily magnetized ferrites. In this way, our approach widens the scope of the space domain MPIE and sets this method as a very efficient and versatile numerical tool to deal with a wide class of planar microwave circuits and antennas.

5. A comparison of the efficiency of numerical methods for integrating chemical kinetic rate equations

NASA Technical Reports Server (NTRS)

1984-01-01

A comparison of the efficiency of several algorithms recently developed for the efficient numerical integration of stiff ordinary differential equations is presented. The methods examined include two general-purpose codes EPISODE and LSODE and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature can be more efficient than evaluating the temperature by integrating its time-derivative.
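
The stiffness issue motivating these specialized codes can be sketched in a few lines (an illustration, not taken from the report): for y' = -λy with λh = 10, explicit Euler violates its stability bound and diverges, while backward (implicit) Euler, the kind of building block used in stiff solvers such as LSODE's backward differentiation formulas, remains stable at the same step size.

```python
lam, h, n = 1000.0, 0.01, 100   # lam * h = 10: far outside explicit stability
ye = yi = 1.0
for _ in range(n):
    ye = ye + h * (-lam * ye)    # explicit Euler: amplification factor -9
    yi = yi / (1.0 + lam * h)    # backward Euler: amplification factor 1/11
print(abs(ye), yi)               # explicit iterate explodes; implicit decays
```

This is why implicit codes win on chemical kinetics despite the extra cost of solving algebraic equations at each step.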

6. ADAPT: A Developmental, Asemantic, and Procedural Model for Transcoding From Verbal to Arabic Numerals

ERIC Educational Resources Information Center

Barrouillet, Pierre; Camos, Valerie; Perruchet, Pierre; Seron, Xavier

2004-01-01

This article presents a new model of transcoding numbers from verbal to Arabic form. This model, called ADAPT, is developmental, asemantic, and procedural. The authors' main proposal is that the transcoding process shifts from an algorithmic strategy to the direct retrieval from memory of digital forms. Thus, the model is evolutive, adaptive, and…

7. Comparison of symbolic and numerical integration methods for an assumed-stress hybrid shell element

NASA Technical Reports Server (NTRS)

Rengarajan, Govind; Knight, Norman F., Jr.; Aminpour, Mohammad A.

1993-01-01

Hybrid shell elements have long been regarded with reserve by commercial finite element developers despite the high degree of reliability and accuracy associated with such formulations. The fundamental reason is the inherently higher computational cost of the hybrid approach as compared to displacement-based formulations. However, a noteworthy factor in favor of hybrid elements is that numerical integration to generate element matrices can be entirely avoided by the use of symbolic integration. In this paper, the use of the symbolic computational approach is presented for an assumed-stress hybrid shell element with drilling degrees of freedom, and the significant time savings achieved are demonstrated through an example.

8. Numerical simulation and experimental research of the integrated high-power LED radiator

Xiang, J. H.; Zhang, C. L.; Gan, Z. J.; Zhou, C.; Chen, C. G.; Chen, S.

2017-01-01

Thermal management has become an urgent problem to be solved with the increasing power and integration of LED (light emitting diode) chips. In order to eliminate the contact resistance of the radiator, this paper presented an integrated high-power LED radiator based on phase-change heat transfer, which realized the seamless connection between the vapor chamber and the cooling fins. The radiator was optimized by combining numerical simulation and experimental research. The effects of the chamber diameter and the fin parameters on the heat dissipation performance were analyzed. The numerical simulation results were compared with the values measured by experiment. The results showed that, in descending order of influence, the fin thickness, the fin number, the fin height and the chamber diameter affected the performance of the radiator.

9. Numerical evaluation of the Rayleigh integral for planar radiators using the FFT

NASA Technical Reports Server (NTRS)

Williams, E. G.; Maynard, J. D.

1982-01-01

Rayleigh's integral formula is evaluated numerically for planar radiators of any shape, with any specified velocity in the source plane, using the fast Fourier transform algorithm. The major advantage of this technique is its speed of computation - over 400 times faster than a straightforward two-dimensional numerical integration. The technique is developed for computation of the radiated pressure in the nearfield of the source and can be easily extended to provide, with little computation time, the vector intensity in the nearfield. Computations with the FFT of the nearfield pressure of baffled rectangular plates with clamped and free boundaries are compared with the 'exact' solution to illuminate any errors. The bias errors, introduced by the FFT, are investigated and a technique is developed to significantly reduce them.
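
The core idea can be sketched generically (this is an illustration with a stand-in kernel, not the paper's code): a Rayleigh-type field is a 2-D discrete convolution of the source velocity with a propagation kernel, and a zero-padded FFT reproduces the direct double sum at far lower cost.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
v = rng.standard_normal((n, n))              # stand-in source velocity grid
idx = np.arange(-(n - 1), n)
X, Y = np.meshgrid(idx, idx, indexing='ij')
g = 1.0 / np.maximum(np.hypot(X, Y), 1.0)    # stand-in 1/r kernel (regularized)

# direct evaluation: O(n^4) double sum over source and field points
p_direct = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                p_direct[i, j] += v[k, l] * g[i - k + n - 1, j - l + n - 1]

# FFT evaluation: O(n^2 log n) linear convolution via zero padding
s = 3 * n - 2                                # >= n + (2n - 1) - 1: no wrap-around
full = np.fft.ifft2(np.fft.fft2(v, (s, s)) * np.fft.fft2(g, (s, s))).real
p_fft = full[n - 1:2 * n - 1, n - 1:2 * n - 1]
print(np.max(np.abs(p_fft - p_direct)))      # agreement to machine precision
```

The bias errors mentioned in the abstract arise precisely from insufficient zero padding, which turns the intended linear convolution into a circular one.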

10. Numerical evaluation of two-center integrals over Slater type orbitals

Kurt, S. A.; Yükçü, N.

2016-03-01

Slater Type Orbitals (STOs), one of the types of exponential type orbitals (ETOs), are commonly used as basis functions in multicenter molecular integrals to better understand the physical and chemical properties of matter. In this work, we develop algorithms for two-center overlap and two-center two-electron hybrid and Coulomb integrals, which are calculated with the help of a translation method for STOs and some auxiliary functions introduced by V. Magnasco's group. We use the Mathematica programming language to produce algorithms for these calculations. Numerical results for some quantum numbers are presented in tables. Finally, we compare our numerical results with known results from the literature and discuss further details of the evaluation method.

11. Numerical methods for estimating J integral in models with regular rectangular meshes

Kozłowiec, B.

2017-02-01

Cracks and delaminations are common structural degradation mechanisms that have recently been studied using numerous methods and techniques. Among them, numerical methods based on FEM analyses are in widespread commercial use. These methods have focused on the energetic approach to linear elastic fracture mechanics (LEFM), encompassing such quantities as the J-integral and the energy release rate G. This approach makes it possible to introduce damage criteria for the analyzed structures without dealing with the details of the physical singularities occurring at the crack tip. In this paper, two numerical methods based on LEFM are used to analyze both isotropic and orthotropic specimens, and the results are compared with well-known analytical solutions as well as (in some cases) VCCT results. These methods are optimized for industrial use with simple, rectangular meshes. The verification is based on two-dimensional mode partitioning.

12. Some remarks on the numerical computation of integrals on an unbounded interval

Capobianco, M.; Criscuolo, G.

2007-08-01

An account of the error and the convergence theory is given for Gauss-Laguerre and Gauss-Radau-Laguerre quadrature formulae. We also develop truncated versions of the original Gauss rules to compute integrals extended over the positive real axis. Numerical examples confirming the theoretical results are given, comparing these rules among themselves and with different quadrature formulae proposed by other authors (Evans, Int. J. Comput. Math. 82:721-730, 2005; Gautschi, BIT 31:438-446, 1991).
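
As a minimal sketch of the rule family discussed above (using NumPy's built-in nodes and weights, not the paper's truncated variants): an n-point Gauss-Laguerre rule integrates e^(-x) f(x) over [0, ∞) exactly for polynomials f of degree up to 2n - 1, and a plain integral over the positive real axis can be handled by folding the weight into the sum.

```python
import numpy as np

x, w = np.polynomial.laguerre.laggauss(20)   # 20-point Gauss-Laguerre rule

# integral of e^(-x) x^3 over [0, inf) = Gamma(4) = 6, exact for this rule
approx = np.sum(w * x**3)
print(approx)

# plain integral of g over [0, inf): sum_i w_i e^(x_i) g(x_i);
# here g(x) = e^(-2x), whose exact integral is 1/2
approx2 = np.sum(w * np.exp(x) * np.exp(-2.0 * x))
print(approx2)
```

The truncated rules studied in the paper drop the largest nodes, which matter little when the integrand decays quickly.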

13. Varying Timescales of Stimulus Integration Unite Neural Adaptation and Prototype Formation.

PubMed

Mattar, Marcelo G; Kahn, David A; Thompson-Schill, Sharon L; Aguirre, Geoffrey K

2016-07-11

Human visual perception is both stable and adaptive. Perception of complex objects, such as faces, is shaped by the long-term average of experience as well as immediate, comparative context. Measurements of brain activity have demonstrated corresponding neural mechanisms, including norm-based responses reflective of stored prototype representations, and adaptation induced by the immediately preceding stimulus. Here, we consider the possibility that these apparently separate phenomena can arise from a single mechanism of sensory integration operating over varying timescales. We used fMRI to measure neural responses from the fusiform gyrus while subjects observed a rapid stream of face stimuli. Neural activity at this cortical site was best explained by the integration of sensory experience over multiple sequential stimuli, following a decaying-exponential weighting function. Although this neural activity could be mistaken for immediate neural adaptation or long-term, norm-based responses, it in fact reflected a timescale of integration intermediate to both. We then examined the timescale of sensory integration across the cortex. We found a gradient that ranged from rapid sensory integration in early visual areas, to long-term, stable representations in higher-level, ventral-temporal cortex. These findings were replicated with a new set of face stimuli and subjects. Our results suggest that a cascade of visual areas integrate sensory experience, transforming highly adaptable responses at early stages to stable representations at higher levels.

14. Magnitude Estimation with Noisy Integrators Linked by an Adaptive Reference

PubMed

Thurley, Kay

2016-01-01

Judgments of physical stimuli show characteristic biases; relatively small stimuli are overestimated whereas relatively large stimuli are underestimated (regression effect). Such biases likely result from a strategy that seeks to minimize errors given noisy estimates about stimuli that itself are drawn from a distribution, i.e., the statistics of the environment. While being conceptually well described, it is unclear how such a strategy could be implemented neurally. The present paper aims toward answering this question. A theoretical approach is introduced that describes magnitude estimation as two successive stages of noisy (neural) integration. Both stages are linked by a reference memory that is updated with every new stimulus. The model reproduces the behavioral characteristics of magnitude estimation and makes several experimentally testable predictions. Moreover, the model identifies the regression effect as a means of minimizing estimation errors and explains how this optimality strategy depends on the subject's discrimination abilities and on the stimulus statistics. The latter influence predicts another property of magnitude estimation, the so-called range effect. Beyond being successful in describing decision-making, the present work suggests that noisy integration may also be important in processing magnitudes.

15. Adaptive Runge-Kutta integration for stiff systems: Comparing Nosé and Nosé-Hoover dynamics for the harmonic oscillator

Graham Hoover, William; Clinton Sprott, Julien; Griswold Hoover, Carol

2016-10-01

We describe the application of adaptive (variable time step) integrators to stiff differential equations encountered in many applications. Linear harmonic oscillators subject to nonlinear thermal constraints can exhibit either stiff or smooth dynamics. Two closely related examples, Nosé's dynamics and Nosé-Hoover dynamics, are both based on Hamiltonian mechanics and generate microstates consistent with Gibbs' canonical ensemble. Nosé's dynamics is stiff and can present severe numerical difficulties. Nosé-Hoover dynamics, although it follows exactly the same trajectory, is smooth and relatively trouble-free. We emphasize the power of adaptive integrators to resolve stiff problems such as the Nosé dynamics for the harmonic oscillator. The solutions also illustrate the power of computer graphics to enrich numerical solutions.
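
A minimal sketch of an adaptive (variable time step) integrator of the kind described above, applied to the Nosé-Hoover oscillator q' = p, p' = -q - ζp, ζ' = p² - T with T = 1 (this is an illustrative step-doubling controller, not the authors' code):

```python
import math

def nose_hoover(s):
    q, p, z = s
    return (p, -q - z * p, p * p - 1.0)

def rk4(f, s, h):
    # classical RK4 on a tuple-valued state
    def nudge(a, b, c):
        return tuple(x + c * y for x, y in zip(a, b))
    k1 = f(s)
    k2 = f(nudge(s, k1, h / 2))
    k3 = f(nudge(s, k2, h / 2))
    k4 = f(nudge(s, k3, h))
    return tuple(x + h / 6 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

def adaptive_step(f, s, h, tol=1e-8):
    # step doubling: compare one full step against two half steps
    big = rk4(f, s, h)
    small = rk4(f, rk4(f, s, h / 2), h / 2)
    err = max(abs(a - b) for a, b in zip(big, small)) / 15.0  # RK4 error est.
    h_next = h * min(4.0, max(0.25, 0.9 * (tol / (err + 1e-300)) ** 0.2))
    return small, h_next   # always accepts; production codes also reject steps

t, s, h, hs = 0.0, (0.0, 1.5, 0.0), 0.01, []
while t < 50.0:
    s_new, h_next = adaptive_step(nose_hoover, s, h)
    t += h
    hs.append(h)
    s, h = s_new, h_next
print(min(hs), max(hs))   # the controller varies the step with local stiffness
```

For the much stiffer Nosé form of the dynamics, the same controller responds by shrinking the step drastically near the troublesome regions of the trajectory.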

16. Numerous strategies but limited implementation guidance in US local adaptation plans

Woodruff, Sierra C.; Stults, Missy

2016-08-01

Adaptation planning offers a promising approach for identifying and devising solutions to address local climate change impacts. Yet there is little empirical understanding of the content and quality of these plans. We use content analysis to evaluate 44 local adaptation plans in the United States and multivariate regression to examine how plan quality varies across communities. We find that plans draw on multiple data sources to analyse future climate impacts and include a breadth of strategies. Most plans, however, fail to prioritize impacts and strategies or provide detailed implementation processes, raising concerns about whether adaptation plans will translate into on-the-ground reductions in vulnerability. Our analysis also finds that plans authored by the planning department and those that engaged elected officials in the planning process were of higher quality. The results provide important insights for practitioners, policymakers and scientists wanting to improve local climate adaptation planning and action.

17. The numerical simulation tool for the MAORY multiconjugate adaptive optics system

Arcidiacono, C.; Schreiber, L.; Bregoli, G.; Diolaiti, E.; Foppiani, I.; Agapito, G.; Puglisi, A.; Xompero, M.; Oberti, S.; Cosentino, G.; Lombini, M.; Butler, R. C.; Ciliegi, P.; Cortecchia, F.; Patti, M.; Esposito, S.; Feautrier, P.

2016-07-01

The Multiconjugate Adaptive Optics RelaY (MAORY) is an adaptive optics module to be mounted on the ESO European Extremely Large Telescope (E-ELT). It is a hybrid Natural and Laser Guide Star system that will correct for the atmospheric turbulence volume above the telescope, feeding the Multi-AO Imaging Camera for Deep Observations (MICADO) near-infrared spectro-imager. We developed an end-to-end Monte Carlo adaptive optics simulation tool to investigate the performance of MAORY and its calibration, acquisition and operation strategies. MAORY will implement multiconjugate adaptive optics combining Laser Guide Star (LGS) and Natural Guide Star (NGS) measurements. The simulation tool models the various aspects of MAORY in an end-to-end fashion. The code has been developed in IDL and uses libraries in C++ and CUDA for efficiency. Here we recall the code architecture, describe the modeled instrument components and present the control strategies implemented in the code.

18. DE 102 - A numerically integrated ephemeris of the moon and planets spanning forty-four centuries

Newhall, X. X.; Standish, E. M.; Williams, J. G.

1983-08-01

It is pointed out that the 1960's were the turning point for the generation of lunar and planetary ephemerides. All previous measurements of the positions of solar system bodies were optical angular measurements. New technological improvements leading to immense changes in observational accuracy are related to developments concerning radar, Viking landers on Mars, and laser ranges to lunar corner cube retroreflectors. Suitable numerical integration techniques and more comprehensive physical models were developed to match the accuracy of the modern data types. The present investigation is concerned with the first integrated ephemeris, DE 102, which covers the entire span of the historical astronomical observations of usable accuracy which are known. The fit is made to modern data. The integration spans the time period from 1411 BC to 3002 AD.

19. An analysis of the impact of auditory-nerve adaptation on behavioral measures of temporal integration in cochlear implant recipients

Hay-McCutcheon, Marcia J.; Brown, Carolyn J.; Abbas, Paul J.

2005-10-01

The objective of this study was to determine the impact that auditory-nerve adaptation has on behavioral measures of temporal integration in Nucleus 24 cochlear implant recipients. It was expected that, because the auditory nerve serves as the input to central temporal integrator, a large degree of auditory-nerve adaptation would reduce the amount of temporal integration. Neural adaptation was measured by tracking amplitude changes of the electrically evoked compound action potential (ECAP) in response to 1000-pps biphasic pulse trains of varying durations. Temporal integration was measured at both suprathreshold and threshold levels by an adaptive procedure. Although varying degrees of neural adaptation and temporal integration were observed across individuals, results of this investigation revealed no correlation between the degree of neural adaptation and psychophysical measures of temporal integration.

20. An analysis of the impact of auditory-nerve adaptation on behavioral measures of temporal integration in cochlear implant recipients.

PubMed

Hay-McCutcheon, Marcia J; Brown, Carolyn J; Abbas, Paul J

2005-10-01

The objective of this study was to determine the impact that auditory-nerve adaptation has on behavioral measures of temporal integration in Nucleus 24 cochlear implant recipients. It was expected that, because the auditory nerve serves as the input to central temporal integrator, a large degree of auditory-nerve adaptation would reduce the amount of temporal integration. Neural adaptation was measured by tracking amplitude changes of the electrically evoked compound action potential (ECAP) in response to 1000-pps biphasic pulse trains of varying durations. Temporal integration was measured at both suprathreshold and threshold levels by an adaptive procedure. Although varying degrees of neural adaptation and temporal integration were observed across individuals, results of this investigation revealed no correlation between the degree of neural adaptation and psychophysical measures of temporal integration.

1. INTEGRATING EVOLUTIONARY AND FUNCTIONAL APPROACHES TO INFER ADAPTATION AT SPECIFIC LOCI

PubMed Central

Storz, Jay F.; Wheat, Christopher W.

2010-01-01

Inferences about adaptation at specific loci are often exclusively based on the static analysis of DNA sequence variation. Ideally, population-genetic evidence for positive selection serves as a stepping-off point for experimental studies to elucidate the functional significance of the putatively adaptive variation. We argue that inferences about adaptation at specific loci are best achieved by integrating the indirect, retrospective insights provided by population-genetic analyses with the more direct, mechanistic insights provided by functional experiments. Integrative studies of adaptive genetic variation may sometimes be motivated by experimental insights into molecular function, which then provide the impetus to perform population genetic tests to evaluate whether the functional variation is of adaptive significance. In other cases, studies may be initiated by genome scans of DNA variation to identify candidate loci for recent adaptation. Results of such analyses can then motivate experimental efforts to test whether the identified candidate loci do in fact contribute to functional variation in some fitness-related phenotype. Functional studies can provide corroborative evidence for positive selection at particular loci, and can potentially reveal specific molecular mechanisms of adaptation. PMID:20500215

2. Sea Extremes: Integrated impact assessment in coastal climate adaptation

Sorensen, Carlo; Knudsen, Per; Broge, Niels; Molgaard, Mads; Andersen, Ole

2016-04-01

We investigate effects of sea level rise and a change in precipitation pattern on coastal flooding hazards. Historic and present in situ and satellite data of water and groundwater levels, precipitation, vertical ground motion, geology, and geotechnical soil properties are combined with flood protection measures, topography, and infrastructure to provide a more complete picture of the water-related impact from climate change at an exposed coastal location. Results show that future sea extremes evaluated from extreme value statistics may, indeed, have a large impact. The integrated effects from future storm surges and other geo- and hydro-parameters need to be considered in order to provide for the best protection and mitigation efforts, however. Based on the results we present and discuss a simple conceptual model setup that can e.g. be used for 'translation' of regional sea level rise evidence and projections to concrete impact measures. This may be used by potentially affected stakeholders -often working in different sectors and across levels of governance, in a common appraisal of the challenges faced ahead. The model may also enter dynamic tools to evaluate local impact as sea level research advances and projections for the future are updated.

3. Numerical solution of random singular integral equation appearing in crack problems

NASA Technical Reports Server (NTRS)

Sambandham, M.; Srivatsan, T. S.; Bharucha-Reid, A. T.

1986-01-01

The solution of several elasticity problems, and particularly crack problems, can be reduced to the solution of one-dimensional singular integral equations with a Cauchy-type kernel or to a system of uncoupled singular integral equations. Here a method for the numerical solution of random singular integral equations of Cauchy type is presented. The solution technique involves a Chebyshev series approximation, the coefficients of which are the solutions of a system of random linear equations. This method is applied to the problem of periodic array of straight cracks inside an infinite isotropic elastic medium and subjected to a nonuniform pressure distribution along the crack edges. The statistical properties of the random solution are evaluated numerically, and the random solution is used to determine the values of the stress-intensity factors at the crack tips. The error, expressed as the difference between the mean of the random solution and the deterministic solution, is established. Values of stress-intensity factors at the crack tip for different random input functions are presented.

4. Comparing numerical integration schemes for time-continuous car-following models

Treiber, Martin; Kanagaraj, Venkatesan

2015-02-01

When simulating trajectories by integrating time-continuous car-following models, standard integration schemes such as the fourth-order Runge-Kutta method (RK4) are rarely used while the simple Euler method is popular among researchers. We compare four explicit methods both analytically and numerically: Euler's method, ballistic update, Heun's method (trapezoidal rule), and the standard RK4. As performance metrics, we plot the global discretization error as a function of the numerical complexity. We tested the methods on several time-continuous car-following models in several multi-vehicle simulation scenarios with and without discontinuities such as stops or a discontinuous behavior of an external leader. We find that the theoretical advantage of RK4 (consistency order 4) only plays a role if both the acceleration function of the model and the trajectory of the leader are sufficiently often differentiable. Otherwise, we obtain lower (and often fractional) consistency orders. Although, to our knowledge, Heun's method has never been used for integrating car-following models, it turns out to be the best scheme for many practical situations. The ballistic update always prevails over Euler's method although both are of first order.
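
The comparison above can be sketched on a toy relaxation model (an illustration, not the paper's car-following models): a follower's speed relaxing toward a leader speed V with time constant TAU has a known exact solution, so each scheme's global error, and its decay as the step is halved, can be measured directly.

```python
import math

TAU, V, V0, T = 2.0, 30.0, 10.0, 10.0        # hypothetical model parameters
f = lambda v: (V - v) / TAU                  # toy acceleration function
exact = V + (V0 - V) * math.exp(-T / TAU)    # closed-form solution at time T

euler = lambda v, h: v + h * f(v)
heun = lambda v, h: v + h / 2 * (f(v) + f(v + h * f(v)))  # trapezoidal rule
def rk4(v, h):
    k1 = f(v); k2 = f(v + h / 2 * k1); k3 = f(v + h / 2 * k2); k4 = f(v + h * k3)
    return v + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def global_error(step, n):
    v, h = V0, T / n
    for _ in range(n):
        v = step(v, h)
    return abs(v - exact)

errs = {n: tuple(global_error(s, n) for s in (euler, heun, rk4))
        for n in (100, 200)}
print(errs)  # halving h cuts the error ~2x (Euler), ~4x (Heun), ~16x (RK4)
```

On smooth problems like this one the consistency orders 1, 2 and 4 show up cleanly; the paper's point is that discontinuities in the leader's behavior destroy the higher orders, which is why Heun's method often wins in practice.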

5. A fast numerical solution of scattering by a cylinder: Spectral method for the boundary integral equations

NASA Technical Reports Server (NTRS)

Hu, Fang Q.

1994-01-01

It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in closed form but in infinite series which converge slowly for high frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using FFT are discussed. Moreover, the boundary integral equations of combined single- and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for the Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.

6. Numerical integration of the restricted three-body problem with Lie series

Abouelmagd, Elbaz I.; Guirao, Juan L. G.; Mostafa, A.

2014-12-01

The aim of this work is to present recurrence formulas for the equations of motion of an infinitesimal body in the planar restricted three-body problem which allow us to integrate this problem numerically via a Lie series approach. To do this, the equations of motion are transformed to an origin at one of the libration points, and the Lie operator and the recurrence formulas for the terms of the Lie series are constructed. In addition, we provide an algorithm that allows us to find any number of Lie series terms and which gives successful calculations for the orbit of the infinitesimal body around one of the libration points. Furthermore, all our mathematical relations are derived under the effect of the zonal harmonic parameters of the bigger primary up to J4. Finally, a numerical application of these results is given for the case of the Earth-Moon system.
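The Lie-series idea, advancing the solution by summing terms t^n/n! D^n y generated by a recurrence, can be illustrated on a toy problem. The sketch below uses the harmonic oscillator x'' = -x as a stand-in (my assumption; the three-body recurrences are far longer), where applying the Lie operator D = v d/dx - x d/dv once more reduces to the swap (Dx, Dv) -> (Dv, -Dx).

```python
import math

# Lie-series step for x'' = -x: accumulate y(t) = sum_n t^n/n! * D^n y,
# with the recurrence D^(n+1)x = D(D^n x) realized as (dx, dv) -> (dv, -dx).
def lie_step(x, v, dt, order=12):
    dx, dv = x, v            # D^0 x and D^0 v at the current point
    xs, vs = x, v            # partial sums of the Lie series
    fact = 1.0
    for n in range(1, order + 1):
        dx, dv = dv, -dx     # recurrence: one more application of D
        fact *= n
        xs += dt ** n / fact * dx
        vs += dt ** n / fact * dv
    return xs, vs

x, v, dt = 1.0, 0.0, 0.1
for _ in range(100):         # integrate to t = 10
    x, v = lie_step(x, v, dt)
print(abs(x - math.cos(10.0)), abs(v + math.sin(10.0)))  # errors vs exact
```

Truncating at a chosen order turns the series into a one-step integrator whose accuracy is controlled by the number of recurrence terms, which is the role the algorithm in the abstract plays for the libration-point orbits.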

7. Adaptation and Integration of Permanent Immigrants Seminar (4th, Geneva, Switzerland, May 8-11, 1979).

ERIC Educational Resources Information Center

International Migration, 1979

1979-01-01

This document contains working papers prepared for a seminar on Adaptation and Integration of Permanent Immigrants, along with general and specific recommendations formulated by seminar participants. Conclusions and recommendations from each paper are presented in English, French, and Spanish; the conference papers themselves are presented only in…

8. Career Adaptability: An Integrative Construct for Life-Span, Life-Space Theory.

ERIC Educational Resources Information Center

Savickas, Mark L.

1997-01-01

Examines the origin and current status of lifespan, life-space theory and proposes one way in which to integrate its three segments. Discusses a functionalist strategy for theory construction and the outcomes and consequences of this strategy. Discusses future directions for theory development, such as career adaptability and planful attitudes.…

9. Simulation Based Evaluation of Integrated Adaptive Control and Flight Planning Technologies

NASA Technical Reports Server (NTRS)

Campbell, Stefan Forrest; Kaneshige, John T.

2008-01-01

The objective of this work is to leverage NASA resources to enable effective evaluation of resilient aircraft technologies through simulation. This includes examining strengths and weaknesses of adaptive controllers, emergency flight planning algorithms, and flight envelope determination algorithms both individually and as an integrated package.

10. Integrated and adaptive management of water resources: Tensions, legacies, and the next best thing

SciTech Connect

Engle, Nathan L.; Johns, Owen R.; Lemos, Maria Carmen; Nelson, Donald

2011-02-01

11. Adaptive angular-velocity Vold-Kalman filter order tracking - Theoretical basis, numerical implementation and parameter investigation

Pan, M.-Ch.; Chu, W.-Ch.; Le, Duc-Do

2016-12-01

The paper presents an alternative Vold-Kalman filter order tracking (VKF_OT) method, i.e. the adaptive angular-velocity VKF_OT technique, to extract and characterize order components in an adaptive manner for the condition monitoring and fault diagnosis of rotary machinery. The order/spectral waveforms to be tracked are recursively solved using a Kalman filter based on one-step state prediction. The paper comprises the theoretical derivation of the computation scheme, its numerical implementation, and a parameter investigation. Comparisons of the adaptive VKF_OT scheme with two other schemes are performed by processing synthetic signals of designated order components. Processing parameters such as the weighting factor and the correlation matrix of the process noise, and data conditions such as the sampling frequency, which influence tracking behavior, are explored. The merits of the proposed scheme, such as its adaptive processing nature and computational efficiency, are addressed, although the computation was performed off-line. The proposed scheme can simultaneously extract multiple spectral components, and effectively decouples close and crossing orders associated with multi-axial reference rotating speeds.

12. An Adaptive Numeric Predictor-corrector Guidance Algorithm for Atmospheric Entry Vehicles. M.S. Thesis - MIT, Cambridge

NASA Technical Reports Server (NTRS)

Spratlin, Kenneth Milton

1987-01-01

An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles which utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The motivation for and design of the guidance algorithm are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented, along with the achievable operational footprint for expected worst-case dispersions. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.
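The predictor-corrector principle behind such guidance can be sketched in one dimension (the dynamics and numbers below are illustrative inventions, not the thesis algorithm): predict downrange by integrating a simple model forward, then correct the control parameter with a secant iteration until the predicted range matches the target.

```python
# Toy numeric predictor-corrector guidance sketch: "predict" = forward
# integration of a hypothetical drag-like model dv/dt = -u*v, "correct" =
# secant iteration on the control parameter u to null the range error.
def predict_range(u, v0=100.0, dt=0.01):
    v, x = v0, 0.0
    while v > 1.0:           # integrate until the vehicle has slowed down
        x += v * dt
        v -= u * v * dt
    return x

def correct(target, u0=0.05, u1=0.10, tol=0.1, max_iter=50):
    f0, f1 = predict_range(u0) - target, predict_range(u1) - target
    for _ in range(max_iter):
        if abs(f1) < tol:
            break
        # secant update, clamped so the control stays physical (positive)
        u0, u1 = u1, max(1e-3, u1 - f1 * (u1 - u0) / (f1 - f0))
        f0, f1 = f1, predict_range(u1) - target
    return u1

u = correct(target=1500.0)
print(predict_range(u))      # close to the 1500.0 target
```

A real entry-guidance loop repeats this predict-correct cycle at every guidance update, with a full trajectory model in place of the toy dynamics.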

13. Direct numerical solution of the transonic perturbation integral equation for lifting and nonlifting airfoils

NASA Technical Reports Server (NTRS)

Nixon, D.

1978-01-01

The linear transonic perturbation integral equation previously derived for nonlifting airfoils is formulated for lifting cases. In order to treat shock wave motions, a strained coordinate system is used in which the shock location is invariant. The tangency boundary conditions are either formulated using the thin airfoil approximation or by using the analytic continuation concept. A direct numerical solution to this equation is derived in contrast to the iterative scheme initially used, and results of both lifting and nonlifting examples indicate that the method is satisfactory.

14. Extremely Fast Numerical Integration of Ocean Surface Wave Dynamics: Building Blocks for a Higher Order Method

DTIC Science & Technology

2006-09-30

…numerical integration of the partial differential equations of surface water waves is the long-term goal of this work. The approach is a… applications of the method. APPROACH: We first consider the shallow water equation known as the Korteweg-deVries (KdV) equation: ηt + coηx + αηηx + βηxxx = 0 (1), where co = √(gh), α = 3co/2h and … . The KdV equation has the generalized Fourier solution (for periodic and/or quasi…

15. A finite element-based constrained mixture implementation for arterial growth, remodeling, and adaptation: theory and numerical verification.

PubMed

Valentín, A; Humphrey, J D; Holzapfel, G A

2013-08-01

We implemented a constrained mixture model of arterial growth and remodeling in a nonlinear finite element framework to facilitate numerical analyses of diverse cases of arterial adaptation and maladaptation, including disease progression, resulting in complex evolving geometries and compositions. This model enables hypothesis testing by predicting consequences of postulated characteristics of cell and matrix turnover, including evolving quantities and orientations of fibrillar constituents and nonhomogenous degradation of elastin or loss of smooth muscle function. The nonlinear finite element formulation is general within the context of arterial mechanics, but we restricted our present numerical verification to cylindrical geometries to allow comparisons with prior results for two special cases: uniform transmural changes in mass and differential growth and remodeling within a two-layered cylindrical model of the human aorta. The present finite element model recovers the results of these simplified semi-inverse analyses with good agreement.

16. A Finite Element Based Constrained Mixture Implementation for Arterial Growth, Remodeling, and Adaptation: Theory and Numerical Verification

PubMed Central

Valentín, A.; Humphrey, J. D.; Holzapfel, G. A.

2013-01-01

We implemented a constrained mixture model of arterial growth and remodeling (G&R) in a nonlinear finite element framework to facilitate numerical analyses of diverse cases of arterial adaptation and maladaptation, including disease progression, resulting in complex evolving geometries and compositions. This model enables hypothesis testing by predicting consequences of postulated characteristics of cell and matrix turnover, including evolving quantities and orientations of fibrillar constituents and non-homogenous degradation of elastin or loss of smooth muscle function. The non-linear finite element formulation is general within the context of arterial mechanics, but we restricted our present numerical verification to cylindrical geometries to allow comparisons to prior results for two special cases: uniform transmural changes in mass and differential G&R within a two-layered cylindrical model of the human aorta. The present finite element model recovers the results of these simplified semi-inverse analyses with good agreement. PMID:23713058

17. Numerical Modeling of Pressurization of Cryogenic Propellant Tank for Integrated Vehicle Fluid System

NASA Technical Reports Server (NTRS)

Majumdar, Alok K.; LeClair, Andre C.; Hedayat, Ali

2016-01-01

This paper presents a numerical model of pressurization of a cryogenic propellant tank for the Integrated Vehicle Fluid (IVF) system using the Generalized Fluid System Simulation Program (GFSSP). The IVF propulsion system, being developed by United Launch Alliance, uses boiloff propellants to drive thrusters for the reaction control system as well as to run internal combustion engines to develop power and drive compressors to pressurize propellant tanks. NASA Marshall Space Flight Center (MSFC) has been running tests to verify the functioning of the IVF system using a flight tank. GFSSP, a finite volume based flow network analysis software developed at MSFC, has been used to develop an integrated model of the tank and the pressurization system. This paper presents an iterative algorithm for converging the interface boundary conditions between different component models of a large system model. The model results have been compared with test data.

18. Climate change adaptation and Integrated Water Resource Management in the water sector

Ludwig, Fulco; van Slobbe, Erik; Cofino, Wim

2014-10-01

Integrated Water Resources Management (IWRM) was introduced in the 1980s to better optimise water uses between different water-demanding sectors. However, since its introduction water systems have become more complicated due to changes in the global water cycle as a result of climate change. The realization that climate change will have a significant impact on water availability and flood risks has driven research and policy making on adaptation. This paper discusses the main similarities and differences between climate change adaptation and IWRM. The main difference between the two is the focus of IWRM on current and historic issues, compared to the (long-term) future focus of adaptation. One of the main problems in implementing climate change adaptation is the large uncertainty in future projections. Two completely different approaches to adaptation have been developed in response to these large uncertainties. A top-down approach based on large-scale biophysical impact analyses focusses on quantifying and minimizing uncertainty by using a large range of scenarios and different climate and impact models. The main problem with this approach is the propagation of uncertainties within the modelling chain. The opposite is the bottom-up approach, which basically ignores uncertainty. It focusses on reducing vulnerabilities, often at local scale, by developing resilient water systems. Both approaches, however, are difficult to integrate into water management: the bottom-up approach focuses too much on socio-economic vulnerability and too little on developing (technical) solutions, while the top-down approach often results in an “explosion” of uncertainty and therefore complicates decision making. A more promising direction for adaptation would be a risk-based approach. Future research should further develop and test an approach which starts with developing adaptation strategies based on current and future risks. These strategies should then be evaluated using a range

19. The strategy for numerical solving of PIES without explicit calculation of singular integrals in 2D potential problems

Szerszeń, Krzysztof; Zieniuk, Eugeniusz

2016-06-01

The paper presents a strategy for the numerical solving of a parametric integral equation system (PIES) for 2D potential problems without explicit calculation of singular integrals. The values of these integrals are expressed indirectly in terms of easy-to-compute non-singular integrals. The effectiveness of the proposed strategy is investigated with the example of a potential problem modeled by the Laplace equation. The strategy simplifies the structure of the program while maintaining good accuracy in the obtained solutions.

20. Comparison of numerical techniques for integration of stiff ordinary differential equations arising in combustion chemistry

NASA Technical Reports Server (NTRS)

1984-01-01

The efficiency and accuracy of several algorithms recently developed for the efficient numerical integration of stiff ordinary differential equations are compared. The methods examined include two general-purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature does not result in significant errors. In addition, this method is more efficient than evaluating the temperature by integrating its time derivative. Significant reductions in computational work are realized by updating the rate constants (k = AT^N exp(-E/RT)) only when the temperature change exceeds an amount delta T that is problem dependent. An approximate expression for the automatic evaluation of delta T is derived and is shown to result in increased efficiency.
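The rate-constant updating strategy described above can be sketched as follows (the class name, parameter values, and the 5 K drift threshold are illustrative, not from the report): Arrhenius-type constants k = A T^N exp(-E/RT) are re-evaluated only when the temperature has drifted more than delta T since the last evaluation, saving exp/pow calls inside the stiff integrator.

```python
import math

R = 8.314  # J/(mol K)

class RateTable:
    """Cache Arrhenius rate constants, recomputing only on large T drift."""
    def __init__(self, params, delta_t=5.0):
        self.params = params          # list of (A, N, E) per reaction
        self.delta_t = delta_t        # allowed temperature drift [K]
        self.t_last = None
        self.k = None

    def rates(self, temp):
        if self.t_last is None or abs(temp - self.t_last) > self.delta_t:
            self.k = [a * temp ** n * math.exp(-e / (R * temp))
                      for a, n, e in self.params]
            self.t_last = temp
        return self.k

table = RateTable([(1.0e10, 0.0, 1.2e5), (3.0e7, 1.5, 8.0e4)])
k1 = table.rates(1500.0)
k2 = table.rates(1502.0)   # within delta_T: cached list reused
k3 = table.rates(1510.0)   # drifted too far: recomputed
print(k1 is k2, k2 is k3)  # → True False
```

In a production kinetics code the threshold itself would be set adaptively, which is what the approximate expression for delta T mentioned in the abstract provides.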

1. Numerical simulation of a lattice polymer model at its integrable point

Bedini, A.; Owczarek, A. L.; Prellberg, T.

2013-07-01

We revisit an integrable lattice model of polymer collapse using numerical simulations. This model was first studied by Blöte and Nienhuis (1989 J. Phys. A: Math. Gen. 22 1415) and it describes polymers with some attraction, providing thus a model for the polymer collapse transition. At a particular set of Boltzmann weights the model is integrable and the exponents ν = 12/23 ≈ 0.522 and γ = 53/46 ≈ 1.152 have been computed via identification of the scaling dimensions xt = 1/12 and xh = -5/48. We directly investigate the polymer scaling exponents via Monte Carlo simulations using the pruned-enriched Rosenbluth method algorithm. By simulating this polymer model for walks up to length 4096 we find ν = 0.576(6) and γ = 1.045(5), which are clearly different from the predicted values. Our estimate for the exponent ν is compatible with the known θ-point value of 4/7 and in agreement with very recent numerical evaluation by Foster and Pinettes (2012 J. Phys. A: Math. Theor. 45 505003).

2. Adaptive integral feedback controller for pitch and yaw channels of an AUV with actuator saturations.

PubMed

Sarhadi, Pouria; Noei, Abolfazl Ranjbar; Khosravi, Alireza

2016-11-01

Input saturations and uncertain dynamics are among the practical challenges in the control of autonomous vehicles. Adaptive control is known as a proper method to deal with the uncertain dynamics of these systems; therefore, incorporating the ability to confront input saturation into adaptive controllers is valuable. In this paper, an adaptive autopilot is presented for the pitch and yaw channels of an autonomous underwater vehicle (AUV) in the presence of input saturations. This is achieved by combining model reference adaptive control (MRAC) with integral state feedback and a modern anti-windup (AW) compensator. MRAC with integral state feedback is commonly used in autonomous vehicles, but some modifications need to be made in order to cope with the saturation problem. To this end, a Riccati-based AW compensator is employed. The presented technique is applied to a non-linear six-degrees-of-freedom (DOF) model of an AUV and the obtained results are compared with those of a baseline method. Several simulation scenarios are executed in the pitch and yaw channels to evaluate the controller's performance. Moreover, the effectiveness of the proposed adaptive controller is comprehensively investigated using Monte Carlo simulations. The obtained results verify the performance of the proposed method.

3. Four-stage computational technology with adaptive numerical methods for computational aerodynamics

Shaydurov, V.; Liu, T.; Zheng, Z.

2012-10-01

Computational aerodynamics is a key technology in aircraft design which runs ahead of physical experiment and complements it. All three components of computational modeling are actively developed: mathematical models of real aerodynamic processes, numerical algorithms, and high-performance computing. The most impressive progress has been made in the field of computing, though with a considerable complication of computer architecture. Numerical algorithms develop more conservatively; more precisely, they are proposed and theoretically justified for simpler mathematical problems. Nevertheless, computational mathematics has now amassed a whole palette of numerical algorithms that can provide acceptable accuracy and an interface between modern mathematical models in aerodynamics and high-performance computers. A significant step in this direction was the European project ADIGMA, whose positive experience will be used in the international project TRISTAM for further progress in the field of computational technologies for aerodynamics. This paper gives a general overview of the objectives and approaches intended to be used, and a description of the recommended four-stage computer technology.

4. Integrating Climate Change Adaptation into Disaster Risk Reduction in Urban Contexts: Perceptions and Practice

PubMed Central

Rivera, Claudia

2014-01-01

This paper analyses the perceptions of disaster risk reduction (DRR) practitioners concerning the on-going integration of climate change adaptation (CCA) into their practices in urban contexts in Nicaragua. Understanding their perceptions is important as this will provide information on how this integration can be improved. Exploring the perceptions of practitioners in Nicaragua is important as the country has a long history of disasters, and practitioners have been developing the current DRR planning framework for more than a decade. The analysis is based on semi-structured interviews designed to collect information about practitioners’ understanding of: (a) CCA, (b) the current level of integration of CCA into DRR and urban planning, (c) the opportunities and constraints of this integration, and (d) the potential to adapt cities to climate change. The results revealed that practitioners’ perception is that the integration of CCA into their practice is at an early stage, and that they need to improve their understanding of CCA in terms of a development issue. Three main constraints on improved integration were identified: (a) a recognized lack of understanding of CCA, (b) insufficient guidance on how to integrate it, and (c) the limited opportunities to integrate it into urban planning due to a lack of instruments and capacity in this field. Three opportunities were also identified: (a) practitioners’ awareness of the need to integrate CCA into their practices, (b) the robust structure of the DRR planning framework in the country, which provides a suitable channel for facilitating integration, and (c) the fact that CCA is receiving more attention and financial and technical support from the international community. PMID:24475365

5. Three-Dimensional Integration of Graphene via Swelling, Shrinking, and Adaptation.

PubMed

Choi, Jonghyun; Kim, Hoe Joon; Wang, Michael Cai; Leem, Juyoung; King, William P; Nam, SungWoo

2015-07-08

The transfer of graphene from its growth substrate to a target substrate has been widely investigated for its decisive role in subsequent device integration and performance. Thus far, various reported methods of graphene transfer have been mostly limited to planar or curvilinear surfaces due to the challenges associated with fractures from local stress during transfer onto three-dimensional (3D) microstructured surfaces. Here, we report a robust approach to integrate graphene onto 3D microstructured surfaces while maintaining the structural integrity of graphene, where the out-of-plane dimensions of the 3D features vary from 3.5 to 50 μm. We utilized three sequential steps: (1) substrate swelling, (2) shrinking, and (3) adaptation, in order to achieve damage-free, large area integration of graphene on 3D microstructures. Detailed scanning electron microscopy, atomic force microscopy, Raman spectroscopy, and electrical resistance measurement studies show that the amount of substrate swelling as well as the flexural rigidities of the transfer film affect the integration yield and quality of the integrated graphene. We also demonstrate the versatility of our approach by extension to a variety of 3D microstructured geometries. Lastly, we show the integration of hybrid structures of graphene decorated with gold nanoparticles onto 3D microstructure substrates, demonstrating the compatibility of our integration method with other hybrid nanomaterials. We believe that the versatile, damage-free integration method based on swelling, shrinking, and adaptation will pave the way for 3D integration of two-dimensional (2D) materials and expand potential applications of graphene and 2D materials in the future.

6. Adaptively-refined overlapping grids for the numerical solution of systems of hyperbolic conservation laws

NASA Technical Reports Server (NTRS)

Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.

1995-01-01

Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods have been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.

7. Artisticc: An Art and Science Integration Project to Enquire into Community Level Adaptation to Climate Change

Vanderlinden, J. P.; Baztan, J.

2014-12-01

The purpose of this paper is to present the "Adaptation Research, a Transdisciplinary community and policy centered approach" (ARTisticc) project. ARTisticc's goal is to apply innovative, standardized, transdisciplinary art-and-science integrative approaches to foster socially, culturally and scientifically robust, community-centred adaptation to climate change. The approach used in the project is based on the strong understanding that adaptation is: (a) still "a concept of uncertain form"; (b) a concept dealing with uncertainty; (c) a concept that calls for an analysis that goes beyond the traditional disciplinary organization of science, and; (d) an unconventional process in the realm of science and policy integration. The project is centered on case studies in France, Greenland, Russia, India, Canada, Alaska, and Senegal. In every site we jointly develop artwork while analyzing how natural science, essentially the geosciences, can be used in order to better adapt in the future, how societies adapt to current changes, and how memories of past adaptations frame current and future processes. Artforms are mobilized in order to share scientific results with local communities and policy makers, in a way that respects cultural specificities while empowering stakeholders; ARTISTICC translates these "real life experiments" into stories and artwork that are meaningful to those affected by climate change. The scientific results and the culturally mediated productions will thereafter be used in order to co-construct, with NGOs and policy makers, policy briefs, i.e. robust and scientifically legitimate policy recommendations regarding coastal adaptation. This co-construction process will itself be analysed with the goal of increasing art and science's performative functions in the universe of evidence-based policy making. The project involves scientists from the natural sciences, the social sciences and the humanities, as well as artists from the performing arts (playwrights

8. Numerical simulation studies for the first-light adaptive optics system of the Large Binocular Telescope

Carbillet, Marcel; Riccardi, Armando; Esposito, Simone

2004-10-01

We present our latest results concerning the simulation studies performed for the first-light adaptive optics (AO) system of the Large Binocular Telescope (LBT), namely WLBT. After a brief description of the "raw" performance evaluation results, in terms of Strehl ratios attained in the various considered bands (from V to K), we focus on the "scientific" performance that will be obtained with the subsequent instrumentation benefiting from the correction given by the AO system WLBT and the adaptive secondary mirrors LBT 672. In particular, we discuss the performance of the coupling with the instrument LUCIFER, working in the near-infrared bands, in terms of signal-to-noise values and limiting magnitudes, for both spectroscopy and photometric detection. We also give the encircled energies expected in the visible bands, a result relevant on the one hand for the instrument PEPSI, and on the other for the "technical viewer" that will be on board the WLBT system itself.

9. Does Integration Help Adapt to Climate Change? Case of Increased US Corn Yield Volatility

Verma, M.; Diffenbaugh, N. S.; Hertel, T. W.

2012-12-01

10. Effect of spatial configuration of an extended nonlinear Kierstead-Slobodkin reaction-transport model with adaptive numerical scheme.

PubMed

Owolabi, Kolade M; Patidar, Kailash C

2016-01-01

In this paper, we consider numerical simulations of an extended nonlinear form of the Kierstead-Slobodkin reaction-transport system in one and two dimensions. We employ the popular fourth-order exponential time differencing Runge-Kutta (ETDRK4) scheme proposed by Cox and Matthews (J Comput Phys 176:430-455, 2002) and modified by Kassam and Trefethen (SIAM J Sci Comput 26:1214-1233, 2005) for the time integration of spatially discretized partial differential equations. We demonstrate the advantage of ETDRK4 over existing standard exponential time differencing integrators and provide timings and error comparisons. The numerical results obtained in this paper give further insight into the question 'What is the minimal size of the spatial domain so that the population persists?' posed by Kierstead and Slobodkin (J Mar Res 12:141-147, 1953), with the conclusion that the population size increases with the size of the domain. To examine the biological wave phenomena of the solutions, we present numerical results in both one- and two-dimensional space, which have interesting ecological implications. Initial data and parameter values were chosen to mimic some existing patterns.
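The exponential time differencing idea underlying ETDRK4 can be shown with its first-order version on a scalar stiff test equation of my choosing: for u' = Lu + N(u), the linear part is integrated exactly and only the nonlinearity is approximated, giving u_{n+1} = e^{Lh} u_n + (e^{Lh} - 1)/L · N(u_n).

```python
import math

# First-order exponential time differencing (ETD1), the building block
# that ETDRK4 refines with Runge-Kutta stages for the nonlinear term.
def etd1(u, h, L, N):
    eLh = math.exp(L * h)
    return eLh * u + (eLh - 1.0) / L * N(u)

# Stiff test problem u' = -50u + 1: because N is constant here, ETD1 is
# exact, even at step sizes where explicit Euler (stable only for
# h < 2/50) would blow up.
L, h, u = -50.0, 0.1, 2.0
for _ in range(20):                      # integrate to t = 2
    u = etd1(u, h, L, lambda v: 1.0)
exact = 1.0 / 50.0 + (2.0 - 1.0 / 50.0) * math.exp(L * 2.0)
print(abs(u - exact))                    # error near machine precision
```

For a discretized PDE, L becomes the (diagonalizable) spatial operator, and the scalar exponential is replaced by exponentials of its eigenvalues, e.g. in Fourier space.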

11. Algebraic Stabilization of Explicit Numerical Integration for Extremely Stiff Reaction Networks

SciTech Connect

Guidry, Mike W

2012-01-01

In contrast to the prevailing view in the literature, it is shown that even extremely stiff sets of ordinary differential equations may be solved efficiently by explicit methods if limiting algebraic solutions are used to stabilize the numerical integration. The stabilizing algebra differs essentially for systems well removed from equilibrium and those near equilibrium. Explicit asymptotic and quasi-steady-state methods that are appropriate when the system is only weakly equilibrated are examined first. These methods are then extended to the case of close approach to equilibrium through a new implementation of partial equilibrium approximations. Using stringent tests with astrophysical thermonuclear networks, evidence is provided that these methods can deal with the stiffest networks, even in the approach to equilibrium, with accuracy and integration timestepping comparable to that of implicit methods. Because explicit methods can execute a timestep faster and scale more favorably with network size than implicit algorithms, our results suggest that algebraically stabilized explicit methods might enable integration of larger reaction networks coupled to fluid dynamics than has been feasible previously for a variety of disciplines.
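The core algebraic stabilization can be sketched for a single stiff rate equation (the rate values below are illustrative): writing dy/dt = F⁺ − ky as production minus destruction and evaluating the destruction term at the new time gives an explicit update that remains stable far beyond the explicit-Euler limit.

```python
# Asymptotic (algebraically stabilized explicit) update for dy/dt = F⁺ - k*y:
# y_new = (y + dt*F⁺) / (1 + dt*k), stable at any dt, versus explicit Euler,
# which requires dt < 2/k.
def asymptotic_step(y, dt, f_plus, k):
    return (y + dt * f_plus) / (1.0 + dt * k)

def euler_step(y, dt, f_plus, k):
    return y + dt * (f_plus - k * y)

k, f_plus, dt = 1.0e6, 2.0e6, 1.0e-3     # dt is 500x the Euler stability limit
ya = ye = 0.0
for _ in range(50):
    ya = asymptotic_step(ya, dt, f_plus, k)
    ye = euler_step(ye, dt, f_plus, k)
print(ya, abs(ye))   # ya approaches the equilibrium F⁺/k = 2.0; Euler diverges
```

The paper's methods extend this idea, switching between asymptotic, quasi-steady-state, and partial-equilibrium algebra depending on how close each part of the network is to equilibrium.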

12. Numerical analysis of composite STEEL-CONCRETE SECTIONS using integral equation of Volterra

Partov, Doncho; Kantchev, Vesselin

2011-09-01

The paper presents an analysis of the stress and deflection changes due to creep in a statically determinate composite steel-concrete beam. The mathematical model involves the equations of equilibrium, compatibility and constitutive relationships, i.e. an elastic law for the steel part and an integral-type creep law of Boltzmann — Volterra for the concrete part. On the basis of the theory of the viscoelastic body of Arutyunian-Trost-Bažant, two independent Volterra integral equations of the second kind have been derived for determining the redistribution of stresses in the beam section between the concrete plate and the steel beam with respect to time "t". A numerical method based on linear approximation of the singular kernel function in the integral equation is presented. An example using the proposed model is investigated. The creep functions are those suggested by the CEB MC90-99 and ACI 209R-92 models. The elastic modulus of concrete Ec(t) is assumed to be constant in time "t". The results obtained from both models are compared.

14. Numerical simulation on the adaptation of forms in trabecular bone to mechanical disuse and basic multi-cellular unit activation threshold at menopause

Gong, He; Fan, Yubo; Zhang, Ming

2008-04-01

The objective of this paper is to identify the effects of mechanical disuse and the basic multi-cellular unit (BMU) activation threshold on the form of trabecular bone during menopause. A bone adaptation model with mechanical-biological factors at the BMU level was integrated with finite element analysis to simulate the changes of trabecular bone structure during menopause. Mechanical disuse and changes in the BMU activation threshold were applied to the model for the period from 4 years before to 4 years after menopause. The changes in bone volume fraction, trabecular thickness and fractal dimension of the trabecular structures were used to quantify the changes of trabecular bone in three different cases associated with mechanical disuse and the BMU activation threshold. It was found that the changes in the simulated bone volume fraction were highly correlated and consistent with clinical data, that the trabecular thickness reduced significantly during menopause and was highly linearly correlated with the bone volume fraction, and that the trend in the fractal dimension of the simulated trabecular structure corresponded with clinical observations. The numerical simulation in this paper may help to better understand the relationship between bone morphology and the mechanical and biological environment, and can provide a quantitative computational model and methodology for the numerical simulation of bone structural morphological changes caused by the mechanical and/or biological environment.

15. On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

PubMed

Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique

2011-05-01

In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
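
As a loose illustration of the time-stepping baseline such event-driven methods are compared against, a plain Euler simulation of a quadratic integrate-and-fire neuron with an adaptation variable can be sketched as follows; all parameter values are illustrative assumptions, not those of the paper:

```python
def simulate_qif_adapt(I=1.0, a=0.02, b=0.2, v_reset=-1.0, v_peak=5.0,
                       dt=1e-3, t_end=50.0):
    """Euler time stepping of a quadratic integrate-and-fire neuron
    with an adaptation current w (illustrative parameters)."""
    v, w = v_reset, 0.0
    spikes = []
    t = 0.0
    while t < t_end:
        dv = v * v + I - w              # quadratic membrane dynamics
        dw = a * (b * v - w)            # slow adaptation variable
        v += dt * dv
        w += dt * dw
        if v >= v_peak:                 # threshold crossing = spike event
            spikes.append(t)
            v = v_reset
            w += 0.1                    # spike-triggered adaptation jump
        t += dt
    return spikes

spikes = simulate_qif_adapt()
print(len(spikes))
```

The adaptation current progressively lengthens the inter-spike intervals, which is the behaviour that makes fixed-step schemes a useful accuracy reference for voltage stepping.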

16. Assessment of Disaster Risk Reduction and Climate Change Adaptation policy integration in Zambia

Pilli-Sihvola, K.; Väätäinen-Chimpuku, S.

2015-12-01

Integration of Disaster Risk Management (DRM) and Climate Change Adaptation (CCA) policies, their implementation measures and their contribution to development has been gaining attention recently. Due to the shared objectives of CCA and particularly Disaster Risk Reduction (DRR), a component of DRM, their integration provides many benefits. At the implementation level, DRR and CCA are usually integrated. Policy integration, however, is often lacking. This study presents a novel analysis of the policy integration of DRR and CCA by 1) suggesting a definition for their integration at a general level and further at horizontal and vertical levels, 2) using an analysis framework for the policy integration cycle, which separates the policy formulation and implementation processes, and 3) applying these to a case study in Zambia. Moreover, the study identifies the key gaps in the integration process, obtains an understanding of the key factors identified for creating an enabling environment for the integration, and provides recommendations for further progress. The study is based on a document analysis of the relevant DRM, climate change (CC), agriculture, forestry, water management and meteorology policy documents and Acts, and 21 semi-structured interviews with key stakeholders. Horizontal integration has occurred both ways, as the revised DRM policy draft has incorporated CCA, and the new CC policy draft has incorporated DRR. This is not necessarily an optimal strategy and, unless carefully implemented, it may create pressure on institutional structures and duplication of efforts in the implementation. Much less vertical integration takes place, and where it does, there is no guidance on how potential goal conflicts with sectoral and development objectives ought to be handled. The objectives of the instruments show convergence. At the programme stage, the measures are fully integrated as they can be classified as robust CCA measures, providing benefits in the current and future

17. Highly integrated digital electronic control: Digital flight control, aircraft model identification, and adaptive engine control

NASA Technical Reports Server (NTRS)

Baer-Riedhart, Jennifer L.; Landy, Robert J.

1987-01-01

The highly integrated digital electronic control (HIDEC) program at NASA Ames Research Center, Dryden Flight Research Facility is a multiphase flight research program to quantify the benefits of promising integrated control systems. McDonnell Aircraft Company is the prime contractor, with United Technologies Pratt and Whitney Aircraft, and Lear Siegler Incorporated as major subcontractors. The NASA F-15A testbed aircraft was modified by the HIDEC program by installing a digital electronic flight control system (DEFCS) and replacing the standard F100 (Arab 3) engines with F100 engine model derivative (EMD) engines equipped with digital electronic engine controls (DEEC), and integrating the DEEC's and DEFCS. The modified aircraft provides the capability for testing many integrated control modes involving the flight controls, engine controls, and inlet controls. This paper focuses on the first two phases of the HIDEC program, which are the digital flight control system/aircraft model identification (DEFCS/AMI) phase and the adaptive engine control system (ADECS) phase.

18. Assessment method of numerical integration used in measuring profile based on ultra-precise thin light beam scanning

Lang, Zhi-Guo; Tan, Jiu-Bin

2009-11-01

In order to improve the precision of profile measurement based on ultra-precise thin light beam scanning, an assessment method that compares different numerical integration algorithms in the frequency domain is put forward. The compared numerical integration methods are regarded as recursive digital filters. By comparing their frequency-response functions, the way noise at different frequencies is transmitted during the integration of measured slope data can be analyzed directly and clearly. The analysis shows that the cubic-spline method is better than the trapezoidal, Simpson and 3/8 Simpson rules.
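
Treating a quadrature rule as a recursive digital filter, as the abstract describes, means comparing its discrete frequency response with that of the ideal integrator 1/(jω). A minimal sketch for the trapezoidal rule (the step size and test frequencies are arbitrary choices):

```python
import cmath

def trap_response(w, h):
    """Frequency response of the trapezoidal-rule integrator
    y[n] = y[n-1] + (h/2)(x[n] + x[n-1]) at angular frequency w."""
    z = cmath.exp(1j * w * h)
    return (h / 2) * (1 + 1 / z) / (1 - 1 / z)

h = 0.01
errs = []
for w in (1.0, 10.0, 100.0):
    ideal = 1 / (1j * w)                # ideal continuous-time integrator
    errs.append(abs(trap_response(w, h) - ideal) / abs(ideal))
print(errs)  # relative error grows with frequency
```

The same comparison applied to Simpson-type or spline-based rules exposes how differently each rule passes measurement noise at high frequencies, which is the basis of the assessment method.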

19. Mie Light-Scattering Granulometer with an Adaptive Numerical Filtering Method. II. Experiments.

PubMed

Hespel, L; Delfour, A; Guillame, B

2001-02-20

A nephelometer is presented that theoretically requires no absolute calibration. This instrument is used for determining the particle-size distribution of various scattering media (aerosols, fogs, rocket exhausts, engine plumes, and the like) from angular static light-scattering measurements. An inverse procedure is used, which consists of a least-squares method and a regularization scheme based on numerical filtering. To retrieve the distribution function one matches the experimental data with theoretical patterns derived from Mie theory. The main principles of the inverse method are briefly presented, and the nephelometer is then described with the associated partial calibration procedure. Finally, the whole granulometer system (inverse method and nephelometer) is validated by comparison of measurements of scattering media with calibrated monodisperse or known size distribution functions.
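
The inversion step described here, a least-squares fit stabilized by regularization, can be illustrated in miniature by Tikhonov-regularized (ridge) least squares on a toy two-parameter problem; the data and parameters below are invented for illustration and are unrelated to the actual Mie kernels:

```python
def ridge_2param(A, b, lam):
    """Tikhonov-regularised normal equations for a 2-parameter model:
    minimise ||A x - b||^2 + lam ||x||^2 (closed-form 2x2 solve)."""
    # normal matrix M = A^T A + lam*I and right-hand side r = A^T b
    m00 = sum(a[0] * a[0] for a in A) + lam
    m11 = sum(a[1] * a[1] for a in A) + lam
    m01 = sum(a[0] * a[1] for a in A)
    r0 = sum(a[0] * y for a, y in zip(A, b))
    r1 = sum(a[1] * y for a, y in zip(A, b))
    det = m00 * m11 - m01 * m01
    return ((m11 * r0 - m01 * r1) / det, (m00 * r1 - m01 * r0) / det)

# synthetic, noise-free data generated from known parameters (2, -1)
A = [(1.0, t) for t in [0.0, 0.5, 1.0, 1.5, 2.0]]
b = [2.0 - 1.0 * t for _, t in A]
x = ridge_2param(A, b, 1e-6)
print(x)  # close to (2, -1)
```

In a real size-distribution retrieval the regularization parameter damps the noise-amplifying small singular values of the kernel, which is the "numerical filtering" role mentioned in the abstract.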

20. An integrated data-directed numerical method for estimating the undiscovered mineral endowment in a region

USGS Publications Warehouse

McCammon, R.B.; Finch, W.I.; Kork, J.O.; Bridges, N.J.

1994-01-01

An integrated data-directed numerical method has been developed to estimate the undiscovered mineral endowment within a given area. The method has been used to estimate the undiscovered uranium endowment in the San Juan Basin, New Mexico, U.S.A. The favorability of uranium concentration was evaluated in each of 2,068 cells defined within the Basin. Favorability was based on the correlated similarity of the geologic characteristics of each cell to the geologic characteristics of five area-related deposit models. Estimates of the undiscovered endowment for each cell were categorized according to deposit type, depth, and cutoff grade. The method can be applied to any mineral or energy commodity provided that the data collected reflect discovered endowment. © 1994 Oxford University Press.

1. A prefiltering version of the Kalman filter with new numerical integration formulas for Riccati equations

NASA Technical Reports Server (NTRS)

Womble, M. E.; Potter, J. E.

1975-01-01

A prefiltering version of the Kalman filter is derived for both discrete and continuous measurements. The derivation consists of determining a single discrete measurement that is equivalent to either a time segment of continuous measurements or a set of discrete measurements. This prefiltering version of the Kalman filter easily handles numerical problems associated with rapid transients and ill-conditioned Riccati matrices. Therefore, the derived technique for extrapolating the Riccati matrix from one time to the next constitutes a new set of integration formulas which alleviate ill-conditioning problems associated with continuous Riccati equations. Furthermore, since a time segment of continuous measurements is converted into a single discrete measurement, Potter's square root formulas can be used to update the state estimate and its error covariance matrix. Therefore, if having the state estimate and its error covariance matrix at discrete times is acceptable, the prefilter extends square root filtering with all its advantages, to continuous measurement problems.
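
The Riccati propagation the abstract refers to can be illustrated, in the simplest scalar case, by the standard discrete covariance recursion of the Kalman filter (a textbook sketch, not the prefiltering formulas derived in the paper):

```python
def riccati_step(P, a, q, c, r):
    """One covariance cycle of a scalar discrete Kalman filter:
    predict P_pred = a^2 P + q, then measurement update with H = c."""
    P_pred = a * a * P + q
    K = P_pred * c / (c * c * P_pred + r)   # Kalman gain
    return (1 - K * c) * P_pred

P = 10.0
for _ in range(200):
    P = riccati_step(P, a=0.95, q=0.1, c=1.0, r=0.5)
print(P)  # steady-state error variance
```

The recursion contracts toward a positive steady state; the ill-conditioning problems the paper addresses arise when the matrix analogue of this recursion is driven by rapid transients, which the prefiltering formulation avoids.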

PubMed

Broom, Donald M

2006-01-01

3. Regional analysis techniques for integrating experimental and numerical measurements of transport properties of reservoir rocks

Alizadeh, S. M.; Latham, S.; Middleton, J.; Limaye, A.; Senden, T. J.; Arns, C. H.

2017-02-01

Assessing the mechanisms of micro-structural change and their effect on transport properties using digital core analysis requires balancing field of view and resolution. This typically leads to the compromise of working with relatively small samples, where boundary effects can be substantial. A direct comparison with experiment, desirable e.g. to eliminate unknown parameters and to integrate numerical and physical experiments, needs to consider these boundary effects. Here we develop a workflow to define measuring windows within a sample where these boundary effects are minimised, allowing the integration of physical and numerical experiments. We consider in particular sleeve leakage and use a radial partitioning of the solutions to various transport equations to derive relevant regional measures, which may be used for the development of cross-correlations between physical properties. Samples of Bentheimer and Castlegate sandstone as well as Mt. Gambier limestone and a sucrosic dolomite are considered. The sample plugs are encased in rubber sleeves and micro-CT images are acquired at ambient conditions. Using these high-resolution images we calculate transport properties, namely permeability and electrical conductivity, and analyse the resulting field solutions with regard to flux across different regions of interest. The latter are selected on the basis of distance to the inner surface of the sample sleeve. Clear bypassing at the sleeve-sample interface in terms of elevated fluxes is observed for all samples, although to different extents. We consider different sleeve boundary conditions to define a measuring window minimising these effects, use the procedure to compare flux averages defined over these measuring windows with conventional choices of simulation domains, and compare the resulting physical cross-correlations.

4. Sliding mode disturbance observer-based adaptive integral backstepping control of a piezoelectric nano-manipulator

Zhang, Yangming; Yan, Peng

2016-12-01

This paper investigates a systematic modeling and control methodology for a multi-axis PZT (piezoelectric transducer) actuated servo stage supporting nano-manipulations. A sliding mode disturbance observer-based adaptive integral backstepping control method with an estimated inverse model compensation scheme is proposed to achieve ultra high precision tracking in the presence of the hysteresis nonlinearities, model uncertainties, and external disturbances. By introducing a time rate of the input signal, an enhanced rate-dependent Prandtl-Ishlinskii model is developed to describe the hysteresis behaviors, and its inverse is also constructed to mitigate their adverse effects. In particular, the corresponding inverse compensation error is analyzed and its boundedness is proven. Subsequently, the sliding mode disturbance observer-based adaptive integral backstepping controller is designed to guarantee the convergence of the tracking error, where the sliding mode disturbance observer can track the total disturbances in a finite time, while the integral action is incorporated into the adaptive backstepping design to improve the steady-state control accuracy. Finally, real time implementations of the proposed algorithm are applied on the PZT actuated servo system, where excellent tracking performance with tracking precision error around 6‰ for circular contour tracking is achieved in the experimental results.
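
The classical (rate-independent) Prandtl-Ishlinskii construction underlying the paper's enhanced rate-dependent model is a weighted superposition of play operators; a minimal sketch with invented weights and thresholds:

```python
def pi_hysteresis(inputs, radii, weights):
    """Classical Prandtl-Ishlinskii model: a weighted sum of play
    (backlash) operators with thresholds `radii`. This is the
    rate-independent textbook form; the paper's model adds a
    dependence on the input rate."""
    states = [0.0] * len(radii)
    out = []
    for x in inputs:
        for i, r in enumerate(radii):
            # play operator update: clamp previous state to [x-r, x+r]
            states[i] = min(max(states[i], x - r), x + r)
        out.append(sum(w * s for w, s in zip(weights, states)))
    return out

radii = [0.0, 0.1, 0.2, 0.3]
weights = [0.4, 0.3, 0.2, 0.1]
# triangular input: load 0 -> 1, then unload 1 -> 0
up = [i / 100 for i in range(101)]
down = [1 - i / 100 for i in range(101)]
y = pi_hysteresis(up + down, radii, weights)
# at the same input x = 0.5, loading and unloading outputs differ
print(y[50], y[151])
```

The gap between the two printed values is the hysteresis loop width at that input level; inverting such a model is what allows feedforward compensation of the actuator nonlinearity.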

5. Transforming the sensing and numerical prediction of high-impact local weather through dynamic adaptation.

PubMed

Droegemeier, Kelvin K

2009-03-13

Mesoscale weather, such as convective systems, intense local rainfall resulting in flash floods, and lake-effect snows, is frequently characterized by unpredictable rapid onset and evolution, heterogeneity and spatial and temporal intermittency. Ironically, most of the technologies used to observe the atmosphere, predict its evolution and compute, transmit or store information about it, operate in a static pre-scheduled framework that is fundamentally inconsistent with, and does not accommodate, the dynamic behaviour of mesoscale weather. As a result, today's weather technology is highly constrained and far from optimal when applied to any particular situation. This paper describes a new cyberinfrastructure framework, in which remote and in situ atmospheric sensors, data acquisition and storage systems, assimilation and prediction codes, data mining and visualization engines, and the information technology frameworks within which they operate, can change configuration automatically, in response to evolving weather. Such dynamic adaptation is designed to allow system components to achieve greater overall effectiveness, relative to their static counterparts, for any given situation. The associated service-oriented architecture, known as Linked Environments for Atmospheric Discovery (LEAD), makes advanced meteorological and cyber tools as easy to use as ordering a book on the web. LEAD has been applied in a variety of settings, including experimental forecasting by the US National Weather Service, and allows users to focus much more attention on the problem at hand and less on the nuances of data formats, communication protocols and job execution environments.

6. Experimental analysis and numerical modeling of mollusk shells as a three dimensional integrated volume.

PubMed

Faghih Shojaei, M; Mohammadi, V; Rajabi, H; Darvizeh, A

2012-12-01

In this paper, a new numerical technique is presented to accurately model the geometrical and mechanical features of mollusk shells as a three dimensional (3D) integrated volume. For this purpose, the Newton method is used to solve the nonlinear equations of the shell surfaces. The points of intersection on the shell surface are identified and the extra interior parts are removed. The meshing process is accomplished with respect to the coordinates of each point of intersection. The final 3D generated mesh models perfectly describe the spatial configuration of the mollusk shells. Moreover, the computational model matches the actual interior geometry of the shells as well as their exterior architecture. The direct generation technique is employed to generate a 3D finite element (FE) model in ANSYS 11. X-ray images are taken to show the close similarity between the interior geometry of the models and the actual samples. A scanning electron microscope (SEM) is used to provide information on the microstructure of the shells. In addition, a set of compression tests was performed on gastropod shell specimens to obtain their ultimate compressive strength. A close agreement between the experimental data and the relevant numerical results is demonstrated.

7. A Numerical Method for Integrating the Kinetic Equation of Coalescence and Breakup of Cloud Droplets.

Enukashvily, Isaac M.

1980-11-01

An extension of Bleck's method and of the method of moments is developed for the numerical integration of the kinetic equation of coalescence and breakup of cloud droplets. The number density function nk(x,t) in each separate cloud droplet packet between droplet mass grid points (xk, xk+1) is represented by an expansion in orthogonal polynomials with a given weighting function wk(x,k). The expansion coefficients describe the deviations of nk(x,t) from wk(x,k). In this way droplet number concentrations, liquid water contents and other moments in each droplet packet are conserved, and the problem of solving the kinetic equation is replaced by one of solving a set of coupled differential equations for the moments of the number density function nk(x,t). Equations for these moments in each droplet packet are derived. The method is tested against existing solutions of the coalescence equation. Numerical results are obtained when Bleck's uniform distribution hypothesis for nk(x,t) and Golovin's asymptotic solution of the coalescence equation are chosen for the weighting function wk(x,k). A comparison between numerical results computed by Bleck's method and by the method of this study is made. It is shown that for the correct computation of the coalescence and breakup interactions between cloud droplet packets it is very important that the approximation of nk(x,t) between grid points (xk, xk+1) satisfies the conservation conditions for the number concentration, liquid water content and other moments of the cloud droplet packets. If these conservation conditions are satisfied, even the quasi-linear approximation of nk(x,t), compared with Berry's six-point interpolation, gives reasonable results that are very close to the existing analytic solutions.

8. A multiple hypotheses uncertainty analysis in hydrological modelling: about model structure, landscape parameterization, and numerical integration

Pilz, Tobias; Francke, Till; Bronstert, Axel

2016-04-01

Until today, a large number of competing computer models has been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received little attention in previous investigations: first, estimating the impact of model structural uncertainty by employing several alternative representations for each simulated process; second, exploring the influence of landscape discretization and parameterization from multiple datasets and user decisions; and third, employing several numerical solvers for the integration of the governing ordinary differential equations to study their effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
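
The explicit-versus-implicit trade-off described above can be illustrated with the textbook stiff test problem y' = -λy: outside its stability region an explicit method diverges, while the implicit counterpart remains stable at the same step size (a generic sketch, unrelated to the study's catchment model):

```python
import math

def euler_explicit(lam, y0, h, n):
    y = y0
    for _ in range(n):
        y = y + h * (-lam * y)       # y_{k+1} = (1 - h*lam) y_k
    return y

def euler_implicit(lam, y0, h, n):
    y = y0
    for _ in range(n):
        y = y / (1 + h * lam)        # solve y_{k+1} = y_k - h*lam*y_{k+1}
    return y

lam, y0, h, n = 50.0, 1.0, 0.05, 40  # h*lam = 2.5: outside explicit stability
exact = y0 * math.exp(-lam * h * n)
print(euler_explicit(lam, y0, h, n))  # oscillates and blows up
print(euler_implicit(lam, y0, h, n))  # decays like the true solution
```

For mildly stiff process equations, however, a small-step explicit method can still be cheaper overall than an implicit solve per step, which matches the study's observation about simple explicit schemes.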

9. 3-D Numerical Simulation of Hydrostatic Tests of Porous Rocks Using Adapted Constitutive Model

Chemenda, A. I.; Daniel, M.

2014-12-01

The high complexity and poor knowledge of the constitutive properties of porous rocks are principal obstacles to modeling their deformation. Normally, the constitutive laws are derived from experimental data (nominal strains and stresses). These are known, however, to be sensitive to mechanical instabilities within the rock specimen and to the boundary (notably friction) conditions at its ends. To elucidate the impact of these conditions on the measured mechanical response we use 3-D finite-difference simulations of experimental tests. Modeling of hydrostatic tests was chosen because they do not typically involve deformation instabilities. The ends of the cylindrical 'rock sample' are in contact with the 'steel' elastic platens through frictional interfaces. The whole system is subjected to a normal stress Pc applied to the external model surface. A new constitutive model of porous rocks with a cap-type yield function is used. This function is quadratic in the mean stress σm and depends on the inelastic strain γp in a way that generates strain softening at small σm and strain hardening at high σm. The corresponding material parameters are defined from the experimental data and have a clear interpretation in terms of the geometry of the yield surface. The constitutive model with this yield function and the Drucker-Prager plastic potential has been implemented in the 3-D dynamic explicit code Flac3D. The results of an extensive set of numerical simulations at different model parameters will be presented. They show, in particular, that the shape of the 'numerical' hydrostats is very similar to that obtained from the experimental tests and that it is practically insensitive to the interface friction. On the other hand, the stress and strain fields within the specimen depend dramatically on this parameter. The inelastic deformation at the specimen's ends starts well before the grain crushing pressure P* is reached and evolves heterogeneously with Pc

10. Integrating bioassessment and ecological risk assessment: an approach to developing numerical water-quality criteria.

PubMed

King, Ryan S; Richardson, Curtis J

2003-06-01

Bioassessment is used worldwide to monitor aquatic health but is infrequently used with risk-assessment objectives, such as supporting the development of defensible, numerical water-quality criteria. To this end, we present a generalized approach for detecting potential ecological thresholds using assemblage-level attributes and a multimetric index (Index of Biological Integrity, IBI) as endpoints in response to numerical changes in water quality. To illustrate the approach, we used existing macroinvertebrate and surface-water total phosphorus (TP) datasets from an observed P gradient and a P-dosing experiment in wetlands of the south Florida coastal plain nutrient ecoregion. Ten assemblage attributes were identified as potential metrics using the observational data, and five were validated in the experiment. These five core metrics were subjected individually and as an aggregated Nutrient-IBI to nonparametric changepoint analysis (nCPA) to estimate cumulative probabilities of a threshold response to TP. Threshold responses were evident for all metrics and the IBI, and were repeatable through time. Results from the observed gradient indicated that a threshold was ≥50% probable between 12.6 and 19.4 µg/L TP for individual metrics and 14.8 µg/L TP for the IBI. Results from the P-dosing experiment revealed ≥50% probability of a response between 11.2 and 13.0 µg/L TP for the metrics and 12.3 µg/L TP for the IBI. Uncertainty analysis indicated a low (typically ≤5%) probability that an IBI threshold occurred at ≤10 µg/L TP, while there was ≥95% certainty that the threshold was ≤17 µg/L TP. The weight of evidence produced from these analyses implies that a TP concentration >12-15 µg/L is likely to cause degradation of macroinvertebrate assemblage structure and function, a reflection of biological integrity, in the study area. This finding may assist in the development of a numerical water-quality criterion for
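
The deterministic core of nonparametric changepoint analysis (nCPA) is a scan over every candidate threshold in the predictor for the split that most reduces the summed squared deviations of the response; in the study this scan is embedded in a bootstrap to yield cumulative threshold probabilities. A sketch on invented step-change data:

```python
def changepoint_scan(x, y):
    """Deviance-reduction scan: test every split of the predictor x
    and return the threshold maximising the reduction in summed
    squared deviations of the response y."""
    pairs = sorted(zip(x, y))
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]

    def sse(v):
        if not v:
            return 0.0
        m = sum(v) / len(v)
        return sum((u - m) ** 2 for u in v)

    total = sse(ys)
    best_gain, best_x = -1.0, None
    for i in range(1, len(ys)):
        gain = total - sse(ys[:i]) - sse(ys[i:])
        if gain > best_gain:
            best_gain, best_x = gain, xs[i]
    return best_x

# synthetic biological response with a step change at x = 15
x = list(range(1, 31))
y = [10.0 if v < 15 else 3.0 for v in x]
print(changepoint_scan(x, y))  # -> 15
```

Resampling (x, y) pairs and repeating the scan gives the distribution of estimated changepoints from which the cumulative threshold probabilities quoted in the abstract are read off.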

11. Adaptive macro finite elements for the numerical solution of monodomain equations in cardiac electrophysiology.

PubMed

Heidenreich, Elvio A; Ferrero, José M; Doblaré, Manuel; Rodríguez, José F

2010-07-01

Many problems in biology and engineering are governed by anisotropic reaction-diffusion equations with a very rapidly varying reaction term. This usually implies the use of very fine meshes and small time steps in order to accurately capture the propagating wave while avoiding the appearance of spurious oscillations in the wave front. This work develops a family of macro finite elements amenable for solving anisotropic reaction-diffusion equations with stiff reactive terms. The developed elements are incorporated on a semi-implicit algorithm based on operator splitting that includes adaptive time stepping for handling the stiff reactive term. A linear system is solved on each time step to update the transmembrane potential, whereas the remaining ordinary differential equations are solved uncoupled. The method allows solving the linear system on a coarser mesh thanks to the static condensation of the internal degrees of freedom (DOF) of the macroelements while maintaining the accuracy of the finer mesh. The method and algorithm have been implemented in parallel. The accuracy of the method has been tested on two- and three-dimensional examples demonstrating excellent behavior when compared to standard linear elements. The better performance and scalability of different macro finite elements against standard finite elements have been demonstrated in the simulation of a human heart and a heterogeneous two-dimensional problem with reentrant activity. Results have shown a reduction of up to four times in computational cost for the macro finite elements with respect to equivalent (same number of DOF) standard linear finite elements as well as good scalability properties.
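
Operator splitting of the kind described, alternating a diffusion update with a pointwise reaction update, can be sketched in one dimension with a generic bistable reaction term (illustrative parameters; the paper's monodomain model and macro elements are far richer):

```python
def split_step_rd(u, D, dt, dx, a=0.3):
    """One operator-splitting step for u_t = D u_xx + u(1-u)(u-a):
    explicit finite-difference diffusion, then a pointwise (uncoupled)
    Euler step of the local reaction kinetics."""
    n = len(u)
    # diffusion sub-step with zero-flux (reflecting) boundaries
    v = u[:]
    for i in range(n):
        left = u[i - 1] if i > 0 else u[1]
        right = u[i + 1] if i < n - 1 else u[n - 2]
        v[i] = u[i] + D * dt / dx**2 * (left - 2 * u[i] + right)
    # reaction sub-step: each node updated independently
    return [w + dt * w * (1 - w) * (w - a) for w in v]

u = [1.0 if i < 10 else 0.0 for i in range(100)]   # excited region at left
for _ in range(500):
    u = split_step_rd(u, D=1.0, dt=0.2, dx=1.0)
print(u[20], u[90])  # the front has passed x=20 but not x=90
```

The split structure is what the semi-implicit algorithm exploits: the diffusion part yields one linear solve per step, while the reactive ordinary differential equations are integrated node by node, uncoupled.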

12. Essays on agricultural adaptation to climate change and ethanol market integration in the U.S

13. New Techniques for Simulation of Ion Implantation by Numerical Integration of Boltzmann Transport Equation

Wang, Shyh-Wei; Guo, Shuang-Fa

1998-01-01

New techniques for more accurate and efficient simulation of ion implantation by a stepwise numerical integration of the Boltzmann transport equation (BTE) have been developed in this work. Instead of using a uniform energy grid, a non-uniform grid is employed to construct the momentum distribution matrix. A more accurate simulation result is obtained for heavy ions implanted into silicon. At the same time, rather than utilizing the conventional Lindhard, Nielsen and Schoitt (LNS) approximation, an exact evaluation of the integrals involving the nuclear differential scattering cross-section (dσn = 2πp dp) is proposed. The impact parameter p as a function of ion energy E and scattering angle φ is obtained by solving the magic formula iteratively, and an interpolation technique is devised for use during the simulation process. Simulation using the exact evaluation is about 3.5 times faster than that using the Littmark and Ziegler (LZ) spline-fitted cross-section function for phosphorus implantation into silicon.

14. Quantum free-energy differences from nonequilibrium path integrals. I. Methods and numerical application.

PubMed

van Zon, Ramses; Hernández de la Peña, Lisandro; Peslherbe, Gilles H; Schofield, Jeremy

2008-10-01

In this paper, the imaginary-time path-integral representation of the canonical partition function of a quantum system and nonequilibrium work fluctuation relations are combined to yield methods for computing free-energy differences in quantum systems using nonequilibrium processes. The path-integral representation is isomorphic to the configurational partition function of a classical field theory, to which a natural but fictitious Hamiltonian dynamics is associated. It is shown that if this system is prepared in an equilibrium state, after which a control parameter in the fictitious Hamiltonian is changed in a finite time, then formally the Jarzynski nonequilibrium work relation and the Crooks fluctuation relation hold, where work is defined as the change in the energy as given by the fictitious Hamiltonian. Since the energy diverges for the classical field theory in canonical equilibrium, two regularization methods are introduced which limit the number of degrees of freedom to be finite. The numerical applicability of the methods is demonstrated for a quartic double-well potential with varying asymmetry. A general parameter-free smoothing procedure for the work distribution functions is useful in this context.
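
The Jarzynski relation invoked here estimates a free-energy difference from an exponential average of nonequilibrium work values. A minimal numerical sketch using a Gaussian work distribution, for which the exact answer ΔF = μ - βσ²/2 is known (synthetic samples, unrelated to the paper's field-theory dynamics):

```python
import math, random

def jarzynski_free_energy(works, beta=1.0):
    """Jarzynski estimator: dF = -(1/beta) * ln < exp(-beta * W) >."""
    n = len(works)
    return -math.log(sum(math.exp(-beta * w) for w in works) / n) / beta

# for W ~ N(mu, sigma^2) the exact result is dF = mu - beta*sigma^2/2
random.seed(0)
mu, sigma, beta = 1.0, 0.5, 1.0
works = [random.gauss(mu, sigma) for _ in range(200_000)]
dF = jarzynski_free_energy(works, beta)
print(dF, mu - beta * sigma**2 / 2)  # estimator approx equals exact value
```

The exponential average is dominated by rare low-work trajectories, which is why broad work distributions need many samples or smoothing, as the paper's smoothing procedure addresses.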

15. Shear Behavior of 3D Woven Hollow Integrated Sandwich Composites: Experimental, Theoretical and Numerical Study

Zhou, Guangming; Liu, Chang; Cai, Deng'an; Li, Wenlong; Wang, Xiaopei

2016-11-01

An experimental, theoretical and numerical investigation of the shear behavior of 3D woven hollow integrated sandwich composites was presented in this paper. The microstructure of the composites was studied, then the shear modulus and load-deflection curves were obtained by double lap shear tests on specimens in the two principal directions of the sandwich panels, called warp and weft. The experimental results showed that the shear modulus in the warp direction was higher than that in the weft and that failure occurred at the roots of the piles. A finite element model was established to predict the shear behavior of the composites. The simulated results agreed well with the experimental data. Simultaneously, a theoretical method was developed to predict the shear modulus. By comparison with the experimental data, the accuracy of the theoretical method was verified. The influence of structural parameters on the shear modulus was also discussed. A higher yarn number, yarn density and dip angle of the piles could all improve the shear modulus of 3D woven hollow integrated sandwich composites to different degrees, while increasing the height would decrease the shear modulus.

16. Numerical optimization of integrating cavities for diffraction-limited millimeter-wave bolometer arrays.

PubMed

Glenn, Jason; Chattopadhyay, Goutam; Edgington, Samantha F; Lange, Andrew E; Bock, James J; Mauskopf, Philip D; Lee, Adrian T

2002-01-01

Far-infrared to millimeter-wave bolometers designed to make astronomical observations are typically encased in integrating cavities at the termination of feedhorns or Winston cones. This photometer combination maximizes absorption of radiation, enables the absorber area to be minimized, and controls the directivity of absorption, thereby reducing susceptibility to stray light. In the next decade, arrays of hundreds of silicon nitride micromesh bolometers with planar architectures will be used in ground-based, suborbital, and orbital platforms for astronomy. The optimization of integrating cavity designs is required for achieving the highest possible sensitivity for these arrays. We report numerical simulations of the electromagnetic fields in integrating cavities with an infinite plane-parallel geometry formed by a solid reflecting backshort and the back surface of a feedhorn array block. Performance of this architecture for the bolometer array camera (Bolocam) for cosmology at a frequency of 214 GHz is investigated. We explore the sensitivity of absorption efficiency to absorber impedance and backshort location and the magnitude of leakage from cavities. The simulations are compared with experimental data from a room-temperature scale model and with the performance of Bolocam at a temperature of 300 mK. The main results of the simulations for Bolocam-type cavities are that (1) monochromatic absorptions as high as 95% are achievable with <1% cross talk between neighboring cavities, (2) the optimum absorber impedance is 400 Ω/sq, with a broad maximum from approximately 150 to 700 Ω/sq, and (3) maximum absorption is achieved with absorber diameters ≥ 1.5λ. Good general agreement between the simulations and the experiments was found.

17. The significance of phytoplankton photo-adaptation and benthic pelagic coupling to primary production in the South China Sea: Observations and numerical investigations

Liu, Kon-Kee; Chen, Ying-Jie; Tseng, Chun-Mao; Lin, I.-I.; Liu, Hong-Bin; Snidvongs, Anond

2007-07-01

The primary production in the South China Sea (SCS) has been assessed by a coupled physical-biogeochemical model with a simple NPZD ecosystem [Liu et al., 2002. Monsoon-forced chlorophyll distribution and primary production in the SCS: observations and a numerical study. Deep-Sea Research I 49(8), 1387-1412]. In recent years there have been an increasing number of observations in the SCS that may be used to check the validity of the previous approach. The coupled model of the SCS mentioned above employs a photo-adaptation scheme for phytoplankton growth and uses the simplest bottom boundary condition of an inert benthic layer. These adopted schemes are checked against observations at the South-East Asian Time-series Study (SEATS) Station in the northern SCS and in the Gulf of Thailand. Numerical experiments with or without photo-adaptation or active benthic processes are carried out in this study. Additional experiments are performed with different parameters used for these processes. The observations at the SEATS Station provide direct evidence for the variable chlorophyll-to-nitrogen ratio in phytoplankton as required by photo-adaptation. It is concluded that a photo-adaptation scheme is critical to phytoplankton growth, especially for the development of the subsurface chlorophyll maximum (SCM). Without photo-adaptation, the average value of the vertically integrated primary production (IPP) over the whole SCS domain would be 35% lower. It is noted that the modeled SCM occurs at depths shallower than observed due to physical as well as biological processes employed by the model. Increasing the upper limit of the chlorophyll-to-nitrogen ratio, as suggested by observations, enhances the chlorophyll level in the lower part of the euphotic zone and raises primary productivity in areas with rich nutrient supply. The observed values of the IPP in the Gulf of Thailand clearly demonstrate the importance of the benthic-pelagic coupling to the nutrient cycle.
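The vertically integrated primary production (IPP) referred to above is, in essence, a depth quadrature of the modeled production profile. A minimal sketch of that integration step, with purely illustrative profile values (not data from the SCS model or observations):

```python
import numpy as np

# Hypothetical depth-resolved primary production profile; the depths (m)
# and production values (mg C m^-3 d^-1) are illustrative only.
z = np.array([0, 5, 10, 20, 30, 50, 75, 100])
pp = np.array([12.0, 11.0, 9.5, 7.0, 4.5, 2.0, 0.8, 0.2])

# Trapezoidal rule over the (irregular) depth grid gives the
# vertically integrated production in mg C m^-2 d^-1.
ipp = np.sum(0.5 * (pp[:-1] + pp[1:]) * np.diff(z))
```

The trapezoidal rule is the natural choice here because observed and modeled profiles are sampled on irregular depth grids.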

18. An Integrated Systems Approach to Designing Climate Change Adaptation Policy in Water Resources

Ryu, D.; Malano, H. M.; Davidson, B.; George, B.

2014-12-01

Climate change projections are characterised by large uncertainties, with rainfall variability being the key challenge in designing adaptation policies. Climate change adaptation in water resources shows all the typical characteristics of 'wicked' problems, typified by cognitive uncertainty as new scientific knowledge becomes available, problem instability, knowledge imperfection and strategic uncertainty due to institutional changes that inevitably occur over time. Planning that is characterised by uncertainties and instability requires an approach that can accommodate flexibility and adaptive capacity for decision-making, including an ability to take corrective measures in the event that the scenarios and responses envisaged initially evolve into different forms at some future stage. We present an integrated, multidisciplinary and comprehensive framework designed to interface and inform science and decision making in the formulation of water resource management strategies to deal with climate change in the Musi Catchment of Andhra Pradesh, India. At the core of this framework is a dialogue between stakeholders, decision makers and scientists to define a set of plausible responses to an ensemble of climate change scenarios derived from global climate modelling. The modelling framework used to evaluate the resulting combination of climate scenarios and adaptation responses includes surface water and groundwater assessment models (SWAT & MODFLOW) and a water allocation model (REALM) to determine the water security of each adaptation strategy. Three climate scenarios extracted from downscaled climate models were selected for evaluation together with four agreed responses: changing cropping patterns, increasing watershed development, changing the volume of groundwater extraction and improving irrigation efficiency. Water security in this context is represented by the combination of the level of water availability and its associated security of supply for three economic activities (agriculture

19. An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data

PubMed Central

Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel

2012-01-01

For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f–I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron’s response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f–I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating (“in vivo-like”) input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model’s generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a “high-throughput” model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available. PMID:22973220
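For context, the "slow" baseline that the closed-form f-I expressions replace is direct numerical integration of the AdEx equations. A minimal forward-Euler sketch of that baseline (the parameter values are the commonly cited Brette-Gerstner regular-spiking set, not fits from this study):

```python
import numpy as np

def simulate_adex(I, T=1.0, dt=1e-4,
                  C=281e-12, gL=30e-9, EL=-70.6e-3, VT=-50.4e-3,
                  dT=2e-3, a=4e-9, tau_w=144e-3, b=80.5e-12,
                  Vr=-70.6e-3, Vpeak=20e-3):
    """Forward-Euler integration of the AdEx model (SI units).
    Returns the number of spikes fired for a constant input current I."""
    V, w, spikes = EL, 0.0, 0
    for _ in range(int(T / dt)):
        # Membrane equation with exponential spike-initiation term
        dV = (-gL * (V - EL) + gL * dT * np.exp((V - VT) / dT) - w + I) / C
        # Adaptation current dynamics
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:                 # spike: reset V, increment adaptation
            V, w, spikes = Vr, w + b, spikes + 1
    return spikes
```

Fitting by repeatedly running such a loop for every candidate parameter set is what makes ODE-based estimation slow; the closed-form firing-rate expressions in the paper avoid it, hence the reported two-orders-of-magnitude speedup.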

20. Covariance matching based adaptive unscented Kalman filter for direct filtering in INS/GNSS integration

Meng, Yang; Gao, Shesheng; Zhong, Yongmin; Hu, Gaoge; Subic, Aleksandar

2016-03-01

1. The numerical integration of fundamental diffraction integrals for converging polarized spherical waves using a two-dimensional form of Simpson's 1/3 Rule

Cooper, I. J.; Sheppard, C. J. R.; Roy, M.

2005-08-01

A comprehensive matrix method based upon a two-dimensional form of Simpson's 1/3 rule (2DSC method) to integrate numerically the vector form of the fundamental diffraction integrals is described for calculating the characteristics of the focal region for a converging polarized spherical wave. The only approximation needed in using the 2DSC method is the Kirchhoff boundary conditions at the aperture. The 2DSC method can be used to study the focusing of vector beams with different polarizations and profiles and for different filters over a large range of numerical apertures or Fresnel numbers.
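The underlying quadrature can be sketched as a tensor-product composite Simpson's 1/3 rule with scalar weights; the paper's 2DSC method applies this kind of rule to the vector diffraction integrands over the aperture:

```python
import numpy as np

def simpson2d(f, ax, bx, ay, by, nx=8, ny=8):
    """Composite 2D Simpson's 1/3 rule for f(x, y) on [ax,bx] x [ay,by].
    nx and ny (numbers of subintervals) must be even."""
    x = np.linspace(ax, bx, nx + 1)
    y = np.linspace(ay, by, ny + 1)
    # 1D Simpson weight patterns: 1, 4, 2, 4, ..., 2, 4, 1
    wx = np.ones(nx + 1); wx[1:-1:2] = 4; wx[2:-1:2] = 2
    wy = np.ones(ny + 1); wy[1:-1:2] = 4; wy[2:-1:2] = 2
    X, Y = np.meshgrid(x, y, indexing="ij")
    hx, hy = (bx - ax) / nx, (by - ay) / ny
    # Tensor product of the 1D weights gives the 2D rule
    return (hx * hy / 9.0) * np.sum(np.outer(wx, wy) * f(X, Y))
```

Since Simpson's rule is exact for cubics, smooth integrands converge at fourth order in the grid spacing, which is why the method remains practical over a large range of numerical apertures.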

2. Towards an integrated agenda for adaptation research: theory, practice, and policy: Strategy paper

SciTech Connect

Wilbanks, Thomas J; Patwardhan, Anand; Downing, Tom; Leary, Neil

2009-01-01

Adaptation to the adverse impacts of climate change has been recognized as a priority area for national and international policy. The findings of the Fourth Assessment Report of the IPCC have reemphasized the urgency of action and the scale of response needed to cope with climate change outcomes. The scientific community has an important role to play in advancing the information and knowledge base that would help in identifying, developing and implementing effective responses to enhance adaptive capacity and reduce vulnerability. This paper examines the way in which science and research could advance the adaptation agenda. To do so, we pose a number of questions aimed at identifying the knowledge gaps and research needs. We argue that in order to address these science and research needs, an integrated approach is necessary, one that combines new knowledge with new approaches for knowledge generation, and where research and practice co-evolve; and that such a learning-by-doing approach is essential to rapidly scale up and implement concrete adaptation actions.

3. Integrated approaches to natural resources management in practice: the catalyzing role of National Adaptation Programmes for Action.

PubMed

Stucki, Virpi; Smith, Mark

2011-06-01

The relationship of forests in water quantity and quality has been debated during the past years. At the same time, focus on climate change has increased interest in ecosystem restoration as a means for adaptation. Climate change might become one of the key drivers pushing integrated approaches for natural resources management into practice. The National Adaptation Programme of Action (NAPA) is an initiative agreed under the UN Framework Convention on Climate Change. An analysis was done to find out how widely ecosystem restoration and integrated approaches have been incorporated into NAPA priority adaptation projects. The data show that that the NAPAs can be seen as potentially important channel for operationalizing various integrated concepts. Key challenge is to implement the NAPA projects. The amount needed to implement the NAPA projects aiming at ecosystem restoration using integrated approaches presents only 0.7% of the money pledged in Copenhagen for climate change adaptation.

4. Advances in numerical solutions to integral equations in liquid state theory

Howard, Jesse J.

Solvent effects play a vital role in the accurate description of the free energy profile for solution phase chemical and structural processes. The inclusion of solvent effects in any meaningful theoretical model, however, has proven to be a formidable task. Generally, methods involving Poisson-Boltzmann (PB) theory and molecular dynamics (MD) simulations are used, but they either fail to accurately describe the solvent effects or require exhaustive computational effort to overcome sampling problems. An alternative to these methods is offered by the integral equations (IEs) of liquid state theory, which have become more widely applicable due to recent advancements in the theory of interaction site fluids and the numerical methods to solve the equations. In this work a new numerical method is developed based on a Newton-type scheme coupled with Picard/MDIIS routines. To extend the range of these numerical methods to large-scale data systems, the size of the Jacobian is reduced using basis functions, and the Newton steps are calculated using a GMRES solver. The method is then applied to calculate solutions to the 3D reference interaction site model (RISM) IEs of statistical mechanics, which are derived from first principles, for a solute model of a pair of parallel graphene plates at various separations in pure water. The 3D IEs are then extended to electrostatic models using an exact treatment of the long-range Coulomb interactions for negatively charged walls and DNA duplexes in aqueous electrolyte solutions to calculate the density profiles and solution thermodynamics. It is found that the 3D IEs provide a qualitative description of the density distributions of the solvent species when compared to MD results, but at a much reduced computational effort in comparison to MD simulations. The thermodynamics of the solvated systems are also qualitatively reproduced by the IE results. The findings of this work show the IEs to be a valuable tool for the study and prediction of
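The two solver families mentioned (Picard iteration and Newton-type steps) can be illustrated on a toy fixed-point problem. The problem and tolerances below are hypothetical stand-ins for a RISM closure; the dense finite-difference Jacobian would, for large systems, be replaced by a matrix-free Krylov solver such as GMRES, as in the paper:

```python
import numpy as np

# Model fixed-point problem x = g(x), with g applied componentwise;
# a toy stand-in for an integral-equation closure relation.
g = lambda x: np.exp(-x)

def picard(g, x0, mix=0.5, tol=1e-12, maxit=1000):
    """Damped Picard iteration: x <- (1 - mix)*x + mix*g(x)."""
    x = x0.copy()
    for _ in range(maxit):
        x_new = (1.0 - mix) * x + mix * g(x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

def newton(g, x0, tol=1e-12, maxit=50, eps=1e-7):
    """Newton's method on the residual F(x) = x - g(x), using a dense
    finite-difference Jacobian (a Krylov solver would avoid forming it)."""
    x = x0.copy()
    for _ in range(maxit):
        F = x - g(x)
        if np.max(np.abs(F)) < tol:
            return x
        n = x.size
        J = np.empty((n, n))
        for j in range(n):               # finite-difference Jacobian columns
            xp = x.copy(); xp[j] += eps
            J[:, j] = ((xp - g(xp)) - F) / eps
        x = x - np.linalg.solve(J, F)
    return x
```

Both iterations converge here to the same fixed point (the Omega constant, about 0.567143, in every component); Picard is cheap per step but converges linearly, while Newton converges in a handful of steps at the cost of Jacobian information, which motivates the hybrid Picard/Newton strategy described in the abstract.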

5. The Effect of Dietary Adaption on Cranial Morphological Integration in Capuchins (Order Primates, Genus Cebus)

PubMed Central

Makedonska, Jana; Wright, Barth W.; Strait, David S.

2012-01-01

A fundamental challenge of morphology is to identify the underlying evolutionary and developmental mechanisms leading to correlated phenotypic characters. Patterns and magnitudes of morphological integration and their association with environmental variables are essential for understanding the evolution of complex phenotypes, yet the nature of the relevant selective pressures remains poorly understood. In this study, the adaptive significance of morphological integration was evaluated through the association between feeding mechanics, ingestive behavior and craniofacial variation. Five capuchin species were examined, Cebus apella sensu stricto, Cebus libidinosus, Cebus nigritus, Cebus olivaceus and Cebus albifrons. Twenty three-dimensional landmarks were chosen to sample facial regions experiencing high strains during feeding, characteristics affecting muscular mechanical advantage and basicranial regions. Integration structure and magnitude between and within the oral and zygomatic subunits, between and within blocks maximizing modularity and within the face, the basicranium and the cranium were examined using partial-least squares, eigenvalue variance, integration indices compared inter-specifically at a common level of sampled population variance and cluster analyses. Results are consistent with previous findings reporting a relative constancy of facial and cranial correlation patterns across mammals, while covariance magnitudes vary. Results further suggest that food material properties structure integration among functionally-linked facial elements and possibly integration between the face and the basicranium. Hard-object-feeding capuchins, especially C.apella s.s., whose faces experience particularly high biomechanical loads are characterized by higher facial and cranial integration especially compared to C.albifrons, likely because morphotypes compromising feeding performance are selected against in species relying on obdurate fallback foods. This is the

6. Integrating human responses to climate change into conservation vulnerability assessments and adaptation planning.

PubMed

Maxwell, Sean L; Venter, Oscar; Jones, Kendall R; Watson, James E M

2015-10-01

The impact of climate change on biodiversity is now evident, with the direct impacts of changing temperature and rainfall patterns and increases in the magnitude and frequency of extreme events on species distribution, populations, and overall ecosystem function being increasingly publicized. Changes in the climate system are also affecting human communities, and a range of human responses across terrestrial and marine realms have been witnessed, including altered agricultural activities, shifting fishing efforts, and human migration. Failing to account for the human responses to climate change is likely to compromise climate-smart conservation efforts. Here, we use a well-established conservation planning framework to show how integrating human responses to climate change into both species- and site-based vulnerability assessments and adaptation plans is possible. By explicitly taking into account human responses, conservation practitioners will improve their evaluation of species and ecosystem vulnerability, and will be better able to deliver win-wins for human- and biodiversity-focused climate adaptation.

7. Nonlinear adaptive control using the Fourier integral and its application to CSTR systems.

PubMed

Zhang, Huaguang; Cai, Lilong

2002-01-01

This paper presents a new nonlinear adaptive tracking controller for a class of general time-variant nonlinear systems. The control system consists of an inner loop and an outer loop. The inner loop is a fuzzy sliding mode control that is used as the feedback controller to overcome random instant disturbances. The stability of the inner loop is designed by the sliding mode control method. The outer loop is a Fourier integral-based control that is used as the feedforward controller to overcome the deterministic type of uncertain disturbance. The asymptotic convergence condition of the nonlinear adaptive control system is guaranteed by the Lyapunov direct method. The effectiveness of the proposed controller is illustrated by its application to composition control in a continuously stirred tank reactor system.

8. Deep Impact Sequence Planning Using Multi-Mission Adaptable Planning Tools With Integrated Spacecraft Models

NASA Technical Reports Server (NTRS)

Wissler, Steven S.; Maldague, Pierre; Rocca, Jennifer; Seybold, Calina

2006-01-01

The Deep Impact mission was ambitious and challenging. JPL's well proven, easily adaptable multi-mission sequence planning tools combined with integrated spacecraft subsystem models enabled a small operations team to develop, validate, and execute extremely complex sequence-based activities within very short development times. This paper focuses on the core planning tool used in the mission, APGEN. It shows how the multi-mission design and adaptability of APGEN made it possible to model spacecraft subsystems as well as ground assets throughout the lifecycle of the Deep Impact project, starting with models of initial, high-level mission objectives, and culminating in detailed predictions of spacecraft behavior during mission-critical activities.

9. Integrating Systems Health Management with Adaptive Controls for a Utility-Scale Wind Turbine

NASA Technical Reports Server (NTRS)

Frost, Susan A.; Goebel, Kai; Trinh, Khanh V.; Balas, Mark J.; Frost, Alan M.

2011-01-01

Increasing turbine up-time and reducing maintenance costs are key technology drivers for wind turbine operators. Components within wind turbines are subject to considerable stresses due to unpredictable environmental conditions resulting from rapidly changing local dynamics. Systems health management aims to assess the state of health of components within a wind turbine, to estimate remaining life, and to aid autonomous decision-making that minimizes damage. Advanced adaptive controls can provide the mechanism for optimized operations and the enabling technology for systems health management goals. The work reported herein explores the integration of condition monitoring of wind turbine blades with contingency management and adaptive controls. Results are demonstrated using a high fidelity simulator of a utility-scale wind turbine.

10. Integrated optimal allocation model for complex adaptive system of water resources management (II): Case study

Zhou, Yanlai; Guo, Shenglian; Xu, Chong-Yu; Liu, Dedi; Chen, Lu; Wang, Dong

2015-12-01

Climate change, rapid economic development and growth of the human population are considered the major triggers of increasing challenges for water resources management. The proposed integrated optimal allocation model (IOAM) for a complex adaptive system of water resources management is applied to the Dongjiang River basin located in Guangdong Province, China. The IOAM is calibrated and validated for the baseline period (2010) and the future period (2011-2030), respectively. The simulation results indicate that the proposed model can make a trade-off between demand and supply for sustainable development of society, economy, ecology and environment, and can achieve adaptive management of water resources allocation. The optimal scheme derived by multi-objective evaluation is recommended to decision-makers in order to maximize the comprehensive benefits of water resources management.

11. Challenges in Incorporating Climate Change Adaptation into Integrated Water Resources Management

Kirshen, P. H.; Cardwell, H.; Kartez, J.; Merrill, S.

2011-12-01

Over the last few decades, integrated water resources management (IWRM), under various names, has become the accepted philosophy for water management in the USA. While much is still to be learned about how to actually carry it out, implementation is slowly moving forward, spurred by both legislation and the demands of stakeholders. New challenges to IWRM have arisen because of climate change. Climate change has placed increased demands on the creativity of planners and engineers because they must now design systems that will function over decades of hydrologic uncertainties that dwarf any previous hydrologic or other uncertainties. Climate and socio-economic monitoring systems must also now be established to determine when the future climate has changed sufficiently to warrant undertaking adaptation. The requirements for taking some actions now while preserving options for future actions, as well as the increased risk of social inequities in climate change impacts and adaptation, are challenging experts in stakeholder participation. To meet these challenges, an integrated methodology is essential, one that builds upon scenario analysis, risk assessment, statistical decision theory, participatory planning, and consensus building. Such integration will require these disciplines to work across the boundaries that traditionally separate them.

12. An Adaptive Integration Model of Vector Polyline to DEM Data Based on Spherical Degeneration Quadtree Grids

Zhao, X. S.; Wang, J. J.; Yuan, Z. Y.; Gao, Y.

2013-10-01

Traditional geometry-based approaches can maintain the characteristics of vector data. However, complex interpolation calculations limit their application to high resolution and multi-source spatial data integration at the spherical scale in digital earth systems. To overcome this deficiency, an adaptive integration model of vector polylines and spherical DEM is presented. Firstly, the Degenerate Quadtree Grid (DQG), one of the partition models for global discrete grids, is selected as the basic framework for the adaptive integration model. Secondly, a novel shift algorithm based on DQG proximity search is put forward. The main idea of the shift algorithm is that a vector node in a DQG cell moves to the cell corner-point when the displayed area of the cell is smaller than or equal to one screen pixel, in order to find a new vector polyline approximating the original one; this avoids numerous interpolation calculations and achieves seamless integration. Detailed operation steps are elaborated and the complexity of the algorithm is analyzed. Thirdly, a prototype system has been developed using the VC++ language and the OpenGL 3D API. The ASTER GDEM data and DCW road data sets of Jiangxi province in China are selected to evaluate the performance. The results show that the time consumption of the shift algorithm was about 76% lower than that of the geometry-based approach. An analysis of the mean shift error along different dimensions was also carried out. Finally, conclusions and directions for future work on the integration of vector data and DEM based on discrete global grids are given.
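The core of the shift algorithm, as described, is a conditional snap of vector nodes to grid-cell corners once a cell's displayed area falls to one screen pixel or less. A hypothetical, much-simplified sketch on a regular lon/lat grid (the actual method operates on DQG cells and proximity search):

```python
def shift_vertex(lon, lat, cell_size_deg, cell_px):
    """Snap a polyline vertex to the nearest cell corner when the cell's
    displayed size is at most one screen pixel; otherwise keep it.
    A simplified stand-in for the DQG-based shift algorithm."""
    if cell_px > 1.0:
        return lon, lat                          # cell still visible: no shift
    snap = lambda v: round(v / cell_size_deg) * cell_size_deg
    return snap(lon), snap(lat)                  # sub-pixel cell: snap to corner
```

Snapping removes the need to interpolate vector nodes onto the DEM surface inside sub-pixel cells, which is where the reported time saving over the geometry-based approach comes from.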

13. Numerical modelling of physical processes in a ballistic laboratory setup with a tapered adapter and plastic piston used for obtaining high muzzle velocities

Bykov, N. V.

2014-12-01

Numerical modelling of a ballistic setup with a tapered adapter and plastic piston is considered. The processes in the firing chamber are described within the framework of quasi-one-dimensional gas dynamics and a geometrical law of propellant burn by means of Lagrangian mass coordinates. The deformable piston is treated as an ideal liquid with specific equations of state. The numerical solution is obtained by means of a modified explicit von Neumann scheme. The calculation results show that the ballistic setup with a tapered adapter and plastic piston increases shell muzzle velocities by a factor of 1.5-2.

14. Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens.

PubMed

Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N; Zawadzki, Robert J; Sarunic, Marinko V

2015-08-24

Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an image-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images.

15. Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens

PubMed Central

Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N.; Zawadzki, Robert J.; Sarunic, Marinko V.

2015-01-01

Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an image-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images. PMID:26368169

16. Physical Constraints on Biological Integral Control Design for Homeostasis and Sensory Adaptation

PubMed Central

Ang, Jordan; McMillen, David R.

2013-01-01

Synthetic biology includes an effort to use design-based approaches to create novel controllers, biological systems aimed at regulating the output of other biological processes. The design of such controllers can be guided by results from control theory, including the strategy of integral feedback control, which is central to regulation, sensory adaptation, and long-term robustness. Realization of integral control in a synthetic network is an attractive prospect, but the nature of biochemical networks can make the implementation of even basic control structures challenging. Here we present a study of the general challenges and important constraints that will arise in efforts to engineer biological integral feedback controllers or to analyze existing natural systems. Constraints arise from the need to identify target output values that the combined process-plus-controller system can reach, and to ensure that the controller implements a good approximation of integral feedback control. These constraints depend on mild assumptions about the shape of input-output relationships in the biological components, and thus will apply to a variety of biochemical systems. We summarize our results as a set of variable constraints intended to provide guidance for the design or analysis of a working biological integral feedback controller. PMID:23442873

17. Physical constraints on biological integral control design for homeostasis and sensory adaptation.

PubMed

Ang, Jordan; McMillen, David R

2013-01-22

Synthetic biology includes an effort to use design-based approaches to create novel controllers, biological systems aimed at regulating the output of other biological processes. The design of such controllers can be guided by results from control theory, including the strategy of integral feedback control, which is central to regulation, sensory adaptation, and long-term robustness. Realization of integral control in a synthetic network is an attractive prospect, but the nature of biochemical networks can make the implementation of even basic control structures challenging. Here we present a study of the general challenges and important constraints that will arise in efforts to engineer biological integral feedback controllers or to analyze existing natural systems. Constraints arise from the need to identify target output values that the combined process-plus-controller system can reach, and to ensure that the controller implements a good approximation of integral feedback control. These constraints depend on mild assumptions about the shape of input-output relationships in the biological components, and thus will apply to a variety of biochemical systems. We summarize our results as a set of variable constraints intended to provide guidance for the design or analysis of a working biological integral feedback controller.

18. Numerical modeling of the 3D dynamics of ultrasound contrast agent microbubbles using the boundary integral method

Wang, Qianxi; Manmi, Kawa; Calvisi, Michael L.

2015-02-01

Ultrasound contrast agents (UCAs) are microbubbles stabilized with a shell typically of lipid, polymer, or protein and are emerging as a unique tool for noninvasive therapies ranging from gene delivery to tumor ablation. While various models have been developed to describe the spherical oscillations of contrast agents, the treatment of nonspherical behavior has received less attention. However, the nonspherical dynamics of contrast agents are thought to play an important role in therapeutic applications, for example, enhancing the uptake of therapeutic agents across cell membranes and tissue interfaces, and causing tissue ablation. In this paper, a model for nonspherical contrast agent dynamics based on the boundary integral method is described. The effects of the encapsulating shell are approximated by adapting Hoff's model for thin-shell, spherical contrast agents. A high-quality mesh of the bubble surface is maintained by implementing a hybrid approach of the Lagrangian method and elastic mesh technique. The numerical model agrees well with a modified Rayleigh-Plesset equation for encapsulated spherical bubbles. Numerical analyses of the dynamics of UCAs in an infinite liquid and near a rigid wall are performed in parameter regimes of clinical relevance. The oscillation amplitude and period decrease significantly due to the coating. A bubble jet forms when the amplitude of ultrasound is sufficiently large, as occurs for bubbles without a coating; however, the threshold amplitude required to incite jetting increases due to the coating. When a UCA is near a rigid boundary subject to acoustic forcing, the jet is directed towards the wall if the acoustic wave propagates perpendicular to the boundary. When the acoustic wave propagates parallel to the rigid boundary, the jet direction has components both along the wave direction and towards the boundary that depend mainly on the dimensionless standoff distance of the bubble from the boundary. In all cases, the jet
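For reference, the spherical baseline against which such boundary-integral models are validated is the Rayleigh-Plesset equation. The sketch below (an illustration, not the authors' code) integrates the standard *uncoated* Rayleigh-Plesset equation with classic RK4; Hoff's shell terms and the nonspherical dynamics are omitted, and the material constants are generic water-like assumptions.

```python
# Assumed, generic constants (not from the paper):
RHO   = 998.0      # liquid density, kg/m^3
P0    = 101325.0   # ambient pressure, Pa
SIGMA = 0.072      # surface tension, N/m
MU    = 1.0e-3     # liquid viscosity, Pa*s
GAMMA = 1.4        # polytropic exponent of the gas

def rp_rhs(state, r0):
    """Time derivatives (Rdot, Rddot) for the uncoated Rayleigh-Plesset equation."""
    R, Rdot = state
    pg0 = P0 + 2.0 * SIGMA / r0            # gas pressure at equilibrium radius r0
    pg = pg0 * (r0 / R) ** (3.0 * GAMMA)   # polytropic gas law
    acc = (pg - P0 - 2.0 * SIGMA / R - 4.0 * MU * Rdot / R) / (RHO * R) \
          - 1.5 * Rdot * Rdot / R
    return (Rdot, acc)

def integrate(r0=2.0e-6, r_init=None, dt=1.0e-9, steps=5000):
    """Classic RK4 march of (R, Rdot), starting from rest."""
    s = (r0 if r_init is None else r_init, 0.0)
    for _ in range(steps):
        k1 = rp_rhs(s, r0)
        k2 = rp_rhs((s[0] + 0.5 * dt * k1[0], s[1] + 0.5 * dt * k1[1]), r0)
        k3 = rp_rhs((s[0] + 0.5 * dt * k2[0], s[1] + 0.5 * dt * k2[1]), r0)
        k4 = rp_rhs((s[0] + dt * k3[0], s[1] + dt * k3[1]), r0)
        s = (s[0] + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
             s[1] + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)
    return s
```

A bubble started at its equilibrium radius stays there; a perturbed bubble oscillates about it with viscous damping. A shell model would add stiffness and damping terms, raising the oscillation frequency and lowering the amplitude, as the abstract describes.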

19. Predicting geomorphic evolution through integration of numerical-model scenarios and topographic/bathymetric-survey updates

Plant, N. G.; Long, J.; Dalyander, S.; Thompson, D.; Miselis, J. L.

2013-12-01

Natural resource and hazard management of barrier islands requires an understanding of geomorphic changes associated with long-term processes and storms. Uncertainty exists in understanding how long-term processes interact with the geomorphic changes caused by storms and the resulting perturbations of the long-term evolution trajectories. We use high-resolution data sets to initialize and correct high-fidelity numerical simulations of oceanographic forcing and resulting barrier island evolution. We simulate two years of observed storms to determine the individual and cumulative impacts of these events. Results are separated into cross-shore and alongshore components of sediment transport and compared with observed topographic and bathymetric changes during these time periods. The discrete island change induced by these storms is integrated with previous knowledge of long-term net alongshore sediment transport to project island evolution. The approach has been developed and tested using data collected at the Chandeleur Island chain off the coast of Louisiana (USA). The simulation time period included impacts from tropical and winter storms, as well as a human-induced perturbation associated with construction of a sand berm along the island shoreline. The predictions and observations indicated that storm and long-term processes both contribute to the migration, lowering, and disintegration of the artificial berm and natural island. Further analysis will determine the relative importance of cross-shore and alongshore sediment transport processes and the dominant time scales that drive each of these processes and subsequent island morphologic response.

20. A novel approach to improve numerical weather prediction skills by using anomaly integration and historical data

Peng, Xindong; Che, Yuzhang; Chang, Jun

2013-08-01

Using the concept of anomaly integration and historical climate data, we have developed a novel operational framework to implement deterministic numerical weather prediction within 15 days. Real-case validation shows pronounced improvements in the forecasts of global geopotential heights in 20 out of 30 cases with the Community Atmosphere Model version 3.0. Seven other cases are marginally improved, and only three deteriorate, though all are ameliorated within the first-week period. The average of the 30 cases shows an obvious increase of the anomaly correlation coefficient (ACC) and a decrease of the root mean square error (RMSE) of the geopotential height over global, hemispherical, and tropical zones. Significant amelioration in tropical circulation is displayed within the first-week prediction. The forecasting skill is extended by 0.6 day in terms of the number of days with ACC greater than 0.6 for the 30-case-averaged 500 hPa geopotential height on the global scale. The 30-case mean ACC and RMSE of 500 hPa temperature show increments of 0.2 and -1.6 K, respectively, in the first-week prediction. In the case of January 2008, a much more reasonable horizontal distribution and vertical structure are achieved in the bias-corrected model geopotential height, temperature, relative humidity, and horizontal wind components in comparison to reanalysis data. In spite of a need for additional storage of historical modeling data, the new method does not increase computational costs and is therefore suitable for routine application.
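The two skill scores quoted in this record are standard and easy to state. The sketch below (generic, not the authors' verification code) computes the anomaly correlation coefficient (ACC), which correlates forecast and analysis *anomalies* — departures from a climatology — and the RMSE.

```python
import math

def acc(forecast, analysis, climatology):
    """Anomaly correlation coefficient between forecast and analysis fields."""
    fa = [f - c for f, c in zip(forecast, climatology)]   # forecast anomalies
    aa = [a - c for a, c in zip(analysis, climatology)]   # analysis anomalies
    num = sum(x * y for x, y in zip(fa, aa))
    den = math.sqrt(sum(x * x for x in fa) * sum(y * y for y in aa))
    return num / den

def rmse(forecast, analysis):
    """Root mean square error of the forecast against the analysis."""
    return math.sqrt(sum((f - a) ** 2 for f, a in zip(forecast, analysis)) / len(forecast))
```

An ACC of 0.6, the threshold used in the record above, is the conventional cutoff below which a medium-range forecast is no longer considered useful.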

1. Solutions to the ellipsoidal Clairaut constant and the inverse geodetic problem by numerical integration

Sjöberg, L. E.

2012-11-01

We derive computational formulas for determining the Clairaut constant, i.e. the cosine of the maximum latitude of the geodesic arc, from two given points on the oblate ellipsoid of revolution. In all cases the Clairaut constant is unique. The inverse geodetic problem on the ellipsoid is to determine the geodesic arc between the two given points and the azimuths of the arc at those points. We present the solution for a fixed Clairaut constant. If the given points are not (nearly) antipodal, each azimuth and the location of the geodesic are unique, while for fixed points in the "antipodal region", roughly within 36″.2 of the antipode, there are two geodesics mirrored in the equator and with complementary azimuths at each point. In the special case with the given points located at the poles of the ellipsoid, all meridians are geodesics. The special role played by the Clairaut constant and the numerical integration make this method different from others available in the literature.
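Clairaut's relation itself, which gives the constant its geometric meaning, can be sketched in a few lines (an illustration of the relation only, not of the paper's formulas for recovering the constant from two points):

```python
import math

def clairaut_constant(beta, alpha):
    """Clairaut constant c = cos(beta) * sin(alpha), where beta is the
    reduced latitude and alpha the geodesic azimuth (both in radians).
    c is conserved along a geodesic on a surface of revolution."""
    return math.cos(beta) * math.sin(alpha)

def max_latitude(beta, alpha):
    """Maximum reduced latitude reached by the geodesic: arccos|c|."""
    return math.acos(abs(clairaut_constant(beta, alpha)))
```

For example, a geodesic leaving the equator at azimuth 45° climbs to a maximum reduced latitude of 45°, while one leaving due east (azimuth 90°) stays on the equator.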

2. An Integrated Numerical Hydrodynamic Shallow Flow-Solute Transport Model for Urban Area

Alias, N. A.; Mohd Sidek, L.

2016-03-01

Rapidly changing land profiles in some urban areas of Malaysia have led to increasing flood risk. Extensive development in densely populated areas and urbanization worsen the flood scenario. An early warning system is therefore important, and a popular approach is to numerically simulate the river and flood flows. There are many two-dimensional (2D) flood models that predict flood levels, but in some circumstances it is still difficult to resolve a river reach in a 2D manner. A systematic early warning system requires a precise prediction of flow depth; hence a reliable one-dimensional (1D) model that provides an accurate description of the flow is essential. This research also aims to resolve related issues, such as the fate of pollutants in a river reach, by developing an integrated hydrodynamic shallow flow-solute transport model. Presented in this paper are results on flow prediction for Sungai Penchala and the convection-diffusion of solute transport simulated by the developed model.
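The solute-transport half of such a coupled model is, at its core, a 1D advection-diffusion equation c_t + u c_x = D c_xx. The sketch below is a hypothetical minimal solver (not the authors'), using first-order upwind advection and explicit central diffusion; the velocity u would normally come from the shallow-flow solver rather than being a constant.

```python
def step(c, u, D, dx, dt):
    """One explicit step of c_t + u*c_x = D*c_xx (u > 0, fixed boundaries)."""
    new = c[:]
    for i in range(1, len(c) - 1):
        adv = u * (c[i] - c[i - 1]) / dx                       # upwind advection
        dif = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2   # central diffusion
        new[i] = c[i] + dt * (dif - adv)
    return new

def run(c, u=0.5, D=0.01, dx=0.1, dt=0.05, steps=40):
    # Explicit stability requires u*dt/dx <= 1 and 2*D*dt/dx**2 <= 1
    # (here 0.25 and 0.1, so the scheme is stable and positivity-preserving).
    for _ in range(steps):
        c = step(c, u, D, dx, dt)
    return c
```

A concentration pulse released mid-channel is carried downstream at speed u while spreading; first-order upwinding adds numerical diffusion on top of the physical D, one reason production models use higher-order schemes.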

3. Automatic seizure detection using correlation integral with nonlinear adaptive denoising and Kalman filter.

PubMed

Hongda Wang; Chiu-Sing Choy

2016-08-01

The ability of the correlation integral to support automatic seizure detection using scalp EEG data is re-examined in this paper. To improve detection performance and overcome the shortcomings of the correlation integral, nonlinear adaptive denoising and a Kalman filter have been adopted for pre-processing and post-processing. The three-stage algorithm has achieved 84.6% sensitivity and a 0.087/h false detection rate, which are comparable to many machine-learning-based methods but at much lower computational cost. Since this algorithm is tested with long-term scalp EEG, it has the potential to achieve higher performance with intracranial EEG. The clinical value of this algorithm includes providing a pre-judgement to assist the doctor's diagnosis procedure and acting as a reliable warning system in a wearable device for epilepsy patients.
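The correlation integral at the heart of this detector is the Grassberger-Procaccia quantity C(r): the fraction of pairs of state-space points lying within radius r. A common formulation (assumed here; the record does not give the authors' embedding parameters) applies it to delay-embedded signal vectors:

```python
import math

def embed(x, dim, tau):
    """Delay-embed a scalar series into dim-dimensional vectors with lag tau."""
    return [x[i:i + dim * tau:tau] for i in range(len(x) - (dim - 1) * tau)]

def correlation_integral(x, r, dim=3, tau=1):
    """Fraction of pairs of embedded vectors closer than r (O(n^2) pairs)."""
    pts = embed(x, dim, tau)
    n = len(pts)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))
```

During a seizure the EEG becomes more regular, so embedded points cluster and C(r) rises at small r — the change a threshold-based detector can pick up.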

4. Kedalion: NASA's Adaptable and Agile Hardware/Software Integration and Test Lab

NASA Technical Reports Server (NTRS)

Mangieri, Mark L.; Vice, Jason

2011-01-01

NASA's Kedalion engineering analysis lab at Johnson Space Center is on the forefront of validating and using many contemporary avionics hardware/software development and integration techniques, which represent new paradigms to heritage NASA culture. Kedalion has validated many of the Orion hardware/software engineering techniques borrowed from the adjacent commercial aircraft avionics solution space, with the intention to build upon such techniques to better align with today's aerospace market. Using agile techniques, commercial products, early rapid prototyping, in-house expertise and tools, and customer collaboration, Kedalion has demonstrated that cost effective contemporary paradigms hold the promise to serve future NASA endeavors within a diverse range of system domains. Kedalion provides a readily adaptable solution for medium/large scale integration projects. The Kedalion lab is currently serving as an in-line resource for the Orion project and the Multipurpose Crew Vehicle (MPCV) program.

5. Adaptive Iterated Extended Kalman Filter and Its Application to Autonomous Integrated Navigation for Indoor Robot

PubMed Central

Chen, Xiyuan; Li, Qinghua

2014-01-01

As the core of the integrated navigation system, the data fusion algorithm must be designed carefully. To improve the accuracy of data fusion, this work proposes an adaptive iterated extended Kalman filter (AIEKF), which incorporates a noise statistics estimator into the iterated extended Kalman filter (IEKF); the AIEKF is then used to deal with the nonlinear problem in the inertial navigation system (INS)/wireless sensor networks (WSNs)-integrated navigation system. A practical test has been done to evaluate the performance of the proposed method. The results show that the proposed method reduces the mean root-mean-square error (RMSE) of position by about 92.53%, 67.93%, 55.97%, and 30.09% compared with INS only, WSN only, EKF, and IEKF, respectively. PMID:24693225
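The "noise statistics estimator" idea behind the AIEKF can be shown in a much-reduced form. The sketch below (illustrative only, not the authors' filter) is a scalar Kalman filter for a random-walk state whose measurement-noise variance R is re-estimated online from the innovation sequence, using the fact that E[nu^2] ≈ P + R; all tuning constants are assumptions.

```python
import random

def adaptive_kf(zs, q=1e-4, r0=1.0, alpha=0.05):
    """Scalar KF with innovation-based adaptation of the measurement noise R."""
    x, p, r = zs[0], 1.0, r0
    for z in zs[1:]:
        p += q                         # predict: add process-noise variance
        nu = z - x                     # innovation
        r = (1 - alpha) * r + alpha * max(nu * nu - p, 1e-9)  # adapt R estimate
        k = p / (p + r)                # Kalman gain
        x += k * nu                    # measurement update
        p *= (1 - k)
    return x, r

random.seed(1)
zs = [5.0 + random.gauss(0.0, 0.5) for _ in range(500)]
estimate, r_hat = adaptive_kf(zs)
```

Even when initialized with a badly wrong R (here 1.0 versus a true variance of 0.25), the adapted filter converges on the constant state and a usable noise estimate — the same mechanism that lets the AIEKF cope with unknown sensor statistics.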

6. Adaptive iterated extended Kalman filter and its application to autonomous integrated navigation for indoor robot.

PubMed

Xu, Yuan; Chen, Xiyuan; Li, Qinghua

2014-01-01

As the core of the integrated navigation system, the data fusion algorithm must be designed carefully. To improve the accuracy of data fusion, this work proposes an adaptive iterated extended Kalman filter (AIEKF), which incorporates a noise statistics estimator into the iterated extended Kalman filter (IEKF); the AIEKF is then used to deal with the nonlinear problem in the inertial navigation system (INS)/wireless sensor networks (WSNs)-integrated navigation system. A practical test has been done to evaluate the performance of the proposed method. The results show that the proposed method reduces the mean root-mean-square error (RMSE) of position by about 92.53%, 67.93%, 55.97%, and 30.09% compared with INS only, WSN only, EKF, and IEKF, respectively.

7. An Adaptive Intelligent Integrated Lighting Control Approach for High-Performance Office Buildings

Karizi, Nasim

An acute and crucial societal problem is the energy consumed in existing commercial buildings. There are 1.5 million commercial buildings in the U.S., with only about 3% being built each year; hence, existing buildings need to be properly operated and maintained for several decades. Application of integrated centralized control systems in buildings could lead to more than 50% energy savings. This research demonstrates an innovative adaptive integrated lighting control approach that can achieve significant energy savings and increase indoor comfort in high-performance office buildings. In the first phase of the study, a predictive algorithm was developed and validated through experiments in an actual test room. The objective was to regulate daylight on a specified work plane by controlling the blind slat angles. Furthermore, a sensor-based integrated adaptive lighting controller was designed in Simulink, which included an innovative sensor optimization approach based on a genetic algorithm to minimize the number of sensors and efficiently place them in the office. The controller was designed using simple integral controllers. The objective of the developed control algorithm was to improve the illuminance in the office by controlling the daylight and electrical lighting. To evaluate the performance of the system, the controller was applied to the experimental office model of Lee et al.'s 1998 study. The results of the developed control approach indicate a significant improvement in the lighting conditions, as well as monthly electrical energy savings of 1-23% and 50-78% in the office model compared with two static strategies in which the blinds were left open or closed, respectively, throughout the whole year.

8. Numerical methods for the simulation of complex multi-body flows with applications for the integrated Space Shuttle vehicle

NASA Technical Reports Server (NTRS)

Chan, William M.

1992-01-01

The following papers are presented: (1) numerical methods for the simulation of complex multi-body flows with applications for the Integrated Space Shuttle vehicle; (2) a generalized scheme for 3-D hyperbolic grid generation; (3) collar grids for intersecting geometric components within the Chimera overlapped grid scheme; and (4) application of the Chimera overlapped grid scheme to simulation of Space Shuttle ascent flows.

9. New methods for the numerical integration of ordinary differential equations and their application to the equations of motion of spacecraft

NASA Technical Reports Server (NTRS)

Banyukevich, A.; Ziolkovski, K.

1975-01-01

A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.
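One of the methods this record names, Bulirsch-Stoer extrapolation, rests on a simple idea that can be sketched compactly: integrate with the modified-midpoint rule at several decreasing step sizes, then polynomial-extrapolate the results to step size zero. The code below (a generic textbook sketch, not the authors' implementation, shown for a scalar ODE y' = f(t, y)) uses a short, assumed step-number sequence:

```python
def modified_midpoint(f, t0, y0, H, n):
    """Advance y' = f(t, y) over one big step H using n midpoint substeps."""
    h = H / n
    ym, y = y0, y0 + h * f(t0, y0)
    for i in range(1, n):
        ym, y = y, ym + 2 * h * f(t0 + i * h, y)
    return 0.5 * (ym + y + h * f(t0 + H, y))   # smoothing endpoint formula

def bulirsch_stoer_step(f, t0, y0, H, seq=(2, 4, 6, 8)):
    """Richardson-extrapolate modified-midpoint results to h -> 0.
    The midpoint rule's error is a series in h^2, so Neville's algorithm
    is applied in the variable h^2."""
    T = []
    for k, n in enumerate(seq):
        row = [modified_midpoint(f, t0, y0, H, n)]
        for j in range(k):
            ratio = (seq[k] / seq[k - 1 - j]) ** 2
            row.append(row[j] + (row[j] - T[k - 1][j]) / (ratio - 1))
        T.append(row)
    return T[-1][-1]
```

On y' = y over [0, 1] the crude n = 2 midpoint result misses e by about 0.09, while the four-level extrapolation is accurate to a few parts in a million — the step-size economy that motivates the hybrid-method comparison above.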

10. Sull'Integrazione delle Strutture Numeriche nella Scuola dell'Obbligo (Integrating Numerical Structures in Mandatory School).

ERIC Educational Resources Information Center

Bonotto, C.

1995-01-01

Attempted to verify knowledge regarding decimal and rational numbers in children ages 10-14. Discusses how pupils can receive and assimilate extensions of the number system from natural numbers to decimals and fractions and later can integrate this extension into a single and coherent numerical structure. (Author/MKR)

11. The determination of the dynamical flattening J2 and the mass of Saturn via improving the orbits by numerical integration.

Shen, Kaixian

1990-12-01

The orbits of Iapetus and Titan have been generated by numerical integration using the Gauss-Jackson method and fitted to 1414 astrometric observations of Iapetus and Titan. The fit yielded a well-determined value of the dynamical flattening J2 of Saturn and of the Saturn/Sun mass ratio.

12. Integration of Online Parameter Identification and Neural Network for In-Flight Adaptive Control

NASA Technical Reports Server (NTRS)

Hageman, Jacob J.; Smith, Mark S.; Stachowiak, Susan

2003-01-01

An indirect adaptive system has been constructed for robust control of an aircraft with uncertain aerodynamic characteristics. This system consists of a multilayer perceptron pre-trained neural network, online stability and control derivative identification, a dynamic cell structure online learning neural network, and a model following control system based on the stochastic optimal feedforward and feedback technique. The pre-trained neural network and model following control system have been flight-tested, but the online parameter identification and online learning neural network are new additions used for in-flight adaptation of the control system model. A description of the modification and integration of these two stand-alone software packages into the complete system in preparation for initial flight tests is presented. Open-loop results using both simulation and flight data, as well as closed-loop performance of the complete system in a nonlinear, six-degree-of-freedom, flight validated simulation, are analyzed. Results show that this online learning system, in contrast to the nonlearning system, has the ability to adapt to changes in aerodynamic characteristics in a real-time, closed-loop, piloted simulation, resulting in improved flying qualities.

13. Developing integrated approaches to climate change adaptation in rural communities of the Peruvian Andes

Huggel, Christian

2010-05-01

Over centuries, Andean communities have developed strategies to cope with climate variability and extremes, such as cold waves or droughts, which can have severe impacts on their welfare. Nevertheless, the rural population, living at altitudes of 3000 to 4000 m asl or even higher, remains highly vulnerable to external stresses, partly because of the extreme living conditions, partly as a consequence of high poverty. Moreover, recent studies indicate that climatic extreme events have increased in frequency in the past years. A Peruvian-Swiss Climate Change Adaptation Programme in Peru (PACC) is currently undertaking strong efforts to understand the links between climatic conditions and local livelihood assets. The goal is to propose viable strategies for adaptation in collaboration with the local population and governments. The program considers three main areas of action, i.e. (i) water resource management; (ii) disaster risk reduction; and (iii) food security. The scientific studies carried out within the programme follow a highly transdisciplinary approach, spanning the whole range from natural and social sciences. Moreover, the scientific Peruvian-Swiss collaboration is closely connected to people and institutions operating at the implementation and political level. In this contribution we report on first results of thematic studies, address critical questions, and outline the potential of integrative research for climate change adaptation in mountain regions in the context of a developing country.

14. Adaptive behaviour and feedback processing integrate experience and instruction in reinforcement learning.

PubMed

Schiffer, Anne-Marike; Siletti, Kayla; Waszak, Florian; Yeung, Nick

2017-02-01

In any non-deterministic environment, unexpected events can indicate true changes in the world (and require behavioural adaptation) or reflect chance occurrence (and must be discounted). Adaptive behaviour requires distinguishing these possibilities. We investigated how humans achieve this by integrating high-level information from instruction and experience. In a series of EEG experiments, instructions modulated the perceived informativeness of feedback: Participants performed a novel probabilistic reinforcement learning task, receiving instructions about reliability of feedback or volatility of the environment. Importantly, our designs de-confound informativeness from surprise, which typically co-vary. Behavioural results indicate that participants used instructions to adapt their behaviour faster to changes in the environment when instructions indicated that negative feedback was more informative, even if it was simultaneously less surprising. This study is the first to show that neural markers of feedback anticipation (stimulus-preceding negativity) and of feedback processing (feedback-related negativity; FRN) reflect informativeness of unexpected feedback. Meanwhile, changes in P3 amplitude indicated imminent adjustments in behaviour. Collectively, our findings provide new evidence that high-level information interacts with experience-driven learning in a flexible manner, enabling human learners to make informed decisions about whether to persevere or explore new options, a pivotal ability in our complex environment.

15. Numerical simulation of a Richtmyer-Meshkov instability with an adaptive central-upwind sixth-order WENO scheme

Tritschler, V. K.; Hu, X. Y.; Hickel, S.; Adams, N. A.

2013-07-01

Two-dimensional simulations of the single-mode Richtmyer-Meshkov instability (RMI) are conducted and compared to experimental results of Jacobs and Krivets (2005 Phys. Fluids 17 034105). The employed adaptive central-upwind sixth-order weighted essentially non-oscillatory (WENO) scheme (Hu et al 2010 J. Comput. Phys. 229 8952-65) introduces only very small numerical dissipation while preserving the good shock-capturing properties of other standard WENO schemes. Hence, it is well suited for simulations with both small-scale features and strong gradients. A generalized Roe average is proposed to make the multicomponent model of Shyue (1998 J. Comput. Phys. 142 208-42) suitable for high-order accurate reconstruction schemes. A first sequence of single-fluid simulations is conducted and compared to the experiment. We find that the WENO-CU6 method better resolves small-scale structures, leading to earlier symmetry breaking and increased mixing. The first simulation, however, fails to correctly predict the global characteristic structures of the RMI. This is due to a mismatch of the post-shock parameters in single-fluid simulations when the pre-shock states are matched with the experiment. When the post-shock parameters are matched, much better agreement with the experimental data is achieved. In a sequence of multifluid simulations, the uncertainty in the density gradient associated with transition between the fluids is assessed. Thereby the multifluid simulations show a considerable improvement over the single-fluid simulations.

16. Adaptation of Chinese Graduate Students to the Academic Integrity Requirements of a U.S. University: A Mixed Methods Research

ERIC Educational Resources Information Center

Jian, Hu

2012-01-01

The purpose of this mixed method study was to investigate how graduates originating from mainland China adapt to the U.S. academic integrity requirements. In the first, quantitative phase of the study, the research questions focused on understanding the state of academic integrity in China. This guiding question was divided into two sub-questions,…

17. Integration of proteomics and metabolomics to elucidate metabolic adaptation in Leishmania.

PubMed

Akpunarlieva, Snezhana; Weidt, Stefan; Lamasudin, Dhilia; Naula, Christina; Henderson, David; Barrett, Michael; Burgess, Karl; Burchmore, Richard

2017-02-23

Leishmania parasites multiply and develop in the gut of a sand fly vector in order to be transmitted to a vertebrate host. During this process they encounter and exploit various nutrients, including sugars, and amino and fatty acids. We have previously generated a mutant Leishmania line that is deficient in glucose transport and which displays some biologically important phenotypic changes such as reduced growth in axenic culture, reduced biosynthesis of hexose-containing virulence factors, increased sensitivity to oxidative stress, and dramatically reduced parasite burden in both insect vector and macrophage host cells. Here we report the generation and integration of proteomic and metabolomic approaches to identify molecular changes that may explain these phenotypes. Our data suggest changes in pathways of glycoconjugate production and redox homeostasis, which likely represent adaptations to the loss of sugar uptake capacity and explain the reduced virulence of this mutant in sand flies and mammals. Our data contribute to understanding the mechanisms of metabolic adaptation in Leishmania and illustrate the power of integrated proteomic and metabolomic approaches to relate biochemistry to phenotype.

18. Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0

Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun

2013-02-01

We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals. Program summary: Program title: SecDec 2.0 Catalogue identifier: AEIR_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 156829 No. of bytes in distributed program, including test data, etc.: 2137907 Distribution format: tar.gz Programming language: Wolfram Mathematica, Perl, Fortran/C++. Computer: From a single PC to a cluster, depending on the problem. Operating system: Unix, Linux. RAM: Depending on the complexity of the problem. Classification: 4.4, 5, 11.1. Catalogue identifier of previous version: AEIR_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182(2011)1566 Does the new version supersede the previous version?: Yes Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds). Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization
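The pole-extraction step at the core of sector decomposition can be illustrated on a one-dimensional toy integral (this is a pedagogical sketch, not SecDec's multi-dimensional machinery): for I(eps) = ∫₀¹ x^(−1+eps) g(x) dx, subtracting g(0) isolates the 1/eps pole analytically, leaving a remainder that is finite at eps = 0 and can be integrated numerically.

```python
def pole_and_finite(g, n=100000):
    """For I(eps) = ∫_0^1 x^(-1+eps) g(x) dx, return (residue of the 1/eps
    pole, finite part at eps = 0). Writing g(x) = g(0) + [g(x) - g(0)] gives
    I(eps) = g(0)/eps + ∫_0^1 (g(x) - g(0))/x dx + O(eps); the remaining
    integrand is bounded at x = 0 and is evaluated here by the midpoint rule."""
    residue = g(0.0)
    h = 1.0 / n
    finite = sum((g((i + 0.5) * h) - residue) / ((i + 0.5) * h)
                 for i in range(n)) * h
    return residue, finite
```

For g(x) = e^x this gives residue 1 and finite part ∫₀¹ (e^x − 1)/x dx ≈ 1.31790, matching the known series Σ 1/(n·n!).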

19. Integrated analysis considered mitigation cost, damage cost and adaptation cost in Northeast Asia

Park, J. H.; Lee, D. K.; Kim, H. G.; Sung, S.; Jung, T. Y.

2015-12-01

Various studies show that climate change is raising temperatures and driving storms, cold snaps, heavy rainfall, and droughts, and these disasters have inflicted damage on mankind. The World Risk Report (2012, The Nature Conservancy) and UNU-EHS (the United Nations University Institute for Environment and Human Security) reported that more and more people worldwide are exposed to abnormal weather such as floods, droughts, earthquakes, typhoons, and hurricanes. In particular, Korea is influenced by various pollutants generated in the neighboring Northeast Asian countries, China and Japan, due to its geographical and meteorological characteristics. These contaminants, together with the pollutants generated in Korea, have a significant impact on air quality. Recently, countries around the world have continued their efforts to reduce greenhouse gases and improve air quality in conjunction with national or regional development priorities. China is likewise making various efforts, in accordance with international trends, to cope with climate change and air pollution. In the future, the effects of climate change and air quality in Korea and Northeast Asia will change greatly according to China's growth and mitigation policies. The purpose of this study is to minimize the damage caused by climate change on the Korean peninsula through an integrated approach taking into account mitigation and adaptation plans. This study suggests a climate change strategy at the national level by means of a comprehensive economic analysis of the impacts and mitigation of climate change. In order to quantify the impacts and damage costs caused by climate change scenarios at a regional scale, priority variables should be selected in accordance with climate change impact assessments. A sectoral impact assessment was carried out on the basis of the selected variables, and from this a methodology for estimating damage costs and adaptation costs was derived. The methodology was then applied in Korea

20. Orbit determination based on meteor observations using numerical integration of equations of motion

Dmitriev, Vasily; Lupovka, Valery; Gritsevich, Maria

2015-11-01

Recently, there has been a worldwide proliferation of instruments and networks dedicated to observing meteors, including airborne and future space-based monitoring systems. There has been a corresponding rapid rise in high quality data accumulating annually. In this paper, we present a method embodied in the open-source software program "Meteor Toolkit", which can effectively and accurately process these data in an automated mode and discover the pre-impact orbit and possibly the origin or parent body of a meteoroid or asteroid. The required input parameters are the topocentric pre-atmospheric velocity vector and the coordinates of the atmospheric entry point of the meteoroid, i.e. the beginning point of the visual path of a meteor, in an Earth-centered, Earth-fixed coordinate system, the International Terrestrial Reference Frame (ITRF). Our method is based on strict coordinate transformation from the ITRF to an inertial reference frame and on numerical integration of the equations of motion for a perturbed two-body problem. Basic accelerations perturbing a meteoroid's orbit and their influence on the orbital elements are also studied and demonstrated. Our method is then compared with several published studies that utilized variations of a traditional analytical technique, the zenith attraction method, which corrects for the direction of the meteor's trajectory and its apparent velocity due to Earth's gravity. We then demonstrate the proposed technique on new observational data obtained from the Finnish Fireball Network (FFN) as well as on simulated data. In addition, we propose a method of analysis of error propagation, based on the general rule of covariance transformation.
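The basic machinery behind such orbit determination — propagating a state vector under gravity and recovering orbital elements — can be sketched for the unperturbed two-body case (the authors' perturbations, frame transformations, and covariance analysis are omitted; the use of Earth's GM here is an assumption for the test orbit):

```python
import math

MU = 3.986004418e14   # Earth's GM, m^3 s^-2

def accel(r):
    """Two-body gravitational acceleration a = -MU * r / |r|^3."""
    d = math.sqrt(r[0] ** 2 + r[1] ** 2 + r[2] ** 2)
    return [-MU * x / d ** 3 for x in r]

def rk4_step(r, v, dt):
    """One RK4 step of the coupled system r' = v, v' = accel(r)."""
    def add(a, b, s):                       # componentwise a + s*b
        return [a[i] + s * b[i] for i in range(3)]
    k1r, k1v = v, accel(r)
    k2r, k2v = add(v, k1v, 0.5 * dt), accel(add(r, k1r, 0.5 * dt))
    k3r, k3v = add(v, k2v, 0.5 * dt), accel(add(r, k2r, 0.5 * dt))
    k4r, k4v = add(v, k3v, dt), accel(add(r, k3r, dt))
    r = [r[i] + dt * (k1r[i] + 2 * k2r[i] + 2 * k3r[i] + k4r[i]) / 6.0 for i in range(3)]
    v = [v[i] + dt * (k1v[i] + 2 * k2v[i] + 2 * k3v[i] + k4v[i]) / 6.0 for i in range(3)]
    return r, v

def semi_major_axis(r, v):
    """Vis-viva integral: 1/a = 2/|r| - |v|^2 / MU."""
    d = math.sqrt(sum(x * x for x in r))
    return 1.0 / (2.0 / d - sum(x * x for x in v) / MU)
```

For a circular test orbit the semi-major axis recovered from the propagated state stays at its initial value to high precision, a standard sanity check before adding the perturbing accelerations the paper studies.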

1. Chemical adaptability: the integration of different kinds of matter into giant molecular metal oxides.

PubMed

Müller, Achim; Merca, Alice; Al-Karawi, Ahmed Jasim M; Garai, Somenath; Bögge, Hartmut; Hou, Guangfeng; Wu, Lixin; Haupt, Erhard T K; Rehder, Dieter; Haso, Fadi; Liu, Tianbo

2012-12-14

Unique properties of the two giant wheel-shaped molybdenum-oxides of the type {Mo(154)}≡[{Mo(2)}{Mo(8)}{Mo(1)}](14) (1) and {Mo(176)}≡[{Mo(2)}{Mo(8)}{Mo(1)}](16) (2) that have the same building blocks either 14 or 16 times, respectively, are considered and show a "chemical adaptability" as a new phenomenon regarding the integration of a large number of appropriate cations and anions, for example, in form of the large "salt-like" {M(SO(4))}(16) rings (M = K(+), NH(4)(+)), while the two resulting {Mo(146)(K(SO(4)))(16)} (3) and {Mo(146)(NH(4)(SO(4)))(16)} (4) type hybrid compounds have the same shape as the parent ring structures. The chemical adaptability, which also allows the integration of anions and cations even at the same positions in the {Mo(4)O(6)}-type units of 1 and 2, is caused by easy changes in constitution by reorganisation and simultaneous release of (some) building blocks (one example: two opposite orientations of the same functional groups, that is, of H(2)O{Mo=O} (I) and O={Mo(H(2)O)} (II) are possible). Whereas Cu(2+) in [(H(4)Cu(II)(5))Mo(V)(28)Mo(VI)(114)O(432)(H(2)O)(58)](26-) (5 a) is simply coordinated to two parent O(2-) ions of {Mo(4)O(6)} and to two fragments of type II, the SO(4)(2-) integration in 3 and 4 occurs through the substitution of two oxo ligands of {Mo(4)O(6)} as well as two H(2)O ligands of fragment I. Complexes 3 and now 4 were characterised by different physical methods, for example, solutions of 4 in DMSO with sophisticated NMR spectroscopy (EXSY, DOSY and HSQC). The NH(4)(+) ions integrated in the cluster anion of 4 "communicate" with those in solution in the sense that the related H(+) ion exchange is in equilibrium. The important message: the reported "chemical adaptability" has its formal counterpart in solutions of "molybdates", which can form unique dynamic libraries containing constituents/building blocks that may form and break reversibly and can lead to the isolation of a variety of giant clusters with

2. Neuronal Spike Timing Adaptation Described with a Fractional Leaky Integrate-and-Fire Model

PubMed Central

Teka, Wondimu; Marinov, Toma M.; Santamaria, Fidel

2014-01-01

The voltage trace of neuronal activities can follow multiple timescale dynamics that arise from correlated membrane conductances. Such processes can result in power-law behavior in which the membrane voltage cannot be characterized with a single time constant. The emergent effect of these membrane correlations is a non-Markovian process that can be modeled with a fractional derivative. A fractional derivative is a non-local process in which the value of the variable is determined by integrating a temporal weighted voltage trace, also called the memory trace. Here we developed and analyzed a fractional leaky integrate-and-fire model in which the exponent of the fractional derivative can vary from 0 to 1, with 1 representing the normal derivative. As the exponent of the fractional derivative decreases, the weights of the voltage trace increase. Thus, the value of the voltage is increasingly correlated with the trajectory of the voltage in the past. By varying only the fractional exponent, our model can reproduce upward and downward spike adaptations found experimentally in neocortical pyramidal cells and tectal neurons in vitro. The model also produces spikes with longer first-spike latency and high inter-spike variability with power-law distribution. We further analyze spike adaptation and the responses to noisy and oscillatory input. The fractional model generates reliable spike patterns in response to noisy input. Overall, the spiking activity of the fractional leaky integrate-and-fire model deviates from the spiking activity of the Markovian model and reflects the temporal accumulated intrinsic membrane dynamics that affect the response of the neuron to external stimulation. PMID:24675903
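
The memory-trace mechanism described above can be sketched with a Grünwald-Letnikov discretization of the fractional derivative. This is a hedged illustration of the model class, not the authors' code: the parameter values, the reset rule, and the function name `fractional_lif` are illustrative assumptions.

```python
import numpy as np

def fractional_lif(alpha, I, T=100.0, dt=0.1, tau=10.0, v_rest=-70.0,
                   v_thresh=-50.0, v_reset=-70.0, R=10.0):
    """Fractional leaky integrate-and-fire neuron via an explicit
    Gruenwald-Letnikov scheme (illustrative parameters, not the paper's)."""
    n = int(T / dt)
    # GL binomial weights: c_0 = 1, c_k = c_{k-1} * (1 - (1 + alpha) / k).
    # For alpha = 1 all weights beyond c_1 = -1 vanish (ordinary Euler).
    c = np.empty(n + 1)
    c[0] = 1.0
    for k in range(1, n + 1):
        c[k] = c[k - 1] * (1.0 - (1.0 + alpha) / k)
    v = np.empty(n + 1)
    v[0] = v_rest
    spikes = []
    for i in range(n):
        # memory trace: the full weighted voltage history enters each step
        hist = np.dot(c[1:i + 2][::-1], v[:i + 1])
        dvdt = (-(v[i] - v_rest) + R * I) / tau
        v[i + 1] = dt**alpha * dvdt - hist
        if v[i + 1] >= v_thresh:            # threshold crossing -> spike
            spikes.append((i + 1) * dt)
            v[i + 1] = v_reset
    return v, spikes
```

Lowering `alpha` below 1 increases the weight of the past trajectory in each update, which is the mechanism behind the spike-timing adaptation the abstract describes.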

4. Compact integration factor methods for complex domains and adaptive mesh refinement.

PubMed

Liu, Xinfeng; Nie, Qing

2010-08-10

The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was recently introduced for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of the exponential matrices associated with the diffusion operators in two and three spatial dimensions on Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation of the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF to other curvilinear coordinates through the examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has computational efficiency and stability properties similar to those of cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition of cIIF. Because the second-order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR and curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed.
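
The core integration-factor idea is easy to show in one dimension on a periodic domain: integrate the stiff diffusion term exactly via its exponential and advance the reaction term explicitly. This is a minimal first-order (Lawson-Euler) sketch, not the paper's compact (cIIF) or curvilinear formulation; the function name and step sizes are assumptions.

```python
import numpy as np

def if_euler(u0, f, D=1.0, dt=0.1, steps=10):
    """First-order integration-factor scheme for u_t = D*u_xx + f(u) on a
    2*pi-periodic grid: diffusion is integrated exactly in Fourier space,
    the reaction term f(u) is advanced explicitly (Lawson-Euler sketch)."""
    n = u0.size
    k = np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers
    decay = np.exp(-D * k**2 * dt)         # exact diffusion propagator
    u = u0.copy()
    for _ in range(steps):
        u_hat = np.fft.fft(u) + dt * np.fft.fft(f(u))
        u = np.real(np.fft.ifft(decay * u_hat))
    return u
```

Because the diffusion operator is handled exactly, the time step is not bound by the usual dt ~ dx^2 diffusive stability constraint, which is what makes this family attractive in combination with AMR and its very small local mesh sizes.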

5. Adaptive HIFU noise cancellation for simultaneous therapy and imaging using an integrated HIFU/imaging transducer

Jeong, Jong Seob; Cannata, Jonathan Matthew; Shung, K. Kirk

2010-04-01

It was previously demonstrated that it is feasible to simultaneously perform ultrasound therapy and imaging of a coagulated lesion during treatment with an integrated transducer that is capable of high intensity focused ultrasound (HIFU) and B-mode ultrasound imaging. It was found that coded excitation and fixed notch filtering upon reception could significantly reduce interference caused by the therapeutic transducer. During HIFU sonication, the imaging signal generated with coded excitation and fixed notch filtering had a range side-lobe level of less than -40 dB, while traditional short-pulse excitation and fixed notch filtering produced a range side-lobe level of -20 dB. The shortcoming is, however, that relatively complicated electronics may be needed to utilize coded excitation in an array imaging system. It is for this reason that in this paper an adaptive noise canceling technique is proposed to improve image quality by minimizing not only the therapeutic interference, but also the remnant side-lobe 'ripples' when using the traditional short-pulse excitation. The performance of this technique was verified through simulation and experiments using a prototype integrated HIFU/imaging transducer. Although it is known that the remnant ripples are related to the notch attenuation value of the fixed notch filter, in reality, it is difficult to find the optimal notch attenuation value due to changes in the target or the medium, resulting from motion or differing acoustic properties, even during a single sonication pulse. In contrast, the proposed adaptive noise canceling technique is capable of optimally minimizing both the therapeutic interference and residual ripples without such constraints. The prototype integrated HIFU/imaging transducer is composed of three rectangular elements. The 6 MHz center element is used for imaging and the outer two identical 4 MHz elements work together to transmit the HIFU beam. Two HIFU elements of 14.4 mm × 20.0 mm dimensions could

6. A quartic B-spline based explicit time integration scheme for structural dynamics with controllable numerical dissipation

Wen, W. B.; Duan, S. Y.; Yan, J.; Ma, Y. B.; Wei, K.; Fang, D. N.

2017-03-01

An explicit time integration scheme based on quartic B-splines is presented for solving linear structural dynamics problems. The scheme belongs to a one-parameter family of schemes in which a free algorithmic parameter controls stability, accuracy and numerical dispersion. The proposed scheme possesses at least second-order accuracy and at most third-order accuracy. A 2D wave problem is analyzed to demonstrate the effectiveness of the proposed scheme in reducing high-frequency modes and retaining low-frequency modes. Beyond general structural dynamics, the proposed scheme can be used effectively for wave propagation problems in which numerical dissipation is needed to reduce spurious oscillations.

7. SINDA'85/FLUINT - SYSTEMS IMPROVED NUMERICAL DIFFERENCING ANALYZER AND FLUID INTEGRATOR (CONVEX VERSION)

NASA Technical Reports Server (NTRS)

Cullimore, B.

1994-01-01

SINDA, the Systems Improved Numerical Differencing Analyzer, is a software system for solving lumped parameter representations of physical problems governed by diffusion-type equations. SINDA was originally designed for analyzing thermal systems represented in electrical analog, lumped parameter form, although its use may be extended to include other classes of physical systems which can be modeled in this form. As a thermal analyzer, SINDA can handle such interrelated phenomena as sublimation, diffuse radiation within enclosures, transport delay effects, and sensitivity analysis. FLUINT, the FLUid INTegrator, is an advanced one-dimensional fluid analysis program that solves arbitrary fluid flow networks. The working fluids can be single phase vapor, single phase liquid, or two phase. The SINDA'85/FLUINT system permits the mutual influences of thermal and fluid problems to be analyzed. The SINDA system consists of a programming language, a preprocessor, and a subroutine library. The SINDA language is designed for working with lumped parameter representations and finite difference solution techniques. The preprocessor accepts programs written in the SINDA language and converts them into standard FORTRAN. The SINDA library consists of a large number of FORTRAN subroutines that perform a variety of commonly needed actions. The use of these subroutines can greatly reduce the programming effort required to solve many problems. A complete run of a SINDA'85/FLUINT model is a four step process. First, the user's desired model is run through the preprocessor which writes out data files for the processor to read and translates the user's program code. Second, the translated code is compiled. The third step requires linking the user's code with the processor library. Finally, the processor is executed. SINDA'85/FLUINT program features include 20,000 nodes, 100,000 conductors, 100 thermal submodels, and 10 fluid submodels. SINDA'85/FLUINT can also model two phase flow

8. Linked Environment for Atmospheric Discovery (LEAD): Transforming the Sensing and Numerical Prediction of High Impact Local Weather Through Dynamic Adaptation

Ramamurthy, M.; Droegemeier, K.

2006-12-01

Those who have experienced the devastation of a tornado, the raging waters of a flash flood, or the paralyzing impacts of lake-effect snows understand that mesoscale weather develops rapidly, often with considerable uncertainty with regard to location. Such weather is also locally intense and frequently influenced by processes on both larger and smaller scales. Ironically, few of the technologies used to observe the atmosphere, predict its evolution, and compute, transmit, or store information about it operate in a manner that accommodates the dynamic behavior of mesoscale weather. Radars do not adaptively scan specific regions of thunderstorms; numerical models are run largely on fixed time schedules in fixed configurations; and cyberinfrastructure does not allow meteorological tools to run on-demand, change configurations in response to the weather, or provide the fault tolerance needed for rapid reconfiguration. As a result, today's weather technology is highly constrained and far from optimal when applied to any particular situation. This presentation describes a major paradigm shift now underway in the field of meteorology -- away from today's environment in which remote sensing systems, atmospheric prediction models, and hazardous weather detection systems operate in fixed configurations, and on fixed schedules largely independent of weather -- to one in which they can change their configuration dynamically in response to the evolving weather. A major driver of this change is a project known as Linked Environments for Atmospheric Discovery (LEAD) -- a 5-year NSF Large Information Technology Research (ITR) grant that is developing cyberinfrastructure to allow scientists, students, tools and sensors to interact with weather. This presentation will describe the research and technology development being performed to establish this capability

9. The B-cell antigen receptor integrates adaptive and innate immune signals

PubMed Central

Otipoby, Kevin L.; Waisman, Ari; Derudder, Emmanuel; Srinivasan, Lakshmi; Franklin, Andrew; Rajewsky, Klaus

2015-01-01

B cells respond to antigens by engagement of their B-cell antigen receptor (BCR) and of coreceptors through which signals from helper T cells or pathogen-associated molecular patterns are delivered. We show that the proliferative response of B cells to the latter stimuli is controlled by BCR-dependent activation of phosphoinositidyl 3-kinase (PI-3K) signaling. Glycogen synthase kinase 3β and Foxo1 are two PI-3K-regulated targets that play important roles, but to different extents, depending on the specific mitogen. These results suggest a model for integrating signals from the innate and the adaptive immune systems in the control of the B-cell immune response. PMID:26371314

10. Integration of coping and social support perspectives: implications for the study of adaptation to chronic diseases.

PubMed

Schreurs, K M; de Ridder, D T

1997-01-01

In this article, empirical studies dealing with the relationship between coping and social support are discussed in order to identify promising themes for research on adaptation to chronic diseases. Although only a few studies deal with this issue explicitly, the review reveals that four ways to study the relationship between coping and social support can be distinguished: (a) seeking social support as a coping strategy; (b) social support as a coping resource; (c) social support as dependent on the way individual patients cope; and (d) coping by a social system. It is argued that all four ways of integrating coping and social support contribute to a better understanding of adaptation to chronic diseases. However, exploring the interrelatedness of both concepts by studying social support as a coping resource and social support as dependent on the patient's own coping behavior appears to be especially fruitful in the short term, as these approaches: (a) provide better insight into the social determinants of coping, and (b) may help to clarify the way social support affects health and well-being.

11. Integrity of the osteocyte bone cell network in osteoporotic fracture: Implications for mechanical load adaptation

Kuliwaba, J. S.; Truong, L.; Codrington, J. D.; Fazzalari, N. L.

2010-06-01

The human skeleton has the ability to modify its material composition and structure to accommodate loads through adaptive modelling and remodelling. The osteocyte cell network is now considered to be central to the regulation of skeletal homeostasis; however, very little is known of the integrity of the osteocyte cell network in osteoporotic fragility fracture. This study was designed to characterise osteocyte morphology, the extent of osteocyte cell apoptosis and expression of sclerostin protein (a negative regulator of bone formation) in trabecular bone from the intertrochanteric region of the proximal femur, for postmenopausal women with fragility hip fracture compared to age-matched women who had not sustained fragility fracture. Osteocyte morphology (osteocyte, empty lacunar, and total lacunar densities) and the degree of osteocyte apoptosis (percent caspase-3 positive osteocyte lacunae) were similar between the fracture patients and non-fracture women. The fragility hip fracture patients had a lower proportion of sclerostin-positive osteocyte lacunae in comparison to sclerostin-negative osteocyte lacunae, in contrast to similar percent sclerostin-positive/sclerostin-negative lacunae for non-fracture women. The unexpected finding of decreased sclerostin expression in trabecular bone osteocytes from fracture cases may be indicative of elevated bone turnover and under-mineralisation, characteristic of postmenopausal osteoporosis. Further, altered osteocytic expression of sclerostin may be involved in the mechano-responsiveness of bone. Optimal function of the osteocyte cell network is likely to be a critical determinant of bone strength, acting via mechanical load adaptation, and thus contributing to osteoporotic fracture risk.

12. Adaptive integral LOS path following for an unmanned airship with uncertainties based on robust RBFNN backstepping.

PubMed

Zheng, Zewei; Zou, Yao

2016-11-01

This paper investigates the path following control problem for an unmanned airship in the presence of unknown wind and uncertainties. The backstepping technique augmented by a robust adaptive radial basis function neural network (RBFNN) is employed as the main control framework. Based on the horizontal dynamic model of the airship, an improved adaptive integral line-of-sight (LOS) guidance law is first proposed, which is applicable to arbitrary parametric paths. The guidance law calculates the desired yaw angle and estimates the wind. Then the controller is extended to cope with the airship yaw tracking and velocity control by resorting to the augmented backstepping technique. The uncertainties of the dynamics are compensated by using the robust RBFNNs. Each robust RBFNN utilizes an nth-order smooth switching function to combine a conventional RBFNN with a robust control. The conventional RBFNN dominates in the neural active region, while the robust control retrieves the transient outside the active region, so that the stability range can be widened. Stability analysis shows that the controlled closed-loop system is globally uniformly ultimately bounded. Simulations are provided to validate the effectiveness of the proposed control approach.
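
The integral LOS idea can be illustrated with the conventional (non-adaptive) ILOS guidance law; the gains `delta` and `sigma`, the function name, and the update form are illustrative assumptions of the standard marine-robotics formulation, not the paper's wind-estimating adaptive design.

```python
import math

def ilos_step(cross_err, y_int, path_angle, dt, delta=5.0, sigma=0.1):
    """One step of a conventional integral line-of-sight guidance law.
    cross_err: cross-track error to the path; y_int: integral state that
    builds up a steady heading offset to reject constant disturbances
    such as wind (illustrative gains, not the paper's adaptive law)."""
    psi_d = path_angle - math.atan2(cross_err + sigma * y_int, delta)
    # standard ILOS integral-state update; the denominator limits windup
    y_int += dt * delta * cross_err / (
        delta**2 + (cross_err + sigma * y_int)**2)
    return psi_d, y_int
```

A positive cross-track error steers the desired yaw below the path angle (toward the path), and a persistent error slowly charges `y_int`, producing the steady crab angle needed under constant wind.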

13. Fuzzy adaptive interacting multiple model nonlinear filter for integrated navigation sensor fusion.

PubMed

Tseng, Chien-Hao; Chang, Chih-Wen; Jwo, Dah-Jing

2011-01-01

In this paper, the application of the fuzzy interacting multiple model unscented Kalman filter (FUZZY-IMMUKF) approach to integrated navigation processing for the maneuvering vehicle is presented. The unscented Kalman filter (UKF) employs a set of sigma points through deterministic sampling, such that a linearization process is not necessary, and therefore the errors caused by linearization as in the traditional extended Kalman filter (EKF) can be avoided. The nonlinear filters naturally suffer, to some extent, the same problem as the EKF for which the uncertainty of the process noise and measurement noise will degrade the performance. As a structural adaptation (model switching) mechanism, the interacting multiple model (IMM), which describes a set of switching models, can be utilized for determining the adequate value of process noise covariance. The fuzzy logic adaptive system (FLAS) is employed to determine the lower and upper bounds of the system noise through the fuzzy inference system (FIS). The resulting sensor fusion strategy can efficiently deal with the nonlinear problem for the vehicle navigation. The proposed FUZZY-IMMUKF algorithm shows remarkable improvement in the navigation estimation accuracy as compared to the relatively conventional approaches such as the UKF and IMMUKF.

14. Controlled Aeroelastic Response and Airfoil Shaping Using Adaptive Materials and Integrated Systems

NASA Technical Reports Server (NTRS)

Pinkerton, Jennifer L.; McGowan, Anna-Maria R.; Moses, Robert W.; Scott, Robert C.; Heeg, Jennifer

1996-01-01

This paper presents an overview of several activities of the Aeroelasticity Branch at the NASA Langley Research Center in the area of applying adaptive materials and integrated systems for controlling both aircraft aeroelastic response and airfoil shape. The experimental results of four programs are discussed: the Piezoelectric Aeroelastic Response Tailoring Investigation (PARTI); the Adaptive Neural Control of Aeroelastic Response (ANCAR) program; the Actively Controlled Response of Buffet Affected Tails (ACROBAT) program; and the Airfoil THUNDER Testing to Ascertain Characteristics (ATTACH) project. The PARTI program demonstrated active flutter control and significant reductions in aeroelastic response at dynamic pressures below flutter using piezoelectric actuators. The ANCAR program seeks to demonstrate the effectiveness of using neural networks to schedule flutter suppression control laws. The ACROBAT program studied the effectiveness of a number of candidate actuators, including a rudder and piezoelectric actuators, to alleviate vertical tail buffeting. In the ATTACH project, the feasibility of using Thin-Layer Composite-Unimorph Piezoelectric Driver and Sensor (THUNDER) wafers to control airfoil aerodynamic characteristics was investigated. Plans for future applications are also discussed.

15. Testing and integrating the laser system of ARGOS: the ground layer adaptive optics for LBT

Loose, C.; Rabien, S.; Barl, L.; Borelli, J.; Deysenroth, M.; Gaessler, W.; Gemperlein, H.; Honsberg, M.; Kulas, M.; Lederer, R.; Raab, W.; Rahmer, G.; Ziegleder, J.

2012-07-01

The Laser Guide Star facility ARGOS will provide Ground Layer Adaptive Optics to the Large Binocular Telescope (LBT). The system operates three pulsed laser beacons above each of the two primary mirrors, which are Rayleigh-scattered at an altitude of 12 km. This enables correction over a wide field of view, using the adaptive secondary mirror of the LBT. The ARGOS laser system is designed around commercially available, pulsed Nd:YAG lasers working at 532 nm. In preparation for a successful commissioning, it is important to ascertain that the specifications are met for every component of the laser system. The testing of assembled, optical subsystems is likewise necessary. In particular it is required to confirm a high output power, beam quality and pulse stability of the beacons. In a second step, the integrated laser system along with its electronic cabinets is installed on a telescope simulator. This unit is capable of carrying the whole assembly and can be tilted to imitate working conditions at the LBT. It allows alignment and functionality testing of the entire system, ensuring that flexure compensation and system diagnosis work properly in different orientations.

16. The adaptive optics beam steering mirror for the GMT Integral-Field Spectrograph, GMTIFS

Sharp, R.; Boz, R.; Hart, J.; Bloxham, G.; Bundy, D.; Davis, J.; McGregor, P. J.; Nielson, J.; Vest, C.; Young, P. J.

2014-07-01

To achieve the high adaptive optics sky coverage necessary to allow the GMT Integral-Field Spectrograph to access key scientific targets, the on-instrument adaptive-optics wavefront-sensing system must patrol the full 180 arcsecond diameter guide field passed to the instrument. Starlight must be held stationary on the wavefront sensor (accounting for flexure, differential refraction and non-sidereal tracking rates) to ~ 1 milliarcsecond to provide the stable position reference signal for deep AO observations and avoid introducing image blur. Hence a tight tolerance of 1/180,000 is placed on the positioning and encoding accuracy for the cryogenic On-Instrument Wave-Front Sensor feed. GMTIFS will achieve this requirement using a beam-steering mirror system as an optical relay for starlight from across the accessible guide field. The system avoids hysteresis and backlash by eliminating friction and avoiding gearing while maintaining high setting speed and accuracy with a precision feedback loop. Here we present the design of the relay system and the technical solution deployed to meet the challenging specifications for drive rate, accuracy and positional encoding of the beam-steering system.

17. Efficient integration of spectral features for vehicle tracking utilizing an adaptive sensor

Uzkent, Burak; Hoffman, Matthew J.; Vodacek, Anthony

2015-03-01

Object tracking in urban environments is an important and challenging problem that is traditionally tackled using visible and near infrared wavelengths. By inserting extended data such as spectral features of the objects, one can improve the reliability of the identification process. However, the huge increase in data created by hyperspectral imaging is usually prohibitive. To overcome the complexity problem, we propose a persistent air-to-ground target tracking system inspired by a state-of-the-art, adaptive, multi-modal sensor. The adaptive sensor is capable of providing panchromatic images as well as the spectra of desired pixels. This addresses the data challenge of hyperspectral tracking by only recording spectral data as needed. Spectral likelihoods are integrated into a data association algorithm in a Bayesian fashion to minimize the likelihood of misidentification. A framework for controlling spectral data collection is developed by incorporating motion segmentation information and prior information from Gaussian sum filter (GSF) movement predictions generated from a multi-model forecasting set. An intersection mask of the surveillance area is extracted from the OpenStreetMap source and incorporated into the tracking algorithm to perform online refinement of the multiple-model set. The proposed system is tested using challenging and realistic scenarios generated in an adverse environment.

18. Alertness Modulates Conflict Adaptation and Feature Integration in an Opposite Way

PubMed Central

Chen, Jia; Huang, Xiting; Chen, Antao

2013-01-01

19. On testing a subroutine for the numerical integration of ordinary differential equations

NASA Technical Reports Server (NTRS)

Krogh, F. T.

1973-01-01

This paper discusses how to numerically test a subroutine for the solution of ordinary differential equations. Results obtained with a variable order Adams method are given for eleven simple test cases.
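
The kind of testing described can be sketched as follows: integrate problems with known closed-form solutions and check both the error and the observed convergence order. A classical RK4 stepper stands in here for the variable-order Adams subroutine, which is not reproduced in the abstract.

```python
import math

def rk4(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n classical RK4 steps
    (a stand-in for the subroutine under test, which is not shown)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

def observed_order(f, exact, y0, t0, t1, n=40):
    """Empirical convergence order from halving the step size: a basic
    check that an integrator achieves its nominal order on a test case."""
    e1 = abs(rk4(f, y0, t0, t1, n) - exact(t1))
    e2 = abs(rk4(f, y0, t0, t1, 2 * n) - exact(t1))
    return math.log2(e1 / e2)
```

On y' = -y with exact solution e^(-t), the measured order should land near 4, the nominal order of RK4; a variable-order code would be exercised the same way across its order range.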

20. Assessing the bio-mitigation effect of integrated multi-trophic aquaculture on marine environment by a numerical approach.

PubMed

Zhang, Junbo; Kitazawa, Daisuke

2016-09-15

With increasing concern over the aquatic environment in marine culture, the integrated multi-trophic aquaculture (IMTA) has received extensive attention in recent years. A three-dimensional numerical ocean model is developed to explore the negative impacts of aquaculture wastes and assess the bio-mitigation effect of IMTA systems on marine environments. Numerical results showed that the concentration of surface phytoplankton could be controlled by planting seaweed (a maximum reduction of 30%), and the percentage change in the improvement of bottom dissolved oxygen concentration increased to 35% at maximum due to the ingestion of organic wastes by sea cucumbers. Numerical simulations indicate that seaweeds need to be harvested in a timely manner for maximal absorption of nutrients, and the initial stocking density of sea cucumbers >3.9 individuals m(-2) is preferred to further eliminate the organic wastes sinking down to the sea bottom.

1. Numerical Treatment of Differential and Integral Equations by the P and H-P Versions of the Finite Element Method

DTIC Science & Technology

1992-01-01

mathematical papers which describe various locking effects and analyze methods (mainly mixed methods) to overcome it. However, the treatment in these...finite element method in various areas, such as the numerical approximation of three-dimensional PDEs and integral equations, the investigation of mixed ... methods for these versions and, most importantly, the uniform approximation of parameter-dependent problems by these versions. By the p version, we

2. On time discretizations for spectral methods. [numerical integration of Fourier and Chebyshev methods for dynamic partial differential equations

NASA Technical Reports Server (NTRS)

Gottlieb, D.; Turkel, E.

1980-01-01

New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.
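
A minimal pairing of a Fourier spectral space discretization with Runge-Kutta time stepping, applied to the linear advection equation u_t + c u_x = 0, illustrates the method class discussed above. This is a hedged sketch, not the paper's schemes (which also cover Chebyshev and leapfrog variants); the function name and step counts are assumptions.

```python
import numpy as np

def advect_fourier(u0, c=1.0, T=1.0, steps=1000):
    """Advance u_t + c*u_x = 0 on a 2*pi-periodic grid: the spatial
    derivative is exact in Fourier space, time stepping is classical RK4."""
    n = u0.size
    ik = 1j * np.fft.fftfreq(n, d=1.0 / n)     # spectral d/dx symbol
    rhs = lambda u: -c * np.real(np.fft.ifft(ik * np.fft.fft(u)))
    u, dt = u0.copy(), T / steps
    for _ in range(steps):
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u
```

For smooth periodic data the spatial error is negligible, so the observed error is essentially the O(dt^4) time-stepping error; the explicit RK4 step remains stability-limited by the largest resolved wavenumber, which is exactly the restriction the abstract's unconditionally stable schemes remove.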

3. Robust Matching Cost Function for Stereo Correspondence Using Matching by Tone Mapping and Adaptive Orthogonal Integral Image.

PubMed

Dinh, Vinh Quang; Nguyen, Vinh Dinh; Jeon, Jae Wook

2015-12-01

Real-world stereo images are inevitably affected by radiometric differences, including variations in exposure, vignetting, lighting, and noise. Stereo images with severe radiometric distortion can have large radiometric differences and include locally nonlinear changes. In this paper, we first introduce an adaptive orthogonal integral image, which is an improved version of an orthogonal integral image. After that, based on matching by tone mapping and the adaptive orthogonal integral image, we propose a robust and accurate matching cost function that can tolerate locally nonlinear intensity distortion. By using the adaptive orthogonal integral image, the proposed matching cost function can adaptively construct different support regions of arbitrary shapes and sizes for different pixels in the reference image, so it can operate robustly within object boundaries. Furthermore, we develop techniques to automatically estimate the values of the parameters of our proposed function. We conduct experiments using the proposed matching cost function and compare it with functions employing the census transform, supporting local binary pattern, and adaptive normalized cross correlation, as well as a mutual information-based matching cost function using different stereo data sets. By using the adaptive orthogonal integral image, the proposed matching cost function reduces the error from 21.51% to 15.73% in the Middlebury data set, and from 15.9% to 10.85% in the Kitti data set, as compared with using the orthogonal integral image. The experimental results indicate that the proposed matching cost function is superior to the state-of-the-art matching cost functions under radiometric variation.
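
The "adaptive orthogonal" construction itself is not detailed in the abstract; its building block, the plain integral image (summed-area table), which yields any axis-aligned box sum in O(1), can be sketched as:

```python
import numpy as np

def integral_image(img):
    """Summed-area table, padded with a leading zero row and column so
    that box sums need no boundary special cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

Orthogonal and adaptive orthogonal variants extend this table so that support regions of varying shape can be accumulated per pixel at the same constant cost.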

4. Integration Preferences of Wildtype AAV-2 for Consensus Rep-Binding Sites at Numerous Loci in the Human Genome

PubMed Central

Hüser, Daniela; Gogol-Döring, Andreas; Lutter, Timo; Weger, Stefan; Winter, Kerstin; Hammer, Eva-Maria; Cathomen, Toni; Reinert, Knut; Heilbronn, Regine

2010-01-01

Adeno-associated virus type 2 (AAV) is known to establish latency by preferential integration in human chromosome 19q13.42. The AAV non-structural protein Rep appears to target a site called AAVS1 by simultaneously binding to Rep-binding sites (RBS) present on the AAV genome and within AAVS1. In the absence of Rep, as is the case with AAV vectors, chromosomal integration is rare and random. For a genome-wide survey of wildtype AAV integration a linker-selection-mediated (LSM)-PCR strategy was designed to retrieve AAV-chromosomal junctions. DNA sequence determination revealed wildtype AAV integration sites scattered over the entire human genome. The bioinformatic analysis of these integration sites compared to those of rep-deficient AAV vectors revealed a highly significant overrepresentation of integration events near to consensus RBS. Integration hotspots included AAVS1 with 10% of total events. Novel hotspots near consensus RBS were identified on chromosome 5p13.3 denoted AAVS2 and on chromosome 3p24.3 denoted AAVS3. AAVS2 displayed seven independent junctions clustered within only 14 bp of a consensus RBS which proved to bind Rep in vitro similar to the RBS in AAVS3. Expression of Rep in the presence of rep-deficient AAV vectors shifted targeting preferences from random integration back to the neighbourhood of consensus RBS at hotspots and numerous additional sites in the human genome. In summary, targeted AAV integration is not as specific for AAVS1 as previously assumed. Rather, Rep targets AAV to integrate into open chromatin regions in the reach of various, consensus RBS homologues in the human genome. PMID:20628575

5. Integrated Modeling and Participatory Scenario Planning for Climate Adaptation: the Maui Groundwater Project

Keener, V. W.; Finucane, M.; Brewington, L.

2014-12-01

For the last century, the island of Maui, Hawaii, has been the center of environmental, agricultural, and legal conflict with respect to surface and groundwater allocation. Planning for adequate future freshwater resources requires flexible and adaptive policies that emphasize partnerships and knowledge transfer between scientists and non-scientists. In 2012 the Hawai'i state legislature passed the Climate Change Adaptation Priority Guidelines (Act 286) law requiring county and state policy makers to include island-wide climate change scenarios in their planning processes. This research details the ongoing work by researchers in the NOAA-funded Pacific RISA to support the development of Hawaii's first island-wide water use plan under the new climate adaptation directive. This integrated project combines several models with participatory future scenario planning. The dynamically downscaled, triply nested Hawaii Regional Climate Model (HRCM) was adapted from the WRF community model and calibrated to simulate the many microclimates of the Hawaiian archipelago. For the island of Maui, the HRCM was validated using 20 years of hindcast data, and daily projections were created at a 1 km scale to capture the steep topography and diverse rainfall regimes. Downscaled climate data are input into a USGS hydrological model to quantify groundwater recharge. This model was previously used for groundwater management, and is being expanded utilizing future climate projections, current land use maps, and future scenario maps informed by stakeholder input. Participatory scenario planning began in 2012 to bring together a diverse group of over 50 decision-makers in government, conservation, and agriculture to 1) determine the type of information they would find helpful in planning for climate change, and 2) develop a set of scenarios that represent alternative climate/management futures. This is an iterative process, resulting in flexible and transparent narratives at multiple scales

6. An adaptable XML based approach for scientific data management and integration

Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo

2008-03-01

Increased complexity of scientific research poses new challenges to scientific data management. Meanwhile, scientific collaboration is becoming increasingly important and relies on integrating and sharing data from distributed institutions. We develop SciPort, a Web-based platform for scientific data management and integration, built on a distributed architecture with a central server, through which researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to modeling complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make quick browsing convenient. To provide generality, schemas and hierarchies are customizable with XML-based definitions, so the system can be quickly adapted to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access, and customization with XML, SciPort offers a flexible and powerful platform for sharing scientific data among scientific research communities, and it has been successfully used in both biomedical research and clinical trials.

7. Validation of Optical Turbulence Simulations from a Numerical Weather Prediction Model in Support of Adaptive Optics Design

Alliss, R.; Felton, B.

Optical turbulence (OT) acts to distort light in the atmosphere, degrading imagery from large astronomical telescopes and possibly reducing the data quality of air-to-air laser communication links. Some of the degradation due to turbulence can be corrected by adaptive optics. However, the severity of optical turbulence, and thus the amount of correction required, is largely dependent upon the turbulence at the location of interest. Therefore, it is vital to understand the climatology of optical turbulence at such locations. In many cases, it is impractical and expensive to set up instrumentation to characterize the climatology of OT, so simulations become a less expensive and convenient alternative. The strength of OT is characterized by the refractive index structure parameter Cn2, which in turn is used to calculate atmospheric seeing parameters. While attempts have been made to characterize Cn2 using empirical models, Cn2 can be calculated more directly from Numerical Weather Prediction (NWP) simulations using pressure, temperature, thermal stability, vertical wind shear, turbulent Prandtl number, and turbulence kinetic energy (TKE). In this work we use the Weather Research and Forecast (WRF) NWP model to generate Cn2 climatologies in the planetary boundary layer and free atmosphere, allowing for both point-to-point and ground-to-space estimates of the Fried coherence length (r0) and other seeing parameters. Simulations are performed on the Maui High Performance Computing Center's Jaws cluster. The WRF model is configured to run at 1 km horizontal resolution over a domain covering the islands of Maui and the Big Island. The vertical resolution varies from 25 meters in the boundary layer to 500 meters in the stratosphere. The model top is 20 km. We are interested in the variations in Cn2 and the Fried coherence length (r0) between the summits of Haleakala and Mauna Loa. Over six months of simulations have been performed over this area. Simulations indicate that
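The link from a Cn2 profile to a seeing parameter is direct: the Fried coherence length follows from the altitude integral of Cn2, r0 = (0.423 k² sec ζ ∫ Cn²(h) dh)^(-3/5) with k = 2π/λ. The Python sketch below integrates the standard Hufnagel-Valley 5/7 profile as a stand-in, since the entry's WRF-derived Cn2 fields are not available here.

```python
import math

# Fried coherence length r0 from a Cn^2(h) profile:
#   r0 = (0.423 * k^2 * sec(zeta) * integral of Cn^2 dh) ** (-3/5),  k = 2*pi/lambda
# The Hufnagel-Valley 5/7 profile below is a standard analytic stand-in,
# not the WRF-derived Cn^2 fields described in the entry.

def cn2_hv57(h, w=21.0, a=1.7e-14):
    """Hufnagel-Valley 5/7 Cn^2 profile; h in metres above ground."""
    return (0.00594 * (w / 27.0) ** 2 * (1e-5 * h) ** 10 * math.exp(-h / 1000.0)
            + 2.7e-16 * math.exp(-h / 1500.0)
            + a * math.exp(-h / 100.0))

def fried_r0(lam=500e-9, zenith=0.0, h_max=20000.0, n=20000):
    k = 2.0 * math.pi / lam
    dh = h_max / n
    # trapezoidal integration of Cn^2 over altitude up to the model top
    integral = 0.5 * (cn2_hv57(0.0) + cn2_hv57(h_max)) * dh
    integral += sum(cn2_hv57(i * dh) for i in range(1, n)) * dh
    return (0.423 * k ** 2 * integral / math.cos(zenith)) ** (-3.0 / 5.0)

r0 = fried_r0()
print(f"r0 at 500 nm, zenith viewing: {100 * r0:.1f} cm")  # a few cm, as expected for HV 5/7
```

Swapping in a modelled Cn2 column from WRF output in place of `cn2_hv57` gives the ground-to-space seeing estimate the entry describes.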

8. Integration of numerical modeling and observations for the Gulf of Naples monitoring network

Iermano, I.; Uttieri, M.; Zambianchi, E.; Buonocore, B.; Cianelli, D.; Falco, P.; Zambardino, G.

2012-04-01

Lethal effects of mineral oils on fragile marine and coastal ecosystems are now well known. Risks and damages caused by a maritime accident can be reduced with the help of better forecasts and efficient monitoring systems. The MED project TOSCA (Tracking Oil Spills and Coastal Awareness Network), which gathers 13 partners from 4 Mediterranean countries, has been designed to help create a better response system to maritime accidents. Through the construction of an observational network based on state-of-the-art technology (HF radars and drifters), TOSCA provides real-time observations and forecasts of coastal marine environmental conditions in the Mediterranean. The system is installed and assessed in five test sites, covering coastal areas near oil spill outlets (Eastern Mediterranean) and high-traffic areas (Western Mediterranean). The Gulf of Naples, a small semi-enclosed basin opening onto the Tyrrhenian Sea, is one of the five test sites. It is of particular interest both from the environmental point of view, owing to the peculiar ecosystem properties of the area, and because it sustains important touristic and commercial activities. Currently the Gulf of Naples monitoring network comprises five automatic weather stations distributed along the coasts of the Gulf, one weather radar, two tide gauges, one waverider buoy, and moored physical, chemical, and bio-optical instrumentation. In addition, a CODAR SeaSonde HF coastal radar system composed of three antennas is located in Portici, Massa Lubrense, and Castellammare. The system provides hourly data on surface currents over the entire Gulf at 1 km spatial resolution. A numerical modeling implementation based on the Regional Ocean Modeling System (ROMS) is currently being integrated into the Gulf of Naples monitoring network. ROMS is a 3-D, free-surface, hydrostatic, primitive-equation, finite-difference ocean model. In our configuration, the model has high horizontal resolution (250 m) and 30 sigma levels in the vertical. Thanks

9. Construction of an extended invariant for an arbitrary ordinary differential equation with its development in a numerical integration algorithm.

PubMed

Fukuda, Ikuo; Nakamura, Haruki

2006-02-01

For an arbitrary ordinary differential equation (ODE), a scheme for constructing an extended ODE endowed with a time-invariant function is here proposed. This scheme enables us to examine the accuracy of the numerical integration of an ODE that may itself have no invariant. These quantities are constructed by reference to the Nosé-Hoover molecular dynamics equation and its related conserved quantity. By applying this procedure to several molecular dynamics equations, the conventional conserved quantity individually defined in each dynamics can be reproduced in a uniform, generalized way; our concept thus offers a transparent view of the ideas underlying these quantities. Developing the technique further, for a certain class of ODEs we construct a numerical integrator that is not only explicit and symmetric, but also preserves a unit Jacobian for a suitably defined extended ODE, which again provides an invariant. Our concept is thus simply to build a divergence-free extended ODE whose solution is a lift-up of the original ODE, and to constitute an efficient integrator that preserves the phase-space volume of the extended system. We present a precise discussion of the general mathematical properties of the integrator and provide specific conditions that should be incorporated in practical applications.

10. cuSwift --- a suite of numerical integration methods for modelling planetary systems implemented in C/CUDA

Hellmich, S.; Mottola, S.; Hahn, G.; Kührt, E.; Hlawitschka, M.

2014-07-01

Simulations of dynamical processes in planetary systems represent an important tool for studying the orbital evolution of the systems [1--3]. Using modern numerical integration methods, it is possible to model systems containing many thousands of objects over timescales of several hundred million years. However, in general, supercomputers are needed to get reasonable simulation results in acceptable execution times [3]. To exploit the ever-growing computation power of Graphics Processing Units (GPUs) in modern desktop computers, we implemented cuSwift, a library of numerical integration methods for studying long-term dynamical processes in planetary systems. cuSwift can be seen as a re-implementation of the famous SWIFT integrator package written by Hal Levison and Martin Duncan. cuSwift is written in C/CUDA and contains different integration methods for various purposes. So far, we have implemented three algorithms: a 15th-order Radau integrator [4], the Wisdom-Holman Mapping (WHM) integrator [5], and the Regularized Mixed Variable Symplectic (RMVS) Method [6]. These algorithms treat only the planets as mutually gravitationally interacting bodies whereas asteroids and comets (or other minor bodies of interest) are treated as massless test particles which are gravitationally influenced by the massive bodies but do not affect each other or the massive bodies. The main focus of this work is on the symplectic methods (WHM and RMVS) which use a larger time step and thus are capable of integrating many particles over a large time span. As an additional feature, we implemented the non-gravitational Yarkovsky effect as described by M. Brož [7]. With cuSwift, we show that the use of modern GPUs makes it possible to speed up these methods by more than one order of magnitude compared to the single-core CPU implementation, thereby enabling modest workstation computers to perform long-term dynamical simulations. We use these methods to study the influence of the Yarkovsky
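The massless-test-particle idea at the core of cuSwift can be sketched in a few lines: the particle feels the gravity of the massive body but does not act back on it. The Python example below advances a test particle around a fixed central mass (GM = 1) with a kick-drift-kick leapfrog, a minimal stand-in for the symplectic philosophy of WHM/RMVS rather than those integrators themselves; the initial circular orbit and step size are illustrative choices.

```python
import math

# Massless test particle in the field of a fixed central mass (GM = 1),
# advanced with kick-drift-kick leapfrog. Symplectic schemes like this keep
# the energy error bounded over long integrations instead of drifting,
# which is why WHM/RMVS-type methods can take large steps for many orbits.

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def leapfrog(x, y, vx, vy, dt, steps):
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx;        y += dt * vy          # drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    return x, y, vx, vy

# circular orbit: r = 1, v = 1, period 2*pi; integrate ~100 orbits
x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
x, y, vx, vy = leapfrog(x, y, vx, vy, dt=0.01,
                        steps=int(100 * 2 * math.pi / 0.01))
r = math.hypot(x, y)
energy = 0.5 * (vx * vx + vy * vy) - 1.0 / r
print(r, energy)  # r stays near 1; energy stays near -0.5, no secular drift
```

A GPU port such as cuSwift parallelizes exactly this kind of per-particle update, since the test particles are independent of one another.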

11. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

SciTech Connect

Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan

2016-02-05

Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated on three analytical functions by comparing it with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. In summary, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
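The path-sampling identity behind thermodynamic integration is log Z = ∫₀¹ E_t[log L] dt, where the expectation is taken under the "power posterior" p_t ∝ prior × L^t. The Python toy below uses a conjugate Gaussian model so that every power-posterior expectation is available in closed form, letting the temperature integral be checked against the analytic marginal likelihood; a real application would estimate E_t[log L] by MCMC at each temperature, and all data values here are made up for illustration.

```python
import math

# Thermodynamic integration: log Z = integral over t in [0,1] of E_t[log L],
# where p_t(theta) is proportional to prior(theta) * L(theta)^t.
# Conjugate-Gaussian toy model: y_i ~ N(theta, sigma2), theta ~ N(0, tau2),
# so the power posterior at every t is Gaussian with closed-form moments.

y = [0.8, 1.2, 0.5, 1.0, 0.9]            # illustrative data
sigma2, tau2 = 1.0, 4.0                   # likelihood and prior variances
n, s1, s2 = len(y), sum(y), sum(v * v for v in y)

def expected_loglik(t):
    """E[log L(theta)] under the power posterior at temperature t."""
    prec = 1.0 / tau2 + t * n / sigma2
    var = 1.0 / prec
    mean = var * t * s1 / sigma2
    ess = sum((v - mean) ** 2 + var for v in y)   # sum of E[(y_i - theta)^2]
    return -0.5 * n * math.log(2 * math.pi * sigma2) - ess / (2 * sigma2)

# trapezoidal rule along the temperature path t in [0, 1]
m = 2000
log_z_ti = (sum(expected_loglik(i / m) for i in range(1, m))
            + 0.5 * (expected_loglik(0.0) + expected_loglik(1.0))) / m

# analytic log marginal likelihood for comparison
a, b = n / sigma2 + 1.0 / tau2, s1 / sigma2
log_z = (-0.5 * n * math.log(2 * math.pi * sigma2) - 0.5 * math.log(tau2 * a)
         + b * b / (2 * a) - s2 / (2 * sigma2))
print(log_z_ti, log_z)   # the two values agree closely
```

The integrand rises steeply near t = 0 (prior-dominated samples carry little likelihood information), which is why practical schedules place temperatures densely near zero.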

12. Mixing-to-eruption timescales: an integrated model combining numerical simulations and high-temperature experiments with natural melts

Montagna, Chiara; Perugini, Diego; De Campos, Christina; Longo, Antonella; Dingwell, Donald Bruce; Papale, Paolo

2015-04-01

Arrival of magma from depth into shallow reservoirs and the associated mixing processes have been documented as possible triggers of explosive eruptions. Quantifying the time from the beginning of mixing to eruption is of fundamental importance in volcanology in order to place constraints on the possible onset of a new eruption. Here we integrate numerical simulations and high-temperature experiments performed with natural melts, with the aim of identifying mixing-to-eruption timescales. We performed two-dimensional numerical simulations of the arrival of gas-rich magmas into shallow reservoirs, solving the fluid dynamics of the two interacting magmas and evaluating the space-time evolution of the physical properties of the mixture. Convection and mingling develop quickly in the chamber and the feeding conduit/dyke. Over time scales of hours, the magmas in the reservoir appear to have mingled throughout, and convective patterns become harder to identify. High-temperature magma mixing experiments were performed using a centrifuge, with basaltic and phonolitic melts from Campi Flegrei (Italy) as initial end-members. Concentration Variance Decay (CVD), an inevitable consequence of magma mixing, is exponential in time. The rate of CVD is a powerful new geochronometer for the time from mixing to eruption/quenching. The mingling-to-eruption times of three explosive volcanic eruptions from Campi Flegrei (Italy) yield durations on the order of tens of minutes. These results are in perfect agreement with the numerical simulations, which suggest a maximum mixing time of a few hours to obtain a hybrid mixture. We show that the integration of numerical simulations and high-temperature experiments can provide unprecedented results about mixing processes in volcanic systems. The combined application of numerical simulations and the CVD geochronometer to the eruptive products of active volcanoes could be decisive for hazard mitigation planning during volcanic unrest.
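The geochronometer works because the entry reports that concentration variance decays exponentially during mixing, σ²(t) = σ²(0)·exp(-kt): fitting k from measured variances then dates the onset of mixing. The Python sketch below runs that fit on synthetic data; the decay rate and sampling times are assumed for illustration, not values from the paper.

```python
import math

# Concentration Variance Decay (CVD) geochronometer: compositional variance
# decays exponentially during mixing, var(t) = var(0) * exp(-k * t), so the
# decay rate k recovered from variance measurements dates the mixing onset.
# Synthetic, noiseless data; k_true is an assumed illustrative value.

k_true, v0 = 0.12, 1.0                      # per-minute decay rate (assumed)
times = [0, 5, 10, 15, 20, 25, 30]          # minutes since mixing began
variances = [v0 * math.exp(-k_true * t) for t in times]

# log-linear least squares: ln(var) = ln(v0) - k * t
xs = times
ys = [math.log(v) for v in variances]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
k_fit = -(sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))

# time for variance to fall to 5% of its initial value
t_mix = math.log(20) / k_fit
print(k_fit, t_mix)   # recovers k = 0.12; roughly 25 minutes, i.e. tens of minutes
```

With this assumed rate, the variance is largely erased within tens of minutes, consistent in spirit with the mingling-to-eruption durations quoted in the entry.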

13. Numerical solution of linear and nonlinear Fredholm integral equations by using weighted mean-value theorem.

PubMed

Altürk, Ahmet

2016-01-01

Mean value theorems for both derivatives and integrals are very useful tools in mathematics. They can be used to obtain very important inequalities and to prove basic theorems of mathematical analysis. In this article, a semi-analytical method that is based on weighted mean-value theorem for obtaining solutions for a wide class of Fredholm integral equations of the second kind is introduced. Illustrative examples are provided to show the significant advantage of the proposed method over some existing techniques.
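The paper's weighted mean-value method is not reproduced here, but a standard baseline for Fredholm equations of the second kind is the Nyström (quadrature) discretization: replace the integral by a quadrature rule and solve the resulting linear system. The Python sketch below applies it to u(x) = x + ∫₀¹ x·t·u(t) dt, whose exact solution u(x) = 3x/2 follows by writing u(x) = x(1 + c) with c = ∫₀¹ t·u(t) dt = 1/2.

```python
# Fredholm integral equation of the second kind:
#   u(x) = x + integral over [0,1] of x*t*u(t) dt,  exact solution u(x) = 3x/2.
# Solved with a plain Nystrom (trapezoidal quadrature) discretization --
# a standard baseline, not the weighted mean-value method of the entry.

def solve_fredholm(n=101):
    h = 1.0 / (n - 1)
    xs = [i * h for i in range(n)]
    w = [h] * n
    w[0] = w[-1] = h / 2                      # trapezoidal weights
    # linear system (I - K W) u = f with k(x,t) = x*t, f(x) = x
    a = [[(1.0 if i == j else 0.0) - xs[i] * xs[j] * w[j] for j in range(n)]
         for i in range(n)]
    b = xs[:]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[p] = a[p], a[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, n):
            m = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= m * a[col][c]
            b[r] -= m * b[col]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        u[r] = (b[r] - sum(a[r][c] * u[c] for c in range(r + 1, n))) / a[r][r]
    return xs, u

xs, u = solve_fredholm()
print(u[-1])   # u(1), close to the exact value 1.5
```

Semi-analytical methods like the one in the entry aim to avoid exactly this dense linear solve, which grows as O(n³) with the quadrature resolution.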

14. Application of a numerical differencing analyzer computer program to a Modular Integrated Utility System

NASA Technical Reports Server (NTRS)

Brandli, A. E.; Donham, C. F.

1974-01-01

This paper describes the application of a numerical differencing analyzer computer program to the thermal analysis of a MIUS model. The MIUS model evaluated is one that would be required to support a 648-unit garden apartment complex. The computer program was capable of predicting the thermal performance of this MIUS under the imposed electrical, heating, and cooling loads.

15. A modified seventh order two step hybrid method for the numerical integration of oscillatory problems

Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.

2016-12-01

In this work we consider trigonometrically fitted two-step hybrid methods for the numerical solution of second-order initial value problems. We follow the approach of Simos and derive trigonometric fitting conditions for methods with five stages. As an example, we modify a seventh-order method and apply it to three well-known oscillatory problems.
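The seventh-order trigonometrically fitted method itself is not reproduced here, but the flavor of two-step methods for oscillatory second-order problems y'' = f(t, y) can be shown with the classical Numerov scheme, which for the linear test problem y'' = -ω²y reduces to an explicit two-step recurrence. The Python sketch below integrates one full period and compares against the exact cosine.

```python
import math

# Two-step integration of the oscillatory test problem y'' = -w^2 * y using
# the classical Numerov method (4th order) -- a simple classical stand-in
# for the trigonometrically fitted hybrid methods discussed in the entry.

def numerov(w, y0, y1, h, steps):
    """Advance y'' = -w^2 y; for this linear f the implicit step is explicit."""
    c = (w * h) ** 2
    ys = [y0, y1]
    for _ in range(steps - 1):
        # Numerov: (1 + c/12) y_{n+1} = 2 (1 - 5c/12) y_n - (1 + c/12) y_{n-1}
        y_next = (2 * (1 - 5 * c / 12) * ys[-1]
                  - (1 + c / 12) * ys[-2]) / (1 + c / 12)
        ys.append(y_next)
    return ys

w, h = 2.0, 0.01
steps = int(2 * math.pi / (w * h))            # roughly one full period
ys = numerov(w, y0=1.0, y1=math.cos(w * h), h=h, steps=steps)
t_end = (len(ys) - 1) * h
print(ys[-1], math.cos(w * t_end))            # numerical vs exact cos(w*t)
```

Trigonometric fitting goes further: the method's coefficients are tuned so that solutions of exactly this form, sin(ωt) and cos(ωt), are integrated without truncation error at the fitted frequency.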

16. A Numerical Methods Course Based on B-Learning: Integrated Learning Design and Follow Up

ERIC Educational Resources Information Center

2013-01-01

Information and communication technologies advance continuously, providing a real support for learning processes. Learning technologies address areas which previously have corresponded to face-to-face learning, while mobile resources are having a growing impact on education. Numerical Methods is a discipline and profession based on technology. In…

17. Adaptation of the phase of the human linear vestibulo-ocular reflex (LVOR) and effects on the oculomotor neural integrator

NASA Technical Reports Server (NTRS)

Hegemann, S.; Shelhamer, M.; Kramer, P. D.; Zee, D. S.

2000-01-01

The phase of the translational linear VOR (LVOR) can be adaptively modified by exposure to a visual-vestibular mismatch. We extend here our earlier work on LVOR phase adaptation, and discuss the role of the oculomotor neural integrator. Ten subjects were oscillated laterally at 0.5 Hz, 0.3 g peak acceleration, while sitting upright on a linear sled. LVOR was assessed before and after adaptation with subjects tracking the remembered location of a target at 1 m in the dark. Phase and gain were measured by fitting sine waves to the desaccaded eye movements, and comparing sled and eye position. To adapt LVOR phase, the subject viewed a computer-generated stereoscopic visual display, at a virtual distance of 1 m, that moved so as to require either a phase lead or a phase lag of 53 deg. Adaptation lasted 20 min, during which subjects were oscillated at 0.5 Hz/0.3 g. Four of five subjects produced an adaptive change in the lag condition (range 4-45 deg), and each of five produced a change in the lead condition (range 19-56 deg), as requested. Changes in drift on eccentric gaze suggest that the oculomotor velocity-to-position integrator may be involved in the phase changes.
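The entry's phase and gain measurement, fitting sine waves to sled and eye position at the known stimulus frequency, is a linear least-squares projection onto a sin/cos basis. The Python sketch below recovers an assumed gain of 0.6 and a 30-degree phase lead from a synthetic, noiseless eye trace; the numbers are illustrative, not data from the study.

```python
import math

# Gain and phase by sine fitting, as in the entry's analysis: project the
# signal onto sin and cos at the known stimulus frequency (least squares
# with an orthogonal basis over whole periods). Synthetic eye trace with an
# assumed gain of 0.6 and a 30-degree phase lead relative to the sled.

f = 0.5                                  # sled oscillation frequency, Hz
gain_true, phase_true = 0.6, math.radians(30.0)
ts = [i * 0.005 for i in range(4000)]    # 20 s sampled at 200 Hz (10 periods)
sled = [math.sin(2 * math.pi * f * t) for t in ts]
eye = [gain_true * math.sin(2 * math.pi * f * t + phase_true) for t in ts]

def sine_fit(ts, xs, f):
    """Least-squares amplitude and phase of xs at frequency f."""
    n = len(ts)
    a = 2.0 / n * sum(x * math.sin(2 * math.pi * f * t) for t, x in zip(ts, xs))
    b = 2.0 / n * sum(x * math.cos(2 * math.pi * f * t) for t, x in zip(ts, xs))
    return math.hypot(a, b), math.atan2(b, a)   # amplitude, phase re sin

amp_eye, ph_eye = sine_fit(ts, eye, f)
amp_sled, ph_sled = sine_fit(ts, sled, f)
gain = amp_eye / amp_sled
phase_deg = math.degrees(ph_eye - ph_sled)
print(gain, phase_deg)   # recovers 0.6 and 30 degrees
```

On real desaccaded eye traces the same projection is applied after removing saccades, and the residual noise sets the confidence of the phase estimate.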

18. Integrated soil fertility management in sub-Saharan Africa: unravelling local adaptation

Vanlauwe, B.; Descheemaeker, K.; Giller, K. E.; Huising, J.; Merckx, R.; Nziguheba, G.; Wendt, J.; Zingore, S.

2014-12-01

Intensification of smallholder agriculture in sub-Saharan Africa is necessary to address rural poverty and natural resource degradation. Integrated Soil Fertility Management (ISFM) is a means to enhance crop productivity while maximizing the agronomic efficiency (AE) of applied inputs, and can thus contribute to sustainable intensification. ISFM consists of a set of best practices, preferably used in combination, including the use of appropriate germplasm, the appropriate use of fertilizer and of organic resources, and good agronomic practices. The large variability in soil fertility conditions within smallholder farms is also recognised within ISFM, including soils with constraints beyond those addressed by fertilizer and organic inputs. The variable biophysical environments that characterize smallholder farming systems have profound effects on crop productivity and AE and targeted application of limited agro-inputs and management practices is necessary to enhance AE. Further, management decisions depend on the farmer's resource endowments and production objectives. In this paper we discuss the "local adaptation" component of ISFM and how this can be conceptualized within an ISFM framework, backstopped by analysis of AE at plot and farm level. At plot level, a set of four constraints to maximum AE is discussed in relation to "local adaptation": soil acidity, secondary nutrient and micro-nutrient (SMN) deficiencies, physical constraints, and drought stress. In each of these cases, examples are presented whereby amendments and/or practices addressing these have a significantly positive impact on fertilizer AE, including mechanistic principles underlying these effects. While the impact of such amendments and/or practices is easily understood for some practices (e.g., the application of SMNs where these are limiting), for others, more complex interactions with fertilizer AE can be identified (e.g., water harvesting under varying rainfall conditions). At farm scale

19. Integrated soil fertility management in sub-Saharan Africa: unravelling local adaptation

Vanlauwe, B.; Descheemaeker, K.; Giller, K. E.; Huising, J.; Merckx, R.; Nziguheba, G.; Wendt, J.; Zingore, S.

2015-06-01

Intensification of smallholder agriculture in sub-Saharan Africa is necessary to address rural poverty and natural resource degradation. Integrated soil fertility management (ISFM) is a means to enhance crop productivity while maximizing the agronomic efficiency (AE) of applied inputs, and can thus contribute to sustainable intensification. ISFM consists of a set of best practices, preferably used in combination, including the use of appropriate germplasm, the appropriate use of fertilizer and of organic resources, and good agronomic practices. The large variability in soil fertility conditions within smallholder farms is also recognized within ISFM, including soils with constraints beyond those addressed by fertilizer and organic inputs. The variable biophysical environments that characterize smallholder farming systems have profound effects on crop productivity and AE, and targeted application of agro-inputs and management practices is necessary to enhance AE. Further, management decisions depend on the farmer's resource endowments and production objectives. In this paper we discuss the "local adaptation" component of ISFM and how this can be conceptualized within an ISFM framework, backstopped by analysis of AE at plot and farm level. At plot level, a set of four constraints to maximum AE is discussed in relation to "local adaptation": soil acidity, secondary nutrient and micronutrient (SMN) deficiencies, physical constraints, and drought stress. In each of these cases, examples are presented whereby amendments and/or practices addressing these have a significantly positive impact on fertilizer AE, including mechanistic principles underlying these effects. While the impact of such amendments and/or practices is easily understood for some practices (e.g. the application of SMNs where these are limiting), for others, more complex processes influence AE (e.g. water harvesting under varying rainfall conditions). At farm scale, adjusting fertilizer applications to

20. Development and Adaptation of an Employment-Integration Program for People Who Are Visually Impaired in Quebec, Canada

ERIC Educational Resources Information Center

Wittich, Walter; Watanabe, Donald H.; Scully, Lizabeth; Bergevin, Martin

2013-01-01

Introduction: In the Province of Quebec, Canada, it is estimated that only about one-third of working-age adults with visual impairments are part of the workforce, despite ongoing efforts of rehabilitation and government agencies to integrate these individuals. The present article describes the development and adaptation of a pre-employment…

1. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

DOE PAGES

Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

2016-02-05

Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated on three analytical functions by comparing it with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. In summary, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.

2. Technical note: application of α-QSS to the numerical integration of kinetic equations in tropospheric chemistry

Liu, F.; Schaller, E.; Mott, D. R.

2005-08-01

A major task in many applications of atmospheric chemistry transport problems is the numerical integration of stiff systems of Ordinary Differential Equations (ODEs) describing the chemical transformations. A faster solver that is easier to couple to the other physics in the problem is still needed. The integration method α-QSS, corresponding to the solver CHEMEQ2, aims at meeting the demands of a process-split, reacting-flow simulation (Mott, 2000; Mott and Oran, 2001). However, this integrator had yet to be applied to the numerical integration of kinetic equations in tropospheric chemistry. A zero-dimensional (box) model is developed to test how well CHEMEQ2 works on the tropospheric chemistry equations, and this paper presents the testing results. The reference chemical mechanisms used here are the Regional Atmospheric Chemistry Mechanism (RACM) (Stockwell et al., 1997) and its lumped successor, the Regional Lumped Atmospheric Chemical Scheme (ReLACS) (Crassier et al., 2000). The box model is forced and initialized by the DRY scenarios of Protocol Ver. 2 developed by EUROTRAC (Poppe et al., 2001). The accuracy of CHEMEQ2 is evaluated by comparing its results to solutions obtained with VODE. This comparison considers the error tolerance, the relative difference with respect to the VODE scheme, the trade-off between accuracy and efficiency, and the global time step for integration. The study concludes that the single-point α-QSS approach is fast and moderately accurate, as well as easy to couple to reacting-flow simulation models, which makes CHEMEQ2 one of the best candidates for three-dimensional atmospheric Chemistry Transport Modelling (CTM) studies. In addition, the RACM mechanism may be replaced by the ReLACS mechanism for tropospheric chemistry transport modelling. The testing results also imply that the accuracy of chemistry numerical simulations differs considerably from species to species. Therefore ozone is not the good choice for
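The core idea behind QSS integrators is that each species obeys dy/dt = q - p·y; freezing q and p over a step gives an update that is exact for constant coefficients and stable for any step size, which is what lets chemistry steps exceed the fastest kinetic time scale. The Python sketch below applies that first-order update to a stiff A → B → C chain and checks against the analytic solution; the full α-QSS predictor-corrector of CHEMEQ2 is not reproduced, and the rate constants are illustrative.

```python
import math

# Quasi-steady-state (QSS) update for stiff kinetics: each species obeys
# dy/dt = q - p*y, and freezing q, p over one step gives
#   y(t + dt) = y * exp(-p*dt) + (q/p) * (1 - exp(-p*dt)),
# which is unconditionally stable. This is the first-order core idea only;
# the alpha-QSS predictor-corrector of CHEMEQ2 is not reproduced here.

k1, k2 = 1.0, 100.0          # A -> B -> C, stiffness ratio 100 (illustrative)
dt, t_end = 0.05, 1.0        # dt is 5x the fastest time scale 1/k2
a, b = 1.0, 0.0
for _ in range(int(t_end / dt)):
    qa, pa = 0.0, k1                      # dA/dt = -k1*A
    qb, pb = k1 * a, k2                   # dB/dt = k1*A - k2*B
    a = a * math.exp(-pa * dt) + (qa / pa) * (1 - math.exp(-pa * dt))
    b = b * math.exp(-pb * dt) + (qb / pb) * (1 - math.exp(-pb * dt))

b_exact = k1 / (k2 - k1) * (math.exp(-k1 * t_end) - math.exp(-k2 * t_end))
print(a, math.exp(-t_end))   # A is reproduced exactly for constant k1
print(b, b_exact)            # B tracks its quasi-steady state within a few percent
```

An explicit solver at this step size would blow up (k2·dt = 5 is far outside the stability region), which is precisely the regime where QSS-type updates earn their keep in chemistry transport models.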

3. Integrated System Design: Promoting the Capacity of Sociotechnical Systems for Adaptation through Extensions of Cognitive Work Analysis.

PubMed

Naikar, Neelam; Elix, Ben

2016-01-01

This paper proposes an approach for integrated system design, which has the intent of facilitating high levels of effectiveness in sociotechnical systems by promoting their capacity for adaptation. Building on earlier ideas and empirical observations, this approach recognizes that to create adaptive systems it is necessary to integrate the design of all of the system elements, including the interfaces, teams, training, and automation, such that workers are supported in adapting their behavior as well as their structure, or organization, in a coherent manner. Current approaches for work analysis and design are limited in regard to this fundamental objective, especially in cases when workers are confronted with unforeseen events. A suitable starting point is offered by cognitive work analysis (CWA), but while this framework can support actors in adapting their behavior, it does not necessarily accommodate adaptations in their structure. Moreover, associated design approaches generally focus on individual system elements, and those that consider multiple elements appear limited in their ability to facilitate integration, especially in the manner intended here. The proposed approach puts forward the set of possibilities for work organization in a system as the central mechanism for binding the design of its various elements, so that actors can adapt their structure as well as their behavior-in a unified fashion-to handle both familiar and novel conditions. Accordingly, this paper demonstrates how the set of possibilities for work organization in a system may be demarcated independently of the situation, through extensions of CWA, and how it may be utilized in design. This lynchpin, conceptualized in the form of a diagram of work organization possibilities (WOP), is important for preserving a system's inherent capacity for adaptation. Future research should focus on validating these concepts and establishing the feasibility of implementing them in industrial contexts.

4. Integrated System Design: Promoting the Capacity of Sociotechnical Systems for Adaptation through Extensions of Cognitive Work Analysis

PubMed Central

Naikar, Neelam; Elix, Ben

2016-01-01

This paper proposes an approach for integrated system design, which has the intent of facilitating high levels of effectiveness in sociotechnical systems by promoting their capacity for adaptation. Building on earlier ideas and empirical observations, this approach recognizes that to create adaptive systems it is necessary to integrate the design of all of the system elements, including the interfaces, teams, training, and automation, such that workers are supported in adapting their behavior as well as their structure, or organization, in a coherent manner. Current approaches for work analysis and design are limited in regard to this fundamental objective, especially in cases when workers are confronted with unforeseen events. A suitable starting point is offered by cognitive work analysis (CWA), but while this framework can support actors in adapting their behavior, it does not necessarily accommodate adaptations in their structure. Moreover, associated design approaches generally focus on individual system elements, and those that consider multiple elements appear limited in their ability to facilitate integration, especially in the manner intended here. The proposed approach puts forward the set of possibilities for work organization in a system as the central mechanism for binding the design of its various elements, so that actors can adapt their structure as well as their behavior—in a unified fashion—to handle both familiar and novel conditions. Accordingly, this paper demonstrates how the set of possibilities for work organization in a system may be demarcated independently of the situation, through extensions of CWA, and how it may be utilized in design. This lynchpin, conceptualized in the form of a diagram of work organization possibilities (WOP), is important for preserving a system's inherent capacity for adaptation. Future research should focus on validating these concepts and establishing the feasibility of implementing them in industrial

5. Numerical integration of the master equation in some models of stochastic epidemiology.

PubMed

Jenkinson, Garrett; Goutsias, John

2012-01-01

The processes by which disease spreads in a population of individuals are inherently stochastic. The master equation has proven to be a useful tool for modeling such processes. Unfortunately, solving the master equation analytically is possible only in limited cases (e.g., when the model is linear), and thus numerical procedures or approximation methods must be employed. Available approximation methods, such as the system size expansion method of van Kampen, may fail to provide reliable solutions, whereas current numerical approaches can incur appreciable computational cost. In this paper, we propose a new numerical technique for solving the master equation. Our method is based on a more informative stochastic process than the population process commonly used in the literature. By exploiting the structure of the master equation governing this process, we develop a novel technique for calculating the exact solution of the master equation--up to a desired precision--in certain models of stochastic epidemiology. We demonstrate the potential of our method by solving the master equation associated with the stochastic SIR epidemic model. MATLAB software that implements the methods discussed in this paper is freely available as Supporting Information S1.
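The exact-solution idea can be illustrated in miniature: for a small population, the SIR master equation is a finite linear ODE system dp/dt = A p over the states (S, I), which can be integrated directly via the matrix exponential. The sketch below is a generic construction, not the authors' more informative-process method; the population size, rates, and time horizon are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def sir_generator(N, beta, gamma):
    # Enumerate states (s, i) with s + i <= N; r = N - s - i is implicit.
    states = [(s, i) for s in range(N + 1) for i in range(N + 1 - s)]
    index = {st: k for k, st in enumerate(states)}
    A = np.zeros((len(states), len(states)))
    for (s, i), k in index.items():
        if s > 0 and i > 0:                      # infection: (s, i) -> (s-1, i+1)
            rate = beta * s * i / N
            A[index[(s - 1, i + 1)], k] += rate
            A[k, k] -= rate
        if i > 0:                                # recovery: (s, i) -> (s, i-1)
            A[index[(s, i - 1)], k] += gamma * i
            A[k, k] -= gamma * i
    return states, A

# Initial condition: one infected individual in a population of N = 20.
N, beta, gamma = 20, 0.3, 0.1
states, A = sir_generator(N, beta, gamma)
p0 = np.zeros(len(states)); p0[states.index((N - 1, 1))] = 1.0
p_t = expm(A * 50.0) @ p0            # master-equation solution at t = 50
print(p_t.sum())                     # probabilities still sum to ~1
```

Because the generator is exact, the only error sources are the matrix-exponential tolerance and floating-point roundoff; for larger populations the state space grows quadratically and sparse or adaptive methods (as in the paper) become necessary.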

6. Numerical modelling of qualitative behaviour of solutions to convolution integral equations

Ford, Neville J.; Diogo, Teresa; Ford, Judith M.; Lima, Pedro

2007-08-01

We consider the qualitative behaviour of solutions to linear integral equations of the form (1), where the kernel k is assumed to be either integrable or of exponential type. After a brief review of the well-known Paley-Wiener theory we give conditions that guarantee that exact and approximate solutions of (1) are of a specific exponential type. As an example, we provide an analysis of the qualitative behaviour of both exact and approximate solutions of a singular Volterra equation with infinitely many solutions. We show that the approximations of neighbouring solutions exhibit the correct qualitative behaviour.
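Equation (1) is not reproduced in this record; assuming the standard linear convolution Volterra form y(t) = f(t) + ∫₀ᵗ k(t − s) y(s) ds, a minimal trapezoidal-rule solver can be sketched as follows (the kernel and forcing used here are illustrative, chosen to have a known exponential solution):

```python
import numpy as np

def volterra_trapezoid(f, k, T, n):
    """Solve y(t) = f(t) + int_0^t k(t - s) y(s) ds on [0, T]
    with the composite trapezoidal rule on n equal steps."""
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    y = np.empty(n + 1)
    y[0] = f(t[0])
    for m in range(1, n + 1):
        # trapezoid weights: h/2 at the endpoints, h in the interior
        acc = 0.5 * h * k(t[m] - t[0]) * y[0]
        acc += h * sum(k(t[m] - t[j]) * y[j] for j in range(1, m))
        # y[m] appears on both sides through the endpoint weight
        y[m] = (f(t[m]) + acc) / (1.0 - 0.5 * h * k(0.0))
    return t, y

# Illustrative case with known solution: k(u) = 1, f(t) = 1  =>  y(t) = exp(t)
t, y = volterra_trapezoid(lambda t: 1.0, lambda u: 1.0, 1.0, 200)
print(abs(y[-1] - np.e))   # small O(h^2) discretization error
```

The exponential growth of the computed solution is exactly the kind of qualitative behaviour (solutions of a given exponential type) that the paper's Paley-Wiener conditions characterize.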

7. Diffraction in a stratified region of a high numerical aperture Fresnel zone plate: a simple and rigorous integral representation.

PubMed

Zhang, Yaoju; Huang, Xiangjun; Zhang, Dong; An, Hongchang; Dai, Yuxing

2015-03-23

An algorithm for calculating the field distribution of a high numerical aperture Fresnel zone plate (FZP) in stratified media is presented, which is based on the vector angular spectrum method. The diffraction problem of the FZP is solved for the case of a multilayer film with planar interfaces perpendicular to the optical axis. The solution is obtained in a rigorous mathematical manner and it satisfies the homogeneous wave equations. The electric field vector of the transmitted and reflected fields in the multilayer media is obtained for any polarized beam normally incident onto a binary phase circular FZP. For radially, azimuthally and linearly polarized beams, the electric field in the focal region can be simplified to a double or single integral, which can be readily used for numerical computation.
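In the scalar, homogeneous-medium special case, the angular spectrum method reduces to an FFT-based propagation that is easy to sketch. The grid, wavelength, and circular aperture below are illustrative stand-ins; they are not the paper's stratified vector formulation or its FZP geometry.

```python
import numpy as np

# Scalar angular-spectrum propagation over a distance z in free space.
wavelength, n_pts, dx, z = 633e-9, 256, 1e-6, 50e-6
k = 2 * np.pi / wavelength

# Input field: a circular aperture (a crude stand-in for a zone-plate pupil)
x = (np.arange(n_pts) - n_pts // 2) * dx
X, Y = np.meshgrid(x, x)
field = (X**2 + Y**2 <= (40e-6)**2).astype(complex)

# FFT to the angular spectrum, apply the transfer function exp(i*kz*z), invert
fx = np.fft.fftfreq(n_pts, dx)
FX, FY = np.meshgrid(fx, fx)
kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
kz = np.sqrt(np.maximum(kz_sq, 0.0))        # evanescent components suppressed
H = np.exp(1j * kz * z) * (kz_sq > 0)
out = np.fft.ifft2(np.fft.fft2(field) * H)
print(out.shape)
```

The rigorous vector version replaces the single transfer function with layer-by-layer transmission and reflection of each plane-wave component, which is what yields the double or single integrals mentioned in the abstract.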

8. Experimental Studies on Model Reference Adaptive Control with Integral Action Employing a Rotary Encoder and Tachometer Sensors

PubMed Central

Wu, Guo-Qiang; Wu, Shu-Nan; Bai, Yu-Guang; Liu, Lei

2013-01-01

In this paper, an adaptive law with an integral action is designed and implemented on a DC motor by employing a rotary encoder and tachometer sensors. Stability is proved by using a Lyapunov function, and the tracking errors asymptotically converge to zero according to the Barbalat lemma. The tracking performance is specified by a reference model, the convergence rate of the Lyapunov function is specified by the matrix Q, and the control action and the state weighting are restricted by the matrix Γ. The experimental results demonstrate the effectiveness of the proposed control. The maximum errors of the position and velocity with the integral action are reduced from 0.4 V and 1.5 V to 0.2 V and 0.4 V, respectively. The adaptive control with the integral action gives satisfactory performance, even when it suffers from input disturbance. PMID:23575034

9. Experimental studies on model reference adaptive control with integral action employing a rotary encoder and tachometer sensors.

PubMed

Wu, Guo-Qiang; Wu, Shu-Nan; Bai, Yu-Guang; Liu, Lei

2013-04-10

In this paper, an adaptive law with an integral action is designed and implemented on a DC motor by employing a rotary encoder and tachometer sensors. Stability is proved by using a Lyapunov function, and the tracking errors asymptotically converge to zero according to the Barbalat lemma. The tracking performance is specified by a reference model, the convergence rate of the Lyapunov function is specified by the matrix Q, and the control action and the state weighting are restricted by the matrix Γ. The experimental results demonstrate the effectiveness of the proposed control. The maximum errors of the position and velocity with the integral action are reduced from 0.4 V and 1.5 V to 0.2 V and 0.4 V, respectively. The adaptive control with the integral action gives satisfactory performance, even when it suffers from input disturbance.
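A minimal model-reference adaptive control loop of the kind described can be sketched with the Lyapunov adaptation rule. This sketch omits the paper's integral action and uses a hypothetical first-order plant rather than the actual DC-motor model; the gains and horizon are illustrative.

```python
import numpy as np

# Lyapunov-rule MRAC for a first-order plant y' = -a*y + b*u,
# reference model ym' = -am*ym + am*r. Error e = y - ym converges
# to zero (Barbalat), as in the paper's analysis.
a, b = 1.0, 0.5          # "unknown" plant parameters (used only to simulate)
am, gamma = 2.0, 5.0     # reference-model pole and adaptation gain
dt, T = 1e-3, 20.0
y = ym = th1 = th2 = 0.0
r = 1.0                  # constant reference
for _ in range(int(T / dt)):
    u = th1 * r - th2 * y          # adaptive control law
    e = y - ym                     # tracking error
    # Lyapunov-rule parameter updates (assumes b > 0)
    th1 += -gamma * e * r * dt
    th2 += gamma * e * y * dt
    y += (-a * y + b * u) * dt     # plant (Euler step)
    ym += (-am * ym + am * r) * dt # reference model
print(abs(y - ym))                 # tracking error is small after adaptation
```

With V = e²/2 + (b/2γ)(θ̃₁² + θ̃₂²) one gets V̇ = −a_m e² ≤ 0, which is the structure of the Lyapunov argument the abstract refers to; the paper augments this with an integral term for disturbance rejection.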

10. RADIO GALAXY 3C 230 OBSERVED WITH GEMINI LASER ADAPTIVE-OPTICS INTEGRAL-FIELD SPECTROSCOPY

SciTech Connect

Steinbring, Eric

2011-11-15

The Altair laser-guide-star adaptive optics facility combined with the near-infrared integral-field spectrometer on Gemini North has been employed to study the morphology and kinematics of 3C 230 at z = 1.5, the first such observations of a high-redshift radio galaxy. These suggest a bi-polar outflow spanning 0.9'' (~16 kpc projected distance for a standard ΛCDM cosmology) reaching a mean relative velocity of 235 km s^-1 in redshifted Hα + [N II] and [S II] emission. Structure is resolved to 0.1'' (0.8 kpc), which is well correlated with optical images from the Hubble Space Telescope and Very Large Array radio maps obtained at similar spatial resolution. Line diagnostics suggest that over the 10^7 yr to 10^8 yr duration of its active galactic nucleus activity, gas has been ejected into bright turbulent lobes at rates comparable to star formation, although constituting perhaps only 1% of the baryonic mass in the galaxy.

11. Adaptive neuro-fuzzy inference system for real-time monitoring of integrated-constructed wetlands.

PubMed

Dzakpasu, Mawuli; Scholz, Miklas; McCarthy, Valerie; Jordan, Siobhán; Sani, Abdulkadir

2015-01-01

Monitoring large-scale treatment wetlands is costly and time-consuming, but required by regulators. Some analytical results are available only after 5 days or even longer. Thus, adaptive neuro-fuzzy inference system (ANFIS) models were developed to predict the effluent concentrations of 5-day biochemical oxygen demand (BOD5) and NH4-N from a full-scale integrated constructed wetland (ICW) treating domestic wastewater. The ANFIS models were developed and validated with a 4-year data set from the ICW system. Cost-effective variables that are quicker and easier to measure were selected as possible predictors based on the goodness of their correlation with the outputs. A self-organizing neural network was applied to extract the most relevant input variables from all the possible input variables. Fuzzy subtractive clustering was used to identify the architecture of the ANFIS models and to optimize the fuzzy rules, overall improving the network performance. According to the findings, ANFIS could predict the effluent quality variation quite accurately. Effluent BOD5 and NH4-N concentrations were predicted relatively accurately from other effluent water quality parameters, which can be measured within a few hours. The simulated effluent BOD5 and NH4-N concentrations fitted the measured concentrations well, which was also supported by relatively low mean squared errors. Thus, ANFIS can be useful for real-time monitoring and control of ICW systems.

12. Adaptation of Cryptococcus neoformans to mammalian hosts: integrated regulation of metabolism and virulence.

PubMed

Kronstad, Jim; Saikia, Sanjay; Nielson, Erik David; Kretschmer, Matthias; Jung, Wonhee; Hu, Guanggan; Geddes, Jennifer M H; Griffiths, Emma J; Choi, Jaehyuk; Cadieux, Brigitte; Caza, Mélissa; Attarian, Rodgoun

2012-02-01

The basidiomycete fungus Cryptococcus neoformans infects humans via inhalation of desiccated yeast cells or spores from the environment. In the absence of effective immune containment, the initial pulmonary infection often spreads to the central nervous system to result in meningoencephalitis. The fungus must therefore make the transition from the environment to different mammalian niches that include the intracellular locale of phagocytic cells and extracellular sites in the lung, bloodstream, and central nervous system. Recent studies provide insights into mechanisms of adaptation during this transition that include the expression of antiphagocytic functions, the remodeling of central carbon metabolism, the expression of specific nutrient acquisition systems, and the response to hypoxia. Specific transcription factors regulate these functions as well as the expression of one or more of the major known virulence factors of C. neoformans. Therefore, virulence factor expression is to a large extent embedded in the regulation of a variety of functions needed for growth in mammalian hosts. In this regard, the complex integration of these processes is reminiscent of the master regulators of virulence in bacterial pathogens.

13. Adaptive optics scanning laser ophthalmoscope with integrated wide-field retinal imaging and tracking.

PubMed

Ferguson, R Daniel; Zhong, Zhangyi; Hammer, Daniel X; Mujat, Mircea; Patel, Ankit H; Deng, Cong; Zou, Weiyao; Burns, Stephen A

2010-11-01

We have developed a new, unified implementation of the adaptive optics scanning laser ophthalmoscope (AOSLO) incorporating a wide-field line-scanning ophthalmoscope (LSO) and a closed-loop optical retinal tracker. AOSLO raster scans are deflected by the integrated tracking mirrors so that direct AOSLO stabilization is automatic during tracking. The wide-field imager and large-spherical-mirror optical interface design, as well as a large-stroke deformable mirror (DM), enable the AOSLO image field to be corrected at any retinal coordinates of interest in a field of >25 deg. AO performance was assessed by imaging individuals with a range of refractive errors. In most subjects, image contrast was measurable at spatial frequencies close to the diffraction limit. Closed-loop optical (hardware) tracking performance was assessed by comparing sequential image series with and without stabilization. Though usually better than 10 μm rms, or 0.03 deg, tracking does not yet stabilize to single cone precision but significantly improves average image quality and increases the number of frames that can be successfully aligned by software-based post-processing methods. The new optical interface allows the high-resolution imaging field to be placed anywhere within the wide field without requiring the subject to re-fixate, enabling easier retinal navigation and faster, more efficient AOSLO montage capture and stitching.

14. Plastoglobuli: Plastid Microcompartments with Integrated Functions in Metabolism, Plastid Developmental Transitions, and Environmental Adaptation.

PubMed

van Wijk, Klaas J; Kessler, Felix

2017-01-25

Plastoglobuli (PGs) are plastid lipoprotein particles surrounded by a membrane lipid monolayer. PGs contain small specialized proteomes and metabolomes. They are present in different plastid types (e.g., chloroplasts, chromoplasts, and elaioplasts) and are dynamic in size and shape in response to abiotic stress or developmental transitions. PGs in chromoplasts are highly enriched in carotenoid esters and enzymes involved in carotenoid metabolism. PGs in chloroplasts are associated with thylakoids and contain ∼30 core proteins (including six ABC1 kinases) as well as additional proteins recruited under specific conditions. Systems analysis has suggested that chloroplast PGs function in metabolism of prenyl lipids (e.g., tocopherols, plastoquinone, and phylloquinone); redox and photosynthetic regulation; plastid biogenesis; and senescence, including recycling of phytol, remobilization of thylakoid lipids, and metabolism of jasmonate. These functionalities contribute to chloroplast PGs' role in responses to stresses such as high light and nitrogen starvation. PGs are thus lipid microcompartments with multiple functions integrated into plastid metabolism, developmental transitions, and environmental adaptation. This review provides an in-depth overview of PG experimental observations, summarizes the present understanding of PG features and functions, and provides a conceptual framework for PG research and the realization of opportunities for crop improvement. Expected final online publication date for the Annual Review of Plant Biology Volume 68 is April 29, 2017.

15. Integration and Evaluation of Microscope Adapter for the Ultra-Compact Imaging Spectrometer

Smith-Dryden, S. D.; Blaney, D. L.; Van Gorp, B.; Mouroulis, P.; Green, R. O.; Sellar, R. G.; Rodriguez, J.; Wilson, D.

2012-12-01

Petrologic, diagenetic, impact and weathering processes often happen at scales that are not observable from orbit. On Earth, one of the most common things a scientist does when trying to understand detailed geologic history is to create a thin section of the rock and study its mineralogy and texture. Unfortunately, in situ sample preparation and manipulation with advanced instrumentation may be a resource-intensive proposition (e.g. time, power, complexity). Obtaining detailed mineralogy and textural information without sample preparation is therefore highly desirable. Visible to short-wavelength infrared microimaging spectroscopy has the potential to provide this information without sample preparation. Wavelengths between 500-2600 nm are sensitive to a wide range of minerals including mafic minerals, carbonates, clays, and sulfates. The Ultra-Compact Imaging Spectrometer (UCIS) has been developed as a low-mass (<2.0 kg), low-power (~5.2 W) Offner spectrometer, ideal for use on a Mars rover or other in-situ platforms. The UCIS instrument with its HgCdTe detector provides a spectral resolution of 10 nm over a range of 500-2600 nm, in addition to a 30 degree field of view and a 1.35 mrad instantaneous field of view (Van Gorp et al. 2011). To explore applications of this technology for microscale investigations, an f/10 microimaging adapter has been designed and integrated to allow imaging of samples. The spatial coverage of the instrument is 2.56 cm with sampling of 67.5 microns (380 spatial pixels). Because the adapter is slow relative to the UCIS detector, strong sample illumination is required. Light from the lamp box was routed through optical fiber bundles and directed onto the sample at a high angle of incidence to provide dark-field imaging. For data collection, a mineral sample is mounted on the microscope adapter and scanned by the detector as it is moved horizontally via an actuator. Data from the instrument are stored as an xyz cube end product with one spectral and two spatial

16. A stochastic regulator for integrated communication and control systems. I - Formulation of control law. II - Numerical analysis and simulation

NASA Technical Reports Server (NTRS)

Liou, Luen-Woei; Ray, Asok

1991-01-01

A state feedback control law for integrated communication and control systems (ICCS) is formulated by using dynamic programming and the optimality principle on a finite-time horizon. The control law is derived on the basis of a stochastic model of the plant, which is augmented in state space to allow for the effects of randomly varying delays in the feedback loop. A numerical procedure for synthesizing the control parameters is then presented, and the performance of the control law is evaluated by simulating the flight dynamics model of an advanced aircraft. Finally, recommendations for future work are made.

17. Study of electromagnetic scattering from randomly rough ocean-like surfaces using integral-equation-based numerical technique

Toporkov, Jakov V.

A numerical study of electromagnetic scattering by one-dimensional perfectly conducting randomly rough surfaces with an ocean-like Pierson-Moskowitz spectrum is presented. Simulations are based on solving the Magnetic Field Integral Equation (MFIE) using the numerical technique called the Method of Ordered Multiple Interactions (MOMI). The study focuses on the application and validation of this integral equation-based technique to scattering at low grazing angles and considers other aspects of numerical simulations crucial to obtaining correct results in the demanding low grazing angle regime. It was found that when the MFIE propagator matrix is used with zeros on its diagonal (as has often been the practice) the results appear to show an unexpected sensitivity to the sampling interval. This sensitivity is especially pronounced in the case of horizontal polarization and at low grazing angles. We show---both numerically and analytically---that the problem lies not with the particular numerical technique used (MOMI) but rather with how the MFIE is discretized. It is demonstrated that the inclusion of so-called "curvature terms" (terms that arise from a correct discretization procedure and are proportional to the second surface derivative) in the diagonal of the propagator matrix eliminates the problem completely. A criterion for the choice of the sampling interval used in discretizing the MFIE based on both electromagnetic wavelength and the surface spectral cutoff is established. The influence of the surface spectral cutoff value on the results of scattering simulations is investigated and a recommendation for the choice of this spectral cutoff for numerical simulation purposes is developed. Also studied is the applicability of the tapered incident field at low grazing incidence angles. It is found that when a Gaussian-like taper with fixed beam waist is used there is a characteristic pattern (anomalous jump) in the calculated average backscattered cross section at

18. Integrating Laboratory and Numerical Decompression Experiments to Investigate Fluid Dynamics into the Conduit

Spina, Laura; Colucci, Simone; De'Michieli Vitturi, Mattia; Scheu, Bettina; Dingwell, Donald Bruce

2015-04-01

The study of the fluid dynamics of magmatic melts in the conduit, where direct observations are unattainable, has proven to be strongly enhanced by multiparametric approaches. Among them, the coupling of numerical modeling with laboratory experiments represents a fundamental tool of investigation; indeed, the experimental approach provides invaluable data to validate complex multiphase codes. We performed decompression experiments in a shock tube system, using pure silicone oil as a proxy for the basaltic melt. A range of viscosities between 1 and 1000 Pa s was investigated. The samples were saturated with argon for 72 h at 10 MPa, before being slowly decompressed to atmospheric pressure. The evolution of the analogue magmatic system was monitored through a high-speed camera and pressure sensors located in the analogue conduit. The experimental decompressions were then reproduced numerically using a multiphase solver based on the OpenFOAM framework. The original compressible multiphase OpenFOAM solver twoPhaseEulerFoam was extended to take into account the multicomponent nature of the fluid mixtures (liquid and gas) and the phase transition. According to the experimental conditions, the simulations were run with values of fluid viscosity ranging from 1 to 1000 Pa s. The sensitivity of the model was tested for different values of the parameters t and D, representing respectively the relaxation time for gas exsolution and the average bubble diameter, required by the Gidaspow drag model. Viable ranges of values for both parameters are provided by experimental observations, i.e. bubble nucleation time and bubble size distribution at a given pressure. The comparison of video images with the outcomes of the numerical models was performed by tracking the evolution of the gas volume fraction through time. We were therefore able to calibrate the model parameters against laboratory results and to track the fluid dynamics of the experimental decompression.

19. Asian International Students at an Australian University: Mapping the Paths between Integrative Motivation, Competence in L2 Communication, Cross-Cultural Adaptation and Persistence with Structural Equation Modelling

ERIC Educational Resources Information Center

Yu, Baohua

2013-01-01

This study examined the interrelationships of integrative motivation, competence in second language (L2) communication, sociocultural adaptation, academic adaptation and persistence of international students at an Australian university. Structural equation modelling demonstrated that the integrative motivation of international students has a…

20. Numerical integration of nearly-Hamiltonian systems. [Van der Pol oscillator and perturbed Keplerian motion

NASA Technical Reports Server (NTRS)

Bond, V. R.

1978-01-01

The reported investigation is concerned with the solution of systems of differential equations which are derived from a Hamiltonian function in the extended phase space. The problem selected involves a one-dimensional perturbed harmonic oscillator. The van der Pol equation considered has an exact asymptotic value for its amplitude. Comparisons are made between a numerical solution and a known analytical solution. In addition to the van der Pol problem, known solutions regarding the restricted problem of three bodies are used as examples for perturbed Keplerian motion. The extended phase space Hamiltonian discussed by Stiefel and Scheifele (1971) is considered. A description is presented of two canonical formulations of the perturbed harmonic oscillator.
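The van der Pol amplitude check mentioned above can be reproduced numerically: for a small damping parameter μ, the limit-cycle amplitude approaches the asymptotic value 2. A sketch using a standard ODE integrator follows; μ, the initial condition, and the time horizon are illustrative, not the values used in the report.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Van der Pol oscillator x'' - mu*(1 - x^2)*x' + x = 0, written as a
# first-order system. For small mu the limit-cycle amplitude tends to 2.
mu = 0.1
def vdp(t, z):
    x, v = z
    return [v, mu * (1 - x**2) * v - x]

sol = solve_ivp(vdp, (0.0, 200.0), [0.5, 0.0], rtol=1e-9, atol=1e-12,
                dense_output=True)
# Sample the tail of the trajectory, well after transients have died out
t_tail = np.linspace(150, 200, 5000)
amplitude = np.abs(sol.sol(t_tail)[0]).max()
print(amplitude)   # close to the asymptotic value 2
```

Comparing such a numerical amplitude against the known asymptotic value is exactly the kind of accuracy check the report describes for its extended-phase-space formulations.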

1. Climate change impact and adaptation research requires integrated assessment and farming systems analysis: a case study in the Netherlands

Reidsma, Pytrik; Wolf, Joost; Kanellopoulos, Argyris; Schaap, Ben F.; Mandryk, Maryia; Verhagen, Jan; van Ittersum, Martin K.

2015-04-01

Rather than relying on crop modelling only, climate change impact assessments in agriculture need to be based on integrated assessment and farming systems analysis, and account for adaptation at different levels. With a case study for Flevoland, the Netherlands, we illustrate that (1) crop models cannot account for all relevant climate change impacts and adaptation options, and (2) changes in technology, policy and prices have had, and are likely to have, larger impacts on farms than climate change. While crop modelling indicates positive impacts of climate change on yields of major crops in 2050, a semi-quantitative and participatory method assessing impacts of extreme events shows that there are nevertheless several climate risks. A range of adaptation measures is, however, available to reduce possible negative effects at crop level. In addition, at farm level farmers can change cropping patterns and adjust inputs and outputs, and farm structural change will also influence impacts and adaptation. While the 5th IPCC report is more negative than its predecessor regarding the impacts of climate change on agriculture, including for temperate regions, our results show that when climate change is put in the context of other drivers, and when adaptation at crop and farm level is explicitly accounted for, impacts may be less negative in some regions and opportunities are revealed. These results refer to a temperate region, but an integrated assessment may also change perspectives on climate change for other parts of the world.

2. Improving the Usability of Integrated Assessment for Adaptation Practice: Insights from the U.S. Southeast Energy Sector

SciTech Connect

de Bremond, Ariane; Preston, Benjamin; Rice, Jennie S.

2014-10-01

Energy systems comprise a key sector of the U.S. economy, and one that has been identified as potentially vulnerable to the effects of climate variability and change. However, understanding of adaptation processes in energy companies, and private entities more broadly, is limited. It is unclear, for example, to what extent energy companies are well served by existing knowledge and tools emerging from the impacts, adaptation and vulnerability (IAV) and integrated assessment modeling (IAM) communities, or which experiments, analyses, and model results have practical utility for informing adaptation in the energy sector. As part of a regional IAM development project, we investigated available evidence of adaptation processes in the energy sector, with a particular emphasis on the U.S. Southeast and Gulf Coast region. A mixed-methods approach of literature review and semi-structured interviews with key informants from energy utilities was used to compare existing knowledge from the IAV community with that of regional stakeholders. That comparison revealed that much of the IAV literature on the energy sector is climate-centric and therefore disconnected from the more integrated decision-making processes and institutional perspectives of energy utilities. Increasing the relevance of research and assessment for the energy sector will necessitate a greater investment in integrated assessment and modeling efforts that respond to practical decision-making needs, as well as greater collaboration between energy utilities and researchers in the design, execution, and communication of those efforts.

3. Model coupling methodology for thermo-hydro-mechanical-chemical numerical simulations in integrated assessment of long-term site behaviour

Kempka, Thomas; De Lucia, Marco; Kühn, Michael

2015-04-01

The integrated assessment of long-term site behaviour taking into account a high spatial resolution at reservoir scale requires a sophisticated methodology to represent coupled thermal, hydraulic, mechanical and chemical processes of relevance. Our coupling methodology considers the time-dependent occurrence and significance of multi-phase flow processes, mechanical effects and geochemical reactions (Kempka et al., 2014). To this end, a simplified hydro-chemical coupling procedure was developed (Klein et al., 2013) and validated against fully coupled hydro-chemical simulations (De Lucia et al., 2015). The numerical simulation results elaborated for the pilot site Ketzin demonstrate that mechanical reservoir, caprock and fault integrity are maintained during the time of operation and that after 10,000 years CO2 dissolution is the dominating trapping mechanism and mineralization occurs on the order of 10 % to 25 % with negligible changes to porosity and permeability. De Lucia, M., Kempka, T., Kühn, M. A coupling alternative to reactive transport simulations for long-term prediction of chemical reactions in heterogeneous CO2 storage systems (2014) Geosci Model Dev Discuss 7:6217-6261. doi:10.5194/gmdd-7-6217-2014. Kempka, T., De Lucia, M., Kühn, M. Geomechanical integrity verification and mineral trapping quantification for the Ketzin CO2 storage pilot site by coupled numerical simulations (2014) Energy Procedia 63:3330-3338. doi:10.1016/j.egypro.2014.11.361. Klein, E., De Lucia, M., Kempka, T., Kühn, M. Evaluation of long-term mineral trapping at the Ketzin pilot site for CO2 storage: an integrative approach using geo-chemical modelling and reservoir simulation (2013) Int J Greenh Gas Con 19:720-730. doi:10.1016/j.ijggc.2013.05.014.

4. Numerical estimation of real and apparent integral neutron parameters used in nuclear borehole geophysics.

PubMed

Dworak, D; Drabina, A; Woźnicka, U

2006-07-01

The semi-empirical method of neutron logging tool calibration developed by Prof. J.A. Czubek uses the real and so-called apparent integral neutron parameters of geological formations. To this end, Czubek proposed several separate calculation methods, commonly based on analytical solutions of the neutron transport problem. A new calculation method for the neutron integral parameters is proposed here. Quantities like the slowing-down length, diffusion and migration lengths, probability to avoid absorption during slowing down, and thermal neutron absorption cross section can be easily approximated using Monte Carlo simulations. A comparison with the results of the analytical method developed by Czubek has been performed for many cases and the observed differences have been explained.

5. On the accuracy and convergence of implicit numerical integration of finite element generated ordinary differential equations

NASA Technical Reports Server (NTRS)

Baker, A. J.; Soliman, M. O.

1978-01-01

A study of the accuracy and convergence of linear functional finite element solutions to linear parabolic and hyperbolic partial differential equations is presented. A variable-implicit integration procedure is employed for the resultant system of ordinary differential equations. Accuracy and convergence are compared for the consistent and two lumped assembly procedures for the identified initial-value matrix structure. Truncation error estimation is accomplished using Richardson extrapolation.
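Richardson extrapolation estimates the truncation error by comparing solutions obtained at two step sizes with a method of known order p. A minimal sketch follows, using backward Euler (order 1) on y' = −y as an illustrative stand-in for the paper's finite-element-generated ODE systems:

```python
import numpy as np

# Richardson extrapolation: combine solutions at steps h and h/2 of a
# method of order p to estimate (and cancel) the leading truncation error.
def richardson(y_h, y_h2, p):
    """y_h, y_h2: approximations with steps h and h/2; p: convergence order."""
    err_est = (y_h2 - y_h) / (2**p - 1)   # estimate of the error in y_h2
    return y_h2 + err_est, err_est

# Demo on backward Euler (p = 1) for y' = -y, y(0) = 1, integrated to t = 1.
def backward_euler(n):
    y, h = 1.0, 1.0 / n
    for _ in range(n):
        y = y / (1.0 + h)     # implicit step, solvable in closed form here
    return y

y_h, y_h2 = backward_euler(50), backward_euler(100)
y_extrap, err = richardson(y_h, y_h2, p=1)
print(abs(y_h2 - np.exp(-1)), abs(y_extrap - np.exp(-1)))
```

The extrapolated value has one order higher accuracy than either input, and `err` doubles as a cheap truncation-error estimate of the kind used in the study.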

6. On the Numerical Solution of the Integral Equation Formulation for Transient Structural Synthesis

DTIC Science & Technology

2014-09-01

history of integral equations dates back to the early nineteenth century when the profound mathematical insights of Newton and Leibniz were being... As shown in [10], the element stiffness matrix is as follows:

K^e = \frac{EI}{l^3} \begin{bmatrix} 12 & 6l & -12 & 6l \\ 6l & 4l^2 & -6l & 2l^2 \\ -12 & -6l & 12 & -6l \\ 6l & 2l^2 & -6l & 4l^2 \end{bmatrix}
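For reference, the standard Euler-Bernoulli beam element stiffness matrix quoted from [10] can be assembled numerically; the material and geometric values below are illustrative, not taken from the report.

```python
import numpy as np

# Euler-Bernoulli beam element stiffness matrix for DOFs (w1, th1, w2, th2),
# the standard textbook result matching the K^e quoted from [10].
def beam_stiffness(E, I, l):
    return (E * I / l**3) * np.array([
        [ 12.0,   6*l,    -12.0,  6*l   ],
        [  6*l,   4*l**2,  -6*l,  2*l**2],
        [-12.0,  -6*l,     12.0, -6*l   ],
        [  6*l,   2*l**2,  -6*l,  4*l**2],
    ])

# Illustrative steel beam segment: E = 210 GPa, I = 1e-6 m^4, l = 0.5 m
K = beam_stiffness(E=210e9, I=1e-6, l=0.5)
print(np.allclose(K, K.T))   # element stiffness matrix is symmetric
```

The matrix is symmetric positive semidefinite with two zero-energy (rigid-body) modes, a useful sanity check on any transcription of its entries.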

7. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

PubMed

Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

2015-11-11

Aiming at addressing the problem of the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus, many unnecessary operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method, as a numerical approach, requires no precision-losing transformation or approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.

8. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems

PubMed Central

Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

2015-01-01

Aiming at addressing the problem of the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus, many unnecessary operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method, as a numerical approach, requires no precision-losing transformation or approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247
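The structure-exploiting idea can be illustrated on the covariance prediction step P ← F P Fᵀ + Q: when F is block-diagonal, each block can be applied to a slice of P instead of forming dense products, and the symmetry of P can be enforced explicitly. The block sizes and matrices below are hypothetical, not the paper's SINS/GPS model.

```python
import numpy as np

# Kalman covariance prediction exploiting a block-diagonal transition matrix.
def predict(P, F_blocks, Q):
    n = P.shape[0]
    FP = np.zeros_like(P)
    row = 0
    for Fb in F_blocks:               # left-multiply: F P, block by block
        k = Fb.shape[0]
        FP[row:row + k, :] = Fb @ P[row:row + k, :]
        row += k
    P_new = FP.copy()
    row = 0
    for Fb in F_blocks:               # right-multiply: (F P) F^T, blockwise
        k = Fb.shape[0]
        P_new[:, row:row + k] = P_new[:, row:row + k] @ Fb.T
        row += k
    P_new += Q
    return 0.5 * (P_new + P_new.T)    # enforce symmetry explicitly

F_blocks = [np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[0.95]])]
P = np.eye(3); Q = 0.01 * np.eye(3)
P1 = predict(P, F_blocks, Q)
# Same result as the dense product F P F^T + Q
F = np.block([[F_blocks[0], np.zeros((2, 1))], [np.zeros((1, 2)), F_blocks[1]]])
print(np.allclose(P1, F @ P @ F.T + Q))
```

For b blocks of size n/b, the blockwise products cost O(n³/b) instead of O(n³), which is the flavor of saving the abstract reports, here derived by hand rather than by the paper's offline-derivation procedure.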

9. On the formulation, parameter identification and numerical integration of the EMMI model :plasticity and isotropic damage.

SciTech Connect

Bammann, Douglas J.; Johnson, G. C. (University of California, Berkeley, CA); Marin, Esteban B.; Regueiro, Richard A.

2006-01-01

In this report we present the formulation of the physically based Evolving Microstructural Model of Inelasticity (EMMI). The specific version of the model treated here describes the plasticity and isotropic damage of metals and is currently applied to model the ductile failure process in structural components of the W80 program. The formulation of the EMMI constitutive equations is framed in the context of the large deformation kinematics of solids and the thermodynamics of internal state variables. The formulation focuses first on developing the plasticity equations in both the relaxed (unloaded) and current configurations. The equations in the current configuration, expressed in non-dimensional form, are used to devise the identification procedure for the plasticity parameters. The model is then extended to include a porosity-based isotropic damage state variable to describe the progressive deterioration of the strength and mechanical properties of metals induced by deformation. The numerical treatment of these coupled plasticity-damage constitutive equations is explained in detail. A number of examples are solved to validate the numerical implementation of the model.

10. GMTIFS: the adaptive optics beam steering mirror for the GMT integral-field spectrograph

Davies, J.; Bloxham, G.; Boz, R.; Bundy, D.; Espeland, B.; Fordham, B.; Hart, J.; Herrald, N.; Nielsen, J.; Sharp, R.; Vaccarella, A.; Vest, C.; Young, P. J.

2016-07-01

To achieve the high adaptive optics sky coverage necessary to allow the GMT Integral-Field Spectrograph (GMTIFS) to access key scientific targets, the on-instrument adaptive-optics wavefront-sensing (OIWFS) system must patrol the full 180 arcsecond diameter guide field passed to the instrument. The OIWFS uses a diffraction-limited guide star as the fundamental pointing reference for the instrument. During an observation the offset between the science target and the guide star will change due to sources such as flexure, differential refraction and non-sidereal tracking rates. GMTIFS uses a beam steering mirror to set the initial offset between science target and guide star and also to correct for changes in offset. To keep image motion from beam-steering errors comparable to that of the AO system in the most stringent case, the beam steering mirror is assigned a requirement of less than 1 milliarcsecond RMS. This corresponds to a dynamic range for both actuators and sensors of better than 1/180,000. The GMTIFS beam steering mirror uses piezo-walk actuators and a combination of eddy-current sensors and interferometric sensors to achieve this dynamic range and control. While the sensors are rated for cryogenic operation, the actuators are not. We report on the results of prototype testing of single actuators, with the sensors, on the bench and in a cryogenic environment. Specific failures of the system, and their suspected causes, are explained. A modified test jig is used to investigate the option of heating the actuator, and we report the improved results. In addition to individual component testing, we built and tested a complete beam steering mirror assembly. Testing was conducted with a point source microscope; however, controlling environmental conditions to better than 1 micron was challenging. The assembly testing investigated acquisition accuracy and whether there was any un-sensed hysteresis in the system. Finally we present the revised beam steering mirror

11. golem95: A numerical program to calculate one-loop tensor integrals with up to six external legs

Binoth, T.; Guillet, J.-Ph.; Heinrich, G.; Pilon, E.; Reiter, T.

2009-11-01

We present a program for the numerical evaluation of form factors entering the calculation of one-loop amplitudes with up to six external legs. The program is written in Fortran95 and performs the reduction to a certain set of basis integrals numerically, using a formalism where inverse Gram determinants can be avoided. It can be used to calculate one-loop amplitudes with massless internal particles in a fast and numerically stable way.
Catalogue identifier: AEEO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 50 105
No. of bytes in distributed program, including test data, etc.: 241 657
Distribution format: tar.gz
Programming language: Fortran95
Computer: Any computer with a Fortran95 compiler
Operating system: Linux, Unix
RAM: RAM used per form factor is insignificant, even for a rank six six-point form factor
Classification: 4.4, 11.1
External routines: Perl programming language (http://www.perl.com/)
Nature of problem: Evaluation of one-loop multi-leg tensor integrals occurring in the calculation of next-to-leading order corrections to scattering amplitudes in elementary particle physics.
Solution method: Tensor integrals are represented in terms of form factors and a set of basic building blocks ("basis integrals"). The reduction to the basis integrals is

12. Integrated Numerical Simulation of Thermo-Hydro-Chemical Phenomena Associated with Geologic Disposal of High-Level Radioactive Waste

Park, Sang-Uk; Kim, Jun-Mo; Kihm, Jung-Hwi

2014-05-01

A series of numerical simulations was performed using a multiphase thermo-hydro-chemical numerical model to predict in an integrated manner, and evaluate quantitatively, thermo-hydro-chemical phenomena due to heat generation associated with geologic disposal of high-level radioactive waste. The average mineralogical composition of fifteen unweathered igneous rock bodies, classified as granite, in the Republic of Korea was adopted as the initial (primary) mineralogical composition of the host rock of the repository in the numerical simulations. The numerical simulation results show that temperature rises and thus convective groundwater flow occurs near the repository due to heat generation associated with geologic disposal of high-level radioactive waste. Under these circumstances, a series of water-rock interactions takes place. As a result, among the primary minerals, quartz, plagioclase (albite), biotite (annite), and muscovite are dissolved. However, orthoclase is initially precipitated and then dissolved, whereas microcline is initially dissolved and then precipitated. On the other hand, secondary minerals such as kaolinite, Na-smectite, chlorite, and hematite are precipitated and then partly dissolved. In addition, such dissolution and precipitation of the primary and secondary minerals change groundwater chemistry (quality) and induce reactive chemical transport. As a result, in groundwater, Na+, Fe2+, and HCO3- concentrations initially decrease, whereas K+, AlO2-, and aqueous SiO2 concentrations initially increase. On the other hand, H+ concentration initially increases, and thus pH initially decreases, due to dissociation of groundwater to provide the OH- that is essential to the precipitation of Na-smectite and chlorite. Thus, the above-mentioned numerical simulation results suggest that thermo-hydro-chemical numerical simulation can provide a better understanding of heat transport, groundwater flow, and reactive

13. Elementary Techniques of Numerical Integration and Their Computer Implementation. Applications of Elementary Calculus to Computer Science. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 379.

ERIC Educational Resources Information Center

Motter, Wendell L.

It is noted that there are some integrals which cannot be evaluated by determining an antiderivative, and these integrals must be subjected to other techniques. Numerical integration is one such method; it provides a sum that is an approximate value for some integral types. This module's purpose is to introduce methods of numerical integration and…
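The kind of methods such a module introduces can be sketched with the standard composite rules; these are textbook implementations for illustration, not material from the module itself:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals (second-order accurate)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even (fourth-order accurate)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Example: the integral of x**2 on [0, 1] is exactly 1/3.
print(trapezoid(lambda x: x * x, 0.0, 1.0, 100))  # ≈ 0.33335
print(simpson(lambda x: x * x, 0.0, 1.0, 100))    # ≈ 1/3, exact up to roundoff
```

Simpson's rule integrates polynomials up to degree three exactly, which is why the second call recovers 1/3 to machine precision while the trapezoidal result carries an O(h²) error.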

14. An efficient exponential time integration method for the numerical solution of the shallow water equations on the sphere

Gaudreault, Stéphane; Pudykiewicz, Janusz A.

2016-10-01

The exponential propagation methods were applied in the past for accurate integration of the shallow water equations on the sphere. Despite obvious advantages related to the exact solution of the linear part of the system, their use for the solution of practical problems in geophysics has been limited because efficiency of the traditional algorithm for evaluating the exponential of Jacobian matrix is inadequate. In order to circumvent this limitation, we modify the existing scheme by using the Incomplete Orthogonalization Method instead of the Arnoldi iteration. We also propose a simple strategy to determine the initial size of the Krylov space using information from previous time instants. This strategy is ideally suited for the integration of fluid equations where the structure of the system Jacobian does not change rapidly between the subsequent time steps. A series of standard numerical tests performed with the shallow water model on a geodesic icosahedral grid shows that the new scheme achieves efficiency comparable to the semi-implicit methods. This fact, combined with the accuracy and the mass conservation of the exponential propagation scheme, makes the presented method a good candidate for solving many practical problems, including numerical weather prediction.
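The core operation in exponential propagation is the action of the matrix exponential on a vector, w = exp(τA)v. The sketch below computes it by a truncated Taylor series on a tiny matrix purely for illustration; the Krylov-based schemes in the paper (Arnoldi or the Incomplete Orthogonalization Method) approximate the same action far more efficiently for large sparse Jacobians.

```python
import math

def matvec(A, v):
    """Dense matrix-vector product on lists of lists."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def expmv_taylor(A, v, tau=1.0, terms=30):
    """Approximate w = exp(tau*A) v by a truncated Taylor series:
    w = sum_k (tau*A)^k v / k!  (adequate only for small, well-scaled A)."""
    w = list(v)
    term = list(v)
    for k in range(1, terms):
        term = [tau * t / k for t in matvec(A, term)]
        w = [wi + ti for wi, ti in zip(w, term)]
    return w

# Check on a diagonal matrix, where exp acts componentwise.
A = [[-1.0, 0.0], [0.0, 2.0]]
v = [1.0, 1.0]
w = expmv_taylor(A, v)
print(w)  # ≈ [exp(-1), exp(2)] = [0.3679, 7.3891]
```

A Krylov method replaces the Taylor sum with exp(τH) computed in a small subspace spanned by v, Av, A²v, ..., which is what makes the approach viable when A is the Jacobian of a discretized fluid model.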

15. Integration of finite element analysis and numerical optimization techniques for RAM transport package design

SciTech Connect

Harding, D.C.; Eldred, M.S.; Witkowski, W.R.

1995-12-31

Type B radioactive material transport packages must meet strict Nuclear Regulatory Commission (NRC) regulations specified in 10 CFR 71. Type B containers include impact limiters, radiation or thermal shielding layers, and one or more containment vessels. In the past, each component was typically designed separately based on its driving constraint and the expertise of the designer. The components were subsequently assembled and the design modified iteratively until all of the design criteria were met. This approach neglects the fact that components may serve secondary purposes as well as primary ones. For example, an impact limiter's primary purpose is to act as an energy absorber and protect the contents of the package, but it can also act as a heat dissipater or insulator. Designing the component to maximize its performance with respect to both objectives can be accomplished using numerical optimization techniques.

16. Path dependence of J in three numerical examples. [J integral in three crack propagation problems

NASA Technical Reports Server (NTRS)

Karabin, M. E., Jr.; Swedlow, J. L.

1979-01-01

Three cracked geometries are studied with the aid of a new finite element model. The procedure employs a variable singularity at the crack tip that tracks changes in the material response during the loading process. Two of the problems are tension-loaded center-crack panels and the other is a three-point bend specimen. Results generally agree with other numerical and analytical analyses, except for the finding that J becomes path dependent as a substantial plastic zone develops. Credible J values are obtained near the crack tip, and J shows a significant increase as the radius of the J path increases over two orders of magnitude. Incremental and deformation theories give identical results provided the stresses remain proportional, a condition met in the far-field stresses but not near the tip.

17. Efficient O(N) integration for all-electron electronic structure calculation using numeric basis functions

SciTech Connect

Havu, V.; Blum, V.; Havu, P.; Scheffler, M.

2009-12-01

We consider the problem of developing O(N) scaling grid-based operations needed in many central operations when performing electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated, and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.

18. Orbit determination based on meteor observations using numerical integration of equations of motion

Dmitriev, V.; Lupovka, V.; Gritsevich, M.

2014-07-01

We review the definitions and approaches to orbital-characteristics analysis applied to photographic or video ground-based observations of meteors. A number of camera networks dedicated to meteor registration have been established all over the world, including in the USA, Canada, Central Europe, Australia, Spain, Finland and Poland. Many of these networks are currently operational. The meteor observations are conducted from different locations hosting the network stations. Each station is equipped with at least one camera for continuous monitoring of the sky (weather permitting). For registered multi-station meteors, it is possible to accurately determine the direction and absolute value of the meteor velocity and thus obtain the topocentric radiant. From the topocentric radiant one further determines the heliocentric meteor orbit. We aim to reduce the total uncertainty in our orbit-determination technique, keeping it below the accuracy of the observations. Additional corrections for the zenith attraction are widely in use and are implemented, for example, in [1]. We propose a technique for meteor-orbit determination with higher accuracy. We transform the topocentric radiant into the inertial (J2000) coordinate system using the model recommended by the IAU [2]. The main difference compared with existing orbit-determination techniques is the integration of the ordinary differential equations of motion instead of adding a correction to the apparent velocity for zenith attraction. The attraction of the central body (the Sun), the perturbations by the Earth, the Moon and other planets of the Solar System, the Earth's flattening (important at the initial moment of integration, i.e. at the moment when a meteoroid enters the atmosphere), and atmospheric drag may optionally be included in the equations. In addition, reverse integration of the same equations can be performed to analyze the orbital evolution preceding the meteoroid's collision with Earth. To demonstrate the developed
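The core of the proposed approach, integrating the equations of motion numerically rather than correcting the apparent velocity analytically, can be sketched in miniature. The toy below integrates a two-body (Sun-only) orbit with classical fourth-order Runge-Kutta in normalised units; the planetary perturbations, Earth flattening, and atmospheric drag mentioned in the abstract are omitted.

```python
import math

MU = 1.0  # gravitational parameter in normalised units (an assumption)

def deriv(state):
    """Right-hand side of the two-body equations of motion in the plane."""
    x, y, vx, vy = state
    r3 = (x * x + y * y) ** 1.5
    return [vx, vy, -MU * x / r3, -MU * y / r3]

def rk4_step(state, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = deriv(state)
    k2 = deriv([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = deriv([s + h * k for s, k in zip(state, k3)])
    return [s + h / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

# Circular orbit at r = 1 with v = sqrt(MU/r); the radius should stay ~1.
state = [1.0, 0.0, 0.0, 1.0]
h = 0.01
for _ in range(int(2 * math.pi / h)):  # roughly one orbital period
    state = rk4_step(state, h)
r = math.hypot(state[0], state[1])
print(r)  # ≈ 1.0
```

Reverse integration, as mentioned in the abstract, amounts to stepping the same equations with a negative h from the atmospheric-entry state.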

19. Numerical methods for the simulation of complex multi-body flows with applications for the integrated Space Shuttle vehicle

NASA Technical Reports Server (NTRS)

Chan, William M.

1992-01-01

This project forms part of the long term computational effort to simulate the time dependent flow over the integrated Space Shuttle vehicle (orbiter, solid rocket boosters (SRB's), external tank (ET), and attach hardware) during its ascent mode for various nominal and abort flight conditions. Due to the limitations of experimental data such as wind tunnel wall effects and the difficulty of safely obtaining valid flight data, numerical simulations are undertaken to supplement the existing data base. This data can then be used to predict the aerodynamic behavior over a wide range of flight conditions. Existing computational results show relatively good overall comparison with experiments but further refinement is required to reduce numerical errors and to obtain finer agreements over a larger parameter space. One of the important goals of this project is to obtain better comparisons between numerical simulations and experiments. In the simulations performed so far, the geometry has been simplified in various ways to reduce the complexity so that useful results can be obtained in a reasonable time frame due to limitations in computer resources. In this project, the finer details of the major components of the Space Shuttle are modeled better by including more complexity in the geometry definition. Smaller components not included in early Space Shuttle simulations will now be modeled and gridded.

20. Thermal limits and adaptation in marine Antarctic ectotherms: an integrative view.

PubMed

Pörtner, Hans O; Peck, Lloyd; Somero, George

2007-12-29

A cause and effect understanding of thermal limitation and adaptation at various levels of biological organization is crucial in the elaboration of how the Antarctic climate has shaped the functional properties of extant Antarctic fauna. At the same time, this understanding requires an integrative view of how the various levels of biological organization may be intertwined. At all levels analysed, the functional specialization to permanently low temperatures implies reduced tolerance of high temperatures, as a trade-off. Maintenance of membrane fluidity, enzyme kinetic properties (Km and k(cat)) and protein structural flexibility in the cold supports metabolic flux and regulation as well as cellular functioning overall. Gene expression patterns and, even more so, loss of genetic information, especially for myoglobin (Mb) and haemoglobin (Hb) in notothenioid fishes, reflect the specialization of Antarctic organisms to a narrow range of low temperatures. The loss of Mb and Hb in icefish, together with enhanced lipid membrane densities (e.g. higher concentrations of mitochondria), becomes explicable by the exploitation of high oxygen solubility at low metabolic rates in the cold, where an enhanced fraction of oxygen supply occurs through diffusive oxygen flux. Conversely, limited oxygen supply to tissues upon warming is an early cause of functional limitation. Low standard metabolic rates may be linked to extreme stenothermy. The evolutionary forces causing low metabolic rates as a uniform character of life in Antarctic ectothermal animals may be linked to the requirement for high energetic efficiency as required to support higher organismic functioning in the cold. This requirement may result from partial compensation for the thermal limitation of growth, while other functions like hatching, development, reproduction and ageing are largely delayed. As a perspective, the integrative approach suggests that the patterns of oxygen- and capacity-limited thermal tolerance

1. Mosaic-skeleton method as applied to the numerical solution of three-dimensional Dirichlet problems for the Helmholtz equation in integral form

Kashirin, A. A.; Smagin, S. I.; Taltykina, M. Yu.

2016-04-01

Interior and exterior three-dimensional Dirichlet problems for the Helmholtz equation are solved numerically. They are formulated as equivalent boundary Fredholm integral equations of the first kind and are approximated by systems of linear algebraic equations, which are then solved numerically by applying an iteration method. The mosaic-skeleton method is used to speed up the solution procedure.

2. Integrating experimental and numerical methods for a scenario-based quantitative assessment of subsurface energy storage options

Kabuth, Alina; Dahmke, Andreas; Hagrey, Said Attia al; Berta, Márton; Dörr, Cordula; Koproch, Nicolas; Köber, Ralf; Köhn, Daniel; Nolde, Michael; Tilmann Pfeiffer, Wolf; Popp, Steffi; Schwanebeck, Malte; Bauer, Sebastian

2016-04-01

Within the framework of the transition to renewable energy sources ("Energiewende"), the German government defined the target of producing 60 % of the final energy consumption from renewable energy sources by the year 2050. However, renewable energies are subject to natural fluctuations. Energy storage can help to buffer the resulting time shifts between production and demand. Subsurface geological structures provide large potential capacities for energy stored in the form of heat or gas on daily to seasonal time scales. In order to explore this potential sustainably, the possible induced effects of energy storage operations have to be quantified for both specified normal operation and events of failure. The ANGUS+ project therefore integrates experimental laboratory studies with numerical approaches to assess subsurface energy storage scenarios and monitoring methods. Subsurface storage options for gas, i.e. hydrogen, synthetic methane and compressed air in salt caverns or porous structures, as well as subsurface heat storage are investigated with respect to site prerequisites, storage dimensions, induced effects, monitoring methods and integration into spatial planning schemes. The conceptual interdisciplinary approach of the ANGUS+ project towards the integration of subsurface energy storage into a sustainable subsurface planning scheme is presented here, and this approach is then demonstrated using the examples of two selected energy storage options: Firstly, the option of seasonal heat storage in a shallow aquifer is presented. Coupled thermal and hydraulic processes induced by periodic heat injection and extraction were simulated in the open-source numerical modelling package OpenGeoSys. Situations of specified normal operation as well as cases of failure in operational storage with leaking heat transfer fluid are considered. Bench-scale experiments provided parameterisations of temperature dependent changes in shallow groundwater hydrogeochemistry. As a

3. Computational and numerical aspects of using the integral equation method for adhesive layer fracture mechanics analysis

SciTech Connect

Giurgiutiu, V.; Ionita, A.; Dillard, D.A.; Graffeo, J.K.

1996-12-31

Fracture mechanics analysis of adhesively bonded joints has attracted considerable attention in recent years. A possible approach to the analysis of adhesive layer cracks is to study a brittle adhesive between two elastic half-planes representing the substrates. A two-material, three-region elasticity problem is set up and has to be solved. A modeling technique based on the work of Fleck, Hutchinson, and Suo is used. Two complex potential problems using Muskhelishvili's formulation are set up for the three-region, two-material model: (a) a distribution of edge dislocations is employed to simulate the crack and its near field; and (b) a crack-free problem is used to simulate the effect of the external loading applied in the far field. Superposition of the two problems is followed by matching tractions and displacements at the bimaterial boundaries. The Cauchy principal value integral is used to treat the singularities. Imposing the traction-free boundary conditions over the entire crack length yields a linear system of two integral equations. The parameters of the problem are Dundurs' elastic mismatch coefficients, α and β, and the ratio c/H representing the geometric position of the crack in the adhesive layer.

4. Numeric solution of the electric field integral equation using Galerkin's method for axisymmetric cases

Lileg, Klemens

1990-12-01

The electric field integral equation is solved for a cylindrical antenna of arbitrary radius with flat endcaps using the method of moments. Trigonometric subdomain functions are used as basis functions; the weighting functions have the same shape as the basis functions (Galerkin's method). For the endcaps the approximation of the program NEC is used; the excitation is due to a homogeneous field in a gap in the center of the antenna. No analytical approximations are employed in the evaluation of the integrals needed for the computation of the impedance matrix. The admittance so obtained converges better than that found with the help of NEC, but in many cases it is not completely satisfactory. Therefore, approximate conditions for the endcaps are introduced, and trigonometric subdomain functions analogous to those used on the cylinder are used as basis functions. All additional evaluations are done without approximations. The results for the admittance converge in all cases, even for a small number of segments. The impedance is measured for a number of monopoles of various radii above a conducting plane; for all frequencies good agreement with the calculation is obtained.

5. Classical to path-integral adaptive resolution in molecular simulation: towards a smooth quantum-classical coupling.

PubMed

Poma, A B; Delle Site, L

2010-06-25

Simulations that couple different molecular models in an adaptive way by changing resolution on the fly allow us to identify the relevant degrees of freedom of a system. This, in turn, leads to a detailed understanding of the essential physics which characterizes a system. While the delicate process of transition from one model to another is well understood for adaptivity between classical molecular models, the same cannot be said for quantum-classical adaptivity. The main reason for this is the difficulty in describing a continuous transition between two different kinds of physical principles: probabilistic for the quantum and deterministic for the classical. Here we report the basic principles of an algorithm that allows for a continuous and smooth transition by employing the path integral description of atoms.

6. Numerical Modeling for Integrated Design of a DNAPL Partitioning Tracer Test

McCray, J. E.; Divine, C. E.; Dugan, P. J.; Wolf, L.; Boving, T.; Louth, M.; Brusseau, M. L.; Hayes, D.

2002-12-01

Partitioning tracer tests (PTTs) are commonly used to estimate the location and volume of nonaqueous-phase liquids (NAPLs) at contaminated groundwater sites. PTTs are completed before and after remediation efforts as one means to assess remediation effectiveness. PTT design is complex. Numerical models are invaluable tools for designing a PTT, particularly for designing flow rates and selecting tracers to ensure proper tracer breakthrough times, spatial design of injection-extraction wells and rates to maximize tracer capture, well-specific sampling density and frequency, and appropriate tracer-chemical masses. Generally, the design requires consideration of the following factors: type of contaminant; distribution of contaminant at the site, including location of hot spots; site hydraulic characteristics; measurement of the partitioning coefficients for the various tracers; the time allotted to conduct the PTT; evaluation of the magnitude and arrival time of the tracer breakthrough curves; duration of the tracer input pulse; maximum tracer concentrations; analytical detection limits for the tracers; estimation of the capture zone of the well field to ensure tracer mass balance and to limit residual tracer concentrations left in the subsurface; effect of chemical remediation agents on the PTT results; and disposal of the extracted tracer solution. These design principles are applied to a chemical-enhanced remediation effort for a chlorinated-solvent dense NAPL (DNAPL) site at Little Creek Naval Amphibious Base in Virginia Beach, Virginia. For this project, the hydrology and pre-PTT contaminant distribution were characterized using traditional methods (slug tests, groundwater and soil concentrations from monitoring wells, and geoprobe analysis), as well as membrane interface probe analysis. Additional wells were installed after these studies. Partitioning tracers were selected based on the primary DNAPL contaminants at the site, expected NAPL saturations

7. EEG-Based Prediction of Cognitive Workload Induced by Arithmetic: A Step towards Online Adaptation in Numerical Learning

ERIC Educational Resources Information Center

Spüler, Martin; Walter, Carina; Rosenstiel, Wolfgang; Gerjets, Peter; Moeller, Korbinian; Klein, Elise

2016-01-01

Numeracy is a key competency for living in our modern knowledge society. Therefore, it is essential to support numerical learning from basic to more advanced competency levels. From educational psychology it is known that learning is most effective when the respective content is neither too easy nor too demanding in relation to learners'…

8. Study of vortex ring dynamics in the nonlinear Schrodinger equation utilizing GPU-accelerated high-order compact numerical integrators

Caplan, Ronald Meyer

We numerically study the dynamics and interactions of vortex rings in the nonlinear Schrodinger equation (NLSE). Single-ring dynamics for both bright and dark vortex rings are explored, including their transverse velocity, stability, and perturbations resulting in quadrupole oscillations. Multi-ring dynamics of dark vortex rings are investigated, including scattering and merging of two colliding rings, leapfrogging interactions of co-traveling rings, as well as co-moving steady-state multi-ring ensembles. Simulations of choreographed multi-ring setups are also performed, leading to intriguing interaction dynamics. Due to the inherent lack of a closed-form solution for vortex rings and the dimensionality in which they live, efficient numerical methods to integrate the NLSE have to be developed in order to perform the extensive number of required simulations. To facilitate this, compact high-order numerical schemes for the spatial derivatives are developed, including a new semi-compact modulus-squared Dirichlet boundary condition. The schemes are combined with a fourth-order Runge-Kutta time-stepping scheme in order to keep the overall method fully explicit. To ensure efficient use of the schemes, a stability analysis is performed to find bounds on the largest usable time-step size as a function of the spatial step size. The numerical methods are implemented in codes which are run on NVIDIA graphics processing unit (GPU) parallel architectures. The codes running on the GPU are shown to be many times faster than their serial counterparts. The codes are developed with future usability in mind, and therefore are written to interface with MATLAB, utilizing custom GPU-enabled C codes with a MEX-compiler interface. Reproducibility of results is achieved by combining the codes into a code package called NLSEmagic, which is freely distributed on a dedicated website.

9. The lambda-scheme. [for numerical integration of Euler equation of compressible gas flow

NASA Technical Reports Server (NTRS)

Moretti, G.

1979-01-01

A method for integrating the Euler equations of gas dynamics for compressible flows in any hyperbolic case is presented. This method is applied to the Mach number distribution over a stretch of an infinite duct having a variable cross section, and to the distribution in a channel opening into a vacuum with the Mach number equalling 1.04. An example of the ability of this method to handle two-dimensional unsteady flows is shown using the steady shock-and-isobars pattern reached asymptotically about an ablated blunt body with a free stream Mach number equalling 12. A final example is presented where the technique is applied to a three-dimensional steady supersonic flow, with a Mach number of 2 and an angle of attack of 5 deg.

10. Numerical simulation of Stokes flow around particles via a hybrid Finite Difference-Boundary Integral scheme

Bhattacharya, Amitabh

2013-11-01

An efficient algorithm for simulating Stokes flow around particles is presented here, in which a second-order Finite Difference method (FDM) is coupled to a Boundary Integral method (BIM). This method utilizes the strong points of FDM (i.e. localized stencil) and BIM (i.e. accurate representation of the particle surface). Specifically, in each iteration, the flow field away from the particles is solved on a Cartesian FDM grid, while the traction on the particle surface (given the velocity of the particle) is solved using BIM. The two schemes are coupled by matching the solution in an intermediate region between the particle and the surrounding fluid. We validate this method by solving for flow around an array of cylinders, and find good agreement with Hasimoto's (J. Fluid Mech., 1959) analytical results.

11. Evaluation of 3 numerical methods for propulsion integration studies on transonic transport configurations

NASA Technical Reports Server (NTRS)

Yaros, S. F.; Carlson, J. R.; Chandrasekaran, B.

1986-01-01

An effort has been undertaken at the NASA Langley Research Center to assess the capabilities of available computational methods for use in propulsion integration design studies of transonic transport aircraft, particularly of pylon/nacelle combinations which exhibit essentially no interference drag. The three computer codes selected represent state-of-the-art computational methods for analyzing complex configurations at subsonic and transonic flight conditions. These are: EULER, a finite volume solution of the Euler equation; VSAERO, a panel solution of the Laplace equation; and PPW, a finite difference solution of the small disturbance transonic equations. In general, all three codes have certain capabilities that allow them to be of some value in predicting the flows about transport configurations, but all have limitations. Until more accurate methods are available, careful application and interpretation of the results of these codes are needed.

12. Evaluation of three numerical methods for propulsion integration studies on transonic transport configurations

NASA Technical Reports Server (NTRS)

Yaros, Steven F.; Carlson, John R.; Chandrasekaran, Balasubramanyan

1986-01-01

An effort has been undertaken at the NASA Langley Research Center to assess the capabilities of available computational methods for use in propulsion integration design studies of transonic transport aircraft, particularly of pylon/nacelle combinations which exhibit essentially no interference drag. The three computer codes selected represent state-of-the-art computational methods for analyzing complex configurations at subsonic and transonic flight conditions. These are: EULER, a finite volume solution of the Euler equation; VSAERO, a panel solution of the Laplace equation; and PPW, a finite difference solution of the small disturbance transonic equations. In general, all three codes have certain capabilities that allow them to be of some value in predicting the flows about transport configurations, but all have limitations. Until more accurate methods are available, careful application and interpretation of the results of these codes are needed.

13. Radial 32P ion implantation using a coaxial plasma reactor: Activity imaging and numerical integration

Fortin, M. A.; Dufresne, V.; Paynter, R.; Sarkissian, A.; Stansfield, B.

2004-12-01

14. Integrated numerical design of an innovative Lower Hybrid launcher for Alcator C-Mod

SciTech Connect

Meneghini, O.; Shiraiwa, S.; Beck, W.; Irby, J.; Koert, P.; Parker, R. R.; Viera, R.; Wukitch, S.; Wilson, J.

2009-11-26

The new Alcator C-Mod LHCD system (LH2) is based on the concept of a four-way splitter [1] which evenly splits the RF power among the four waveguides that compose one of the 16 columns of the LH grill. In this work several simulation tools have been used to study the LH2 coupling performance and the launched spectra when facing a plasma, numerically verifying the effectiveness of the four-way splitter concept and further improving its design. The TOPLHA code has been used for modeling reflections at the antenna/plasma interface. TOPLHA results were then coupled to the commercial code CST Microwave Studio to efficiently optimize the four-way splitter geometry for several plasma scenarios. Subsequently, the COMSOL Multiphysics code was used to self-consistently take into account the electromagnetic-thermal-structural interactions. This comprehensive and predictive analysis has proven to be very valuable for understanding the behavior of the system when facing the plasma and has profoundly influenced several design choices of the LH2. According to the simulations, the final design ensures even poloidal power splitting for a wide range of plasma parameters, which ultimately results in an improvement of the wave coupling and an increased maximum operating power.

15. A Proteomic Perspective on the Bacterial Adaptation to Cold: Integrating OMICs Data of the Psychrotrophic Bacterium Exiguobacterium antarcticum B7

PubMed Central

Baraúna, Rafael A.; Freitas, Dhara Y.; Pinheiro, Juliana C.; Folador, Adriana R. C.; Silva, Artur

2017-01-01

Since the publication of one of the first studies using 2D gel electrophoresis by Patrick H. O’Farrell in 1975, several other studies have used that method to evaluate cellular responses to different physicochemical variations. In environmental microbiology, bacterial adaptation to cold environments is a “hot topic” because of its application in biotechnological processes. As in other fields, gel-based and gel-free proteomic methods have been used to determine the molecular mechanisms of adaptation to cold of several psychrotrophic and psychrophilic bacterial species. In this review, we aim to describe and discuss the main molecular mechanisms of cold adaptation, referencing proteomic studies that have made significant contributions to our current knowledge in the area. Furthermore, we use Exiguobacterium antarcticum B7 as a model organism to present the importance of integrating genomic, transcriptomic, and proteomic data. This species was isolated in Antarctica and has previously been studied at all three omic levels. The integration of these data permitted more robust conclusions about the mechanisms of bacterial adaptation to cold. PMID:28248259

16. Integration of variable-rate OWC with OFDM-PON for hybrid optical access based on adaptive envelope modulation

Chen, Chen; Zhong, Wen-De; Wu, Dehao

2016-12-01

In this paper, we investigate an integrated optical wireless communication (OWC) and orthogonal frequency division multiplexing based passive optical network (OFDM-PON) system for hybrid wired and wireless optical access, based on an adaptive envelope modulation technique. Both the outdoor and indoor wireless communications are considered in the integrated system. The data for wired access is carried by a conventional OFDM signal, while the data for wireless access is carried by an M-ary pulse amplitude modulation (M-PAM) signal which is modulated onto the envelope of a phase-modulated OFDM signal. By adaptively modulating the wireless M-PAM signal onto the envelope of the wired phase-modulated constant envelope OFDM (CE-OFDM) signal, hybrid wired and wireless optical access can be seamlessly integrated and variable-rate optical wireless transmission can also be achieved. Analytical bit-error-rate (BER) expressions are derived for both the CE-OFDM signal with M-PAM overlay and the overlaid unipolar M-PAM signal, which are verified by Monte Carlo simulations. The BER performances of wired access, indoor OWC wireless access and outdoor OWC wireless access are evaluated. Moreover, variable-rate indoor and outdoor optical wireless access based on the adaptive envelope modulation technique is also discussed.

17. When genome integrity and cell cycle decisions collide: roles of polo kinases in cellular adaptation to DNA damage.

PubMed

Serrano, Diego; D'Amours, Damien

2014-09-01

The drive to proliferate and the need to maintain genome integrity are two of the most powerful forces acting on biological systems. When these forces enter in conflict, such as in the case of cells experiencing DNA damage, feedback mechanisms are activated to ensure that cellular proliferation is stopped and no further damage is introduced while cells repair their chromosomal lesions. In this circumstance, the DNA damage response dominates over the biological drive to proliferate, and may even result in programmed cell death if the damage cannot be repaired efficiently. Interestingly, the drive to proliferate can under specific conditions overcome the DNA damage response and lead to a reactivation of the proliferative program in checkpoint-arrested cells. This phenomenon is known as adaptation to DNA damage and is observed in all eukaryotic species where the process has been studied, including normal and cancer cells in humans. Polo-like kinases (PLKs) are critical regulators of the adaptation response to DNA damage and they play key roles at the interface of cell cycle and checkpoint-related decisions in cells. Here, we review recent progress in defining the specific roles of PLKs in the adaptation process and how this conserved family of eukaryotic kinases can integrate the fundamental need to preserve genomic integrity with effective cellular proliferation.

18. Integrating Field Measurements and Numerical Modeling to Investigate Gully Network Evolution

Rengers, F. K.; Tucker, G. E.

2011-12-01

With the advent of numerical modeling the exploration of landscape evolution has advanced from simple thought experiments to investigation of increasingly complex landforming processes. A common criticism of landscape evolution modeling, however, is the lack of model validation with actual field data. Here we present research that continues the advancement of landscape evolution theory by combining detailed field observations with numerical modeling. The focus of our investigation is gully networks on soft-rock strata, where rates of morphologic change are fast enough to measure on annual to decadal time scales. Our research focuses on a highly transient landscape on the high plains of eastern Colorado (40 miles east of Denver, CO) where convective thunderstorms drive ephemeral stream flow, resulting in incised gullies with vertical knickpoints. The site has yielded a comprehensive dataset of hydrology, topography, and geomorphic change. We are continuously monitoring several environmental parameters (including rainfall, overland flow, stream discharge, and soil moisture), and have explored the physical properties of the soil on the site through grain size analysis and infiltration measurements. In addition, time-lapse photography and repeat terrestrial lidar scanning make it possible to track knickpoint dynamics through time. The resulting dataset provides a case study for testing the ability of landscape evolution models to reproduce annual to decadal patterns of erosion and deposition. Knickpoint erosion is the largest contributor to landscape evolution and the controlling factor for gully migration rate. Average knickpoint retreat rates, based on historic aerial photographs and ongoing laser surveying, range between 0.1 and 2.5 m/yr. Knickpoint retreat appears to be driven by a combination of plunge-pool scour, large block failure, and grain-by-grain entrainment of sediment from the wall. Erosion is correlated with flash floods in the summer months. To test our

19. Integrating Geochemical and Geodynamic Numerical Models of Mantle Evolution and Plate Tectonics

Tackley, P. J.; Xie, S.

2001-12-01

The thermal and chemical evolution of Earth's mantle and plates are inextricably coupled by the plate tectonic - mantle convective system. Convection causes chemical differentiation, recycling and mixing, while chemical variations affect the convection through physical properties such as density and viscosity which depend on composition. It is now possible to construct numerical mantle convection models that track the thermo-chemical evolution of major and minor elements, and which can be used to test prospective models and hypotheses regarding Earth's chemical and thermal evolution. Model thermal and chemical structures can be compared to results from seismic tomography, while geochemical signatures (e.g., trace element ratios) can be compared to geochemical observations. The presented, two-dimensional model combines a simplified 2-component major element model with tracking of the most important trace elements, using a tracer method. Melting is self-consistently treated using a solidus, with melt placed on the surface as crust. Partitioning of trace elements occurs between melt and residue. Decaying heat-producing elements and secular cooling of the mantle and core provide the driving heat sources. Pseudo-plastic yielding of the lithosphere gives a first-order approximation of plate tectonics, and also allows planets with a rigid lid or intermittent plate tectonics to be modeled simply by increasing the yield strength. Preliminary models with an initially homogeneous mantle show that regions with a HIMU-like signature can be generated by crustal recycling, and regions with high 3He/4He ratios can be generated by residuum recycling. Outgassing of Argon is within the observed range. Models with initially layered mantles will also be investigated. In future it will be important to include a more realistic bulk compositional model that allows continental crust as well as oceanic crust to form, and to extend the model to three dimensions since toroidal flow may alter

20. Adaptive Mesh Refinement in the Context of Spectral Numerical Evolutions of Binary Black Hole Space-Times

Szilagyi, Bela

2011-04-01

Spectral numerical methods are known for giving faster convergence than finite difference methods, when evolving smooth quantities. In binary black hole simulations of the SpEC code this exponential convergence is clearly visible. However, the same exponential dependence of the numerical error on the grid-resolution will also mean that a linear order mismatch between the grid-structure and the actual data will lead to exponential loss of accuracy. In my talk I will show the way the Caltech-Cornell-CITA code deals with this, by use of what we call Spectral AMR. In our algorithm we monitor truncation error estimates in various regions of the grid as the simulation proceeds, and adjust the grid as necessary. Supported by Sherman Fairchild Foundation and NSF grants PHY-061459 and PHY-0652995 to Caltech.
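The refinement criterion described in this abstract, monitoring a truncation-error estimate and adjusting the grid as the simulation proceeds, can be illustrated with a one-dimensional Chebyshev expansion, where the relative size of the trailing coefficients serves as the error estimate. The function names, tail width, and tolerance below are hypothetical illustrations, not SpEC's actual algorithm.

```python
import numpy as np

def truncation_estimate(f, n, a=-1.0, b=1.0):
    """Estimate the spectral truncation error of f on [a, b] from the
    tail of its degree-n Chebyshev interpolant (illustrative monitor)."""
    k = np.arange(n + 1)
    x = np.cos(np.pi * k / n)                      # Chebyshev-Gauss-Lobatto nodes on [-1, 1]
    vals = f(0.5 * (b - a) * x + 0.5 * (b + a))    # sample f at mapped nodes
    c = np.polynomial.chebyshev.chebfit(x, vals, n)
    # size of the last two coefficients relative to the largest one
    return np.max(np.abs(c[-2:])) / np.max(np.abs(c))

def adapt_resolution(f, n, tol=1e-10, n_max=128):
    """Double the number of modes until the estimated error meets tol."""
    while n < n_max and truncation_estimate(f, n) > tol:
        n *= 2
    return n
```

For smooth data the coefficient tail decays exponentially with n, so doubling the mode count whenever the estimate exceeds tolerance converges in very few adjustments; applying the same monitor per subdomain gives a spectral analogue of AMR.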

1. Numerical investigations of free edge effects in integrally stiffened layered composite panels

Skrna-Jakl, I.; Rammerstorfer, F. G.

A linear finite element analysis is conducted to examine the free edge stresses and the displacement behavior of an integrally stiffened layered composite panel loaded under uniform inplane tension. Symmetric (+Phi, -Phi, 0, -Phi, +Phi) graphite-epoxy laminates with various fiber orientations in the off-axis plies are considered. The quadratic stress criterion, the Tsai-Wu criterion and the Mises equivalent stresses are used to determine a risk parameter for onset of delamination, first ply failure and matrix cracking in the neat resin. The results of the analysis show that the interlaminar stresses at the +Phi/-Phi and -Phi/0 interfaces increase rapidly in the skin-stringer transition. This behavior is observed at the free edge as well as at some distance from it. The magnitude of the interlaminar stresses in the skin-stringer transition is strongly influenced by the fiber orientations of the off-axis plies. In addition, the overall displacements depend on the magnitude of the off-axis ply angle. It is found that for Phi less than 30 deg the deformations of the stiffener section are dominated by bending, whereas for Phi in the range of 45 to 75 deg the deformations are dominated by torsion. The failure analysis shows that ply and matrix failure tend to occur prior to delamination for the considered configurations.

2. Integrating laboratory creep compaction data with numerical fault models: A Bayesian framework

USGS Publications Warehouse

Fitzenz, D.D.; Jalobeanu, A.; Hickman, S.H.

2007-01-01

We developed a robust Bayesian inversion scheme to plan and analyze laboratory creep compaction experiments. We chose a simple creep law that features the main parameters of interest when trying to identify rate-controlling mechanisms from experimental data. By integrating the chosen creep law or an approximation thereof, one can use all the data, either simultaneously or in overlapping subsets, thus making more complete use of the experiment data and propagating statistical variations in the data through to the final rate constants. Despite the nonlinearity of the problem, with this technique one can retrieve accurate estimates of both the stress exponent and the activation energy, even when the porosity time series data are noisy. Whereas adding observation points and/or experiments reduces the uncertainty on all parameters, enlarging the range of temperature or effective stress significantly reduces the covariance between stress exponent and activation energy. We apply this methodology to hydrothermal creep compaction data on quartz to obtain a quantitative, semiempirical law for fault zone compaction in the interseismic period. Incorporating this law into a simple direct rupture model, we find marginal distributions of the time to failure that are robust with respect to errors in the initial fault zone porosity. Copyright 2007 by the American Geophysical Union.
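The inversion described above, retrieving a stress exponent and an activation energy from compaction data, can be sketched as a brute-force Bayesian grid evaluation under flat priors and Gaussian noise on the logarithm of the rate. The rate law rate = A sigma^n exp(-Q/RT) is the standard semiempirical form; the grids, noise level, and known prefactor below are illustrative assumptions, not the paper's actual scheme (which integrates the creep law against full porosity time series).

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def log_rate(sigma, T, n, Q, logA):
    """Log of the creep rate law: rate = A * sigma**n * exp(-Q / (R*T))."""
    return logA + n * np.log(sigma) - Q / (R * T)

def grid_posterior(sigma, T, obs, n_grid, Q_grid, logA, noise=0.1):
    """Normalized posterior over (n, Q) on a grid, assuming flat priors and
    independent Gaussian noise (std `noise`) on the observed log rates."""
    post = np.zeros((len(n_grid), len(Q_grid)))
    for i, n in enumerate(n_grid):
        for j, Q in enumerate(Q_grid):
            r = obs - log_rate(sigma, T, n, Q, logA)
            post[i, j] = np.exp(-0.5 * np.sum((r / noise) ** 2))
    return post / post.sum()
```

Summing this grid along each axis gives marginal distributions like those discussed in the abstract, and widening the sigma or T range visibly shrinks the n-Q covariance: the two parameters are only separable when both experimental controls vary.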

3. Robust, integrated computational control of NMR experiments to achieve optimal assignment by ADAPT-NMR.

PubMed

Bahrami, Arash; Tonelli, Marco; Sahu, Sarata C; Singarapu, Kiran K; Eghbalnia, Hamid R; Markley, John L

2012-01-01

ADAPT-NMR (Assignment-directed Data collection Algorithm utilizing a Probabilistic Toolkit in NMR) represents a groundbreaking prototype for automated protein structure determination by nuclear magnetic resonance (NMR) spectroscopy. With a [(13)C,(15)N]-labeled protein sample loaded into the NMR spectrometer, ADAPT-NMR delivers complete backbone resonance assignments and secondary structure in an optimal fashion without human intervention. ADAPT-NMR achieves this by implementing a strategy in which the goal of optimal assignment in each step determines the subsequent step by analyzing the current sum of available data. ADAPT-NMR is the first iterative and fully automated approach designed specifically for the optimal assignment of proteins, with fast data collection as a byproduct of this goal. ADAPT-NMR evaluates the current spectral information, uses a goal-directed objective function to select the optimal next data collection step(s), and then directs the NMR spectrometer to collect the selected data set. ADAPT-NMR extracts peak positions from the newly collected data and uses this information to update its analysis of resonance assignments and secondary structure. The goal-directed objective function then defines the next data collection step. The procedure continues until the collected data support comprehensive peak identification, resonance assignments at the desired level of completeness, and protein secondary structure. We present test cases in which ADAPT-NMR achieved results in two days or less that would have taken two months or more by manual approaches.

4. Numerical Development

ERIC Educational Resources Information Center

Siegler, Robert S.; Braithwaite, David W.

2016-01-01

In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

5. Numerical and Experimental Investigation of Natural Convection in Open-Ended Channels with Application to Building Integrated Photovoltaic (BIPV) Systems

Timchenko, V.; Tkachenko, O. A.; Giroux-Julien, S.; Ménézo, C.

2015-05-01

Numerical and experimental investigations of the flow and heat transfer in an open-ended channel formed by a double-skin façade have been undertaken in order to improve understanding of the phenomena and to apply it to passive cooling of building integrated photovoltaic systems. Both uniformly heated and non-uniformly heated configurations, in which heat sources alternated with unheated zones on both skins, were studied. Different periodic and asymmetric heating modes were considered for the same aspect ratio of 1/15 (wall distance to wall height), for periodicities of 1/15 and 4/15 of heated/unheated zones, and for a heat input of 220 W/m2. In the computational study, a three-dimensional transient LES simulation was carried out. It is shown that, in comparison to the uniformly heated configuration, the non-uniformly heated configuration enhances both convective heat transfer and the chimney effect.

6. Optimization of multiple turbine arrays in a channel with tidally reversing flow by numerical modelling with adaptive mesh.

PubMed

Divett, T; Vennell, R; Stevens, C

2013-02-28

At tidal energy sites, large arrays of hundreds of turbines will be required to generate economically significant amounts of energy. Owing to wake effects within the array, the placement of turbines within it will be vital to capturing the maximum energy from the resource. This study presents preliminary results using Gerris, an adaptive mesh flow solver, to investigate the flow through four different arrays of 15 turbines each. The goal is to optimize the position of turbines within an array in an idealized channel. The turbines are represented as areas of increased bottom friction in an adaptive mesh model so that the flow and power capture in tidally reversing flow through large arrays can be studied. The effect of oscillating tides is studied, with interesting dynamics generated as the tidal current reverses direction, forcing turbulent flow through the array. The energy removed from the flow by each of the four arrays is compared over a tidal cycle. A staggered array is found to extract 54 per cent more energy than a non-staggered array. Furthermore, an array positioned to one side of the channel is found to remove a similar amount of energy compared with an array in the centre of the channel.
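Representing each turbine as a patch of increased bottom friction, as this study does, implies a simple estimate of the power a patch removes from a depth-averaged flow under a quadratic drag law; the drag coefficients and density below are illustrative assumptions, not values from the paper.

```python
def patch_power(u, area, cd_turbine, cd_bed=0.0025, rho=1025.0):
    """Power (W) removed by raising the bottom drag coefficient from
    cd_bed to cd_turbine over a patch of given area (m^2), for a
    depth-averaged speed u (m/s). Quadratic drag gives a bed stress
    tau = rho*Cd*|u|*u, so the extra dissipation integrates to
    P = rho * (cd_turbine - cd_bed) * |u|^3 * area."""
    return rho * (cd_turbine - cd_bed) * abs(u) ** 3 * area
```

The cubic dependence on flow speed is why array layout matters so much here: a staggered arrangement that keeps higher speeds through the patches extracts disproportionately more energy than one in which upstream wakes slow the flow reaching downstream turbines.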

7. Numerical simulation of particulate flows using a hybrid of finite difference and boundary integral methods

Bhattacharya, Amitabh; Kesarkar, Tejas

2016-10-01

A combination of finite difference (FD) and boundary integral (BI) methods is used to formulate an efficient solver for simulating unsteady Stokes flow around particles. The two-dimensional (2D) unsteady Stokes equation is solved on a Cartesian grid using a second order FD method, while the 2D steady Stokes equation is solved near the particle using the BI method. The two methods are coupled within the viscous boundary layer, a few FD grid cells away from the particle, where solutions from both FD and BI methods are valid. We demonstrate that this hybrid method can be used to accurately solve for the flow around particles with irregular shapes, even though the radius of curvature of the particle surface is not resolved by the FD grid. For dilute particle concentrations, we construct a virtual envelope around each particle and solve the BI problem for the flow field located between the envelope and the particle. The BI solver provides the velocity boundary condition to the FD solver at "boundary" nodes located on the FD grid, adjacent to the particles, while the FD solver provides the velocity boundary condition to the BI solver at points located on the envelope. The coupling between the FD and BI methods is implicit at every time step. This method allows us to formulate an O(N) scheme for dilute suspensions, where N is the number of particles. For semidilute suspensions, where particles may cluster, an envelope formation method has been formulated and implemented, which enables solving the BI problem for each individual particle cluster, allowing efficient simulation of hydrodynamic interaction between particles even when they are in close proximity. The method has been validated against analytical results for flow around a periodic array of cylinders and for the Jeffery orbit of a moving ellipse in shear flow. Simulation of multiple force-free irregular shaped particles in the presence of shear in a 2D slit flow has been conducted to demonstrate the robustness of

8. Numerical simulation of particulate flows using a hybrid of finite difference and boundary integral methods.

PubMed

Bhattacharya, Amitabh; Kesarkar, Tejas

2016-10-01

A combination of finite difference (FD) and boundary integral (BI) methods is used to formulate an efficient solver for simulating unsteady Stokes flow around particles. The two-dimensional (2D) unsteady Stokes equation is solved on a Cartesian grid using a second order FD method, while the 2D steady Stokes equation is solved near the particle using the BI method. The two methods are coupled within the viscous boundary layer, a few FD grid cells away from the particle, where solutions from both FD and BI methods are valid. We demonstrate that this hybrid method can be used to accurately solve for the flow around particles with irregular shapes, even though the radius of curvature of the particle surface is not resolved by the FD grid. For dilute particle concentrations, we construct a virtual envelope around each particle and solve the BI problem for the flow field located between the envelope and the particle. The BI solver provides the velocity boundary condition to the FD solver at "boundary" nodes located on the FD grid, adjacent to the particles, while the FD solver provides the velocity boundary condition to the BI solver at points located on the envelope. The coupling between the FD and BI methods is implicit at every time step. This method allows us to formulate an O(N) scheme for dilute suspensions, where N is the number of particles. For semidilute suspensions, where particles may cluster, an envelope formation method has been formulated and implemented, which enables solving the BI problem for each individual particle cluster, allowing efficient simulation of hydrodynamic interaction between particles even when they are in close proximity. The method has been validated against analytical results for flow around a periodic array of cylinders and for the Jeffery orbit of a moving ellipse in shear flow. Simulation of multiple force-free irregular shaped particles in the presence of shear in a 2D slit flow has been conducted to demonstrate the robustness of

9. An integrated numerical framework for water quality modelling in cold-region rivers: A case of the lower Athabasca River.

PubMed

Shakibaeinia, Ahmad; Kashyap, Shalini; Dibike, Yonas B; Prowse, Terry D

2016-11-01

There is a great deal of interest in determining the state and variations of water quality parameters in the lower Athabasca River (LAR) ecosystem, northern Alberta, Canada, due to industrial developments in the region. As in other cold-region rivers, the annual cycle of ice cover formation and breakup plays a key role in water quality transformation and transportation processes. An integrated deterministic numerical modelling framework is developed and applied for long-term and detailed simulation of the state and variation (spatial and temporal) of major water quality constituents in both open-water and ice-covered conditions in the LAR. The framework is based on 1D and 2D hydrodynamic and water quality models externally coupled with 1D river ice process models to account for the cold season effects. The models are calibrated/validated using available measured data and applied for simulation of dissolved oxygen (DO) and nutrients (i.e., nitrogen and phosphorus). The results show the effect of winter ice cover on reducing the DO concentration, and a fluctuating temporal trend for DO and nutrients during summer periods with substantial differences in concentration between the main channel and flood plains. This numerical framework can be the basis for future water quality scenario-based studies in the LAR.

10. Adaptive nitrogen and integrated weed management in conservation agriculture: impacts on agronomic productivity, greenhouse gas emissions, and herbicide residues.

PubMed

Oyeogbe, Anthony Imoudu; Das, T K; Bhatia, Arti; Singh, Shashi Bala

2017-04-01

Increasing nitrogen (N) immobilization and weed interference in the early phase of implementation of conservation agriculture (CA) affect crop yields. Yet, higher fertilizer and herbicide use to improve productivity influences greenhouse gas emissions and herbicide residues. These tradeoffs precipitated a need for adaptive N and integrated weed management in a CA-based maize (Zea mays L.)-wheat [Triticum aestivum (L.) emend Fiori & Paol] cropping system in the Indo-Gangetic Plains (IGP) to optimize N availability and reduce weed proliferation. Adaptive N fertilization was based on soil test value and normalized difference vegetation index measurement (NDVM) by GreenSeeker™ technology, while integrated weed management included brown manuring (Sesbania aculeata L. co-culture, killed at 25 days after sowing), herbicide mixture, and weedy check (control, i.e., without weed management). Results indicated that the 'best-adaptive N rate' (i.e., 50% basal + 25% broadcast at 25 days after sowing + supplementary N guided by NDVM) increased maize and wheat grain yields by 20 and 14% (averaged over 2 years), respectively, compared with the whole recommended N applied at sowing. Weed management by brown manuring (during maize) and herbicide mixture (during wheat) resulted in 10 and 21% higher grain yields (averaged over 2 years), respectively, over the weedy check. The NDVM in-season N fertilization and brown manuring affected N2O and CO2 emissions, but resulted in improved carbon storage efficiency, while herbicide residues in soil were significantly lower in the maize season than in the wheat season. This study concludes that adaptive N and integrated weed management enhance synergy between agronomic productivity, fertilizer and herbicide efficiency, and greenhouse gas mitigation.

11. REVIEW: Internal models in sensorimotor integration: perspectives from adaptive control theory

Tin, Chung; Poon, Chi-Sang

2005-09-01

Internal models and adaptive controls are empirical and mathematical paradigms that have evolved separately to describe learning control processes in brain systems and engineering systems, respectively. This paper presents a comprehensive appraisal of the correlation between these paradigms with a view to forging a unified theoretical framework that may benefit both disciplines. It is suggested that the classic equilibrium-point theory of impedance control of arm movement is analogous to continuous gain-scheduling or high-gain adaptive control within or across movement trials, respectively, and that the recently proposed inverse internal model is akin to adaptive sliding control, originally developed for robotic manipulator applications. A modular internal-model architecture for multiple motor tasks is a form of multi-model adaptive control. Stochastic methods, such as generalized predictive control, reinforcement learning, Bayesian learning and Hebbian feedback covariance learning, are reviewed and their possible relevance to motor control is discussed. The possible applicability of a Luenberger observer and an extended Kalman filter to state estimation problems, such as sensorimotor prediction or the resolution of vestibular sensory ambiguity, is also discussed. The important role played by vestibular system identification in postural control suggests an indirect adaptive control scheme whereby system states or parameters are explicitly estimated prior to the implementation of control. This interdisciplinary framework should facilitate the experimental elucidation of the mechanisms of internal models in sensorimotor systems and the reverse engineering of such neural mechanisms into novel brain-inspired adaptive control paradigms in the future.

12. Alleviating inequality in climate policy costs: an integrated perspective on mitigation, damage and adaptation

De Cian, E.; Hof, A. F.; Marangoni, G.; Tavoni, M.; van Vuuren, D. P.

2016-07-01

Equity considerations play an important role in international climate negotiations. While policy analysis has often focused on equity as it relates to mitigation costs, there are large regional differences in adaptation costs and the level of residual damage. This paper illustrates the relevance of including adaptation and residual damage in equity considerations by determining how the allocation of emission allowances would change to counteract regional differences in total climate costs, defined as the costs of mitigation, adaptation, and residual damage. We compare emission levels resulting from a global carbon tax with two allocations of emission allowances under a global cap-and-trade system: one equating mitigation costs and one equating total climate costs as share of GDP. To account for uncertainties in both mitigation and adaptation, we use a model-comparison approach employing two alternative modeling frameworks with different damage, adaptation cost, and mitigation cost estimates, and look at two different climate goals. Despite the identified model uncertainties, we derive unambiguous results on the change in emission allowance allocation that could lessen the unequal distribution of adaptation costs and residual damages through the financial transfers associated with emission trading.

13. Adaptive radiations, ecological specialization, and the evolutionary integration of complex morphological structures.

PubMed

Monteiro, Leandro R; Nogueira, Marcelo R

2010-03-01

The evolutionary integration of complex morphological structures is a macroevolutionary pattern in which morphogenetic components evolve in a coordinated fashion, which can result from the interplay among developmental processes, genetic integration, and different types of selection. We tested hypotheses of ecological versus developmental factors underlying patterns of within-species and evolutionary integration in the mandible of phyllostomid bats, a lineage that underwent one of the most impressive ecological and morphological radiations among mammals. Shape variation of mandibular morphogenetic components was associated with diet, and the transition of integration patterns from developmental to within-species to evolutionary was examined. Within-species (as a proxy for genetic) integration in different lineages resembled developmental integration regardless of diet specialization; however, evolutionary integration patterns reflected selection in different mandibular components. For dietary specializations requiring extensive functional changes in mastication patterns or biting, such as frugivory and sanguivory, the evolutionary integration pattern was not associated with the expected within-species or developmental integration. On the other hand, specializations with lower mastication demands or without major functional reorganization (such as nectarivory and carnivory) presented evolutionary integration patterns similar to the expected developmental pattern. These results show that evolutionary integration patterns are largely a result of independent selection on specific components regardless of developmental modules.

14. Numerical Simulation of the Combustion of Fuel Droplets: Finite Rate Kinetics and Flame Zone Grid Adaptation (CEFD)

NASA Technical Reports Server (NTRS)

Gogos, George; Bowen, Brent D.; Nickerson, Jocelyn S.

2002-01-01

The NASA Nebraska Space Grant (NSGC) & EPSCoR programs have continued their effort to support outstanding research endeavors by funding the Numerical Simulation of the Combustion of Fuel Droplets study at the University of Nebraska at Lincoln (UNL). This team of researchers has developed a transient numerical model to study the combustion of suspended and moving droplets. The engines that propel missiles, jets, and many other devices are dependent upon combustion. Therefore, data concerning the combustion of fuel droplets is of immediate relevance to aviation and aeronautical personnel, especially those involved in flight operations. The experiments being conducted by Dr. Gogos's and Dr. Nayagam's research teams allow investigators to gather data for comparison with theoretical predictions of burning rates, flame structures, and extinction conditions. The consequent improved fundamental understanding of droplet combustion may contribute to the clean and safe utilization of fossil fuels (Williams, Dryer, Haggard & Nayagam, 1997, 72). The present state of knowledge on convective extinction of fuel droplets derives from experiments conducted under normal-gravity conditions. However, any data obtained with suspended droplets under normal gravity are grossly affected by gravity. The need to obtain experimental data under microgravity conditions is therefore well justified and addresses one of the goals of NASA's Human Exploration and Development of Space (HEDS) microgravity combustion experiment.

15. Rotor-bearing system integrated with shape memory alloy springs for ensuring adaptable dynamics and damping enhancement-Theory and experiment

Enemark, Søren; Santos, Ilmar F.

2016-05-01

Helical pseudoelastic shape memory alloy (SMA) springs are integrated into a dynamic system consisting of a rigid rotor supported by passive magnetic bearings. The aim is to determine the utility of SMAs for vibration attenuation via their mechanical hysteresis, and for adaptation of the dynamic behaviour via their temperature-dependent stiffness properties. The SMA performance, in terms of vibration attenuation and adaptability, is compared to a benchmark configuration of the system having steel springs instead of SMA springs. A theoretical multidisciplinary approach is used to quantify the weakly nonlinear coupled dynamics of the rotor-bearing system. The nonlinear forces from the thermo-mechanical shape memory alloy springs and from the passive magnetic bearings are coupled to the rotor and bearing housing dynamics. The equations of motion describing rotor tilt and bearing housing lateral motion are solved in the time domain. The SMA behaviour is also described by the complex modulus to form approximate equations of motion, which are solved in the frequency domain using continuation techniques. Transient responses, ramp-ups and steady-state frequency responses of the system are investigated experimentally and numerically. By using the proper SMA temperature, vibration reductions of up to around 50 percent can be achieved using SMAs instead of steel. Regarding system adaptability, the critical speeds, the mode shapes, and the modes' sensitivity to disturbances (e.g. imbalance) all depend strongly on the SMA temperature. Examples show that vibration reductions at constant rotational speeds of up to around 75 percent can be achieved by changing the SMA temperature, primarily because of the stiffness change, whereas hysteresis only limits large vibrations. The model is able to capture and explain the experimental dynamic behaviour.

16. Iterative method for the numerical solution of a system of integral equations for the heat conduction initial boundary value problem

Svetushkov, N. N.

2016-11-01

The paper deals with a numerical algorithm that reduces the overall system of integral equations describing the heat transfer process in a geometrically complex region (both two-dimensional and three-dimensional) to the iterative solution of a system of independent one-dimensional integral equations. This approach, called the "string method," has been used to solve a number of applications, including the calculation of detonation-wave-front heat loads in pulse detonation engines. In this approach the "strings" are a set of finite segments parallel to the coordinate axes into which the whole solution region is divided (similar to the way the strings are arranged in a tennis racket). Unlike other grid methods, where the value of the unknown function is typically found from values in a neighborhood of a specific central point, here at each iteration step the solution is determined along the entire length of each one-dimensional "string," with the values prescribed at its two end points; the temperature distribution along all the strings is determined in the first step of the iterative procedure.
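The decomposition into independent 1-D problems along axis-parallel lines can be illustrated with a simple line-relaxation analogue for the steady heat (Laplace) equation; each sweep solves one tridiagonal system per "string" with its end values fixed. This is only a sketch of the decomposition idea, not the paper's integral-equation formulation.

```python
import numpy as np

def line_relaxation(u, sweeps=200):
    """Alternate vertical and horizontal 1-D tridiagonal solves (the 'strings')."""
    n = u.shape[0]
    # Tridiagonal matrix of the 1-D problems: -u[k-1] + 4*u[k] - u[k+1] = rhs[k]
    T = 4 * np.eye(n - 2) - np.eye(n - 2, k=1) - np.eye(n - 2, k=-1)
    for _ in range(sweeps):
        for j in range(1, n - 1):            # vertical strings (fixed column j)
            rhs = u[1:-1, j - 1] + u[1:-1, j + 1]
            rhs[0]  += u[0, j]               # fixed end values enter the rhs
            rhs[-1] += u[-1, j]
            u[1:-1, j] = np.linalg.solve(T, rhs)
        for i in range(1, n - 1):            # horizontal strings (fixed row i)
            rhs = u[i - 1, 1:-1] + u[i + 1, 1:-1]
            rhs[0]  += u[i, 0]
            rhs[-1] += u[i, -1]
            u[i, 1:-1] = np.linalg.solve(T, rhs)
    return u

n = 20
u = np.zeros((n, n))
u[0, :] = 100.0                              # one hot boundary edge
u = line_relaxation(u)
print(round(u[n // 2, n // 2], 2))           # interior settles between 0 and 100
```

Because each string's problem is independent given the current field, the inner loops are trivially parallelizable, which is part of the appeal of such decompositions.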

17. Physical and numerical sources of computational inefficiency in integration of chemical kinetic rate equations: Etiology, treatment and prognosis

NASA Technical Reports Server (NTRS)

1986-01-01

The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODEs) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too-frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponentially fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponentially fitted method lack the sophisticated step-size-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
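The stiffness argument can be seen on the simplest possible kinetics-like problem, y' = -k*y: an exponential (exact-propagator) step is stable at any step size, while explicit Euler diverges once k*h > 2. The numbers below are illustrative, chosen to sit just outside Euler's stability region.

```python
import math

k, h, steps = 50.0, 0.05, 40       # k*h = 2.5: outside explicit Euler's stability region
y_euler, y_exp = 1.0, 1.0
for _ in range(steps):
    y_euler += h * (-k * y_euler)  # explicit Euler: y *= (1 - k*h), magnitude grows
    y_exp   *= math.exp(-k * h)    # exponential step: exact for linear decay

print(abs(y_euler), y_exp)         # Euler blows up; exponential step decays toward 0
```

Exponentially fitted methods generalize this idea to nonlinear kinetics by building the local exponential behaviour into the integration formula; the report's point is that they still need the adaptive step-size machinery of codes like LSODE.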

18. Integrated adaptive optics optical coherence tomography and adaptive optics scanning laser ophthalmoscope system for simultaneous cellular resolution in vivo retinal imaging.

PubMed

Zawadzki, Robert J; Jones, Steven M; Pilli, Suman; Balderas-Mata, Sandra; Kim, Dae Yu; Olivier, Scot S; Werner, John S

2011-06-01

We describe an ultrahigh-resolution (UHR) retinal imaging system that combines adaptive optics Fourier-domain optical coherence tomography (AO-OCT) with an adaptive optics scanning laser ophthalmoscope (AO-SLO) to allow simultaneous data acquisition by the two modalities. The AO-SLO subsystem was integrated into the previously described AO-UHR OCT instrument with minimal changes to the latter, in order to ensure optimal performance and image quality of the AO-UHR OCT. In this design both imaging modalities share most of the optical components, including a common AO subsystem and vertical scanner. One of the benefits of combining FD-OCT with SLO is automatic co-registration between the two acquisition channels, permitting direct comparison between retinal structures imaged by both modalities (e.g., photoreceptor mosaics or microvasculature maps). Because of differences in the detection schemes of the two systems, this dual-modality instrument can provide insight into retinal morphology, and potentially function, that could not easily be accessed by a single system. In this paper we describe details of the components and parameters of the combined instrument, including incorporation of a novel membrane magnetic deformable mirror with increased stroke and actuator count used as a single wavefront corrector. We also discuss laser safety calculations for this multimodal system. Finally, retinal images acquired in vivo with this system are presented.

19. Adaptive control paradigm for photovoltaic and solid oxide fuel cell in a grid-integrated hybrid renewable energy system

PubMed Central

Khan, Laiq

2017-01-01

The hybrid power system (HPS) is an emerging power generation scheme due to the plentiful availability of renewable energy sources. Renewable energy sources are characterized as highly intermittent in nature due to meteorological conditions, while the domestic load also behaves in a quite uncertain manner. In this scenario, to maintain the balance between generation and load, the development of an intelligent and adaptive control algorithm has preoccupied power engineers and researchers. This paper proposes a Hermite wavelet embedded NeuroFuzzy indirect adaptive MPPT (maximum power point tracking) control of photovoltaic (PV) systems to extract maximum power and a Hermite wavelet incorporated NeuroFuzzy indirect adaptive control of Solid Oxide Fuel Cells (SOFC) to obtain a swift response in a grid-connected hybrid power system. A comprehensive simulation testbed for a grid-connected hybrid power system (wind turbine, PV cells, SOFC, electrolyzer, battery storage system, supercapacitor (SC), micro-turbine (MT) and domestic load) is developed in Matlab/Simulink. The robustness and superiority of the proposed indirect adaptive control paradigm are evaluated through simulation results in a grid-connected hybrid power system testbed by comparison with a conventional PI (proportional and integral) control system. The simulation results verify the effectiveness of the proposed control paradigm. PMID:28329015

20. Adaptive control paradigm for photovoltaic and solid oxide fuel cell in a grid-integrated hybrid renewable energy system.

PubMed

Mumtaz, Sidra; Khan, Laiq

2017-01-01

The hybrid power system (HPS) is an emerging power generation scheme due to the plentiful availability of renewable energy sources. Renewable energy sources are characterized as highly intermittent in nature due to meteorological conditions, while the domestic load also behaves in a quite uncertain manner. In this scenario, to maintain the balance between generation and load, the development of an intelligent and adaptive control algorithm has preoccupied power engineers and researchers. This paper proposes a Hermite wavelet embedded NeuroFuzzy indirect adaptive MPPT (maximum power point tracking) control of photovoltaic (PV) systems to extract maximum power and a Hermite wavelet incorporated NeuroFuzzy indirect adaptive control of Solid Oxide Fuel Cells (SOFC) to obtain a swift response in a grid-connected hybrid power system. A comprehensive simulation testbed for a grid-connected hybrid power system (wind turbine, PV cells, SOFC, electrolyzer, battery storage system, supercapacitor (SC), micro-turbine (MT) and domestic load) is developed in Matlab/Simulink. The robustness and superiority of the proposed indirect adaptive control paradigm are evaluated through simulation results in a grid-connected hybrid power system testbed by comparison with a conventional PI (proportional and integral) control system. The simulation results verify the effectiveness of the proposed control paradigm.
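For readers unfamiliar with the MPPT problem the controller addresses, a generic perturb-and-observe (P&O) tracker is sketched below: step the operating voltage, keep the direction if power rose, reverse it if power fell. This is deliberately the simplest baseline, not the paper's Hermite-wavelet NeuroFuzzy indirect adaptive scheme, and the PV power curve is a hypothetical stand-in.

```python
def pv_power(v):
    """Toy PV power-voltage curve with a single maximum near v = 30 V (assumed)."""
    return max(0.0, v * (8.0 - 0.003 * v**2))

def perturb_and_observe(v0=20.0, dv=0.5, iters=100):
    """Climb the power curve by perturbing voltage and observing the power change."""
    v, p_prev, step = v0, pv_power(v0), dv
    for _ in range(iters):
        v += step
        p = pv_power(v)
        if p < p_prev:          # power fell: reverse the perturbation direction
            step = -step
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(round(v_mpp, 1))          # operating point oscillates near the power maximum
```

P&O's steady-state oscillation and poor behaviour under fast irradiance changes are exactly the weaknesses that adaptive schemes such as the paper's aim to remove.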

1. How Can the Secondary School Learning Model Be Adapted to Provide for More Meaningful Curriculum Integration?

ERIC Educational Resources Information Center

Gill, Caroline; Fisher, Anthony

2014-01-01

Interest in curriculum integration (CI) has resurged recently as schools seek to bring together knowledge from separate curriculum areas to create a more holistic, integrated learning experience for students to address the demands of "twenty-first century" learning. As the educational sciences deliver new research on the role of the arts…

2. FeynDyn: A MATLAB program for fast numerical Feynman integral calculations for open quantum system dynamics on GPUs

Dattani, Nikesh S.

2013-12-01

Programming language: MATLAB R2012a. Computer: See "Operating system". Operating system: Any operating system that can run MATLAB R2007a or above. Classification: 4.4. Nature of problem: Calculating the dynamics of the reduced density operator of an open quantum system. Solution method: Numerical Feynman integral. Running time: Depends on the input parameters. See the main text for examples.

3. Toward Integrated Career Assessment: Using Story to Appraise Career Dispositions and Adaptability

ERIC Educational Resources Information Center

Hartung, Paul J.; Borges, Nicole J.

2005-01-01

This study examined the validity of using stories to appraise career dispositions and problems associated with career adaptability. Premedical students (63 women, 37 men) wrote narratives about Thematic Apperception Test cards (TAT) and responded to the Strong Interest Inventory (SII). Independent raters identified identical career adaptability…

4. Towards an Integrated Conceptual Model of International Student Adjustment and Adaptation

ERIC Educational Resources Information Center

Schartner, Alina; Young, Tony Johnstone

2016-01-01

Despite a burgeoning body of empirical research on "the international student experience", the area remains under-theorized. The literature to date lacks a guiding conceptual model that captures the adjustment and adaptation trajectories of this unique, growing, and important sojourner group. In this paper, we therefore put forward a…

5. Hydro-geophysical observations integration in numerical model: case study in Mediterranean karstic unsaturated zone (Larzac, france)

Champollion, Cédric; Fores, Benjamin; Le Moigne, Nicolas; Chéry, Jean

2016-04-01

Karstic hydro-systems are highly non-linear and heterogeneous, yet they are one of the main water resources of the Mediterranean area. Neither local measurements in boreholes nor analysis at the spring can capture the variability of water storage. For some years now, ground-based geophysical measurements (such as gravity, electrical resistivity or seismological data) have made it possible to follow water storage in heterogeneous hydrosystems at a scale intermediate between boreholes and the basin. Beyond rigorous classical monitoring, the integration of geophysical data into hydrological numerical models is needed for both process interpretation and quantification. A karstic geophysical observatory (GEK: Géodésie de l'Environnement Karstique, OSU OREME, SNO H+) has been set up in the Mediterranean area in the south of France. The observatory overlies more than 250 m of karstified dolomite, with an unsaturated zone about 150 m thick. At the observatory, water levels in boreholes, evapotranspiration and rainfall are classical hydro-meteorological observations complemented by continuous gravity, resistivity and seismological measurements. The main objective of the study is the modelling of the whole observation dataset with an explicit one-dimensional unsaturated-zone numerical model. The Hydrus software is used for explicit modelling of water storage and transfer, linking the different observations (geophysics, water level, evapotranspiration) to water saturation. Unknown hydrological parameters (permeability, porosity) are retrieved by stochastic inversion. The scales of investigation of the different observations are discussed in light of the modelling results. A sensitivity study of the measurements against the model is carried out, and the key hydro-geological processes of the site are presented.

6. Integrating a Numerical Taxonomic Method and Molecular Phylogeny for Species Delimitation of Melampsora Species (Melampsoraceae, Pucciniales) on Willows in China.

PubMed

Zhao, Peng; Wang, Qing-Hong; Tian, Cheng-Ming; Kakishima, Makoto

2015-01-01

The species in genus Melampsora are the causal agents of leaf rust diseases on willows in natural habitats and plantations. However, the classification and recognition of species diversity are challenging because morphological characteristics are scant and morphological variation in Melampsora on willows has not been thoroughly evaluated. Thus, the taxonomy of Melampsora species on willows remains confused, especially in China where 31 species were reported based on either European or Japanese taxonomic systems. To clarify the species boundaries of Melampsora species on willows in China, we tested two approaches for species delimitation inferred from morphological and molecular variations. Morphological species boundaries were determined based on numerical taxonomic analyses of morphological characteristics in the uredinial and telial stages by cluster analysis and one-way analysis of variance. Phylogenetic species boundaries were delineated based on the generalized mixed Yule-coalescent (GMYC) model analysis of the sequences of the internal transcribed spacer (ITS1 and ITS2) regions including the 5.8S and D1/D2 regions of the large nuclear subunit of the ribosomal RNA gene. Numerical taxonomic analyses of 14 morphological characteristics recognized in the uredinial-telial stages revealed 22 morphological species, whereas the GMYC results recovered 29 phylogenetic species. In total, 17 morphological species were in concordance with the phylogenetic species and 5 morphological species were in concordance with 12 phylogenetic species. Both the morphological and molecular data supported 14 morphological characteristics, including 5 newly recognized characteristics and 9 traditionally emphasized characteristics, as effective for the differentiation of Melampsora species on willows in China. Based on the concordance and discordance of the two species delimitation approaches, we concluded that integrative taxonomy by using both morphological and molecular variations was

7. Integrating a Numerical Taxonomic Method and Molecular Phylogeny for Species Delimitation of Melampsora Species (Melampsoraceae, Pucciniales) on Willows in China

PubMed Central

Zhao, Peng; Wang, Qing-Hong; Tian, Cheng-Ming; Kakishima, Makoto

2015-01-01

The species in genus Melampsora are the causal agents of leaf rust diseases on willows in natural habitats and plantations. However, the classification and recognition of species diversity are challenging because morphological characteristics are scant and morphological variation in Melampsora on willows has not been thoroughly evaluated. Thus, the taxonomy of Melampsora species on willows remains confused, especially in China where 31 species were reported based on either European or Japanese taxonomic systems. To clarify the species boundaries of Melampsora species on willows in China, we tested two approaches for species delimitation inferred from morphological and molecular variations. Morphological species boundaries were determined based on numerical taxonomic analyses of morphological characteristics in the uredinial and telial stages by cluster analysis and one-way analysis of variance. Phylogenetic species boundaries were delineated based on the generalized mixed Yule-coalescent (GMYC) model analysis of the sequences of the internal transcribed spacer (ITS1 and ITS2) regions including the 5.8S and D1/D2 regions of the large nuclear subunit of the ribosomal RNA gene. Numerical taxonomic analyses of 14 morphological characteristics recognized in the uredinial-telial stages revealed 22 morphological species, whereas the GMYC results recovered 29 phylogenetic species. In total, 17 morphological species were in concordance with the phylogenetic species and 5 morphological species were in concordance with 12 phylogenetic species. Both the morphological and molecular data supported 14 morphological characteristics, including 5 newly recognized characteristics and 9 traditionally emphasized characteristics, as effective for the differentiation of Melampsora species on willows in China. Based on the concordance and discordance of the two species delimitation approaches, we concluded that integrative taxonomy by using both morphological and molecular variations was

8. Health risk in the context of climate change and adaptation - Concept and mapping as an integrated approach

Kienberger, S.; Notenbaert, A.; Zeil, P.; Bett, B.; Hagenlocher, M.; Omolo, A.

2012-04-01

Climate change has been identified as one of the greatest challenges to global health in the current century. The impacts of climate change on human health, and their socio-economic and poverty-related consequences, are however still poorly understood. While epidemiological issues are strongly coupled with environmental and climatic parameters, the social and economic circumstances of populations might be of equal or even greater importance when trying to identify vulnerable populations and design appropriate, well-targeted adaptation measures. The inter-linkage between climate change, human health risk and socio-economic impacts remains an important - but largely outstanding - research field. We present an overview of how risk is traditionally conceptualised in the human health domain and reflect critically on integrated approaches currently used in the climate change context. The presentation also reviews existing approaches and how they can be integrated into adaptation tools. Following this review, an integrated risk concept is presented, which is currently being applied within the EC FP7 research project HEALTHY FUTURES (http://www.healthyfutures.eu/). In this approach, health risk is defined not only through the disease itself (as hazard) but also by the inherent vulnerability of the system, population or region under study. It is in fact the interaction of environment and society that leads to the development of diseases and the subsequent risk of being negatively affected by them. In this conceptual framework, vulnerability is attributed to domains of lack of resilience as well as to underlying preconditions determining susceptibilities. To complete the holistic picture, vulnerability can be associated with social, economic, environmental, institutional, cultural and physical dimensions. The proposed framework also establishes the important nexus to adaptation and shows how different measures can be related to avoid disease outbreaks, reduce

9. Adaptive Topological Configuration of an Integrated Circuit/Packet-Switched Computer Network.

DTIC Science & Technology

1984-01-01

Gitman et al. [45] state that there are basically two approaches to the integrated network design problem: (1) solve the link/capacity problem for... Frank, H., and Gitman, I. Economic analysis of integrated voice and data networks: a case study. Proc. of the IEEE 66, 11 (Nov. 1978). Gitman, I., Hsieh, W., and Occhiogrosso, B. J. Analysis and design of hybrid switching networks. IEEE Trans. on Comm. COM-29.

10. The primary cilium is a self-adaptable, integrating nexus for mechanical stimuli and cellular signaling.

PubMed

Nguyen, An M; Young, Y-N; Jacobs, Christopher R

2015-11-24

Mechanosensation is crucial for cells to sense and respond to mechanical signals within their local environment. While adaptation allows a sensor to be conditioned by stimuli within the environment and enables its operation in a wide range of stimuli intensities, the mechanisms behind adaptation remain controversial in even the most extensively studied mechanosensor, bacterial mechanosensitive channels. Primary cilia are ubiquitous sensory organelles. They have emerged as mechanosensors across diverse tissues, including kidney, liver and the embryonic node, and deflect with mechanical stimuli. Here, we show that both mechanical and chemical stimuli can alter cilium stiffness. We found that exposure to flow stiffens the cilium, which deflects less in response to subsequent exposures to flow. We also found that through a process involving acetylation, the cell can biochemically regulate cilium stiffness. Finally, we show that this altered stiffness directly affects the responsiveness of the cell to mechanical signals. These results demonstrate a potential mechanism through which the cell can regulate its mechanosensing apparatus.

11. Building Adaptive Game-Based Learning Resources: The Integration of IMS Learning Design and

ERIC Educational Resources Information Center

Burgos, Daniel; Moreno-Ger, Pablo; Sierra, Jose Luis; Fernandez-Manjon, Baltasar; Specht, Marcus; Koper, Rob

2008-01-01

IMS Learning Design (IMS-LD) is a specification to create units of learning (UoLs), which express a certain pedagogical model or strategy (e.g., adaptive learning with games). However, the authoring process of a UoL remains difficult because of the lack of high-level authoring tools for IMS-LD, even more so when the focus is on specific topics,…

12. Integrated analysis of numerous heterogeneous gene expression profiles for detecting robust disease-specific biomarkers and proposing drug targets.

PubMed

Amar, David; Hait, Tom; Izraeli, Shai; Shamir, Ron

2015-09-18

Genome-wide expression profiling has revolutionized biomedical research; vast amounts of expression data from numerous studies of many diseases are now available. Making the best use of this resource in order to better understand disease processes and treatment remains an open challenge. In particular, disease biomarkers detected in case-control studies suffer from low reliability and are only weakly reproducible. Here, we present a systematic integrative analysis methodology to overcome these shortcomings. We assembled and manually curated more than 14,000 expression profiles spanning 48 diseases and 18 expression platforms. We show that when studying a particular disease, judicious utilization of profiles from other diseases and information on disease hierarchy improves classification quality, avoids overoptimistic evaluation of that quality, and enhances disease-specific biomarker discovery. This approach yielded specific biomarkers for 24 of the analyzed diseases. We demonstrate how to combine these biomarkers with large-scale interaction, mutation and drug target data, forming a highly valuable disease summary that suggests novel directions in disease understanding and drug repurposing. Our analysis also estimates the number of samples required to reach a desired level of biomarker stability. This methodology can greatly improve the exploitation of the mountain of expression profiles for better disease analysis.

13. Numerical Estimation of the Pseudo-Jahn-Teller Effect Using Nonadiabatic Coupling Integrals in Monocyclic and Bicyclic Conjugated Molecules.

PubMed

Koseki, Shiro; Toyota, Azumao; Muramatsu, Takashi; Asada, Toshio; Matsunaga, Nikita

2016-12-29

The pseudo-Jahn-Teller (pJT) effect in monocyclic and bicyclic conjugated molecules was investigated by using the state-averaged multiconfiguration self-consistent field (MCSCF) method, together with the 6-31G(d,p) basis sets. According to perturbation theory, the force constant along a normal mode Q is given by the sum of the classical force constant and the vibronic contribution (VC) resulting from the interaction of the ground state with excited states; the VC is itself a sum of individual contributions arising from vibronic interactions between the ground state and each excited state. In the present work, each VC was calculated on the basis of nonadiabatic coupling (NAC) integrals. Furthermore, the classical force constant was estimated by taking advantage of the VC and the force constant obtained from vibrational analyses. For pentalene and heptalene, the present method seems to overestimate the VC in absolute value because of the small energy gap between the ground state and the lowest excited state. However, we are confident that the VC and the classical force constant for the other molecules are reasonable in magnitude in comparison with available literature information. Thus, the present method is shown to be applicable and useful for numerical estimation of the pJT effect.
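As a sketch of the quantities involved, the standard second-order vibronic-coupling expressions take the following form (sign and normalization conventions vary between authors; this is the textbook form, not copied from the paper):

```latex
% Force constant along normal mode Q: classical part plus vibronic contributions
K = K_0 + \sum_{n>0} K_n^{\mathrm{VC}},
\qquad
K_n^{\mathrm{VC}}
  = -\,\frac{2\,\bigl|\langle \Psi_0 \,|\, \partial \hat{H}/\partial Q \,|\, \Psi_n \rangle\bigr|^2}{E_n - E_0}.
% Each term can be evaluated from nonadiabatic coupling (NAC) integrals via
\langle \Psi_0 \,|\, \partial/\partial Q \,|\, \Psi_n \rangle
  = \frac{\langle \Psi_0 \,|\, \partial \hat{H}/\partial Q \,|\, \Psi_n \rangle}{E_n - E_0}.
```

The second relation is why NAC integrals suffice: each vibronic term reduces to a NAC integral times the corresponding energy gap, which also makes clear why small gaps (as in pentalene and heptalene) inflate the estimated VC.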

14. Integrated approach for studying adaptation mechanisms in the human somatosensory cortical network.

PubMed

Venkatesan, Lalit; Barlow, Steven M; Popescu, Mihai; Popescu, Anda

2014-11-01

Magnetoencephalography and independent component analysis (ICA) were utilized to study and characterize neural adaptation in the somatosensory cortical network. Repetitive punctate tactile stimuli were applied unilaterally to the dominant hand and face using a custom-built pneumatic stimulator called the TAC-Cell. ICA-based source estimation from the evoked neuromagnetic responses indicated cortical activity in the contralateral primary somatosensory cortex (SI) for face stimulation, while hand stimulation resulted in robust contralateral SI and posterior parietal cortex (PPC) activation. Activity was also observed in the secondary somatosensory cortical area (SII) with reduced amplitude and higher variability across subjects. There was a significant difference in adaptation rate between SI and higher-order somatosensory cortices for hand stimulation. Adaptation was significantly dependent on stimulus frequency and pulse index within the stimulus train for both hand and face stimulation. The peak latency of the activity was significantly dependent on stimulation site (hand vs. face) and cortical area (SI vs. PPC). The difference in the peak latency of activity in SI and PPC is presumed to reflect a hierarchical serial-processing mechanism in the somatosensory cortex.

15. Adaptive integration in the visual cortex by depressing recurrent cortical circuits.

PubMed

van Rossum, Mark C W; van der Meer, Matthijs A A; Xiao, Dengke; Oram, Mike W

2008-07-01

Neurons in the visual cortex receive a large amount of input from recurrent connections, yet the functional role of these connections remains unclear. Here we explore networks with strong recurrence in a computational model and show that short-term depression of the synapses in the recurrent loops implements an adaptive filter. This allows the visual system to respond reliably to deteriorated stimuli yet quickly to high-quality stimuli. For low-contrast stimuli, the model predicts long response latencies, whereas latencies are short for high-contrast stimuli. This is consistent with physiological data showing that in higher visual areas, latencies can increase by more than 100 ms at low contrast compared to high contrast. Moreover, when presented with briefly flashed stimuli, the model predicts stereotypical responses that outlast the stimulus, again consistent with physiological findings. The adaptive properties of the model suggest that the abundant recurrent connections found in visual cortex serve to adapt the network's time constant in accordance with the stimulus and to normalize neuronal signals so that processing is as fast as possible while maintaining reliability.
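The short-term depression mechanism can be sketched with a Tsodyks-Markram-style resource model: each presynaptic spike consumes a fraction U of the available synaptic resource, which recovers toward 1 with time constant tau_rec, so sustained high rates depress the synapse. The parameter values are illustrative, not fitted to the paper's network model.

```python
import math

def depressed_efficacy(rate_hz, U=0.5, tau_rec=0.5):
    """Steady-state per-spike synaptic efficacy at a given firing rate.

    Each spike multiplies the resource x by (1 - U); between spikes x relaxes
    toward 1 as x(t) = 1 - (1 - x0) * exp(-t / tau_rec).  Solving the fixed
    point over one interspike interval gives the steady-state resource x_ss.
    """
    dt_isi = 1.0 / rate_hz
    decay = math.exp(-dt_isi / tau_rec)
    x_ss = (1.0 - decay) / (1.0 - (1.0 - U) * decay)
    return U * x_ss                  # transmitted fraction per spike

# Strong input depresses the recurrent loop, lowering its effective gain
# (fast, normalized responses); weak input leaves the loop strong (reliable
# integration) -- the adaptive-filter behaviour described in the abstract.
print(depressed_efficacy(5.0), depressed_efficacy(100.0))
```
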

16. Integrating Antimicrobial Therapy with Host Immunity to Fight Drug-Resistant Infections: Classical vs. Adaptive Treatment.

PubMed

Gjini, Erida; Brito, Patricia H

2016-04-01

Antimicrobial resistance of infectious agents is a growing problem worldwide. To prevent the continuing selection and spread of drug resistance, rational design of antibiotic treatment is needed, and the question of aggressive vs. moderate therapies is currently heatedly debated. Host immunity is an important, but often-overlooked factor in the clearance of drug-resistant infections. In this work, we compare aggressive and moderate antibiotic treatment, accounting for host immunity effects. We use mathematical modelling of within-host infection dynamics to study the interplay between pathogen-dependent host immune responses and antibiotic treatment. We compare classical (fixed dose and duration) and adaptive (coupled to pathogen load) treatment regimes, systematically exploring infection outcomes such as time to clearance, immunopathology, host immunization, and selection of resistant bacteria. Our analysis and simulations uncover effective treatment strategies that promote synergy between the host immune system and the antimicrobial drug in clearing infection. In both classical and adaptive treatment, we quantify how treatment timing and the strength of the immune response determine the success of moderate therapies. We identify the key parameters and dimensions in which an adaptive regime differs from classical treatment, bringing new insight into the ongoing debate of resistance management. Emphasizing the sensitivity of treatment outcomes to the balance between external antibiotic intervention and endogenous natural defenses, our study calls for more empirical attention to host immunity processes.
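The contrast between a fixed-course and a load-coupled regime can be illustrated with a toy within-host model: logistic pathogen growth plus a drug kill term, integrated by Euler steps. All parameters are hypothetical and there is no explicit immune compartment, unlike the paper's model:

```python
r, K, kill = 1.0, 1e9, 3.0     # per-day growth rate, carrying capacity, drug kill rate
dt, T = 0.01, 15.0             # Euler step (days) and simulated horizon

def simulate(adaptive):
    B, t = 1e5, 0.0            # initial pathogen load
    B_min, drug_days = B, 0.0
    while t < T:
        if adaptive:
            on = t >= 1.0 and B > 1e4     # dose only while the load stays high
        else:
            on = 1.0 <= t < 6.0           # classical fixed 5-day course
        dB = r * B * (1.0 - B / K) - (kill * B if on else 0.0)
        B = max(B + dt * dB, 1e-6)
        B_min = min(B_min, B)
        if on:
            drug_days += dt
        t += dt
    return B, B_min, drug_days

B_cl, min_cl, days_cl = simulate(False)   # classical regime
B_ad, min_ad, days_ad = simulate(True)    # adaptive regime
```

In this stripped-down setting the classical course drives the load very low but allows regrowth after the course ends, while the load-coupled regime keeps the load pinned near its trigger threshold at the cost of a longer cumulative drug exposure; the paper's full model adds the immune response that can make moderate regimes succeed.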

17. Integrating Antimicrobial Therapy with Host Immunity to Fight Drug-Resistant Infections: Classical vs. Adaptive Treatment

PubMed Central

Gjini, Erida; Brito, Patricia H.

2016-01-01

Antimicrobial resistance of infectious agents is a growing problem worldwide. To prevent the continuing selection and spread of drug resistance, rational design of antibiotic treatment is needed, and the question of aggressive vs. moderate therapies is currently heatedly debated. Host immunity is an important, but often-overlooked factor in the clearance of drug-resistant infections. In this work, we compare aggressive and moderate antibiotic treatment, accounting for host immunity effects. We use mathematical modelling of within-host infection dynamics to study the interplay between pathogen-dependent host immune responses and antibiotic treatment. We compare classical (fixed dose and duration) and adaptive (coupled to pathogen load) treatment regimes, systematically exploring infection outcomes such as time to clearance, immunopathology, host immunization, and selection of resistant bacteria. Our analysis and simulations uncover effective treatment strategies that promote synergy between the host immune system and the antimicrobial drug in clearing infection. In both classical and adaptive treatment, we quantify how treatment timing and the strength of the immune response determine the success of moderate therapies. We identify the key parameters and dimensions in which an adaptive regime differs from classical treatment, bringing new insight into the ongoing debate of resistance management. Emphasizing the sensitivity of treatment outcomes to the balance between external antibiotic intervention and endogenous natural defenses, our study calls for more empirical attention to host immunity processes. PMID:27078624

18. The Fourier transform method and the SD-bar approach for the analytical and numerical treatment of multicenter overlap-like quantum similarity integrals

SciTech Connect

Safouhi, Hassan (E-mail: hassan.safouhi@ualberta.ca); Berlu, Lilian

2006-07-20

Molecular overlap-like quantum similarity measurements imply the evaluation of overlap integrals of two molecular electronic densities related by the Dirac delta function. When the electronic densities are expanded over atomic orbitals using the usual LCAO-MO approach (linear combination of atomic orbitals), overlap-like quantum similarity integrals can be expressed in terms of four-center overlap integrals. It is shown that by introducing the Fourier transform of the Dirac delta function into the integrals and using the Fourier transform approach combined with the so-called B functions, one can obtain analytic expressions for the integrals under consideration. These analytic expressions involve highly oscillatory semi-infinite integrals over spherical Bessel functions, which are the principal source of severe numerical and computational difficulties. In this work, we present a highly efficient algorithm for a fast and accurate numerical evaluation of these multicenter overlap-like quantum similarity integrals over Slater-type functions. This algorithm is based on the SD-bar approach due to Safouhi. Recurrence formulae are used for better control of the degree of accuracy and for better stability of the algorithm. The numerical results section shows the efficiency of our algorithm compared with the alternatives: the one-center two-range expansion method, which leads to very complicated analytic expressions, the epsilon algorithm, and the nonlinear D-bar transformation.
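The epsilon algorithm mentioned as a baseline can be demonstrated on a classic semi-infinite oscillatory integral, the assumption here being the textbook example ∫_0^∞ sin(x)/x dx = π/2 rather than the paper's four-center integrals: integrate between successive zeros of the oscillatory factor to get a slowly converging alternating sequence of partial values, then accelerate it with Wynn's epsilon table:

```python
import math

def simpson(f, a, b, m=200):              # composite Simpson rule, m even
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, m, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, m, 2))
    return s * h / 3.0

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

# Partial integrals up to successive zeros of sin(x): an alternating sequence
# slowly approaching the semi-infinite integral's value, pi/2.
partial, s = [], 0.0
for k in range(1, 13):
    s += simpson(sinc, (k - 1) * math.pi, k * math.pi)
    partial.append(s)

def wynn_epsilon(seq):
    """Wynn's epsilon algorithm; the even columns accelerate the sequence."""
    prev = [0.0] * (len(seq) + 1)         # epsilon column k-1
    cur = list(seq)                       # epsilon column k (starts at k = 0)
    best = cur[-1]
    for k in range(1, len(seq)):
        cur, prev = [prev[i + 1] + 1.0 / (cur[i + 1] - cur[i])
                     for i in range(len(cur) - 1)], cur
        if k % 2 == 0:
            best = cur[-1]
    return best

accelerated = wynn_epsilon(partial)       # far closer to pi/2 than partial[-1]
```

With only twelve partial integrals the raw sequence is still a few percent off, while the accelerated value agrees with π/2 to better than 10⁻⁶, which is the kind of gain that makes such extrapolation methods attractive for oscillatory tails.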

19. 3D Numerical Optimization Modelling of Ivancich landslides (Assisi, Italy) via integration of remote sensing and in situ observations.

Castaldo, Raffaele; De Novellis, Vincenzo; Lollino, Piernicola; Manunta, Michele; Tizzani, Pietro

2015-04-01

The new challenge that research into slope instability phenomena is going to tackle is the effective integration and joint exploitation of remote sensing measurements with in situ data and observations, to study and understand the sub-surface interactions, the triggering causes and, in general, the long-term behaviour of the investigated landslide phenomenon. In this context, a very promising approach is represented by Finite Element (FE) techniques, which allow us to consider the intrinsic complexity of mass movement phenomena and to effectively benefit from multi-source observations and data. Accordingly, we develop a three-dimensional (3D) numerical model of the Ivancich (Assisi, Central Italy) instability phenomenon. In particular, we apply an inverse FE method based on a Genetic Algorithm optimization procedure, benefitting from advanced DInSAR measurements retrieved through the full-resolution Small Baseline Subset (SBAS) technique and from an inclinometer array. To this purpose we consider the SAR images acquired from descending orbit by the COSMO-SkyMed (CSK) X-band radar constellation from December 2009 to February 2012. Moreover, the optimization input dataset is completed by eleven inclinometer measurement series, from 1999 to 2006, distributed along the unstable mass. The landslide body is formed of debris material sliding on an arenaceous marl substratum, with a thin shear band, detected using borehole and inclinometric data, at depths ranging from 20 to 60 m. Specifically, we consider the active role of this shear band in controlling the landslide evolution process. A large field monitoring dataset of the landslide process, including at-depth piezometric and geological borehole observations, was available. The integration of these datasets allows us to develop a 3D structural geological model of the considered slope. To investigate the dynamic evolution of a landslide, various physical approaches can be considered
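A genetic-algorithm inversion of a forward model can be sketched minimally. The forward model below is a hypothetical one-parameter stand-in for the FE landslide model, and the GA (elitism, blend crossover, Gaussian mutation) is a generic sketch, not the paper's optimization procedure:

```python
import random

random.seed(42)

# Hypothetical forward model: predicted displacements as a function of a single
# unknown parameter p (a stand-in for, e.g., a shear-band strength parameter).
def forward(p, xs):
    return [p * x / (1.0 + x) for x in xs]

xs = [1.0, 2.0, 5.0, 10.0]     # observation "locations"
p_true = 0.35
obs = forward(p_true, xs)       # synthetic observations (DInSAR/inclinometers)

def misfit(p):
    return sum((m - o) ** 2 for m, o in zip(forward(p, xs), obs))

# Minimal genetic algorithm: keep an elite, breed by blending pairs of elites,
# and perturb each child with a small Gaussian mutation.
pop = [random.uniform(0.0, 1.0) for _ in range(40)]
for _ in range(60):
    pop.sort(key=misfit)
    elite = pop[:8]
    children = list(elite)                       # elitism: keep the best as-is
    while len(children) < 40:
        a, b = random.sample(elite, 2)
        child = 0.5 * (a + b) + random.gauss(0.0, 0.02)
        children.append(min(max(child, 0.0), 1.0))
    pop = children

best = min(pop, key=misfit)     # should be close to p_true
```

The real procedure inverts many parameters against full-resolution SBAS-DInSAR and inclinometer series, but the loop structure, a population of candidate parameter sets ranked by data misfit, is the same.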

20. BESC public portal: an integrative analysis of a resequenced ethanol adapted Clostridium thermocellum mutant

SciTech Connect

Syed, Mustafa H; Karpinets, Tatiana V; Leuze, Michael Rex; Park, Byung; Hyatt, Philip Douglas; Brown, Steven D; Uberbacher, Edward C

2012-01-01

The BioEnergy Science Center (BESC) is undertaking large experimental campaigns to understand the biosynthesis and biodegradation of biomass and to develop biofuel solutions. BESC is generating large volumes of diverse data, including genome sequences, omics data and assay results. The purpose of the BESC Knowledgebase is to serve as a centralized repository for experimentally generated data and to provide an integrated, interactive and user-friendly analysis framework. The Portal makes available tools for visualization, integration and analysis of data either produced by BESC or obtained from external resources.

1. Integrated sensor with frame memory and programmable resolution for light adaptive imaging

NASA Technical Reports Server (NTRS)

Zhou, Zhimin (Inventor); Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)

2004-01-01

An image sensor operable to vary the output spatial resolution according to a received light level while maintaining a desired signal-to-noise ratio. Signals from neighboring pixels in a pixel patch with an adjustable size are added to increase both the image brightness and signal-to-noise ratio. One embodiment comprises a sensor array for receiving input signals, a frame memory array for temporarily storing a full frame, and an array of self-calibration column integrators for uniform column-parallel signal summation. The column integrators are capable of substantially canceling fixed pattern noise.
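The design rationale, that summing N neighboring pixels grows the signal N-fold but independent noise only by a factor of sqrt(N), can be checked with a quick simulation (all numbers hypothetical):

```python
import random

random.seed(0)

signal, noise_sd, n_frames = 10.0, 4.0, 20000   # per-pixel signal and noise

def measured_snr(patch_size):
    """Empirical SNR of the summed output of patch_size binned pixels."""
    samples = [sum(signal + random.gauss(0.0, noise_sd)
                   for _ in range(patch_size))
               for _ in range(n_frames)]
    mean = sum(samples) / n_frames
    var = sum((s - mean) ** 2 for s in samples) / (n_frames - 1)
    return mean / var ** 0.5

ratio = measured_snr(4) / measured_snr(1)   # expect ~ sqrt(4) = 2
```

This is why trading spatial resolution for patch-summed signals lets the sensor hold a target signal-to-noise ratio as light levels fall.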

2. Nonlinear Interaction between Shunting and Adaptation Controls a Switch between Integration and Coincidence Detection in Pyramidal Neurons

PubMed Central

Prescott, Steven A.; Ratté, Stéphanie; De Koninck, Yves; Sejnowski, Terrence J.

2010-01-01

The membrane conductance of a pyramidal neuron in vivo is substantially increased by background synaptic input. Increased membrane conductance, or shunting, does not simply reduce neuronal excitability. Recordings from hippocampal pyramidal neurons using dynamic clamp revealed that adaptation caused complete cessation of spiking in the high conductance state, whereas repetitive spiking could persist despite adaptation in the low conductance state. This behavior was reproduced in a phase plane model and was explained by a shunting-induced increase in voltage threshold. The increase in threshold allows greater activation of the M current (IM) at subthreshold potentials and reduces the minimum adaptation required to stabilize the system; in contrast, activation of the afterhyperpolarization current is unaffected by the increase in threshold and therefore remains unable to stop repetitive spiking. The nonlinear interaction between shunting and IM has other important consequences. First, timing of spikes elicited by brief stimuli is more precise when background spikes elicited by sustained input are prohibited, as occurs exclusively with IM-mediated adaptation in the high conductance state. Second, activation of IM at subthreshold potentials, which is increased in the high conductance state, hyperpolarizes average membrane potential away from voltage threshold, allowing only large, rapid fluctuations to reach threshold and elicit spikes. These results suggest that the shift from a low to high conductance state in a pyramidal neuron is accompanied by a switch from encoding time-averaged input with firing rate to encoding transient inputs with precisely timed spikes, in effect, switching the operational mode from integration to coincidence detection. PMID:16957065

3. Simple numerical evaluation of modified Bessel functions K_ν(x) of fractional order and the integral ∫_x^∞ K_ν(η) dη

Kostroun, Vaclav O.

1980-05-01

Theoretical expressions for the angular and spectral distributions of synchrotron radiation involve modified Bessel functions of fractional order and the integral ∫_x^∞ K_ν(η) dη. A simple series expression for these quantities, which can be evaluated numerically with hand-held programmable calculators, is presented.
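The paper's series is not reproduced here, but both quantities can be evaluated from the standard integral representation K_ν(x) = ∫_0^∞ exp(-x cosh t) cosh(νt) dt; integrating the second quantity over η analytically first leaves a single quadrature. The ν = 1/2 case has closed forms to verify against:

```python
import math

def simpson(f, a, b, m=1200):             # composite Simpson rule, m even
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, m, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, m, 2))
    return s * h / 3.0

def K(nu, x, cutoff=12.0):
    """K_nu(x) = integral_0^inf exp(-x cosh t) cosh(nu t) dt, x > 0.
    The integrand decays double-exponentially, so a finite cutoff suffices."""
    return simpson(lambda t: math.exp(-x * math.cosh(t)) * math.cosh(nu * t),
                   0.0, cutoff)

def K_tail(nu, x, cutoff=12.0):
    """integral_x^inf K_nu(eta) d eta, after integrating over eta analytically:
    integral_0^inf exp(-x cosh t) cosh(nu t) / cosh t dt."""
    return simpson(lambda t: math.exp(-x * math.cosh(t))
                   * math.cosh(nu * t) / math.cosh(t), 0.0, cutoff)

# Closed forms for nu = 1/2: K_{1/2}(x) = sqrt(pi/(2x)) e^{-x}, and
# integral_1^inf K_{1/2} = sqrt(pi/2) * sqrt(pi) * erfc(1).
exact_K = math.sqrt(math.pi / 2.0) * math.exp(-1.0)
exact_tail = math.sqrt(math.pi / 2.0) * math.sqrt(math.pi) * math.erfc(1.0)
```

This is a check in Python rather than on a programmable calculator, but the computational burden (one smooth, rapidly decaying quadrature per value) is comparable to summing a short series.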

4. Integration of Posttranscriptional Gene Networks into Metabolic Adaptation and Biofilm Maturation in Candida albicans

PubMed Central

Harrison, Paul F.; Lo, Tricia L.; Quenault, Tara; Dagley, Michael J.; Bellousoff, Matthew; Powell, David R.; Beilharz, Traude H.; Traven, Ana

2015-01-01

The yeast Candida albicans is a human commensal and opportunistic pathogen. Although both commensalism and pathogenesis depend on metabolic adaptation, the regulatory pathways that mediate metabolic processes in C. albicans are incompletely defined. For example, metabolic change is a major feature that distinguishes community growth of C. albicans in biofilms compared to suspension cultures, but how metabolic adaptation is functionally interfaced with the structural and gene regulatory changes that drive biofilm maturation remains to be fully understood. We show here that the RNA binding protein Puf3 regulates a posttranscriptional mRNA network in C. albicans that impacts on mitochondrial biogenesis, and provide the first functional data suggesting evolutionary rewiring of posttranscriptional gene regulation between the model yeast Saccharomyces cerevisiae and C. albicans. A proportion of the Puf3 mRNA network is differentially expressed in biofilms, and by using a mutant in the mRNA deadenylase CCR4 (the enzyme recruited to mRNAs by Puf3 to control transcript stability) we show that posttranscriptional regulation is important for mitochondrial regulation in biofilms. Inactivation of CCR4 or dysregulation of mitochondrial activity led to altered biofilm structure and overproduction of extracellular matrix material. The extracellular matrix is critical for antifungal resistance and immune evasion, and yet of all biofilm maturation pathways extracellular matrix biogenesis is the least understood. We propose a model in which the hypoxic biofilm environment is sensed by regulators such as Ccr4 to orchestrate metabolic adaptation, as well as the regulation of extracellular matrix production by impacting on the expression of matrix-related cell wall genes. Therefore metabolic changes in biofilms might be intimately linked to a key biofilm maturation mechanism that ultimately results in untreatable fungal disease. PMID:26474309

5. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

Haer, Toon; Aerts, Jeroen

2015-04-01

Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it is suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. However, adaptive behaviour towards flood risk reduction and the interaction between governments, insurers, and individuals has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.
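A minimal version of a household agent's decision rule, adopt a protective measure when the discounted expected avoided damage over the planning horizon exceeds its cost, can be sketched as follows (all figures illustrative; real agents in such models typically follow richer, e.g. protection-motivation, behaviour):

```python
def adopts(p_flood, damage=100_000.0, reduction=0.4, cost=8_000.0,
           horizon=20, discount=0.04):
    """Expected-utility rule: present value of avoided damage vs. measure cost.
    p_flood: annual flood probability faced by this household (hypothetical).
    reduction: fraction of flood damage the measure avoids."""
    avoided = sum(p_flood * damage * reduction / (1.0 + discount) ** t
                  for t in range(1, horizon + 1))
    return avoided > cost

# Heterogeneous agents: only those facing sufficiently high risk invest.
adoption = {p: adopts(p) for p in (0.001, 0.005, 0.02)}
```

Even this crude rule reproduces the qualitative point of the study: adoption is risk-dependent, so government incentives or insurance premiums that shift the perceived cost or probability change aggregate protective behaviour.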

6. Fostering Integrative Thinking: Adapting the Executive Education Model to the MBA Program

ERIC Educational Resources Information Center

Latham, Gary; Latham, Soosan D.; Whyte, Glen

2004-01-01

Many full-time MBA programs limit their effectiveness by clinging to functionalism. At best, they have made incremental changes to meet the market demand for MBA graduates. These changes, in most cases, have failed to integrate the various functional facets of complex business challenges. For insights into how to do so, many business schools need…

7. Preservice Teachers Integrate Understandings of Diversity Into Literacy Instruction: An Adaptation of the ABC's Model.

ERIC Educational Resources Information Center

Xu, Hong

2000-01-01

Investigated preservice teachers' understandings of their own and their students' cultural backgrounds, examining how they integrated those understandings into literacy instruction. The ABC model (autobiographies, biographies of students, cross-cultural analysis, analysis of cultural differences, and classroom practices) helped stimulate students…

8. Generic Service Integration in Adaptive Learning Experiences Using IMS Learning Design

ERIC Educational Resources Information Center

de-la-Fuente-Valentin, Luis; Pardo, Abelardo; Kloos, Carlos Delgado

2011-01-01

IMS Learning Design is a specification to capture the orchestration taking place in a learning scenario. This paper presents an extension called Generic Service Integration. This paradigm allows a bidirectional communication between the course engine in charge of the orchestration and conventional Web 2.0 tools. This communication allows the…

9. A robust data fusion scheme for integrated navigation systems employing fault detection methodology augmented with fuzzy adaptive filtering

2013-10-01

Integrated navigation systems for various applications generally employ the centralized Kalman filter (CKF), wherein all measured sensor data are communicated to a single central Kalman filter. The advantage of the CKF is minimal loss of information and high precision under benign conditions, but it may suffer from computational overloading and poor fault tolerance. The alternative is the federated Kalman filter (FKF), wherein local estimates can deliver an optimal or suboptimal state estimate according to a chosen information fusion criterion. The FKF has enhanced throughput and multiple-level fault detection capability. The standard CKF and FKF require that the system noise and the measurement noise be zero-mean and Gaussian. Moreover, it is assumed that the covariances of the system and measurement noises remain constant. But if the theoretical and actual statistical features employed in the Kalman filter are not compatible, the Kalman filter does not render satisfactory solutions, and divergence problems can also occur. To resolve such problems, in this paper an adaptive Kalman filter scheme strengthened with a fuzzy inference system (FIS) is employed to adapt the statistical features of contributing sensors online, in light of real system dynamics and varying measurement noises. Excessive faults are detected and isolated by employing the chi-square test method. As a case study, the presented scheme has been implemented on a Strapdown Inertial Navigation System (SINS) integrated with the Celestial Navigation System (CNS), GPS and Doppler radar using the FKF. Collectively, the overall system can be termed a SINS/CNS/GPS/Doppler integrated navigation system. The simulation results have validated the effectiveness of the presented scheme, with significantly enhanced precision, reliability and fault tolerance. Effectiveness of the scheme has been tested against simulated abnormal errors/noises during different time segments of flight. It is believed that the presented scheme can be
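The core idea of adapting a filter's assumed noise statistics online can be sketched for a scalar case. This is a minimal innovation-based adaptation (a simple exponentially weighted estimate exploiting E[ν²] = p + r), standing in for the paper's fuzzy inference system; all parameters are illustrative:

```python
import random

random.seed(7)

# Scalar random-walk state observed in noise whose true variance r_true is
# unknown to the filter; r is adapted online from the innovation statistics.
q_true, r_true = 1e-4, 4.0
x_true, x_est, p = 0.0, 0.0, 1.0
q, r = 1e-4, 1.0            # filter's assumed variances (r deliberately wrong)
alpha = 0.02                # forgetting factor of the adaptation

for _ in range(5000):
    x_true += random.gauss(0.0, q_true ** 0.5)       # true state drifts
    z = x_true + random.gauss(0.0, r_true ** 0.5)    # noisy measurement
    p += q                          # predict (state prediction equals x_est)
    nu = z - x_est                  # innovation; theoretically E[nu^2] = p + r
    r = (1.0 - alpha) * r + alpha * max(nu * nu - p, 1e-6)   # adapt r
    k = p / (p + r)                 # Kalman gain
    x_est += k * nu                 # update
    p *= 1.0 - k
```

After a few thousand steps the adapted r settles near the true measurement variance, which is exactly the mismatch-correction behaviour (here rule-free) that the FIS-based scheme provides per sensor in the federated architecture.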

10. Integrating dynamic stopping, transfer learning and language models in an adaptive zero-training ERP speller

Kindermans, Pieter-Jan; Tangermann, Michael; Müller, Klaus-Robert; Schrauwen, Benjamin

2014-06-01

Objective. Most BCIs have to undergo a calibration session in which data are recorded to train decoders with machine learning. Only recently have zero-training methods become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) a language model and (d) dynamic stopping. Approach. A simulation study compares the proposed probabilistic zero-training framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influence of the involved components (a)-(d) is investigated. Main results. Without any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance, competitive with a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation. Significance. A high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP applications of BCI.
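Two of the components, the language-model prior (c) and dynamic stopping (d), can be sketched as Bayesian evidence accumulation over a toy 6-letter alphabet. The prior values and the Gaussian classifier-score model (mean 1 for attended flashes, 0 otherwise) are illustrative assumptions, not the paper's exact likelihoods:

```python
import math
import random

letters = "ABCDEF"
# Hypothetical language-model prior over the next symbol.
prior = {"A": 0.1, "B": 0.1, "C": 0.3, "D": 0.2, "E": 0.2, "F": 0.1}

def run_trial(seed, target="C", threshold=0.999, max_flashes=600):
    """Flash letters cyclically, accumulate log-likelihoods per hypothesis,
    and stop as soon as one posterior exceeds the threshold."""
    rng = random.Random(seed)
    log_post = {c: math.log(prior[c]) for c in letters}
    for flash in range(1, max_flashes + 1):
        flashed = letters[flash % len(letters)]
        # Simulated ERP classifier score: N(1,1) if attended, N(0,1) otherwise.
        y = rng.gauss(1.0 if flashed == target else 0.0, 1.0)
        for h in letters:
            log_post[h] += -0.5 * (y - (1.0 if flashed == h else 0.0)) ** 2
        m = max(log_post.values())
        post = {h: math.exp(lp - m) for h, lp in log_post.items()}
        if max(post.values()) / sum(post.values()) > threshold:
            break                       # dynamic stopping
    return max(log_post, key=log_post.get), flash
```

The stopping rule spends many flashes on noisy evidence and few on clean evidence, which is the mechanism that trades spelling speed against reliability in the full framework.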

11. Integrated physiological mechanisms of exercise performance, adaptation, and maladaptation to heat stress.

PubMed

Sawka, Michael N; Leon, Lisa R; Montain, Scott J; Sonna, Larry A

2011-10-01

12. Adaptive Classification of Landscape Process and Function: An Integration of Geoinformatics and Self-Organizing Maps

SciTech Connect

Coleman, Andre M.

2009-07-17

The advanced geospatial information extraction and analysis capabilities of Geographic Information Systems (GISs) and Artificial Neural Networks (ANNs), particularly Self-Organizing Maps (SOMs), provide a topology-preserving means for reducing and understanding complex data relationships in the landscape. The Adaptive Landscape Classification Procedure (ALCP) is presented as an adaptive and evolutionary capability in which varying types of data can be assimilated to address different management needs such as hydrologic response, erosion potential, habitat structure, instrumentation placement, and various forecast or what-if scenarios. This paper shows how the evaluation and analysis of spatial and/or temporal patterns in the landscape can provide insight into complex ecological, hydrological, climatic, and other natural and anthropogenic-influenced processes. Establishing relationships among high-dimensional datasets through neurocomputing-based pattern recognition methods can help 1) resolve large volumes of data into a structured and meaningful form; 2) provide an approach for inferring landscape processes in areas that have limited data available but exhibit similar landscape characteristics; and 3) discover the value of individual variables or groups of variables that contribute to specific processes in the landscape. Classification of hydrologic patterns in the landscape is demonstrated.
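A minimal one-dimensional SOM illustrates the topology-preserving clustering the ALCP builds on. The two-cluster data are synthetic stand-ins for landscape feature vectors; the real procedure ingests geospatial layers:

```python
import math
import random

random.seed(5)

# Two synthetic clusters standing in for distinct landscape classes.
data = [(random.gauss(0.0, 0.3), random.gauss(0.0, 0.3)) for _ in range(200)] \
     + [(random.gauss(5.0, 0.3), random.gauss(5.0, 0.3)) for _ in range(200)]

# A chain of 4 SOM nodes with random initial weight vectors.
nodes = [[random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)] for _ in range(4)]

def quantization_error():
    return sum(min((x - u) ** 2 + (y - v) ** 2 for u, v in nodes)
               for x, y in data) / len(data)

qe_before = quantization_error()
epochs = 30
for epoch in range(epochs):
    eta = 0.5 * (1.0 - epoch / epochs)               # decaying learning rate
    radius = max(1.0 - epoch / epochs, 0.1)          # decaying neighbourhood
    for x, y in random.sample(data, len(data)):
        # Best-matching unit on the node chain.
        b = min(range(len(nodes)),
                key=lambda i: (x - nodes[i][0]) ** 2 + (y - nodes[i][1]) ** 2)
        for i, w in enumerate(nodes):
            h = math.exp(-((i - b) ** 2) / (2.0 * radius ** 2))
            w[0] += eta * h * (x - w[0])
            w[1] += eta * h * (y - w[1])
qe_after = quantization_error()
```

Training pulls neighbouring nodes together in data space, so adjacent map units end up representing similar landscape conditions, the property that makes SOM outputs interpretable as a classification.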

13. Numerical analysis of wellbore integrity: results from a field study of a natural CO2 reservoir production well

Crow, W.; Gasda, S. E.; Williams, D. B.; Celia, M. A.; Carey, J. W.

2008-12-01

An important aspect of the risk associated with geological CO2 sequestration is the integrity of existing wellbores that penetrate geological layers targeted for CO2 injection. CO2 leakage may occur through multiple pathways along a wellbore, including through micro-fractures and micro-annuli within the "disturbed zone" surrounding the well casing. The effective permeability of this zone is a key parameter of wellbore integrity required for validation of numerical models. This parameter depends on a number of complex factors, including long-term attack by aggressive fluids, poor well completion and actions related to production of fluids through the wellbore. Recent studies have sought to replicate downhole conditions in the laboratory to identify the mechanisms and rates at which cement deterioration occurs. However, field tests are essential to understanding the in situ leakage properties of the millions of wells that exist in the mature sedimentary basins in North America. In this study, we present results from a field study of a 30-year-old production well from a natural CO2 reservoir. The wellbore was potentially exposed to a 96% CO2 fluid from the time of cement placement, and therefore cement degradation may be a significant factor leading to leakage pathways along this wellbore. A series of downhole tests was performed, including bond logs and extraction of sidewall cores. The cores were analyzed in the laboratory for mineralogical and hydrologic properties. A pressure test was conducted over an 11-ft section of well to determine the extent of hydraulic communication along the exterior of the well casing. Through analysis of this pressure test data, we are able to estimate the effective permeability of the disturbed zone along the exterior of the wellbore over this 11-ft section. We find that the estimated range of effective permeability from the field test is consistent with laboratory analysis and bond log data. The cement interfaces with casing and/or formation are
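As a back-of-the-envelope companion to such a pressure test, a steady-state Darcy calculation shows how an effective permeability follows from a measured flow rate and pressure drop. All numbers below are hypothetical, not the paper's data, and the real analysis accounts for the transient pressure response and annular geometry:

```python
# Darcy's law rearranged for effective permeability: k = Q * mu * L / (A * dP).
def effective_permeability(Q, mu, length, area, dP):
    """SI inputs (m^3/s, Pa.s, m, m^2, Pa) give k in m^2."""
    return Q * mu * length / (area * dP)

k = effective_permeability(Q=1e-8,       # sustained flow rate, m^3/s
                           mu=1e-3,      # brine viscosity, Pa.s
                           length=3.35,  # ~11-ft test section, m
                           area=0.1,     # flow cross-section of the zone, m^2
                           dP=1e5)       # imposed pressure drop, Pa
k_md = k / 9.869e-16                     # 1 millidarcy = 9.869e-16 m^2
```

Expressing the result in millidarcies makes it directly comparable to the laboratory core measurements the study validates against.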

14. The impact of watershed management on coastal morphology: A case study using an integrated approach and numerical modeling

Samaras, Achilleas G.; Koutitas, Christopher G.

2014-04-01

Coastal morphology evolves as the combined result of both natural- and human-induced factors that cover a wide range of spatial and temporal scales of effect. Areas in the vicinity of natural stream mouths are of special interest, as the direct connection with the upstream watershed extends the search for drivers of morphological evolution from the coastal area to the inland as well. Although the impact of changes in watersheds on the coastal sediment budget is well established, references that study the two fields concurrently and quantify their connection are scarce. In the present work, the impact of land-use changes in a watershed on coastal erosion is studied for a selected site in North Greece. Applications are based on an integrated approach to quantify the impact of watershed management on coastal morphology through numerical modeling. The watershed model SWAT and a shoreline evolution model developed by the authors (PELNCON-M) are used, evaluating with the latter the performance of the three longshore sediment transport rate formulae included in the model formulation. Results document the impact of crop abandonment on coastal erosion (agricultural land decrease from 23.3% to 5.1% is accompanied by a retreat of ~35 m in the vicinity of the stream mouth) and show the effect of sediment transport formula selection on the evolution of coastal morphology. Analysis denotes the relative importance of the parameters involved in the dynamics of watershed-coast systems, and - through the detailed description of a case study - is deemed to provide useful insights for researchers and policy-makers involved in their study.

15. Integration of Genetic and Phenotypic Data in 48 Lineages of Philippine Birds Shows Heterogeneous Divergence Processes and Numerous Cryptic Species

PubMed Central

Campbell, Kyle K.; Braile, Thomas

2016-01-01

The Philippine Islands are one of the most biologically diverse archipelagoes in the world. Current taxonomy, however, may underestimate levels of avian diversity and endemism in these islands. Although species limits can be difficult to determine among allopatric populations, quantitative methods for comparing phenotypic and genotypic data can provide useful metrics of divergence among populations and identify those that merit consideration for elevation to full species status. Using a conceptual approach that integrates genetic and phenotypic data, we compared populations among 48 species, estimating genetic divergence (p-distance) using the mtDNA marker ND2 and comparing plumage and morphometrics of museum study skins. Using conservative speciation thresholds, pairwise comparisons of genetic and phenotypic divergence suggested possible species-level divergences in more than half of the species studied (25 out of 48). In speciation process space, divergence routes were heterogeneous among taxa. Nearly all populations that surpassed high genotypic divergence thresholds were Passeriformes, and non-Passeriformes populations surpassed high phenotypic divergence thresholds more commonly than expected by chance. Overall, there was an apparent logarithmic increase in phenotypic divergence with respect to genetic divergence, suggesting the possibility that divergence among these lineages may initially be driven by divergent selection in this allopatric system. Also, genetic endemism was high among sampled islands. Higher taxonomy affected divergence in genotype and phenotype. Although broader lineage, genetic, phenotypic, and numeric sampling is needed to further explore heterogeneity among divergence processes and to accurately assess species-level diversity in these taxa, our results support the need for substantial taxonomic revisions among Philippine birds. The conservation implications are profound. PMID:27442510
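The genetic-divergence metric used here, the uncorrected p-distance, is simply the proportion of differing sites between aligned sequences. A minimal implementation on toy sequences (real comparisons would use ND2 alignments, and the speciation thresholds applied to the result are a separate, taxon-dependent judgment):

```python
def p_distance(seq1, seq2):
    """Proportion of differing sites between two aligned sequences,
    ignoring positions where either sequence has an alignment gap."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    return sum(a != b for a, b in pairs) / len(pairs)

# Two hypothetical 10-bp aligned fragments differing at 2 of 10 sites.
d = p_distance("ACGTACGTAC", "ACGTTCGTAA")   # 2/10 = 0.2
```

Pairwise p-distances like this, computed across populations, are what get compared against the conservative genetic thresholds in the study's integrated genotype-phenotype framework.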

16. Integration of Genetic and Phenotypic Data in 48 Lineages of Philippine Birds Shows Heterogeneous Divergence Processes and Numerous Cryptic Species.

PubMed

Campbell, Kyle K; Braile, Thomas; Winker, Kevin

2016-01-01

The Philippine Islands are one of the most biologically diverse archipelagoes in the world. Current taxonomy, however, may underestimate levels of avian diversity and endemism in these islands. Although species limits can be difficult to determine among allopatric populations, quantitative methods for comparing phenotypic and genotypic data can provide useful metrics of divergence among populations and identify those that merit consideration for elevation to full species status. Using a conceptual approach that integrates genetic and phenotypic data, we compared populations among 48 species, estimating genetic divergence (p-distance) using the mtDNA marker ND2 and comparing plumage and morphometrics of museum study skins. Using conservative speciation thresholds, pairwise comparisons of genetic and phenotypic divergence suggested possible species-level divergences in more than half of the species studied (25 out of 48). In speciation process space, divergence routes were heterogeneous among taxa. Nearly all populations that surpassed high genotypic divergence thresholds were Passeriformes, and non-Passeriformes populations surpassed high phenotypic divergence thresholds more commonly than expected by chance. Overall, there was an apparent logarithmic increase in phenotypic divergence with respect to genetic divergence, suggesting the possibility that divergence among these lineages may initially be driven by divergent selection in this allopatric system. Also, genetic endemism was high among sampled islands. Higher taxonomy affected divergence in genotype and phenotype. Although broader lineage, genetic, phenotypic, and numeric sampling is needed to further explore heterogeneity among divergence processes and to accurately assess species-level diversity in these taxa, our results support the need for substantial taxonomic revisions among Philippine birds. The conservation implications are profound.

17. A simplified baseband prefilter model with adaptive Kalman Filter for ultra-tight COMPASS/INS integration.

PubMed

Luo, Yong; Wu, Wenqi; Babu, Ravindra; Tang, Kanghua; Luo, Bing

2012-01-01

18. Path integral molecular dynamics within the grand canonical-like adaptive resolution technique: Simulation of liquid water

SciTech Connect

Agarwal, Animesh; Delle Site, Luigi

2015-09-07

Quantum effects due to the spatial delocalization of light atoms are treated in molecular simulation via the path integral technique. Among several methods, Path Integral (PI) Molecular Dynamics (MD) is nowadays a powerful tool to investigate properties induced by spatial delocalization of atoms; however, computationally this technique is very demanding. The above mentioned limitation implies the restriction of PIMD applications to relatively small systems and short time scales. One of the possible solutions to overcome size and time limitation is to introduce PIMD algorithms into the Adaptive Resolution Simulation Scheme (AdResS). AdResS requires a relatively small region treated at path integral level and embeds it into a large molecular reservoir consisting of generic spherical coarse grained molecules. It was previously shown that the realization of the idea above, at a simple level, produced reasonable results for toy systems or simple/test systems like liquid parahydrogen. Encouraged by previous results, in this paper, we show the simulation of liquid water at room conditions where AdResS, in its latest and more accurate Grand-Canonical-like version (GC-AdResS), is merged with two of the most relevant PIMD techniques available in the literature. The comparison of our results with those reported in the literature and/or with those obtained from full PIMD simulations shows a highly satisfactory agreement.

20. High-Resolution Numerical Simulation and Analysis of Mach Reflection Structures in Detonation Waves in Low-Pressure H 2 –O 2 –Ar Mixtures: A Summary of Results Obtained with the Adaptive Mesh Refinement Framework AMROC

DOE PAGES

Deiterding, Ralf

2011-01-01

Numerical simulation can be key to the understanding of the multidimensional nature of transient detonation waves. However, the accurate approximation of realistic detonations is demanding as a wide range of scales needs to be resolved. This paper describes a successful solution strategy that utilizes logically rectangular dynamically adaptive meshes. The hydrodynamic transport scheme and the treatment of the nonequilibrium reaction terms are sketched. A ghost fluid approach is integrated into the method to allow for embedded geometrically complex boundaries. Large-scale parallel simulations of unstable detonation structures of Chapman-Jouguet detonations in low-pressure hydrogen-oxygen-argon mixtures demonstrate the efficiency of the described techniques in practice. In particular, computations of regular cellular structures in two and three space dimensions and their development under transient conditions, that is, under diffraction and for propagation through bends, are presented. Some of the observed patterns are classified by shock polar analysis, and a diagram of the transition boundaries between possible Mach reflection structures is constructed.

1. Integrated Range-Doppler Map and Extended Target Classification with Adaptive Waveform for Cognitive Radar

DTIC Science & Technology

2014-12-01

wideband waveform. SUBJECT TERMS: waveform design, eigen waveform, ambiguity function, target identification, target detection, range-Doppler map. ...are also interested in identification of extended targets. And finally, the third objective (which utilizes the results of the first two) is to design an integrated scheme for the combined problem of range-Doppler location/detection with extended target type identification with the use of a

2. Integrating land cover modeling and adaptive management to conserve endangered species and reduce catastrophic fire risk

USGS Publications Warehouse

Breininger, David; Duncan, Brean; Eaton, Mitchell J.; Johnson, Fred; Nichols, James

2014-01-01

Land cover modeling is used to inform land management, but most often via a two-step process, where science informs how management alternatives can influence resources, and then, decision makers can use this information to make decisions. A more efficient process is to directly integrate science and decision-making, where science allows us to learn in order to better accomplish management objectives and is developed to address specific decisions. Co-development of management and science is especially productive when decisions are complicated by multiple objectives and impeded by uncertainty. Multiple objectives can be met by the specification of tradeoffs, and relevant uncertainty can be addressed through targeted science (i.e., models and monitoring). We describe how to integrate habitat and fuel monitoring with decision-making focused on the dual objectives of managing for endangered species and minimizing catastrophic fire risk. Under certain conditions, both objectives might be achieved by a similar management policy; other conditions require tradeoffs between objectives. Knowledge about system responses to actions can be informed by developing hypotheses based on ideas about fire behavior and then applying competing management actions to different land units in the same system state. Monitoring and management integration is important to optimize state-specific management decisions and to increase knowledge about system responses. We believe this approach has broad utility and identifies a clear role for land cover modeling programs intended to inform decision-making.

3. Integrating multi-view transmission system into MPEG-21 stereoscopic and multi-view DIA (digital item adaptation)

Lee, Seungwon; Park, Ilkwon; Kim, Manbae; Byun, Hyeran

2006-10-01

As digital broadcasting technologies have rapidly progressed, users' expectations for realistic and interactive broadcasting services have also increased. As one such service, 3D multi-view broadcasting has received much attention recently. In general, all the view sequences acquired at the server are transmitted to the client. Then, the user can select a part of the views or all the views according to display capabilities. However, this kind of system requires high processing power at the server as well as the client, thus posing a difficulty in practical applications. To overcome this problem, a relatively simple method is to transmit only the two view-sequences requested by the client in order to deliver a stereoscopic video. In this system, effective communication between the server and the client is one of the important aspects. In this paper, we propose an efficient multi-view system that transmits two view-sequences and their depth maps according to the user's request. The view selection process is integrated into MPEG-21 DIA (Digital Item Adaptation) so that our system is compatible with the MPEG-21 multimedia framework. DIA is generally composed of resource adaptation and descriptor adaptation. One of the merits is that SVA (stereoscopic video adaptation) descriptors defined in the DIA standard are used to deliver users' preferences and device capabilities. Furthermore, multi-view descriptions related to the multi-view camera and system are newly introduced. The syntax of the descriptions and their elements is represented in XML (eXtensible Markup Language) schema. If the client requests an adapted descriptor (e.g., view numbers) from the server, then the server sends its associated view sequences. Finally, we present a method which can reduce the user's visual discomfort that might occur while viewing stereoscopic video. This phenomenon happens when the view changes as well as when a stereoscopic image produces excessive disparity caused by a large baseline between two cameras. To

NASA Technical Reports Server (NTRS)

Reschke, Millard R.; Bloomberg, Jacob J.; Harm, Deborah L.; Huebner, William P.; Krnavek, Jody M.; Paloski, William H.; Berthoz, Alan

1999-01-01

Research on perception and control of self-orientation and self-motion addresses interactions between action and perception. Self-orientation and self-motion, and the perception of that orientation and motion, are required for and modified by goal-directed action. Detailed Supplementary Objective (DSO) 604 Operational Investigation-3 (OI-3) was designed to investigate the integrated coordination of head and eye movements within a structured environment where perception could modify responses and where responses could be compensatory for perception. A full understanding of this coordination required definition of spatial orientation models for the microgravity environment encountered during spaceflight.

5. Quantifying direct DQPSK receiver with integrated photodiode array by assessing an adapted common-mode rejection ratio

Wang, J.; Lauermann, M.; Zawadzki, C.; Brinker, W.; Zhang, Z.; de Felipe, D.; Keil, N.; Grote, N.; Schell, M.

2011-12-01

In this work, a direct DQPSK receiver was fabricated, which comprises a polymer-waveguide-based delay-line interferometer (DLI), a polymer-based optical hybrid, and two monolithic pairs of >25 GHz bandwidth photodiodes that are vertically coupled to the polymer planar lightwave circuit (PLC) via integrated 45° mirrors. The common-mode rejection ratio (CMRR) is used to characterize the performance of coherent receivers by indicating the electrical power balance between the balanced detectors. However, the standard CMRR can only be measured when the PDs can be illuminated separately. Also, the standard CMRR does not take into account the errors in the relative phases of the receiver outputs. We introduce an adapted CMRR to characterize the direct receiver, which takes into account the unequal responsivities of the PDs, the uneven split of the input power by the DLI and hybrid, the phase error, and the extinction ratio of the DLI and hybrid.
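
For illustration, the standard CMRR mentioned above can be sketched as the differential-to-common-mode photocurrent ratio in dB. The photocurrent values below are hypothetical, and the formula is the conventional definition for a balanced pair, not the adapted CMRR introduced in the paper:

```python
import math

# Illustrative sketch of a standard CMRR figure for a balanced photodiode
# pair: the ratio of differential to common-mode photocurrent, in dB.
# The photocurrents are hypothetical, not measured values from the paper.

def cmrr_db(i1, i2):
    """CMRR (dB) from the two single-ended photocurrents under equal illumination."""
    return 20.0 * math.log10(abs(i1 - i2) / (i1 + i2))

# A small responsivity mismatch between the two photodiodes:
print(round(cmrr_db(1.00, 0.98), 1))  # -39.9 dB
```

A perfectly balanced pair would give a CMRR of minus infinity; a larger mismatch pushes the figure toward 0 dB.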

6. An experimental comparison of proportional-integral, sliding mode, and robust adaptive control for piezo-actuated nanopositioning stages.

PubMed

Gu, Guo-Ying; Zhu, Li-Min

2014-05-01

This paper presents a comparative study of proportional-integral (PI) control, sliding mode control (SMC), and robust adaptive control (RAC) for applications to piezo-actuated nanopositioning stages without the inverse hysteresis construction. For a fair comparison, the control parameters of the SMC and RAC are selected on the basis of the well-tuned parameters of the PI controller under the same desired trajectories and sampling frequencies. The comparative results show that the RAC improves the tracking performance by factors of 17 and 37 over the PI controller in terms of the maximum tracking error e(m) and the root-mean-square tracking error e(rms), respectively, while the RAC improves the tracking performance by factors of 7 and 9 over the SMC in terms of e(m) and e(rms), respectively.
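
The two error metrics used in the comparison, e(m) and e(rms), can be sketched directly; the trajectory values below are made up for illustration:

```python
import math

# Sketch of the two error metrics: the maximum tracking error e_m and the
# root-mean-square tracking error e_rms. Trajectories are made-up numbers.

def tracking_errors(desired, actual):
    errors = [d - a for d, a in zip(desired, actual)]
    e_m = max(abs(e) for e in errors)
    e_rms = math.sqrt(sum(e * e for e in errors) / len(errors))
    return e_m, e_rms

desired = [0.0, 1.0, 2.0, 3.0]
actual = [0.0, 1.25, 2.0, 3.0]
e_m, e_rms = tracking_errors(desired, actual)
print(e_m, e_rms)  # 0.25 0.125
```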

7. Experimental validation of numerical study on thermoelectric-based heating in an integrated centrifugal microfluidic platform for polymerase chain reaction amplification.

PubMed

Amasia, Mary; Kang, Seok-Won; Banerjee, Debjyoti; Madou, Marc

2013-01-01

A comprehensive study involving numerical analysis and experimental validation of temperature transients within a microchamber was performed for thermocycling operation in an integrated centrifugal microfluidic platform for polymerase chain reaction (PCR) amplification. Controlled heating and cooling of biological samples are essential processes in many sample preparation and detection steps for micro-total analysis systems. Specifically, the PCR process relies on highly controllable and uniform heating of nucleic acid samples for successful and efficient amplification. In these miniaturized systems, the heating process is often performed more rapidly, making the temperature control more difficult, and adding complexity to the integrated hardware system. To gain further insight into the complex temperature profiles within the PCR microchamber, numerical simulations using computational fluid dynamics and computational heat transfer were performed. The designed integrated centrifugal microfluidics platform utilizes thermoelectrics for ice-valving and thermocycling for PCR amplification. Embedded micro-thermocouples were used to record the static and dynamic thermal responses in the experiments. The data collected was subsequently used for computational validation of the numerical predictions for the system response during thermocycling, and these simulations were found to be in agreement with the experimental data to within ∼97%. When thermal contact resistance values were incorporated in the simulations, the numerical predictions were found to be in agreement with the experimental data to within ∼99.9%. This in-depth numerical modeling and experimental validation of a complex single-sided heating platform provide insights into hardware and system design for multi-layered polymer microfluidic systems. In addition, the biological capability along with the practical feasibility of the integrated system is demonstrated by successfully performing PCR amplification of

8. Morphological integration and pleiotropy in the adaptive body shape of the snail-feeding carabid beetle Damaster blaptoides.

PubMed

Konuma, Junji; Yamamoto, Satoshi; Sota, Teiji

2014-12-01

9. Adaptive Actor-Critic Design-Based Integral Sliding-Mode Control for Partially Unknown Nonlinear Systems With Input Disturbances.

PubMed

Fan, Quan-Yong; Yang, Guang-Hong

2016-01-01

This paper is concerned with the problem of integral sliding-mode control for a class of nonlinear systems with input disturbances and unknown nonlinear terms through the adaptive actor-critic (AC) control method. The main objective is to design a sliding-mode control methodology based on the adaptive dynamic programming (ADP) method, so that the closed-loop system with time-varying disturbances is stable and the nearly optimal performance of the sliding-mode dynamics can be guaranteed. In the first step, a neural network (NN)-based observer and a disturbance observer are designed to approximate the unknown nonlinear terms and estimate the input disturbances, respectively. Based on the NN approximations and disturbance estimations, the discontinuous part of the sliding-mode control is constructed to eliminate the effect of the disturbances and attain the expected equivalent sliding-mode dynamics. Then, the ADP method with AC structure is presented to learn the optimal control for the sliding-mode dynamics online. Reconstructed tuning laws are developed to guarantee the stability of the sliding-mode dynamics and the convergence of the weights of critic and actor NNs. Finally, the simulation results are presented to illustrate the effectiveness of the proposed method.
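
The discontinuous switching term at the core of a sliding-mode law can be sketched in minimal form; the sliding surface s = c*e + e_dot, the gains, and the states below are generic textbook choices, not the paper's design:

```python
# Sketch of the discontinuous part of a sliding-mode control law: a
# switching term that drives the sliding variable s = c*e + e_dot to
# zero. Gains and states are illustrative, not the paper's design.

def sign(x):
    return (x > 0) - (x < 0)

def sliding_surface(e, e_dot, c=2.0):
    return c * e + e_dot

def switching_control(e, e_dot, k=5.0, c=2.0):
    """Discontinuous control term -k*sign(s); k must bound the disturbance."""
    return -k * sign(sliding_surface(e, e_dot, c))

print(switching_control(1.0, 0.5))  # s > 0, so the control pushes negative: -5.0
```

In the paper's scheme, the disturbance bound k is effectively replaced by the disturbance observer's estimate, and the equivalent control is learned by the actor-critic structure.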

10. Medical case-based retrieval: integrating query MeSH terms for query-adaptive multi-modal fusion

Seco de Herrera, Alba G.; Foncubierta-Rodríguez, Antonio; Müller, Henning

2015-03-01

Advances in medical knowledge give clinicians more objective information for a diagnosis. Therefore, there is an increasing need for bibliographic search engines that can provide services helping to facilitate faster information search. The ImageCLEFmed benchmark proposes a medical case-based retrieval task. This task aims at retrieving articles from the biomedical literature that are relevant for the differential diagnosis of query cases including a textual description and several images. In the context of this campaign, many approaches have been investigated, showing that the fusion of visual and text information can improve the precision of the retrieval. However, fusion does not always lead to better results. In this paper, a new query-adaptive fusion criterion to decide when to use multi-modal (text and visual) or only text approaches is presented. The proposed method integrates text information contained in MeSH (Medical Subject Headings) terms extracted and visual features of the images to find synonym relations between them. Given a text query, the query-adaptive fusion criterion decides when it is suitable to also use visual information for the retrieval. Results show that this approach can decide whether a text or multi-modal approach should be used with an accuracy of 77.15%.

11. Vision-based stabilization of nonholonomic mobile robots by integrating sliding-mode control and adaptive approach

Cao, Zhengcai; Yin, Longjie; Fu, Yili

2013-01-01

Vision-based pose stabilization of nonholonomic mobile robots has received extensive attention. At present, most solutions of the problem do not take the robot dynamics into account in the controller design, so it is difficult for these controllers to realize satisfactory control in practical applications. Besides, many of the approaches suffer from initial speed and torque jumps, which are not practical in the real world. Considering the kinematics and dynamics, a two-stage visual controller for solving the stabilization problem of a mobile robot is presented, applying the integration of adaptive control, sliding-mode control, and neural dynamics. In the first stage, an adaptive kinematic stabilization controller used to generate the command of velocity is developed based on Lyapunov theory. In the second stage, adopting the sliding-mode control approach, a dynamic controller with a variable speed function used to reduce the chattering is designed, which is utilized to generate the command of torque to make the actual velocity of the mobile robot asymptotically reach the desired velocity. Furthermore, to handle the speed and torque jump problems, the neural dynamics model is integrated into the above-mentioned controllers. The stability of the proposed control system is analyzed by using Lyapunov theory. Finally, the simulation of the control law is implemented in the perturbed case, and the results show that the control scheme can solve the stabilization problem effectively. The proposed control law can solve the speed and torque jump problems, overcome external disturbances, and provide a new solution for the vision-based stabilization of the mobile robot.

12. Assessment of adaptability of zebu cattle ( Bos indicus) breeds in two different climatic conditions: using cytogenetic techniques on genome integrity

Kumar, Anil; Waiz, Syma Ashraf; Sridhar Goud, T.; Tonk, R. K.; Grewal, Anita; Singh, S. V.; Yadav, B. R.; Upadhyay, R. C.

2016-06-01

The aim of this study was to evaluate genome integrity so as to assess the adaptability of three breeds of indigenous cattle reared under arid and semi-arid regions of Rajasthan (Bikaner) and Haryana (Karnal), India. The cattle were a homogeneous group (same age and sex) of indigenous breeds, viz. Sahiwal, Tharparkar and Kankrej. A total of 100 animals were selected for this study from both climatic conditions. The sister chromatid exchanges (SCEs), chromosomal gaps and chromatid breaks were observed in metaphase plates of chromosome preparations obtained from in vitro culture of peripheral blood lymphocytes. The mean numbers of breaks and gaps in Sahiwal and Tharparkar of the semi-arid zone were 8.56 ± 3.16, 6.4 ± 3.39 and 8.72 ± 2.04, 3.52 ± 6.29, respectively. Similarly, the mean numbers of breaks and gaps in Tharparkar and Kankrej cattle of the arid zone were 5.26 ± 1.76, 2.74 ± 1.76 and 5.24 ± 1.84, 2.5 ± 1.26, respectively. The frequency of SCEs in chromosomes was found to be significantly higher (P < 0.05) in Tharparkar of the semi-arid region (4.72 ± 1.55) compared to the arid region (2.83 ± 1.01). Similarly, the frequency of SCEs was found to be 4.0 ± 1.41 in the Sahiwal of the semi-arid region and 2.69 ± 1.12 in Kankrej of the arid zone. Statistical analysis revealed significant differences (P < 0.05) between the different zones, i.e. arid and semi-arid, whereas no significant difference (P > 0.05) was observed within the same zone. The analysis of the frequency of chromosomal aberrations (CAs) and SCEs revealed significant effects of environmental conditions on the genome integrity of the animals, thereby indicating an association with their adaptability.
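
The reported zone effect can be illustrated with a Welch's t statistic computed from the published means and standard deviations; the per-zone sample size of 25 used below is an assumption for illustration, not a figure stated in the abstract:

```python
import math

# Sketch of the two-sample comparison behind the zone effect: Welch's
# t statistic for SCE frequencies in two groups. The means/SDs mirror
# the reported Tharparkar values (semi-arid 4.72 +/- 1.55 vs arid
# 2.83 +/- 1.01); the group size n = 25 per zone is an assumption.

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (mean1 - mean2) / se

t = welch_t(4.72, 1.55, 25, 2.83, 1.01, 25)
print(round(t, 2))  # a large |t|, consistent with the reported P < 0.05
```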

13. Adapting needs assessment methodologies to build integrated health pathways for people in the criminal justice system.

PubMed

de Viggiani, N

2012-09-01

Criminal justice health services should be underpinned with good public health evidence about the population's health needs. Health needs assessment methodologies can provide valuable intelligence for commissioners to evaluate the quality of services and innovate according to need. However, health needs assessment can be limited if it takes a conventional epidemiological approach, focussing on individuals' healthcare needs in criminal justice settings. Techniques used to measure health and social need could be more widely applied and appropriately employed in the planning of health and social care services, especially if the intention is to be effective in reducing social exclusion and tackling health inequalities. Assessment tools are available that capture individual, social and environmental risk factors and determinants predisposing people to health and criminogenic risks. Good evidence gathering can mean that public health practitioners not only improve health, reduce inequalities and tackle social exclusion, but contribute to reducing re-offending. This paper suggests a new approach to assessment that integrates the full range of assessment methodologies available to practitioners. An integrated approach may be the way to enhance and enrich the public health function in providing evidence to improve the quality of local public services.

14. Sensori-motor integration during stance: time adaptation of control mechanisms on adding or removing vision.

PubMed

Sozzi, Stefania; Monti, Alberto; De Nunzio, Alessandro Marco; Do, Manh-Cuong; Schieppati, Marco

2011-04-01

Sudden addition or removal of visual information can be particularly critical to balance control. The promptness of adaptation of stance control mechanisms is quantified by the latency at which body oscillation and postural muscle activity vary after a shift in visual condition. In the present study, volunteers stood on a force platform with feet parallel or in tandem. Shifts in visual condition were produced by electronic spectacles. Ground reaction force (center of foot pressure, CoP) and EMG of leg postural muscles were acquired, and latency of CoP and EMG changes estimated by t-tests on the averaged traces. Time-to-reach steady-state was estimated by means of an exponential model. On allowing or occluding vision, decrements and increments in CoP position and oscillation occurred within about 2 s. These were preceded by changes in muscle activity, regardless of visual-shift direction, foot position or front or rear leg in tandem. These time intervals were longer than simple reaction-time responses. The time course of recovery to steady-state was about 3 s, shorter for oscillation than position. The capacity of modifying balance control at very short intervals both during quiet standing and under more critical balance conditions speaks in favor of a necessary coupling between vision, postural reference, and postural muscle activity, and of the swiftness of this sensory reweighing process.
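
The exponential model used to estimate time-to-reach steady state can be sketched as follows; the time constant and the 95% settling criterion are illustrative choices, not the study's fitted values:

```python
import math

# Sketch of an exponential settling model for time-to-reach steady
# state: x(t) = x_ss + (x0 - x_ss) * exp(-t / tau). The time constant
# and 95% settling criterion are illustrative, not fitted values.

def recovery(t, x0, x_ss, tau):
    return x_ss + (x0 - x_ss) * math.exp(-t / tau)

def time_to_settle(tau, fraction=0.95):
    """Time needed to cover `fraction` of the step from x0 to x_ss."""
    return -tau * math.log(1.0 - fraction)

tau = 1.0  # hypothetical time constant, in seconds
print(round(time_to_settle(tau), 2))            # 3.0 s for ~95% recovery
print(round(recovery(3.0, 10.0, 2.0, tau), 2))  # 2.4, close to the 2.0 steady state
```

With a time constant around 1 s, 95% settling takes roughly 3 s, which matches the order of the recovery time reported above.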

15. Study on the properties of the Integrated Precipitable Water (IPW) maps derived by GPS, SAR interferometry and numerical forecasting models

Mateus, Pedro; Nico, Giovanni; Tomé, Ricardo; Catalão, João.; Miranda, Pedro

2010-05-01

The knowledge of the spatial distribution of relative changes in atmospheric Integrated Precipitable Water (IPW) density is important for climate studies and numerical weather forecasting. An increase (or decrease) of the IPW density affects the phase of electromagnetic waves. For this reason, this quantity can be measured by techniques such as GPS and space-borne SAR interferometry (InSAR). The aim of this work is to study the isotropic properties of the IPW maps obtained by GPS and InSAR measurements and derived from a numerical weather forecasting model. The existence of a power law in their phase spectrum is verified. The relationship between the interferometric phase delay and the topographic height of the observed area is also investigated. The Lisbon region, Portugal, was chosen as a study area. This region is monitored by a network of GPS permanent stations covering an area of about squared kilometers. The network consists of 12 GPS stations, of which 4 belong to the Instituto Geográfico Português (IGP) and 8 to the Instituto Geográfico do Exercito (IGEOE). All stations were installed between 1997 and the beginning of 2009. The GAMIT package was used to process the GPS data and to estimate the total zenith delay with a temporal sampling of 15 minutes. A set of 25 SAR interferograms with a 35-day temporal baseline was processed using ASAR-ENVISAT data acquired over the Lisbon region during the periods from 2003 to 2005 and from 2008 to 2009. These interferograms give an estimate of the variation of the global atmospheric delay. Terrain deformations related to known geological phenomena in the Lisbon area are negligible at this time scale of 35 days. Furthermore, two interferometric SAR images acquired by ERS-1/2 over the Lisbon region on 20/07/1995 and 21/07/1995, respectively, and thus with a temporal baseline of just 1 day, were also processed. The Weather Research & Forecasting Model (WRF) was used to generate the three-dimensional fields of temperature
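
Verifying a power law in a spectrum amounts to fitting a straight line in log-log coordinates; the sketch below estimates the spectral exponent from a synthetic spectrum whose exponent of -2.5 is chosen purely for illustration:

```python
import math

# Sketch of checking a power law in a spectrum: fit a straight line to
# log(power) vs log(wavenumber); the slope is the spectral exponent.
# The synthetic spectrum below decays as k^-2.5, chosen for illustration.

def loglog_slope(k_values, p_values):
    """Least-squares slope of log(p) against log(k)."""
    xs = [math.log(k) for k in k_values]
    ys = [math.log(p) for p in p_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

ks = [1.0, 2.0, 4.0, 8.0, 16.0]
ps = [k ** -2.5 for k in ks]
print(round(loglog_slope(ks, ps), 3))  # exactly linear in log-log: -2.5
```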

16. Fully adaptive algorithms for multivariate integral equations using the non-standard form and multiwavelets with applications to the Poisson and bound-state Helmholtz kernels in three dimensions

Frediani, Luca; Fossgaard, Eirik; Flå, Tor; Ruud, Kenneth

2013-07-01

We have developed and implemented a general formalism for fast numerical solution of time-independent linear partial differential equations as well as integral equations through the application of numerically separable integral operators in d ≥ 1 dimensions using the non-standard (NS) form. The proposed formalism is universal, compact and oriented towards the practical implementation into a working code using multiwavelets. The formalism is applied to the case of Poisson and bound-state Helmholtz operators in d = 3. Our algorithms are fully adaptive in the sense that the grid supporting each function is obtained on the fly while the function is being computed. In particular, when the function g = O f is obtained by applying an integral operator O, the corresponding grid is not obtained by transferring the grid from the input function f. This aspect has significant implications that will be discussed in the numerical section. The operator kernels are represented in a separated form with finite but arbitrary precision using Gaussian functions. Such a representation combined with the NS form allows us to build a sparse, banded representation of Green's operator kernel. We have implemented a code for the application of such operators in a separated NS form to a multivariate function in a finite but, in principle, arbitrary number of dimensions. The error of the method is controlled, while the low complexity of the numerical algorithm is kept. The implemented code explicitly computes all the 2^(2d) components of the d-dimensional operator. Our algorithms are described in detail in the paper through pseudo-code examples. The final goal of our work is to be able to apply this method to build a fast and accurate Kohn-Sham solver for density functional theory.
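
The key property exploited by the separated Gaussian representation is that each term exp(-a*r^2) factorizes into a product of one-dimensional Gaussians, turning a d-dimensional operator application into d one-dimensional ones. A minimal numerical check, with illustrative parameter values:

```python
import math

# Sketch of why a Gaussian expansion makes a kernel numerically
# separable: each term exp(-a*r^2) factorizes into a product of
# one-dimensional Gaussians, so a d-dimensional convolution reduces
# to d one-dimensional ones. Parameter values are illustrative.

def gaussian_term_3d(a, x, y, z):
    r2 = x * x + y * y + z * z
    return math.exp(-a * r2)

def separated_term(a, x, y, z):
    return math.exp(-a * x * x) * math.exp(-a * y * y) * math.exp(-a * z * z)

a, x, y, z = 0.5, 0.3, -1.2, 2.0
print(abs(gaussian_term_3d(a, x, y, z) - separated_term(a, x, y, z)) < 1e-12)  # True
```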

17. On Fractional Model Reference Adaptive Control

PubMed Central

Shi, Bao; Dong, Chao

2014-01-01

This paper extends the conventional Model Reference Adaptive Control systems to fractional ones based on the theory of fractional calculus. A control law and an incommensurate fractional adaptation law are designed for the fractional plant and the fractional reference model. The stability and tracking convergence are analyzed using the frequency distributed fractional integrator model and Lyapunov theory. Moreover, numerical simulations of both linear and nonlinear systems are performed to exhibit the viability and effectiveness of the proposed methodology. PMID:24574897
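
A fractional derivative of the kind underlying such adaptation laws can be approximated with the Grünwald-Letnikov scheme. The sketch below (step size, memory length, and test function chosen for illustration, not taken from the paper) checks the half-derivative of f(t) = t against its known closed form 2*sqrt(t/pi):

```python
import math

# Sketch of a Gruenwald-Letnikov approximation of a fractional
# derivative of order alpha, the kind of operator that appears in
# fractional adaptation laws. Step size and test function are
# illustrative; this is not the paper's implementation.

def gl_fractional_derivative(f, t, alpha, h, n):
    """Approximate D^alpha f at time t using n history terms of step h."""
    total = 0.0
    coeff = 1.0  # (-1)^k * binomial(alpha, k), built recursively
    for k in range(n + 1):
        total += coeff * f(t - k * h)
        coeff *= (k - alpha) / (k + 1)
    return total / h**alpha

# Half-derivative of f(t) = t at t = 1; exact value is 2*sqrt(t/pi).
t, alpha = 1.0, 0.5
approx = gl_fractional_derivative(lambda x: x if x > 0 else 0.0, t, alpha, h=1e-3, n=1000)
exact = 2.0 * math.sqrt(t / math.pi)
print(abs(approx - exact) < 1e-2)  # True
```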

18. Adaptive integration of habits into depth-limited planning defines a habitual-goal–directed spectrum

PubMed Central

Keramati, Mehdi; Smittenaar, Peter; Dolan, Raymond J.; Dayan, Peter

2016-01-01

Behavioral and neural evidence reveal a prospective goal-directed decision process that relies on mental simulation of the environment, and a retrospective habitual process that caches returns previously garnered from available choices. Artificial systems combine the two by simulating the environment up to some depth and then exploiting habitual values as proxies for consequences that may arise in the further future. Using a three-step task, we provide evidence that human subjects use such a normative plan-until-habit strategy, implying a spectrum of approaches that interpolates between habitual and goal-directed responding. We found that increasing time pressure led to shallower goal-directed planning, suggesting that a speed-accuracy tradeoff controls the depth of planning with deeper search leading to more accurate evaluation, at the cost of slower decision-making. We conclude that subjects integrate habit-based cached values directly into goal-directed evaluations in a normative manner. PMID:27791110
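
The plan-until-habit strategy described above (expand the decision tree to a fixed depth, then substitute cached habitual values at the frontier) can be sketched on a toy decision problem; all states, rewards, and cached values below are made up for illustration:

```python
# Sketch of the plan-until-habit idea: expand the decision tree only to a
# fixed depth, then substitute cached ("habitual") values at the frontier.
# The toy tree and cached values are made up for illustration.

def plan_until_habit(state, depth, transitions, rewards, cached_value):
    """Best achievable value from `state`, planning `depth` steps ahead."""
    if depth == 0:
        return cached_value[state]  # habitual proxy at the planning frontier
    options = transitions.get(state, [])
    if not options:
        return cached_value[state]
    return max(rewards[(state, s2)] +
               plan_until_habit(s2, depth - 1, transitions, rewards, cached_value)
               for s2 in options)

transitions = {"s0": ["s1", "s2"]}
rewards = {("s0", "s1"): 1.0, ("s0", "s2"): 0.0}
cached_value = {"s0": 0.0, "s1": 0.5, "s2": 2.0}

print(plan_until_habit("s0", 1, transitions, rewards, cached_value))  # 2.0
```

Depth 0 is pure habit (the cached value of the current state); increasing the depth interpolates toward fully goal-directed evaluation, mirroring the spectrum the paper describes.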

19. Adapting to the Needs of the Public Health Workforce: An Integrated Case-Based Training Program

PubMed Central

Sibbald, Shannon L.; Speechley, Mark; Thind, Amardeep

2016-01-01

The goal of any public health education at the Master's level is to transmit knowledge and skills to meet current and future public health challenges. We suggest an innovative multi-modal approach to public health education using a case-based pedagogy combined with a competency-based curriculum and a team-based approach to foster truly experiential learning. We describe each pedagogical approach in connection to the relevance of optimal methods for training public health professionals. Western University’s Schulich Interfaculty Master of Public Health (MPH) program (ON, Canada) provides a unique interprofessional education through case-based learning and a competency-based curriculum. This Master's program has attracted applicants from around the world to learn in a supportive interprofessional environment and fosters them as they become learners and leaders in public health change. To our knowledge, we are the first condensed MPH program using integrated case-based pedagogy as our main pedagogical approach. PMID:27790608

20. Simulation of Supersonic Jet Noise with the Adaptation of Overflow CFD Code and Kirchhoff Surface Integral

NASA Technical Reports Server (NTRS)

Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)

2001-01-01

An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.
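
The last step of such a prediction, converting an RMS acoustic pressure (e.g. as obtained from the Kirchhoff surface integral) into a sound pressure level, is a one-line formula; the pressure value below is a made-up example, not a prediction from the study:

```python
import math

# Sketch of converting an RMS acoustic pressure to a sound pressure
# level in dB re 20 micropascals (the standard reference in air).
# The pressure value is a made-up example, not a jet-noise prediction.

P_REF = 20e-6  # reference pressure in Pa

def spl_db(p_rms):
    return 20.0 * math.log10(p_rms / P_REF)

print(round(spl_db(2.0), 1))  # 2 Pa RMS -> 100.0 dB
```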