Sample records for complex variable meshless

  1. Meshless Geometric Subdivision

    DTIC Science & Technology

    2004-10-01

    Michelangelo Youthful data set is shown on the right. ...for p ∈ M and with boundary condition d_M(q, q) = 0 is approximated by |∇d_M(p, ·)| = F̃(p) ...dealing with more complex geometry. We apply our meshless subdivision operator to a base point set of 10088 points generated from the Michelangelo ...acknowledge the permission to use the Michelangelo point sets granted by the Stanford Computer Graphics group. The Isis, 50% decimated and non

  2. A high-order staggered meshless method for elliptic problems

    DOE PAGES

    Trask, Nathaniel; Perego, Mauro; Bochev, Pavel Blagoveston

    2017-03-21

    Here, we present a new meshless method for scalar diffusion equations, which is motivated by their compatible discretizations on primal-dual grids. Unlike the latter, though, our approach is truly meshless because it requires only the graph of nearby neighbor connectivity of the discretization points. This graph defines a local primal-dual grid complex with a virtual dual grid, in the sense that specification of the dual metric attributes is implicit in the method's construction. Our method combines a topological gradient operator on the local primal grid with a generalized moving least squares approximation of the divergence on the local dual grid. We show that the resulting approximation of the div-grad operator maintains polynomial reproduction to arbitrary orders and yields a meshless method that attains $O(h^{m})$ convergence in both $L^2$- and $H^1$-norms, similar to mixed finite element methods. We demonstrate this convergence on curvilinear domains using manufactured solutions in two and three dimensions. Application of the new method to problems with discontinuous coefficients reveals solutions that are qualitatively similar to those of compatible mesh-based discretizations.
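    The generalized moving least squares (GMLS) ingredient named above can be illustrated with a minimal 1-D sketch: a weighted polynomial least-squares fit around a point recovers derivatives with polynomial reproduction. This is a generic illustration, not the paper's staggered primal-dual construction; the Gaussian weight and its width `eps` are assumptions.

```python
import numpy as np

def gmls_derivative(xs, fs, x0, degree=2, eps=0.5):
    """Estimate f'(x0) from scattered samples (xs, fs) by a weighted
    polynomial least-squares fit centred at x0 (a 1-D GMLS sketch)."""
    P = np.vander(xs - x0, degree + 1, increasing=True)  # columns: 1, (x-x0), (x-x0)^2, ...
    w = np.exp(-((xs - x0) / eps) ** 2)                  # Gaussian weight (assumed form)
    coeffs, *_ = np.linalg.lstsq(P * w[:, None], w * fs, rcond=None)
    return coeffs[1]  # coefficient of (x-x0) = derivative of the local fit at x0
```

    Because the basis contains quadratics, the estimate is exact for f(x) = x², mirroring the polynomial-reproduction property the abstract refers to.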

  3. Biomechanical Model for Computing Deformations for Whole-Body Image Registration: A Meshless Approach

    PubMed Central

    Li, Mao; Miller, Karol; Joldes, Grand Roman; Kikinis, Ron; Wittek, Adam

    2016-01-01

    Patient-specific biomechanical models have been advocated as a tool for predicting deformations of soft body organs/tissue for medical image registration (aligning two sets of images) when differences between the images are large. However, complex and irregular geometry of the body organs makes generation of patient-specific biomechanical models very time consuming. Meshless discretisation has been proposed to solve this challenge. However, applications so far have been limited to 2-D models and computing single organ deformations. In this study, 3-D comprehensive patient-specific non-linear biomechanical models implemented using Meshless Total Lagrangian Explicit Dynamics (MTLED) algorithms are applied to predict a 3-D deformation field for whole-body image registration. Unlike a conventional approach which requires dividing (segmenting) the image into non-overlapping constituents representing different organs/tissues, the mechanical properties are assigned using the Fuzzy C-Means (FCM) algorithm without the image segmentation. Verification indicates that the deformations predicted using the proposed meshless approach are for practical purposes the same as those obtained using the previously validated finite element models. To quantitatively evaluate the accuracy of the predicted deformations, we determined the spatial misalignment between the registered (i.e. source images warped using the predicted deformations) and target images by computing the edge-based Hausdorff distance. The Hausdorff distance-based evaluation determines that our meshless models led to successful registration of the vast majority of the image features. PMID:26791945

  4. Biomechanical model for computing deformations for whole-body image registration: A meshless approach.

    PubMed

    Li, Mao; Miller, Karol; Joldes, Grand Roman; Kikinis, Ron; Wittek, Adam

    2016-12-01

    Patient-specific biomechanical models have been advocated as a tool for predicting deformations of soft body organs/tissue for medical image registration (aligning two sets of images) when differences between the images are large. However, complex and irregular geometry of the body organs makes generation of patient-specific biomechanical models very time-consuming. Meshless discretisation has been proposed to solve this challenge. However, applications so far have been limited to 2D models and computing single organ deformations. In this study, 3D comprehensive patient-specific nonlinear biomechanical models implemented using meshless Total Lagrangian explicit dynamics algorithms are applied to predict a 3D deformation field for whole-body image registration. Unlike a conventional approach that requires dividing (segmenting) the image into non-overlapping constituents representing different organs/tissues, the mechanical properties are assigned using the fuzzy c-means algorithm without the image segmentation. Verification indicates that the deformations predicted using the proposed meshless approach are for practical purposes the same as those obtained using the previously validated finite element models. To quantitatively evaluate the accuracy of the predicted deformations, we determined the spatial misalignment between the registered (i.e. source images warped using the predicted deformations) and target images by computing the edge-based Hausdorff distance. The Hausdorff distance-based evaluation determines that our meshless models led to successful registration of the vast majority of the image features. Copyright © 2016 John Wiley & Sons, Ltd.
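    The edge-based Hausdorff distance used above to evaluate registration reduces, for two finite point sets (e.g. extracted edge points), to a min-max of pairwise distances. A brute-force sketch for illustration only; the paper's actual implementation is not reproduced here:

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between point sets A (n,d) and B (m,d)."""
    # all pairwise Euclidean distances, shape (n, m)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    h_ab = D.min(axis=1).max()  # furthest A point from its nearest B point
    h_ba = D.min(axis=0).max()  # furthest B point from its nearest A point
    return max(h_ab, h_ba)
```

    The O(nm) pairwise matrix is fine for small edge sets; larger images would call for a spatial index instead.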

  5. A meshless method for solving two-dimensional variable-order time fractional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Tayebi, A.; Shekari, Y.; Heydari, M. H.

    2017-07-01

    Several physical phenomena, such as the transformation of pollutants, energy, particles and many others, can be described by the well-known convection-diffusion equation, which combines the diffusion and advection equations. In this paper, this equation is generalized using the concept of variable-order fractional derivatives. The generalized equation is called the variable-order time fractional advection-diffusion equation (V-OTFA-DE). An accurate and robust meshless method based on the moving least squares (MLS) approximation and a finite difference scheme is proposed for its numerical solution on two-dimensional (2-D) arbitrary domains. A finite difference technique with a θ-weighted scheme is employed in the time domain, and the MLS approximation in the space domain, to obtain appropriate semi-discrete solutions. Since the newly developed method is a meshless approach, it does not require any background mesh, and the numerical solutions are constructed entirely from a set of scattered nodes. The proposed method is validated on three examples, including two benchmark problems and an applied problem of pollutant distribution in the atmosphere. In all cases, the obtained results show that the proposed method is very accurate and robust. Moreover, a remarkable positivity-preserving property of the proposed scheme is observed in solving concentration transport phenomena.
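    The θ-weighted time discretization mentioned above can be sketched for a generic semi-discrete system du/dt = Lu, where L is any matrix produced by the spatial discretization; the variable-order fractional details of the paper are omitted here. θ = 0 gives explicit Euler, θ = 1/2 Crank-Nicolson, θ = 1 implicit Euler.

```python
import numpy as np

def theta_step(u, L, dt, theta=0.5):
    """One θ-weighted step for du/dt = L u:
    (I - θ dt L) u^{n+1} = (I + (1-θ) dt L) u^n."""
    I = np.eye(len(u))
    return np.linalg.solve(I - theta * dt * L, (I + (1 - theta) * dt * L) @ u)
```

    In a meshless setting L would be assembled from the MLS differentiation weights at the scattered nodes.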

  6. Comparison between two meshless methods based on collocation technique for the numerical solution of four-species tumor growth model

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-03-01

    As noted in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations with surface effects captured through diffuse-interface models [27]. Numerical simulation provides a practical means of evaluating this model. The present paper investigates the solution of the tumor growth model with meshless techniques. Meshless methods based on the collocation technique are applied, employing multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantages of these choices stem from the natural behavior of meshless approaches: a meshless method can easily be applied to solve partial differential equations in high dimensions using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations describing a four-species tumor growth model. To discretize the time variable, two procedures are used: a semi-implicit finite difference method based on the Crank-Nicolson scheme, and an explicit Runge-Kutta time integration. The first gives a linear system of algebraic equations to be solved at each time step; the second is efficient but conditionally stable. The obtained numerical results confirm the ability of these techniques to solve the two- and three-dimensional tumor-growth equations.
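    Multiquadric (MQ) RBF collocation rests on interpolation matrices of the form A_ij = φ(|x_i − x_j|) with φ(r) = √(r² + c²). A minimal 1-D interpolation sketch follows; the shape parameter `c` is an assumed value, and this is plain interpolation rather than the tumor-model collocation system:

```python
import numpy as np

def mq_interpolate(centers, values, c=1.0):
    """Fit the multiquadric interpolant s(x) = Σ_j a_j sqrt((x - x_j)^2 + c^2)."""
    r = np.abs(centers[:, None] - centers[None, :])
    A = np.sqrt(r**2 + c**2)           # symmetric MQ interpolation matrix
    coeffs = np.linalg.solve(A, values)
    def s(x):
        phi = np.sqrt((x - centers) ** 2 + c**2)
        return phi @ coeffs
    return s
```

    By Micchelli's theorem the MQ matrix is invertible for distinct centers, so the interpolant reproduces the data exactly at the nodes.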

  7. NOTE: Solving the ECG forward problem by means of a meshless finite element method

    NASA Astrophysics Data System (ADS)

    Li, Z. S.; Zhu, S. A.; He, Bin

    2007-07-01

    The conventional numerical computational techniques, such as the finite element method (FEM) and the boundary element method (BEM), require laborious and time-consuming model meshing. The new meshless FEM uses only the boundary description and the node distribution, and no meshing of the model is required. This paper presents the fundamentals and implementation of the meshless FEM, and the method is adapted to solve the electrocardiography (ECG) forward problem. The method is evaluated on a single-layer torso model, for which the analytical solution exists, and tested on a realistic-geometry homogeneous torso model, with satisfactory results being obtained. The present results suggest that the meshless FEM may provide an alternative for ECG forward solutions.

  8. A GPU-accelerated implicit meshless method for compressible flows

    NASA Astrophysics Data System (ADS)

    Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng

    2018-05-01

    This paper develops a recently proposed GPU-based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions that destabilize the numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions that apply boundary conditions, calculate time steps, evaluate residuals, and advance and update the solution in the temporal space. A series of two- and three-dimensional test cases, including compressible flows over single- and multi-element airfoils and an M6 wing, is carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis of the performance of the developed code reveals that the developed CPU-based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method could be further improved by ten to fifteen times on the GPU.
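    The rainbow coloring strategy described above can be sketched as a generic greedy graph coloring: points joined by an edge (neighbors in the connectivity graph) never receive the same color, so all points of one color can be updated concurrently without races. This is an illustrative pure-Python sketch, not the CUDA Fortran implementation:

```python
def greedy_coloring(adjacency):
    """Greedy 'rainbow' coloring of a graph given as {node: [neighbors]}.
    Returns {node: color}; neighbors never share a color."""
    colors = {}
    # visit high-degree nodes first, a common heuristic to keep color count low
    for node in sorted(adjacency, key=lambda n: -len(adjacency[n])):
        used = {colors[nb] for nb in adjacency[node] if nb in colors}
        color = 0
        while color in used:   # smallest color not used by any colored neighbor
            color += 1
        colors[node] = color
    return colors
```

    An LU-SGS-style sweep would then loop color by color, updating every point of the current color in parallel.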

  9. Meshless method for solving fixed boundary problem of plasma equilibrium

    NASA Astrophysics Data System (ADS)

    Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi

    2015-07-01

    This study solves the Grad-Shafranov equation with a fixed plasma boundary by utilizing a meshless method for the first time. Previous studies have utilized the finite element method (FEM) to solve an equilibrium inside the fixed separatrix. In order to avoid the difficulties of FEM (such as mesh generation, coding complexity, and expensive calculation cost), this study focuses on meshless methods, especially the RBF-MFS and Kansa's method, to solve the fixed boundary problem. The results showed that the CPU time of the meshless methods was ten to one hundred times shorter than that of FEM for the same accuracy.

  10. Meshless Modeling of Deformable Shapes and their Motion

    PubMed Central

    Adams, Bart; Ovsjanikov, Maks; Wand, Michael; Seidel, Hans-Peter; Guibas, Leonidas J.

    2010-01-01

    We present a new framework for interactive shape deformation modeling and key frame interpolation based on a meshless finite element formulation. Starting from a coarse nodal sampling of an object’s volume, we formulate rigidity and volume preservation constraints that are enforced to yield realistic shape deformations at interactive frame rates. Additionally, by specifying key frame poses of the deforming shape and optimizing the nodal displacements while targeting smooth interpolated motion, our algorithm extends to a motion planning framework for deformable objects. This allows reconstructing smooth and plausible deformable shape trajectories in the presence of possibly moving obstacles. The presented results illustrate that our framework can handle complex shapes at interactive rates and hence is a valuable tool for animators to realistically and efficiently model and interpolate deforming 3D shapes. PMID:24839614

  11. Meshless Method for Simulation of Compressible Flow

    NASA Astrophysics Data System (ADS)

    Nabizadeh Shahrebabak, Ebrahim

    In the present age, rapid development in computing technology and high-speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means for analyzing engineering problems and cases where experimental analysis is not practical. Many sophisticated and accurate numerical schemes exist to perform these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these are mesh-based techniques, and mesh generation is an essential preprocessing step to discretize the computational domain. However, for complex geometries these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust yet simple numerical approach is used to simulate even complex problems more easily. The meshless, or meshfree, method is one such development that has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have been developed to make this method more popular and accessible, and they have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is lack of conservation, which can be a source of unpredictable errors in the solution process.
    This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as the shocks that frequently occur in high-speed compressible flow problems. To address it, this research deals with the implementation of a conservative meshless method and its applications in computational fluid dynamics (CFD). One of the most common collocating meshless methods, the RBF-DQ, is used to approximate the spatial derivatives. The issue with meshless methods in highly convective cases is that they cannot distinguish the influence of fluid flow from upstream or downstream, so some methodology is needed to make the scheme stable. Therefore, an upwinding scheme similar to the one used in the finite volume method is added to capture steep gradients and shocks. This scheme creates a flexible algorithm within which a wide range of numerical flux schemes, such as those commonly used in the finite volume method, can be employed. In addition, a blended RBF is used to decrease the dissipation ensuing from the use of a low shape parameter. All of these steps are formulated for the Euler equations, and a series of test problems is used to confirm convergence of the algorithm. The present scheme was first employed on several incompressible benchmarks to validate the framework, and its application is illustrated by solving a set of incompressible Navier-Stokes problems. Results for the compressible problem are compared with the exact solution for the flow over a ramp and with solutions of finite volume discretization and the discontinuous Galerkin method, both of which require a mesh. The applicability of the algorithm and its robustness for complex problems are thus demonstrated.
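    The upwinding idea borrowed from finite volume methods can be sketched with one of the simplest numerical fluxes, the Rusanov (local Lax-Friedrichs) flux. The dissertation's actual flux choice is not specified in the abstract, so this is a generic stand-in for a scalar conservation law u_t + f(u)_x = 0:

```python
def rusanov_flux(uL, uR, f, dfdu):
    """Rusanov (local Lax-Friedrichs) numerical flux between left/right states.
    The dissipation term upwinds the central average using a local wave-speed bound."""
    a = max(abs(dfdu(uL)), abs(dfdu(uR)))            # local wave speed estimate
    return 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)
```

    Any such two-point flux can be slotted into a flux-difference meshless or finite volume update; consistency requires that equal states return the physical flux, which the test below checks for Burgers' equation f(u) = u²/2.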

  12. A Meshless Method Using Radial Basis Functions for Beam Bending Problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2004-01-01

    A meshless local Petrov-Galerkin (MLPG) method that uses radial basis functions (RBFs) as trial functions in the study of Euler-Bernoulli beam problems is presented. RBFs, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method, as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions, as in the conventional MLPG method. Compactly and noncompactly supported RBFs are considered, and noncompactly supported cubic RBFs are found to be preferable. Patch tests, mixed boundary value problems, and problems with complex loading conditions are considered. Results obtained from the radial basis MLPG method are of comparable or better accuracy than those obtained using the conventional MLPG method.

  13. The analysis of composite laminated beams using a 2D interpolating meshless technique

    NASA Astrophysics Data System (ADS)

    Sadek, S. H. M.; Belinha, J.; Parente, M. P. L.; Natal Jorge, R. M.; de Sá, J. M. A. César; Ferreira, A. J. M.

    2018-02-01

    Laminated composite materials are widely implemented in several engineering constructions. Owing to their relatively light weight, these materials are suitable for aerospace, military, marine and automotive structural applications. To obtain safe and economical structures, the accuracy of the modelling analysis is highly relevant. Since meshless methods have achieved remarkable progress in computational mechanics in recent years, the present work uses one of the most flexible and stable interpolating meshless techniques available in the literature, the Radial Point Interpolation Method (RPIM). Here, a 2D approach is considered to numerically analyse composite laminated beams. Both the meshless formulation and the equilibrium equations ruling the studied physical phenomenon are presented in detail. Several benchmark beam examples are studied, and the results are compared with exact solutions available in the literature and with results obtained from commercial finite element software. The results show the efficiency and accuracy of the proposed numerical technique.

  14. Meshless Local Petrov-Galerkin Method for Bending Problems

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Raju, Ivatury S.

    2002-01-01

    Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable over the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moment and shear forces without the need for elaborate post-processing techniques.

  15. Meshless collocation methods for the numerical solution of elliptic boundary value problems and the rotational shallow water equations on the sphere

    NASA Astrophysics Data System (ADS)

    Blakely, Christopher D.

    This dissertation has three main goals: (1) to explore the anatomy of meshless collocation approximation methods that have recently gained attention in the numerical analysis community; (2) to demonstrate numerically why the meshless collocation method should become an attractive alternative to standard finite element methods, owing to the simplicity of its implementation and its high-order convergence properties; and (3) to propose a meshless collocation method for large scale computational geophysical fluid dynamics models. We provide numerical verification and validation of the meshless collocation scheme applied to the rotational shallow-water equations on the sphere, and demonstrate computationally that the proposed model can compete with existing high performance methods for approximating the shallow-water equations, such as the SEAM (spectral-element atmospheric model) developed at NCAR. A detailed analysis of the parallel implementation of the model is given, along with parallel algorithmic routines for high-performance simulation of the model. We analyze the programming and computational aspects of the model using Fortran 90 and the message passing interface (MPI) library, along with software and hardware specifications and performance tests. Details on many aspects of the implementation regarding performance, optimization, and stabilization are given. In order to verify the mathematical correctness of the algorithms presented and to validate the performance of the meshless collocation shallow-water model, we conclude the thesis with numerical experiments on standardized test cases for the shallow-water equations on the sphere.

  16. Survey of meshless and generalized finite element methods: A unified approach

    NASA Astrophysics Data System (ADS)

    Babuška, Ivo; Banerjee, Uday; Osborn, John E.

    In the past few years meshless methods for numerically solving partial differential equations have come into the focus of interest, especially in the engineering community. This class of methods was essentially stimulated by difficulties related to mesh generation. Mesh generation is delicate in many situations, for instance, when the domain has complicated geometry; when the mesh changes with time, as in crack propagation, and remeshing is required at each time step; when a Lagrangian formulation is employed, especially with nonlinear PDEs. In addition, the need for flexibility in the selection of approximating functions (e.g., the flexibility to use non-polynomial approximating functions), has played a significant role in the development of meshless methods. There are many recent papers, and two books, on meshless methods; most of them are of an engineering character, without any mathematical analysis. In this paper we address meshless methods and the closely related generalized finite element methods for solving linear elliptic equations, using variational principles. We give a unified mathematical theory with proofs, briefly address implementational aspects, present illustrative numerical examples, and provide a list of references to the current literature. The aim of the paper is to provide a survey of a part of this new field, with emphasis on mathematics. We present proofs of essential theorems because we feel these proofs are essential for the understanding of the mathematical aspects of meshless methods, which has approximation theory as a major ingredient. As always, any new field is stimulated by and related to older ideas. This will be visible in our paper.

  17. Least-squares collocation meshless approach for radiative heat transfer in absorbing and scattering media

    NASA Astrophysics Data System (ADS)

    Liu, L. H.; Tan, J. Y.

    2007-02-01

    A least-squares collocation meshless method is employed for solving radiative heat transfer in absorbing, emitting and scattering media. The method is based on the discrete ordinates equation, and a moving least-squares approximation is applied to construct the trial functions. In addition to the collocation points used to construct the trial functions, a number of auxiliary points are adopted to form the total residuals of the problem. The least-squares technique is used to obtain the solution by minimizing the sum of the residuals over all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method, and the results are compared with benchmark approximate solutions. The comparison shows that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving radiative heat transfer in absorbing, emitting and scattering media.
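    The collocation-plus-auxiliary-points, least-squares idea can be sketched on a toy two-point boundary value problem u'' = 2, u(0) = 0, u(1) = 1 with a small polynomial trial space. This is a hypothetical miniature with an overdetermined residual system, not the discrete-ordinates radiative transfer equations of the paper:

```python
import numpy as np

def ls_collocation():
    """Least-squares collocation for u'' = 2 on (0,1), u(0)=0, u(1)=1,
    trial space {1, x, x^2}: more residual points than unknowns."""
    xs = np.linspace(0.1, 0.9, 9)                    # interior collocation/auxiliary points
    rows, rhs = [], []
    for _ in xs:                                     # PDE residual: (a0 + a1 x + a2 x^2)'' = 2 a2
        rows.append([0.0, 0.0, 2.0]); rhs.append(2.0)
    rows.append([1.0, 0.0, 0.0]); rhs.append(0.0)    # boundary residual u(0) = 0
    rows.append([1.0, 1.0, 1.0]); rhs.append(1.0)    # boundary residual u(1) = 1
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return a   # coefficients of 1, x, x^2
```

    The exact solution is u(x) = x², so the least-squares minimizer recovers the coefficients (0, 0, 1) to machine precision.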

  18. High Performance Computing of Meshless Time Domain Method on Multi-GPU Cluster

    NASA Astrophysics Data System (ADS)

    Ikuno, Soichiro; Nakata, Susumu; Hirokawa, Yuta; Itoh, Taku

    2015-01-01

    High performance computing of the Meshless Time Domain Method (MTDM) on a multi-GPU cluster, using the supercomputer HA-PACS (Highly Accelerated Parallel Advanced system for Computational Sciences) at the University of Tsukuba, is investigated. Generally, the finite difference time domain (FDTD) method is adopted for numerical simulation of electromagnetic wave propagation phenomena. However, the numerical domain must be divided into rectangular meshes, and it is difficult to adapt the method to problems posed on complex domains. On the other hand, MTDM can easily be adapted to such problems because it does not require meshes. In the present study, we implement MTDM on a multi-GPU cluster to speed up the method, and numerically investigate its performance. To reduce the computation time, the communication time between the decomposed domains is hidden behind the perfectly matched layer (PML) calculation procedure. The results show that MTDM on 128 GPUs is 173 times faster than a single-CPU calculation.

  19. A well-balanced meshless tsunami propagation and inundation model

    NASA Astrophysics Data System (ADS)

    Brecht, Rüdiger; Bihlo, Alexander; MacLachlan, Scott; Behrens, Jörn

    2018-05-01

    We present a novel meshless tsunami propagation and inundation model. We discretize the nonlinear shallow-water equations using a well-balanced scheme relying on radial basis function based finite differences. For the inundation model, radial basis functions are used to extrapolate the dry region from nearby wet points. Numerical results against standard one- and two-dimensional benchmarks are presented.
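    Radial basis function based finite differences, as used above, compute derivative weights on a scattered stencil by solving a small linear system. A 1-D sketch with multiquadrics follows; the shape parameter `c` is an assumed value, and the well-balanced shallow-water machinery of the paper is not reproduced here:

```python
import numpy as np

def rbf_fd_weights(nodes, xc, c=1.0):
    """1-D RBF-FD first-derivative weights at xc with phi(r) = sqrt(r^2 + c^2):
    solve A w = b, A_ij = phi(|x_i - x_j|), b_j = d/dx phi(|x - x_j|) at xc."""
    r = nodes[:, None] - nodes[None, :]
    A = np.sqrt(r**2 + c**2)
    b = (xc - nodes) / np.sqrt((xc - nodes) ** 2 + c**2)
    return np.linalg.solve(A, b)
```

    Because A is symmetric, applying the weights to samples of any stencil-centred multiquadric reproduces its derivative at xc exactly, which the test exploits.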

  20. A Novel Haptic Interactive Approach to Simulation of Surgery Cutting Based on Mesh and Meshless Models

    PubMed Central

    Liu, Peter X.; Lai, Pinhua; Xu, Shaoping; Zou, Yanni

    2018-01-01

    To date, the majority of implemented virtual surgery simulation systems have been based on either a mesh or a meshless strategy for soft tissue modelling. To take full advantage of both mesh and meshless models, a novel coupled soft tissue cutting model is proposed. Specifically, the reconstructed virtual soft tissue consists of two essential components: a surface mesh that is convenient for surface rendering, and internal meshless point elements used to calculate the force feedback during cutting. To combine the two components seamlessly, virtual points are introduced. During the simulation of cutting, a Bezier curve is used to characterize a smooth and vivid incision on the surface mesh. At the same time, the deformation of internal soft tissue caused by the cutting operation is treated as displacements of the internal point elements. Furthermore, we discuss and prove the stability and convergence of the proposed approach theoretically. Real biomechanical tests verified the validity of the introduced model, and the simulation experiments show that the proposed approach offers high computational efficiency and good visual effect, enabling cutting of soft tissue with high stability. PMID:29850006

  1. Real-time deformation of human soft tissues: A radial basis meshless 3D model based on Marquardt's algorithm.

    PubMed

    Zhou, Jianyong; Luo, Zu; Li, Chunquan; Deng, Mi

    2018-01-01

    When the meshless method is used to establish a mathematical-mechanical model of human soft tissues, it is necessary to define the space occupied by the tissues as the problem domain and the surface of those tissues as the domain boundary. Nodes are distributed both in the problem domain and on its boundaries. Under an external force, the displacement of each node is computed by the meshless method to represent the deformation of biological soft tissues. However, computation by the meshless method consumes too much time, which hinders real-time simulation of tissue deformation in virtual surgery. In this article, Marquardt's algorithm is proposed to fit the nodal displacements at the problem domain's boundary and obtain the relationship between surface deformation and force. When different external forces are applied, the deformation of soft tissues can then be obtained quickly from this relationship. The analysis and discussion show that the improved model equations with Marquardt's algorithm can not only simulate the deformation in real time but also preserve the authenticity of the deformation model's physical properties. Copyright © 2017 Elsevier B.V. All rights reserved.
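    Marquardt's algorithm (Levenberg-Marquardt) interpolates between Gauss-Newton and gradient descent through an adaptive damping parameter. Below is a minimal generic loop demonstrated on a toy line-fitting problem; it is not the article's soft-tissue displacement model, and all names are illustrative:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop minimizing ||residual(p)||^2."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        H = J.T @ J                                   # Gauss-Newton Hessian approximation
        step = np.linalg.solve(H + lam * np.eye(len(p)), -J.T @ r)
        if np.linalg.norm(residual(p + step)) < np.linalg.norm(r):
            p, lam = p + step, lam * 0.5              # accept step, trust the model more
        else:
            lam *= 2.0                                # reject step, damp harder
    return p
```

    With small damping the step is essentially Gauss-Newton; with large damping it shrinks toward a scaled gradient step, which is what makes the method robust far from the minimum.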

  2. Mechanics of cantilever beam: Implementation and comparison of FEM and MLPG approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trobec, Roman

    2016-06-08

    Two weak-form solution approaches for partial differential equations, the well-known mesh-based finite element method and the newer meshless local Petrov-Galerkin method, are described and compared on a standard test case, the mechanics of a cantilever beam. The implementation, solution accuracy and calculation complexity are addressed for both approaches. We found that FEM is superior by most standard criteria, but MLPG has some advantages owing to the flexibility of its general formulation.

  3. Meshless Lagrangian SPH method applied to isothermal lid-driven cavity flow at low-Re numbers

    NASA Astrophysics Data System (ADS)

    Fraga Filho, C. A. D.; Chacaltana, J. T. A.; Pinto, W. J. N.

    2018-01-01

SPH is a relatively recent particle method that has been applied to the study of cavities, with few results available in the literature. The lid-driven cavity flow is a classic problem of fluid mechanics, extensively explored in the literature and of considerable complexity. The aim of this paper is to present a solution to this problem from the Lagrangian viewpoint. The continuum domain is discretized using Lagrangian particles. The physical laws of mass, momentum and energy conservation are represented by the Navier-Stokes equations. A serial numerical code, written in the Fortran programming language, was used to perform the numerical simulations. SPH was applied and compared with the literature (mesh methods and a meshless collocation method). The positions of the primary vortex centre and the non-dimensional velocity profiles passing through the geometric centre of the cavity were analysed. The Lagrangian numerical results showed good agreement with those found in the literature, specifically for Re < 100. Suggestions for improving the SPH model, in search of better results for flows at higher Reynolds numbers, are listed.
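The SPH discretization above rests on a smoothing kernel; a minimal sketch of the summation-density estimate with the standard 2D cubic spline kernel follows (the kernel choice and all names are assumptions, as the abstract does not specify them):

```python
import numpy as np

def cubic_spline_kernel_2d(r, h):
    """Monaghan cubic spline kernel in 2D; support radius 2h."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)   # 2D normalization constant
    w = np.where(q <= 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
         np.where(q <= 2.0, 0.25*(2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """Summation density: rho_i = sum_j m_j W(|x_i - x_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel_2d(r, h)).sum(axis=1)

# The kernel should integrate to one over the plane (check on a fine grid).
h, dx = 1.0, 0.02
xs = np.arange(-2.0*h, 2.0*h + dx, dx)
X, Y = np.meshgrid(xs, xs)
integral = cubic_spline_kernel_2d(np.hypot(X, Y), h).sum() * dx * dx
print(integral)   # close to 1.0

# Density at three illustrative particle positions of unit mass.
rho = sph_density(np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3]]),
                  np.ones(3), h=0.5)
```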

  4. A nearest-neighbour discretisation of the regularized stokeslet boundary integral equation

    NASA Astrophysics Data System (ADS)

    Smith, David J.

    2018-04-01

The method of regularized stokeslets is extensively used in biological fluid dynamics due to its conceptual simplicity and meshlessness. This simplicity carries a degree of cost in computational expense and accuracy because the number of degrees of freedom used to discretise the unknown surface traction is generally significantly higher than that required by boundary element methods. We describe a meshless method based on nearest-neighbour interpolation that significantly reduces the number of degrees of freedom required to discretise the unknown traction, increasing the range of problems that can be practically solved, without excessively complicating the task of the modeller. The nearest-neighbour technique is tested against the classical problem of rigid body motion of a sphere immersed in very viscous fluid, then applied to the more complex biophysical problem of calculating the rotational diffusion timescales of a macromolecular structure modelled by three closely spaced non-slender rods. A heuristic for finding the required density of force and quadrature points by numerical refinement is suggested. Matlab/GNU Octave code for the key steps of the algorithm is provided, which predominantly uses basic linear algebra operations, with a full implementation provided on GitHub. Compared with the standard Nyström discretisation, more accurate and substantially more efficient results can be obtained by de-refining the force discretisation relative to the quadrature discretisation: a cost reduction of over 10 times with improved accuracy is observed. This improvement comes at minimal additional technical complexity. Future avenues to develop the algorithm are then discussed.
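As a rough illustration of the kernel underlying the method of regularized stokeslets, the following evaluates a Cortez-type 3D regularized stokeslet tensor (the 1/(8πμ) prefactor is omitted; this is the textbook blob form, not the paper's code):

```python
import numpy as np

def regularized_stokeslet(x, y, eps):
    """3D regularized stokeslet tensor S_ij (Cortez-type blob),
    without the 1/(8*pi*mu) prefactor; eps is the regularization length."""
    d = np.asarray(x, float) - np.asarray(y, float)
    r2 = d @ d
    denom = (r2 + eps**2) ** 1.5
    return (np.eye(3) * (r2 + 2.0 * eps**2) + np.outer(d, d)) / denom

x, y = np.array([1.0, 0.5, -0.2]), np.zeros(3)
S_eps = regularized_stokeslet(x, y, eps=0.05)
S_0 = regularized_stokeslet(x, y, eps=0.0)  # classical (singular) stokeslet
print(S_eps[0, 0], S_0[0, 0])
```

Setting `eps=0` recovers the classical Oseen tensor δ_ij/r + x_i x_j/r³ away from the singularity, which is the sanity check usually run on such kernels.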

  5. GPU computing of compressible flow problems by a meshless method with space-filling curves

    NASA Astrophysics Data System (ADS)

    Ma, Z. H.; Wang, H.; Pu, S. H.

    2014-04-01

A graphic processing unit (GPU) implementation of a meshless method for solving compressible flow problems is presented in this paper. A least-squares fit is used to discretize the spatial derivatives of the Euler equations, and an upwind scheme is applied to estimate the flux terms. The compute unified device architecture (CUDA) C programming model is employed to efficiently and flexibly port the meshless solver from CPU to GPU. Considering the data locality of randomly distributed points, space-filling curves are adopted to renumber the points in order to improve memory performance. Detailed evaluations are first carried out to assess the accuracy and conservation property of the underlying numerical method. Then the GPU-accelerated flow solver is used to solve external steady flows over aerodynamic configurations. Representative results are validated through extensive comparisons with experimental, finite volume, or other available reference solutions. Performance analysis reveals that the running time of simulations is significantly reduced, with impressive (more than an order of magnitude) speedups achieved.
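The space-filling-curve renumbering idea can be sketched with a Z-order (Morton) key; the abstract does not say which curve is used, so the curve choice and all names here are illustrative assumptions:

```python
import numpy as np

def morton2d(ix, iy, bits=16):
    """Interleave the bits of integer grid coordinates (ix, iy) into a
    Z-order (Morton) key: x occupies the even bits, y the odd bits."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def reorder_points(points, bits=10):
    """Renumber scattered 2D points along a Z-order space-filling curve,
    so that points close in space end up close in memory."""
    pts = np.asarray(points, float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    scale = (2**bits - 1) / np.where(hi > lo, hi - lo, 1.0)
    grid = ((pts - lo) * scale).astype(int)      # quantize to a grid
    keys = [morton2d(gx, gy, bits) for gx, gy in grid]
    return np.argsort(keys)                      # new point ordering

print(morton2d(3, 5))   # 0b100111 = 39
rng = np.random.default_rng(0)
order = reorder_points(rng.random((100, 2)))
```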

  6. Experimental and AI-based numerical modeling of contaminant transport in porous media

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram; Sadikoglu, Fahreddin; Singh, Vijay P.

    2017-10-01

This study developed a new hybrid artificial intelligence (AI)-meshless approach for modeling contaminant transport in porous media. The key innovation of the proposed approach is that both black-box and physically-based models are combined for modeling contaminant transport. The effectiveness of the approach was evaluated using experimental and real-world data. An artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) were calibrated to predict temporal contaminant concentrations (CCs), and the effect of noisy and de-noised data on model performance was evaluated. Then, taking the predicted CCs at test points (TPs, in the experimental study) and piezometers (in the Myandoab plain) as interior conditions, the multiquadric radial basis function (MQ-RBF), a meshless approach that solves the partial differential equation (PDE) of contaminant transport in porous media, was employed to estimate the CC values at any point within the study area where there was no TP or piezometer. Optimal values of the dispersion coefficient in the advection-dispersion PDE and of the shape coefficient of the MQ-RBF were determined using the imperialist competitive algorithm. In temporal contaminant transport modeling, de-noised data enhanced the performance of the ANN and ANFIS methods in terms of the determination coefficient by up to 6% and 5%, respectively, in the experimental study and by up to 39% and 18%, respectively, in the field study. Results showed that the ANFIS-meshless model outperformed the ANN-meshless model by up to 2% and 13% in the experimental and field studies, respectively.

  7. Experimental and AI-based numerical modeling of contaminant transport in porous media.

    PubMed

    Nourani, Vahid; Mousavi, Shahram; Sadikoglu, Fahreddin; Singh, Vijay P

    2017-10-01

This study developed a new hybrid artificial intelligence (AI)-meshless approach for modeling contaminant transport in porous media. The key innovation of the proposed approach is that both black-box and physically-based models are combined for modeling contaminant transport. The effectiveness of the approach was evaluated using experimental and real-world data. An artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) were calibrated to predict temporal contaminant concentrations (CCs), and the effect of noisy and de-noised data on model performance was evaluated. Then, taking the predicted CCs at test points (TPs, in the experimental study) and piezometers (in the Myandoab plain) as interior conditions, the multiquadric radial basis function (MQ-RBF), a meshless approach that solves the partial differential equation (PDE) of contaminant transport in porous media, was employed to estimate the CC values at any point within the study area where there was no TP or piezometer. Optimal values of the dispersion coefficient in the advection-dispersion PDE and of the shape coefficient of the MQ-RBF were determined using the imperialist competitive algorithm. In temporal contaminant transport modeling, de-noised data enhanced the performance of the ANN and ANFIS methods in terms of the determination coefficient by up to 6% and 5%, respectively, in the experimental study and by up to 39% and 18%, respectively, in the field study. Results showed that the ANFIS-meshless model outperformed the ANN-meshless model by up to 2% and 13% in the experimental and field studies, respectively. Copyright © 2017. Published by Elsevier B.V.
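A minimal sketch of plain multiquadric RBF interpolation, the building block of the MQ-RBF step described in the two records above (the 1D setting, the data, and the shape parameter are illustrative assumptions, not the study's field configuration):

```python
import numpy as np

def mq_rbf_interpolate(centers, values, shape_c):
    """Fit multiquadric RBF coefficients: phi(r) = sqrt(r^2 + c^2)."""
    r = np.abs(centers[:, None] - centers[None, :])
    A = np.sqrt(r**2 + shape_c**2)          # interpolation matrix
    coeffs = np.linalg.solve(A, values)
    def evaluate(x):
        rx = np.abs(np.atleast_1d(x)[:, None] - centers[None, :])
        return np.sqrt(rx**2 + shape_c**2) @ coeffs
    return evaluate

# Hypothetical 1D "concentration" samples at scattered measurement points.
pts = np.array([0.0, 0.3, 0.5, 0.8, 1.0])
conc = np.sin(np.pi * pts)
interp = mq_rbf_interpolate(pts, conc, shape_c=0.5)
print(interp(pts))     # reproduces conc at the sample points
print(interp([0.4]))   # estimate where there is no sample point
```

The shape parameter `shape_c` plays the role the abstract attributes to the MQ-RBF shape coefficient: it trades interpolation accuracy against the conditioning of the matrix `A`.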

  8. Accurate, meshless methods for magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.; Raives, Matthias J.

    2016-01-01

Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the `meshless finite mass' (MFM) and `meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇ · B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇ · B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, `modern' SPH can handle most test problems, at the cost of larger kernels and `by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced `grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.

  9. Solving Reynolds Equation in the Head-Disk Interface of Hard Disk Drives by Using a Meshless Method

    NASA Astrophysics Data System (ADS)

    Bao-Jun, Shi; Ting-Yi, Yang; Jian, Zhang; Yun-Dong, Du

    2010-05-01

With the decrease of the flying height of the magnetic head/slider in hard disk drives (HDDs), the Reynolds equation, which describes the pressure distribution of the air bearing film in HDDs, must be modified to account for the rarefaction effect. The meshless local Petrov-Galerkin (MLPG) method has been successfully used in several fields of solid and fluid mechanics and has proven to be an efficacious method. No meshes are needed in the MLPG method, either for the interpolation of the trial and test functions or for the integration of the weak form of the related differential equation. We solve the Reynolds equation in the head-disk interface (HDI) of HDDs using the MLPG method. The pressure distribution of the air bearing film obtained with the MLPG method is compared with the exact solution and with that obtained using a least-squares finite difference (LSFD) method. We also investigate the effects of the bearing number on the pressure value and the center of pressure based on this meshless method for different film-thickness ratios.

  10. Coupling Finite Element and Meshless Local Petrov-Galerkin Methods for Two-Dimensional Potential Problems

    NASA Technical Reports Server (NTRS)

    Chen, T.; Raju, I. S.

    2002-01-01

    A coupled finite element (FE) method and meshless local Petrov-Galerkin (MLPG) method for analyzing two-dimensional potential problems is presented in this paper. The analysis domain is subdivided into two regions, a finite element (FE) region and a meshless (MM) region. A single weighted residual form is written for the entire domain. Independent trial and test functions are assumed in the FE and MM regions. A transition region is created between the two regions. The transition region blends the trial and test functions of the FE and MM regions. The trial function blending is achieved using a technique similar to the 'Coons patch' method that is widely used in computer-aided geometric design. The test function blending is achieved by using either FE or MM test functions on the nodes in the transition element. The technique was evaluated by applying the coupled method to two potential problems governed by the Poisson equation. The coupled method passed all the patch test problems and gave accurate solutions for the problems studied.
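The 'Coons patch' blending mentioned above can be sketched with the classical bilinearly blended Coons formula; the boundary curves below are made up purely for illustration:

```python
import numpy as np

def coons_patch(c0, c1, d0, d1):
    """Bilinearly blended Coons patch from four boundary curves:
    c0(u)=S(u,0), c1(u)=S(u,1), d0(v)=S(0,v), d1(v)=S(1,v)."""
    P00, P10, P01, P11 = c0(0.0), c0(1.0), c1(0.0), c1(1.0)
    def S(u, v):
        ruled_u = (1-v)*c0(u) + v*c1(u)          # blend in v
        ruled_v = (1-u)*d0(v) + u*d1(v)          # blend in u
        bilin = ((1-u)*(1-v)*P00 + u*(1-v)*P10
                 + (1-u)*v*P01 + u*v*P11)        # corner correction
        return ruled_u + ruled_v - bilin
    return S

# Illustrative boundary curves with matching corners.
c0 = lambda u: np.array([u, 0.0, u*(1-u)])       # curved bottom edge
c1 = lambda u: np.array([u, 1.0, 0.0])
d0 = lambda v: np.array([0.0, v, 0.0])
d1 = lambda v: np.array([1.0, v, 0.0])
S = coons_patch(c0, c1, d0, d1)
print(S(0.5, 0.0))   # lies on c0: [0.5, 0, 0.25]
```

The key property, the same one exploited when blending FE and MM trial functions in the transition region, is that the blended surface reproduces all four boundary curves exactly.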

  11. A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.

    PubMed

    Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad

    2012-01-01

The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of the human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable despite the considerably large deformations that occurred. There was good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be used effectively in real-time applications.

  12. Simulation of the hot rolling of steel with direct iteration

    NASA Astrophysics Data System (ADS)

    Hanoglu, Umut; Šarler, Božidar

    2017-10-01

In this study a simulation system based on the meshless Local Radial Basis Function Collocation Method (LRBFCM) is applied to the hot rolling of steel. Rolling is a complex, 3D, thermo-mechanical problem; however, 2D cross-sectional slices, aligned with the rolling direction, are used as computational domains, and no heat flow or strain is considered in the direction orthogonal to the slices. For each predefined position along the rolling direction, the solution procedure is repeated until the slice reaches the final rolling position. Collocation nodes are initially distributed over the domain and boundaries of the initial slice. A local solution is achieved by considering overlapping influence domains with either 5 or 7 nodes. Radial Basis Functions (RBFs) are used for the temperature discretization in the thermal model and the displacement discretization in the mechanical model. The meshless solution procedure does not require a mesh-generation algorithm in the classic sense. Strong-form mechanical and thermal models are run for each slice, accounting for the contact with the roll's surface. Ideally plastic material behavior is considered for the mechanical results, where the nonlinear stress-strain relation is solved with direct iteration. The majority of Finite Element Method (FEM) simulations, including commercial software, use a conventional Newton-Raphson algorithm; however, direct iteration is chosen here due to its better compatibility with meshless methods. In order to overcome any unforeseen stability issues, redistribution of the nodes by Elliptic Node Generation (ENG) is applied to one or more slices throughout the simulation. The rolling simulation presented here helps the user to design, test and optimize different rolling schedules. The results can be seen minutes after the simulation's start in terms of temperature, displacement, stress and strain fields, as well as important technological parameters such as the roll-separating forces, roll torque, etc. An example of a rolling simulation, in which steel of initial size 110 × 110 mm is rolled to a round bar of 80 mm diameter, is shown in Fig. 3. A user-friendly computer application for industrial use was created using C# and the .NET framework.

  13. Reconstruction and analysis of hybrid composite shells using meshless methods

    NASA Astrophysics Data System (ADS)

    Bernardo, G. M. S.; Loja, M. A. R.

    2017-06-01

The importance of focusing research on viable models to predict the behaviour of structures, which may in some cases possess complex geometries, is growing in different scientific areas, ranging from civil and mechanical engineering to architecture and biomedical devices. In these cases, the research effort to find an efficient approach for fitting laser-scanned point clouds to a desired surface has been increasing, opening the possibility of modelling the features of as-built/as-is structures and components. However, combining the task of surface reconstruction with the implementation of a structural analysis model is not trivial. Although there are works focusing on those phases separately, there is still an effective need for approaches able to interconnect them efficiently. Therefore, achieving a representative geometric model that can subsequently be submitted to structural analysis on a similar platform is a fundamental step towards an effective, expeditious processing workflow. With the present work, one presents an integrated methodology, based on the use of meshless approaches, to reconstruct shells described by point clouds and to subsequently predict their static behaviour. These methods are highly appropriate for dealing with unstructured point clouds, as they do not impose any specific spatial or geometric requirement when implemented, depending only on the distance between the points. Details on the formulation, and a set of illustrative examples focusing on the reconstruction of cylindrical and double-curvature shells and their further analysis, are presented.

  14. Meshless Local Petrov-Galerkin Euler-Bernoulli Beam Problems: A Radial Basis Function Approach

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2003-01-01

    A radial basis function implementation of the meshless local Petrov-Galerkin (MLPG) method is presented to study Euler-Bernoulli beam problems. Radial basis functions, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions as in the conventional MLPG method. Compactly and noncompactly supported radial basis functions are considered. The non-compactly supported cubic radial basis function is found to perform very well. Results obtained from the radial basis MLPG method are comparable to those obtained using the conventional MLPG method for mixed boundary value problems and problems with discontinuous loading conditions.

  15. A numerical scheme based on radial basis function finite difference (RBF-FD) technique for solving the high-dimensional nonlinear Schrödinger equations using an explicit time discretization: Runge-Kutta method

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-08-01

In this research, we investigate the numerical solution of nonlinear Schrödinger equations in two and three dimensions. The numerical meshless method used here is the RBF-FD technique. The main advantage of this method is that the required derivatives are approximated by a finite difference technique on each local support domain Ωi. On each Ωi, a small linear system of algebraic equations with a conditionally positive definite interpolation matrix of order 1 must be solved. This scheme is efficient, and its computational cost is the same as that of the moving least squares (MLS) approximation. A challenging issue is the choice of a suitable shape parameter for the interpolation matrix. To overcome this, an algorithm established by Sarra (2012) is applied, which computes the condition number of the local interpolation matrix using the singular value decomposition (SVD) to obtain its smallest and largest singular values. Moreover, an explicit method based on the fourth-order Runge-Kutta formula is applied to approximate the time variable; this also decreases the computational cost at each time step, since no nonlinear system must be solved. To compare the RBF-FD method with another meshless technique, the moving kriging least squares (MKLS) approximation is also considered for the studied model. Our results demonstrate the ability of the present approach to solve the model investigated in the current research work.
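The local RBF-FD weight computation can be sketched as follows. Note that this sketch uses Gaussian rather than the paper's basis functions and a fixed shape parameter, both illustrative assumptions rather than the paper's actual setup:

```python
import numpy as np

def rbf_fd_second_derivative_weights(nodes, center, eps):
    """RBF-FD weights for d2/dx2 at `center`, using Gaussian RBFs
    phi(r) = exp(-(eps*r)^2) on the nodes of one local support domain."""
    nodes = np.asarray(nodes, float)
    r = nodes[:, None] - nodes[None, :]
    A = np.exp(-(eps * r)**2)          # local interpolation matrix
    d = nodes - center
    # d2/dx2 of each Gaussian basis function, evaluated at the center:
    b = (4.0*eps**4*d**2 - 2.0*eps**2) * np.exp(-(eps*d)**2)
    return np.linalg.solve(A, b)       # stencil weights

nodes = np.array([-0.1, 0.0, 0.1])
w = rbf_fd_second_derivative_weights(nodes, 0.0, eps=0.1)
approx = w @ nodes**2                  # apply the stencil to f(x) = x^2
print(approx)                          # close to f''(0) = 2
```

As the shape parameter shrinks, these weights approach the classical finite-difference stencil; the conditioning of `A` then degrades, which is exactly why the abstract's SVD-based shape-parameter selection matters.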

  16. Meshless Solution of the Problem on the Static Behavior of Thin and Thick Laminated Composite Beams

    NASA Astrophysics Data System (ADS)

    Xiang, S.; Kang, G. W.

    2018-03-01

For the first time, the static behavior of laminated composite beams is analyzed using a meshless collocation method based on the thin-plate-spline radial basis function. When approximating a partial differential equation with radial basis functions, the shape parameter plays an important role in ensuring numerical accuracy. The shape parameter is easier to choose for the thin-plate-spline radial basis function than for other radial basis functions. The governing differential equations are derived based on Reddy's third-order shear deformation theory. Numerical results are obtained for symmetric cross-ply laminated composite beams with simple-simple (simply supported) and cantilever boundary conditions under a uniform load. The results are compared with available published ones and demonstrate the accuracy of the present method.

  17. A New Hybrid Viscoelastic Soft Tissue Model based on Meshless Method for Haptic Surgical Simulation

    PubMed Central

    Bao, Yidong; Wu, Dongmei; Yan, Zhiyuan; Du, Zhijiang

    2013-01-01

This paper proposes a hybrid soft tissue model for a meshless surgical simulation system, consisting of a multilayer structure and many spheres. To improve the accuracy of the model, tension is added to the three-parameter viscoelastic structure that connects two spheres. Driven by a haptic device, the three-parameter viscoelastic model (TPM) produces accurate deformation and also has good stress-strain, stress-relaxation and creep properties. Stress-relaxation and creep formulas have been obtained by mathematical derivation. Compared with the experimental results for real pig liver reported by Evren et al. and Amy et al., the stress-strain, stress-relaxation and creep curves of the TPM are close to the experimental data of the real liver. Simulated results show that the TPM has good real-time performance, stability and accuracy. PMID:24339837
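A three-parameter viscoelastic (standard linear solid) model has closed-form relaxation and creep responses; the sketch below uses the textbook Zener form with made-up parameter values, not the paper's fitted liver parameters:

```python
import numpy as np

def relaxation_modulus(t, E1, E2, eta):
    """Stress relaxation of a three-parameter (standard linear solid)
    model: spring E1 in parallel with a Maxwell arm (spring E2, dashpot eta).
    Under a step strain, E(t) = E1 + E2*exp(-t/tau), tau = eta/E2."""
    tau = eta / E2
    return E1 + E2 * np.exp(-t / tau)

def creep_compliance(t, E1, E2, eta):
    """Creep under a step stress for the same model:
    J(t) = J0 + (Jinf - J0)*(1 - exp(-t/tau_c)),
    with J0 = 1/(E1+E2), Jinf = 1/E1, tau_c = eta*(E1+E2)/(E1*E2)."""
    tau_c = eta * (E1 + E2) / (E1 * E2)
    J0, Jinf = 1.0/(E1 + E2), 1.0/E1
    return J0 + (Jinf - J0) * (1.0 - np.exp(-t / tau_c))

t = np.linspace(0.0, 10.0, 200)
E = relaxation_modulus(t, E1=1.0, E2=2.0, eta=4.0)
J = creep_compliance(t, E1=1.0, E2=2.0, eta=4.0)
print(E[0], E[-1])   # 3.0 at t=0, relaxing toward 1.0
```

Because both responses are single exponentials, each time step of a haptic loop costs a handful of arithmetic operations, which is consistent with the real-time claim in the abstract.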

  18. A second order radiative transfer equation and its solution by meshless method with application to strongly inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Zhao, J. M.; Tan, J. Y.; Liu, L. H.

    2013-01-01

A new second order form of the radiative transfer equation (named MSORTE) is proposed, which overcomes the singularity problem of a previously proposed second order radiative transfer equation [J.E. Morel, B.T. Adams, T. Noh, J.M. McGhee, T.M. Evans, T.J. Urbatsch, Spatial discretizations for self-adjoint forms of the radiative transfer equations, J. Comput. Phys. 214 (1) (2006) 12-40 (where it was termed SAAI); J.M. Zhao, L.H. Liu, Second order radiative transfer equation and its properties of numerical solution using finite element method, Numer. Heat Transfer B 51 (2007) 391-409] in dealing with inhomogeneous media where some locations have a very small or zero extinction coefficient. The MSORTE contains a naturally introduced diffusion (or second order) term which provides better numerical properties than the classic first order radiative transfer equation (RTE). The stability and convergence characteristics of the MSORTE discretized by a central difference scheme are analyzed theoretically, and the better numerical stability of second order radiative transfer equations over the RTE under central-difference-type discretization is proved. A collocation meshless method is developed based on the MSORTE to solve radiative transfer in inhomogeneous media. Several critical test cases are taken to verify the performance of the presented method. The collocation meshless method based on the MSORTE is demonstrated to be capable of stably and accurately solving radiative transfer in strongly inhomogeneous media, in media with void regions, and even with discontinuous extinction coefficients.

  19. A compatible high-order meshless method for the Stokes equations with applications to suspension flows

    NASA Astrophysics Data System (ADS)

    Trask, Nathaniel; Maxey, Martin; Hu, Xiaozhe

    2018-02-01

    A stable numerical solution of the steady Stokes problem requires compatibility between the choice of velocity and pressure approximation that has traditionally proven problematic for meshless methods. In this work, we present a discretization that couples a staggered scheme for pressure approximation with a divergence-free velocity reconstruction to obtain an adaptive, high-order, finite difference-like discretization that can be efficiently solved with conventional algebraic multigrid techniques. We use analytic benchmarks to demonstrate equal-order convergence for both velocity and pressure when solving problems with curvilinear geometries. In order to study problems in dense suspensions, we couple the solution for the flow to the equations of motion for freely suspended particles in an implicit monolithic scheme. The combination of high-order accuracy with fully-implicit schemes allows the accurate resolution of stiff lubrication forces directly from the solution of the Stokes problem without the need to introduce sub-grid lubrication models.

  20. Free Mesh Method: fundamental conception, algorithms and accuracy study

    PubMed Central

    YAGAWA, Genki

    2011-01-01

    The finite element method (FEM) has been commonly employed in a variety of fields as a computer simulation method to solve such problems as solid, fluid, electro-magnetic phenomena and so on. However, creation of a quality mesh for the problem domain is a prerequisite when using FEM, which becomes a major part of the cost of a simulation. It is natural that the concept of meshless method has evolved. The free mesh method (FMM) is among the typical meshless methods intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, especially on parallel processors. FMM is an efficient node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm for the finite element calculations. In this paper, FMM and its variation are reviewed focusing on their fundamental conception, algorithms and accuracy. PMID:21558752

  1. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.

  2. Element free Galerkin formulation of composite beam with longitudinal slip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmad, Dzulkarnain; Mokhtaram, Mokhtazul Haizad; Badli, Mohd Iqbal

    2015-05-15

Behaviour between two materials in a composite beam is assumed to be partial interaction when longitudinal slip at the interfacial surfaces is considered. While such beams are commonly analysed by mesh-based formulations, this study applies a meshless formulation, the Element Free Galerkin (EFG) method, to the numerical analysis of beam partial interaction. As a meshless formulation implies that the problem domain is discretised only by nodes, the EFG method is based on the Moving Least Squares (MLS) approach for shape function formulation, with its weak form developed using the variational method. The essential boundary conditions are enforced by Lagrange multipliers. The proposed EFG formulation gives comparable results, after being verified against the analytical solution, thus signifying its applicability to partial interaction problems. Based on numerical test results, the Cubic Spline and Quartic Spline weight functions yield better accuracy for the EFG formulation compared to the other proposed weight functions.
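A minimal sketch of 1D MLS shape functions with the cubic spline weight mentioned above; the node layout, support size `d`, and the linear basis are illustrative choices, not the paper's discretisation:

```python
import numpy as np

def cubic_spline_weight(q):
    """Standard EFG cubic spline weight; support q = |x - x_i|/d in [0, 1]."""
    return np.where(q <= 0.5, 2/3 - 4*q**2 + 4*q**3,
           np.where(q <= 1.0, 4/3 - 4*q + 4*q**2 - (4/3)*q**3, 0.0))

def mls_shape_functions(x, nodes, d):
    """1D MLS shape functions at x, with a linear basis p = [1, x]."""
    w = cubic_spline_weight(np.abs(x - nodes) / d)
    P = np.column_stack([np.ones_like(nodes), nodes])  # p at each node
    A = P.T @ (w[:, None] * P)        # 2x2 moment matrix
    B = P.T * w                       # (2, n): B[:, i] = w_i * p(x_i)
    return np.array([1.0, x]) @ np.linalg.solve(A, B)

nodes = np.linspace(0.0, 1.0, 6)
phi = mls_shape_functions(0.37, nodes, d=0.45)
print(phi.sum())     # partition of unity: 1.0
print(phi @ nodes)   # linear reproduction: 0.37
```

Partition of unity and linear reproduction are the two properties any MLS implementation should verify first; note that MLS shape functions generally lack the Kronecker delta property, which is why the abstract enforces essential boundary conditions with Lagrange multipliers.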

  3. Meshless deformable models for 3D cardiac motion and strain analysis from tagged MRI.

    PubMed

    Wang, Xiaoxu; Chen, Ting; Zhang, Shaoting; Schaerer, Joël; Qian, Zhen; Huh, Suejung; Metaxas, Dimitris; Axel, Leon

    2015-01-01

Tagged magnetic resonance imaging (TMRI) provides a direct and noninvasive way to visualize the in-wall deformation of the myocardium. Due to the through-plane motion, tracking the 3D trajectories of material points and computing the 3D strain field require building 3D cardiac deformable models. The intersections of three stacks of orthogonal tagging planes are material points in the myocardium. With these intersections as control points, 3D motion can be reconstructed with a novel meshless deformable model (MDM). Volumetric MDMs describe an object as a point cloud inside the object boundary, and the coordinates of each point can be written as parametric functions. A generic heart mesh is registered on the TMRI with polar decomposition. A 3D MDM is generated and deformed with MR image tagging lines. Volumetric MDMs are deformed by calculating the dynamics function and minimizing the local Laplacian coordinates. The similarity transformation of each point is computed by assuming its neighboring points are making the same transformation. The deformation is computed iteratively until the control points match the target positions in the consecutive image frame. The 3D strain field is computed from the 3D displacement field with moving least squares. We demonstrate that MDMs outperformed the finite element method and the spline method with a numerical phantom. Meshless deformable models can track the trajectory of any material point in the myocardium and compute the 3D strain field of any particular area. The experimental results on in vivo healthy and patient heart MRI show that the MDM can fully recover the myocardium motion in three dimensions. Copyright © 2014. Published by Elsevier Inc.

  4. Meshless deformable models for 3D cardiac motion and strain analysis from tagged MRI

    PubMed Central

    Wang, Xiaoxu; Chen, Ting; Zhang, Shaoting; Schaerer, Joël; Qian, Zhen; Huh, Suejung; Metaxas, Dimitris; Axel, Leon

    2016-01-01

Tagged magnetic resonance imaging (TMRI) provides a direct and noninvasive way to visualize the in-wall deformation of the myocardium. Due to the through-plane motion, tracking the 3D trajectories of material points and computing the 3D strain field require building 3D cardiac deformable models. The intersections of three stacks of orthogonal tagging planes are material points in the myocardium. With these intersections as control points, 3D motion can be reconstructed with a novel meshless deformable model (MDM). Volumetric MDMs describe an object as a point cloud inside the object boundary, and the coordinates of each point can be written as parametric functions. A generic heart mesh is registered on the TMRI with polar decomposition. A 3D MDM is generated and deformed with MR image tagging lines. Volumetric MDMs are deformed by calculating the dynamics function and minimizing the local Laplacian coordinates. The similarity transformation of each point is computed by assuming its neighboring points are making the same transformation. The deformation is computed iteratively until the control points match the target positions in the consecutive image frame. The 3D strain field is computed from the 3D displacement field with moving least squares. We demonstrate that MDMs outperformed the finite element method and the spline method with a numerical phantom. Meshless deformable models can track the trajectory of any material point in the myocardium and compute the 3D strain field of any particular area. The experimental results on in vivo healthy and patient heart MRI show that the MDM can fully recover the myocardium motion in three dimensions. PMID:25157446

  5. A meshless approach to thermomechanics of DC casting of aluminium billets

    NASA Astrophysics Data System (ADS)

    Mavrič, B.; Šarler, B.

    2016-03-01

    The ability to model thermomechanics in DC casting is important due to the technological challenges caused by physical phenomena such as ingot distortion, cracking, hot tearing and residual stress. Many thermomechanical models already exist and usually take into account three contributions: elastic, thermal expansion, and viscoplastic to model the mushy zone. These models are, in the vast majority, solved by the finite element method. In the present work the elastic model that accounts for linear thermal expansion is considered. The method used for solving the model is of a novel meshless type and extends our previous meshless work on fluid mechanics problems. The solution to the problem is constructed using collocation on overlapping subdomains, which are composed of computational nodes. Multiquadric radial basis functions, augmented by monomials, are used for the displacement interpolation. The interpolation is constructed in such a manner that it readily satisfies the boundary conditions. The discretization results in the construction of a global square sparse matrix representing the system of linear equations for the displacement field. The developed method has many advantages. The system of equations can be easily constructed and efficiently solved. There is no need to perform expensive meshing of the domain, and the formulation of the method is similar in two and three dimensions. Since no meshing is required, nodes can easily be added or removed, which allows for efficient adaptation of the node arrangement density. The order of convergence, estimated through an analytically solvable test, can be adjusted through the number of interpolation nodes in the subdomain, with 6 nodes being enough for second-order convergence. Simulations of axisymmetric mechanical problems, associated with low-frequency electromagnetic DC casting, are presented.
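
    The multiquadric-plus-monomials interpolation at the core of the method can be illustrated with a small global example (a generic Python sketch, not the paper's local subdomain collocation; the node set, shape parameter, and all names are assumptions):

```python
import numpy as np

def mq_interpolant(nodes, f, shape=1.0):
    """Multiquadric RBF interpolation augmented with linear monomials,
    as commonly used in RBF collocation methods."""
    n, dim = nodes.shape
    r = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=2)
    A = np.sqrt(r**2 + shape**2)                 # multiquadric phi(r)
    P = np.hstack([np.ones((n, 1)), nodes])      # monomials 1, x, y
    M = np.block([[A, P], [P.T, np.zeros((dim + 1, dim + 1))]])
    rhs = np.concatenate([f, np.zeros(dim + 1)])
    coef = np.linalg.solve(M, rhs)
    lam, c = coef[:n], coef[n:]

    def u(x):
        phi = np.sqrt(np.sum((nodes - x) ** 2, axis=1) + shape**2)
        return phi @ lam + c[0] + c[1:] @ x
    return u

nodes = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
f = 2.0 * nodes[:, 0] - nodes[:, 1] + 3.0       # a linear field
u = mq_interpolant(nodes, f)
val = u(np.array([0.25, 0.75]))                 # 2*0.25 - 0.75 + 3 = 2.75
```

    Because of the monomial augmentation and the side conditions, any linear field is reproduced exactly, which is what guarantees first-order consistency of such collocation schemes.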

  6. A class of renormalised meshless Laplacians for boundary value problems

    NASA Astrophysics Data System (ADS)

    Basic, Josip; Degiuli, Nastia; Ban, Dario

    2018-02-01

    A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three different derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and a renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared to those obtained by the smoothed particle hydrodynamics method and the finite difference method on a regular grid. Finally, the strong forms of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions are solved in two and three dimensions by making use of the introduced operators in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distributions and offer adequate accuracy for mesh-based and mesh-free numerical methods that require frequent movement of the grid or point cloud.
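
    A plain Taylor-series least-squares Laplacian of the kind such derivations start from can be sketched as follows (a generic 2D Python sketch; the paper's renormalisation correction of the first derivatives is deliberately omitted, and all names are illustrative):

```python
import numpy as np

def ls_laplacian(center, u0, neighbors, u):
    """Approximate the Laplacian at `center` from scattered neighbours by a
    second-order Taylor least-squares fit."""
    d = neighbors - center
    dx, dy = d[:, 0], d[:, 1]
    # Taylor basis about the centre (constant term removed via u - u0);
    # unknowns are [u_x, u_y, u_xx, u_xy, u_yy].
    B = np.column_stack([dx, dy, 0.5 * dx**2, dx * dy, 0.5 * dy**2])
    coef, *_ = np.linalg.lstsq(B, u - u0, rcond=None)
    return coef[2] + coef[4]                  # u_xx + u_yy

rng = np.random.default_rng(1)
nb = rng.uniform(-0.1, 0.1, size=(12, 2))     # irregular neighbour cloud
f = lambda p: p[:, 0]**2 + p[:, 1]**2         # Laplacian = 4 everywhere
lap = ls_laplacian(np.zeros(2), 0.0, nb, f(nb))
```

    For quadratic fields the fit is exact on any generic neighbour cloud; the renormalisation in the paper improves behaviour on highly irregular arrangements.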

  7. A review on recent contribution of meshfree methods to structure and fracture mechanics applications.

    PubMed

    Daxini, S D; Prajapati, J M

    2014-01-01

    Meshfree methods are viewed as next-generation computational techniques. With the evident limitations of conventional grid-based methods, like FEM, in dealing with problems of fracture mechanics, large deformation, and simulation of manufacturing processes, meshfree methods have gained much attention from researchers. A number of meshfree methods have been proposed to date for analyzing complex problems in various fields of engineering. The present work attempts to review recent developments and some earlier applications of well-known meshfree methods like EFG and MLPG to various types of structural mechanics and fracture mechanics applications, such as bending, buckling, free vibration analysis, sensitivity analysis and topology optimization, single and mixed-mode crack problems, fatigue crack growth, and dynamic crack analysis, and some typical applications like vibration of cracked structures, thermoelastic crack problems, and failure transition in impact problems. Due to the complex nature of meshfree shape functions and the evaluation of domain integrals, meshless methods are computationally expensive compared to conventional mesh-based methods. Some improved versions of the original meshfree methods and other techniques suggested by researchers to improve the computational efficiency of meshfree methods are also reviewed here.

  8. The meshless local Petrov-Galerkin method based on moving Kriging interpolation for solving the time fractional Navier-Stokes equations.

    PubMed

    Thamareerat, N; Luadsong, A; Aschariyaphotha, N

    2016-01-01

    In this paper, we present a numerical scheme used to solve the nonlinear time fractional Navier-Stokes equations in two dimensions. We first employ the meshless local Petrov-Galerkin (MLPG) method based on a local weak formulation to form the system of discretized equations, and then approximate the time fractional derivative, interpreted in the sense of Caputo, by a simple quadrature formula. The moving Kriging interpolation, which possesses the Kronecker delta property, is applied to construct shape functions. This research aims to extend and further develop the applicability of the truly meshless MLPG method to the generalized incompressible Navier-Stokes equations. Two numerical examples are provided to illustrate the accuracy and efficiency of the proposed algorithm. Very good agreement between the numerically and analytically computed solutions can be observed in the verification. The present MLPG method has proved its efficiency and reliability for solving the two-dimensional time fractional Navier-Stokes equations arising in fluid dynamics as well as several other problems in science and engineering.
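
    The abstract does not spell out its "simple quadrature formula" for the Caputo derivative; the standard L1 scheme, shown below as an assumed stand-in, is one common choice:

```python
import numpy as np
from math import gamma

def caputo_l1(u, dt, alpha):
    """L1 quadrature for the Caputo derivative of order 0 < alpha < 1 at the
    last grid point, given samples u[0..n] on a uniform time grid."""
    n = len(u) - 1
    j = np.arange(n)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)      # L1 weights
    diffs = u[1:][::-1] - u[:-1][::-1]                 # u_{n-j} - u_{n-1-j}
    return (dt ** -alpha / gamma(2 - alpha)) * np.sum(b * diffs)

# For u(t) = t the Caputo derivative is t^(1-alpha)/Gamma(2-alpha),
# and the telescoping L1 sum reproduces it exactly:
alpha, dt, n = 0.5, 0.01, 100
t = dt * np.arange(n + 1)
approx = caputo_l1(t, dt, alpha)
exact = t[-1] ** (1 - alpha) / gamma(2 - alpha)
```

    The L1 scheme is first-order in time for smooth solutions and is exact on linear data, which makes it a convenient verification case.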

  9. 2D modeling of direct laser metal deposition process using a finite particle method

    NASA Astrophysics Data System (ADS)

    Anedaf, T.; Abbès, B.; Abbès, F.; Li, Y. M.

    2018-05-01

    Direct laser metal deposition is one of the additive manufacturing processes used to produce complex metallic parts. A thorough understanding of the underlying physical phenomena is required to obtain high-quality parts. In this work, a mathematical model is presented to simulate the coaxial laser direct deposition process, taking into account mass addition, heat transfer, and fluid flow with a free surface and melting. The fluid flow in the melt pool, together with the mass and energy balances, is solved using the Computational Fluid Dynamics (CFD) software NOGRID-points, based on the meshless Finite Pointset Method (FPM). The basis of the computations is a point cloud, which represents the continuum fluid domain. Each finite point carries all fluid information (density, velocity, pressure and temperature). The dynamic shape of the molten zone is explicitly described by the point cloud. The proposed model is used to simulate a single-layer cladding.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Ch.; Gao, X. W.; Sladek, J.

    This paper reports our recent research work on crack analysis in continuously non-homogeneous and linear elastic functionally graded materials. A meshless boundary element method is developed for this purpose. Numerical examples are presented and discussed to demonstrate the efficiency and accuracy of the present numerical method, and to show the effects of the material gradation on the crack-opening displacements and the stress intensity factors.

  11. Boundary particle method for Laplace transformed time fractional diffusion equations

    NASA Astrophysics Data System (ADS)

    Fu, Zhuo-Jia; Chen, Wen; Yang, Hai-Tian

    2013-02-01

    This paper develops a novel boundary meshless approach, Laplace transformed boundary particle method (LTBPM), for numerical modeling of time fractional diffusion equations. It implements Laplace transform technique to obtain the corresponding time-independent inhomogeneous equation in Laplace space and then employs a truly boundary-only meshless boundary particle method (BPM) to solve this Laplace-transformed problem. Unlike the other boundary discretization methods, the BPM does not require any inner nodes, since the recursive composite multiple reciprocity technique (RC-MRM) is used to convert the inhomogeneous problem into the higher-order homogeneous problem. Finally, the Stehfest numerical inverse Laplace transform (NILT) is implemented to retrieve the numerical solutions of time fractional diffusion equations from the corresponding BPM solutions. In comparison with finite difference discretization, the LTBPM introduces Laplace transform and Stehfest NILT algorithm to deal with time fractional derivative term, which evades costly convolution integral calculation in time fractional derivation approximation and avoids the effect of time step on numerical accuracy and stability. Consequently, it can effectively simulate long time-history fractional diffusion systems. Error analysis and numerical experiments demonstrate that the present LTBPM is highly accurate and computationally efficient for 2D and 3D time fractional diffusion equations.
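
    The Stehfest NILT step can be sketched independently of the BPM machinery (a standard Gaver-Stehfest implementation in Python; the even parameter N is a user choice, and the test transform is our own example):

```python
import math

def stehfest(F, t, N=12):
    """Gaver-Stehfest numerical inverse Laplace transform (N even).
    Approximates f(t) from its Laplace transform F(s) using only
    real-valued evaluations of F."""
    ln2 = math.log(2.0)
    M = N // 2
    total = 0.0
    for k in range(1, N + 1):
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, M) + 1):
            Vk += (j ** M * math.factorial(2 * j)
                   / (math.factorial(M - j) * math.factorial(j)
                      * math.factorial(j - 1) * math.factorial(k - j)
                      * math.factorial(2 * j - k)))
        Vk *= (-1) ** (k + M)
        total += Vk * F(k * ln2 / t)
    return ln2 / t * total

# F(s) = 1/(s+1) is the transform of f(t) = exp(-t):
approx = stehfest(lambda s: 1.0 / (s + 1.0), t=1.0)
```

    Because the method samples F only on the real axis, it pairs naturally with real-valued boundary-only solvers; in double precision, accuracy is limited by cancellation for large N, so N around 10 to 14 is typical.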

  12. Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.

    PubMed

    Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D

    2011-05-01

    Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model, which exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models. Additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and in experiments on an artificial heart. Providing higher accuracy than the standard model-based methods, it successfully copes with occlusions and provides high performance even when not all measurements are available. Combining the physical and stochastic description of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.

  13. 2005 22nd International Symposium on Ballistics. Volume 3 Thursday - Friday

    DTIC Science & Technology

    2005-11-18

    QinetiQ; Vladimir Titarev, Eleuterio Toro, Numeritek Limited The Mechanism Analysis of Interior Ballistics of Serial Chamber Gun, Dr. Sanjiu Ying, Charge...Elements and Meshless Particles, Gordon R. Johnson and Robert A. Stryk, Network Computing Services, Inc. Experimental and Numerical Study of the...Internal Ballistics Clive R. Woodley, David Finbow, QinetiQ; Vladimir Titarev, Eleuterio Toro, Numeritek Limited 22nd International Symposium on

  14. Simbol-X Mirror Module Thermal Shields: I-Design and X-Ray Transmission

    NASA Astrophysics Data System (ADS)

    Collura, A.; Barbera, M.; Varisco, S.; Basso, S.; Pareschi, G.; Tagliaferri, G.; Ayers, T.

    2009-05-01

    The Simbol-X mission is designed to fly in formation flight configuration. As a consequence, the telescope has both ends open to space, and thermal shielding at telescope entrance and exit is required to maintain temperature uniformity throughout the mirrors. Both mesh and meshless solutions are presently under study for the shields. We discuss the design and the X-ray transmission.

  15. SPH simulation of free surface flow over a sharp-crested weir

    NASA Astrophysics Data System (ADS)

    Ferrari, Angela

    2010-03-01

    In this paper the numerical simulation of a free surface flow over a sharp-crested weir is presented. Since in this case the usual shallow water assumptions are not satisfied, we propose to solve the problem using the full weakly compressible Navier-Stokes equations with the Tait equation of state for water. The numerical method used consists of the new meshless Smoothed Particle Hydrodynamics (SPH) formulation proposed by Ferrari et al. (2009) [8], which accurately tracks the free surface profile and provides monotone pressure fields. Thus, the unsteady evolution of the complex moving material interface (free surface) can be properly resolved. The simulations, involving about half a million fluid particles, have been run in parallel on two of the most powerful High Performance Computing (HPC) facilities in Europe. The validation of the results has been carried out by analysing the pressure field and comparing the free surface profiles obtained with the SPH scheme with experimental measurements available in the literature [18]. A very good quantitative agreement has been obtained.
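
    The Tait equation of state mentioned above closes the weakly compressible system by tying pressure to density. A minimal sketch (with illustrative reference values for water, not taken from the paper):

```python
def tait_pressure(rho, rho0=1000.0, c0=1480.0, gamma=7.0):
    """Tait equation of state for weakly compressible water,
    p = B * ((rho/rho0)**gamma - 1), with stiffness B = rho0*c0**2/gamma.
    Reference density and sound speed here are illustrative assumptions."""
    B = rho0 * c0**2 / gamma
    return B * ((rho / rho0) ** gamma - 1.0)

p = tait_pressure(1001.0)   # a 0.1% density increase already gives ~2 MPa
```

    With exponent gamma = 7 the pressure responds very stiffly to density, which is what keeps density fluctuations around 1% in weakly compressible SPH; in practice the sound speed is chosen as an artificial value well above the bulk flow velocity.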

  16. A 3D smoothed particle hydrodynamics model for erosional dam-break floods

    NASA Astrophysics Data System (ADS)

    Amicarelli, Andrea; Kocak, Bozhana; Sibilla, Stefano; Grabe, Jürgen

    2017-11-01

    A mesh-less smoothed particle hydrodynamics (SPH) model for bed-load transport on erosional dam-break floods is presented. This mixture model describes both the liquid phase and the solid granular material. The model is validated on the results from several experiments on erosional dam breaks. A comparison between the present model and a 2-phase SPH model for geotechnical applications (Gadget Soil; TUHH) is performed. A demonstrative 3D erosional dam break on complex topography is investigated. The present 3D mixture model is characterised by: no tuning parameter for the mixture viscosity; consistency with the Kinetic Theory of Granular Flow; ability to reproduce the evolution of the free surface and the bed-load transport layer; applicability to practical problems in civil engineering. The numerical developments of this study are represented by a new SPH scheme for bed-load transport, which is implemented in the SPH code SPHERA v.8.0 (RSE SpA), distributed as FOSS on GitHub.

  17. Object-constrained meshless deformable algorithm for high speed 3D nonrigid registration between CT and CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Ting; Kim, Sung; Goyal, Sharad

    2010-01-15

    Purpose: High-speed nonrigid registration between the planning CT and the treatment CBCT data is critical for real time image guided radiotherapy (IGRT) to improve the dose distribution and to reduce the toxicity to adjacent organs. The authors propose a new fully automatic 3D registration framework that integrates object-based global and seed constraints with the grayscale-based "demons" algorithm. Methods: Clinical objects were segmented on the planning CT images and were utilized as meshless deformable models during the nonrigid registration process. The meshless models reinforced a global constraint in addition to the grayscale difference between CT and CBCT in order to maintain the shape and the volume of geometrically complex 3D objects during the registration. To expedite the registration process, the framework was stratified into hierarchies, and the authors used a frequency domain formulation to diffuse the displacement between the reference and the target in each hierarchy. Also during the registration of pelvis images, they replaced the air region inside the rectum with estimated pixel values from the surrounding rectal wall and introduced an additional seed constraint to robustly track and match the seeds implanted into the prostate. The proposed registration framework and algorithm were evaluated on 15 real prostate cancer patients. For each patient, prostate gland, seminal vesicle, bladder, and rectum were first segmented by a radiation oncologist on planning CT images for radiotherapy planning purpose. The same radiation oncologist also manually delineated the tumor volumes and critical anatomical structures in the corresponding CBCT images acquired at treatment. These delineated structures on the CBCT were only used as the ground truth for the quantitative validation, while structures on the planning CT were used both as the input to the registration method and the ground truth in validation.
By registering the planning CT to the CBCT, a displacement map was generated. Segmented volumes in the CT images deformed using the displacement field were compared against the manual segmentations in the CBCT images to quantitatively measure the convergence of the shape and the volume. Other image features were also used to evaluate the overall performance of the registration. Results: The algorithm was able to complete the segmentation and registration process within 1 min, and the superimposed clinical objects achieved a volumetric similarity measure of over 90% between the reference and the registered data. Validation results also showed that the proposed registration could accurately trace the deformation inside the target volume with average errors of less than 1 mm. The method had a solid performance in registering the simulated images with up to 20 Hounsfield unit white noise added. Also, the side-by-side comparison with the original demons algorithm demonstrated its improved registration performance over the local pixel-based registration approaches. Conclusions: Given the strength and efficiency of the algorithm, the proposed method has significant clinical potential to accelerate and to improve the CBCT delineation and target tracking in online IGRT applications.

  18. 2005 22nd International Symposium on Ballistics Volume 2 Wednesday

    DTIC Science & Technology

    2005-11-18

    Information 1 Experimental and Numerical Study of the Penetration of Tungsten Carbide Into Steel Targets During High Rates of Strain John F. Moxnes...QinetiQ; Vladimir Titarev, Eleuterio Toro, Numeritek Limited The Mechanism Analysis of Interior Ballistics of Serial Chamber Gun, Dr. Sanjiu Ying, Charge...Elements and Meshless Particles, Gordon R. Johnson and Robert A. Stryk, Network Computing Services, Inc. Experimental and Numerical Study of the

  19. Modeling RF Fields in Hot Plasmas with Parallel Full Wave Code

    NASA Astrophysics Data System (ADS)

    Spencer, Andrew; Svidzinski, Vladimir; Zhao, Liangji; Galkin, Sergei; Kim, Jin-Soo

    2016-10-01

    FAR-TECH, Inc. is developing a suite of full wave RF plasma codes. It is based on a meshless formulation in configuration space with adapted cloud of computational points (CCP) capability and using the hot plasma conductivity kernel to model the nonlocal plasma dielectric response. The conductivity kernel is calculated by numerically integrating the linearized Vlasov equation along unperturbed particle trajectories. Work has been done on the following calculations: 1) the conductivity kernel in hot plasmas, 2) a monitor function based on analytic solutions of the cold-plasma dispersion relation, 3) an adaptive CCP based on the monitor function, 4) stencils to approximate the wave equations on the CCP, 5) the solution to the full wave equations in the cold-plasma model in tokamak geometry for ECRH and ICRH range of frequencies, and 6) the solution to the wave equations using the calculated hot plasma conductivity kernel. We will present results on using a meshless formulation on adaptive CCP to solve the wave equations and on implementing the non-local hot plasma dielectric response to the wave equations. The presentation will include numerical results of wave propagation and absorption in the cold and hot tokamak plasma RF models, using DIII-D geometry and plasma parameters. Work is supported by the U.S. DOE SBIR program.

  20. Normalized Implicit Radial Models for Scattered Point Cloud Data without Normal Vectors

    DTIC Science & Technology

    2009-03-23

    points by shrinking a discrete membrane, Computer Graphics Forum, Vol. 24-4, 2005, pp. 791-808 [8] Floater, M. S., Reimers, M.: Meshless...Parameterization and Surface Reconstruction, Computer Aided Geometric Design 18, 2001, pp. 77-92 [9] Floater, M. S.: Parameterization of Triangulations and...Unorganized Points, In: Tutorials on Multiresolution in Geometric Modelling, A. Iske, E. Quak and M. S. Floater (eds.), Springer, 2002, pp. 287-316 [10

  1. Smoothed Particle Hydrodynamics and its applications for multiphase flow and reactive transport in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Alexandre M.; Trask, Nathaniel; Pan, K.

    2016-03-11

    Smoothed Particle Hydrodynamics (SPH) is a Lagrangian method based on a meshless discretization of partial differential equations. In this review, we present SPH discretization of the Navier-Stokes and Advection-Diffusion-Reaction equations, implementation of various boundary conditions, and time integration of the SPH equations, and we discuss applications of the SPH method for modeling pore-scale multiphase flows and reactive transport in porous and fractured media.
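
    The basic SPH discretization reviewed above starts from the kernel density summation, which can be illustrated in one dimension (a minimal Python sketch with the standard cubic-spline kernel; spacing, smoothing length, and reference density are illustrative values):

```python
import numpy as np

def cubic_spline_1d(r, h):
    """Standard 1D cubic-spline SPH kernel with compact support 2h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                  # 1D normalisation
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

# Density by SPH summation, rho_i = sum_j m_j W(x_i - x_j, h):
dx, h, rho0 = 0.1, 0.12, 1000.0
x = np.arange(0.0, 10.0 + dx / 2, dx)        # uniform 1D particle column
m = rho0 * dx                                # particle mass
i = len(x) // 2                              # an interior particle
rho_i = np.sum(m * cubic_spline_1d(x[i] - x, h))
```

    On a uniform interior arrangement the summation recovers the reference density to within a fraction of a percent; near free surfaces the kernel support is truncated, which is one reason boundary treatment receives so much attention in SPH.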

  2. Computational Study of Colloidal Droplet Interactions with Three Dimensional Structures

    DTIC Science & Technology

    2015-05-18

    on the meshless SPH method for droplet impact on and sorption into a powder bed, considering free surface flow above the powder bed surface, infiltration of the liquid in the porous matrix, and the interfacial forces on the free moving surface. The model has been used to study the effect of impact

  3. MUFASA: galaxy formation simulations with meshless hydrodynamics

    NASA Astrophysics Data System (ADS)

    Davé, Romeel; Thompson, Robert; Hopkins, Philip F.

    2016-11-01

    We present the MUFASA suite of cosmological hydrodynamic simulations, which employs the GIZMO meshless finite mass (MFM) code including H2-based star formation, nine-element chemical evolution, two-phase kinetic outflows following scalings from the Feedback in Realistic Environments zoom simulations, and evolving halo mass-based quenching. Our fiducial (50 h-1 Mpc)3 volume is evolved to z = 0 with a quarter billion elements. The predicted galaxy stellar mass functions (GSMFs) reproduce observations from z = 4 → 0 to ≲ 1.2σ in cosmic variance, providing an unprecedented match to this key diagnostic. The cosmic star formation history and stellar mass growth show general agreement with data, with a strong archaeological downsizing trend such that dwarf galaxies form the majority of their stars after z ˜ 1. We run 25 and 12.5 h-1 Mpc volumes to z = 2 with identical feedback prescriptions, the latter resolving all hydrogen-cooling haloes, and the three runs display fair resolution convergence. The specific star formation rates broadly agree with data at z = 0, but are underpredicted at z ˜ 2 by a factor of 3, re-emphasizing a longstanding puzzle in galaxy evolution models. We compare runs using MFM and two flavours of smoothed particle hydrodynamics, and show that the GSMF is sensitive to hydrodynamics methodology at the ˜×2 level, which is sub-dominant to choices for parametrizing feedback.

  4. A multiphysics and multiscale model for low frequency electromagnetic direct-chill casting

    NASA Astrophysics Data System (ADS)

    Košnik, N.; Guštin, A. Z.; Mavrič, B.; Šarler, B.

    2016-03-01

    Simulation and control of macrosegregation, deformation and grain size in low frequency electromagnetic (EM) direct-chill casting (LFEMC) is important for downstream processing. Accordingly, a multiphysics and multiscale model is developed for the solution of the Lorentz force, temperature, velocity, concentration, deformation and grain structure of LFEMC-processed aluminum alloys, with a focus on axisymmetric billets. The mixture equations with the lever rule, a linearized phase diagram, and a stationary thermoelastic solid phase are assumed, together with the EM induction equation for the field imposed by the coil. An explicit diffuse approximate meshless solution procedure [1] is used for solving the EM field, and the explicit local radial basis function collocation method [2] is used for solving the coupled transport phenomena and thermomechanics fields. Pressure-velocity coupling is performed by the fractional step method [3]. The point automata method with a modified KGT model is used to estimate the grain structure [4] in a post-processing mode. Thermal, mechanical, EM and grain structure outcomes of the model are demonstrated. A systematic study of the complicated influences of the process parameters, including the intensity and frequency of the electromagnetic field, can be carried out with the model. The meshless solution framework, with the simplest physical models implemented, will be further extended by including more sophisticated microsegregation and grain structure models, as well as a more realistic solid and solid-liquid phase rheology.

  5. Meshless Method with Operator Splitting Technique for Transient Nonlinear Bioheat Transfer in Two-Dimensional Skin Tissues

    PubMed Central

    Zhang, Ze-Wei; Wang, Hui; Qin, Qing-Hua

    2015-01-01

    A meshless numerical scheme combining the operator splitting method (OSM), the radial basis function (RBF) interpolation, and the method of fundamental solutions (MFS) is developed for solving transient nonlinear bioheat problems in two-dimensional (2D) skin tissues. In the numerical scheme, the nonlinearity caused by linear and exponential relationships of temperature-dependent blood perfusion rate (TDBPR) is taken into consideration. In the analysis, the OSM is used first to separate the Laplacian operator and the nonlinear source term, and then the second-order time-stepping schemes are employed for approximating two splitting operators to convert the original governing equation into a linear nonhomogeneous Helmholtz-type governing equation (NHGE) at each time step. Subsequently, the RBF interpolation and the MFS involving the fundamental solution of the Laplace equation are respectively employed to obtain approximated particular and homogeneous solutions of the nonhomogeneous Helmholtz-type governing equation. Finally, the full fields consisting of the particular and homogeneous solutions are enforced to fit the NHGE at interpolation points and the boundary conditions at boundary collocations for determining unknowns at each time step. The proposed method is verified by comparison of other methods. Furthermore, the sensitivity of the coefficients in the cases of a linear and an exponential relationship of TDBPR is investigated to reveal their bioheat effect on the skin tissue. PMID:25603180
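
    The homogeneous-solution step via the MFS can be sketched in isolation for the Laplace equation (a minimal Python sketch; the source placement, test problem, and all names are our assumptions, not the paper's bioheat setup):

```python
import numpy as np

def mfs_laplace_2d(bnd, g, src):
    """Method of fundamental solutions for the 2D Laplace equation:
    collocate Dirichlet data g at boundary points `bnd` using fundamental
    solutions centred at exterior source points `src`."""
    def G(x, y):   # fundamental solution of the 2D Laplacian
        return -np.log(np.linalg.norm(x - y, axis=-1)) / (2.0 * np.pi)
    A = G(bnd[:, None, :], src[None, :, :])
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)
    return lambda x: G(x[None, :], src) @ coef

n = 40
th = 2 * np.pi * np.arange(n) / n
bnd = np.column_stack([np.cos(th), np.sin(th)])       # unit circle
src = 2.0 * bnd                                       # sources outside the domain
harmonic = lambda p: p[..., 0]**2 - p[..., 1]**2      # exact harmonic solution
u = mfs_laplace_2d(bnd, harmonic(bnd), src)
val = u(np.array([0.3, 0.2]))                         # exact value: 0.05
```

    Since every fundamental solution is harmonic inside the domain, the approximation satisfies the PDE exactly and only the boundary data are fitted; placing the sources on a fictitious boundary outside the domain keeps the kernel nonsingular.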

  6. Meshless method with operator splitting technique for transient nonlinear bioheat transfer in two-dimensional skin tissues.

    PubMed

    Zhang, Ze-Wei; Wang, Hui; Qin, Qing-Hua

    2015-01-16

    A meshless numerical scheme combining the operator splitting method (OSM), the radial basis function (RBF) interpolation, and the method of fundamental solutions (MFS) is developed for solving transient nonlinear bioheat problems in two-dimensional (2D) skin tissues. In the numerical scheme, the nonlinearity caused by linear and exponential relationships of temperature-dependent blood perfusion rate (TDBPR) is taken into consideration. In the analysis, the OSM is used first to separate the Laplacian operator and the nonlinear source term, and then the second-order time-stepping schemes are employed for approximating two splitting operators to convert the original governing equation into a linear nonhomogeneous Helmholtz-type governing equation (NHGE) at each time step. Subsequently, the RBF interpolation and the MFS involving the fundamental solution of the Laplace equation are respectively employed to obtain approximated particular and homogeneous solutions of the nonhomogeneous Helmholtz-type governing equation. Finally, the full fields consisting of the particular and homogeneous solutions are enforced to fit the NHGE at interpolation points and the boundary conditions at boundary collocations for determining unknowns at each time step. The proposed method is verified by comparison of other methods. Furthermore, the sensitivity of the coefficients in the cases of a linear and an exponential relationship of TDBPR is investigated to reveal their bioheat effect on the skin tissue.

  7. Progress on the development of FullWave, a Hot and Cold Plasma Parallel Full Wave Code

    NASA Astrophysics Data System (ADS)

    Spencer, J. Andrew; Svidzinski, Vladimir; Zhao, Liangji; Kim, Jin-Soo

    2017-10-01

    FullWave is being developed at FAR-TECH, Inc. to simulate RF waves in hot inhomogeneous magnetized plasmas without making small orbit approximations. FullWave is based on a meshless formulation in configuration space on non-uniform clouds of computational points (CCP) adapted to better resolve plasma resonances, antenna structures and complex boundaries. The linear frequency domain wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel is calculated. The details of FullWave and some preliminary results will be presented, including: 1) a monitor function based on analytic solutions of the cold-plasma dispersion relation; 2) an adaptive CCP based on the monitor function; 3) construction of the finite differences for approximation of derivatives on adaptive CCP; 4) results of 2-D full wave simulations in the cold plasma model in tokamak geometry using the formulated approach for ECRH, ICRH and Lower Hybrid range of frequencies. Work is supported by the U.S. DOE SBIR program.

  8. Development of full wave code for modeling RF fields in hot non-uniform plasmas

    NASA Astrophysics Data System (ADS)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo

    2016-10-01

    FAR-TECH, Inc. is developing a full wave RF modeling code to model RF fields in fusion devices and in general plasma applications. As an important component of the code, an adaptive meshless technique is introduced to solve the wave equations, which allows resolving plasma resonances efficiently and adapting to the complexity of antenna geometry and device boundary. The computational points are generated using either a point elimination method or a force balancing method based on the monitor function, which is calculated by solving the cold plasma dispersion equation locally. Another part of the code is the conductivity kernel calculation, used for modeling the nonlocal hot plasma dielectric response. The conductivity kernel is calculated on a coarse grid of test points and then interpolated linearly onto the computational points. All the components of the code are parallelized using MPI and OpenMP libraries to optimize the execution speed and memory. The algorithm and the results of our numerical approach to solving 2-D wave equations in a tokamak geometry will be presented. Work is supported by the U.S. DOE SBIR program.

  9. Set-free Markov state model building

    NASA Astrophysics Data System (ADS)

    Weber, Marcus; Fackeldey, Konstantin; Schütte, Christof

    2017-03-01

    Molecular dynamics (MD) simulations face challenging problems since the time scales of interest often are much longer than what is possible to simulate; and even if sufficiently long simulations are possible the complex nature of the resulting simulation data makes interpretation difficult. Markov State Models (MSMs) help to overcome these problems by making experimentally relevant time scales accessible via coarse grained representations that also allow for convenient interpretation. However, standard set-based MSMs exhibit some caveats limiting their approximation quality and statistical significance. One of the main caveats results from the fact that typical MD trajectories repeatedly re-cross the boundary between the sets used to build the MSM which causes statistical bias in estimating the transition probabilities between these sets. In this article, we present a set-free approach to MSM building utilizing smooth overlapping ansatz functions instead of sets and an adaptive refinement approach. This kind of meshless discretization helps to overcome the recrossing problem and yields an adaptive refinement procedure that allows us to improve the quality of the model while exploring state space and inserting new ansatz functions into the MSM.
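The set-free construction can be sketched in a few lines. Below, hypothetical Gaussian ansatz functions (normalized to a partition of unity) replace crisp sets, and soft transition counts are accumulated from a toy double-well trajectory; the centers, widths, lag and dynamics are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D toy trajectory: overdamped dynamics in a double-well potential
x = np.zeros(20000)
for t in range(1, len(x)):
    force = -4.0 * x[t-1] * (x[t-1]**2 - 1.0)   # -dV/dx with V = (x^2 - 1)^2
    x[t] = x[t-1] + 0.01 * force + 0.1 * rng.normal()

centers = np.array([-1.0, 0.0, 1.0])            # ansatz-function centers

def membership(xs, width=0.5):
    """Smooth overlapping ansatz functions, normalized to a partition of
    unity: each frame belongs fractionally to all states, so there is no
    hard set boundary to re-cross."""
    w = np.exp(-(xs[:, None] - centers[None, :])**2 / (2 * width**2))
    return w / w.sum(axis=1, keepdims=True)

chi = membership(x)
lag = 50
C = chi[:-lag].T @ chi[lag:]           # soft transition counts at the lag
T = C / C.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
```

Adaptive refinement would then insert a new ansatz function wherever the membership field resolves the sampled data poorly; here the centers are simply fixed.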

  10. Free vibrations and buckling analysis of laminated plates by oscillatory radial basis functions

    NASA Astrophysics Data System (ADS)

    Neves, A. M. A.; Ferreira, A. J. M.

    2015-12-01

    In this paper the free vibrations and buckling analysis of laminated plates are performed using a global meshless method. A refined version of Kant's theory, which accounts for transverse normal stress and through-the-thickness deformation, is used. The innovation is the use of oscillatory radial basis functions. Numerical examples are performed and results are presented and compared to available references. Such functions proved to be an alternative to the traditional non-oscillatory radial basis functions.
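The abstract does not specify which oscillatory basis is used; a simple member of this family in 1-D is the sinc kernel sin(εr)/(εr), whose interpolation matrix is positive definite. The toy interpolation below is an illustration of the idea only, with an invented shape parameter:

```python
import numpy as np

def sinc_rbf(r, eps=20.0):
    """Oscillatory radial basis function sin(eps*r)/(eps*r),
    positive definite in 1-D; eps controls the oscillation rate."""
    out = np.ones_like(r)
    nz = r > 0
    out[nz] = np.sin(eps * r[nz]) / (eps * r[nz])
    return out

x = np.linspace(0.0, 1.0, 15)            # interpolation nodes
f = np.exp(x) * np.cos(2.0 * x)          # target field
A = sinc_rbf(np.abs(x[:, None] - x[None, :]))
coeff = np.linalg.solve(A, f)            # interpolation system A c = f

xe = 0.275                               # off-node evaluation point
fe = sinc_rbf(np.abs(xe - x)) @ coeff
```

In a plate analysis the same expansion would approximate the displacement field and be collocated into the governing equations rather than fitted to known data.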

  11. Mingus Discontinuous Multiphysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pat Notz, Dan Turner

    Mingus provides hybrid coupled local/non-local mechanics analysis capabilities that extend several traditional methods to applications with inherent discontinuities. Its primary features include adaptations of solid mechanics, fluid dynamics and digital image correlation that naturally accommodate disjointed data or irregular solution fields by assimilating a variety of discretizations (such as control volume finite elements, peridynamics and meshless control point clouds). The goal of this software is to provide an analysis framework for multiphysics engineering problems with an integrated image correlation capability that can be used for experimental validation and model

  12. Meshless Local Petrov-Galerkin Method for Solving Contact, Impact and Penetration Problems

    DTIC Science & Technology

    2006-11-30

    Crack Growth 3 point of view, this approach makes full use of the existing FE models to avoid any model regeneration, which is extremely high in...process, at point C, the pressure reduces to zero, but the volumetric strain does not go to zero due to the collapsed void volume. 2.2 Damage...lease rate to go beyond the critical strain energy release rate. Thus, the micro-cracks begin to grow inside these areas. At 10 micro-seconds, these

  13. Spatiotemporal groundwater level modeling using hybrid artificial intelligence-meshless method

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram

    2016-05-01

    Uncertainties in the field parameters, noise in the observed data and unknown boundary conditions are the main factors in groundwater level (GL) time series that limit the modeling and simulation of GL. This paper presents a hybrid artificial intelligence-meshless model for spatiotemporal GL modeling. First, the GL time series observed in different piezometers were de-noised using a threshold-based wavelet method, and the impact of de-noised versus noisy data was compared in temporal GL modeling by an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). In the second step, both ANN and ANFIS models were calibrated and verified using the GL data of each piezometer, rainfall and runoff under various input scenarios to predict the GL one month ahead. In the final step, the GLs simulated in the second step were used as interior conditions for a multiquadric radial basis function (RBF) based solution of the governing partial differential equation of groundwater flow, to estimate the GL at any desired point within the plain where there is no observation. In order to evaluate and compare the GL pattern at different time scales, cross-wavelet coherence was also applied to the GL time series of the piezometers. The results showed that the threshold-based wavelet de-noising approach can enhance the performance of the modeling by up to 13.4%. It was also found that the ANFIS-RBF model is more reliable than the ANN-RBF model in both the calibration and validation steps.
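The abstract does not spell out the RBF solver; a minimal Kansa-type multiquadric collocation for a 1-D model problem (a stand-in for the groundwater flow PDE, with an illustrative node count and shape parameter) looks like this:

```python
import numpy as np

# Solve u''(x) = f(x) on [0,1] with u(0) = u(1) = 0 by multiquadric
# collocation; the exact solution u = sin(pi x) is used only to verify.
n, c = 25, 0.1                           # node count, shape parameter
x = np.linspace(0.0, 1.0, n)
s = x[:, None] - x[None, :]
phi = np.sqrt(s**2 + c**2)               # multiquadric sqrt(r^2 + c^2)
d2phi = c**2 / phi**3                    # its exact second x-derivative

A = d2phi.copy()                         # PDE rows: collocate u'' = f
A[0, :] = phi[0, :]                      # Dirichlet row at x = 0
A[-1, :] = phi[-1, :]                    # Dirichlet row at x = 1
rhs = -np.pi**2 * np.sin(np.pi * x)
rhs[0] = rhs[-1] = 0.0

coeff = np.linalg.solve(A, rhs)
u = phi @ coeff                          # numerical solution at the nodes
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

In the paper's setting, the simulated piezometer GLs would supply the interior/boundary condition rows instead of the manufactured Dirichlet data used here.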

  14. Probabilistic numerical methods for PDE-constrained Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Cockayne, Jon; Oates, Chris; Sullivan, Tim; Girolami, Mark

    2017-06-01

    This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.

  15. Particle-based and meshless methods with Aboria

    NASA Astrophysics Data System (ADS)

    Robinson, Martin; Bruna, Maria

    Aboria is a powerful and flexible C++ library for the implementation of particle-based numerical methods. The particles in such methods can represent actual particles (e.g. Molecular Dynamics) or abstract particles used to discretise a continuous function over a domain (e.g. Radial Basis Functions). Aboria provides a particle container, compatible with the Standard Template Library, spatial search data structures, and a Domain Specific Language to specify non-linear operators on the particle set. This paper gives an overview of Aboria's design, an example of use, and a performance benchmark.
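Aboria itself is a C++ library; the core idea behind its spatial search data structures can be conveyed with a hedged Python cell-list sketch (not Aboria's API): bucket particles into cells whose side equals the interaction cutoff, then scan only adjacent cells for neighbours.

```python
import numpy as np
from collections import defaultdict

def build_cell_list(pts, cutoff):
    """Bucket particle indices by integer cell; any neighbour within
    `cutoff` of a point lies in the 3^d cells surrounding its own."""
    cells = defaultdict(list)
    for i, p in enumerate(pts):
        cells[tuple((p // cutoff).astype(int))].append(i)
    return cells

def neighbours(pts, cells, i, cutoff):
    """Indices of all particles within `cutoff` of particle i."""
    ci = np.array(tuple((pts[i] // cutoff).astype(int)))
    out = []
    for off in np.ndindex(*(3,) * pts.shape[1]):   # scan adjacent cells
        key = tuple(ci + np.array(off) - 1)
        for j in cells.get(key, []):
            if j != i and np.linalg.norm(pts[i] - pts[j]) < cutoff:
                out.append(j)
    return sorted(out)

rng = np.random.default_rng(1)
pts = rng.random((200, 2))               # 200 particles in the unit square
cells = build_cell_list(pts, 0.1)
nb0 = neighbours(pts, cells, 0, 0.1)
# Brute-force reference for verification
ref = sorted(j for j in range(200)
             if j != 0 and np.linalg.norm(pts[0] - pts[j]) < 0.1)
```

The cell list turns the all-pairs O(N²) search into O(N) for roughly uniform particle densities, which is what makes particle-based operators practical at scale.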

  16. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for their derivative-generation capability. In addition, a new Interpolating Moving Least Squares (IMLS) method adapted from meshless methods is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
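As a generic reminder of what MLS does (a sketch, not the paper's implementation): a weighted least-squares polynomial is refit around every evaluation point, and its coefficients give both the surrogate value and its derivative at that point.

```python
import numpy as np

def mls_fit(xs, fs, x0, h=0.3):
    """Moving least squares: Gaussian-weighted linear fit centred at x0.
    Returns (value, derivative) of the local polynomial at x0."""
    w = np.exp(-((xs - x0) / h) ** 2)          # weight decays away from x0
    P = np.vstack([np.ones_like(xs), xs - x0]).T
    WP = P * w[:, None]
    beta = np.linalg.solve(P.T @ WP, WP.T @ fs)
    return beta[0], beta[1]                    # p(x0), p'(x0)

xs = np.linspace(0.0, 1.0, 21)                 # sample sites
fs = xs**2                                     # responses
val, der = mls_fit(xs, fs, 0.5)                # derivative estimate at 0.5
```

Because the weights move with x0, the fit (and hence the derivative estimate) adapts locally, which is the property the paper's comparison exercises.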

  17. Modeling of single film bubble and numerical study of the plateau structure in foam system

    NASA Astrophysics Data System (ADS)

    Sun, Zhong-guo; Ni, Ni; Sun, Yi-jie; Xi, Guang

    2018-02-01

    The single-film bubble has a special geometry, with a certain amount of gas shrouded by a thin layer of liquid film under the surface tension force on both the inside and outside surfaces of the bubble. Based on the mesh-less moving particle semi-implicit (MPS) method, a single-film double-gas-liquid-interface surface tension (SDST) model is established for the single-film bubble, which has two gas-liquid interfaces, one on each side of the film. Within this framework, the conventional surface free energy surface tension model is improved by using a higher-order potential energy equation between particles, and the modification results in higher accuracy and better symmetry properties. The complex interface movement in the oscillation process of the single-film bubble is numerically captured, as well as typical flow phenomena and deformation characteristics of the liquid film. In addition, the basic behaviors of the coalescence and connection process between two and even three single-film bubbles are studied, and cases with bubbles of different sizes are also included. Furthermore, the classic Plateau structure in the foam system is reproduced and numerically shown to be in a steady state for multi-bubble connections.

  18. Radiation effects on bifurcation and dual solutions in transient natural convection in a horizontal annulus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Kang; Yi, Hong-Liang, E-mail: yihongliang@hit.edu.cn; Tan, He-Ping, E-mail: tanheping@hit.edu.cn

    2014-05-15

    Transitions and bifurcations of transient natural convection in a horizontal annulus with a radiatively participating medium are numerically investigated using the coupled lattice Boltzmann and direct collocation meshless (LB-DCM) method. As a hybrid approach based on a common multi-scale Boltzmann-type model, the LB-DCM scheme is easy to implement and has excellent flexibility in dealing with irregular geometries. Separate particle distribution functions in the LBM are used to calculate the density field, the velocity field and the thermal field. In the radiatively participating medium, the contribution of thermal radiation to natural convection must be taken into account, and it is considered as a radiative term in the energy equation that is solved by the meshless method with moving least-squares (MLS) approximation. The occurrence of various instabilities and bifurcative phenomena is analyzed for different Rayleigh numbers Ra and Prandtl numbers Pr with and without radiation. Then, bifurcation diagrams and dual solutions are presented for relevant radiative parameters, such as the convection-radiation parameter Rc and the optical thickness τ. Numerical results show that the presence of volumetric radiation changes the static temperature gradient of the fluid, and generally results in an increase in the flow critical value. Besides, the existence and development of dual solutions of transient convection in the presence of radiation are greatly affected by the radiative parameters. Finally, the advantage of the LB-DCM combination is discussed, and the potential benefits of applying the LB-DCM method to multi-field coupling problems are demonstrated.

  19. The solitary wave solution of coupled Klein-Gordon-Zakharov equations via two different numerical methods

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Nikpour, Ahmad

    2013-09-01

    In this research, we propose two different methods to solve the coupled Klein-Gordon-Zakharov (KGZ) equations: the Differential Quadrature (DQ) and Global Radial Basis Functions (GRBFs) methods. In the DQ method, the derivative value of a function at a point is directly approximated by a linear combination of all functional values in the global domain. The principal work in this method is the determination of the weight coefficients. We use two ways of obtaining these coefficients: cosine expansion (CDQ) and radial basis functions (RBFs-DQ); the former is a mesh-based method and the latter belongs to the class of meshless methods. Unlike the DQ method, the GRBF method directly substitutes the expression of the function approximation by RBFs into the partial differential equation. The main problem in the GRBFs method is the ill-conditioning of the interpolation matrix. To avoid this problem, we study the bases introduced in Pazouki and Schaback (2011) [44]. Some examples are presented to compare the accuracy and ease of implementation of the proposed methods. In the numerical examples, we concentrate on the Inverse Multiquadric (IMQ) and second-order Thin Plate Spline (TPS) radial basis functions. Variable shape parameter strategies (exponential and random) are applied to the IMQ function and the results are compared with a constant shape parameter.
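The RBFs-DQ weights can be obtained by requiring the quadrature rule to be exact on the RBF basis itself, i.e. W = D·A⁻¹ where A is the interpolation matrix and D holds the basis derivatives. The sketch below uses a multiquadric as a stand-in (the paper works with IMQ and TPS) and an invented node count and shape parameter:

```python
import numpy as np

# RBF-DQ: first-derivative weights w_ij with f'(x_i) ~ sum_j w_ij f(x_j),
# exact for every multiquadric basis function centred at the nodes.
n, c = 17, 0.15
x = np.linspace(0.0, 1.0, n)
s = x[:, None] - x[None, :]
A  = np.sqrt(s**2 + c**2)       # phi(|x_i - x_j|), interpolation matrix
dA = s / A                      # d/dx_i of phi at the nodes
W = dA @ np.linalg.inv(A)       # DQ weight matrix

f = np.sin(2 * np.pi * x)       # test function
df = W @ f                      # approximate derivative at all nodes
err = np.max(np.abs(df - 2 * np.pi * np.cos(2 * np.pi * x)))
```

Once W is precomputed, every spatial derivative in the KGZ system becomes a matrix-vector product, which is what makes DQ-type schemes convenient for time stepping.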

  20. Discrete and continuum modelling of soil cutting

    NASA Astrophysics Data System (ADS)

    Coetzee, C. J.

    2014-12-01

    Both continuum and discrete methods are used to investigate the soil cutting process. The Discrete Element Method (DEM) is used for the discrete modelling and the Material-Point Method (MPM) is used for continuum modelling. MPM is a so-called particle method or meshless finite element method. Standard finite element methods have difficulty in modelling the entire cutting process due to large displacements and deformation of the mesh. The use of meshless methods overcomes this problem. MPM can model large deformations, frictional contact at the soil-tool interface, and dynamic effects (inertia forces). In granular materials the discreteness of the system is often important and rotational degrees of freedom are active, which might require enhanced theoretical approaches like polar continua. In polar continuum theories, the material points are considered to possess orientations. A material point has three degrees-of-freedom for rigid rotations, in addition to the three classic translational degrees-of-freedom. The Cosserat continuum is the most transparent and straightforward extension of the nonpolar (classic) continuum. Two-dimensional DEM and MPM (polar and nonpolar) simulations of the cutting problem are compared to experiments. The drag force and flow patterns are compared using cohesionless corn grains as material. The corn macro (continuum) and micro (DEM) properties were obtained from shear and oedometer tests. Results show that the dilatancy angle plays a significant role in the flow of material but has less of an influence on the draft force. Nonpolar MPM is the most accurate in predicting blade forces, blade-soil interface stresses and the position and orientation of shear bands. Polar MPM fails in predicting the orientation of the shear band, but is less sensitive to mesh size and mesh orientation compared to nonpolar MPM. DEM simulations show less material dilation than observed during experiments.

  1. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.

  2. A meshless method using radial basis functions for numerical solution of the two-dimensional KdV-Burgers equation

    NASA Astrophysics Data System (ADS)

    Zabihi, F.; Saffarian, M.

    2016-07-01

    The aim of this article is to obtain the numerical solution of the two-dimensional KdV-Burgers equation. We construct the solution using a different approach, based on collocation points. The solution uses the thin plate spline radial basis function, which builds an approximate solution by discretizing time and space into small steps. We use a predictor-corrector scheme to avoid solving the nonlinear system. The results of numerical experiments are compared with analytical solutions to confirm the accuracy and efficiency of the presented scheme.

  3. Novel two-way artificial boundary condition for 2D vertical water wave propagation modelled with Radial-Basis-Function Collocation Method

    NASA Astrophysics Data System (ADS)

    Mueller, A.

    2018-04-01

    A new transparent artificial boundary condition is derived for two-dimensional (vertical) (2DV) free-surface water wave propagation modelled using the meshless Radial-Basis-Function Collocation Method (RBFCM) as a boundary-only solution. The two-way artificial boundary condition (2wABC) works as a pure incidence, pure radiation and combined incidence/radiation BC. In this work the 2wABC is applied to harmonic linear water waves; its performance is tested against the analytical solution for wave propagation over a horizontal sea bottom, a standing and a partially standing wave, as well as the interference of waves with different periods.

  4. Modeling Soft Tissue Damage and Failure Using a Combined Particle/Continuum Approach.

    PubMed

    Rausch, M K; Karniadakis, G E; Humphrey, J D

    2017-02-01

    Biological soft tissues experience damage and failure as a result of injury, disease, or simply age; examples include torn ligaments and arterial dissections. Given the complexity of tissue geometry and material behavior, computational models are often essential for studying both damage and failure. Yet, because of the need to account for discontinuous phenomena such as crazing, tearing, and rupturing, continuum methods are limited. Therefore, we model soft tissue damage and failure using a particle/continuum approach. Specifically, we combine continuum damage theory with Smoothed Particle Hydrodynamics (SPH). Because SPH is a meshless particle method, and particle connectivity is determined solely through a neighbor list, discontinuities can be readily modeled by modifying this list. We show, for the first time, that an anisotropic hyperelastic constitutive model commonly employed for modeling soft tissue can be conveniently implemented within a SPH framework and that SPH results show excellent agreement with analytical solutions for uniaxial and biaxial extension as well as finite element solutions for clamped uniaxial extension in 2D and 3D. We further develop a simple algorithm that automatically detects damaged particles and disconnects the spatial domain along rupture lines in 2D and rupture surfaces in 3D. We demonstrate the utility of this approach by simulating damage and failure under clamped uniaxial extension and in a peeling experiment of virtual soft tissue samples. In conclusion, SPH in combination with continuum damage theory may provide an accurate and efficient framework for modeling damage and failure in soft tissues.
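The neighbor-list mechanism described above can be sketched as follows (hedged: the crack geometry, particle count and smoothing length are invented for illustration). Because SPH particles interact only through the list, deleting pairs whose connecting segment crosses a rupture line disconnects the domain without any remeshing:

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.random((300, 2)) - [0.0, 0.5]   # particles in [0,1] x [-0.5,0.5]
h = 0.15                                  # smoothing length / cutoff

def crosses_crack(p, q):
    """True if segment p-q crosses the rupture line y = 0, 0.25<=x<=0.75."""
    if (p[1] > 0) == (q[1] > 0):
        return False
    t = p[1] / (p[1] - q[1])              # parameter of the y = 0 crossing
    xc = p[0] + t * (q[0] - p[0])
    return 0.25 <= xc <= 0.75

# Neighbor list with crack-straddling pairs removed: the two sides of the
# rupture no longer exchange SPH interactions.
neigh = {i: [j for j in range(len(pts))
             if j != i and np.linalg.norm(pts[i] - pts[j]) < h
             and not crosses_crack(pts[i], pts[j])]
         for i in range(len(pts))}
```

In the paper's algorithm the crack location is not prescribed; a damage criterion flags particles, and the list is pruned dynamically along the detected rupture lines/surfaces.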

  5. Understanding casing flow in Pelton turbines by numerical simulation

    NASA Astrophysics Data System (ADS)

    Rentschler, M.; Neuhauser, M.; Marongiu, J. C.; Parkinson, E.

    2016-11-01

    For rehabilitation projects of Pelton turbines, the flow in the casing may have an important influence on the overall performance of the machine. Water sheets returning on the jets or on the runner significantly reduce efficiency, and run-away speed depends on the flow in the casing. CFD simulations can provide a detailed insight into this type of flow, but these simulations are computationally intensive. As, in general, the volume of water in a Pelton turbine is small compared to the complete volume of the turbine housing, a single-phase simulation greatly reduces the complexity of the simulation. In the present work a numerical tool based on the SPH-ALE meshless method is used to simulate the casing flow in a Pelton turbine. Using improved-order schemes reduces the numerical viscosity. This is necessary to resolve the flow in the jet and on the casing wall, where the velocity differs by two orders of magnitude. The results are compared to flow visualizations and measurements in a hydraulic laboratory. Several rehabilitation projects proved the added value of understanding the flow in the Pelton casing. The flow simulation helps in designing casing inserts, not only to see their influence on the flow, but also to calculate the stresses in the inserts. In some projects, the casing simulation leads to the understanding of unexpected flow behavior. One such example is presented, where the backsplash of a deflector hit the runner, creating a reversed rotation of the runner.

  6. Testing hydrodynamics schemes in galaxy disc simulations

    NASA Astrophysics Data System (ADS)

    Few, C. G.; Dobbs, C.; Pettitt, A.; Konstandin, L.

    2016-08-01

    We examine how three fundamentally different numerical hydrodynamics codes follow the evolution of an isothermal galactic disc with an external spiral potential. We compare an adaptive mesh refinement code (RAMSES), a smoothed particle hydrodynamics code (SPHNG), and a volume-discretized mesh-less code (GIZMO). Using standard refinement criteria, we find that RAMSES produces a disc that is less vertically concentrated and does not reach such high densities as the SPHNG or GIZMO runs. The gas surface density in the spiral arms increases at a lower rate for the RAMSES simulations compared to the other codes. There is also a greater degree of substructure in the SPHNG and GIZMO runs and secondary spiral arms are more pronounced. By resolving the Jeans length with a greater number of grid cells, we achieve more similar results to the Lagrangian codes used in this study. Other alterations to the refinement scheme (adding extra levels of refinement and refining based on local density gradients) are less successful in reducing the disparity between RAMSES and SPHNG/GIZMO. Although more similar, SPHNG displays different density distributions and vertical mass profiles to all modes of GIZMO (including the smoothed particle hydrodynamics version). This suggests differences also arise which are not intrinsic to the particular method but rather due to its implementation. The discrepancies between codes (in particular, the densities reached in the spiral arms) could potentially result in differences in the locations and time-scales for gravitational collapse, and therefore impact star formation activity in more complex galaxy disc simulations.

  7. Modeling Soft Tissue Damage and Failure Using a Combined Particle/Continuum Approach

    PubMed Central

    Rausch, M. K.; Karniadakis, G. E.; Humphrey, J. D.

    2016-01-01

    Biological soft tissues experience damage and failure as a result of injury, disease, or simply age; examples include torn ligaments and arterial dissections. Given the complexity of tissue geometry and material behavior, computational models are often essential for studying both damage and failure. Yet, because of the need to account for discontinuous phenomena such as crazing, tearing, and rupturing, continuum methods are limited. Therefore, we model soft tissue damage and failure using a particle/continuum approach. Specifically, we combine continuum damage theory with Smoothed Particle Hydrodynamics (SPH). Because SPH is a meshless particle method, and particle connectivity is determined solely through a neighbor list, discontinuities can be readily modeled by modifying this list. We show, for the first time, that an anisotropic hyperelastic constitutive model commonly employed for modeling soft tissue can be conveniently implemented within a SPH framework and that SPH results show excellent agreement with analytical solutions for uniaxial and biaxial extension as well as finite element solutions for clamped uniaxial extension in 2D and 3D. We further develop a simple algorithm that automatically detects damaged particles and disconnects the spatial domain along rupture lines in 2D and rupture surfaces in 3D. We demonstrate the utility of this approach by simulating damage and failure under clamped uniaxial extension and in a peeling experiment of virtual soft tissue samples. In conclusion, SPH in combination with continuum damage theory may provide an accurate and efficient framework for modeling damage and failure in soft tissues. PMID:27538848

  8. Computational performance of Free Mesh Method applied to continuum mechanics problems

    PubMed Central

    YAGAWA, Genki

    2011-01-01

    The free mesh method (FMM) is a kind of meshless method intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, or a node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm. The aim of the present paper is to review some unique numerical solutions of fluid and solid mechanics obtained by employing FMM as well as the Enriched Free Mesh Method (EFMM), a new version of FMM, including compressible flow and the sounding mechanism in air-reed instruments as applications to fluid mechanics, and automatic remeshing for slow crack growth, dynamic behavior of solids, and large-scale eigen-frequency analysis of an engine block as applications to solid mechanics. PMID:21558753

  9. 3D SPH numerical simulation of the wave generated by the Vajont rockslide

    NASA Astrophysics Data System (ADS)

    Vacondio, R.; Mignosa, P.; Pagani, S.

    2013-09-01

    A 3D numerical modeling of the wave generated by the Vajont slide, one of the most destructive ever occurred, is presented in this paper. A meshless Lagrangian Smoothed Particle Hydrodynamics (SPH) technique was adopted to simulate the highly fragmented violent flow generated by the falling slide in the artificial reservoir. The speed-up achievable via General Purpose Graphic Processing Units (GP-GPU) allowed adopting an adequate resolution to describe the phenomenon. The comparison with the data available in the literature showed that the results of the numerical simulation satisfactorily reproduce the maximum run-up, as well as the water surface elevation in the residual lake after the event. Moreover, the 3D velocity field of the flow during the event and the discharge hydrograph which overtopped the dam were obtained.

  10. Modeling of Flow, Transport and Controlled Sedimentation Phenomena during Mixing of Salt Solutions in Complex Porous Formations

    NASA Astrophysics Data System (ADS)

    Skouras, Eugene D.; Jaho, Sofia; Pavlakou, Efstathia I.; Sygouni, Varvara; Petsi, Anastasia; Paraskeva, Christakis A.

    2015-04-01

    The deposition of salts in porous media is a major engineering phenomenon encountered in a plethora of industrial and environmental applications, where in some cases it is desirable and in others not (oil production, geothermal systems, soil stabilization, etc.). A systematic approach to these problems requires knowledge of the key mechanisms of salt precipitation within the porous structures, in order to develop new methods to control the process. In this work, the development and solution of spatiotemporally variable mass balances during salt solution mixing along specific pores were performed. Both analytical models and finite-difference CFD models were applied for the study of flow and transport with simultaneous homogeneous and heterogeneous nucleation (by crystal growth on the surface of the pores) in simple geometries, while unstructured finite element and meshless methods were developed and implemented for spatial discretization, reconstruction, and solution of transport equations and homogeneous/heterogeneous reactions in more complex geometries. At the initial stages of this work, critical problem parameters were identified, such as the characteristics of the porosity, the number of dissolved components, etc. The parameters were then used for solving problems which correspond to available experimental data. For each combination of ions and materials, specific data and process characteristics were included: (a) crystal kinetics (nucleation, growth rates or reaction surface rates of crystals, critical suspension concentrations), (b) physico-chemical properties (bulk density, dimensions of generated crystals, ion diffusion coefficients in the solution), (c) operating parameters (macroscopic velocity, flow, or pressure gradient of the solution, ion concentration), (d) microfluidic data (geometry, flow area), (e) porosity data in the Darcy description (initial porosity, specific surface area, tortuosity).
    During the modeling of flow and transport in a three-dimensional pore network, the dependence of the mass balance in all major directions is taken into account, either as a three-dimensional network of pores with specific geometry (cylinders, sinusoidal cells), or as a homogeneous random medium (Darcy description). The distribution of the crystals along the porous medium was calculated in the case of selective crystallization on the walls, which is the predominant effect to date in the experiments. The crystal distribution was also examined in the case where crystallization was carried out in the bulk solution. Salt sedimentation experiments were simulated both in an unsaturated porous medium and in a medium saturated with an oil phase. A comparison of the simulation results with corresponding experimental results was performed in order to design improved systems for selective sedimentation of salts in porous formations. ACKNOWLEDGMENTS This research was partially funded by the European Union (European Social Fund-ESF) and Greek National Funds through the Operational Program "Education and Lifelong Learning" under the action Aristeia II (Code No 4420).

  11. Numerical Modelling of the Compressive and Tensile Response of Glass and Ceramic under High Pressure Dynamic Loading

    NASA Astrophysics Data System (ADS)

    Clegg, Richard A.; Hayhurst, Colin J.

    1999-06-01

    Ceramic materials, including glass, are commonly used as ballistic protection materials. The response of a ceramic to impact, perforation and penetration is complex and difficult and/or expensive to instrument for obtaining detailed physical data. This paper demonstrates how a hydrocode, such as AUTODYN, can be used to aid in the understanding of the response of brittle materials to high pressure impact loading and thus promote an efficient and cost effective design process. Hydrocode simulations cannot be made without appropriate characterisation of the material. Because of the complexity of the response of ceramic materials, this often requires a number of complex material tests. Here we present a methodology for using the results of flyer plate tests, in conjunction with numerical simulations, to derive input to the Johnson-Holmquist material model for ceramics. Most of the research effort in relation to the development of hydrocode material models for ceramics has concentrated on the material behaviour under compression and shear. While the penetration process is dominated by these aspects of the material response, the final damaged state of the material can be significantly influenced by the tensile behaviour. Modelling of the final damage state is important since this is often the only physical information which is available. In this paper we present a unique implementation, in a hydrocode, for improved modelling of brittle materials in the tensile regime. Tensile failure initiation is based on any combination of principal stress or strain, while the post-failure tensile response of the material is controlled through a Rankine plasticity damaging failure surface. The tensile failure surface can be combined with any of the traditional plasticity and/or compressive damage models. 
    Finally, the models and data are applied in both traditional grid-based Lagrangian and Eulerian solution techniques and the relatively new SPH (Smoothed Particle Hydrodynamics) meshless technique. Simulations of long rod impacts onto ceramic faced armour and hypervelocity impacts on glass solar array space structures are presented and compared with experiments.

  12. Three-Dimensional Smoothed Particle Hydrodynamics Modeling of Preferential Flow Dynamics at Fracture Intersections on a High-Performance Computing Platform

    NASA Astrophysics Data System (ADS)

    Kordilla, J.; Bresinsky, L. T.

    2017-12-01

    The physical mechanisms that govern preferential flow dynamics in unsaturated fractured rock formations are complex and not well understood. Fracture intersections may act as an integrator of unsaturated flow, leading to temporal delay, intermittent flow and partitioning dynamics. In this work, a three-dimensional Pairwise-Force Smoothed Particle Hydrodynamics (PF-SPH) model is applied to simulate gravity-driven multiphase flow at synthetic fracture intersections. SPH, as a meshless Lagrangian method, is particularly suitable for modeling deformable interfaces, such as the three-phase contact dynamics of droplets, rivulets and free-surface films. The static and dynamic contact angles are the most important parameters of gravity-driven free-surface flow. In SPH, surface tension and adhesion emerge naturally from the implemented pairwise fluid-fluid (s_ff) and solid-fluid (s_sf) interaction forces. The model was calibrated to a contact angle of 65°, which corresponds to the wetting properties of water on poly(methyl methacrylate). The accuracy of the SPH simulations was validated against an analytical solution of Poiseuille flow between two parallel plates and against laboratory experiments. Using the SPH model, the complex flow mode transitions from droplet to rivulet flow observed in an experimental study were reproduced. Additionally, laboratory dimensionless scaling experiments of water droplets were successfully replicated in SPH. Finally, SPH simulations were used to investigate the partitioning dynamics of single droplets into synthetic horizontal fractures with various apertures (Δdf = 0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0 mm) and offsets (Δdoff = -1.5, -1.0, -0.5, 0, 1.0, 2.0, 3.0 mm). Fluid masses were measured in the domains R1, R2 and R3. The perfect conditions of ideally smooth surfaces and the SPH-inherent advantage of particle tracking allow the recognition of small-scale partitioning mechanisms and their importance for bulk flow behavior.
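
    The pairwise-force construction used above can be sketched compactly. The cosine-shaped interaction below is the commonly used Tartakovsky-Meakin form for PF-SPH; the cutoff h and strength s are illustrative placeholders, not the calibrated values of this study:

```python
import math

def pairwise_force(r, h, s):
    """Pairwise interaction force of Tartakovsky-Meakin type, from which
    surface tension and adhesion emerge in PF-SPH: repulsive at short
    range (r < h/3), attractive at intermediate range, and zero beyond
    the cutoff h."""
    if r >= h:
        return 0.0
    return s * math.cos(1.5 * math.pi * r / h)

# Illustrative parameters (placeholders, not calibrated to a 65° contact angle)
h, s = 1.0, 1e-3
profile = [pairwise_force(0.05 * i, h, s) for i in range(25)]
```

    Calibrating separate fluid-fluid and solid-fluid strengths against a target static contact angle is what fixes the wetting behaviour in such models.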

  13. Hydrodynamic Simulations of Protoplanetary Disks with GIZMO

    NASA Astrophysics Data System (ADS)

    Rice, Malena; Laughlin, Greg

    2018-01-01

    Over the past several decades, the field of computational fluid dynamics has rapidly advanced as the range of available numerical algorithms and computationally feasible physical problems has expanded. The development of modern numerical solvers has provided a compelling opportunity to reconsider previously obtained results in search of yet undiscovered effects that may be revealed through longer integration times and more precise numerical approaches. In this study, we compare the results of past hydrodynamic disk simulations with those obtained from modern numerical resources. We focus our study on the GIZMO code (Hopkins 2015), which uses meshless methods to solve the homogeneous Euler equations of hydrodynamics while eliminating problems arising as a result of advection between grid cells. By comparing modern simulations with prior results, we hope to provide an improved understanding of the impact of fluid mechanics upon the evolution of protoplanetary disks.

  14. Smoothed particle hydrodynamics method from a large eddy simulation perspective

    NASA Astrophysics Data System (ADS)

    Di Mascio, A.; Antuono, M.; Colagrossi, A.; Marrone, S.

    2017-03-01

    The Smoothed Particle Hydrodynamics (SPH) method, often used for the modelling of the Navier-Stokes equations by a meshless Lagrangian approach, is revisited from the point of view of Large Eddy Simulation (LES). To this aim, the LES filtering procedure is recast in a Lagrangian framework by defining a filter that moves with the positions of the fluid particles at the filtered velocity. It is shown that the SPH smoothing procedure can be reinterpreted as a sort of LES Lagrangian filtering, and that, besides the terms coming from the LES convolution, additional contributions (never accounted for in the SPH literature) appear in the equations when formulated in a filtered fashion. Appropriate closure formulas are derived for the additional terms and a preliminary numerical test is provided to show the main features of the proposed LES-SPH model.
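
    For readers unfamiliar with the smoothing procedure being reinterpreted here, a minimal 1D sketch of the plain SPH kernel summation follows (standard cubic-spline kernel; this is generic SPH background, not the filtered LES-SPH formulation derived in the paper, and the particle layout is hypothetical):

```python
def cubic_spline_kernel(r, h):
    """Standard 1D cubic-spline SPH kernel with support radius 2h."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)   # 1D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x_i, positions, masses, h):
    """SPH summation density: rho_i = sum_j m_j W(x_i - x_j, h)."""
    return sum(m * cubic_spline_kernel(x_i - x, h)
               for x, m in zip(positions, masses))

# On a uniform particle line the summation density should recover unit density.
dx, h = 0.1, 0.13
xs = [i * dx for i in range(-40, 41)]
ms = [1.0 * dx] * len(xs)       # mass per particle for unit density
rho = sph_density(0.0, xs, ms, h)
```

    The Lagrangian LES filter of the paper plays a role analogous to this kernel convolution, but moves with the filtered velocity field.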

  15. Solution of Grad-Shafranov equation by the method of fundamental solutions

    NASA Astrophysics Data System (ADS)

    Nath, D.; Kalra, M. S.

    2014-06-01

    In this paper we have used the Method of Fundamental Solutions (MFS) to solve the Grad-Shafranov (GS) equation for the axisymmetric equilibria of tokamak plasmas with monomial sources. These monomials are the individual terms appearing on the right-hand side of the GS equation if one expands the nonlinear terms into polynomials. Unlike the Boundary Element Method (BEM), the MFS does not involve any singular integrals and is a meshless boundary-alone method. Its basic idea is to create a fictitious boundary around the actual physical boundary of the computational domain. This automatically removes the involvement of singular integrals. The results obtained by the MFS match well with the earlier results obtained using the BEM. The method is also applied to Solov'ev profiles and it is found that the results are in good agreement with analytical results.
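
    The MFS mechanics described above (sources on a fictitious boundary, boundary-only collocation, no singular integrals) can be sketched for the simplest elliptic case, the 2D Laplace equation on the unit disc; the GS operator itself is more involved, and all discretization parameters here are chosen purely for illustration:

```python
import math

def mfs_solve_laplace(boundary_pts, boundary_vals, source_pts):
    """Method of fundamental solutions for the 2D Laplace equation:
    expand u(x) = sum_k c_k ln|x - y_k| with sources y_k on a fictitious
    boundary outside the domain, and fit the coefficients to the Dirichlet
    data by least squares (normal equations, Gaussian elimination)."""
    n, m = len(boundary_pts), len(source_pts)
    A = [[math.log(math.dist(p, s)) for s in source_pts] for p in boundary_pts]
    ATA = [[sum(A[i][j] * A[i][k] for i in range(n)) for k in range(m)]
           for j in range(m)]
    ATb = [sum(A[i][j] * boundary_vals[i] for i in range(n)) for j in range(m)]
    for col in range(m):                      # elimination with partial pivoting
        piv = max(range(col, m), key=lambda r: abs(ATA[r][col]))
        ATA[col], ATA[piv] = ATA[piv], ATA[col]
        ATb[col], ATb[piv] = ATb[piv], ATb[col]
        for r in range(col + 1, m):
            f = ATA[r][col] / ATA[col][col]
            for c2 in range(col, m):
                ATA[r][c2] -= f * ATA[col][c2]
            ATb[r] -= f * ATb[col]
    c = [0.0] * m
    for r in range(m - 1, -1, -1):
        c[r] = (ATb[r] - sum(ATA[r][k] * c[k] for k in range(r + 1, m))) / ATA[r][r]
    return lambda p: sum(ck * math.log(math.dist(p, s))
                         for ck, s in zip(c, source_pts))

# Unit-disc test with harmonic data u = x^2 - y^2; fictitious sources on radius 2.
nb, ns = 40, 20
bpts = [(math.cos(2 * math.pi * i / nb), math.sin(2 * math.pi * i / nb)) for i in range(nb)]
bvals = [x * x - y * y for x, y in bpts]
spts = [(2 * math.cos(2 * math.pi * i / ns), 2 * math.sin(2 * math.pi * i / ns)) for i in range(ns)]
u = mfs_solve_laplace(bpts, bvals, spts)
```

    Placing the fictitious sources farther from the physical boundary improves accuracy but worsens conditioning; practical MFS implementations balance the two.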

  16. Meshless modelling of dynamic behaviour of glasses under intense shock loadings: Application to matter ejection during high velocity impacts on thin brittle targets

    NASA Astrophysics Data System (ADS)

    Michel, Y.; Chevalier, J.-M.; Durin, C.; Espinosa, C.; Malaise, F.; Barrau, J.-J.

    2006-08-01

    The purpose of this study is to present a new material model adapted to SPH modelling of the dynamic behaviour of glasses under shock loadings. The model has the ability to reproduce fragmentation and densification of glasses under compression as well as brittle tensile failure. It has been implemented in the LS-DYNA software and coupled with an SPH code. By comparison with CEA-CESTA experimental data, the model has been validated for fused silica and Pyrex glass at stress levels up to 35 GPa. For Laser MegaJoule applications, the present material model was applied to 3D high velocity impacts on thin brittle targets, with good agreement with experimental data obtained using CESTA's double-stage light-gas gun in terms of damage and matter ejection.

  17. Modelling highly deformable metal extrusion using SPH

    NASA Astrophysics Data System (ADS)

    Prakash, Mahesh; Cleary, Paul W.

    2015-05-01

    Computational modelling is often used to reduce trial extrusions through accurate defect prediction. Traditionally, metal extrusion is modelled using mesh-based finite element methods. However, large plastic deformations can lead to heavy re-meshing and numerical diffusion. Here we use the mesh-less smoothed particle hydrodynamics method, since it allows simulation of large deformations without re-meshing and tracks history-dependent properties such as plastic strain, making it suitable for defect prediction. The variation in plastic strain and deformation for an aluminium alloy in a cylindrical 3D geometry with extrusion ratio and die angle is evaluated. The extrusion process is found to have three distinct phases: an initial sharp rise in extrusion force, a steady phase requiring constant force, and a final sharp decline in force as the metal is completely extruded. Deformation and plastic strain increased significantly with extrusion ratio but only moderately with die angle. Extrusion force increased by 150% as the extrusion ratio increased from 2:1 to 4:1 but changed only marginally with die angle. A low-strain zone in the centre of the extruded product was found to be a function of extrusion ratio but was persistent and did not vary with die angle. Simulation of a complex 3D building-industry component showed large variations in plastic strain along the length of the product at two scales. These were due to the change in metal behaviour as extrusion progressed from phase 1 to phase 2. A stagnation zone at the back of the die was predicted that could lead to the "funnel" or "pipe" defect.

  18. Simplified galaxy formation with mesh-less hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lupi, Alessandro; Volonteri, Marta; Silk, Joseph

    2017-09-01

    Numerical simulations have become a necessary tool to describe the complex interactions among the different processes involved in galaxy formation and evolution, unfeasible via an analytic approach. The last decade has seen a great effort by the scientific community in improving the sub-grid physics modelling and the numerical techniques used to make numerical simulations more predictive. Although the recently publicly available code gizmo has proven to be successful in reproducing galaxy properties when coupled with the model of the MUFASA simulations and the more sophisticated prescriptions of the Feedback In Realistic Environments (FIRE) set-up, it has not yet been tested using delayed-cooling supernova feedback, which still represents a reasonable approach for large cosmological simulations, for which detailed sub-grid models are prohibitive. In order to limit the computational cost and to be able to resolve the disc structure in the galaxies, we perform a suite of zoom-in cosmological simulations at rather low resolution centred around a sub-L* galaxy with a halo mass of 3 × 10¹¹ M⊙ at z = 0, to investigate the ability of this simple model, coupled with the new hydrodynamic method of gizmo, to reproduce observed galaxy scaling relations (stellar to halo mass, stellar and baryonic Tully-Fisher, stellar mass-metallicity and mass-size). We find that the results are in good agreement with the main scaling relations, except for the total stellar mass, larger than that predicted by the abundance matching technique, and the effective sizes of the most massive galaxies in the sample, which are too small.

  19. Meshless analysis of shear deformable shells: the linear model

    NASA Astrophysics Data System (ADS)

    Costa, Jorge C.; Tiago, Carlos M.; Pimenta, Paulo M.

    2013-10-01

    This work develops a kinematically linear shell model departing from a consistent nonlinear theory. The initial geometry is mapped from a flat reference configuration by a stress-free finite deformation, after which the actual shell motion takes place. The model maintains the features of a complete stress-resultant theory with Reissner-Mindlin kinematics based on an inextensible director. A hybrid displacement variational formulation is presented, where the domain displacements and kinematic boundary reactions are independently approximated. The resort to a flat reference configuration allows the discretization using 2-D Multiple Fixed Least-Squares (MFLS) on the domain. The consistent definition of stress resultants and the consequent plane-stress assumption lead to a neat formulation for the analysis of shells. The consistent linear approximation, combined with MFLS, makes possible efficient computations with a desired continuity degree, leading to smooth results for the displacement, strain and stress fields, as shown by several numerical examples.

  20. Modified Finite Particle Methods for Stokes problems

    NASA Astrophysics Data System (ADS)

    Montanino, A.; Asprone, D.; Reali, A.; Auricchio, F.

    2018-04-01

    The Modified Finite Particle Method (MFPM) is a numerical method belonging to the class of meshless methods, which are nowadays widely investigated because of their ability to easily model large-deformation and fluid-dynamic problems. Here we use the MFPM to approximate the Stokes problem. Since the classical formulation of the Stokes problem may lead to spurious pressure oscillations, we investigate alternative formulations and focus on how the MFPM discretization behaves in those situations. Some of the investigated formulations, in fact, do not strongly enforce the incompressibility constraint, and therefore an important issue of the present work is to verify whether the MFPM is able to correctly reproduce incompressibility in those cases. The numerical results show that for the formulations in which the incompressibility constraint is properly satisfied from a numerical point of view, the expected second-order accuracy is achieved, both in static and in dynamic problems.

  1. GANDALF - Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids

    NASA Astrophysics Data System (ADS)

    Hubber, D. A.; Rosotti, G. P.; Booth, R. A.

    2018-01-01

    GANDALF is a new hydrodynamics and N-body dynamics code designed for investigating planet formation, star formation and star cluster problems. GANDALF is written in C++, parallelized with both OpenMP and MPI and contains a Python library for analysis and visualization. The code has been written with a fully object-oriented approach to easily allow user-defined implementations of physics modules or other algorithms. The code currently contains implementations of smoothed particle hydrodynamics, meshless finite-volume and collisional N-body schemes, but can easily be adapted to include additional particle schemes. We present in this paper the details of its implementation, results from the test suite, serial and parallel performance results and discuss the planned future development. The code is freely available as an open-source project on the code-hosting website GitHub at https://github.com/gandalfcode/gandalf and is available under the GPLv2 license.

  2. Fast multipole methods on a cluster of GPUs for the meshless simulation of turbulence

    NASA Astrophysics Data System (ADS)

    Yokota, R.; Narumi, T.; Sakamaki, R.; Kameoka, S.; Obi, S.; Yasuoka, K.

    2009-11-01

    Recent advances in the parallelizability of fast N-body algorithms, and the programmability of graphics processing units (GPUs) have opened a new path for particle based simulations. For the simulation of turbulence, vortex methods can now be considered as an interesting alternative to finite difference and spectral methods. The present study focuses on the efficient implementation of the fast multipole method and pseudo-particle method on a cluster of NVIDIA GeForce 8800 GT GPUs, and applies this to a vortex method calculation of homogeneous isotropic turbulence. The results of the present vortex method agree quantitatively with those of the reference calculation using a spectral method. We achieved a maximum speed of 7.48 TFlops using 64 GPUs, and the cost performance was near $9.4/GFlops. The calculation of the present vortex method on 64 GPUs took 4120 s, while the spectral method on 32 CPUs took 4910 s.

  3. A new method for solving the quantum hydrodynamic equations of motion: application to two-dimensional reactive scattering.

    PubMed

    Pauler, Denise K; Kendrick, Brian K

    2004-01-08

    The de Broglie-Bohm hydrodynamic equations of motion are solved using a meshless method based on a moving least squares approach and an arbitrary Lagrangian-Eulerian frame of reference. A regridding algorithm adds and deletes computational points as needed in order to maintain a uniform interparticle spacing, and unitary time evolution is obtained by propagating the wave packet using averaged fields. The numerical instabilities associated with the formation of nodes in the reflected portion of the wave packet are avoided by adding artificial viscosity to the equations of motion. The methodology is applied to a two-dimensional model collinear reaction with an activation barrier. Reaction probabilities are computed as a function of both time and energy, and are in excellent agreement with those based on the quantum trajectory method.
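
    A minimal 1D sketch of the moving-least-squares building block used by such solvers follows (linear basis and Gaussian weight; the actual quantum-trajectory implementation is considerably more elaborate, and the sample data below are hypothetical):

```python
import math

def mls_fit(x0, xs, fs, h):
    """Moving least squares with a linear basis [1, (x - x0)] and a
    Gaussian weight centred at x0: returns (value, derivative) estimates
    at x0 from scattered samples (xs, fs)."""
    # Accumulate the weighted 2x2 normal equations
    s0 = s1 = s2 = t0 = t1 = 0.0
    for x, f in zip(xs, fs):
        d = x - x0
        w = math.exp(-(d / h) ** 2)
        s0 += w; s1 += w * d; s2 += w * d * d
        t0 += w * f; t1 += w * f * d
    det = s0 * s2 - s1 * s1
    a = (t0 * s2 - t1 * s1) / det   # fitted value at x0
    b = (t1 * s0 - t0 * s1) / det   # fitted slope at x0
    return a, b

# Recover the value and slope of a smooth function from scattered points.
xs = [0.05 * i for i in range(-20, 21)]
fs = [math.sin(x) for x in xs]
val, slope = mls_fit(0.3, xs, fs, h=0.2)
```

    In a quantum-trajectory code the same machinery supplies the spatial derivatives of the density and action along the moving computational points.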

  4. gpuSPHASE-A shared memory caching implementation for 2D SPH using CUDA

    NASA Astrophysics Data System (ADS)

    Winkler, Daniel; Meister, Michael; Rezavand, Massoud; Rauch, Wolfgang

    2017-04-01

    Smoothed particle hydrodynamics (SPH) is a meshless Lagrangian method that has been successfully applied to computational fluid dynamics (CFD), solid mechanics and many other multi-physics problems. Using the method to solve transport phenomena in process engineering requires the simulation of several days to weeks of physical time. Given the high computational demand of CFD, such simulations in 3D would need a computation time of years, so that a reduction to a 2D domain is inevitable. In this paper gpuSPHASE, a new open-source 2D SPH solver implementation for graphics devices, is developed. It is optimized for simulations that must be executed with thousands of frames per second to be computed in reasonable time. A novel caching algorithm for Compute Unified Device Architecture (CUDA) shared memory is proposed and implemented. The software is validated and the performance is evaluated for the well-established dam-break test case.

  5. A system of three-dimensional complex variables

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1986-01-01

    Some results of a new theory of multidimensional complex variables are reported, including analytic functions of a three-dimensional (3-D) complex variable. Three-dimensional complex numbers are defined, including vector properties and rules of multiplication. The necessary conditions for a function of a 3-D variable to be analytic are given and shown to be analogous to the 2-D Cauchy-Riemann equations. A simple example also demonstrates the analogy between the newly defined 3-D complex velocity and 3-D complex potential and the corresponding ordinary complex velocity and complex potential in two dimensions.

  6. Development of stress boundary conditions in smoothed particle hydrodynamics (SPH) for the modeling of solids deformation

    NASA Astrophysics Data System (ADS)

    Douillet-Grellier, Thomas; Pramanik, Ranjan; Pan, Kai; Albaiz, Abdulaziz; Jones, Bruce D.; Williams, John R.

    2017-10-01

    This paper develops a method for imposing stress boundary conditions in smoothed particle hydrodynamics (SPH) with and without the need for dummy particles. SPH has been used for simulating phenomena in a number of fields, such as astrophysics and fluid mechanics. More recently, the method has gained traction as a technique for simulation of deformation and fracture in solids, where the meshless property of SPH can be leveraged to represent arbitrary crack paths. Despite this interest, application of boundary conditions within the SPH framework is typically limited to imposed velocity or displacement using fictitious dummy particles to compensate for the lack of particles beyond the boundary interface. While this is sufficient for a large variety of problems, especially in the case of fluid flow, for problems in solid mechanics there is a clear need to impose stresses upon boundaries. In addition, the use of dummy particles to impose a boundary condition is not always suitable or even feasible, especially for those problems which include internal boundaries. In order to overcome these difficulties, this paper first presents an improved method for applying stress boundary conditions in SPH with dummy particles. This is then followed by a proposal of a formulation which does not require dummy particles. These techniques are then validated against analytical solutions to two common problems in rock mechanics, the Brazilian test and the penny-shaped crack problem, both in 2D and 3D. This study highlights the fact that SPH offers a good level of accuracy to solve these problems and that the results are reliable. This validation work serves as a foundation for addressing more complex problems involving plasticity and fracture propagation.

  7. Solving the Inverse-Square Problem with Complex Variables

    ERIC Educational Resources Information Center

    Gauthier, N.

    2005-01-01

    The equation of motion for a mass that moves under the influence of a central, inverse-square force is formulated and solved as a problem in complex variables. To find the solution, the constancy of angular momentum is first established using complex variables. Next, the complex position coordinate and complex velocity of the particle are assumed…

  8. Stability of uncertain impulsive complex-variable chaotic systems with time-varying delays.

    PubMed

    Zheng, Song

    2015-09-01

    In this paper, the robust exponential stabilization of uncertain impulsive complex-variable chaotic delayed systems is considered with parameter perturbations and delayed impulses. It is assumed that the considered complex-variable chaotic systems have bounded parametric uncertainties together with the state variables on the impulses related to the time-varying delays. Based on the theories of adaptive control and impulsive control, some less conservative and easily verified stability criteria are established for a class of complex-variable chaotic delayed systems with delayed impulses. Some numerical simulations are given to validate the effectiveness of the proposed criteria of impulsive stabilization for uncertain complex-variable chaotic delayed systems.

  9. A compact large-format streak tube for imaging lidar

    NASA Astrophysics Data System (ADS)

    Hui, Dandan; Luo, Duan; Tian, Liping; Lu, Yu; Chen, Ping; Wang, Junfeng; Sai, Xiaofeng; Wen, Wenlong; Wang, Xing; Xin, Liwei; Zhao, Wei; Tian, Jinshou

    2018-04-01

    The streak tubes with a large effective photocathode area, large effective phosphor screen area, and high photocathode radiant sensitivity are essential for improving the field of view, depth of field, and detectable range of the multiple-slit streak tube imaging lidar. In this paper, a high spatial resolution, large photocathode area, and compact meshless streak tube with a spherically curved cathode and screen is designed and tested. Its spatial resolution reaches 20 lp/mm over the entire Φ28 mm photocathode working area, and the simulated physical temporal resolution is better than 30 ps. The temporal distortion in our large-format streak tube, which is shown to be a non-negligible factor, has a minimum value as the radius of curvature of the photocathode varies. Furthermore, the photocathode radiant sensitivity and radiant power gain reach 41 mA/W and 18.4 at the wavelength of 550 nm, respectively. Most importantly, the external dimensions of our streak tube are no more than Φ60 mm × 110 mm.

  10. Conjunction of radial basis function interpolator and artificial intelligence models for time-space modeling of contaminant transport in porous media

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram; Dabrowska, Dominika; Sadikoglu, Fahreddin

    2017-05-01

    As an innovation, both black-box and physics-based models were incorporated into simulating groundwater flow and contaminant transport. Time series of groundwater level (GL) and chloride concentration (CC) observed at different piezometers of the study plain were first de-noised by the wavelet-based de-noising approach. The effect of de-noised data on the performance of artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) models was evaluated. Wavelet transform coherence was employed for spatial clustering of piezometers. Then, for each cluster, ANN and ANFIS models were trained to predict GL and CC values. Finally, considering the predicted water heads of piezometers as interior conditions, the radial basis function, as a meshless method which solves the partial differential equations of groundwater flow and contaminant transport (GFCT), was used to estimate GL and CC values at any point within the plain where there is no piezometer. Results indicated that the ANFIS-based spatiotemporal model was up to 13% more efficient than the ANN-based model.
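
    The radial-basis-function interpolation step can be sketched in one dimension with a Gaussian basis; the node locations, shape parameter, and test field below are hypothetical stand-ins for the piezometer data:

```python
import math

def rbf_interpolator(nodes, values, eps):
    """Radial basis function interpolation with a Gaussian basis
    phi(r) = exp(-(eps*r)^2): solve the symmetric collocation system for
    the weights and return a callable interpolant (minimal meshless sketch)."""
    n = len(nodes)
    A = [[math.exp(-(eps * abs(nodes[i] - nodes[j])) ** 2) for j in range(n)]
         for i in range(n)]
    b = list(values)
    for col in range(n):                  # Gaussian elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            fct = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= fct * A[col][c]
            b[r] -= fct * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][k] * w[k] for k in range(r + 1, n))) / A[r][r]
    return lambda x: sum(wk * math.exp(-(eps * abs(x - xk)) ** 2)
                         for wk, xk in zip(w, nodes))

nodes = [i / 8 for i in range(9)]              # hypothetical "piezometer" locations
vals = [math.sin(2 * math.pi * x) for x in nodes]
f = rbf_interpolator(nodes, vals, eps=4.0)
```

    The interpolant honours the data exactly at the nodes, which is what lets predicted piezometer heads serve as interior conditions for the meshless solver.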

  11. Singular boundary method for wave propagation analysis in periodic structures

    NASA Astrophysics Data System (ADS)

    Fu, Zhuojia; Chen, Wen; Wen, Pihua; Zhang, Chuanzeng

    2018-07-01

    A strong-form boundary collocation method, the singular boundary method (SBM), is developed in this paper for wave propagation analysis at low and moderate wavenumbers in periodic structures. The SBM has several advantages: it is mathematically simple, easy to program, and meshless, and it applies the concept of origin intensity factors to eliminate the singularity of the fundamental solutions and avoid the numerical evaluation of the singular integrals that arise in the boundary element method. Due to the periodic behavior of the structures, the SBM coefficient matrix can be represented as a block Toeplitz matrix. By employing three different fast Toeplitz-matrix solvers, the computational time and storage requirements are significantly reduced in the proposed SBM analysis. To demonstrate the effectiveness of the proposed SBM formulation for wave propagation analysis in periodic structures, several benchmark examples are presented and discussed. The proposed SBM results are compared with the analytical solutions, the reference results and the COMSOL software.
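
    The fast Toeplitz solvers referred to above exploit the fact that a Toeplitz matrix embeds in a circulant one, which the discrete Fourier transform diagonalizes. Below is a minimal sketch of the resulting O(n log n) Toeplitz matrix-vector product (scalar case; the SBM system is block Toeplitz, treated analogously block by block):

```python
import cmath

def fft(a):
    """Radix-2 Cooley-Tukey FFT (len(a) must be a power of two)."""
    n = len(a)
    if n == 1:
        return a[:]
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(a):
    n = len(a)
    conj = fft([x.conjugate() for x in a])
    return [x.conjugate() / n for x in conj]

def toeplitz_matvec(first_col, first_row, x):
    """Multiply an n x n Toeplitz matrix by x by embedding it in an
    m x m circulant (m = next power of two >= 2n) and diagonalizing
    with the FFT: the fast-solver ingredient for Toeplitz systems."""
    n = len(x)
    m = 1
    while m < 2 * n:
        m *= 2
    # Circulant generator: first column, zero padding, reversed first-row tail
    c = list(first_col) + [0.0] * (m - 2 * n + 1) + list(reversed(first_row[1:]))
    fx = fft([complex(v) for v in x] + [0j] * (m - n))
    fc = fft([complex(v) for v in c])
    y = ifft([a * b for a, b in zip(fc, fx)])
    return [y[i].real for i in range(n)]
```

    Iterative solvers built on this fast product (or direct Toeplitz factorizations) give the reductions in time and storage exploited by the SBM analysis.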

  12. A trans-phase granular continuum relation and its use in simulation

    NASA Astrophysics Data System (ADS)

    Kamrin, Ken; Dunatunga, Sachith; Askari, Hesam

    The ability to model a large granular system as a continuum would offer tremendous benefits in computation time compared to discrete particle methods. However, two infamous problems arise in the pursuit of this vision: (i) the constitutive relation for granular materials is still unclear and hotly debated, and (ii) a model and corresponding numerical method must wear "many hats" as, in general circumstances, it must be able to capture and accurately represent the material as it crosses through its collisional, dense-flowing, and solid-like states. Here we present a minimal trans-phase model, merging an elastic response beneath a frictional yield criterion, a μ(I) rheology for liquid-like flow above the static yield criterion, and a disconnection rule to model separation of the grains into a low-temperature gas. We simulate our model with a meshless method (in high strain/mixing cases) and with the finite-element method. It is able to match experimental data in many geometries, including collapsing columns, impact on granular beds, draining silos, and granular drag problems.
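
    For reference, the μ(I) rheology invoked for the liquid-like regime has a standard closed form; the sketch below uses the often-quoted glass-bead parameter fits purely as placeholders, not the values of this model:

```python
import math

def mu_of_I(I, mu_s=0.38, mu_2=0.64, I0=0.28):
    """mu(I) granular rheology: effective friction coefficient
    interpolating from the static value mu_s as I -> 0 to the
    limiting value mu_2 at large inertial number I.
    Defaults are the commonly quoted glass-bead fits (placeholders)."""
    return mu_s + (mu_2 - mu_s) / (I0 / I + 1.0)

def inertial_number(gamma_dot, d, pressure, rho_grain):
    """Inertial number I = gamma_dot * d / sqrt(P / rho_grain), built
    from shear rate, grain diameter, pressure and grain density."""
    return gamma_dot * d / math.sqrt(pressure / rho_grain)
```

    The monotone increase of μ with I is what lets a single friction law span quasi-static and rapidly flowing regimes.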

  13. Two-dimensional fracture analysis of piezoelectric material based on the scaled boundary node method

    NASA Astrophysics Data System (ADS)

    Shen-Shen, Chen; Juan, Wang; Qing-Hua, Li

    2016-04-01

    A scaled boundary node method (SBNM) is developed for two-dimensional fracture analysis of piezoelectric material, which allows the stress and electric displacement intensity factors to be calculated directly and accurately. As a boundary-type meshless method, the SBNM employs the moving Kriging (MK) interpolation technique to an approximate unknown field in the circumferential direction and therefore only a set of scattered nodes are required to discretize the boundary. As the shape functions satisfy Kronecker delta property, no special techniques are required to impose the essential boundary conditions. In the radial direction, the SBNM seeks analytical solutions by making use of analytical techniques available to solve ordinary differential equations. Numerical examples are investigated and satisfactory solutions are obtained, which validates the accuracy and simplicity of the proposed approach. Project supported by the National Natural Science Foundation of China (Grant Nos. 11462006 and 21466012), the Foundation of Jiangxi Provincial Educational Committee, China (Grant No. KJLD14041), and the Foundation of East China Jiaotong University, China (Grant No. 09130020).

  14. Rectus abdominus fascial sheath usage for crural reinforcement during surgical management of GERD: preliminary report of a prospective randomized clinical trial.

    PubMed

    Yigit, Taner; Coskun, Ali Kagan; Sinan, Huseyin; Harlak, Ali; Kantarcioglu, Murat; Kilbas, Zafer; Kozak, Orhan; Cetiner, Sadettin

    2012-08-01

    Many materials are currently being used to reinforce the crural repair. Perforation, intensive fibrosis, and price are limiting the usage of these materials. Our purpose was to seek an alternative, cheap, always available, and inert material to use for cruroplasty reinforcement. Twenty-four patients participated and were randomly divided into 2 groups (graft+laparoscopic Nissen fundoplication and laparoscopic Nissen fundoplication alone) with 12 patients in each group. Total operation time, postoperative dysphagia rate, dysphagia improvement time, postoperative pain, recurrence, and incisional hernia rate were compared. There was no difference in terms of study parameters between both groups except for the mean operation time. Autograft hiatoplasty seems to be a good alternative for crural reinforcement. It provides safe reinforcement, has the same dysphagia rates as meshless hiatoplasty, and avoids potential complications of redo surgery by minimizing extensive fibrosis. Furthermore, the rectus abdominus sheath is always available and inexpensive.

  15. The effects of pressure dependent constitutive model to simulate concrete structures failure under impact loads

    NASA Astrophysics Data System (ADS)

    Mokhatar, S. N.; Sonoda, Y.; Kamarudin, A. F.; Noh, M. S. Md; Tokumaru, S.

    2018-04-01

    The main objective of this paper is to explore the effect of confining pressure in the compression and tension zones by simulating the behaviour of reinforced concrete/mortar structures subjected to impact load. The analysis comprises numerical simulation of the influence of high-mass, low-speed drop-weight impacts on concrete structures, where the analyses are incorporated with a meshless method, namely the Smoothed Particle Hydrodynamics (SPH) method. The derivation of the plastic stiffness matrix of the Drucker-Prager (DP) criterion, extended from the von Mises (VM) yield criterion to simulate concrete behaviour, is presented in this paper; the displacements of the concrete/mortar structures are assumed to be infinitesimal. Furthermore, the influence of the different material models (DP and VM) used numerically for the concrete and mortar structures is also discussed. Validation against existing experimental test results is carried out to investigate the effect of confining pressure; it is found that the pressure-independent VM criterion produces unrealistic impact failure (flexural cracking) of the concrete structures.

  16. Meshless methods in shape optimization of linear elastic and thermoelastic solids

    NASA Astrophysics Data System (ADS)

    Bobaru, Florin

    This dissertation proposes a meshless approach to problems in shape optimization of elastic and thermoelastic solids. The Element-free Galerkin (EFG) method is used for this purpose. The ability of the EFG to avoid remeshing, that is normally done in a Finite Element approach to correct highly distorted meshes, is clearly demonstrated by several examples. The shape optimization example of a thermal cooling fin shows a dramatic improvement in the objective compared to a previous FEM analysis. More importantly, the new solution, displaying large shape changes contrasted to the initial design, was completely missed by the FEM analysis. The EFG formulation given here for shape optimization "uncovers" new solutions that are, apparently, unobtainable via a FEM approach. This is one of the main achievements of our work. The variational formulations for the analysis problem and for the sensitivity problems are obtained with a penalty method for imposing the displacement boundary conditions. The continuum formulation is general and this facilitates 2D and 3D with minor differences from one another. Also, transient thermoelastic problems can use the present development at each time step to solve shape optimization problems for time-dependent thermal problems. For the elasticity framework, displacement sensitivity is obtained in the EFG context. Excellent agreements with analytical solutions for some test problems are obtained. The shape optimization of a fillet is carried out in great detail, and results show significant improvement of the EFG solution over the FEM or the Boundary Element Method solutions. In our approach we avoid differentiating the complicated EFG shape functions, with respect to the shape design parameters, by using a particular discretization for sensitivity calculations. Displacement and temperature sensitivities are formulated for the shape optimization of a linear thermoelastic solid. 
Two important examples considered in this work, the optimization of a thermal fin and of a uniformly loaded thermoelastic beam, reveal new characteristics of the EFG method in shape optimization applications. Among the advantages of the EFG method over traditional FEM treatments of shape optimization problems, the most important are shown to be: elimination of post-processing for stress and strain recovery, which directly gives more accurate results at critical locations (near the boundaries, for example); and flexibility of nodal movement, which permits new, better shapes (previously missed by FEM analyses) to be discovered. Several new research directions that merit further consideration are identified.
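    The shape functions at the heart of EFG-type methods come from a moving least squares (MLS) fit over nearby nodes. As an illustration only (a minimal 1D sketch, not the dissertation's formulation; the node set and support size are made up), with a linear basis and a quartic spline weight:

```python
def mls_approx(x, nodes, values, support=0.35):
    """1D moving least squares with the linear basis p(x) = (1, x) and a
    quartic spline weight of compact support, as in EFG-type methods."""
    def weight(r):
        return (1 - 6*r**2 + 8*r**3 - 3*r**4) if r < 1.0 else 0.0

    # assemble the 2x2 moment matrix A and right-hand side b
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for xi, ui in zip(nodes, values):
        w = weight(abs(x - xi) / support)
        if w == 0.0:
            continue
        p = (1.0, xi)
        for i in range(2):
            for j in range(2):
                A[i][j] += w * p[i] * p[j]
            b[i] += w * p[i] * ui

    # solve the 2x2 system A c = b by Cramer's rule
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    c0 = (b[0]*A[1][1] - b[1]*A[0][1]) / det
    c1 = (A[0][0]*b[1] - A[1][0]*b[0]) / det
    return c0 + c1*x   # evaluate the local fit at the point x

nodes = [i / 10.0 for i in range(11)]       # scattered nodes on [0, 1]
values = [2.0*xi + 1.0 for xi in nodes]     # sample a linear field u = 2x + 1
print(mls_approx(0.37, nodes, values))      # linear consistency: ~1.74
```

    With a linear basis the fit reproduces linear fields exactly, which is the consistency property EFG relies on.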

  17. Fluid Mechanics and Complex Variable Theory: Getting Past the 19th Century

    ERIC Educational Resources Information Center

    Newton, Paul K.

    2017-01-01

    The subject of fluid mechanics is a rich, vibrant, and rapidly developing branch of applied mathematics. Historically, it has developed hand-in-hand with the elegant subject of complex variable theory. The Westmont College NSF-sponsored workshop on the revitalization of complex variable theory in the undergraduate curriculum focused partly on…

  18. Calculation of steady and unsteady transonic flow using a Cartesian mesh and gridless boundary conditions with application to aeroelasticity

    NASA Astrophysics Data System (ADS)

    Kirshman, David

    A numerical method for the solution of inviscid compressible flow using an array of embedded Cartesian meshes in conjunction with gridless surface boundary conditions is developed. The gridless boundary treatment is implemented by means of a least squares fitting of the conserved flux variables using a cloud of nodes in the vicinity of the surface geometry. The method allows for accurate treatment of the surface boundary conditions using a grid resolution an order of magnitude coarser than required of typical Cartesian approaches. Additionally, the method does not suffer from issues associated with thin body geometry or extremely fine cut cells near the body. Unlike methods that employ a gridless (or "meshless") treatment throughout the entire domain, multi-grid acceleration can be incorporated effectively and issues associated with global conservation are alleviated. The gridless surface boundary condition provides for efficient and simple problem setup, since the definition of the body geometry is generated independently of the field mesh and automatically incorporated into the field discretization of the domain. The applicability of the method is first demonstrated for steady flow over single- and multi-element airfoil configurations. Comparisons with traditional body-fitted grid simulations reveal that steady flow solutions can be obtained accurately with minimal grid generation effort. The method is then extended to unsteady flow predictions. In this application, flow field simulations for the prescribed oscillation of an airfoil show excellent agreement with experimental data. Furthermore, it is shown that the phase lag associated with shock oscillation is accurately predicted without the need for a deformable mesh. Lastly, the method is applied to the prediction of transonic flutter using a two-dimensional wing model, in which comparisons with moving-mesh simulations yield nearly identical results.
As a result, the applicability of the method to transient and vibrating fluid-structure interaction problems is established, eliminating the requirement for a deformable mesh.
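    The least-squares cloud fit described above can be sketched in miniature. The snippet below (illustrative only; the node cloud and field are hypothetical) fits a linear polynomial to scattered nodes and evaluates it at a surface point; the thesis applies the same idea to the conserved flux variables rather than a scalar field:

```python
def cloud_fit(point, cloud):
    """Least-squares linear reconstruction u ~ c0 + c1*x + c2*y from a
    cloud of (x, y, u) nodes near a surface point, via the 3x3 normal
    equations solved by Gaussian elimination."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y, u in cloud:
        p = (1.0, x, y)
        for i in range(3):
            for j in range(3):
                A[i][j] += p[i] * p[j]
            b[i] += p[i] * u
    for k in range(3):                 # elimination with partial pivoting
        piv = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        b[k], b[piv] = b[piv], b[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for c in range(k, 3):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    coef = [0.0] * 3
    for k in (2, 1, 0):                # back substitution
        coef[k] = (b[k] - sum(A[k][j] * coef[j] for j in range(k + 1, 3))) / A[k][k]
    x0, y0 = point
    return coef[0] + coef[1] * x0 + coef[2] * y0

# nodes sampling the linear field u = 3x - 2y + 1 near a surface point
cloud = [(i / 4, j / 4, 3 * i / 4 - 2 * j / 4 + 1)
         for i in range(4) for j in range(4)]
print(cloud_fit((0.3, 0.5), cloud))   # exact for linear data: ~0.9
```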

  19. Some elements of a theory of multidimensional complex variables. I - General theory. II - Expansions of analytic functions and application to fluid flows

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1989-01-01

    The paper introduces a new theory of N-dimensional complex variables and analytic functions which, for N greater than 2, is both a direct generalization and a close analog of the theory of ordinary complex variables. The algebra in the present theory is a commutative ring, not a field. Functions of a three-dimensional variable were defined and the definition of the derivative then led to analytic functions.

  20. Variable Complexity Optimization of Composite Structures

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    2002-01-01

    The use of several levels of modeling in design has been dubbed variable complexity modeling. The work under the grant focused on developing variable complexity modeling strategies with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and design against uncertainty using response surface techniques.

  1. Dannie Heineman Prize for Mathematical Physics: Applying mathematical techniques to solve important problems in quantum theory

    NASA Astrophysics Data System (ADS)

    Bender, Carl

    2017-01-01

    The theory of complex variables is extremely useful because it helps to explain the mathematical behavior of functions of a real variable. Complex variable theory also provides insight into the nature of physical theories. For example, it provides a simple and beautiful picture of quantization and it explains the underlying reason for the divergence of perturbation theory. By using complex-variable methods one can generalize conventional Hermitian quantum theories into the complex domain. The result is a new class of parity-time-symmetric (PT-symmetric) theories whose remarkable physical properties have been studied and verified in many recent laboratory experiments.

  2. COED Transactions, Vol. IX, No. 3, March 1977. Evaluation of a Complex Variable Using Analog/Hybrid Computation Techniques.

    ERIC Educational Resources Information Center

    Marcovitz, Alan B., Ed.

    Described is the use of an analog/hybrid computer installation to study those physical phenomena that can be described through the evaluation of an algebraic function of a complex variable. This is an alternative way to study such phenomena on an interactive graphics terminal. The typical problem used, involving complex variables, is that of…

  3. Anisotropic diffusion in mesh-free numerical magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2017-04-01

    We extend recently developed mesh-free Lagrangian methods for numerical magnetohydrodynamics (MHD) to arbitrary anisotropic diffusion equations, including: passive scalar diffusion, Spitzer-Braginskii conduction and viscosity, cosmic ray diffusion/streaming, anisotropic radiation transport, non-ideal MHD (Ohmic resistivity, ambipolar diffusion, the Hall effect) and turbulent 'eddy diffusion'. We study these as implemented in the code GIZMO for both new meshless finite-volume Godunov schemes (MFM/MFV). We show that the MFM/MFV methods are accurate and stable even with noisy fields and irregular particle arrangements, and recover the correct behaviour even in arbitrarily anisotropic cases. They are competitive with state-of-the-art AMR/moving-mesh methods, and can correctly treat anisotropic diffusion-driven instabilities (e.g. the MTI and HBI, Hall MRI). We also develop a new scheme for stabilizing anisotropic tensor-valued fluxes with high-order gradient estimators and non-linear flux limiters, which is trivially generalized to AMR/moving-mesh codes. We also present applications of some of these improvements for SPH, in the form of a new integral-Godunov SPH formulation that adopts a moving-least squares gradient estimator and introduces a flux-limited Riemann problem between particles.
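    As a rough mesh-based analogue (a toy sketch, not GIZMO's meshless MFM/MFV operators or flux limiters), anisotropic diffusion dc/dt = div(K grad c) can be written as a conservative flux-form update on a periodic grid; because each cell update is a difference of face fluxes, total "mass" is conserved to round-off for any tensor K:

```python
def step(c, K, dt):
    """One conservative explicit update of dc/dt = div(K grad c) on a
    periodic unit-spaced grid; cross-derivative terms are averaged onto
    the faces, and cell updates are differences of face fluxes."""
    n = len(c)
    Kxx, Kxy, Kyy = K
    Fx = [[0.0] * n for _ in range(n)]   # flux through x-faces (i+1/2, j)
    Fy = [[0.0] * n for _ in range(n)]   # flux through y-faces (i, j+1/2)
    for i in range(n):
        ip = (i + 1) % n
        for j in range(n):
            jp, jm = (j + 1) % n, (j - 1) % n
            Fx[i][j] = Kxx * (c[ip][j] - c[i][j]) + Kxy * 0.25 * (
                (c[i][jp] - c[i][jm]) + (c[ip][jp] - c[ip][jm]))
    for i in range(n):
        ip, im = (i + 1) % n, (i - 1) % n
        for j in range(n):
            jp = (j + 1) % n
            Fy[i][j] = Kyy * (c[i][jp] - c[i][j]) + Kxy * 0.25 * (
                (c[ip][j] - c[im][j]) + (c[ip][jp] - c[im][jp]))
    return [[c[i][j] + dt * (Fx[i][j] - Fx[(i - 1) % n][j] +
                             Fy[i][j] - Fy[i][(j - 1) % n])
             for j in range(n)] for i in range(n)]

n = 16
c = [[0.0] * n for _ in range(n)]
c[8][8] = 1.0                      # initial spike
K = (1.0, 0.3, 0.5)                # SPD anisotropic tensor (Kxx, Kxy, Kyy)
for _ in range(50):
    c = step(c, K, dt=0.1)
print(sum(map(sum, c)))            # total 'mass' stays ~1.0
```

    The meshless MFM/MFV discretizations replace the face differences with kernel-weighted "effective faces" between particles, but the same telescoping-flux argument is what gives them their conservation properties.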

  4. A novel partitioning method for block-structured adaptive meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Lin, E-mail: lin.fu@tum.de; Litvinov, Sergej, E-mail: sergej.litvinov@aer.mw.tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de

    We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.

  5. A novel partitioning method for block-structured adaptive meshes

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Litvinov, Sergej; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-07-01

    We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.

  6. A fast object-oriented Matlab implementation of the Reproducing Kernel Particle Method

    NASA Astrophysics Data System (ADS)

    Barbieri, Ettore; Meo, Michele

    2012-05-01

    Novel numerical methods, known as Meshless Methods or Meshfree Methods and, in a wider perspective, Partition of Unity Methods, promise to overcome most of the disadvantages of traditional finite element techniques. The absence of a mesh makes meshfree methods very attractive for problems involving large deformations, moving boundaries and crack propagation. However, meshfree methods still have a significant limitation that hinders their acceptance among researchers and engineers, namely their computational cost. This paper presents an in-depth analysis of computational techniques to speed up the computation of the shape functions in the Reproducing Kernel Particle Method and Moving Least Squares, with particular focus on their bottlenecks: the neighbour search, the inversion of the moment matrix and the assembly of the stiffness matrix. The paper presents numerous computational solutions aimed at considerably reducing computation times: the use of kd-trees for the neighbour search, sparse indexing of the node-point connectivity and, most importantly, the explicit and vectorized inversion of the moment matrix without loops or numerical routines.
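    The neighbour-search bottleneck named above is usually attacked with a spatial index. A stdlib-only sketch of one such index, a uniform cell list (a simpler cousin of the kd-tree the paper uses; the point cloud and radius are made up), with a brute-force cross-check:

```python
import random
from collections import defaultdict

def neighbors_cell_list(points, radius):
    """Fixed-radius neighbour search with a uniform cell list: each point is
    hashed into a cell of side `radius`, so candidates come only from the
    3x3 block of surrounding cells instead of all N points."""
    cells = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        cells[(int(x // radius), int(y // radius))].append(idx)
    result = {}
    r2 = radius * radius
    for idx, (x, y) in enumerate(points):
        cx, cy = int(x // radius), int(y // radius)
        near = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), ()):
                    px, py = points[j]
                    if j != idx and (px - x) ** 2 + (py - y) ** 2 <= r2:
                        near.append(j)
        result[idx] = sorted(near)
    return result

rng = random.Random(0)                 # deterministic pseudo-random cloud
points = [(rng.random(), rng.random()) for _ in range(200)]
fast = neighbors_cell_list(points, 0.1)
# brute-force cross-check for one point
i = 17
brute = sorted(j for j, (px, py) in enumerate(points)
               if j != i and (px - points[i][0]) ** 2
                           + (py - points[i][1]) ** 2 <= 0.01)
print(fast[i] == brute)   # True
```

    Any point within the radius lies in an adjacent cell, so the 3x3 scan is exhaustive while visiting only a handful of candidates per query.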

  7. Burton-Miller-type singular boundary method for acoustic radiation and scattering

    NASA Astrophysics Data System (ADS)

    Fu, Zhuo-Jia; Chen, Wen; Gu, Yan

    2014-08-01

    This paper proposes the singular boundary method (SBM) in conjunction with Burton and Miller's formulation for acoustic radiation and scattering. The SBM is a strong-form collocation boundary discretization technique using the singular fundamental solutions; it is mathematically simple, easy to program and meshless, and it introduces the concept of source intensity factors (SIFs) to eliminate the singularities of the fundamental solutions. It therefore avoids the singular numerical integrals of the boundary element method (BEM) and circumvents the troublesome placement of the fictitious boundary in the method of fundamental solutions (MFS). In the present method, we derive the SIFs of the exterior Helmholtz equation from the SIFs of the exterior Laplace equation, owing to the same order of singularity of the Laplace and Helmholtz fundamental solutions. In conjunction with the Burton-Miller formulation, the SBM enhances the quality of the solution, particularly in the vicinity of the corresponding interior eigenfrequencies. Numerical illustrations demonstrate the efficiency and accuracy of the present scheme on benchmark examples in 2D and 3D unbounded domains, in comparison with analytical solutions, boundary element solutions and Dirichlet-to-Neumann finite element solutions.
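    For contrast with the SBM, the fictitious-boundary placement of the MFS that the abstract mentions can be sketched on a toy problem (the 2D Laplace equation on the unit disk, not the paper's Helmholtz formulation; source radius and counts are arbitrary choices):

```python
import math

def mfs_laplace(boundary_data, n=16, R=2.0):
    """Method of fundamental solutions for the 2D Laplace equation on the
    unit disk: sources sit on a fictitious circle of radius R (the
    placement the SBM is designed to avoid), and their intensities are
    chosen so the Dirichlet data is matched at n collocation points."""
    th = [2 * math.pi * k / n for k in range(n)]
    col = [(math.cos(t), math.sin(t)) for t in th]          # collocation points
    src = [(R * math.cos(t), R * math.sin(t)) for t in th]  # source points

    def G(p, q):   # fundamental solution of the 2D Laplacian
        return -math.log(math.hypot(p[0] - q[0], p[1] - q[1])) / (2 * math.pi)

    A = [[G(c, s) for s in src] for c in col]
    b = [boundary_data(*c) for c in col]
    # Gaussian elimination with partial pivoting
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        b[k], b[piv] = b[piv], b[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    alpha = [0.0] * n
    for k in range(n - 1, -1, -1):
        alpha[k] = (b[k] - sum(A[k][j] * alpha[j]
                               for j in range(k + 1, n))) / A[k][k]
    return lambda x, y: sum(a * G((x, y), s) for a, s in zip(alpha, src))

u = mfs_laplace(lambda x, y: x)   # boundary data from the harmonic u = x
print(u(0.3, 0.2))                # ~0.3, the exact interior value
```

    Because the sources lie off the boundary, no integral or singularity is ever evaluated; the SBM instead keeps the sources on the boundary itself and removes the resulting singularities with source intensity factors.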

  8. Syntactic Complexity, Lexical Variation and Accuracy as a Function of Task Complexity and Proficiency Level in L2 Writing and Speaking

    ERIC Educational Resources Information Center

    Kuiken, Folkert; Vedder, Ineke

    2012-01-01

    The research project reported in this chapter consists of three studies in which syntactic complexity, lexical variation and fluency appear as dependent variables. The independent variables are task complexity and proficiency level, as the three studies investigate the effect of task complexity on the written and oral performance of L2 learners of…

  9. On Chaotic and Hyperchaotic Complex Nonlinear Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Mahmoud, Gamal M.

    Dynamical systems described by real and complex variables are currently one of the most popular areas of scientific research. These systems play an important role in several fields of physics, engineering and computer science, for example in laser systems, control (or chaos suppression), secure communications and information science. Basic dynamical properties, chaos (hyperchaos) synchronization, chaos control and the generation of hyperchaotic behavior of these systems are briefly summarized. The main advantage of introducing complex variables is the reduction of phase space dimensions by half. They are also used to describe and simulate the physics of detuned lasers and thermal convection of liquid flows, where the electric field and the atomic polarization amplitudes are both complex. Clearly, if the variables of the system are complex, the equations involve twice as many variables and control parameters, making it that much harder for a hostile agent to intercept and decipher the coded message. Chaotic and hyperchaotic complex systems are presented as examples. Finally, there are many open problems in the study of chaotic and hyperchaotic complex nonlinear dynamical systems that need further investigation; some of these open problems are given.

  10. Complexity Variability Assessment of Nonlinear Time-Varying Cardiovascular Control

    NASA Astrophysics Data System (ADS)

    Valenza, Gaetano; Citi, Luca; Garcia, Ronald G.; Taylor, Jessica Noggle; Toschi, Nicola; Barbieri, Riccardo

    2017-02-01

    The application of complex systems theory to physiology and medicine has provided meaningful information about the nonlinear aspects underlying the dynamics of a wide range of biological processes and their disease-related aberrations. However, no studies have investigated whether meaningful information can be extracted by quantifying second-order moments of time-varying cardiovascular complexity. To this extent, we introduce a novel mathematical framework termed complexity variability, in which the variance of instantaneous Lyapunov spectra estimated over time serves as a reference quantifier. We apply the proposed methodology to four exemplary studies involving disorders which stem from cardiology, neurology and psychiatry: Congestive Heart Failure (CHF), Major Depression Disorder (MDD), Parkinson’s Disease (PD), and Post-Traumatic Stress Disorder (PTSD) patients with insomnia under a yoga training regime. We show that complexity assessments derived from simple time-averaging are not able to discern pathology-related changes in autonomic control, and we demonstrate that between-group differences in measures of complexity variability are consistent across pathologies. Pathological states such as CHF, MDD, and PD are associated with an increased complexity variability when compared to healthy controls, whereas wellbeing derived from yoga in PTSD is associated with lower time-variance of complexity.
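    The idea of taking second-order moments of a time-varying complexity measure can be sketched on a toy signal (illustrative only: the paper estimates instantaneous Lyapunov spectra of cardiovascular series, not of a logistic map, and its quantifier is the variance of those spectra over time):

```python
import math

def logistic_series(r, x0, n):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def windowed_lyapunov(xs, r, win):
    """'Instantaneous' largest Lyapunov exponent of the logistic map,
    estimated per window as the mean of log|f'(x)| = log|r(1 - 2x)|."""
    lams = []
    for s in range(0, len(xs) - win + 1, win):
        w = xs[s:s + win]
        lams.append(sum(math.log(abs(r * (1 - 2 * x))) for x in w) / win)
    return lams

def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

xs = logistic_series(4.0, 0.2, 4000)
lams = windowed_lyapunov(xs, 4.0, 100)        # complexity as a time series
print(sum(lams) / len(lams), variance(lams))  # mean ~ln 2; the variance is
                                              # the 'complexity variability'
```

    Two signals can share the same time-averaged complexity yet differ sharply in this variance, which is the distinction the study exploits.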

  11. Aging and the complexity of cardiovascular dynamics

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Furman, M. I.; Pincus, S. M.; Ryan, S. M.; Lipsitz, L. A.; Goldberger, A. L.

    1991-01-01

    Biomedical signals often vary in a complex and irregular manner. Analysis of variability in such signals generally does not address directly their complexity, and so may miss potentially useful information. We analyze the complexity of heart rate and beat-to-beat blood pressure using two methods motivated by nonlinear dynamics (chaos theory). A comparison of a group of healthy elderly subjects with healthy young adults indicates that the complexity of cardiovascular dynamics is reduced with aging. This suggests that complexity of variability may be a useful physiological marker.
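    One widely used complexity measure from this line of work is approximate entropy, introduced by Pincus (one of the authors above). A compact sketch on synthetic signals (not the study's cardiovascular data; m, r and the test signals are illustrative choices):

```python
import math

def apen(series, m=2, r=0.2):
    """Approximate entropy (Pincus): how often runs of length m that match
    within tolerance r still match at length m+1. Lower = more regular."""
    def phi(k):
        n = len(series) - k + 1
        templates = [series[i:i + k] for i in range(n)]
        total = 0.0
        for t in templates:
            matches = sum(1 for s in templates
                          if max(abs(a - b) for a, b in zip(t, s)) <= r)
            total += math.log(matches / n)
        return total / n
    return phi(m) - phi(m + 1)

# a strictly periodic signal vs. an irregular (chaotic logistic map) one
periodic = [(-1) ** i * 0.5 for i in range(200)]
x, chaotic = 0.3, []
for _ in range(200):
    x = 3.99 * x * (1 - x)
    chaotic.append(x - 0.5)
print(apen(periodic) < apen(chaotic))   # regular < irregular: True
```

    A drop in such a complexity index with aging, despite similar variance, is exactly the kind of information that plain variability statistics miss.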

  12. Variability in Rheumatology day care hospitals in Spain: VALORA study.

    PubMed

    Hernández Miguel, María Victoria; Martín Martínez, María Auxiliadora; Corominas, Héctor; Sanchez-Piedra, Carlos; Sanmartí, Raimon; Fernandez Martinez, Carmen; García-Vicuña, Rosario

    To describe the variability of the day care hospital units (DCHUs) of Rheumatology in Spain, in terms of structural resources and operating processes. Multicenter descriptive study with data from a self-completed DCHU self-assessment questionnaire based on the DCHU quality standards of the Spanish Society of Rheumatology. Structural resources and operating processes were analyzed and stratified by hospital complexity (regional, general, major and complex). Variability was determined using the coefficient of variation (CV) of the clinically relevant variable that presented statistically significant differences when compared across centers. A total of 89 hospitals (16 autonomous regions and Melilla) were included in the analysis. Of these, 11.2% were regional hospitals, 22.5% general, 27% major and 39.3% complex. A total of 92% of DCHUs were polyvalent. The number of treatments applied, the coordination between DCHUs and hospital pharmacy, and the postgraduate training process were the variables that showed statistically significant differences depending on hospital complexity. The highest rate of rheumatologic treatments was found in complex hospitals (2.97 per 1,000 population), and the lowest in general hospitals (2.01 per 1,000 population). The CV was 0.88 in major hospitals, 0.86 in regional, 0.76 in general, and 0.72 in complex hospitals. There was variability in the number of treatments delivered in DCHUs, greater in major hospitals followed by regional centers. Nonetheless, the variability in terms of structure and function does not seem due to differences in center complexity. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.

  13. Modeling Psychological Attributes in Psychology – An Epistemological Discussion: Network Analysis vs. Latent Variables

    PubMed Central

    Guyon, Hervé; Falissard, Bruno; Kop, Jean-Luc

    2017-01-01

    Network Analysis is considered as a new method that challenges Latent Variable models in inferring psychological attributes. With Network Analysis, psychological attributes are derived from a complex system of components without the need to call on any latent variables. However, the ontological status of psychological attributes is not adequately defined with Network Analysis, because a psychological attribute is both a complex system and a property emerging from this complex system. The aim of this article is to reappraise the legitimacy of latent variable models by engaging in an ontological and epistemological discussion on psychological attributes. Psychological attributes relate to the mental equilibrium of individuals embedded in their social interactions, as robust attractors within complex dynamic processes with emergent properties, distinct from physical entities located in precise areas of the brain. Latent variables thus possess legitimacy, because the emergent properties can be conceptualized and analyzed on the sole basis of their manifestations, without exploring the upstream complex system. However, in opposition to the usual Latent Variable models, this article favors the integration of a dynamic system of manifestations. Latent Variable models and Network Analysis thus appear as complementary approaches. New approaches combining Latent Network Models and Network Residuals are certainly a promising new way to infer psychological attributes, placing psychological attributes in an inter-subjective dynamic approach. Pragmatism-realism appears as the epistemological framework required if we are to use latent variables as representations of psychological attributes. PMID:28572780

  14. Complex Variables throughout the Curriculum

    ERIC Educational Resources Information Center

    D'Angelo, John P.

    2017-01-01

    We offer many specific detailed examples, several of which are new, that instructors can use (in lecture or as student projects) to revitalize the role of complex variables throughout the curriculum. We conclude with three primary recommendations: revise the syllabus of Calculus II to allow early introductions of complex numbers and linear…

  15. Independent variable complexity for regional regression of the flow duration curve in ungauged basins

    NASA Astrophysics Data System (ADS)

    Fouad, Geoffrey; Skupin, André; Hope, Allen

    2016-04-01

    The flow duration curve (FDC) is one of the most widely used tools to quantify streamflow. Its percentile flows are often required for water resource applications, but these values must be predicted for ungauged basins with insufficient or no streamflow data. Regional regression is a commonly used approach for predicting percentile flows that involves identifying hydrologic regions and calibrating regression models to each region. The independent variables used to describe the physiographic and climatic setting of the basins are a critical component of regional regression, yet few studies have investigated their effect on resulting predictions. In this study, the complexity of the independent variables needed for regional regression is investigated. Different levels of variable complexity are applied for a regional regression consisting of 918 basins in the US. Both the hydrologic regions and regression models are determined according to the different sets of variables, and the accuracy of resulting predictions is assessed. The different sets of variables include (1) a simple set of three variables strongly tied to the FDC (mean annual precipitation, potential evapotranspiration, and baseflow index), (2) a traditional set of variables describing the average physiographic and climatic conditions of the basins, and (3) a more complex set of variables extending the traditional variables to include statistics describing the distribution of physiographic data and temporal components of climatic data. The latter set of variables is not typically used in regional regression, and is evaluated for its potential to predict percentile flows. The simplest set of only three variables performed similarly to the other more complex sets of variables. 
Traditional variables used to describe climate, topography, and soil offered little more to the predictions, and the experimental set of variables describing the distribution of basin data in more detail did not improve predictions. These results largely reflect the cross-correlation present in hydrologic datasets, and highlight the limited predictive power of many traditionally used variables for regional regression. A parsimonious approach including fewer variables chosen for their connection to streamflow may be more efficient than a data-mining approach including many different variables. Future regional regression studies may benefit from having a hydrologic rationale for including different variables and from attempting to create new variables related to streamflow.
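    The parsimony argument can be made concrete with a toy ordinary least squares experiment on synthetic "basins" (all numbers and coefficients below are hypothetical, not the study's data): adding a predictor that is highly cross-correlated with one already in the model barely changes the residual sum of squares:

```python
import random

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination and back substitution."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for p in range(k):
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for c in range(p, k):
                A[r][c] -= f * A[p][c]
            b[r] -= f * b[p]
    beta = [0.0] * k
    for p in range(k - 1, -1, -1):
        beta[p] = (b[p] - sum(A[p][j] * beta[j] for j in range(p + 1, k))) / A[p][p]
    return beta

def rss(X, y, beta):
    return sum((yi - sum(bi * xi for bi, xi in zip(beta, row))) ** 2
               for row, yi in zip(X, y))

rng = random.Random(1)
# hypothetical basins: a percentile-flow proxy driven by precipitation and PET
P  = [rng.uniform(300, 2000) for _ in range(200)]
ET = [rng.uniform(400, 1200) for _ in range(200)]
y  = [0.004 * p - 0.002 * e + rng.gauss(0, 0.05) for p, e in zip(P, ET)]
simple = [[1.0, p, e] for p, e in zip(P, ET)]
# an extra predictor highly cross-correlated with precipitation
extended = [[1.0, p, e, 0.9 * p + rng.gauss(0, 30)] for p, e in zip(P, ET)]
r2 = rss(simple, y, ols(simple, y))
r3 = rss(extended, y, ols(extended, y))
print((r2 - r3) / r2)   # tiny: the correlated extra variable adds little
```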

  16. Variability in Second Language Learning: The Roles of Individual Differences, Learning Conditions, and Linguistic Complexity

    ERIC Educational Resources Information Center

    Tagarelli, Kaitlyn M.; Ruiz, Simón; Vega, José Luis Moreno; Rebuschat, Patrick

    2016-01-01

    Second language learning outcomes are highly variable, due to a variety of factors, including individual differences, exposure conditions, and linguistic complexity. However, exactly how these factors interact to influence language learning is unknown. This article examines the relationship between these three variables in language learners.…

  17. Diminished heart rate complexity in adolescent girls: a sign of vulnerability to anxiety disorders?

    PubMed

    Fiol-Veny, Aina; De la Torre-Luque, Alejandro; Balle, Maria; Bornas, Xavier

    2018-07-01

    Diminished heart rate variability has been found to be associated with high anxiety symptomatology. Since adolescence is the period of onset for many anxiety disorders, this study aimed to determine sex- and anxiety-related differences in heart rate variability and complexity in adolescents. We created four groups according to sex and anxiety symptomatology: high-anxiety girls (n = 24) and boys (n = 25), and low-anxiety girls (n = 22) and boys (n = 24) and recorded their cardiac function while they performed regular school activities. A series of two-way (sex and anxiety) MANOVAs were performed on time domain variability, frequency domain variability, and non-linear complexity. We obtained no multivariate interaction effects between sex and anxiety, but highly anxious participants had lower heart rate variability than the low-anxiety group. Regarding sex, girls showed lower heart rate variability and complexity than boys. The results suggest that adolescent girls have a less flexible cardiac system that could be a marker of the girls' vulnerability to developing anxiety disorders.

  18. The QSAR study of flavonoid-metal complexes scavenging ·OH free radical

    NASA Astrophysics Data System (ADS)

    Wang, Bo-chu; Qian, Jun-zhen; Fan, Ying; Tan, Jun

    2014-10-01

    Flavonoid-metal complexes have antioxidant activities. However, quantitative structure-activity relationships (QSAR) between flavonoid-metal complexes and their antioxidant activities have still not been established. On the basis of 21 structures of flavonoid-metal complexes and their antioxidant activities for scavenging the ·OH free radical, we optimised their structures using the Gaussian 03 software package and subsequently calculated and chose 18 quantum chemistry descriptors such as dipole, charge and energy. We then selected, through stepwise linear regression, the quantum chemistry descriptors most important to the IC50 of flavonoid-metal complexes for scavenging the ·OH free radical, and obtained 4 new variables through principal component analysis. Finally, we built the QSAR models with those important quantum chemistry descriptors and the 4 new variables as the independent variables and the IC50 as the dependent variable, using an Artificial Neural Network (ANN), and validated the two models against experimental data. These results show that the two models in this paper are reliable and predictive.

  19. Environmental variability and indicators: a few observations

    Treesearch

    William F. Laudenslayer

    1991-01-01

    The environment of the earth is exceedingly complex and variable. Indicator species are used to reduce that complexity and variability to a level that can be more easily understood. In recent years, use of indicators has increased dramatically. For the Forest Service, as an example, regulations that interpret the National Forest Management Act require the use...

  20. Environmental variability and acoustic signals: a multi-level approach in songbirds.

    PubMed

    Medina, Iliana; Francis, Clinton D

    2012-12-23

    Among songbirds, growing evidence suggests that acoustic adaptation of song traits occurs in response to habitat features. Despite extensive study, most research supporting acoustic adaptation has only considered acoustic traits averaged for species or populations, overlooking intraindividual variation of song traits, which may facilitate effective communication in heterogeneous and variable environments. Fewer studies have explicitly incorporated sexual selection, which, if strong, may favour variation across environments. Here, we evaluate the prevalence of acoustic adaptation among 44 species of songbirds by determining how environmental variability and sexual selection intensity are associated with song variability (intraindividual and intraspecific) and short-term song complexity. We show that variability in precipitation can explain short-term song complexity among taxonomically diverse songbirds, and that precipitation seasonality and the intensity of sexual selection are related to intraindividual song variation. Our results link song complexity to environmental variability, something previously found for mockingbirds (Family Mimidae). Perhaps more importantly, our results illustrate that individual variation in song traits may be shaped by both environmental variability and strength of sexual selection.

  1. Adaptive Synchronization of Fractional Order Complex-Variable Dynamical Networks via Pinning Control

    NASA Astrophysics Data System (ADS)

    Ding, Da-Wei; Yan, Jie; Wang, Nian; Liang, Dong

    2017-09-01

    In this paper, the synchronization of fractional order complex-variable dynamical networks is studied using an adaptive pinning control strategy based on close center degree. Some effective criteria for global synchronization of fractional order complex-variable dynamical networks are derived from Lyapunov stability theory. From the theoretical analysis, one concludes that under appropriate conditions the complex-variable dynamical networks can achieve global synchronization by the proper adaptive pinning control method. Meanwhile, we resolve the question of how much coupling strength should be applied to ensure the synchronization of the fractional order complex networks. Compared with existing results, the synchronization method in this paper is therefore more general and convenient. This result extends the synchronization condition of real-variable dynamical networks to the complex-valued field, which makes our research more practical. Finally, two simulation examples show that the derived theoretical results are valid and the proposed adaptive pinning method is effective. Supported by National Natural Science Foundation of China under Grant No. 61201227, National Natural Science Foundation of China Guangdong Joint Fund under Grant No. U1201255, the Natural Science Foundation of Anhui Province under Grant No. 1208085MF93, the 211 Innovation Team of Anhui University under Grant Nos. KJTD007A and KJTD001B, and the Chinese Scholarship Council.
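    An integer-order, three-node toy version of pinning synchronization (illustrative only; the paper treats fractional-order networks with adaptive gains and a close-center-degree pinning rule) can be sketched as follows: pinning a single node to the target trajectory synchronizes the whole connected network:

```python
def pinning_sync(steps=8000, dt=0.005, c=1.0, k=5.0):
    """Euler simulation of a 3-node chain of complex-variable nodes
    dz_i/dt = f(z_i) - c*sum_j L_ij z_j + pinning, with only node 0
    pinned to the target trajectory ds/dt = f(s), f(w) = i*w."""
    L = [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]   # graph Laplacian of a chain
    z = [1.5 + 0.5j, -0.7 + 1.2j, 0.3 - 0.9j]   # initial node states
    s = 1.0 + 0.0j                              # target state
    f = lambda w: 1j * w                        # rotational self-dynamics
    err0 = sum(abs(zi - s) for zi in z)
    for _ in range(steps):
        coup = [sum(-c * L[i][j] * z[j] for j in range(3)) for i in range(3)]
        pin = [-k * (z[0] - s), 0.0, 0.0]       # pin only node 0
        z = [zi + dt * (f(zi) + coup[i] + pin[i]) for i, zi in enumerate(z)]
        s = s + dt * f(s)
    return err0, sum(abs(zi - s) for zi in z)

e0, e1 = pinning_sync()
print(e1 < 1e-3 * e0)   # the whole network locks onto the pinned target: True
```

    The synchronization error obeys de/dt = (iI - cL - K)e; because the Laplacian plus the pinning gain matrix is positive definite for a connected graph with at least one pinned node, the error decays for sufficiently large coupling strength c, which is the question the paper quantifies.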

  2. Meshless bubble filter using ultrasound for extracorporeal circulation and its effect on blood.

    PubMed

    Mino, Koji; Imura, Masato; Koyama, Daisuke; Omori, Masayoshi; Kawarabata, Shigeki; Sato, Masafumi; Watanabe, Yoshiaki

    2015-02-01

    A bubble filter with no mesh structure for extracorporeal circulation using ultrasound was developed. Hemolysis was evaluated by measuring free hemoglobin (FHb) in 120 mL of bovine blood exposed to acoustic standing-wave fields. With a sound pressure amplitude of 60 kPa applied for 15 min at driving frequencies of 1 MHz, 500 kHz and 27 kHz, FHb values were 641.6, 2575 and 8903 mg/dL, respectively. Thus, hemolysis was inhibited at higher driving frequencies for the same sound pressure amplitude. An ultrasound bubble filter with a resonance frequency of 1 MHz was designed. The filtering characteristics for flowing microbubbles were investigated in a circulation system using bovine blood at a flow rate of 5.0 L/min. Approximately 99.1% of microbubbles were filtered at 250 kPa and a flow of 5.0 L/min. Hemolysis decreased as the sound pressure decreased; FHb values were 225.8 and 490.7 mg/dL at 150 and 200 kPa, respectively. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  3. PDEs on moving surfaces via the closest point method and a modified grid based particle method

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ruuth, S. J.

    2016-05-01

    Partial differential equations (PDEs) on surfaces arise in a wide range of applications. The closest point method (Ruuth and Merriman (2008) [20]) is a recent embedding method that has been used to solve a variety of PDEs on smooth surfaces using a closest point representation of the surface and standard Cartesian grid methods in the embedding space. The original closest point method (CPM) was designed for problems posed on static surfaces, however the solution of PDEs on moving surfaces is of considerable interest as well. Here we propose solving PDEs on moving surfaces using a combination of the CPM and a modification of the grid based particle method (Leung and Zhao (2009) [12]). The grid based particle method (GBPM) represents and tracks surfaces using meshless particles and an Eulerian reference grid. Our modification of the GBPM introduces a reconstruction step into the original method to ensure that all the grid points within a computational tube surrounding the surface are active. We present a number of examples to illustrate the numerical convergence properties of our combined method. Experiments for advection-diffusion equations that are strongly coupled to the velocity of the surface are also presented.

  4. Radiation Hydrodynamics with GIZMO: The Disruption of Giant Molecular Clouds by Stellar Radiation Pressure

    NASA Astrophysics Data System (ADS)

    Khatami, David; Hopkins, Philip F.

    2016-01-01

    We present a numerical implementation of radiation hydrodynamics for the meshless code GIZMO. The radiation transport is treated as an anisotropic diffusion process combined with radiation pressure effects, photoionization with heating and cooling routines, and a multifrequency treatment of an arbitrary number of sources. As a first application of the method, we investigate the disruption of giant molecular clouds by stellar radiative feedback. Specifically, what fraction of the gas must a GMC convert into stars to cause self-disruption? We test a range of cloud masses and sizes with several source luminosities to probe the effects of photoheating and radiation pressure on timescales shorter than the onset of the first supernovae. Observationally, only ~1-10% of gas is converted into stars, an inefficiency that is likely the result of feedback from newly formed stars. Whether photoheating or radiation pressure dominates is dependent on the given cloud properties. For denser clouds, we expect photoheating to play a negligible role with most of the feedback driven by radiation pressure. This work explores the necessary parameters a GMC must have in order for radiation pressure to be the main disruption process.

  5. A 1D-2D coupled SPH-SWE model applied to open channel flow simulations in complicated geometries

    NASA Astrophysics Data System (ADS)

    Chang, Kao-Hua; Sheu, Tony Wen-Hann; Chang, Tsang-Jung

    2018-05-01

    In this study, a one- and two-dimensional (1D-2D) coupled model is developed to solve the shallow water equations (SWEs). The solutions are obtained using a Lagrangian meshless method called smoothed particle hydrodynamics (SPH) to simulate shallow water flows in converging, diverging and curved channels. A buffer zone is introduced to exchange information between the 1D and 2D SPH-SWE models. Interpolated water discharge values and water surface levels at the internal boundaries are prescribed as the inflow/outflow boundary conditions in the two SPH-SWE models. In addition, instead of using the SPH summation operator, we directly solve the continuity equation by introducing a diffusive term to suppress oscillations in the predicted water depth. The performance of the two approaches in calculating the water depth is comprehensively compared through a case study of a straight channel. Additionally, three benchmark cases involving converging, diverging and curved channels are adopted to demonstrate the ability of the proposed 1D and 2D coupled SPH-SWE model through comparisons with measured data and predicted mesh-based numerical results. The proposed model provides satisfactory accuracy and guaranteed convergence.

  6. A novel multiphysic model for simulation of swelling equilibrium of ionized thermal-stimulus responsive hydrogels

    NASA Astrophysics Data System (ADS)

    Li, Hua; Wang, Xiaogui; Yan, Guoping; Lam, K. Y.; Cheng, Sixue; Zou, Tao; Zhuo, Renxi

    2005-03-01

    In this paper, a novel multiphysic mathematical model, termed the multi-effect-coupling thermal-stimulus (MECtherm) model, is developed for simulation of the swelling equilibrium of ionized temperature-sensitive hydrogels with volume phase transition. The model consists of the steady-state Nernst-Planck equation, the Poisson equation and a swelling equilibrium governing equation based on Flory's mean field theory, in which two types of polymer-solvent interaction parameters, as functions of temperature and polymer-network volume fraction, are specified with or without consideration of the hydrogen bond interaction. In order to examine the MECtherm model, which consists of nonlinear partial differential equations, a meshless Hermite-Cloud method is used for numerical solution of the one-dimensional swelling equilibrium of thermal-stimulus responsive hydrogels immersed in a bathing solution. The computed results are in very good agreement with experimental data for the variation of volume swelling ratio with temperature. The influences of salt concentration and initial fixed-charge density on the variations of the volume swelling ratio of the hydrogels, the mobile ion concentrations, and the electric potential of both the interior hydrogel and the exterior bathing solution are discussed in detail.

  7. Distinguishing cold dark matter dwarfs from self-interacting dark matter dwarfs in baryonic simulations

    NASA Astrophysics Data System (ADS)

    Strickland, Emily; Fitts, Alex; Boylan-Kolchin, Michael

    2018-01-01

    Our collaboration has run several high-resolution (m_baryon = 500 M⊙, m_dm = 2500 M⊙) cosmological zoom-in simulations of isolated dwarf galaxies. We simulate each galaxy in standard cold dark matter (ΛCDM) as well as in self-interacting dark matter (SIDM) (with a cross section of σ/m ~ 1 cm²/g), both with and without baryons, to identify distinguishing characteristics between the two. The simulations are run using GIZMO, a meshless-finite-mass (MFM) hydrodynamical code, and are part of the Feedback in Realistic Environments (FIRE) project. By analyzing both the global properties and inner structure of the dwarfs under varying dark matter prescriptions, we provide a side-by-side comparison of isolated, dark matter dominated galaxies at the mass scale where differences between the two models of dark matter are thought to be the most obvious. We find that the boundary between classical dwarfs and ultra-faint dwarfs (UFDs) (at ~10^5 M⊙) provides the clearest window for distinguishing between the two theories. Here our SIDM galaxies continue to display a cored inner profile unlike their CDM counterparts. The SIDM versions of each galaxy also have measurably lower stellar velocity dispersions than their CDM counterparts.

  8. Simulations on Monitoring and Evaluation of Plasticity-Driven Material Damage Based on Second Harmonic of S0 Mode Lamb Waves in Metallic Plates

    PubMed Central

    Sun, Xiaoqiang; Liu, Xuyang; Liu, Yaolu; Hu, Ning; Zhao, Youxuan; Ding, Xiangyan; Qin, Shiwei; Zhang, Jianyu; Zhang, Jun; Liu, Feng; Fu, Shaoyun

    2017-01-01

    In this study, a numerical approach—the discontinuous Meshless Local Petrov-Galerkin-Eshelby Method (MLPGEM)—was adopted to simulate and measure material plasticity in an Al 7075-T651 plate. The plate was modeled in two dimensions by assemblies of small particles that interact with each other through bonding stiffness. The material plasticity of the model loaded to produce different levels of strain is evaluated with the Lamb waves of S0 mode. A tone burst at the center frequency of 200 kHz was used as excitation. Second-order nonlinear wave was extracted from the spectrogram of a signal receiving point. Tensile-driven plastic deformation and cumulative second harmonic generation of S0 mode were observed in the simulation. Simulated measurement of the acoustic nonlinearity increased monotonically with the level of tensile-driven plastic strain captured by MLPGEM, whereas achieving this state by other numerical methods is comparatively more difficult. This result indicates that the second harmonics of S0 mode can be employed to monitor and evaluate the material or structural early-stage damage induced by plasticity. PMID:28773188

  9. Quantum hydrodynamics: capturing a reactive scattering resonance.

    PubMed

    Derrickson, Sean W; Bittner, Eric R; Kendrick, Brian K

    2005-08-01

    The hydrodynamic equations of motion associated with the de Broglie-Bohm formulation of quantum mechanics are solved using a meshless method based upon a moving least-squares approach. An arbitrary Lagrangian-Eulerian frame of reference and a regridding algorithm which adds and deletes computational points are used to maintain a uniform and nearly constant interparticle spacing. The methodology also uses averaged fields to maintain unitary time evolution. The numerical instabilities associated with the formation of nodes in the reflected portion of the wave packet are avoided by adding artificial viscosity to the equations of motion. A new and more robust artificial viscosity algorithm is presented which gives accurate scattering results and is capable of capturing quantum resonances. The methodology is applied to a one-dimensional model chemical reaction that is known to exhibit a quantum resonance. The correlation function approach is used to compute the reactive scattering matrix, reaction probability, and time delay as a function of energy. Excellent agreement is obtained between the scattering results based upon the quantum hydrodynamic approach and those based upon standard quantum mechanics. This is the first clear demonstration of the ability of moving grid approaches to accurately and robustly reproduce resonance structures in a scattering system.
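
    The moving least-squares approach named above is the core approximation step in such meshless hydrodynamic solvers. As a hedged illustration (a 1D linear basis with Gaussian weights is our own minimal choice, not the authors' implementation):

```python
import numpy as np

def mls_eval(x_eval, nodes, values, support=0.5):
    """Moving least-squares fit with linear basis [1, x] and Gaussian weights."""
    # Weight each node by its distance to the evaluation point.
    w = np.exp(-((nodes - x_eval) / support) ** 2)
    # Basis matrix P and weighted normal equations P^T W P a = P^T W u.
    P = np.column_stack([np.ones_like(nodes), nodes])
    A = P.T @ (w[:, None] * P)
    b = P.T @ (w * values)
    a = np.linalg.solve(A, b)
    # Evaluate the local fit at the evaluation point.
    return a[0] + a[1] * x_eval

# Linear reproduction: with a linear basis, MLS recovers 2x + 1 exactly.
nodes = np.linspace(0.0, 1.0, 11)
values = 2.0 * nodes + 1.0
approx = mls_eval(0.37, nodes, values)
```

    Because the basis spans all linear polynomials, the fit reproduces linear data exactly regardless of the weight function; this polynomial-reproduction property is what such meshless schemes rely on.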

  10. Numerical treatment for solving two-dimensional space-fractional advection-dispersion equation using meshless method

    NASA Astrophysics Data System (ADS)

    Cheng, Rongjun; Sun, Fengxin; Wei, Qi; Wang, Jufeng

    2018-02-01

    The space-fractional advection-dispersion equation (SFADE) can describe particle transport in a variety of fields more accurately than classical models of integer-order derivative. Because of the nonlocal property of the integro-differential operator of the space-fractional derivative, fractional models are very challenging to deal with, and few numerical treatments have been reported in the literature. In this paper, a numerical analysis of the two-dimensional SFADE is carried out by the element-free Galerkin (EFG) method. The trial functions for the SFADE are constructed by the moving least-square (MLS) approximation. The energy functional is formulated from the Galerkin weak form, and the final algebraic equation system is obtained by minimizing this functional. The Riemann-Liouville operator is discretized by the Grünwald formula. With the center difference method, the EFG method and the Grünwald formula, fully discrete approximation schemes for the SFADE are established. The computed approximate solutions are compared with exact results and with available results from other well-known methods, and are presented in tables and graphs. The presented results demonstrate the validity, efficiency and accuracy of the proposed techniques. Furthermore, the error is computed, and the proposed method shows reasonable convergence rates in the spatial and temporal discretizations.
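
    The Grünwald formula used above to discretize the Riemann-Liouville operator can be stated in a few lines. A minimal sketch (our own illustration of the standard Grünwald-Letnikov sum, not the paper's EFG scheme; the test function f(t) = t and order α = 1/2 are arbitrary choices):

```python
import math

def gl_weights(alpha, n):
    # Grünwald-Letnikov weights via the recursion w_0 = 1,
    # w_k = w_{k-1} * (1 - (alpha + 1) / k), i.e. w_k = (-1)^k C(alpha, k).
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_fractional_derivative(f, x, alpha, h):
    # Approximates the Riemann-Liouville derivative of order alpha at x
    # (lower terminal 0) by the sum h^{-alpha} * sum_k w_k f(x - k h).
    n = int(round(x / h))
    w = gl_weights(alpha, n)
    return sum(wk * f(x - k * h) for k, wk in enumerate(w)) / h**alpha

# Check against the exact half-derivative of f(t) = t:
# D^{1/2} t = t^{1/2} / Gamma(3/2)
approx = gl_fractional_derivative(lambda t: t, 1.0, 0.5, 1e-3)
exact = 1.0 / math.gamma(1.5)
```

    The approximation is first-order accurate in the step size h, which is consistent with its use alongside center differences in the fully discrete scheme.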

  11. Deformation of Soft Tissue and Force Feedback Using the Smoothed Particle Hydrodynamics

    PubMed Central

    Liu, Xuemei; Wang, Ruiyi; Li, Yunhua; Song, Dongdong

    2015-01-01

    We study the deformation and haptic feedback of soft tissue in virtual surgery based on a liver model by using a force feedback device named PHANTOM OMNI, developed by SensAble Company in the USA. Although a significant amount of research effort has been dedicated to simulating the behavior of soft tissue and implementing force feedback, it remains a challenging problem. This paper introduces a meshfree method for deformation simulation of soft tissue and force computation based on a viscoelastic mechanical model and smoothed particle hydrodynamics (SPH). Firstly, the viscoelastic model can represent the mechanical characteristics of soft tissue, which greatly promotes realism. Secondly, SPH is meshless and self-adaptive, which supplies higher precision than mesh-based methods for force feedback computation. Finally, an SPH method based on a dynamic interaction area is proposed to improve the real-time performance of the simulation. The results reveal that the SPH methodology is suitable for simulating soft tissue deformation and calculating force feedback, and that SPH based on a dynamic local interaction area has significantly higher computational efficiency than standard SPH. Our algorithm has bright prospects in the area of virtual surgery. PMID:26417380

  12. Mesoscale Convective Complex versus Non-Mesoscale Convective Complex Thunderstorms: A Comparison of Selected Meteorological Variables.

    DTIC Science & Technology

    1986-08-01

    Only fragments of this record's tables are recoverable: root mean square errors for selected variables; variable ranges and mean values for MCC and non-MCC cases; and the alpha (α) levels at which the two mean values are determined to be significantly different, each α level expressing the probability of erroneously concluding that the means differ (e.g., none for 700 mb vorticity advection, .20 for the 700 mb vertical velocity forecast).

  13. Multivariate analysis: greater insights into complex systems

    USDA-ARS?s Scientific Manuscript database

    Many agronomic researchers measure and collect multiple response variables in an effort to understand the more complex nature of the system being studied. Multivariate (MV) statistical methods encompass the simultaneous analysis of all random variables (RV) measured on each experimental or sampling ...

  14. Effects of head-down bed rest on complex heart rate variability: Response to LBNP testing

    NASA Technical Reports Server (NTRS)

    Goldberger, Ary L.; Mietus, Joseph E.; Rigney, David R.; Wood, Margie L.; Fortney, Suzanne M.

    1994-01-01

    Head-down bed rest is used to model physiological changes during spaceflight. We postulated that bed rest would decrease the degree of complex physiological heart rate variability. We analyzed continuous heart rate data from digitized Holter recordings in eight healthy female volunteers (age 28-34 yr) who underwent a 13-day 6 deg head-down bed rest study with serial lower body negative pressure (LBNP) trials. Heart rate variability was measured on 4-min data sets using conventional time and frequency domain measures as well as with a new measure of signal 'complexity' (approximate entropy). Data were obtained pre-bed rest (control), during bed rest (day 4 and day 9 or 11), and 2 days post-bed rest (recovery). Tolerance to LBNP was significantly reduced on both bed rest days vs. pre-bed rest. Heart rate variability was assessed at peak LBNP. Heart rate approximate entropy was significantly decreased at day 4 and day 9 or 11, returning toward normal during recovery. Heart rate standard deviation and the ratio of high- to low-power frequency did not change significantly. We conclude that short-term bed rest is associated with a decrease in the complex variability of heart rate during LBNP testing in healthy young adult women. Measurement of heart rate complexity, using a method derived from nonlinear dynamics ('chaos theory'), may provide a sensitive marker of this loss of physiological variability, complementing conventional time and frequency domain statistical measures.
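
    Approximate entropy, the complexity measure used in this study, can be sketched compactly. A minimal illustration (our own, using the conventional choices of embedding dimension m = 2 and a tolerance r near 0.2 times the signal's standard deviation):

```python
import math
import random

def approximate_entropy(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a sequence x (self-matches included)."""
    n = len(x)

    def phi(m):
        # Mean log-frequency with which each m-length template is matched
        # (max-norm distance <= r) by templates in the series.
        total = 0.0
        for i in range(n - m + 1):
            c = sum(
                1 for j in range(n - m + 1)
                if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r
            )
            total += math.log(c / (n - m + 1))
        return total / (n - m + 1)

    return phi(m) - phi(m + 1)

# A perfectly periodic signal should score near zero; a random one much higher.
periodic = [1.0, 2.0] * 50
random.seed(0)
noisy = [random.random() for _ in range(100)]
apen_periodic = approximate_entropy(periodic, m=2, r=0.25)
apen_noisy = approximate_entropy(noisy, m=2, r=0.06)
```

    A periodic signal yields ApEn near zero because every matching template of length m also matches at length m + 1; loss of physiological complexity shows up as a drop in this quantity.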

  15. A Geometric View of Complex Trigonometric Functions

    ERIC Educational Resources Information Center

    Hammack, Richard

    2007-01-01

    Given that the sine and cosine functions of a real variable can be interpreted as the coordinates of points on the unit circle, the author of this article asks whether there is something similar for complex variables, and shows that indeed there is.

  16. Spatio-temporal error growth in the multi-scale Lorenz'96 model

    NASA Astrophysics Data System (ADS)

    Herrera, S.; Fernández, J.; Rodríguez, M. A.; Gutiérrez, J. M.

    2010-07-01

    The influence of multiple spatio-temporal scales on the error growth and predictability of atmospheric flows is analyzed throughout the paper. To this aim, we consider the two-scale Lorenz'96 model and study the interplay of the slow and fast variables on the error growth dynamics. It is shown that when the coupling between slow and fast variables is weak the slow variables dominate the evolution of fluctuations whereas in the case of strong coupling the fast variables impose a non-trivial complex error growth pattern on the slow variables with two different regimes, before and after saturation of fast variables. This complex behavior is analyzed using the recently introduced Mean-Variance Logarithmic (MVL) diagram.
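
    The two-scale Lorenz'96 model analysed above couples K slow variables X_k to J fast variables Y_{j,k} per slow sector. A minimal sketch (the parameter values F = 10, h = 1, b = c = 10 are common literature conventions, not necessarily the paper's; for brevity the fast variables are taken periodic within each sector):

```python
import numpy as np

def two_scale_l96(X, Y, F=10.0, h=1.0, c=10.0, b=10.0):
    """Tendencies of the two-scale Lorenz'96 model.

    X: (K,) slow variables; Y: (J, K) fast variables, J per slow sector,
    here periodic in j within each sector for simplicity."""
    dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2)) - X + F
          - (h * c / b) * Y.sum(axis=0))
    dY = (c * b * np.roll(Y, -1, axis=0) * (np.roll(Y, 1, axis=0) - np.roll(Y, -2, axis=0))
          - c * Y + (h * c / b) * X[np.newaxis, :])
    return dX, dY

def rk4_step(X, Y, dt):
    # Classical fourth-order Runge-Kutta step for the coupled system.
    k1 = two_scale_l96(X, Y)
    k2 = two_scale_l96(X + 0.5 * dt * k1[0], Y + 0.5 * dt * k1[1])
    k3 = two_scale_l96(X + 0.5 * dt * k2[0], Y + 0.5 * dt * k2[1])
    k4 = two_scale_l96(X + dt * k3[0], Y + dt * k3[1])
    X_new = X + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    Y_new = Y + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return X_new, Y_new

# Integrate K = 8 slow variables with J = 8 fast variables per sector.
rng = np.random.default_rng(0)
X = 10.0 + 0.1 * rng.standard_normal(8)
Y = 0.1 * rng.standard_normal((8, 8))
dt = 0.001
for _ in range(2000):
    X, Y = rk4_step(X, Y, dt)
```

    The coupling constant h (not the time step) sets the slow-fast interaction strength studied in the paper; the trajectory is chaotic but remains bounded.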

  17. Key variables influencing patterns of lava dome growth and collapse

    NASA Astrophysics Data System (ADS)

    Husain, T.; Elsworth, D.; Voight, B.; Mattioli, G. S.; Jansma, P. E.

    2013-12-01

    Lava domes are conical structures that grow by the infusion of viscous silicic or intermediate composition magma from a central volcanic conduit. Dome growth can be characterized by repeated cycles of growth punctuated by collapse, as the structure becomes oversized for its composite strength. Within these cycles, deformation ranges from slow long-term deformation to sudden deep-seated collapses. Collapses may range from small raveling failures to voluminous and fast-moving pyroclastic flows with rapid and long downslope reach from the edifice. Infusion rate and magma rheology, together with crystallization temperature and volatile content, govern the spatial distribution of strength in the structure. Solidification, driven by degassing-induced crystallization of magma, leads to the formation of a continuously evolving frictional talus as a hard outer shell. This shell encapsulates the cohesion-dominated soft ductile core. Here we explore the mechanics of lava dome growth and failure using a two-dimensional particle-dynamics model. This meshless model follows the natural evolution of a brittle carapace formed by loss of volatiles and rheological stiffening, and avoids the difficulties with hourglassing and mesh entanglement typical of meshed models. We test the fidelity of the model against existing experimental and observational models of lava dome growth. The particle-dynamics model follows the natural development of dome growth and collapse, which is infeasible with simple analytical models. The model provides insight into the triggers that lead to the transition in collapse mechanism from shallow flank collapse to deep-seated sector collapse. An increase in material stiffness due to a decrease in infusion rate results in the transition of the growth pattern from endogenous to exogenous. Material stiffness and strength are strongly controlled by the magma infusion rate: an increase in infusion rate decreases the time available for degassing-induced crystallization, leading to a transition in the growth pattern, while a decrease in infusion rate results in larger crystals, causing the material to stiffen and leading to the formation of spines. Material stiffness controls the growth direction of the viscous plug in the lava dome interior. Material strength and stiffness, controlled by the rate of infusion, influence lava dome growth more significantly than the friction coefficient of the talus.

  18. Workspace Program for Complex-Number Arithmetic

    NASA Technical Reports Server (NTRS)

    Patrick, M. C.; Howell, Leonard W., Jr.

    1986-01-01

    COMPLEX is a workspace program designed to empower APL with complex-number capabilities. Complex-variable methods provide analytical tools invaluable for applications in mathematics, science, and engineering. COMPLEX is written in APL.

  19. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

    We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Within the tree-structured methods, the one-standard-error rule was more likely than the other tree-selection rules to choose the correct model (1) with a strong relationship and equally important explanatory variables, (2) with weaker relationships and equally important explanatory variables, and (3) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
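
    The 'one standard error' tree-selection rule compared above picks the least complex model whose estimated risk is within one standard error of the minimum risk. A minimal sketch (the cross-validation risks below are made-up illustrative numbers, not data from this study):

```python
def one_standard_error_rule(complexities, risks, ses):
    """Pick the least complex model whose estimated risk is within one
    standard error of the minimum-risk model."""
    best = min(range(len(risks)), key=lambda i: risks[i])
    threshold = risks[best] + ses[best]
    eligible = [i for i in range(len(risks)) if risks[i] <= threshold]
    # Among eligible models, choose the smallest complexity (e.g. tree size).
    return min(eligible, key=lambda i: complexities[i])

# Illustrative cross-validation risks for trees of growing size:
sizes = [1, 3, 5, 9, 15]
risks = [0.40, 0.26, 0.21, 0.20, 0.22]
ses = [0.03, 0.02, 0.02, 0.02, 0.02]
chosen = one_standard_error_rule(sizes, risks, ses)
```

    Here the minimum-risk tree has 9 terminal nodes, but the 5-node tree falls within one standard error of it and is therefore selected; this bias toward parsimony is why the rule behaves differently from 'minimum risk' in the comparisons above.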

  20. The trajectory of life. Decreasing physiological network complexity through changing fractal patterns

    PubMed Central

    Sturmberg, Joachim P.; Bennett, Jeanette M.; Picard, Martin; Seely, Andrew J. E.

    2015-01-01

    In this position paper, we submit a synthesis of theoretical models based on physiology, non-equilibrium thermodynamics, and non-linear time-series analysis. Based on an understanding of the human organism as a system of interconnected complex adaptive systems, we seek to examine the relationship between health, complexity, variability, and entropy production, as it might be useful to help understand aging and improve care for patients. We observe that the trajectory of life is characterized by the growth, plateauing, and subsequent loss of adaptive function of organ systems, associated with loss of functioning and coordination of systems. Understanding development and aging requires the examination of interdependence among these organ systems. Increasing evidence suggests network interconnectedness and complexity can be captured by, and associated with, the degree and complexity of healthy biologic rhythm variability (e.g., heart and respiratory rate variability). We review physiological mechanisms linking the omics, arousal/stress systems, immune function, and mitochondrial bioenergetics, highlighting their interdependence in normal physiological function and aging. We argue that aging, known to be characterized by a loss of variability, is manifested at multiple scales, within functional units at the small scale, and reflected by diagnostic features at the larger scale. While still controversial and under investigation, it appears conceivable that the integrity of whole body complexity may be, at least partially, reflected in the degree and variability of intrinsic biologic rhythms, which we believe are related to overall system complexity, a possible defining feature of health and its loss through aging. Harnessing this information for the development of therapeutic and preventative strategies may hold an opportunity to significantly improve the health of our patients across the trajectory of life. PMID:26082722

  1. Making Student Online Teams Work

    ERIC Educational Resources Information Center

    Olsen, Joel; Kalinski, Ray

    2017-01-01

    Online professors typically assign teams based on time zones, performance, or alphabet, but are these the best ways to position student virtual teams for success? Personality and task complexity could provide additional direction. Personality and task complexity were used as independent variables related to the dependent variable of team…

  2. Aortic arch atherosclerosis in patients with severe aortic stenosis can be argued by greater day-by-day blood pressure variability.

    PubMed

    Iwata, Shinichi; Sugioka, Kenichi; Fujita, Suwako; Ito, Asahiro; Matsumura, Yoshiki; Hanatani, Akihisa; Takagi, Masahiko; Di Tullio, Marco R; Homma, Shunichi; Yoshiyama, Minoru

    2015-07-01

    Although it is well known that the prevalence of aortic arch plaques, one of the risk factors for ischemic stroke, is high in patients with severe aortic stenosis, the underlying mechanisms are not well understood. Increased day-by-day blood pressure (BP) variability is also known to be associated with stroke; however, little is known about the association between day-by-day BP variability and aortic arch atherosclerosis in patients with aortic stenosis. Our objective was to clarify the association between day-by-day BP variables (average values and variability) and aortic arch atherosclerosis in patients with severe aortic stenosis. The study population consisted of 104 consecutive patients (mean age 75 ± 8 years) with severe aortic stenosis who were scheduled for aortic valve replacement. BP was measured in the morning on at least 4 consecutive days (mean 6.8 days) prior to the day of surgery. Large (≥4 mm), ulcerated, or mobile plaques were defined as complex plaques using transesophageal echocardiography. Cigarette smoking and all systolic BP variables were associated with the presence of complex plaques (p < 0.05), whereas diastolic BP variables were not. Multiple regression analysis indicated that day-by-day mean systolic BP and day-by-day systolic BP variability remained independently associated with the presence of complex plaques (p < 0.05) after adjustment for age, male sex, cigarette smoking, hypertension, hypercholesterolemia, and diabetes mellitus. These findings suggest that higher day-by-day mean systolic BP and day-by-day systolic BP variability are associated with complex plaques in the aortic arch and consequently with stroke risk in patients with aortic stenosis. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  3. Floodplain complexity and surface metrics: influences of scale and geomorphology

    USGS Publications Warehouse

    Scown, Murray W.; Thoms, Martin C.; DeJager, Nathan R.

    2015-01-01

    Many studies of fluvial geomorphology and landscape ecology examine a single river or landscape and thus lack generality, making it difficult to develop a general understanding of the linkages between landscape patterns and larger-scale driving variables. We examined the spatial complexity of eight floodplain surfaces in widely different geographic settings and determined how patterns measured at different scales relate to different environmental drivers. Floodplain surface complexity is defined as having highly variable surface conditions that are also highly organised in space. These two components of floodplain surface complexity were measured across multiple sampling scales from LiDAR-derived DEMs. The surface character and variability of each floodplain were measured using four surface metrics, namely standard deviation, skewness, coefficient of variation, and standard deviation of curvature, from a series of moving window analyses ranging from 50 to 1000 m in radius. The spatial organisation of each floodplain surface was measured using spatial correlograms of the four surface metrics. Surface character, variability, and spatial organisation differed among the eight floodplains; and random, fragmented, highly patchy, and simple gradient spatial patterns were exhibited, depending upon the metric and window size. Differences in surface character and variability among the floodplains became statistically stronger with increasing sampling scale (window size), as did their associations with environmental variables. Sediment yield was consistently associated with differences in surface character and variability, as were flow discharge and variability at smaller sampling scales. Floodplain width was associated with differences in the spatial organisation of surface conditions at smaller sampling scales, while valley slope was weakly associated with differences in spatial organisation at larger scales. A comparison of floodplain landscape patterns measured at different scales would improve our understanding of the role that different environmental variables play at different scales and in different geomorphic settings.
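
    The moving-window surface metrics described above amount to evaluating a statistic over each cell's local neighbourhood of the DEM. A minimal sketch (our own, using small square windows on a synthetic grid rather than the 50-1000 m circular windows of the study):

```python
import numpy as np

def windowed_std(dem, radius):
    """Standard deviation of elevation in a (2*radius+1)^2 window around
    each interior cell of a DEM (square window for simplicity)."""
    n_rows, n_cols = dem.shape
    out = np.full_like(dem, np.nan, dtype=float)
    for i in range(radius, n_rows - radius):
        for j in range(radius, n_cols - radius):
            window = dem[i - radius:i + radius + 1, j - radius:j + radius + 1]
            out[i, j] = window.std()
    return out

# A flat surface has zero local variability; a rough one does not.
flat = np.zeros((20, 20))
rng = np.random.default_rng(1)
rough = rng.standard_normal((20, 20))
std_flat = windowed_std(flat, radius=2)
std_rough = windowed_std(rough, radius=2)
```

    Repeating the computation for several window radii, and for skewness, coefficient of variation, and curvature in place of the standard deviation, gives the multi-scale metric maps from which the spatial correlograms are built.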

  4. Self-consistent adjoint analysis for topology optimization of electromagnetic waves

    NASA Astrophysics Data System (ADS)

    Deng, Yongbo; Korvink, Jan G.

    2018-05-01

    In topology optimization of electromagnetic waves, the Gâteaux differentiability of the conjugate operator to the complex field variable results in the complexity of the adjoint sensitivity, which evolves the original real-valued design variable to be complex during the iterative solution procedure. Therefore, the self-inconsistency of the adjoint sensitivity is presented. To enforce the self-consistency, the real part operator has been used to extract the real part of the sensitivity to keep the real-value property of the design variable. However, this enforced self-consistency can cause the problem that the derived structural topology has unreasonable dependence on the phase of the incident wave. To solve this problem, this article focuses on the self-consistent adjoint analysis of the topology optimization problems for electromagnetic waves. This self-consistent adjoint analysis is implemented by splitting the complex variables of the wave equations into the corresponding real parts and imaginary parts, sequentially substituting the split complex variables into the wave equations with deriving the coupled equations equivalent to the original wave equations, where the infinite free space is truncated by the perfectly matched layers. Then, the topology optimization problems of electromagnetic waves are transformed into the forms defined on real functional spaces instead of complex functional spaces; the adjoint analysis of the topology optimization problems is implemented on real functional spaces with removing the variational of the conjugate operator; the self-consistent adjoint sensitivity is derived, and the phase-dependence problem is avoided for the derived structural topology. Several numerical examples are implemented to demonstrate the robustness of the derived self-consistent adjoint analysis.
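
    The real/imaginary splitting described above can be illustrated on a model problem. For a Helmholtz-type equation ∇·(s∇u) + k²u = f with a complex coefficient s = s_r + i s_i (e.g., a PML stretch) and u = u_r + i u_i, f = f_r + i f_i, collecting real and imaginary parts yields a coupled real system (a simplified scalar sketch, not the paper's full wave equations):

```latex
\begin{aligned}
\nabla\cdot\left(s_r \nabla u_r - s_i \nabla u_i\right) + k^2 u_r &= f_r,\\
\nabla\cdot\left(s_i \nabla u_r + s_r \nabla u_i\right) + k^2 u_i &= f_i.
\end{aligned}
```

    When s_i = 0 the two equations decouple; it is the complex PML stretch that couples the real and imaginary parts. Once the problem is posed in this real form, the adjoint analysis proceeds on real function spaces with no conjugate operator, which is the self-consistency the article establishes.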

  5. [Variable selection methods combined with local linear embedding theory used for optimization of near infrared spectral quantitative models].

    PubMed

    Hao, Yong; Sun, Xu-Dong; Yang, Qiang

    2012-12-01

    Variable selection strategies combined with local linear embedding (LLE) were introduced for the analysis of complex samples by near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE coupled with SPA, were used to eliminate redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used to model the complex samples. The results showed that MCUVE can both extract informative variables and improve model precision. Compared with PLSR models, LLE-PLSR models achieved more accurate results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.

  6. An index of floodplain surface complexity

    USGS Publications Warehouse

    Scown, Murray W.; Thoms, Martin C.; DeJager, Nathan R.

    2016-01-01

    Floodplain surface topography is an important component of floodplain ecosystems. It is the primary physical template upon which ecosystem processes are acted out, and complexity in this template can contribute to the high biodiversity and productivity of floodplain ecosystems. There has been a limited appreciation of floodplain surface complexity because of the traditional focus on temporal variability in floodplains as well as limitations to quantifying spatial complexity. An index of floodplain surface complexity (FSC) is developed in this paper and applied to eight floodplains from different geographic settings. The index is based on two key indicators of complexity, variability in surface geometry (VSG) and the spatial organisation of surface conditions (SPO), and was determined at three sampling scales. FSC, VSG, and SPO varied between the eight floodplains and these differences depended upon sampling scale. Relationships between these measures of spatial complexity and seven geomorphological and hydrological drivers were investigated. There was a significant decline in all complexity measures with increasing floodplain width, which was explained by either a power, logarithmic, or exponential function. There was an initial rapid decline in surface complexity as floodplain width increased from 1.5 to 5 km, followed by little change in floodplains wider than 10 km. VSG also increased significantly with increasing sediment yield. No significant relationships were determined between any of the four hydrological variables and floodplain surface complexity.
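
    The power-law decline described above can be sketched with a straight-line fit on log-log axes (synthetic, noiseless data; the coefficients are invented, not the paper's):

```python
import numpy as np

# Synthetic example: a complexity score that decays as a power law of
# floodplain width, FSC = a * width^(-b). On log-log axes this is linear,
# so the exponent is recovered with a straight-line fit.
width_km = np.linspace(1.5, 20.0, 40)
a_true, b_true = 2.0, 0.7
fsc = a_true * width_km ** (-b_true)

slope, intercept = np.polyfit(np.log(width_km), np.log(fsc), 1)
b_hat = -slope              # fitted decay exponent
a_hat = np.exp(intercept)   # fitted prefactor
```

    Logarithmic and exponential candidates can be tested the same way by transforming only one axis.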

  7. Outline of a new approach to the analysis of complex systems and decision processes.

    NASA Technical Reports Server (NTRS)

    Zadeh, L. A.

    1973-01-01

    Development of a conceptual framework for dealing with systems which are too complex or too ill-defined to admit of precise quantitative analysis. The approach outlined is based on the premise that the key elements in human thinking are not numbers, but labels of fuzzy sets - i.e., classes of objects in which the transition from membership to nonmembership is gradual rather than abrupt. The approach in question has three main distinguishing features - namely, the use of so-called 'linguistic' variables in place of or in addition to numerical variables, the characterization of simple relations between variables by conditional fuzzy statements, and the characterization of complex relations by fuzzy algorithms.
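
    A linguistic variable rests on graded membership; a minimal sketch of a fuzzy set in which the transition from nonmembership to membership is gradual rather than abrupt (the breakpoints are invented for illustration):

```python
# Membership in the fuzzy set "tall": 0 below 160 cm, 1 above 190 cm,
# and a smooth linear ramp in between instead of a sharp cutoff.
def membership_tall(height_cm):
    if height_cm <= 160.0:
        return 0.0
    if height_cm >= 190.0:
        return 1.0
    return (height_cm - 160.0) / (190.0 - 160.0)  # gradual transition
```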

  8. Complex and Simple Clinical Reaction Times Are Associated with Gait, Balance, and Major Fall Injury in Older Subjects with Diabetic Peripheral Neuropathy

    PubMed Central

    Richardson, James K.; Eckner, James T.; Allet, Lara; Kim, Hogene; Ashton-Miller, James

    2016-01-01

    Objective To identify relationships between complex and simple clinical measures of reaction time (RTclin), and indicators of balance in older subjects with and without diabetic peripheral neuropathy (DPN). Design Prospective cohort design. Complex RTclin Accuracy, Simple RTclin Latency, and their ratio were determined using a novel device in 42 subjects (age = 69.1 ± 8.3 yrs), 26 with DPN and 16 without. Dependent variables included unipedal stance time (UST), step width variability and range on an uneven surface, and major fall-related injury over 12 months. Results In the DPN subjects the ratio of Complex RTclin Accuracy:Simple RTclin Latency was strongly associated with longer UST (r/p = .653/.004), and decreased step width variability and range (r/p = −.696/.001 and −.782/<.001, respectively) on an uneven surface. Additionally, the two DPN subjects sustaining major injuries had lower Complex RTclin Accuracy:Simple RTclin Latency than those without. Conclusions The ratio of Complex RTclin Accuracy:Simple RTclin Latency is a potent predictor of UST and frontal plane gait variability in response to perturbations, and may predict major fall injury in older subjects with DPN. These short latency neurocognitive measures may compensate for lower limb neuromuscular impairments, and provide a more comprehensive understanding of balance and fall risk. PMID:27552354

  9. Complexity in relational processing predicts changes in functional brain network dynamics.

    PubMed

    Cocchi, Luca; Halford, Graeme S; Zalesky, Andrew; Harding, Ian H; Ramm, Brentyn J; Cutmore, Tim; Shum, David H K; Mattingley, Jason B

    2014-09-01

    The ability to link variables is critical to many high-order cognitive functions, including reasoning. It has been proposed that limits in relating variables depend critically on relational complexity, defined formally as the number of variables to be related in solving a problem. In humans, the prefrontal cortex is known to be important for reasoning, but recent studies have suggested that such processes are likely to involve widespread functional brain networks. To test this hypothesis, we used functional magnetic resonance imaging and a classic measure of deductive reasoning to examine changes in brain networks as a function of relational complexity. As expected, behavioral performance declined as the number of variables to be related increased. Likewise, increments in relational complexity were associated with proportional enhancements in brain activity and task-based connectivity within and between 2 cognitive control networks: a cingulo-opercular network for maintaining task set, and a fronto-parietal network for implementing trial-by-trial control. Changes in effective connectivity as a function of increased relational complexity suggested a key role for the left dorsolateral prefrontal cortex in integrating and implementing task set in a trial-by-trial manner. Our findings show that limits in relational processing are manifested in the brain as complexity-dependent modulations of large-scale networks.

  10. ESCAPE: Eco-Behavioral System for Complex Assessments of Preschool Environments. Research Draft.

    ERIC Educational Resources Information Center

    Carta, Judith J.; And Others

    The manual details an observational code designed to track a child during an entire day in a preschool setting. The Eco-Behavioral System for Complex Assessments of Preschool Environments (ESCAPE) encompasses assessment of the following three major categories of variables with their respective subcategories: (1) ecological variables (designated…

  11. Evaluation of a laser scanning sensor on detection of complex shaped targets for variable-rate sprayer development

    USDA-ARS?s Scientific Manuscript database

    Sensors that can accurately measure canopy structures are prerequisites for development of advanced variable-rate sprayers. A 270° radial range laser sensor was evaluated for its accuracy to measure dimensions of target surfaces with complex shapes and sizes. An algorithm for data acquisition and 3-...

  12. Statistical Assessment of Variability of Terminal Restriction Fragment Length Polymorphism Analysis Applied to Complex Microbial Communities ▿ †

    PubMed Central

    Rossi, Pierre; Gillet, François; Rohrbach, Emmanuelle; Diaby, Nouhou; Holliger, Christof

    2009-01-01

    The variability of terminal restriction fragment polymorphism analysis applied to complex microbial communities was assessed statistically. Recent technological improvements were implemented in the successive steps of the procedure, resulting in a standardized procedure which provided a high level of reproducibility. PMID:19749066

  13. How does complex terrain influence responses of carbon and water cycle processes to climate variability and climate change?

    EPA Science Inventory

    We are pursuing the ambitious goal of understanding how complex terrain influences the responses of carbon and water cycle processes to climate variability and climate change. Our studies take place in H.J. Andrews Experimental Forest, an LTER (Long Term Ecological Research) site...

  14. Complex systems and the technology of variability analysis

    PubMed Central

    Seely, Andrew JE; Macklem, Peter T

    2004-01-01

    Characteristic patterns of variation over time, namely rhythms, represent a defining feature of complex systems, one that is synonymous with life. Despite the intrinsic dynamic, interdependent and nonlinear relationships of their parts, complex biological systems exhibit robust systemic stability. Applied to critical care, it is the systemic properties of the host response to a physiological insult that manifest as health or illness and determine outcome in our patients. Variability analysis provides a novel technology with which to evaluate the overall properties of a complex system. This review highlights the means by which we scientifically measure variation, including analyses of overall variation (time domain analysis, frequency distribution, spectral power), frequency contribution (spectral analysis), scale invariant (fractal) behaviour (detrended fluctuation and power law analysis) and regularity (approximate and multiscale entropy). Each technique is presented with a definition, interpretation, clinical application, advantages, limitations and summary of its calculation. The ubiquitous association between altered variability and illness is highlighted, followed by an analysis of how variability analysis may significantly improve prognostication of severity of illness and guide therapeutic intervention in critically ill patients. PMID:15566580
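
    Detrended fluctuation analysis, one of the techniques surveyed above, can be sketched in a few lines (a first-order DFA applied to white noise, for which the scaling exponent should come out near 0.5; the series length and scales are arbitrary choices):

```python
import numpy as np

def dfa_alpha(x, scales):
    """First-order DFA: slope of log F(s) vs log s."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    flucts = []
    for s in scales:
        n_win = len(y) // s
        rms = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)       # local linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(42)
alpha_white = dfa_alpha(rng.standard_normal(10000),
                        scales=[16, 32, 64, 128, 256])
```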

  15. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    NASA Technical Reports Server (NTRS)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
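
    The complex-variable approach mentioned above usually refers to the complex-step derivative: perturbing the input along the imaginary axis avoids subtractive cancellation, so the step can be made tiny. A minimal sketch on a scalar test function (the function and step size are illustrative, not from the paper):

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    # f evaluated at x + i*h; the imaginary part, divided by h,
    # approximates f'(x) to machine precision (no difference of
    # nearly equal numbers, unlike finite differences).
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)
dfdx = complex_step_derivative(f, 1.5)
exact = np.exp(1.5) * (np.sin(1.5) + np.cos(1.5))  # analytic derivative
```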

  16. Initial fractal exponent of heart-rate variability is associated with success of early resuscitation in patients with severe sepsis or septic shock: a prospective cohort study

    PubMed Central

    Brown, Samuel M.; Tate, Quinn; Jones, Jason P.; Knox, Daniel; Kuttler, Kathryn G.; Lanspa, Michael; Rondina, Matthew T.; Grissom, Colin K.; Behera, Subhasis; Mathews, V.J.; Morris, Alan

    2013-01-01

    Introduction Heart-rate variability reflects autonomic nervous system tone as well as the overall health of the baroreflex system. We hypothesized that loss of complexity in heart-rate variability upon ICU admission would be associated with unsuccessful early resuscitation of sepsis. Methods We prospectively enrolled patients admitted to ICUs with severe sepsis or septic shock from 2009 to 2011. We studied 30 minutes of EKG, sampled at 500 Hz, at ICU admission and calculated heart-rate complexity via detrended fluctuation analysis. Primary outcome was vasopressor independence at 24 hours after ICU admission. Secondary outcome was 28-day mortality. Results We studied 48 patients, of whom 60% were vasopressor independent at 24 hours. Five (10%) died within 28 days. The ratio of fractal alpha parameters was associated with both vasopressor independence and 28-day mortality (p=0.04) after controlling for mean heart rate. In the optimal model, SOFA score and the long-term fractal alpha parameter were associated with vasopressor independence. Conclusions Loss of complexity in heart rate variability is associated with worse outcome early in severe sepsis and septic shock. Further work should evaluate whether complexity of heart rate variability (HRV) could guide treatment in sepsis. PMID:23958243

  17. Increased ventilatory variability and complexity in patients with hyperventilation disorder.

    PubMed

    Bokov, Plamen; Fiamma, Marie-Noëlle; Chevalier-Bidaud, Brigitte; Chenivesse, Cécile; Straus, Christian; Similowski, Thomas; Delclaux, Christophe

    2016-05-15

    It has been hypothesized that hyperventilation disorders could be characterized by an abnormal ventilatory control leading to enhanced variability of resting ventilation. The variability of tidal volume (VT) often depicts a nonnormal distribution that can be described by the negative slope characterizing augmented breaths formed by the relationship between the probability density distribution of VT and VT on a log-log scale. The objectives of this study were to describe the variability of resting ventilation [coefficient of variation (CV) of VT and slope], the stability in respiratory control (loop, controller and plant gains characterizing ventilatory-chemoresponsiveness interactions) and the chaotic-like dynamics (embedding dimension, Kappa values characterizing complexity) of resting ventilation in patients with a well-defined dysfunctional breathing pattern characterized by air hunger and constantly decreased PaCO2 during a cardiopulmonary exercise test. Compared with 14 healthy subjects with similar anthropometrics, 23 patients with hyperventilation were characterized by increased variability of resting tidal ventilation (CV of VT median [interquartile]: 26% [19-35] vs. 36% [28-48], P = 0.020; slope: -6.63 [-7.65; -5.36] vs. -3.88 [-5.91; -2.66], P = 0.004) that was not related to increased chemical drive (loop gain: 0.051 [0.039-0.221] vs. 0.044 [0.012-0.087], P = 0.149) but that was related to an increased ventilatory complexity (Kappa values, P < 0.05). Plant gain was decreased in patients and correlated with complexity (with Kappa 5 - degree 5: Rho = -0.48, P = 0.006). In conclusion, well-defined patients suffering from hyperventilation disorder are characterized by increased variability of their resting ventilation due to increased ventilatory complexity with stable ventilatory-chemoresponsiveness interactions.
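
    The coefficient of variation of tidal volume used above is the simplest of these variability measures; a sketch with invented breath-by-breath values (not the study's data):

```python
import numpy as np

# Breath-by-breath tidal volumes in mL (illustrative values only).
vt_ml = np.array([480., 510., 450., 620., 495., 530., 405., 700., 515., 470.])

# CV = sample standard deviation / mean, expressed as a percentage.
cv_percent = 100.0 * vt_ml.std(ddof=1) / vt_ml.mean()
```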

  18. Evaluation of terrain complexity by autocorrelation. [geomorphology and geobotany

    NASA Technical Reports Server (NTRS)

    Craig, R. G.

    1982-01-01

    The topographic complexity of various sections of the Ozark, Appalachian, and Interior Low Plateaus, as well as of the New England, Piedmont, Blue Ridge, Ouachita, and Valley and Ridge Provinces of the Eastern United States, was characterized. The variability of autocorrelation within a small area (a 7 1/2-minute quadrangle) was compared with the variability at widely separated and diverse areas within the same physiographic region, to measure the degree of uniformity of the processes that can be expected within a given physiographic province. The variability of autocorrelation across the eight geomorphic regions was compared and contrasted. The total study area was partitioned into subareas homogeneous in terrain complexity. The relation between the complexity measured, the geomorphic process mix implied, and the way in which geobotanical information is modified into a more or less recognizable entity is demonstrated. Sampling strategy is described.
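
    Autocorrelation as a terrain-complexity measure can be sketched on synthetic elevation profiles (the profiles and estimator below are illustrative, not the paper's data or method): smooth, long-wavelength terrain retains high correlation at a given lag, while rough, uncorrelated terrain does not.

```python
import numpy as np

def autocorr(x, lag):
    """Lag-k autocorrelation of a 1-D profile (mean removed)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

t = np.arange(500)
smooth = np.sin(2 * np.pi * t / 100.0)   # long-wavelength "terrain"
rng = np.random.default_rng(7)
rough = rng.standard_normal(500)         # uncorrelated "terrain"

r_smooth = autocorr(smooth, lag=3)       # stays high
r_rough = autocorr(rough, lag=3)         # near zero
```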

  19. Mathematics for Physics

    NASA Astrophysics Data System (ADS)

    Stone, Michael; Goldbart, Paul

    2009-07-01

    Preface; 1. Calculus of variations; 2. Function spaces; 3. Linear ordinary differential equations; 4. Linear differential operators; 5. Green functions; 6. Partial differential equations; 7. The mathematics of real waves; 8. Special functions; 9. Integral equations; 10. Vectors and tensors; 11. Differential calculus on manifolds; 12. Integration on manifolds; 13. An introduction to differential topology; 14. Groups and group representations; 15. Lie groups; 16. The geometry of fibre bundles; 17. Complex analysis I; 18. Applications of complex variables; 19. Special functions and complex variables; Appendixes; Reference; Index.

  20. Regional-scale brine migration along vertical pathways due to CO2 injection - Part 2: A simulated case study in the North German Basin

    NASA Astrophysics Data System (ADS)

    Kissinger, Alexander; Noack, Vera; Knopf, Stefan; Konrad, Wilfried; Scheer, Dirk; Class, Holger

    2017-06-01

    Saltwater intrusion into potential drinking water aquifers due to the injection of CO2 into deep saline aquifers is one of the hazards associated with the geological storage of CO2. Thus, in a site-specific risk assessment, models for predicting the fate of the displaced brine are required. Practical simulation of brine displacement involves decisions regarding the complexity of the model. The choice of an appropriate level of model complexity depends on multiple criteria: the target variable of interest, the relevant physical processes, the computational demand, the availability of data, and the data uncertainty. In this study, we set up a regional-scale geological model for a realistic (but not real) onshore site in the North German Basin with characteristic geological features for that region. A major aim of this work is to identify the relevant parameters controlling saltwater intrusion in a complex structural setting and to test the applicability of different model simplifications. The model that is used to identify relevant parameters fully couples flow in shallow freshwater aquifers and deep saline aquifers. This model also includes variable-density transport of salt and realistically incorporates surface boundary conditions with groundwater recharge. The complexity of this model is then reduced in several steps, by neglecting physical processes (two-phase flow near the injection well, variable-density flow) and by simplifying the complex geometry of the geological model. The results indicate that the initial salt distribution prior to the injection of CO2 is one of the key parameters controlling shallow aquifer salinization. However, determining the initial salt distribution involves large uncertainties in the regional-scale hydrogeological parameterization and requires complex and computationally demanding models (regional-scale variable-density salt transport). 
In order to evaluate strategies for minimizing leakage into shallow aquifers, other target variables can be considered, such as the volumetric leakage rate into shallow aquifers or the pressure buildup in the injection horizon. Our results show that simplified models, which neglect variable-density salt transport, can reach an acceptable agreement with more complex models.

  1. A Comparative Study of the Variables Used to Measure Syntactic Complexity and Accuracy in Task-Based Research

    ERIC Educational Resources Information Center

    Inoue, Chihiro

    2016-01-01

    The constructs of complexity, accuracy and fluency (CAF) have been used extensively to investigate learner performance on second language tasks. However, a serious concern is that the variables used to measure these constructs are sometimes used conventionally without any empirical justification. It is crucial for researchers to understand how…

  2. A Program Complexity Metric Based on Variable Usage for Algorithmic Thinking Education of Novice Learners

    ERIC Educational Resources Information Center

    Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.

    2015-01-01

    We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…

  3. Progress on the DPASS project

    NASA Astrophysics Data System (ADS)

    Galkin, Sergei A.; Bogatu, I. N.; Svidzinski, V. A.

    2015-11-01

    A novel project to develop the Disruption Prediction And Simulation Suite (DPASS), a set of comprehensive computational tools to predict, model, and analyze disruption events in tokamaks, has recently been started at FAR-TECH Inc. DPASS will eventually address the following aspects of the disruption problem: MHD, plasma edge dynamics, plasma-wall interaction, and the generation and loss of runaway electrons. DPASS uses the 3-D Disruption Simulation Code (DSC-3D) as a core tool and will have a modular structure. DSC is a one-fluid, nonlinear, time-dependent 3D MHD code that simulates the dynamics of a tokamak plasma surrounded by a pure vacuum B-field in the real geometry of a conducting tokamak vessel. DSC utilizes an adaptive meshless technique with adaptation to the moving plasma boundary, with accurate magnetic flux conservation and resolution of the plasma surface current. DSC also has an option to neglect plasma inertia to eliminate the fast magnetosonic scale. This option can be turned on/off as needed. During Phase I of the project, two modules will be developed: a computational module for modeling massive gas injection and the main plasma response, and a module for nanoparticle plasma jet injection as an innovative disruption mitigation scheme. We will report on the progress of this development. Work is supported by the US DOE SBIR grant # DE-SC0013727.

  4. Numerical study of the shape parameter dependence of the local radial point interpolation method in linear elasticity.

    PubMed

    Moussaoui, Ahmed; Bouziane, Touria

    2016-01-01

    The LRPIM is a meshless method that allows simple implementation of the essential boundary conditions and is less costly than moving least squares (MLS) methods. It is proposed to overcome the singularity associated with a polynomial basis by using radial basis functions. In this paper, we present a study of a 2D problem of an elastic homogeneous rectangular plate using the LRPIM. Our numerical investigation concerns the influence of different shape parameters on the domain of convergence and accuracy, using the thin plate spline radial basis function. We also present a comparison between numerical results for different materials and the convergence domain, specifying maximum and minimum values as a function of the number of distributed nodes. The analytical solution for the deflection confirms the numerical results. The essential points of the method are: •The LRPIM is derived from the local weak form of the equilibrium equations for solving a thin elastic plate.•The convergence of the LRPIM depends on a number of parameters derived from the local weak form and the sub-domains.•The effect of the number of distributed nodes is studied by varying the material and the radial basis function (TPS).
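
    The interpolation backbone of such a method can be sketched with pure thin plate spline (TPS) radial basis interpolation on scattered 2D nodes (a sketch only: a real LRPIM adds polynomial augmentation and local weak-form assembly over sub-domains, and the node set and field below are invented):

```python
import numpy as np

def tps(r):
    """Thin plate spline kernel phi(r) = r^2 log r, with phi(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

rng = np.random.default_rng(1)
nodes = rng.uniform(0.0, 1.0, size=(25, 2))       # scattered nodes
u = np.sin(np.pi * nodes[:, 0]) * nodes[:, 1]     # nodal field values

# Interpolation matrix K[i, j] = phi(|p_i - p_j|); solve K c = u.
dist = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=2)
coeffs = np.linalg.solve(tps(dist), u)

def interpolate(p):
    r = np.linalg.norm(nodes - p, axis=1)
    return tps(r) @ coeffs

# By construction the interpolant reproduces the nodal values.
err = max(abs(interpolate(nodes[i]) - u[i]) for i in range(25))
```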

  5. Thermomechanically coupled conduction mode laser welding simulations using smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Hu, Haoyue; Eberhard, Peter

    2017-10-01

    Process simulations of conduction mode laser welding are performed using the meshless Lagrangian smoothed particle hydrodynamics (SPH) method. The solid phase is modeled based on the governing equations in thermoelasticity. For the liquid phase, surface tension effects are taken into account to simulate the melt flow in the weld pool, including the Marangoni force caused by a temperature-dependent surface tension gradient. A non-isothermal solid-liquid phase transition with the release or absorption of additional energy known as the latent heat of fusion is considered. The major heat transfer through conduction is modeled, whereas heat convection and radiation are neglected. The energy input from the laser beam is modeled as a Gaussian heat source acting on the initial material surface. The developed model is implemented in Pasimodo. Numerical results obtained with the model are presented for laser spot welding and seam welding of aluminum and iron. The change of process parameters like welding speed and laser power, and their effects on weld dimensions are investigated. Furthermore, simulations may be useful to obtain the threshold for deep penetration welding and to assess the overall welding quality. A scalability and performance analysis of the implemented SPH algorithm in Pasimodo is run in a shared memory environment. The analysis reveals the potential of large welding simulations on multi-core machines.
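
    A Gaussian surface heat source of the kind described above can be written down and sanity-checked by integrating it back to the beam power (the power and beam radius are illustrative values, not the paper's parameters):

```python
import numpy as np

# Gaussian surface heat flux centred on the beam axis:
#   q(x, y) = (2 P / (pi r0^2)) * exp(-2 (x^2 + y^2) / r0^2)
# which integrates over the surface to the laser power P.
P, r0 = 1000.0, 1.0e-3   # 1 kW beam, 1 mm beam radius (illustrative)

def q(x, y):
    return (2.0 * P / (np.pi * r0 ** 2)) * np.exp(-2.0 * (x ** 2 + y ** 2) / r0 ** 2)

# Crude midpoint quadrature over a 6 mm x 6 mm patch around the centre.
xs = np.linspace(-3e-3, 3e-3, 601)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
total_power = np.sum(q(X, Y)) * dx * dx   # should recover P
```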

  6. Mathematical Methods for Physics and Engineering Third Edition Paperback Set

    NASA Astrophysics Data System (ADS)

    Riley, Ken F.; Hobson, Mike P.; Bence, Stephen J.

    2006-06-01

    Prefaces; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Applications of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics; Index.

  7. Geoelectrical characterisation of basement aquifers: the case of Iberekodo, southwestern Nigeria

    NASA Astrophysics Data System (ADS)

    Aizebeokhai, Ahzegbobor P.; Oyeyemi, Kehinde D.

    2018-03-01

    Basement aquifers, which occur within the weathered and fractured zones of crystalline bedrocks, are important groundwater resources in tropical and subtropical regions. The development of basement aquifers is complex owing to their high spatial variability. Geophysical techniques are used to obtain information about the hydrologic characteristics of the weathered and fractured zones of the crystalline basement rocks, which relates to the occurrence of groundwater in the zones. The spatial distributions of these hydrologic characteristics are then used to map the spatial variability of the basement aquifers. Thus, knowledge of the spatial variability of basement aquifers is useful in siting wells and boreholes for optimal and perennial yield. Geoelectrical resistivity is one of the most widely used geophysical methods for assessing the spatial variability of the weathered and fractured zones in groundwater exploration efforts in basement complex terrains. The presented study focuses on combining vertical electrical sounding with two-dimensional (2D) geoelectrical resistivity imaging to characterise the weathered and fractured zones in a crystalline basement complex terrain in southwestern Nigeria. The basement aquifer was delineated, and the nature, extent and spatial variability of the delineated basement aquifer were assessed based on the spatial variability of the weathered and fractured zones. The study shows that a multiple-gradient array for 2D resistivity imaging is sensitive to vertical and near-surface stratigraphic features, which have hydrological implications. The integration of resistivity sounding with 2D geoelectrical resistivity imaging is efficient and enhances near-surface characterisation in basement complex terrain.

  8. Intraindividual variability is related to cognitive change in older adults: evidence for within-person coupling.

    PubMed

    Bielak, Allison A M; Hultsch, David F; Strauss, Esther; MacDonald, Stuart W S; Hunter, Michael A

    2010-09-01

    In this study, the authors addressed the longitudinal nature of intraindividual variability over 3 years. A sample of 304 community-dwelling older adults, initially between the ages of 64 and 92 years, completed 4 waves of annual testing on a battery of accuracy- and latency-based tests covering a wide range of cognitive complexity. Increases in response-time inconsistency on moderately and highly complex tasks were associated with increasing age, but there were significant individual differences in change across the entire sample. The time-varying covariation between cognition and inconsistency was significant across the 1-year intervals and remained stable across both time and age. On occasions when intraindividual variability was high, participants' cognitive performance was correspondingly low. The strength of the coupling relationship was greater for more fluid cognitive domains such as memory, reasoning, and processing speed than for more crystallized domains such as verbal ability. Variability based on moderately and highly complex tasks provided the strongest prediction. These results suggest that intraindividual variability is highly sensitive to even subtle changes in cognitive ability.

  9. Extended q-Gaussian and q-exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
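
    The gamma-to-beta change of variables mentioned above is the classical fact that for independent X ~ Gamma(a) and Y ~ Gamma(b) with the same scale, X/(X+Y) ~ Beta(a, b). A Monte Carlo sanity check (shape parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
a, b = 2.0, 5.0
X = rng.gamma(shape=a, scale=1.0, size=200_000)
Y = rng.gamma(shape=b, scale=1.0, size=200_000)

Z = X / (X + Y)             # should be Beta(a, b) distributed
mean_hat = Z.mean()
mean_theory = a / (a + b)   # mean of Beta(a, b)
```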

  10. Analytic complexity of functions of two variables

    NASA Astrophysics Data System (ADS)

    Beloshapka, V. K.

    2007-09-01

    The definition of analytic complexity of an analytic function of two variables is given. It is proved that the class of functions of a chosen complexity is a differential-algebraic set. A differential polynomial defining the functions of first class is constructed. An algorithm for obtaining relations defining an arbitrary class is described. Examples of functions are given whose order of complexity is equal to zero, one, two, and infinity. It is shown that the formal order of complexity of the Cardano and Ferrari formulas is significantly higher than their analytic complexity. The complexity classes turn out to be invariant with respect to a certain infinite-dimensional transformation pseudogroup. In this connection, we describe the orbits of the action of this pseudogroup in the jets of orders one, two, and three. The notion of complexity order is extended to plane (or “planar”) 3-webs. It is discovered that webs of complexity order one are the hexagonal webs. Some problems are posed.

  11. An outline of graphical Markov models in dentistry.

    PubMed

    Helfenstein, U; Steiner, M; Menghini, G

    1999-12-01

    In the usual multiple regression model there is one response variable and one block of several explanatory variables. In contrast, in reality there may be a block of several possibly interacting response variables one would like to explain. In addition, the explanatory variables may split into a sequence of several blocks, each block containing several interacting variables. The variables in the second block are explained by those in the first block; the variables in the third block by those in the first and the second block etc. During recent years methods have been developed allowing analysis of problems where the data set has the above complex structure. The models involved are called graphical models or graphical Markov models. The main result of an analysis is a picture, a conditional independence graph with precise statistical meaning, consisting of circles representing variables and lines or arrows representing significant conditional associations. The absence of a line between two circles signifies that the corresponding two variables are independent conditional on the presence of other variables in the model. An example from epidemiology is presented in order to demonstrate application and use of the models. The data set in the example has a complex structure consisting of successive blocks: the variable in the first block is year of investigation; the variables in the second block are age and gender; the variables in the third block are indices of calculus, gingivitis and mutans streptococci and the final response variables in the fourth block are different indices of caries. Since the statistical methods may not be easily accessible to dentists, this article presents them in an introductory form. Graphical models may be of great value to dentists in allowing analysis and visualisation of complex structured multivariate data sets consisting of a sequence of blocks of interacting variables and, in particular, several possibly interacting responses in the final block.
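
The conditional-independence reading of such a graph can be illustrated with a toy chain of blocks: a variable in the third block that depends on the first block only through the second is marginally correlated with it, but the association vanishes once the middle variable is partialled out. A minimal sketch with synthetic, hypothetical variables (not the epidemiological data of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Chain graph X -> Y -> Z: Z depends on X only through Y,
# so X and Z are independent conditional on Y.
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
z = 0.8 * y + rng.normal(size=n)

def partial_corr(a, b, c):
    """Correlation of a and b after adjusting both for c."""
    r_ab = np.corrcoef(a, b)[0, 1]
    r_ac = np.corrcoef(a, c)[0, 1]
    r_bc = np.corrcoef(b, c)[0, 1]
    return (r_ab - r_ac * r_bc) / np.sqrt((1 - r_ac**2) * (1 - r_bc**2))

marginal = np.corrcoef(x, z)[0, 1]   # clearly nonzero
conditional = partial_corr(x, z, y)  # near zero: no X-Z edge given Y
print(marginal, conditional)
```

In graph terms, the near-zero partial correlation is what licenses omitting the line between the first-block and third-block circles.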

  12. The effect of muscle fatigue and low back pain on lumbar movement variability and complexity.

    PubMed

    Bauer, C M; Rast, F M; Ernst, M J; Meichtry, A; Kool, J; Rissanen, S M; Suni, J H; Kankaanpää, M

    2017-04-01

    Changes in movement variability and complexity may reflect an adaptation strategy to fatigue. One unresolved question is whether this adaptation is hampered by the presence of low back pain (LBP). This study investigated whether changes in movement variability and complexity after fatigue are influenced by the presence of LBP. It is hypothesised that pain-free people and people suffering from LBP differ in their response to fatigue. The effect of an isometric endurance test on lumbar movement was tested in 27 pain-free participants and 59 participants suffering from LBP. Movement variability and complexity were quantified with %determinism and sample entropy of lumbar angular displacement and velocity. Generalized linear models were fitted for each outcome. Bayesian estimation of the group-fatigue effect with 95% highest posterior density intervals (95%HPDI) was performed. After fatiguing, %determinism decreased and sample entropy increased in the pain-free group, compared to the LBP group. The corresponding group-fatigue effects were 3.7 (95%HPDI: 2.3-7.1) and -1.4 (95%HPDI: -2.7 to -0.1). These effects manifested in angular velocity, but not in angular displacement. The effects indicate that pain-free participants showed more complex and less predictable lumbar movement with a lower degree of structure in its variability following fatigue, while participants suffering from LBP did not. These may be physiological responses to avoid overload of fatigued tissue and increase endurance, or a consequence of reduced movement control caused by fatigue. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Decade-long bird community response to the spatial pattern of variable retention harvesting in red pine (Pinus resinosa) forests

    Treesearch

    Eddie L. Shea; Lisa A. Schulte; Brian J. Palik

    2017-01-01

    Structural complexity is widely recognized as an inherent characteristic of unmanaged forests critical to their function and resilience, but often reduced in their managed counterparts. Variable retention harvesting (VRH) has been proposed as a way to restore or enhance structural complexity in managed forests, and thereby sustain attendant biodiversity and ecosystem...

  14. Analysis and Design of Complex Network Environments

    DTIC Science & Technology

    2014-02-01

    entanglements among unmeasured variables. This “potential entanglement” type of network complexity is previously unaddressed in the literature, yet it...Appreciating the power of structural representations that allow for potential entanglement among unmeasured variables to simplify network inference problems...rely on the idea of subsystems and allows for potential entanglement among unmeasured states. As a result, inferring a system’s signal structure

  15. An unusual kind of complex synchronizations and its applications in secure communications

    NASA Astrophysics Data System (ADS)

    Mahmoud, Emad E.

    2017-11-01

    In this paper, we discuss the concept of complex anti-synchronization (CAS) of hyperchaotic nonlinear systems with complex variables and uncertain parameters. This type of synchronization can arise only in complex nonlinear systems. The CAS combines two kinds of synchronization (complete synchronization and anti-synchronization). In the CAS, the attractors of the master and slave systems move opposite or orthogonal to each other with a similar form; this phenomenon has not previously been reported in the literature. Based on a Lyapunov function and an adaptive control strategy, a scheme is designed to achieve the CAS of two identical hyperchaotic attractors of these systems. The effectiveness of the obtained results is demonstrated by a simulation example. Numerical results are plotted showing the state variables, synchronization errors, module errors, and phase errors of the hyperchaotic attractors after synchronization, confirming that the CAS is achieved. These results provide a possible foundation for secure communication applications. The CAS of hyperchaotic complex systems, in which a state variable of the master system synchronizes with a different state variable of the slave system, is a promising kind of synchronization, as it contributes excellent security in secure communications. In such secure communication, the synchronization between transmitter and receiver is closed and message signals are recovered. The encryption and recovery of the signals are simulated numerically.

  16. Distribution, abundance, and diversity of stream fishes under variable environmental conditions

    Treesearch

    Christopher M. Taylor; Thomas L. Holder; Richard A. Fiorillo; Lance R. Williams; R. Brent Thomas; Melvin L. Warren

    2006-01-01

    The effects of stream size and flow regime on spatial and temporal variability of stream fish distribution, abundance, and diversity patterns were investigated. Assemblage variability and species richness were each significantly associated with a complex environmental gradient contrasting smaller, hydrologically variable stream localities with larger localities...

  17. 89. 22'X34' original vellum, Variable-Angle Launcher 'ELEVATION OF LAUNCHER BRIDGE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    89. 22'X34' original vellum, Variable-Angle Launcher 'ELEVATION OF LAUNCHER BRIDGE ON TEMPORARY SUPPORT' drawn at 1'=20'. (BUORD Sketch # 209786, PAPW 1932). - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  18. 90. 22'X34' original blueprint, Variable-Angle Launcher, 'FRONT ELEVATION OF LAUNCHER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    90. 22'X34' original blueprint, Variable-Angle Launcher, 'FRONT ELEVATION OF LAUNCHER BRIDGE, CONNECTING BRIDGE AND BARGES' drawn at 1/4'=1'0'. (BUORD Sketch # 208247). - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  19. Student Solution Manual for Mathematical Methods for Physics and Engineering Third Edition

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2006-03-01

    Preface; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics.

  20. Coulomb wave functions with complex values of the variable and the parameters

    NASA Astrophysics Data System (ADS)

    Dzieciol, Aleksander; Yngve, Staffan; Fröman, Per Olof

    1999-12-01

    The motivation for the present paper lies in the fact that the literature concerning the Coulomb wave functions FL(η,ρ) and GL(η,ρ) is a jungle in which it may be hard to find a safe way when one needs general formulas for the Coulomb wave functions with complex values of the variable ρ and the parameters L and η. For the Coulomb wave functions and certain linear combinations of these functions we discuss the connection with the Whittaker function, the Coulomb phase shift, Wronskians, reflection formulas (L→-L-1), integral representations, series expansions, circuital relations (ρ→ρe±iπ) and asymptotic formulas on a Riemann surface for the variable ρ. The parameters L and η are allowed to assume complex values.

  1. Cognitive Agility Measurement in a Complex Environment

    DTIC Science & Technology

    2017-04-01

    correlate with their corresponding historical psychology tests? EEA3.1: Does the variable for Make Goal cognitive flexibility correlate with the...Stroop Test cognitive flexibility variable? EEA3.2: Does the variable for Make Goal cognitive openness correlate with the AUT cognitive openness...variable? EEA3.3: Does the variable for Make Goal focused attention correlate with the Go, No Go Paradigm focused attention variable? 1.6

  2. RMS Spectral Modelling - a powerful tool to probe the origin of variability in Active Galactic Nuclei

    NASA Astrophysics Data System (ADS)

    Mallick, Labani; Dewangan, Gulab Chand; Misra, Ranjeev

    2016-07-01

    The broadband energy spectra of Active Galactic Nuclei (AGN) are very complex in nature, with contributions from many ingredients: accretion disk, corona, jets, broad-line region (BLR), narrow-line region (NLR) and Compton-thick absorbing cloud or torus. The complexity of the broadband AGN spectra gives rise to mean spectral model degeneracy, e.g., there are competing models for the broad feature near 5-7 keV in terms of blurred reflection and complex absorption. In order to overcome the energy spectral model degeneracy, the most reliable approach is to study the RMS (root mean square) variability spectrum, which connects the energy spectrum with temporal variability. The origin of variability could be pivoting of the primary continuum, reflection and/or absorption. In this work, we study the energy-dependent variability of AGN by developing a theoretical RMS spectral model in ISIS (Interactive Spectral Interpretation System) for different input energy spectra. In this talk, I would like to present results of RMS spectral modelling for a few radio-loud and radio-quiet AGN observed by XMM-Newton, Suzaku, NuSTAR and ASTROSAT, and will probe the dichotomy between these two classes of AGN.
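
Energy-dependent variability of this kind is commonly quantified per energy band by the fractional rms, F_var = sqrt((S^2 - <err^2>) / <x>^2), the standard excess-variance estimator. The following is my own sketch of that estimator on a synthetic light curve, not the authors' ISIS model:

```python
import numpy as np

def fractional_rms(rates, errors):
    """Fractional variability F_var = sqrt((S^2 - <err^2>) / <x>^2):
    light-curve variance minus the mean squared measurement error
    (the 'excess variance'), normalized by the mean rate."""
    rates = np.asarray(rates, float)
    errors = np.asarray(errors, float)
    s2 = rates.var(ddof=1)
    excess = s2 - np.mean(errors**2)
    return np.sqrt(max(excess, 0.0)) / rates.mean()

# Toy light curve in one energy band: 20% intrinsic rms, small errors.
rng = np.random.default_rng(3)
mean_rate = 10.0
intrinsic = mean_rate * (1 + 0.2 * rng.standard_normal(5000))
errors = np.full(5000, 0.1)
observed = intrinsic + errors * rng.standard_normal(5000)
print(fractional_rms(observed, errors))  # ~0.2, the injected rms
```

Computing this in narrow energy bands and plotting F_var against energy yields the RMS spectrum that the abstract compares with mean-spectrum models.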

  3. Student Solution Manual for Essential Mathematical Methods for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-02-01

    1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics.

  4. Essential Mathematical Methods for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-02-01

    1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics; Appendices; Index.

  5. Understanding the Complexity of Temperature Dynamics in Xinjiang, China, from Multitemporal Scale and Spatial Perspectives

    PubMed Central

    Chen, Yaning; Li, Weihong; Liu, Zuhan; Wei, Chunmeng; Tang, Jie

    2013-01-01

    Based on the observed data from 51 meteorological stations during the period from 1958 to 2012 in Xinjiang, China, we investigated the complexity of temperature dynamics from the temporal and spatial perspectives by using a comprehensive approach including the correlation dimension (CD), classical statistics, and geostatistics. The main conclusions are as follows. (1) The integer CD values indicate that the temperature dynamics are a complex and chaotic system, which is sensitive to the initial conditions. (2) The complexity of temperature dynamics decreases along with the increase of temporal scale. To describe the temperature dynamics, at least 3 independent variables are needed at the daily scale, whereas at least 2 independent variables are needed at the monthly, seasonal, and annual scales. (3) The spatial patterns of CD values at different temporal scales indicate that the complex temperature dynamics are derived from the complex landform. PMID:23843732
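
The correlation dimension used here is typically estimated with the classic Grassberger-Procaccia procedure: compute the fraction C(r) of point pairs within distance r and read off the slope of log C(r) against log r. A self-contained sketch on synthetic point sets (not the station data) that recovers dimensions near 1 and 2:

```python
import numpy as np

def correlation_dimension(points, radii):
    """Grassberger-Procaccia sketch: slope of log C(r) vs log r,
    where C(r) is the fraction of distinct point pairs within r."""
    diffs = points[:, None, :] - points[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)
    pairs = d[np.triu_indices(len(points), k=1)]
    C = np.array([(pairs < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

rng = np.random.default_rng(7)
line = np.c_[rng.random(800), np.zeros(800)]  # 1-D set in the plane
square = rng.random((800, 2))                 # genuinely 2-D set
radii = np.logspace(-2, -1, 8)

dim_line = correlation_dimension(line, radii)
dim_square = correlation_dimension(square, radii)
print(dim_line, dim_square)  # roughly 1 and 2
```

A low, saturating estimate on embedded time-series data is the usual signature of low-dimensional (chaotic) dynamics that the abstract invokes.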

  6. Loss of 'complexity' and aging. Potential applications of fractals and chaos theory to senescence

    NASA Technical Reports Server (NTRS)

    Lipsitz, L. A.; Goldberger, A. L.

    1992-01-01

    The concept of "complexity," derived from the field of nonlinear dynamics, can be adapted to measure the output of physiologic processes that generate highly variable fluctuations resembling "chaos." We review data suggesting that physiologic aging is associated with a generalized loss of such complexity in the dynamics of healthy organ system function and hypothesize that such loss of complexity leads to an impaired ability to adapt to physiologic stress. This hypothesis is supported by observations showing an age-related loss of complex variability in multiple physiologic processes including cardiovascular control, pulsatile hormone release, and electroencephalographic potentials. If further research supports this hypothesis, measures of complexity based on chaos theory and the related geometric concept of fractals may provide new ways to monitor senescence and test the efficacy of specific interventions to modify the age-related decline in adaptive capacity.

  7. Variable speed limit strategies analysis with mesoscopic traffic flow model based on complex networks

    NASA Astrophysics Data System (ADS)

    Li, Shu-Bin; Cao, Dan-Ni; Dang, Wen-Xiu; Zhang, Lin

    As a new cross-discipline, complexity science has penetrated into every field of economy and society. With the arrival of big data, research in complexity science has reached its summit again. In recent years, it has offered a new perspective on traffic control through complex networks theory. The interaction of various kinds of information in a traffic system forms a huge complex system. A new mesoscopic traffic flow model is improved with variable speed limit (VSL), and the simulation process is designed based on complex networks theory combined with the proposed model. This paper studies the effect of VSL on dynamic traffic flow, and then analyzes the optimal control strategy of VSL in different network topologies. The conclusions of this research are meaningful for putting forward reasonable transportation plans and developing effective traffic management and control measures to help the department of traffic management.

  8. The Generalization of Mutual Information as the Information between a Set of Variables: The Information Correlation Function Hierarchy and the Information Structure of Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Wolf, David R.

    2004-01-01

    The topic of this paper is a hierarchy of information-like functions, here named the information correlation functions, where each function of the hierarchy may be thought of as the information between the variables it depends upon. The information correlation functions are particularly suited to the description of the emergence of complex behaviors due to many-body or many-agent processes. They are particularly well suited to the quantification of the decomposition of the information carried among a set of variables or agents, and its subsets. In more graphical language, they provide the information-theoretic basis for understanding the synergistic and non-synergistic components of a system, and as such should serve as a forceful toolkit for the analysis of the complexity structure of complex many-agent systems. The information correlation functions are the natural generalization to an arbitrary number of sets of variables of the sequence starting with the entropy function (one set of variables) and the mutual information function (two sets). We start by describing the traditional measures of information (entropy) and mutual information.
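
The first rungs of this sequence (entropy, mutual information, and the standard three-variable interaction information) can be computed directly for a toy XOR system, where all of the information is synergistic. This is a generic illustration in my own notation, not the paper's definitions:

```python
import math
from collections import Counter
from itertools import product

def entropy(outcomes):
    """Shannon entropy (bits) of a list of equally likely outcomes."""
    n = len(outcomes)
    counts = Counter(outcomes)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# XOR system: X, Y independent fair bits, Z = X xor Y.
# The four equally likely joint outcomes enumerate the distribution.
triples = [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]
X = [t[0] for t in triples]
Y = [t[1] for t in triples]
Z = [t[2] for t in triples]

# Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y): zero here.
I_xy = entropy(X) + entropy(Y) - entropy(list(zip(X, Y)))

# Conditional MI I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z).
I_xy_given_z = (entropy(list(zip(X, Z))) + entropy(list(zip(Y, Z)))
                - entropy(Z) - entropy(triples))

# Interaction information I(X;Y;Z) = I(X;Y) - I(X;Y|Z).
print(I_xy, I_xy - I_xy_given_z)  # 0 bits and -1 bit: pure synergy
```

The negative interaction information is exactly the synergistic component the abstract refers to: no pair of variables carries information that the third does not unlock.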

  9. Multiscale entropy-based methods for heart rate variability complexity analysis

    NASA Astrophysics Data System (ADS)

    Silva, Luiz Eduardo Virgilio; Cabella, Brenno Caetano Troca; Neves, Ubiraci Pereira da Costa; Murta Junior, Luiz Otavio

    2015-03-01

    Physiologic complexity is an important concept for characterizing time series from biological systems, which, combined with multiscale analysis, can contribute to the comprehension of many complex phenomena. Although multiscale entropy has been applied to physiological time series, it measures irregularity as a function of scale. In this study we propose and evaluate a set of three complexity metrics as functions of time scale. The complexity metrics are derived from nonadditive entropy supported by generation of surrogate data, i.e. SDiffqmax, qmax and qzero. In order to assess the accuracy of the proposed complexity metrics, receiver operating characteristic (ROC) curves were built and the areas under the curves were computed for three physiological situations. Heart rate variability (HRV) time series in normal sinus rhythm, atrial fibrillation, and congestive heart failure data sets were analyzed. Results show that the proposed complexity metrics are accurate and robust when compared to classic entropic irregularity metrics. Furthermore, SDiffqmax is the most accurate for lower scales, whereas qmax and qzero are the most accurate when higher time scales are considered. The multiscale complexity analysis described here shows potential for assessing complex physiological time series and deserves further investigation in a wide context.
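
The multiscale ingredients, coarse-graining plus an entropy estimate per scale, can be sketched with the widely used sample entropy (one of the classic irregularity metrics the paper compares against, not its proposed nonadditive-entropy metrics):

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn = -ln(A/B): B counts template pairs of length m within
    tolerance r (Chebyshev distance), A the same for length m + 1."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count_matches(length):
        templ = np.array([x[i:i + length] for i in range(len(x) - length)])
        hits = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            hits += np.count_nonzero(d <= r)
        return hits
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B)

def coarse_grain(x, scale):
    """Non-overlapping window averages used by multiscale entropy."""
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(1)
noise = rng.normal(size=1500)                      # irregular signal
sine = np.sin(np.linspace(0, 30 * np.pi, 1500))   # highly regular
print(sample_entropy(noise), sample_entropy(sine))
```

Applying `sample_entropy` to `coarse_grain(x, s)` for s = 1, 2, 3, ... gives the entropy-versus-scale curve that multiscale analyses (including the paper's variants) are built on.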

  10. LinguisticBelief: a java application for linguistic evaluation using belief, fuzzy sets, and approximate reasoning.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.

    LinguisticBelief is a Java computer code that evaluates combinations of linguistic variables using an approximate reasoning rule base. Each variable is comprised of fuzzy sets, and a rule base describes the reasoning on combinations of the variables' fuzzy sets. Uncertainty is considered and propagated through the rule base using the belief/plausibility measure. The mathematics of fuzzy sets, approximate reasoning, and belief/plausibility are complex. Without an automated tool, this complexity precludes their application to all but the simplest of problems. LinguisticBelief automates the use of these techniques, allowing complex problems to be evaluated easily. LinguisticBelief can be used free of charge on any Windows XP machine. This report documents the use and structure of the LinguisticBelief code, and the deployment package for installation on client machines.
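
The underlying machinery (fuzzy sets for each linguistic variable, evaluated through a min/max approximate-reasoning rule base) can be sketched as follows. The variables and rules below are invented for illustration and are not part of the LinguisticBelief API:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Two hypothetical linguistic variables, fuzzy sets defined on [0, 1].
severity = {"low": lambda x: tri(x, -0.5, 0.0, 0.5),
            "high": lambda x: tri(x, 0.5, 1.0, 1.5)}
likelihood = {"low": lambda x: tri(x, -0.5, 0.0, 0.5),
              "high": lambda x: tri(x, 0.5, 1.0, 1.5)}

def risk_is_high(sev, lik):
    """Mamdani-style evaluation: AND -> min, OR across rules -> max.
    R1: IF severity high AND likelihood high THEN risk high.
    R2: IF severity high AND likelihood low  THEN risk high (weakened)."""
    r1 = min(severity["high"](sev), likelihood["high"](lik))
    r2 = 0.5 * min(severity["high"](sev), likelihood["low"](lik))
    return max(r1, r2)

print(risk_is_high(0.9, 0.8))
```

LinguisticBelief additionally attaches belief/plausibility intervals to such rule outputs to carry uncertainty through the evaluation; that layer is omitted here.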

  11. Variation of M···H-C Interactions in Square-Planar Complexes of Nickel(II), Palladium(II), and Platinum(II) Probed by Luminescence Spectroscopy and X-ray Diffraction at Variable Pressure.

    PubMed

    Poirier, Stéphanie; Lynn, Hudson; Reber, Christian; Tailleur, Elodie; Marchivie, Mathieu; Guionneau, Philippe; Probert, Michael R

    2018-06-12

    Luminescence spectra of isoelectronic square-planar d⁸ complexes with 3d, 4d, and 5d metal centers show d-d luminescence with an energetic order different from that of the spectrochemical series, indicating that additional structural effects, such as different ligand-metal-ligand angles, are important factors. Variable-pressure luminescence spectra of square-planar nickel(II), palladium(II), and platinum(II) complexes with dimethyldithiocarbamate ({CH₃}₂DTC) ligands and their deuterated analogues show unexpected variations of the shifts of their maxima. High-resolution crystal structures and crystal structures at variable pressure for [Pt{(CH₃)₂DTC}₂] indicate that intermolecular M···H-C interactions are at the origin of these different shifts.

  12. Comparison of Metabolomics Approaches for Evaluating the Variability of Complex Botanical Preparations: Green Tea (Camellia sinensis) as a Case Study.

    PubMed

    Kellogg, Joshua J; Graf, Tyler N; Paine, Mary F; McCune, Jeannine S; Kvalheim, Olav M; Oberlies, Nicholas H; Cech, Nadja B

    2017-05-26

    A challenge that must be addressed when conducting studies with complex natural products is how to evaluate their complexity and variability. Traditional methods of quantifying a single or a small range of metabolites may not capture the full chemical complexity of multiple samples. Different metabolomics approaches were evaluated to discern how they facilitated comparison of the chemical composition of commercial green tea [Camellia sinensis (L.) Kuntze] products, with the goal of capturing the variability of commercially used products and selecting representative products for in vitro or clinical evaluation. Three metabolomics-related methods were employed to characterize 34 commercially available green tea samples: untargeted ultraperformance liquid chromatography-mass spectrometry (UPLC-MS), targeted UPLC-MS, and untargeted, quantitative ¹H NMR. Of these methods, untargeted UPLC-MS was most effective at discriminating between green tea, green tea supplement, and non-green-tea products. A method using reproduced correlation coefficients calculated from principal component analysis models was developed to quantitatively compare differences among samples. The obtained results demonstrated the utility of metabolomics employing UPLC-MS data for evaluating similarities and differences between complex botanical products.

  13. Comparison of Metabolomics Approaches for Evaluating the Variability of Complex Botanical Preparations: Green Tea (Camellia sinensis) as a Case Study

    PubMed Central

    2017-01-01

    A challenge that must be addressed when conducting studies with complex natural products is how to evaluate their complexity and variability. Traditional methods of quantifying a single or a small range of metabolites may not capture the full chemical complexity of multiple samples. Different metabolomics approaches were evaluated to discern how they facilitated comparison of the chemical composition of commercial green tea [Camellia sinensis (L.) Kuntze] products, with the goal of capturing the variability of commercially used products and selecting representative products for in vitro or clinical evaluation. Three metabolomic-related methods—untargeted ultraperformance liquid chromatography–mass spectrometry (UPLC-MS), targeted UPLC-MS, and untargeted, quantitative ¹H NMR—were employed to characterize 34 commercially available green tea samples. Of these methods, untargeted UPLC-MS was most effective at discriminating between green tea, green tea supplement, and non-green-tea products. A method using reproduced correlation coefficients calculated from principal component analysis models was developed to quantitatively compare differences among samples. The obtained results demonstrated the utility of metabolomics employing UPLC-MS data for evaluating similarities and differences between complex botanical products. PMID:28453261

  14. 11. 28'X40' original vellum, Variable-Angle Launcher, 'INDEX TO Drawings' drawn ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. 28'X40' original vellum, Variable-Angle Launcher, 'INDEX TO Drawings' drawn at no scale (P.W.DWG.No. 1781). - Variable Angle Launcher Complex, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  15. Influence of learner knowledge and case complexity on handover accuracy and cognitive load: results from a simulation study.

    PubMed

    Young, John Q; van Dijk, Savannah M; O'Sullivan, Patricia S; Custers, Eugene J; Irby, David M; Ten Cate, Olle

    2016-09-01

    The handover represents a high-risk event in which errors are common and lead to patient harm. A better understanding of the cognitive mechanisms of handover errors is essential to improving handover education and practice. This paper reports on an experiment conducted to study the effects of learner knowledge, case complexity (i.e. cases with or without a clear diagnosis) and their interaction on handover accuracy and cognitive load. Participants were 52 Dutch medical students in Years 2 and 6. The experiment employed a repeated-measures design with two explanatory variables: case complexity (simple or complex) as the within-subject variable, and learner knowledge (as indicated by illness script maturity) as the between-subject covariate. The dependent variables were handover accuracy and cognitive load. Each participant performed a total of four simulated handovers involving two simple cases and two complex cases. Higher illness script maturity predicted increased handover accuracy (p < 0.001) and lower cognitive load (p = 0.007). Case complexity did not independently affect either outcome. For handover accuracy, there was no interaction between case complexity and illness script maturity. For cognitive load, there was an interaction effect between illness script maturity and case complexity, indicating that more mature illness scripts reduced cognitive load less in complex cases than in simple cases. Students with more mature illness scripts performed more accurate handovers and experienced lower cognitive load. For cognitive load, these effects were more pronounced in simple than complex cases. If replicated, these findings suggest that handover curricula and protocols should provide support that varies according to the knowledge of the trainee. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  16. Complex-Spectrum Magnetic Environment enhances and/or modifies Bioeffects of Hypokinetic Stress Condition: an Animal Study

    NASA Astrophysics Data System (ADS)

    Temuriantz, N. A.; Martinyuk, V. S.; Ptitsyna, N. G.; Villoresi, G.; Iucci, N.; Tyasto, M. I.; Dorman, L. I.

    During the last decades it has been shown by many authors that ultra-low and extremely low frequency electric and magnetic fields (ULF: 0-10 Hz; ELF: 10-1000 Hz) may produce biological effects and consequently may be a possible source of health problems. Spaceflight electric and magnetic environments are characterized by a complex combination of static and time-varying components in the ULF-ELF range and by high variability. The objective of this study was to investigate the possible influence of such magnetic fields on rats, with regard to the functional state of the cardiovascular system. A magnetic field (MF) pattern with a variable complex spectrum in the 0-150 Hz frequency range was simulated using 3-axial Helmholtz coils and special computer-based equipment. The effect of this real-world MF exposure on rats was also tested in combination with a hypokinetic stress condition, which is typical of spaceflights. It was revealed that a variable complex-spectrum MF acts as a weak or moderate stress-like factor which amplifies and/or modifies the functional shifts caused by other stress factors. The value and direction of the functional shifts caused by MF exposure depend significantly on gender, on individual typological (constitutional) features, and also on the physiological state (norm or stress) of the organism. Our results support the idea that the action of a variable complex-spectrum MF involves sympathetic activation, overload in cholesterol transport in the blood, and also secretory activation of tissue basophils (mast cells), which can influence the regional haemodynamics.

  17. Cotton-type and joint invariants for linear elliptic systems.

    PubMed

    Aslam, A; Mahomed, F M

    2013-01-01

    Cotton-type invariants for a subclass of a system of two linear elliptic equations, obtainable from a complex base linear elliptic equation, are derived both by splitting of the corresponding complex Cotton invariants of the base complex equation and from the Laplace-type invariants of the system of linear hyperbolic equations equivalent to the system of linear elliptic equations via linear complex transformations of the independent variables. It is shown that Cotton-type invariants derived from these two approaches are identical. Furthermore, Cotton-type and joint invariants for a general system of two linear elliptic equations are also obtained from the Laplace-type and joint invariants for a system of two linear hyperbolic equations equivalent to the system of linear elliptic equations by complex changes of the independent variables. Examples are presented to illustrate the results.

  18. Cotton-Type and Joint Invariants for Linear Elliptic Systems

    PubMed Central

    Aslam, A.; Mahomed, F. M.

    2013-01-01

    Cotton-type invariants for a subclass of a system of two linear elliptic equations, obtainable from a complex base linear elliptic equation, are derived both by splitting of the corresponding complex Cotton invariants of the base complex equation and from the Laplace-type invariants of the system of linear hyperbolic equations equivalent to the system of linear elliptic equations via linear complex transformations of the independent variables. It is shown that Cotton-type invariants derived from these two approaches are identical. Furthermore, Cotton-type and joint invariants for a general system of two linear elliptic equations are also obtained from the Laplace-type and joint invariants for a system of two linear hyperbolic equations equivalent to the system of linear elliptic equations by complex changes of the independent variables. Examples are presented to illustrate the results. PMID:24453871

  19. Use of complex hydraulic variables to predict the distribution and density of unionids in a side channel of the Upper Mississippi River

    USGS Publications Warehouse

    Steuer, J.J.; Newton, T.J.; Zigler, S.J.

    2008-01-01

    Previous attempts to predict the importance of abiotic and biotic factors to unionids in large rivers have been largely unsuccessful. Many simple physical habitat descriptors (e.g., current velocity, substrate particle size, and water depth) have limited ability to predict unionid density. However, more recent studies have found that complex hydraulic variables (e.g., shear velocity, boundary shear stress, and Reynolds number) may be more useful predictors of unionid density. We performed a retrospective analysis with unionid density, current velocity, and substrate particle size data from 1987 to 1988 in a 6-km reach of the Upper Mississippi River near Prairie du Chien, Wisconsin. We used these data to model simple and complex hydraulic variables under low and high flow conditions. We then used classification and regression tree analysis to examine the relationships between hydraulic variables and unionid density. We found that boundary Reynolds number, Froude number, boundary shear stress, and grain size were the best predictors of density. Models with complex hydraulic variables were a substantial improvement over previously published discriminant models and correctly classified 65-88% of the observations for the total mussel fauna and six species. These data suggest that unionid beds may be constrained by threshold limits at both ends of the flow regime. Under low flow, mussels may require minimum values of hydraulic variables (Re*, Fr) to transport nutrients, oxygen, and waste products. Under high flow, areas with relatively low boundary shear stress may provide a hydraulic refuge for mussels. Data on hydraulic preferences and identification of other conditions that constitute unionid habitat are needed to help restore and enhance habitats for unionids in rivers. © 2008 Springer Science+Business Media B.V.
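
    The complex hydraulic variables named above follow from standard open-channel relations: Fr = U/sqrt(g*h) and the boundary (shear) Reynolds number Re* = u*·d/ν with shear velocity u* = sqrt(τ/ρ). A minimal sketch of these formulas; the function names and example values are illustrative, not taken from the study:

```python
import math

def shear_velocity(tau_b, rho=1000.0):
    """Shear velocity u* = sqrt(tau_b / rho); tau_b in Pa, rho in kg/m^3."""
    return math.sqrt(tau_b / rho)

def froude_number(U, h, g=9.81):
    """Froude number Fr = U / sqrt(g*h) for mean velocity U (m/s), depth h (m)."""
    return U / math.sqrt(g * h)

def boundary_reynolds_number(u_star, d, nu=1.0e-6):
    """Boundary (shear) Reynolds number Re* = u* * d / nu,
    with characteristic grain size d (m) and kinematic viscosity nu (m^2/s)."""
    return u_star * d / nu

# Illustrative values: tau_b = 0.5 Pa, U = 0.4 m/s, h = 2.0 m, d = 0.5 mm
u_star = shear_velocity(0.5)
print(froude_number(0.4, 2.0))                      # ~0.09 (subcritical flow)
print(boundary_reynolds_number(u_star, 0.0005))     # ~11.2
```

    Such derived variables, rather than raw velocity or depth, were the inputs to the classification and regression tree analysis described above.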

  20. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.

    PubMed

    Hero, Alfred O; Rajaratnam, Bala

    2016-01-01

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity, however, has received relatively less attention, especially in the setting where the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the latter regime applies to exascale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
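
    The sample-starved regime described above can be illustrated with a small simulation: with n fixed, the largest spurious sample correlation among fully independent variables creeps toward 1 as p grows. This sketch only illustrates the phenomenon; it is not the authors' framework:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10  # fixed sample size (statistical replicates)
for p in (10, 100, 1000):
    X = rng.standard_normal((n, p))      # p mutually independent variables
    C = np.corrcoef(X, rowvar=False)     # p x p sample correlation matrix
    np.fill_diagonal(C, 0.0)             # ignore trivial self-correlations
    # The maximum spurious |correlation| grows with p at fixed n:
    print(p, np.abs(C).max())
```

    With n = 10 and p = 1000 there are roughly half a million variable pairs, so sample correlations above 0.9 appear by chance alone, which is exactly why sample-complexity analysis matters in this regime.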

  1. Synchronization in node of complex networks consist of complex chaotic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Qiang, E-mail: qiangweibeihua@163.com; Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024

    2014-07-15

    A new synchronization method is investigated for nodes of complex networks consisting of complex chaotic systems. When the complex networks realize synchronization, different components of the complex state variable synchronize up to different scaling complex functions via a designed complex feedback controller. This paper extends the synchronization scaling function from the real field to the complex field for synchronization in nodes of complex networks with complex chaotic systems. Synchronization in complex networks with constant coupling delay and with time-varying coupling delay is investigated, respectively. Numerical simulations are provided to show the effectiveness of the proposed method.

  2. Variable sensory perception in autism.

    PubMed

    Haigh, Sarah M

    2018-03-01

    Autism is associated with sensory and cognitive abnormalities. Individuals with autism generally show normal or superior early sensory processing abilities compared to healthy controls, but deficits in complex sensory processing. In the current opinion paper, it will be argued that sensory abnormalities impact cognition by limiting the amount of signal that can be used to interpret and interact with the environment. There is a growing body of literature showing that individuals with autism exhibit greater trial-to-trial variability in behavioural and cortical sensory responses. If multiple sensory signals that are highly variable are added together to process more complex sensory stimuli, then this might destabilise later perception and impair cognition. Methods to improve sensory processing have shown improvements in more general cognition. Studies that specifically investigate differences in sensory trial-to-trial variability in autism, and the potential changes in variability before and after treatment, could ascertain if trial-to-trial variability is a good mechanism to target for treatment in autism. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  3. The Aristotle Comprehensive Complexity score predicts mortality and morbidity after congenital heart surgery.

    PubMed

    Bojan, Mirela; Gerelli, Sébastien; Gioanni, Simone; Pouard, Philippe; Vouhé, Pascal

    2011-04-01

    The Aristotle Comprehensive Complexity (ACC) score has been proposed for complexity adjustment in the analysis of outcome after congenital heart surgery. The score is the sum of the Aristotle Basic Complexity score, largely used but poorly related to mortality and morbidity, and of the Comprehensive Complexity items accounting for comorbidities and procedure-specific and anatomic variability. This study aims to demonstrate the ability of the ACC score to predict 30-day mortality and morbidity assessed by the length of the intensive care unit (ICU) stay. We retrospectively enrolled patients undergoing congenital heart surgery in our institution. We modeled the ACC score as a continuous variable, mortality as a binary variable, and length of ICU stay as a censored variable. For each mortality and morbidity model we performed internal validation by bootstrapping and assessed overall performance by R(2), calibration by the calibration slope, and discrimination by the c index. Among all 1,454 patients enrolled, 30-day mortality rate was 3.4% and median length of ICU stay was 3 days. The ACC score strongly related to mortality, but related to length of ICU stay only during the first postoperative week. For the mortality model, R(2) = 0.24, calibration slope = 0.98, c index = 0.86, and 95% confidence interval was 0.82 to 0.91. For the morbidity model, R(2) = 0.094, calibration slope = 0.94, c index = 0.64, and 95% confidence interval was 0.62 to 0.66. The ACC score predicts 30-day mortality and length of ICU stay during the first postoperative week. The score is an adequate tool for complexity adjustment in the analysis of outcome after congenital heart surgery. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
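
    The c index reported above (discrimination) is the fraction of (event, non-event) pairs in which the event case receives the higher risk score, counting ties as one half. A minimal sketch with illustrative data, not the study's:

```python
def c_index(y, score):
    """Concordance (c) index: probability that a randomly chosen case with
    the outcome (y == 1) is scored higher than one without (ties count 0.5)."""
    pos = [s for s, yi in zip(score, y) if yi == 1]
    neg = [s for s, yi in zip(score, y) if yi == 0]
    concordant = sum(1.0 if sp > sn else 0.5 if sp == sn else 0.0
                     for sp in pos for sn in neg)
    return concordant / (len(pos) * len(neg))

# Illustrative data: 2 deaths scored 0.9 and 0.6, 2 survivors scored 0.7 and 0.2
print(c_index([1, 1, 0, 0], [0.9, 0.6, 0.7, 0.2]))  # 0.75
```

    A c index of 0.86, as reported for the mortality model, means an 86% chance that a patient who died was assigned a higher ACC-based risk than one who survived.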

  4. Surface-Sensitive Microwear Texture Analysis of Attrition and Erosion.

    PubMed

    Ranjitkar, S; Turan, A; Mann, C; Gully, G A; Marsman, M; Edwards, S; Kaidonis, J A; Hall, C; Lekkas, D; Wetselaar, P; Brook, A H; Lobbezoo, F; Townsend, G C

    2017-03-01

    Scale-sensitive fractal analysis of high-resolution 3-dimensional surface reconstructions of wear patterns has advanced our knowledge in evolutionary biology, and has opened up opportunities for translatory applications in clinical practice. To elucidate the microwear characteristics of attrition and erosion in worn natural teeth, we scanned 50 extracted human teeth using a confocal profiler at a high optical resolution (X-Y, 0.17 µm; Z < 3 nm). Our hypothesis was that microwear complexity would be greater in erosion and that anisotropy would be greater in attrition. The teeth were divided into 4 groups, including 2 wear types (attrition and erosion) and 2 locations (anterior and posterior teeth; n = 12 for each anterior group, n = 13 for each posterior group) for 2 tissue types (enamel and dentine). The raw 3-dimensional data cloud was subjected to a newly developed rigorous standardization technique to reduce interscanner variability as well as to filter anomalous scanning data. Linear mixed effects (regression) analyses conducted separately for the dependent variables, complexity and anisotropy, showed the following effects of the independent variables: significant interactions between wear type and tissue type ( P = 0.0157 and P = 0.0003, respectively) and significant effects of location ( P < 0.0001 and P = 0.0035, respectively). There were significant associations between complexity and anisotropy when the dependent variable was either complexity ( P = 0.0003) or anisotropy ( P = 0.0014). Our findings of greater complexity in erosion and greater anisotropy in attrition confirm our hypothesis. The greatest geometric means were noted in dentine erosion for complexity and dentine attrition for anisotropy. Dentine also exhibited microwear characteristics that were more consistent with wear types than enamel. Overall, our findings could complement macrowear assessment in dental clinical practice and research and could assist in the early detection and management of pathologic tooth wear.

  5. The Effects of Cognitive Task Complexity on L2 Oral Production

    ERIC Educational Resources Information Center

    Levkina, Mayya; Gilabert, Roger

    2012-01-01

    This paper examines the impact of task complexity on L2 production. The study increases task complexity by progressively removing pre-task planning time and increasing the number of elements. The combined effects of manipulating these two variables of task complexity simultaneously are also analyzed. Using a repeated measures design, 42…

  6. A comparative ultrastructural analysis of spermatozoa in Pleurodema (Anura, Leptodactylidae, Leiuperinae).

    PubMed

    Cruz, Julio C; Ferraro, Daiana P; Farías, Alejandro; Santos, Julio S; Recco-Pimentel, Shirlei M; Faivovich, Julián; Hermida, Gladys N

    2016-07-01

    This study describes the spermatozoa of 10 of the 15 species of the Neotropical frog genus Pleurodema through transmission electron microscopy. The diversity of oviposition modes coupled with a recent phylogenetic hypothesis of Pleurodema makes it an interesting group for the study of ultrastructural sperm evolution in relation to fertilization environment and egg-clutch structure. We found that Pleurodema has an unusual variability in sperm morphology. The more variable structures were the acrosomal complex, the midpiece, and the tail. The acrosomal complex has all the structures commonly reported in hyloid frogs but with different degree of development of the subacrosomal cone. Regarding the midpiece, the variability is given by the presence or absence of the mitochondrial collar. Finally, the tail is the most variable structure, ranging from single (only axoneme) to more complex (presence of paraxonemal rod, cytoplasmic sheath, and undulating membrane), with the absence of the typical axial fiber present in hyloid frogs, also shared with some other genera of Leiuperinae. J. Morphol. 277:957-977, 2016. © 2016 Wiley Periodicals, Inc.

  7. Community structural characteristics and the adoption of fluoridation.

    PubMed Central

    Smith, R A

    1981-01-01

    A study of community structural characteristics associated with fluoridation outcomes was conducted in 47 communities. A three-part outcome distinction was utilized: communities never having publicly considered the fluoridation issue, those rejecting it, and those accepting it. The independent variables reflect the complexity of the community social and economic structure, social integration, and the centralization of authority. Results of mean comparisons show statistically significant differences between the three outcome types on the independent variables. A series of discriminant analyses provides further evidence of how the independent variables are associated with each outcome type. Non-considering communities are shown to be low in complexity, and high in social integration and the centralization of governmental authority. Rejecters are shown to be high in complexity, but low in social integration and centralized authority. Adopters are relatively high on all three sets of variables. Theoretical reasoning is provided to support the hypotheses and to explain why these results are expected. The utility of these results and structural explanations in general is discussed, especially for public/environmental health planning and political activities. PMID:7258427

  8. Effects of in-sewer processes: a stochastic model approach.

    PubMed

    Vollertsen, J; Nielsen, A H; Yang, W; Hvitved-Jacobsen, T

    2005-01-01

    Transformations of organic matter, nitrogen and sulfur in sewers can be simulated taking into account the relevant transformation and transport processes. One objective of such simulation is the assessment and management of hydrogen sulfide formation and corrosion. Sulfide is formed in the biofilms and sediments of the water phase, but corrosion occurs on the moist surfaces of the sewer gas phase. Consequently, both phases and the transport of volatile substances between these phases must be included. Furthermore, wastewater composition and transformations in sewers are complex and subject to high, natural variability. This paper presents the latest developments of the WATS model concept, allowing integrated aerobic, anoxic and anaerobic simulation of the water phase and of gas phase processes. The resulting model is complex and with high parameter variability. An example applying stochastic modeling shows how this complexity and variability can be taken into account.

  9. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids by Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
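
    The complex-variable formulation referred to above is commonly realized as the complex-step derivative: for a real-analytic f, f'(x) ≈ Im f(x + ih)/h, which has no subtractive cancellation and so tolerates a tiny step. A minimal sketch of the technique; the paper's own implementation is automated and applied to the full discrete residual, not to scalar functions like this:

```python
import cmath
import math

def complex_step_derivative(f, x, h=1e-30):
    """f'(x) ~= Im(f(x + i*h)) / h.  Unlike a finite difference there is no
    subtraction of nearly equal terms, so h can be made tiny and the result
    is accurate to near machine precision."""
    return f(x + 1j * h).imag / h

# d/dx sin(x) at x = 0.5 recovers cos(0.5) to ~machine precision:
print(complex_step_derivative(cmath.sin, 0.5))
print(math.cos(0.5))
```

    Promoting the real-valued residual routines to complex arithmetic is what makes "straightforward differentiation of complicated real-valued functions" possible without hand-deriving linearizations.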

  10. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.

  11. Shame, Dissociation, and Complex PTSD Symptoms in Traumatized Psychiatric and Control Groups: Direct and Indirect Associations With Relationship Distress.

    PubMed

    Dorahy, Martin J; Corry, Mary; Black, Rebecca; Matheson, Laura; Coles, Holly; Curran, David; Seager, Lenaire; Middleton, Warwick; Dyer, Kevin F W

    2017-04-01

    Elevated shame and dissociation are common in dissociative identity disorder (DID) and chronic posttraumatic stress disorder (PTSD) and are part of the constellation of symptoms defined as complex PTSD. Previous work examined the relationship between shame, dissociation, and complex PTSD and whether they are associated with intimate relationship anxiety, relationship depression, and fear of relationships. This study investigated these variables in traumatized clinical samples and a nonclinical community group. Participants were drawn from the DID (n = 20), conflict-related chronic PTSD (n = 65), and nonclinical (n = 125) populations and completed questionnaires assessing the variables of interest. A model examining the direct impact of shame and dissociation on relationship functioning, and their indirect effect via complex PTSD symptoms, was tested through path analysis. The DID sample reported significantly higher dissociation, shame, complex PTSD symptom severity, relationship anxiety, relationship depression, and fear of relationships than the other two samples. Support was found for the proposed model, with shame directly affecting relationship anxiety and fear of relationships, and pathological dissociation directly affecting relationship anxiety and relationship depression. The indirect effect of shame and dissociation via complex PTSD symptom severity was evident on all relationship variables. Shame and pathological dissociation are important for not only the effect they have on the development of other complex PTSD symptoms, but also their direct and indirect effects on distress associated with relationships. © 2016 Wiley Periodicals, Inc.

  12. ALGORITHM TO REDUCE APPROXIMATION ERROR FROM THE COMPLEX-VARIABLE BOUNDARY-ELEMENT METHOD APPLIED TO SOIL FREEZING.

    USGS Publications Warehouse

    Hromadka, T.V.; Guymon, G.L.

    1985-01-01

    An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.

  13. Invariant resolutions for several Fueter operators

    NASA Astrophysics Data System (ADS)

    Colombo, Fabrizio; Souček, Vladimir; Struppa, Daniele C.

    2006-07-01

    A proper generalization of complex function theory to higher dimensions is Clifford analysis, and an analogue of holomorphic functions of several complex variables was recently described as the space of solutions of several Dirac equations. The four-dimensional case has special features and is closely connected to functions of quaternionic variables. In this paper we present an approach to the Dolbeault sequence for several quaternionic variables based on symmetries and representation theory. In particular we prove that the resolution of the Cauchy-Fueter system obtained algebraically, via Gröbner bases techniques, is equivalent to the one obtained by R.J. Baston (J. Geom. Phys. 1992).

  14. Longitudinal variability of complexities associated with equatorial electrojet

    NASA Astrophysics Data System (ADS)

    Rabiu, A. B.; Ogunjo, S. T.; Fuwape, I. A.

    2017-12-01

    Equatorial electrojet indices obtained from ground based magnetometers at 6 representative stations across the magnetic equatorial belt for the year 2009 (mean annual sunspot number Rz = 3.1) were subjected to nonlinear time series analysis techniques to ascertain the longitudinal dependence of the chaos/complexities associated with the phenomenon. The selected stations were along the magnetic equator in the South American (Huancayo, dip latitude -1.80°), African (Ilorin, dip latitude -1.82°; Addis Ababa, dip latitude -0.18°), and Philippine (Langkawi, dip latitude -2.32°; Davao, dip latitude -1.02°; Yap, dip latitude -1.49°) sectors. The non-linear quantifiers employed in this work include: recurrence rate, determinism, diagonal line length, entropy, laminarity, Tsallis entropy, Lyapunov exponent and correlation dimension. The EEJ was found to vary from one longitudinal representative station to another, with the strongest EEJ of about 192.5 nT at Huancayo in the South American sector. The degree of complexity in the EEJ was found to vary qualitatively from one sector to another. Probable physical mechanisms responsible for the longitudinal variability of EEJ strength and its complexities are highlighted.
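
    The first quantifier in the list above, the recurrence rate, is simply the fraction of sample pairs that fall within a threshold distance of each other. The following sketch uses a one-dimensional embedding and synthetic signals rather than EEJ indices, and counts self-pairs for brevity:

```python
import numpy as np

def recurrence_rate(x, eps):
    """Fraction of sample pairs closer than eps: the recurrence rate of a
    thresholded distance matrix (1-D embedding; self-pairs included for brevity,
    whereas typical implementations exclude the main diagonal)."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # all pairwise distances
    return float((dist < eps).mean())

# A periodic signal revisits its states far more often than white noise:
t = np.linspace(0, 8 * np.pi, 400)
print(recurrence_rate(np.sin(t), 0.1))                                # higher
print(recurrence_rate(np.random.default_rng(1).standard_normal(400), 0.1))  # lower
```

    The other quantifiers (determinism, laminarity, diagonal line length) are statistics of line structures in the same thresholded recurrence matrix.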

  15. Circularly-symmetric complex normal ratio distribution for scalar transmissibility functions. Part I: Fundamentals

    NASA Astrophysics Data System (ADS)

    Yan, Wang-Ji; Ren, Wei-Xin

    2016-12-01

    Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Due to the inherent randomness of measurement and variability of environmental conditions, uncertainty impacts its applications. This study is focused on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables. The goal is achieved through companion papers. This paper (Part I) is dedicated to dealing with a formal mathematical proof. New theorems on multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of principle of probabilistic transformation of continuous random vectors. The closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are analytically derived. Afterwards, several properties are deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is utilized to verify the accuracy of some representative cases. This work lays the mathematical groundwork to find probabilistic models for raw scalar transmissibility functions, which are to be expounded in detail in Part II of this study.
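
    The Monte Carlo verification mentioned above can be sketched for the simplest case: the ratio of two independent circularly-symmetric complex normal variables, whose modulus has median 1 (by numerator/denominator symmetry) and whose phase is uniform on (-pi, pi]. This is an illustrative check only, not the paper's correlated multivariate cases:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def ccsn(n):
    """Samples of the circularly-symmetric complex normal CN(0, 1)."""
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Ratio of two independent CN(0,1) variables -- a toy "scalar transmissibility".
r = ccsn(N) / ccsn(N)
print(np.median(np.abs(r)))        # ~ 1.0  (modulus distribution symmetric in r <-> 1/r)
print(np.abs(np.angle(r)).mean())  # ~ pi/2 (phase uniform on (-pi, pi])
```

    The closed-form distributions derived in the paper cover the general correlated case, with such simulations serving only to confirm representative instances.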

  16. Service and Education: The Association Between Workload, Patient Complexity, and Teaching on Internal Medicine Inpatient Services.

    PubMed

    Ratcliffe, Temple A; Crabtree, Meghan A; Palmer, Raymond F; Pugh, Jacqueline A; Lanham, Holly J; Leykum, Luci K

    2018-04-01

    Attending rounds remain the primary venue for formal teaching and learning at academic medical centers. Little is known about the effect of increasing clinical demands on teaching during attending rounds. To explore the relationships among teaching time, teaching topics, clinical workload, and patient complexity variables. Observational study of medicine teaching teams from September 2008 through August 2014. Teams at two large teaching hospitals associated with a single medical school were observed for periods of 2 to 4 weeks. Twelve medicine teaching teams consisting of one attending, one second- or third-year resident, two to three interns, and two to three medical students. The study examined the relationships of patient complexity (comorbidities, complications) and clinical workload variables (census, turnover) with educational measures. Teams were clustered based on clinical workload and patient complexity. Educational measures of interest were time spent teaching and number of teaching topics. Data were analyzed both at the daily observation level and across a given patient's admission. We observed 12 teams, 1994 discussions (approximately 373 h of rounds) of 563 patients over 244 observation days. Teams clustered into three groups: low patient complexity/high clinical workload, average patient complexity/low clinical workload, and high patient complexity/high clinical workload. Modest associations for team, patient complexity, and clinical workload variables were noted with total time spent teaching (9.1% of the variance in time spent teaching during a patient's admission; F[8,549] = 6.90, p < 0.001) and number of teaching topics (16% of the variance in the total number of teaching topics during a patient's admission; F[8,548] = 14.18, p < 0.001). Clinical workload and patient complexity characteristics among teams were only modestly associated with total teaching time and teaching topics.

  17. Characterizing Tityus discrepans scorpion venom from a fractal perspective: Venom complexity, effects of captivity, sexual dimorphism, differences among species.

    PubMed

    D'Suze, Gina; Sandoval, Moisés; Sevcik, Carlos

    2015-12-15

    A characteristic of venom elution patterns, shared with many other complex systems, is that many of their features cannot be properly described with statistical or Euclidean concepts. The understanding of such systems became possible with Mandelbrot's fractal analysis. Venom elution patterns were produced using reversed-phase high performance liquid chromatography (HPLC) with 1 mg of venom. One reason for the lack of quantitative analyses of the sources of venom variability is the difficulty of parametrizing the venom chromatograms' complexity. We quantify this complexity by means of an algorithm which estimates the contortedness (Q) of a waveform. Fractal analysis was used to compare venoms and to measure inter- and intra-specific venom variability. We studied variations in venom complexity derived from gender, seasonal and environmental factors, duration of captivity in the laboratory, and the technique used to milk venom. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. A Galerkin Boundary Element Method for two-dimensional nonlinear magnetostatics

    NASA Astrophysics Data System (ADS)

    Brovont, Aaron D.

    The Boundary Element Method (BEM) is a numerical technique for solving partial differential equations that is used broadly among the engineering disciplines. The main advantage of this method is that one needs only to mesh the boundary of a solution domain. A key drawback is the myriad of integrals that must be evaluated to populate the full system matrix. To this day these integrals have been evaluated using numerical quadrature. In this research, a Galerkin formulation of the BEM is derived and implemented to solve two-dimensional magnetostatic problems with a focus on accurate, rapid computation. To this end, exact, closed-form solutions have been derived for all the integrals comprising the system matrix as well as those required to compute fields in post-processing; the need for numerical integration has been eliminated. It is shown that calculation of the system matrix elements using analytical solutions is 15-20 times faster than with numerical integration of similar accuracy. Furthermore, through the example analysis of a c-core inductor, it is demonstrated that the present BEM formulation is a competitive alternative to the Finite Element Method (FEM) for linear magnetostatic analysis. Finally, the BEM formulation is extended to analyze nonlinear magnetostatic problems via the Dual Reciprocity Method (DRBEM). It is shown that a coarse, meshless analysis using the DRBEM is able to achieve RMS error of 3-6% compared to a commercial FEM package in lightly saturated conditions.

  19. Wuestite (Fe(1-x)O) - A review of its defect structure and physical properties

    NASA Technical Reports Server (NTRS)

    Hazen, R. M.; Jeanloz, R.

    1984-01-01

    Such complexities of the Wustite structure as nonstoichiometry, ferric iron variable site distribution, long and short range ordering, and exsolution, yield complex physical properties. Magnesiowustite, a phase which has been suggested to occur in the earth's lower mantle, is also expected to exhibit many of these complexities. Geophysical models including the properties of (Mg, Fe)O should accordingly take into account the uncertainties associated with the synthesis and measurement of iron-rich oxides. Given the variability of the Fe(1-x)O structure, it is important that future researchers define the structural state and extent of exsolution of their samples.

  20. Assessing multiscale complexity of short heart rate variability series through a model-based linear approach

    NASA Astrophysics Data System (ADS)

    Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe

    2017-09-01

    We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is grounded on the identification of the coefficients of an autoregressive model, on the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and on the assessment of its distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to the short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of the cardiac autonomic control, namely in low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands, to the complexity of the cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded on information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrement of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over short time series, MSC allows a more insightful association between cardiac control complexity and physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
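
    The pole-based construction above can be sketched as follows. This illustration fits the AR model by ordinary least squares and, as a simplified proxy for the paper's mean pole position, reports the distance from the unit circle of the band's dominant (closest-to-circle) pole; all names and the synthetic signals are illustrative, not the authors' implementation:

```python
import numpy as np

def ar_poles(x, p):
    """Least-squares AR(p) fit x[t] = sum_k a_k x[t-k] + e[t]; returns the poles."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    X = np.column_stack([x[p - k: len(x) - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return np.roots(np.r_[1.0, -a])   # roots of z^p - a_1 z^(p-1) - ... - a_p

def band_irregularity(x, p, f_lo, f_hi, fs=1.0):
    """Distance from the unit circle of the dominant pole whose frequency
    angle/(2*pi)*fs lies in [f_lo, f_hi]; smaller = more regular oscillation.
    (Simplified proxy: the paper uses the mean pole position in the band.)"""
    poles = ar_poles(x, p)
    freq = np.abs(np.angle(poles)) / (2 * np.pi) * fs
    sel = (freq >= f_lo) & (freq <= f_hi)
    return float(np.min(1.0 - np.abs(poles[sel]))) if sel.any() else 1.0

rng = np.random.default_rng(0)
t = np.arange(1000.0)
slow = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.standard_normal(1000)  # LF rhythm
print(band_irregularity(slow, 8, 0.04, 0.15))                      # near 0: regular
print(band_irregularity(rng.standard_normal(1000), 8, 0.04, 0.15)) # larger: irregular
```

    A pole close to the unit circle in the LF band corresponds to a sharp, regular low-frequency oscillation, which is the signature the MSC method associates with sympathetic regularization during head-up tilt.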

  1. Describing the complexity of systems: multivariable "set complexity" and the information basis of systems biology.

    PubMed

    Galas, David J; Sakhanenko, Nikita A; Skupin, Alexander; Ignac, Tomasz

    2014-02-01

    Context dependence is central to the description of complexity. Keying on the pairwise definition of "set complexity," we use an information theory approach to formulate general measures of systems complexity. We examine the properties of multivariable dependency starting with the concept of interaction information. We then present a new measure for unbiased detection of multivariable dependency, "differential interaction information." This quantity for two variables reduces to the pairwise "set complexity" previously proposed as a context-dependent measure of information in biological systems. We generalize it here to an arbitrary number of variables. Critical limiting properties of the "differential interaction information" are key to the generalization. This measure extends previous ideas about biological information and provides a more sophisticated basis for the study of complexity. The properties of "differential interaction information" also suggest new approaches to data analysis. Given a data set of system measurements, differential interaction information can provide a measure of collective dependence, which can be represented in hypergraphs describing complex system interaction patterns. We investigate this kind of analysis using simulated data sets. The conjoining of a generalized set complexity measure, multivariable dependency analysis, and hypergraphs is our central result. While our focus is on complex biological systems, our results are applicable to any complex system.
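As a concrete illustration of the interaction-information concept the abstract builds on, the three-variable case can be computed by inclusion-exclusion over joint entropies. This is a generic sketch, not the paper's "differential interaction information" itself, and sign conventions differ across the literature.

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy in bits of a sequence of hashable outcomes."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def interaction_information(xs, ys, zs):
    """Three-variable interaction information via inclusion-exclusion
    over joint entropies (one common sign convention)."""
    H = entropy
    return (H(xs) + H(ys) + H(zs)
            - H(list(zip(xs, ys))) - H(list(zip(xs, zs))) - H(list(zip(ys, zs)))
            + H(list(zip(xs, ys, zs))))
```

Under this convention, the XOR of two independent uniform bits yields an interaction information of -1 bit, the classic example of a purely three-way dependency invisible to any pairwise measure.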

  2. A consensus for the development of a vector model to assess clinical complexity.

    PubMed

    Corazza, Gino Roberto; Klersy, Catherine; Formagnana, Pietro; Lenti, Marco Vincenzo; Padula, Donatella

    2017-12-01

    The progressive rise in multimorbidity has made management of complex patients one of the most topical and challenging issues in medicine, both in clinical practice and for healthcare organizations. To make this easier, a score of clinical complexity (CC) would be useful. A vector model to evaluate biological and extra-biological (socio-economic, cultural, behavioural, environmental) domains of CC was proposed a few years ago. However, given that the variables that grade each domain had never been defined, this model has never been used in clinical practice. To overcome these limits, a consensus meeting was organised to grade each domain of CC, and to establish the hierarchy of the domains. A one-day consensus meeting consisting of a multi-professional panel of 25 people was held at our Hospital. In a preliminary phase, the proponents selected seven variables as qualifiers for each of the five above-mentioned domains. In the course of the meeting, the panel voted for five variables considered to be the most representative for each domain. Consensus was established with 2/3 agreement, and all variables were dichotomised. Finally, the various domains were parametrized and ranked within a feasible vector model. A Clinical Complexity Index was set up using the chosen variables. All the domains were graphically represented through a vector model: the biological domain was chosen as the most significant (highest slope), followed by the behavioural and socio-economic domains (intermediate slope), and lastly by the cultural and environmental ones (lowest slope). A feasible and comprehensive tool to evaluate CC in clinical practice is proposed herein.

  3. 13. 22'X34' original vellum, Variable-Angle Launcher, 'SIDEVIEW CAMERA CAR TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. 22'X34' original vellum, Variable-Angle Launcher, 'SIDEVIEW CAMERA CAR TRACK DETAILS' drawn at 1/4'=1'-0' (BUORD Sketch # 208078, PAPW 908). - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  4. 10. 22'X34' original blueprint, Variable-Angle Launcher, 'SIDE VIEW CAMERA CAR-STEEL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. 22'X34' original blueprint, Variable-Angle Launcher, 'SIDE VIEW CAMERA CAR-STEEL FRAME AND AXLES' drawn at 1/2'=1'-0'. (BUORD Sketch # 209124). - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  5. Designing eHealth Applications to Reduce Cognitive Effort for Persons With Severe Mental Illness: Page Complexity, Navigation Simplicity, and Comprehensibility

    PubMed Central

    Spring, Michael R; Hanusa, Barbara H; Eack, Shaun M; Haas, Gretchen L

    2017-01-01

    Background eHealth technologies offer great potential for improving the use and effectiveness of treatments for those with severe mental illness (SMI), including schizophrenia and schizoaffective disorder. This potential can be muted by poor design. There is limited research on designing eHealth technologies for those with SMI, others with cognitive impairments, and those who are not technology savvy. We previously tested a design model, the Flat Explicit Design Model (FEDM), to create eHealth interventions for individuals with SMI. Subsequently, we developed the design concept page complexity, defined via the design variables we created (distinct topic areas, distinct navigation areas, and number of columns used to organize contents) and the variables of text reading level, text reading ease (a newly added variable to the FEDM), and the number of hyperlinks and words on a page. Objective The objective of our study was to report the influence that the 19 variables of the FEDM have on the ability of individuals with SMI to use a website, ratings of a website’s ease of use, and performance on a novel usability task we created, termed content disclosure (a measure of the influence of a homepage’s design on the understanding users gain of a website). Finally, we assessed the performance of 3 groups or dimensions we developed that organize the 19 variables of the FEDM, termed page complexity, navigational simplicity, and comprehensibility. Methods We measured 4 website usability outcomes: ability to find information, time to find information, ease of use, and a user’s ability to accurately judge a website’s contents. A total of 38 persons with SMI (chart diagnosis of schizophrenia or schizoaffective disorder) and 5 mental health websites were used to evaluate the importance of the new design concepts, as well as the other variables in the FEDM. 
Results We found that 11 of the FEDM’s 19 variables were significantly associated with all 4 usability outcomes. Most other variables were significantly related to 2 or 3 of these usability outcomes. With the 5 tested websites, 7 of the 19 variables of the FEDM overlapped with other variables, resulting in 12 distinct variable groups. The 3 design dimensions had acceptable coefficient alphas. Both navigational simplicity and comprehensibility were significantly related to correctly identifying whether information was available on a website. Page complexity and navigational simplicity were significantly associated with the ability and time to find information and ease-of-use ratings. Conclusions The 19 variables and 3 dimensions (page complexity, navigational simplicity, and comprehensibility) of the FEDM offer evidence-based design guidance intended to reduce the cognitive effort required to effectively use eHealth applications, particularly for persons with SMI, and potentially others, including those with cognitive impairments and limited skills or experience with technology. The new variables we examined (topic areas, navigational areas, columns) offer additional and very simple ways to improve simplicity. PMID:28057610

  6. Entropy-based complexity of the cardiovascular control in Parkinson disease: comparison between binning and k-nearest-neighbor approaches.

    PubMed

    Porta, Alberto; Bari, Vlasta; Bassani, Tito; Marchi, Andrea; Tassin, Stefano; Canesi, Margherita; Barbic, Franca; Furlan, Raffaello

    2013-01-01

    Entropy-based approaches are frequently used to quantify complexity of short-term cardiovascular control from spontaneous beat-to-beat variability of heart period (HP) and systolic arterial pressure (SAP). Among these tools the ones optimizing a critical parameter such as the pattern length are receiving more and more attention. This study compares two entropy-based techniques for the quantification of complexity making use of completely different strategies to optimize the pattern length. Comparison was carried out over HP and SAP variability series recorded from 12 Parkinson's disease (PD) patients without orthostatic hypotension or symptoms of orthostatic intolerance and 12 age-matched healthy control (HC) subjects. Regardless of the method, complexity of cardiovascular control increased in PD group, thus suggesting the early impairment of cardiovascular function.
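The "binning" strategy mentioned above can be sketched as a conditional entropy of uniformly quantized patterns. The function names, the bin count, and the fixed pattern length are illustrative placeholders, standing in for the optimized parameters the abstract refers to; this is not the authors' method.

```python
import numpy as np
from collections import Counter

def pattern_entropy(x, L, bins=6):
    """Shannon entropy (nats) of length-L patterns after uniform binning."""
    edges = np.linspace(np.min(x), np.max(x), bins + 1)[1:-1]
    q = np.digitize(x, edges)                    # quantized symbol sequence
    pats = [tuple(q[i:i + L]) for i in range(len(q) - L + 1)]
    n = len(pats)
    return -sum(c / n * np.log(c / n) for c in Counter(pats).values())

def conditional_entropy(x, L, bins=6):
    """CE(L) = H(L) - H(L-1): lower values indicate more regular dynamics."""
    return pattern_entropy(x, L, bins) - pattern_entropy(x, L - 1, bins)
```

A regular (e.g. periodic) series yields a lower conditional entropy than white noise at the same bin count, which is the direction of the complexity comparisons reported in the study.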

  7. Predictive model of complexity in early palliative care: a cohort of advanced cancer patients (PALCOM study).

    PubMed

    Tuca, Albert; Gómez-Martínez, Mónica; Prat, Aleix

    2018-01-01

    The model of early palliative care (PC) integrated in oncology is based on shared care from diagnosis to the end of life and is mainly focused on patients with greater complexity. However, there is no definition of, or tools to evaluate, PC complexity. The objectives of the study were to identify the factors influencing determination of the level of complexity, propose predictive models, and build a complexity scale of PC. We performed a prospective, observational, multicenter study in a cohort of advanced cancer patients with an estimated prognosis ≤ 6 months. An ad hoc structured evaluation including socio-demographic and clinical data, symptom burden, functional and cognitive status, psychosocial problems, and existential-ethical dilemmas was recorded systematically. According to this multidimensional evaluation, investigators classified patients as having high, medium, or low palliative complexity, associated with the need for basic or specialized PC. Logistic regression was used to identify the variables influencing determination of the level of PC complexity and to explore predictive models. We included 324 patients; 41% were classified as having high PC complexity and 42.9% as medium, both levels being associated with specialized PC. Variables influencing determination of PC complexity were as follows: high symptom burden (OR 3.19, 95% CI 1.72-6.17), difficult pain (OR 2.81, 95% CI 1.64-4.9), functional status (OR 0.99, 95% CI 0.98-0.9), and social-ethical-existential risk factors (OR 3.11, 95% CI 1.73-5.77). Logistic analysis of these variables allowed construction of a complexity model and structured scales (PALCOM 1 and 2) with high predictive value (AUC ROC 76%). This study provides a new model and tools to assess complexity in palliative care, which may be very useful in managing referral to specialized PC services and in agreeing on the intensity of their intervention in a model of early shared care integrated in oncology.

  8. Complexity and valued landscapes

    Treesearch

    Michael M. McCarthy

    1979-01-01

    The variable "complexity," or "diversity," has received a great deal of attention in recent research efforts concerned with visual resource management, including the identification of complexity as one of the primary evaluation measures. This paper describes research efforts that support the hypothesis that the landscapes we value are those with...

  9. Reef flattening effects on total richness and species responses in the Caribbean.

    PubMed

    Newman, Steven P; Meesters, Erik H; Dryden, Charlie S; Williams, Stacey M; Sanchez, Cristina; Mumby, Peter J; Polunin, Nicholas V C

    2015-11-01

    There has been ongoing flattening of Caribbean coral reefs, with the loss of habitat having severe implications for these systems. Complexity and its structural components are important to fish species richness and community composition, but little is known about its role for other taxa or species-specific responses. This study reveals the importance of reef habitat complexity and structural components to different taxa of macrofauna, total species richness, and individual coral and fish species in the Caribbean. Species presence and richness of different taxa were visually quantified in one hundred 25-m² plots in three marine reserves in the Caribbean. Sampling was evenly distributed across five levels of visually estimated reef complexity, with five structural components also recorded: the number of corals, number of large corals, slope angle, maximum sponge and maximum octocoral height. Taking advantage of natural heterogeneity in structural complexity within a particular coral reef habitat (Orbicella reefs) and a discrete environmental envelope, thus minimizing other sources of variability, the relative importance of reef complexity and structural components was quantified for different taxa and individual fish and coral species on Caribbean coral reefs using boosted regression trees (BRTs). Boosted regression tree models performed very well when explaining variability in total (82.3%), coral (80.6%) and fish species richness (77.3%), for which the greatest declines in richness occurred below intermediate reef complexity levels. Complexity accounted for very little of the variability in octocorals, sponges, arthropods, annelids or anemones. BRTs revealed species-specific variability and importance for reef complexity and structural components. Coral and fish species occupancy generally declined at low complexity levels, with the exception of two coral species (Pseudodiploria strigosa and Porites divaricata) and four fish species (Halichoeres bivittatus, H. maculipinna, Malacoctenus triangulatus and Stegastes partitus) more common at lower reef complexity levels. A significant interaction between country and reef complexity revealed a non-additive decline in species richness in areas of low complexity and the reserve in Puerto Rico. Flattening of Caribbean coral reefs will result in substantial species losses, with few winners. Individual structural components have considerable value to different species, and their loss may have profound impacts on population responses of coral and fish due to identity effects of key species, which underpin population richness and resilience and may affect essential ecosystem processes and services. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.

  10. 5. VAL LAUNCHER BRIDGE OVER LAUNCHER SLAB TAKEN FROM RESERVOIR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. VAL LAUNCHER BRIDGE OVER LAUNCHER SLAB TAKEN FROM RESERVOIR LOOKING NORTH. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  11. Probing AGN Accretion Physics through AGN Variability: Insights from Kepler

    NASA Astrophysics Data System (ADS)

    Kasliwal, Vishal Pramod

    Active Galactic Nuclei (AGN) exhibit large luminosity variations over the entire electromagnetic spectrum on timescales ranging from hours to years. The variations in luminosity are devoid of any periodic character and appear stochastic. While complex correlations exist between the variability observed in different parts of the electromagnetic spectrum, no frequency band appears to be completely dominant, suggesting that the physical processes producing the variability are exceedingly rich and complex. In the absence of a clear theoretical explanation of the variability, phenomenological models are used to study AGN variability. The stochastic behavior of AGN variability makes formulating such models difficult and connecting them to the underlying physics exceedingly hard. We study AGN light curves serendipitously observed by the NASA Kepler planet-finding mission. Compared to previous ground-based observations, Kepler offers higher precision and a smaller sampling interval resulting in potentially higher quality light curves. Using structure functions, we demonstrate that (1) the simplest statistical model of AGN variability, the damped random walk (DRW), is insufficient to characterize the observed behavior of AGN light curves; and (2) variability begins to occur in AGN on time-scales as short as hours. Of the 20 light curves studied by us, only 3-8 may be consistent with the DRW. The structure functions of the AGN in our sample exhibit complex behavior with pronounced dips on time-scales of 10-100 d suggesting that AGN variability can be very complex and merits further analysis. We examine the accuracy of the Kepler pipeline-generated light curves and find that the publicly available light curves may require re-processing to reduce contamination from field sources. 
We show that while the re-processing changes the exact PSD power law slopes inferred by us, it is unlikely to change the conclusion of our structure function study: Kepler AGN light curves indicate that the DRW is insufficient to characterize AGN variability. We provide a new approach to probing accretion physics with variability by decomposing observed light curves into a set of impulses that drive diffusive processes using C-ARMA models. Applying our approach to Kepler data, we demonstrate how the time-scales reported in the literature can be interpreted in the context of the growth and decay time-scales for flux perturbations and tentatively identify the flux perturbation driving process with accretion disk turbulence on length-scales much longer than the characteristic eddy size. Our analysis technique is applicable to (1) studying the connection between AGN sub-type and variability properties; (2) probing the origins of variability by studying the multi-wavelength behavior of AGN; (3) testing numerical simulations of accretion flows with the goal of creating a library of the variability properties of different accretion mechanisms; (4) hunting for changes in the behavior of the accretion flow by block-analyzing observed light curves; and (5) constraining the sampling requirements of future surveys of AGN variability.
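The structure-function analysis this abstract relies on can be illustrated with a minimal first-order estimator for (possibly irregularly sampled) light curves; the function name and the lag-binning tolerance are illustrative choices, not the authors' pipeline.

```python
import numpy as np

def structure_function(t, f, lags, tol=0.5):
    """First-order structure function SF(tau) = <(f(t + tau) - f(t))^2>,
    estimated by averaging over all epoch pairs whose separation falls
    within `tol` of each requested lag."""
    dt = t[None, :] - t[:, None]          # all pairwise time separations
    df2 = (f[None, :] - f[:, None]) ** 2  # squared flux differences
    out = []
    for tau in lags:
        mask = np.abs(dt - tau) < tol     # pairs separated by ~tau
        out.append(df2[mask].mean() if mask.any() else np.nan)
    return np.array(out)
```

For white noise the structure function is flat at twice the variance, while a random-walk-like process (such as the DRW at short lags) shows SF growing with lag, which is what the dips and breaks discussed above are measured against.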

  12. Navigating complex sample analysis using national survey data.

    PubMed

    Saylor, Jennifer; Friedmann, Erika; Lee, Hyeon Joo

    2012-01-01

    The National Center for Health Statistics conducts the National Health and Nutrition Examination Survey and other national surveys with probability-based complex sample designs. Goals of national surveys are to provide valid data for the population of the United States. Analyses of data from population surveys present unique challenges in the research process but are valuable avenues to study the health of the United States population. The aim of this study was to demonstrate the importance of using complex data analysis techniques for data obtained with complex multistage sampling design and provide an example of analysis using the SPSS Complex Samples procedure. Illustration of challenges and solutions specific to secondary data analysis of national databases are described using the National Health and Nutrition Examination Survey as the exemplar. Oversampling of small or sensitive groups provides necessary estimates of variability within small groups. Use of weights without complex samples accurately estimates population means and frequency from the sample after accounting for over- or undersampling of specific groups. Weighting alone leads to inappropriate population estimates of variability, because they are computed as if the measures were from the entire population rather than a sample in the data set. The SPSS Complex Samples procedure allows inclusion of all sampling design elements, stratification, clusters, and weights. Use of national data sets allows use of extensive, expensive, and well-documented survey data for exploratory questions but limits analysis to those variables included in the data set. The large sample permits examination of multiple predictors and interactive relationships. Merging data files, availability of data in several waves of surveys, and complex sampling are techniques used to provide a representative sample but present unique challenges. In sophisticated data analysis techniques, use of these data is optimized.
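The weighting logic described above can be illustrated with a toy stratified sample (all numbers below are synthetic, not NHANES data): sampling weights recover the population mean from an oversampled design, while correct variance estimation would additionally need the strata, cluster, and weight information together, which is what the SPSS Complex Samples procedure consumes.

```python
import numpy as np

# Hypothetical population: stratum A is 90% of the population (mean 10),
# stratum B is 10% (mean 20), but the survey oversamples B to half the sample.
rng = np.random.default_rng(1)
a = rng.normal(10, 1, 500)   # 500 respondents from stratum A
b = rng.normal(20, 1, 500)   # 500 respondents from oversampled stratum B
y = np.concatenate([a, b])

# Sampling weight = population share / sample share for each respondent.
w = np.concatenate([np.full(500, 0.9 / 0.5), np.full(500, 0.1 / 0.5)])

unweighted = y.mean()                # biased toward the oversampled stratum (~15)
weighted = np.average(y, weights=w)  # close to the true population mean of 11
```

Note that applying weights alone, as the abstract warns, still misstates variability: standard errors must account for the design (strata and clusters), not just the weights.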

  13. The Connection between the Complexity of Perception of an Event and Judging Decisions in a Complex Situation

    ERIC Educational Resources Information Center

    Rauchberger, Nirit; Kaniel, Shlomo; Gross, Zehavit

    2017-01-01

    This study examines the process of judging complex real-life events in Israel: the disengagement from Gush Katif, Rabin's assassination and the Second Lebanon War. The process of judging is based on Weiner's attribution model, (Weiner, 2000, 2006); however, due to the complexity of the events studied, variables were added to characterize the…

  14. Spatial patterns of throughfall isotopic composition at the event and seasonal timescales

    Treesearch

    Scott T. Allen; Richard F. Keim; Jeffrey J. McDonnell

    2015-01-01

    Spatial variability of throughfall isotopic composition in forests is indicative of complex processes occurring in the canopy and remains insufficiently understood to properly characterize precipitation inputs to the catchment water balance. Here we investigate variability of throughfall isotopic composition with the objectives: (1) to quantify the spatial variability...

  15. Family medicine outpatient encounters are more complex than those of cardiology and psychiatry.

    PubMed

    Katerndahl, David; Wood, Robert; Jaén, Carlos Roberto

    2011-01-01

    Comparison studies suggest that the guideline-concordant care provided for specific medical conditions is less optimal in primary care compared with cardiology and psychiatry settings. The purpose of this study is to estimate the relative complexity of patient encounters in general/family practice, cardiology, and psychiatry settings. A secondary analysis of the 2000 National Ambulatory Medical Care Survey data for ambulatory patients seen in general/family practice, cardiology, and psychiatry settings was performed. The complexity for each variable was estimated as the quantity weighted by variability and diversity. There is minimal difference in the unadjusted input and total encounter complexity of general/family practice and cardiology; psychiatry's input is less complex. Cardiology encounters involved more input quantitatively, but the diversity of general/family practice input eliminated the difference. Cardiology also involved more complex output. However, when the duration of visit is factored in, the complexity of care provided per hour in general/family practice is 33% greater than in cardiology and 5 times greater than in psychiatry. Care during family physician visits is more complex per hour than care during visits to cardiologists or psychiatrists. This may account for a lower rate of completion of process items measured for quality of care.

  16. Effect of spray drying on the properties of amylose-hexadecylammonium chloride inclusion complexes

    USDA-ARS?s Scientific Manuscript database

    Water soluble amylose-hexadecyl ammonium chloride complexes were prepared from high amylose corn starch and hexadecyl ammonium chloride by excess steam jet cooking. Amylose inclusion complexes were spray dried to determine the viability of spray drying as a production method. The variables tested in...

  17. The discrete and localized nature of the variable emission from active regions

    NASA Technical Reports Server (NTRS)

    Arndt, Martina Belz; Habbal, Shadia Rifai; Karovska, Margarita

    1994-01-01

    Using data from the Extreme Ultraviolet (EUV) Spectroheliometer on Skylab, we study the empirical characteristics of the variable emission in active regions. These simultaneous multi-wavelength observations clearly confirm that active regions consist of a complex of loops at different temperatures. The variable emission from this complex has very well-defined properties that can be quantitatively summarized as follows: (1) It is localized predominantly around the footpoints, where it occurs at discrete locations. (2) The strongest variability does not necessarily coincide with the most intense emission. (3) The fraction of the area of the footpoints, δn/N, that exhibits variable emission varies by ±15% as a function of time, at any of the wavelengths measured. It also varies very little from footpoint to footpoint. (4) This fractional variation is temperature dependent, with a maximum around 10^5 K. (5) The ratio of the intensity of the variable to the average background emission, δI/Ī, also changes with temperature. In addition, we find that these distinctive characteristics persist even when flares occur within the active region.

  18. An Estimate of the Vertical Variability of Temperature at KSC Launch Complex 39-B

    NASA Technical Reports Server (NTRS)

    Brenton, James

    2017-01-01

    The purpose of this analysis is to determine the vertical variability of the air temperature below 500 feet at Launch Complex (LC) 39-B at Kennedy Space Center (KSC). This analysis utilizes data from the LC39-B Lightning Protection System (LPS) Towers and the 500 foot Tower 313. The results of this analysis will be used to help evaluate the ambient air temperature Launch Commit Criteria (LCC) for the Exploration Mission 1 launch.

  19. COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.

    USGS Publications Warehouse

    Hromadka, T.V.; Yen, C.C.; Guymon, G.L.

    1985-01-01

    The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.

  20. Landscape structure control on soil CO2 efflux variability in complex terrain: Scaling from point observations to watershed scale fluxes

    Treesearch

    Diego A. Riveros-Iregui; Brian L. McGlynn

    2009-01-01

    We investigated the spatial and temporal variability of soil CO2 efflux across 62 sites of a 393-ha complex watershed of the northern Rocky Mountains. Growing season (83 day) cumulative soil CO2 efflux varied from ~300 to ~2000 g CO2 m-2, depending upon landscape position, with a median of 879.8 g CO2 m-2. Our findings revealed that highest soil CO2 efflux rates were...

  1. Synaptic dynamics contribute to long-term single neuron response fluctuations.

    PubMed

    Reinartz, Sebastian; Biro, Istvan; Gal, Asaf; Giugliano, Michele; Marom, Shimon

    2014-01-01

    Firing rate variability at the single neuron level is characterized by long-memory processes and complex statistics over a wide range of time scales (from milliseconds up to several hours). Here, we focus on the contribution of the non-stationary efficacy of the ensemble of synapses (activated in response to a given stimulus) to single-neuron response variability. We present and validate a method tailored for controlled and specific long-term activation of a single cortical neuron in vitro via synaptic or antidromic stimulation, enabling a clear separation between two determinants of neuronal response variability: membrane excitability dynamics vs. synaptic dynamics. Applying this method, we show that, within the range of physiological activation frequencies, the synaptic ensemble of a given neuron is a key contributor to the neuronal response variability, long-memory processes, and complex statistics observed over extended time scales. Synaptic transmission dynamics affect response variability at stimulation rates substantially lower than those that drive excitability resources to fluctuate. Implications for network-embedded neurons are discussed.

  2. Robust peptidoglycan growth by dynamic and variable multi-protein complexes.

    PubMed

    Pazos, Manuel; Peters, Katharina; Vollmer, Waldemar

    2017-04-01

    In Gram-negative bacteria such as Escherichia coli the peptidoglycan sacculus resides in the periplasm, a compartment that experiences changes in pH value, osmolality, ion strength and other parameters depending on the cell's environment. Hence, the cell needs robust peptidoglycan growth mechanisms to grow and divide under different conditions. Here we propose a model according to which the cell achieves robust peptidoglycan growth by employing dynamic multi-protein complexes, which assemble with variable composition from freely diffusing sets of peptidoglycan synthases, hydrolases and their regulators, whereby the composition of the active complexes depends on the cell cycle state - cell elongation or division - and the periplasmic growth conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Quantifying networks complexity from information geometry viewpoint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Domenico, E-mail: domenico.felice@unicam.it; Mancini, Stefano; INFN-Sezione di Perugia, Via A. Pascoli, I-06123 Perugia

    We consider a Gaussian statistical model whose parameter space is given by the variances of random variables. Underlying this model we identify networks by interpreting random variables as sitting on vertices and their correlations as weighted edges among vertices. We then associate to the parameter space a statistical manifold endowed with a Riemannian metric structure (that of Fisher-Rao). Going on, in analogy with the microcanonical definition of entropy in Statistical Mechanics, we introduce an entropic measure of networks complexity. We prove that it is invariant under networks isomorphism. Above all, considering networks as simplicial complexes, we evaluate this entropy on simplexes and find that it monotonically increases with their dimension.

  4. Healthcare tariffs for specialist inpatient neurorehabilitation services: rationale and development of a UK casemix and costing methodology.

    PubMed

    Turner-Stokes, Lynne; Sutch, Stephen; Dredge, Robert

    2012-03-01

    To describe the rationale and development of a casemix model and costing methodology for tariff development for specialist neurorehabilitation services in the UK. Patients with complex needs incur higher treatment costs. Fair payment should be weighted in proportion to the costs of providing treatment, and should allow for variation over time. CASEMIX MODEL AND BAND-WEIGHTING: Case complexity is measured by the Rehabilitation Complexity Scale (RCS). Cases are divided into five bands of complexity, based on the total RCS score. The principal determinant of costs in rehabilitation is staff time. Total staff hours/week (estimated from the Northwick Park Nursing and Therapy Dependency Scales) are analysed within each complexity band, through cross-sectional analysis of parallel ratings. A 'band-weighting' factor is derived from the relative proportions of staff time within each of the five bands. Total unit treatment costs are obtained from retrospective analysis of provider hospitals' budget and accounting statements. Mean bed-day costs (total unit cost/occupied bed days) are divided broadly into 'variable' and 'non-variable' components. In the weighted costing model, the band-weighting factor is applied to the variable portion of the bed-day cost to derive a banded cost, and thence a set of cost-multipliers. Preliminary data from one unit are presented to illustrate how this weighted costing model will be applied to derive a multilevel banded payment model, based on serial complexity ratings, to allow for change over time.
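The banded costing arithmetic described above reduces to a simple calculation: the band-weighting factor scales only the variable share of the mean bed-day cost, and multipliers are expressed relative to the lowest band. All numbers below are hypothetical placeholders, not actual tariff values.

```python
# Hypothetical inputs (not real tariff data).
mean_bed_day_cost = 500.0   # total unit cost / occupied bed days
variable_share = 0.6        # portion of cost that varies with staff time
band_weights = [0.6, 0.8, 1.0, 1.3, 1.7]  # relative staff time per complexity band

fixed = mean_bed_day_cost * (1 - variable_share)

# Banded cost = fixed component + band weight * variable component.
banded_costs = [fixed + w * variable_share * mean_bed_day_cost for w in band_weights]

# Cost multipliers relative to the lowest complexity band.
multipliers = [c / banded_costs[0] for c in banded_costs]
```

With these placeholder figures, a band carrying weight 1.0 reproduces the mean bed-day cost, and higher bands pay proportionally more only on the variable portion.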

  5. Task complexity and maximal isometric strength gains through motor learning

    PubMed Central

    McGuire, Jessica; Green, Lara A.; Gabriel, David A.

    2014-01-01

    Abstract This study compared the effects of a simple versus complex contraction pattern on the acquisition, retention, and transfer of maximal isometric strength gains and reductions in force variability. A control group (N = 12) performed simple isometric contractions of the wrist flexors. An experimental group (N = 12) performed complex proprioceptive neuromuscular facilitation (PNF) contractions consisting of maximal isometric wrist extension immediately reversing force direction to wrist flexion within a single trial. Ten contractions were completed on three consecutive days with a retention and transfer test 2‐weeks later. For the retention test, the groups performed their assigned contraction pattern followed by a transfer test that consisted of the other contraction pattern for a cross‐over design. Both groups exhibited comparable increases in strength (20.2%, P < 0.01) and reductions in mean torque variability (26.2%, P < 0.01), which were retained and transferred. There was a decrease in the coactivation ratio (antagonist/agonist muscle activity) for both groups, which was retained and transferred (35.2%, P < 0.01). The experimental group exhibited a linear decrease in variability of the torque‐ and sEMG‐time curves, indicating transfer to the simple contraction pattern (P < 0.01). The control group underwent a decrease in variability of the torque‐ and sEMG‐time curves from the first day of training to retention, but participants returned to baseline levels during the transfer condition (P < 0.01). However, the difference between torque RMS error versus the variability in torque‐ and sEMG‐time curves suggests the demands of the complex task were transferred, but could not be achieved in a reproducible way. PMID:25428951

  6. Recurrence Quantification Analysis of Sentence-Level Speech Kinematics.

    PubMed

    Jackson, Eric S; Tiede, Mark; Riley, Michael A; Whalen, D H

    2016-12-01

    Current approaches to assessing sentence-level speech variability rely on measures that quantify variability across utterances and use normalization procedures that alter raw trajectory data. The current work tests the feasibility of a less restrictive nonlinear approach-recurrence quantification analysis (RQA)-via a procedural example and subsequent analysis of kinematic data. To test the feasibility of RQA, lip aperture (i.e., the Euclidean distance between lip-tracking sensors) was recorded for 21 typically developing adult speakers during production of a simple utterance. The utterance was produced in isolation and in carrier structures differing just in length or in length and complexity. Four RQA indices were calculated: percent recurrence (%REC), percent determinism (%DET), stability (MAXLINE), and stationarity (TREND). Percent determinism (%DET) decreased only for the most linguistically complex sentence; MAXLINE decreased as a function of linguistic complexity but increased for the longer-only sentence; TREND decreased as a function of both length and linguistic complexity. This research note demonstrates the feasibility of using RQA as a tool to compare speech variability across speakers and groups. RQA offers promise as a technique to assess effects of potential stressors (e.g., linguistic or cognitive factors) on the speech production system.
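For readers unfamiliar with the RQA indices mentioned above, a minimal sketch of %REC and %DET follows. It uses scalar states and a fixed radius for brevity; real RQA, including this study's, typically works on embedded state vectors, and the signal below is an illustrative assumption.

```python
def recurrence_matrix(x, radius):
    """R[i][j] = 1 when states i and j lie within `radius` of each other."""
    n = len(x)
    return [[1 if abs(x[i] - x[j]) <= radius else 0 for j in range(n)]
            for i in range(n)]

def percent_recurrence(R):
    """Share of all point pairs (including the trivial main diagonal)
    that recur."""
    n = len(R)
    return 100.0 * sum(R[i][j] for i in range(n) for j in range(n)) / (n * n)

def percent_determinism(R, lmin=2):
    """Share of off-diagonal recurrent points lying on diagonal lines of
    length >= lmin; high %DET indicates deterministic structure."""
    n = len(R)
    total = sum(R[i][j] for i in range(n) for j in range(n) if i != j)
    on_lines = 0
    for k in range(-(n - 1), n):       # every diagonal except the main one
        if k == 0:
            continue
        diag = [R[i][i + k] for i in range(n) if 0 <= i + k < n]
        run = 0
        for v in diag + [0]:           # sentinel flushes the final run
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    return 100.0 * on_lines / total if total else 0.0

signal = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]   # perfectly periodic signal
R = recurrence_matrix(signal, radius=0.1)
print(percent_determinism(R))  # 100.0 for a periodic signal
```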

  7. Recurrence Quantification Analysis of Sentence-Level Speech Kinematics

    PubMed Central

    Tiede, Mark; Riley, Michael A.; Whalen, D. H.

    2016-01-01

    Purpose Current approaches to assessing sentence-level speech variability rely on measures that quantify variability across utterances and use normalization procedures that alter raw trajectory data. The current work tests the feasibility of a less restrictive nonlinear approach—recurrence quantification analysis (RQA)—via a procedural example and subsequent analysis of kinematic data. Method To test the feasibility of RQA, lip aperture (i.e., the Euclidean distance between lip-tracking sensors) was recorded for 21 typically developing adult speakers during production of a simple utterance. The utterance was produced in isolation and in carrier structures differing just in length or in length and complexity. Four RQA indices were calculated: percent recurrence (%REC), percent determinism (%DET), stability (MAXLINE), and stationarity (TREND). Results Percent determinism (%DET) decreased only for the most linguistically complex sentence; MAXLINE decreased as a function of linguistic complexity but increased for the longer-only sentence; TREND decreased as a function of both length and linguistic complexity. Conclusions This research note demonstrates the feasibility of using RQA as a tool to compare speech variability across speakers and groups. RQA offers promise as a technique to assess effects of potential stressors (e.g., linguistic or cognitive factors) on the speech production system. PMID:27824987

  8. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hero, Alfred O.; Rajaratnam, Bala

    When can reliable inference be drawn in the ‘‘Big Data’’ context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for ‘‘Big Data.’’ Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
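The variable-rich, sample-starved regime described above can be illustrated with a small simulation: holding the sample size n fixed, the largest purely spurious correlation among *independent* variables grows as p increases. This sketch is illustrative only and is not the authors' framework; the sample sizes and seed are arbitrary choices.

```python
import random, math

def max_spurious_correlation(n, p, seed=0):
    """With n samples of p independent Gaussian variables, the largest
    observed pairwise correlation grows with p even though every true
    correlation is exactly zero."""
    rng = random.Random(seed)
    cols = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(p)]
    best = 0.0
    for i in range(p):
        for j in range(i + 1, p):
            xi, xj = cols[i], cols[j]
            mi, mj = sum(xi) / n, sum(xj) / n
            cov = sum((a - mi) * (b - mj) for a, b in zip(xi, xj))
            si = math.sqrt(sum((a - mi) ** 2 for a in xi))
            sj = math.sqrt(sum((b - mj) ** 2 for b in xj))
            best = max(best, abs(cov / (si * sj)))
    return best

# Sample-starved regime: n fixed at 10 while p grows.
for p in (10, 50, 200):
    print(p, round(max_spurious_correlation(n=10, p=p), 2))
```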

  9. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    PubMed Central

    Hero, Alfred O.; Rajaratnam, Bala

    2015-01-01

    When can reliable inference be drawn in the “Big Data” context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for “Big Data”. Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks. PMID:27087700

  10. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    DOE PAGES

    Hero, Alfred O.; Rajaratnam, Bala

    2015-12-09

    When can reliable inference be drawn in the ‘‘Big Data’’ context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for ‘‘Big Data.’’ Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.

  11. Decoding the spatial signatures of multi-scale climate variability - a climate network perspective

    NASA Astrophysics Data System (ADS)

    Donner, R. V.; Jajcay, N.; Wiedermann, M.; Ekhtiari, N.; Palus, M.

    2017-12-01

    During the last years, the application of complex networks as a versatile tool for analyzing complex spatio-temporal data has gained increasing interest. Establishing this approach as a new paradigm in climatology has already provided valuable insights into key spatio-temporal climate variability patterns across scales, including novel perspectives on the dynamics of the El Nino Southern Oscillation or the emergence of extreme precipitation patterns in monsoonal regions. In this work, we report first attempts to employ network analysis for disentangling multi-scale climate variability. Specifically, we introduce the concept of scale-specific climate networks, which comprises a sequence of networks representing the statistical association structure between variations at distinct time scales. For this purpose, we consider global surface air temperature reanalysis data and subject the corresponding time series at each grid point to a complex-valued continuous wavelet transform. From this time-scale decomposition, we obtain three types of signals per grid point and scale - amplitude, phase and reconstructed signal, the statistical similarity of which is then represented by three complex networks associated with each scale. We provide a detailed analysis of the resulting connectivity patterns reflecting the spatial organization of climate variability at each chosen time-scale. Global network characteristics like transitivity or network entropy are shown to provide a new view on the (global average) relevance of different time scales in climate dynamics. Beyond expected trends originating from the increasing smoothness of fluctuations at longer scales, network-based statistics reveal different degrees of fragmentation of spatial co-variability patterns at different scales and zonal shifts among the key players of climate variability from tropically to extra-tropically dominated patterns when moving from inter-annual to decadal scales and beyond. 
The obtained results demonstrate the potential usefulness of systematically exploiting scale-specific climate networks, whose general patterns are in line with existing climatological knowledge, but provide vast opportunities for further quantifications at local, regional and global scales that are yet to be explored.

  12. Recurrence-plot-based measures of complexity and their application to heart-rate-variability data.

    PubMed

    Marwan, Norbert; Wessel, Niels; Meyerfeldt, Udo; Schirdewan, Alexander; Kurths, Jürgen

    2002-08-01

    The knowledge of transitions between regular, laminar or chaotic behaviors is essential to understand the underlying mechanisms behind complex systems. While linear approaches are often insufficient to describe such processes, the available nonlinear methods typically require rather long observation times. To overcome these difficulties, we propose measures of complexity based on vertical structures in recurrence plots and apply them to the logistic map as well as to heart-rate-variability data. For the logistic map these measures enable us not only to detect transitions between chaotic and periodic states, but also to identify laminar states, i.e., chaos-chaos transitions. The traditional recurrence quantification analysis fails to detect the latter transitions. Applying our measures to the heart-rate-variability data, we are able to detect and quantify the laminar phases before a life-threatening cardiac arrhythmia occurs, thereby facilitating a prediction of such an event. Our findings could be of importance for the therapy of malignant cardiac arrhythmias.
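A minimal sketch of the vertical-structure idea above: laminarity measures the fraction of recurrence points forming vertical lines, which diagonal-based measures such as %DET do not capture. The radius and the test signal are illustrative assumptions, not data from the study.

```python
def laminarity(R, vmin=2):
    """Fraction of recurrent points forming vertical lines of length >= vmin.
    Vertical structures mark laminar (intermittent) phases, which
    diagonal-based recurrence measures miss."""
    n = len(R)
    total = sum(R[i][j] for i in range(n) for j in range(n))
    vertical = 0
    for j in range(n):                 # scan each column for runs of 1s
        run = 0
        for v in [R[i][j] for i in range(n)] + [0]:   # sentinel flushes run
            if v:
                run += 1
            else:
                if run >= vmin:
                    vertical += run
                run = 0
    return 100.0 * vertical / total if total else 0.0

# A laminar segment (constant values) produces a block of recurrences.
x = [0.0, 0.0, 0.0, 0.0, 5.0, 1.0, 7.0, 2.0]
R = [[1 if abs(a - b) <= 0.1 else 0 for b in x] for a in x]
print(round(laminarity(R), 1))  # 80.0
```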

  13. A Complex Systems Approach to Causal Discovery in Psychiatry.

    PubMed

    Saxe, Glenn N; Statnikov, Alexander; Fenyo, David; Ren, Jiwen; Li, Zhiguo; Prasad, Meera; Wall, Dennis; Bergman, Nora; Briggs, Ernestine C; Aliferis, Constantin

    2016-01-01

    Conventional research methodologies and data analytic approaches in psychiatric research are unable to reliably infer causal relations without experimental designs, or to make inferences about the functional properties of the complex systems in which psychiatric disorders are embedded. This article describes a series of studies to validate a novel hybrid computational approach, the Complex Systems-Causal Network (CS-CN) method, designed to integrate causal discovery within a complex systems framework for psychiatric research. The CS-CN method was first applied to an existing dataset on psychopathology in 163 children hospitalized with injuries (validation study). Next, it was applied to a much larger dataset of traumatized children (replication study). Finally, the CS-CN method was applied in a controlled experiment using a 'gold standard' dataset for causal discovery and compared with other methods for accurately detecting causal variables (resimulation controlled experiment). The CS-CN method successfully detected a causal network of 111 variables and 167 bivariate relations in the initial validation study. This causal network had well-defined adaptive properties and a set of variables was found that disproportionally contributed to these properties. Modeling the removal of these variables resulted in significant loss of adaptive properties. The CS-CN method was successfully applied in the replication study and performed better than traditional statistical methods, and similarly to state-of-the-art causal discovery algorithms in the causal detection experiment. The CS-CN method was validated, replicated, and yielded both novel and previously validated findings related to risk factors and potential treatments of psychiatric disorders. The novel approach yields both fine-grain (micro) and high-level (macro) insights and thus represents a promising approach for complex systems-oriented research in psychiatry.

  14. Modeling the development of written language

    PubMed Central

    Puranik, Cynthia S.; Foorman, Barbara; Foster, Elizabeth; Wilson, Laura Gehron; Tschinkel, Erika; Kantor, Patricia Thatcher

    2011-01-01

    Alternative models of the structure of individual and developmental differences of written composition and handwriting fluency were tested using confirmatory factor analysis of writing samples provided by first- and fourth-grade students. For both groups, a five-factor model provided the best fit to the data. Four of the factors represented aspects of written composition: macro-organization (use of top sentence and number and ordering of ideas), productivity (number and diversity of words used), complexity (mean length of T-unit and syntactic density), and spelling and punctuation. The fifth factor represented handwriting fluency. Handwriting fluency was correlated with written composition factors at both grades. The magnitude of developmental differences between first grade and fourth grade expressed as effect sizes varied for variables representing the five constructs: large effect sizes were found for productivity and handwriting fluency variables; moderate effect sizes were found for complexity and macro-organization variables; and minimal effect sizes were found for spelling and punctuation variables. PMID:22228924

  15. Study of a variable mass Atwood's machine using a smartphone

    NASA Astrophysics Data System (ADS)

    Lopez, Dany; Caprile, Isidora; Corvacho, Fernando; Reyes, Orfa

    2018-03-01

    The Atwood machine was invented in 1784 by George Atwood, and this system has been widely studied both theoretically and experimentally over the years. Nowadays, many experimental physics courses include both Atwood's machine and variable-mass systems to introduce more complex concepts in physics. To study the dynamics of the masses that compose the variable Atwood's machine, laboratories typically use a smart pulley. The first work to introduce a smartphone as data-acquisition equipment for studying acceleration in the Atwood's machine was that of M. Monteiro et al.; since then, no further information has been available on the use of smartphones in variable mass systems. This prompted us to study this kind of system by means of data obtained with a smartphone and to show the practicality of using smartphones in complex experimental situations.
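A minimal sketch of the dynamics being measured, assuming an ideal Atwood machine (massless, frictionless pulley and string) and a hypothetical linear mass-loss profile on one side; the masses and leak rate are invented for illustration and are not from the study.

```python
G = 9.81  # gravitational acceleration, m/s^2

def atwood_acceleration(m1, m2, g=G):
    """Ideal Atwood machine: a = (m1 - m2) g / (m1 + m2)."""
    return (m1 - m2) * g / (m1 + m2)

def variable_mass_profile(m1_0, rate, m2, t_max, dt=0.5):
    """Hypothetical linear mass loss from side 1 (e.g. a leaking container):
    m1(t) = m1_0 - rate * t, clamped at zero."""
    t, out = 0.0, []
    while t <= t_max:
        m1 = max(m1_0 - rate * t, 0.0)
        out.append((t, atwood_acceleration(m1, m2)))
        t += dt
    return out

# Acceleration decreases as side 1 loses mass toward balance with side 2.
for t, a in variable_mass_profile(m1_0=0.5, rate=0.05, m2=0.3, t_max=2.0):
    print(f"t={t:.1f}s  a={a:+.2f} m/s^2")
```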

  16. Longitudinal Surveys of Australian Youth (LSAY): Derived Variables. Technical Report 64

    ERIC Educational Resources Information Center

    National Centre for Vocational Education Research (NCVER), 2012

    2012-01-01

    This technical report describes the 24 derived variables developed for users of the Longitudinal Surveys of Australian Youth (LSAY) data. The variables fall into the categories education, employment and social, and help to simplify the complexity of the LSAY data by providing useful indicators for analysis. To help LSAY data users understand and…

  17. Longitudinal Surveys of Australian Youth (LSAY): 1995 Cohort Derived Variables. Technical Report 69

    ERIC Educational Resources Information Center

    National Centre for Vocational Education Research (NCVER), 2012

    2012-01-01

    This technical report details the derived variables developed for users of the Longitudinal Surveys of Australian Youth (LSAY) data. The derived variables fall into the categories education, employment and social, and help to simplify the complexity of the LSAY data by providing useful indicators for analysis. To help LSAY data users understand…

  18. Forest fuels and potential fire behaviour 12 years after variable-retention harvest in lodgepole pine

    Treesearch

    Justin S. Crotteau; Christopher R. Keyes; Elaine K. Sutherland; David K. Wright; Joel M. Egan

    2016-01-01

    Variable-retention harvesting in lodgepole pine offers an alternative to conventional, even-aged management. This harvesting technique promotes structural complexity and age-class diversity in residual stands and promotes resilience to disturbance. We examined fuel loads and potential fire behaviour 12 years after two modes of variable-retention harvesting (...

  19. Are Middle School Mathematics Teachers Able to Solve Word Problems without Using Variable?

    ERIC Educational Resources Information Center

    Gökkurt Özdemir, Burçin; Erdem, Emrullah; Örnek, Tugba; Soylu, Yasin

    2018-01-01

    Many people consider problem solving as a complex process in which variables such as "x," "y" are used. Problems may not be solved by only using "variable." Problem solving can be rationalized and made easier using practical strategies. When especially the development of children at younger ages is considered, it is…

  20. Does Variability across Events Affect Verb Learning in English, Mandarin, and Korean?

    ERIC Educational Resources Information Center

    Childers, Jane B.; Paik, Jae H.; Flores, Melissa; Lai, Gabrielle; Dolan, Megan

    2017-01-01

    Extending new verbs is important in becoming a productive speaker of a language. Prior results show children have difficulty extending verbs when they have seen events with varied agents. This study further examines the impact of variability on verb learning and asks whether variability interacts with event complexity or differs by language.…

  1. New cohort growth and survival in variable retention harvests of a pine ecosystem in Minnesota, USA

    Treesearch

    Rebecca A. Montgomery; Brian J. Palik; Suzanne B. Boyden; Peter B. Reich

    2013-01-01

    There is significant interest in silvicultural systems such as variable retention harvesting (VRH) that emulate natural disturbance and increase structural complexity, spatial heterogeneity, and biological diversity in managed forests. However, the consequences of variable retention harvesting for new cohort growth and survival are not well characterized in many forest...

  2. Because Trucks Aren't Bicycles: Orthographic Complexity as an Important Variable in Reading Research

    ERIC Educational Resources Information Center

    Galletly, Susan A.; Knight, Bruce Allen

    2013-01-01

    Severe enduring reading- and writing-accuracy difficulties seem a phenomenon largely restricted to nations using complex orthographies, notably Anglophone nations, given English's highly complex orthography (Geva and Siegel, "Read Writ" 12:1-30, 2000; Landerl et al., "Cognition" 63:315-334, 1997; Share, "Psychol Bul"l…

  3. Text Complexity and Young Adult Literature: Establishing Its Place

    ERIC Educational Resources Information Center

    Glaus, Marci

    2014-01-01

    Preparing students for college and careers in the 21st century has shed light on text complexity as an important variable for consideration in English Language Arts. Authors of The Common Core State Standards (CCSS) define text complexity as broad, highlighting qualitative, rather than quantitative evaluations of narrative fiction as appropriate…

  4. Influences of Sentence Length and Syntactic Complexity on the Speech Motor Control of Children Who Stutter

    ERIC Educational Resources Information Center

    MacPherson, Megan K.; Smith, Anne

    2013-01-01

    Purpose: To investigate the potential effects of increased sentence length and syntactic complexity on the speech motor control of children who stutter (CWS). Method: Participants repeated sentences of varied length and syntactic complexity. Kinematic measures of articulatory coordination variability and movement duration during perceptually…

  5. Variations in task constraints shape emergent performance outcomes and complexity levels in balancing.

    PubMed

    Caballero Sánchez, Carla; Barbado Murillo, David; Davids, Keith; Moreno Hernández, Francisco J

    2016-06-01

    This study investigated the extent to which specific interacting constraints of performance might increase or decrease the emergent complexity in a movement system, and whether this could affect the relationship between observed movement variability and the central nervous system's capacity to adapt to perturbations during balancing. Fifty-two healthy volunteers performed eight trials where different performance constraints were manipulated: task difficulty (three levels) and visual biofeedback conditions (with and without the center of pressure (COP) displacement and a target displayed). Balance performance was assessed using COP-based measures: mean velocity magnitude (MVM) and bivariate variable error (BVE). To assess the complexity of COP, fuzzy entropy (FE) and detrended fluctuation analysis (DFA) were computed. ANOVAs showed that MVM and BVE increased when task difficulty increased. During biofeedback conditions, individuals showed higher MVM but lower BVE at the easiest level of task difficulty. Overall, higher FE and lower DFA values were observed when biofeedback was available. On the other hand, FE reduced and DFA increased as difficulty level increased, in the presence of biofeedback. However, when biofeedback was not available, the opposite trend in FE and DFA values was observed. Regardless of changes to task constraints and the variable investigated, balance performance was positively related to complexity in every condition. Data revealed how specificity of task constraints can result in an increase or decrease in complexity emerging in a neurobiological system during balance performance.
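For reference, a compact sketch of the DFA computation mentioned above: integrate the mean-subtracted series, remove a linear trend within boxes at each scale, and fit the slope of log F(s) versus log s. The scales and white-noise test signal are illustrative assumptions; the study's exact settings are not reproduced here.

```python
import math
import random

def linear_detrend_rms(segment):
    """RMS residual after a least-squares linear fit to one box."""
    n = len(segment)
    xs = list(range(n))
    mx, my = sum(xs) / n, sum(segment) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, segment))
    b = sxy / sxx
    a = my - b * mx
    return math.sqrt(sum((y - (a + b * x)) ** 2
                         for x, y in zip(xs, segment)) / n)

def dfa_alpha(x, scales=(4, 8, 16, 32)):
    """Detrended fluctuation analysis scaling exponent alpha:
    slope of log F(s) versus log s over the chosen box sizes."""
    mean = sum(x) / len(x)
    profile, total = [], 0.0
    for v in x:                        # integrated (cumulative-sum) profile
        total += v - mean
        profile.append(total)
    logs, logF = [], []
    for s in scales:
        n_boxes = len(profile) // s
        rms = [linear_detrend_rms(profile[i * s:(i + 1) * s])
               for i in range(n_boxes)]
        F = math.sqrt(sum(r * r for r in rms) / n_boxes)
        logs.append(math.log(s))
        logF.append(math.log(F))
    mx, my = sum(logs) / len(logs), sum(logF) / len(logF)
    return (sum((a - mx) * (b - my) for a, b in zip(logs, logF))
            / sum((a - mx) ** 2 for a in logs))

# Uncorrelated white noise should give alpha near 0.5.
rng = random.Random(1)
noise = [rng.gauss(0, 1) for _ in range(1024)]
print(round(dfa_alpha(noise), 2))
```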

  6. Datamining approaches for modeling tumor control probability.

    PubMed

    Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D

    2010-11-01

    Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed that include dose-volume metrics, equivalent uniform dose, mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
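The mechanistic Poisson TCP model mentioned above can be sketched in a few lines: surviving clonogens are modeled as Poisson-distributed, giving TCP = exp(-N0 * exp(-alpha * D)) under a simple linear cell-kill assumption. The parameter values below (N0 and alpha) are illustrative placeholders, not fitted values from the study.

```python
import math

def poisson_tcp(dose, n0=1e7, alpha=0.3):
    """Mechanistic Poisson TCP: if the expected number of surviving
    clonogens after dose D is N0 * exp(-alpha * D), the probability that
    zero survive (i.e. tumor control) is exp(-N0 * exp(-alpha * D))."""
    return math.exp(-n0 * math.exp(-alpha * dose))

# Hypothetical parameters: 1e7 clonogens, alpha = 0.3 per Gy.
# TCP rises sigmoidally from near 0 to near 1 across the dose range.
for d in (40, 54, 60, 70, 80):
    print(f"D={d} Gy  TCP={poisson_tcp(d):.3f}")
```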

  7. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis

    DOE PAGES

    Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.

    2017-10-13

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. Here, we present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis—that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
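The idea that different functional forms carry different amounts of information can be illustrated on two-input Boolean functions (the paper treats three-variable functions): XOR carries no information about its output through either input alone, yet the inputs jointly determine it, while AND leaks partial information. This sketch is illustrative and is not the authors' classification method.

```python
import math
from collections import Counter
from itertools import product

def entropy(counts):
    """Shannon entropy in bits from a list of occurrence counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def mutual_info(pairs):
    """I(A;B) in bits from equiprobable (a, b) samples: H(A)+H(B)-H(A,B)."""
    ha = entropy(list(Counter(a for a, _ in pairs).values()))
    hb = entropy(list(Counter(b for _, b in pairs).values()))
    hab = entropy(list(Counter(pairs).values()))
    return ha + hb - hab

# Exhaustive truth tables for two binary functions of two variables.
xor_table = [((x, y), x ^ y) for x, y in product((0, 1), repeat=2)]
and_table = [((x, y), x & y) for x, y in product((0, 1), repeat=2)]

# XOR: each input alone tells nothing about z; jointly they determine it.
print(mutual_info([(x, z) for (x, _), z in xor_table]))  # 0.0
print(mutual_info(xor_table))                            # 1.0
print(round(mutual_info(and_table), 2))                  # 0.81
```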

  8. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. Here, we present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis—that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.

  9. The Information Content of Discrete Functions and Their Application in Genetic Data Analysis.

    PubMed

    Sakhanenko, Nikita A; Kunert-Graf, James; Galas, David J

    2017-12-01

    The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present here a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis-that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. We illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
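
    The central quantity in the records above is the information content of a discrete function of three variables. As a minimal illustrative sketch (not the authors' method; the example functions and the uniform-input assumption are ours), the Shannon entropy of a function's output distribution can be computed directly by enumerating a ternary input alphabet:

```python
from collections import Counter
from itertools import product
from math import log2

def output_entropy(f, alphabet=(0, 1, 2)):
    """Shannon entropy (bits) of f(x1, x2, x3) when the three discrete
    inputs are independent and uniform over `alphabet`."""
    outputs = [f(a, b, c) for a, b, c in product(alphabet, repeat=3)]
    n = len(outputs)
    return sum(k / n * log2(n / k) for k in Counter(outputs).values())

# Two illustrative ternary functions (hypothetical examples):
mod_sum = lambda a, b, c: (a + b + c) % 3   # balanced: each output equally likely
constant = lambda a, b, c: 0                # degenerate: carries no information

print(output_entropy(mod_sum))    # ≈ 1.585 bits (log2 3, the maximum)
print(output_entropy(constant))   # 0.0 bits
```

    Classifying functional forms, as the paper does, goes well beyond this single number, but the entropy of the output distribution is the natural starting point.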

  10. Complexity of Multi-Dimensional Spontaneous EEG Decreases during Propofol Induced General Anaesthesia

    PubMed Central

    Schartner, Michael; Seth, Anil; Noirhomme, Quentin; Boly, Melanie; Bruno, Marie-Aurelie; Laureys, Steven; Barrett, Adam

    2015-01-01

    Emerging neural theories of consciousness suggest a correlation between a specific type of neural dynamical complexity and the level of consciousness: When awake and aware, causal interactions between brain regions are both integrated (all regions are to a certain extent connected) and differentiated (there is inhomogeneity and variety in the interactions). In support of this, recent work by Casali et al. (2013) has shown that Lempel-Ziv complexity correlates strongly with conscious level, when computed on the EEG response to transcranial magnetic stimulation. Here we investigated the complexity of spontaneous high-density EEG data during propofol-induced general anaesthesia. We consider three distinct measures: (i) Lempel-Ziv complexity, which is derived from how compressible the data are; (ii) amplitude coalition entropy, which measures the variability in the constitution of the set of active channels; and (iii) the novel synchrony coalition entropy (SCE), which measures the variability in the constitution of the set of synchronous channels. After some simulations on Kuramoto oscillator models which demonstrate that these measures capture distinct ‘flavours’ of complexity, we show that there is a robustly measurable decrease in the complexity of spontaneous EEG during general anaesthesia. PMID:26252378
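
    Of the three measures named in this record, Lempel-Ziv complexity is the simplest to sketch. The following is a minimal LZ76 phrase-counting implementation (our own illustrative code, not the authors'); in EEG practice the signal would first be binarized, for example about its median:

```python
def lempel_ziv_complexity(seq):
    """Number of distinct phrases in the LZ76 exhaustive parsing of seq.
    Lower counts mean the sequence is more compressible (less complex)."""
    s = ''.join(str(x) for x in seq)
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the current phrase while it already occurs in the prefix
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

print(lempel_ziv_complexity("0001101001000101"))  # classic LZ76 example → 6
print(lempel_ziv_complexity("01" * 50))           # highly regular → 3
```

    The parsing of the first string into 0·001·10·100·1000·101 is the standard worked example from Lempel and Ziv's 1976 paper.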

  11. Field-scale apparent soil electrical conductivity

    USDA-ARS?s Scientific Manuscript database

    Soils are notoriously spatially heterogeneous and many soil properties (e.g., salinity, water content, trace element concentration, etc.) are temporally variable, making soil a complex media. Spatial variability of soil properties has a profound influence on agricultural and environmental processes ...

  12. 64. DETAIL OF CONNECTIONS FOR SIXTEEN CABLES AT THE CARRIAGE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    64. DETAIL OF CONNECTIONS FOR SIXTEEN CABLES AT THE CARRIAGE SUPPORT STRUCTURE. April 20, 1948. 1048. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  13. 21. VAL, DETAIL OF MUZZLE END OF LAUNCHER BRIDGE SHOWING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    21. VAL, DETAIL OF MUZZLE END OF LAUNCHER BRIDGE SHOWING BOTH LAUNCHER TUBES TAKEN FROM RESERVOIR LOOKING NORTH. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  14. 18. VAL, DETAIL OF LAUNCHER BRIDGE ALONG THE SIDE OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. VAL, DETAIL OF LAUNCHER BRIDGE ALONG THE SIDE OF THE 32' DIAMETER LAUNCHING TUBE LOOKING SOUTHWEST. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  15. 32. VAL, DETAIL SHOWING LOADING PLATFORM, PROJECTILE LOADING CAR, LAUNCHER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    32. VAL, DETAIL SHOWING LOADING PLATFORM, PROJECTILE LOADING CAR, LAUNCHER SLAB AND UNDERSIDE OF LAUNCHER BRIDGE LOOKING SOUTHWEST. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  16. An example of complex modelling in dentistry using Markov chain Monte Carlo (MCMC) simulation.

    PubMed

    Helfenstein, Ulrich; Menghini, Giorgio; Steiner, Marcel; Murati, Francesca

    2002-09-01

    In the usual regression setting one regression line is computed for a whole data set. In a more complex situation, each person may be observed, for example, at several points in time, and thus a regression line might be calculated for each person. Additional complexities, such as various forms of errors in covariables, may make a straightforward statistical evaluation difficult or even impossible. During recent years, methods have been developed allowing convenient analysis of problems where the data and the corresponding models show these and many other forms of complexity. The methodology makes use of a Bayesian approach and Markov chain Monte Carlo (MCMC) simulations. The methods allow the construction of increasingly elaborate models by building them up from local sub-models. The essential structure of the models can be represented visually by directed acyclic graphs (DAG). This attractive property allows communication and discussion of the essential structure and the substantial meaning of a complex model without needing algebra. After presentation of the statistical methods, an example from dentistry demonstrates their application and use. The dataset of the example had a complex structure; each of a set of children was followed up over several years. The number of new fillings in permanent teeth had been recorded at several ages. The dependent variables were markedly different from the normal distribution and could not be transformed to normality. In addition, explanatory variables were assumed to be measured with different forms of error. It is illustrated how the corresponding models can be estimated conveniently via MCMC simulation, in particular 'Gibbs sampling', using the freely available software BUGS. In addition, the influence of measurement error on the estimates of the corresponding coefficients is explored.
It is demonstrated that the effect of the independent variable on the dependent variable may be markedly underestimated if the measurement error is not taken into account ('regression dilution bias'). Markov chain Monte Carlo methods may be of great value to dentists in allowing analysis of data sets which exhibit a wide range of different forms of complexity.
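
    The 'regression dilution bias' noted at the end of this record can be demonstrated without any Bayesian machinery. A minimal simulation (illustrative only; the variances chosen are arbitrary) shows how measurement error in the explanatory variable attenuates an ordinary least-squares slope:

```python
import random

random.seed(1)
n = 10_000

# True relationship: y = 2*x + small noise, but x is only observed
# with additive measurement error of the same variance as x itself.
x_true = [random.gauss(0, 1) for _ in range(n)]
y = [2 * x + random.gauss(0, 0.5) for x in x_true]
x_obs = [x + random.gauss(0, 1) for x in x_true]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / sum((a - mx) ** 2 for a in xs)

print(ols_slope(x_true, y))  # ≈ 2.0: the true effect
print(ols_slope(x_obs, y))   # ≈ 1.0: attenuated by var(x) / (var(x) + var(error))
```

    With equal signal and error variances the reliability ratio is 0.5, so the estimated effect is roughly halved, which is exactly the underestimation the abstract warns about.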

  17. Relation between risk of falling and postural sway complexity in diabetes.

    PubMed

    Morrison, S; Colberg, S R; Parson, H K; Vinik, A I

    2012-04-01

    For older individuals with diabetes, any decline in balance control can be especially problematic since it is often a precursor to an increased risk of falling. This study was designed to evaluate differences in postural motion dynamics and falls risk for older individuals with type 2 diabetes (T2DM) classified as fallers/non-fallers and to assess what impact exercise has on balance and falls risk. The results demonstrated that the risk of falling is greater for those older individuals with multiple risk factors including diabetes and a previous falls history. The postural motion features of the high-risk individuals (T2DM-fallers) were also different, being characterized by increased variability and complexity, increased AP-ML coupling, less overall COP motion and increased velocity. One suggestion is that these individuals evoked a stiffening strategy during the more challenging postural tasks. Following training, a decline in falls risk was observed for all groups, with this effect being most pronounced for the T2DM-fallers. Interestingly, the COP motion of this group became more similar to controls, exhibiting decreased complexity and variability, and decreased velocity. The reciprocal changes in COP complexity support the broader view that age/disease-related changes in physiological complexity are bi-directional. Overall, these results show that, even for older T2DM individuals at greater risk of falling, targeted interventions can positively enhance their postural dynamics. Further, the finding that the pattern of postural motion variability and complexity was altered highlights that a decline in physiological complexity may not always be negatively associated with aging and/or disease. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Surface complexation modeling

    USDA-ARS?s Scientific Manuscript database

    Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...

  19. OLYMPEX Data Workshop: GPM View

    NASA Technical Reports Server (NTRS)

    Petersen, W.

    2017-01-01

    OLYMPEX Primary Objectives: Datasets to enable: (1) Direct validation over complex terrain at multiple scales, liquid and frozen precip types, (a) Do we capture terrain and synoptic regime transitions, orographic enhancements/structure, full range of precipitation intensity (e.g., very light to heavy) and types, spatial variability? (b) How well can we estimate space/time-accumulated precipitation over terrain (liquid + frozen)? (2) Physical validation of algorithms in mid-latitude cold season frontal systems over ocean and complex terrain, (a) What are the column properties of frozen, melting, liquid hydrometeors-their relative contributions to estimated surface precipitation, transition under the influence of terrain gradients, and systematic variability as a function of synoptic regime? (3) Integrated hydrologic validation in complex terrain, (a) Can satellite estimates be combined with modeling over complex topography to drive improved products (assimilation, downscaling) [Level IV products] (b) What are capabilities and limitations for use of satellite-based precipitation estimates in stream/river flow forecasting?

  20. Binding of Soluble Natural Ligands to a Soluble Human T-Cell Receptor Fragment Produced in Escherichia coli

    NASA Astrophysics Data System (ADS)

    Hilyard, Katherine L.; Reyburn, Hugh; Chung, Shan; Bell, John I.; Strominger, Jack L.

    1994-09-01

    An Escherichia coli expression system has been developed to produce milligram quantities of the variable domains of a human T-cell receptor from a cytotoxic T cell that recognizes the HLA-A2-influenza matrix peptide complex as a single polypeptide chain. The recombinant protein was purified by metal-chelate chromatography and then refolded in a redox buffer system. The refolded protein was shown to directly bind both Staphylococcus aureus enterotoxin B and the major histocompatibility complex protein-peptide complex using a BIAcore biosensor. Thus this preparation of a single-chain, variable-domain, T-cell receptor fragment can bind both of its natural ligands and some of it is therefore a functional fragment of the receptor molecule.

  1. Behavior of complex mixtures in aquatic environments: a synthesis of PNL ecological research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fickeisen, D.H.; Vaughan, B.E.

    1984-06-01

    The term complex mixture has been recently applied to energy-related process streams, products and wastes that typically contain hundreds or thousands of individual organic compounds, such as petroleum or synthetic fuel oils, but it is more generally applicable. A six-year program of ecological research has focused on four areas important to understanding the environmental behavior of complex mixtures: physicochemical variables, individual organism responses, ecosystems-level determinations, and metabolism. Of these areas, physicochemical variables and organism responses were intensively studied; system-level determinations and metabolism represent more recent directions. Chemical characterization was integrated throughout all areas of the program, and state-of-the-art methods were applied. 155 references, 35 figures, 4 tables.

  2. Heart rate complexity in sinoaortic-denervated mice.

    PubMed

    Silva, Luiz Eduardo V; Rodrigues, Fernanda Luciano; de Oliveira, Mauro; Salgado, Hélio Cesar; Fazan, Rubens

    2015-02-01

    What is the central question of this study? New measurements for cardiovascular complexity, such as detrended fluctuation analysis (DFA) and multiscale entropy (MSE), have been shown to predict cardiovascular outcomes. Given that cardiovascular diseases are accompanied by autonomic imbalance and decreased baroreflex sensitivity, the central question is: do baroreceptors contribute to cardiovascular complexity? What is the main finding and its importance? Sinoaortic denervation altered both DFA scaling exponents and MSE, indicating that both short- and long-term mechanisms of complexity are altered in sinoaortic denervated mice, resulting in a loss of physiological complexity. These results suggest that the baroreflex is a key element in the complex structures involved in heart rate variability regulation. Recently, heart rate (HR) oscillations have been recognized as complex behaviours derived from non-linear processes. Physiological complexity theory is based on the idea that healthy systems present high complexity, i.e. non-linear, fractal variability at multiple scales, with long-range correlations. The loss of complexity in heart rate variability (HRV) has been shown to predict adverse cardiovascular outcomes. Based on the idea that most cardiovascular diseases are accompanied by autonomic imbalance and a decrease in baroreflex sensitivity, we hypothesize that the baroreflex plays an important role in complex cardiovascular behaviour. Mice that had been subjected to sinoaortic denervation (SAD) were implanted with catheters in the femoral artery and jugular vein 5 days prior to the experiment. After recording the baseline arterial pressure (AP), pulse interval time series were generated from the intervals between consecutive values of diastolic pressure. The complexity of the HRV was determined using detrended fluctuation analysis and multiscale entropy. 
The detrended fluctuation analysis α1 scaling exponent (a short-term index) was remarkably decreased in the SAD mice (0.79 ± 0.06 versus 1.13 ± 0.04 for the control mice), whereas SAD slightly increased the α2 scaling exponent (a long-term index; 1.12 ± 0.03 versus 1.04 ± 0.02 for control mice). In the SAD mice, the total multiscale entropy was decreased (13.2 ± 1.3) compared with the control mice (18.9 ± 1.4). In conclusion, fractal and regularity structures of HRV are altered in SAD mice, affecting both short- and long-term mechanisms of complexity, suggesting that the baroreceptors play a considerable role in the complex structure of HRV. © 2014 The Authors. Experimental Physiology © 2014 The Physiological Society.
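
    The DFA scaling exponents reported in this record come from fitting a power law to detrended fluctuations over a range of window sizes. A bare-bones sketch of first-order DFA (our illustrative code, not the authors' analysis pipeline; the scale choices are arbitrary) reproduces the textbook benchmarks of α ≈ 0.5 for white noise and α ≈ 1.5 for its running sum:

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """First-order detrended fluctuation analysis: returns the scaling
    exponent alpha, i.e. the slope of log F(n) versus log n."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))  # integrated profile
    log_n, log_f = [], []
    for n in scales:
        t = np.arange(n)
        f2 = []
        for k in range(len(y) // n):
            seg = y[k * n:(k + 1) * n]
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        log_n.append(np.log(n))
        log_f.append(0.5 * np.log(np.mean(f2)))
    return np.polyfit(log_n, log_f, 1)[0]

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
walk = np.cumsum(white)
print(dfa_alpha(white))  # ≈ 0.5 (uncorrelated noise)
print(dfa_alpha(walk))   # ≈ 1.5 (random walk)
```

    In this vocabulary, the drop of α1 from 1.13 to 0.79 in SAD mice is a shift away from 1/f-like correlations toward more random short-term dynamics.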

  3. Structure, magnetic behavior, and anisotropy of homoleptic trinuclear lanthanoid 8-quinolinolate complexes.

    PubMed

    Chilton, Nicholas F; Deacon, Glen B; Gazukin, Olga; Junk, Peter C; Kersting, Berthold; Langley, Stuart K; Moubaraki, Boujemaa; Murray, Keith S; Schleife, Frederik; Shome, Mahasish; Turner, David R; Walker, Julia A

    2014-03-03

    Three complexes of the form [Ln(III)3(OQ)9] (Ln = Gd, Tb, Dy; OQ = 8-quinolinolate) have been synthesized and their magnetic properties studied. The trinuclear complexes adopt V-shaped geometries with three bridging 8-quinolinolate oxygen atoms between the central and peripheral eight-coordinate metal atoms. The magnetic properties of these three complexes differ greatly. Variable-temperature direct-current (dc) magnetic susceptibility measurements reveal that the gadolinium and terbium complexes display weak antiferromagnetic nearest-neighbor magnetic exchange interactions. This was quantified in the isotropic gadolinium case with an exchange-coupling parameter of J = -0.068(2) cm(-1). The dysprosium compound displays weak ferromagnetic exchange. Variable-frequency and -temperature alternating-current magnetic susceptibility measurements on the anisotropic cases reveal that the dysprosium complex displays single-molecule-magnet behavior, in zero dc field, with two distinct relaxation modes of differing time scales within the same molecule. Analysis of the data revealed anisotropy barriers of Ueff = 92 and 48 K for the two processes. The terbium complex, on the other hand, displays no such behavior in zero dc field, but upon application of a static dc field, slow magnetic relaxation can be observed. Ab initio and electrostatic calculations were used in an attempt to explain the origin of the experimentally observed slow relaxation of the magnetization for the dysprosium complex.

  4. Inversion of the anomalous diffraction approximation for variable complex index of refraction near unity. [numerical tests for water-haze aerosol model

    NASA Technical Reports Server (NTRS)

    Smith, C. B.

    1982-01-01

    The Fymat analytic inversion method for retrieving a particle-area distribution function from anomalous diffraction multispectral extinction data and total area is generalized to the case of a variable complex refractive index m(lambda) near unity depending on spectral wavelength lambda. Inversion tests are presented for a water-haze aerosol model. An upper-phase shift limit of 5 pi/2 retrieved an accurate peak area distribution profile. Analytical corrections using both the total number and area improved the inversion.

  5. Complexities in Subsetting Satellite Level 2 Data

    NASA Astrophysics Data System (ADS)

    Huwe, P.; Wei, J.; Albayrak, A.; Silberstein, D. S.; Alfred, J.; Savtchenko, A. K.; Johnson, J. E.; Hearty, T.; Meyer, D. J.

    2017-12-01

    Satellite Level 2 data presents unique challenges for tools and services. From nonlinear spatial geometry to inhomogeneous file data structure to inconsistent temporal variables to complex data variable dimensionality to multiple file formats, there are many difficulties in creating general tools for Level 2 data support. At NASA Goddard Earth Sciences Data and Information Services Center (GES DISC), we are implementing a general Level 2 Subsetting service for Level 2 data. In this presentation, we will unravel some of the challenges faced in creating this service and the strategies we used to surmount them.

  6. Functional Entropy Variables: A New Methodology for Deriving Thermodynamically Consistent Algorithms for Complex Fluids, with Particular Reference to the Isothermal Navier-Stokes-Korteweg Equations

    DTIC Science & Technology

    2012-11-01

    …multicorrector algorithm. Predictor stage: Set C_ρ^{n+1,(0)} = C_ρ^n, (157) C_u^{n+1,(0)} = C_u^n, (158) C_v^{n+1,(0)} = C_v^n. (159) Multicorrector stage: Repeat the … corrector algorithm given by (157)-(178). Remark 20. We adopt the preconditioned GMRES algorithm [53] from PETSc [2] to solve the linear system given by (175) … ICES REPORT 12-43, November 2012. Functional Entropy Variables: A New Methodology for Deriving Thermodynamically Consistent Algorithms for Complex Fluids

  7. The complex variable boundary element method: Applications in determining approximative boundaries

    USGS Publications Warehouse

    Hromadka, T.V.

    1984-01-01

    The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occurs in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.
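
    The CVBEM rests on the fact that the real and imaginary parts of an analytic function of a complex variable are harmonic, i.e. satisfy the Laplace equation, so approximators built from analytic functions solve the interior problem exactly. A quick numerical check of that underlying property (illustrative only, not the CVBEM itself) uses a five-point finite-difference Laplacian:

```python
import numpy as np

# The real part of an analytic function such as f(z) = z**3 is harmonic;
# a 5-point finite-difference Laplacian applied to Re f on interior grid
# points should therefore vanish. The stencil is exact for cubics, so
# only floating-point noise remains.
h = 0.01
xs = np.arange(-1.0, 1.0, h)
X, Y = np.meshgrid(xs, xs)
U = ((X + 1j * Y) ** 3).real

lap = (U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:] + U[1:-1, :-2]
       - 4.0 * U[1:-1, 1:-1]) / h ** 2
print(np.max(np.abs(lap)))   # ≈ 0 (floating-point noise only)
```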

  8. [Mitochondrial disease due to the deficit of Q-cytochrome C oxidoreductase coenzyme in the respiratory chain. Report of a new case].

    PubMed

    Roldán, S; Lluch, M D; Navarro Quesada, F J; Hevia, A

    1995-01-01

    Reference has been made in the literature to the variability in the clinical presentation of deficiency of complex III of the respiratory chain, with four groups identified to date, the first of which is characterized by hypotonia and weakness starting at variable ages. We report a new case of mitochondrial myopathy due to deficiency of this complex, included within this first group, and consider the importance of defining the clinical and histochemical characteristics of this polymorphous entity.

  9. Weighting Test Samples in IRT Linking and Equating: Toward an Improved Sampling Design for Complex Equating. Research Report. ETS RR-13-39

    ERIC Educational Resources Information Center

    Qian, Jiahe; Jiang, Yanming; von Davier, Alina A.

    2013-01-01

    Several factors could cause variability in item response theory (IRT) linking and equating procedures, such as the variability across examinee samples and/or test items, seasonality, regional differences, native language diversity, gender, and other demographic variables. Hence, the following question arises: Is it possible to select optimal…

  10. Variable density thinning promotes variable structural responses 14 years after treatment in the Pacific Northwest

    Treesearch

    John L. Willis; Scott D. Roberts; Constance A. Harrington

    2018-01-01

    Young stands are commonly assumed to require centuries to develop into late-successional forest habitat. This viewpoint reflects the fact that young stands often lack many of the structural features that define late-successional habitat, and that these features derive from complex stand dynamics that are difficult to mimic with forest management. Variable density...

  11. Predicting fire behavior in palmetto-gallberry fuel complexes

    Treesearch

    W A. Hough; F. A. Albini

    1978-01-01

    Rate of spread, fireline intensity, and flame length can be predicted with reasonable accuracy for backfires and low-intensity head fires in the palmetto-gallberry fuel complex of the South. This fuel complex was characterized and variables were adjusted for use in Rothermel's (1972) spread model. Age of rough, height of understory, percent of area covered by...

  12. Explorations of the Gauss-Lucas Theorem

    ERIC Educational Resources Information Center

    Brilleslyper, Michael A.; Schaubroeck, Beth

    2017-01-01

    The Gauss-Lucas Theorem is a classical complex analysis result that states the critical points of a single-variable complex polynomial lie inside the closed convex hull of the zeros of the polynomial. Although the result is well-known, it is not typically presented in a first course in complex analysis. The ease with which modern technology allows…

  13. Solution Strategies and Achievement in Dutch Complex Arithmetic: Latent Variable Modeling of Change

    ERIC Educational Resources Information Center

    Hickendorff, Marian; Heiser, Willem J.; van Putten, Cornelis M.; Verhelst, Norman D.

    2009-01-01

    In the Netherlands, national assessments at the end of primary school (Grade 6) show a decline of achievement on problems of complex or written arithmetic over the last two decades. The present study aims at contributing to an explanation of the large achievement decrease on complex division, by investigating the strategies students used in…

  14. Indices of Complexity and Interpretation: Their Computation and Uses in Factor Analysis.

    ERIC Educational Resources Information Center

    Hofmann, Richard J.

    In this methodological paper two indices are developed: a complexity index and an interpretation index. The complexity index is a positive number indicating on the average how many factors are used to explain each variable in a factor solution. The interpretation index will be positive ranging from zero to unity; unity representing a perfect…

  15. Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory

    2010-01-01

    Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…

  16. Probabilistic Geoacoustic Inversion in Complex Environments

    DTIC Science & Technology

    2015-09-30

    Probabilistic Geoacoustic Inversion in Complex Environments. Jan Dettmer, School of Earth and Ocean Sciences, University of Victoria, Victoria BC … long-range inversion methods can fail to provide sufficient resolution. For proper quantitative examination of variability, parameter uncertainty must … project aims to advance probabilistic geoacoustic inversion methods for complex ocean environments for a range of geoacoustic data types. The work is

  17. 34. VAL, DETAIL OF STAIRS ON COUNTERWEIGHT SLAB WITH COUNTERWEIGHT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    34. VAL, DETAIL OF STAIRS ON COUNTERWEIGHT SLAB WITH COUNTERWEIGHT CAR RAILS ON RIGHT AND PERSONNEL CAR RAILS ON LEFT. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  18. 27. VAL, DETAIL OF LAUNCHER SLAB AND LAUNCHER RAIL WITH ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    27. VAL, DETAIL OF LAUNCHER SLAB AND LAUNCHER RAIL WITH 7 INCH DIAMETER HOLE FOR SUPPORT CARRIAGE LOCKING PIN. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  19. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  20. Developing a complex independent component analysis technique to extract non-stationary patterns from geophysical time-series

    NASA Astrophysics Data System (ADS)

    Forootan, Ehsan; Kusche, Jürgen

    2016-04-01

    Geodetic/geophysical observations, such as the time series of global terrestrial water storage change or sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In the last decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and more recently independent component analysis (ICA) are common techniques to extract statistical orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationary assumption is obviously not justifiable for many geophysical and climate variables even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation. The complex time series contain the observed values in their real part, and the temporal rate of variability in their imaginary part. (ii) An ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex data set in (i). 
(iii) Dominant non-stationary patterns are recognized as independent complex patterns that can be used to represent the space and time amplitude and phase propagations. We present the results of CICA on simulated and real cases e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86 (7), 477-497, doi: 10.1007/s00190-011-0532-5
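
    Step (i) of the CICA construction, complexifying a real time series so that the imaginary part carries its rate of variability, is a Hilbert transformation. A minimal FFT-based sketch (our illustrative code, not the authors' implementation) builds the analytic signal and checks it on a pure cosine:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: the real part is the input and the
    imaginary part is its Hilbert transform, so the complexified series
    carries both amplitude and phase information."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    h = np.zeros(n)            # spectral mask: suppress negative frequencies
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

t = np.linspace(0, 1, 256, endpoint=False)
z = analytic_signal(np.cos(2 * np.pi * 5 * t))
# For a pure cosine the analytic signal is exp(2j*pi*5*t):
print(np.allclose(z.imag, np.sin(2 * np.pi * 5 * t)))  # → True
print(np.allclose(np.abs(z), 1.0))                     # → True (unit envelope)
```

    The ICA-on-cumulants step would then be applied to such complexified series channel by channel.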

  1. Time-Based Indicators of Emotional Complexity: Interrelations and Correlates

    PubMed Central

    Grühn, Daniel; Lumley, Mark A.; Diehl, Manfred; Labouvie-Vief, Gisela

    2012-01-01

    Emotional complexity has been regarded as one correlate of adaptive emotion regulation in adulthood. One novel and potentially valuable approach to operationalizing emotional complexity is to use reports of emotions obtained repeatedly in real time, which can generate a number of potential time-based indicators of emotional complexity. It is not known, however, how these indicators relate to each other, to other measures of affective complexity, such as those derived from a cognitive-developmental view of emotional complexity, or to measures of adaptive functioning, such as well-being. A sample of 109 adults, aged 23 to 90 years, participated in an experience-sampling study and reported their negative and positive affect five times a day for one week. Based on these reports, we calculated nine different time-based indicators potentially reflecting emotional complexity. Analyses showed three major findings: First, the indicators showed a diverse pattern of interrelations suggestive of four distinct components of emotional complexity. Second, age was generally not related to time-based indicators of emotional complexity; however, older adults showed overall low variability in negative affect. Third, time-based indicators of emotional complexity were either unrelated or inversely related to measures of adaptive functioning; that is, these measures tended to predict a less adaptive profile, such as lower subjective and psychological well-being. In sum, time-based indicators of emotional complexity displayed a more complex and less beneficial picture than originally thought. In particular, variability in negative affect seems to indicate suboptimal adjustments. Future research would benefit from collecting empirical data for the interrelations and correlates of time-based indicators of emotional complexity in different contexts. PMID:23163712

  2. High dimensional model representation method for fuzzy structural dynamics

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
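
    The low-order logic of the HDMR expansion can be sketched with a generic first-order cut-HDMR surrogate; this is a minimal sketch of the expansion itself and omits the paper's fuzzy α-cut machinery and ADINA coupling:

    ```python
    def cut_hdmr_first_order(f, anchor):
        """Build a first-order cut-HDMR surrogate of f around an anchor
        (reference) point: g(x) = f0 + sum_i [f(x_i varied, rest at
        anchor) - f0]. Each term lets only one variable deviate."""
        f0 = f(anchor)

        def surrogate(x):
            total = f0
            for i, xi in enumerate(x):
                cut = list(anchor)     # all variables at the anchor...
                cut[i] = xi            # ...except the i-th one
                total += f(cut) - f0
            return total

        return surrogate
    ```

    For an additive function the first-order surrogate is exact; interactions between variables require higher-order terms, which the HDMR assumption takes to be weak.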

  3. Identifying Slow Molecular Motions in Complex Chemical Reactions.

    PubMed

    Piccini, GiovanniMaria; Polino, Daniela; Parrinello, Michele

    2017-09-07

    We have studied the cyclization reaction of deprotonated 4-chloro-1-butanethiol to tetrahydrothiophene by means of well-tempered metadynamics. To properly select the collective variables, we used the recently proposed variational approach to conformational dynamics within the framework of metadynamics. This allowed us to select, from a set of collective variables representing the slow degrees of freedom, the linear combinations that best describe the slow modes of the reaction. We performed our calculations at three different temperatures, namely, 300, 350, and 400 K. We show that this choice of collective variables allows one to easily interpret the complex free-energy surface of such a reaction by unequivocally identifying the conformers belonging to the reactant and product states that play a fundamental role in the reaction mechanism.

  4. Visualization of Global Sensitivity Analysis Results Based on a Combination of Linearly Dependent and Independent Directions

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Gundy-Burlet, Karen

    2010-01-01

    A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and caused by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
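
    The combination of original and derived directions can be sketched as follows; this is a generic SVD-based PCA augmentation (the function name and component count are illustrative, not taken from the tool described above):

    ```python
    import numpy as np

    def augment_with_pcs(X, n_components=2):
        """Append leading principal-component scores to a Monte Carlo
        input matrix X (n_samples, n_vars), so downstream sensitivity
        filtering can test both the original variables and the linearly
        independent derived directions."""
        Xc = X - X.mean(axis=0)                    # center each variable
        # SVD of centered data: rows of Vt are the principal directions.
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        scores = Xc @ Vt[:n_components].T          # project onto leading PCs
        return np.hstack([X, scores])
    ```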

  5. USING SELF-ORGANIZING MAPS TO EXPLORE PATTERNS IN SPECIES RICHNESS AND PROTECTION

    EPA Science Inventory

    The combination of species distributions with abiotic and landscape variables using Geographic Information Systems can be used to help prioritize areas for biodiversity protection, although the number of variables and complexity of the relationships between them can prove difficu...

  6. 1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING NORTH TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  7. 33. VAL, DETAIL OF PERSONNEL CAR AT THE TOP OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    33. VAL, DETAIL OF PERSONNEL CAR AT THE TOP OF THE COUNTERWEIGHT SLAB WITH THE COUNTERWEIGHT CAR IN DISTANCE LOOKING NORTH. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  8. Symbol-and-Arrow Diagrams in Teaching Pharmacokinetics.

    ERIC Educational Resources Information Center

    Hayton, William L.

    1990-01-01

    Symbol-and-arrow diagrams are helpful adjuncts to equations derived from pharmacokinetic models. Both show relationships among dependent and independent variables. Diagrams show only qualitative relationships, but clearly show which variables are dependent and which are independent, helping students understand complex but important functional…

  9. 4. VAL PARTIAL ELEVATION SHOWING LAUNCHER BRIDGE ON SUPPORTS, LAUNCHER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. VAL PARTIAL ELEVATION SHOWING LAUNCHER BRIDGE ON SUPPORTS, LAUNCHER SLAB, SUPPORT CARRIAGE, CONCRETE 'A' FRAME STRUCTURE AND CAMERA TOWER LOOKING SOUTHEAST. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  10. Applied statistics in agricultural, biological, and environmental sciences.

    USDA-ARS?s Scientific Manuscript database

    Agronomic research often involves measurement and collection of multiple response variables in an effort to understand the more complex nature of the system being studied. Multivariate statistical methods encompass the simultaneous analysis of all random variables measured on each experimental or s...

  11. How to model supernovae in simulations of star and galaxy formation

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.; Wetzel, Andrew; Kereš, Dušan; Faucher-Giguère, Claude-André; Quataert, Eliot; Boylan-Kolchin, Michael; Murray, Norman; Hayward, Christopher C.; El-Badry, Kareem

    2018-06-01

    We study the implementation of mechanical feedback from supernovae (SNe) and stellar mass loss in galaxy simulations, within the Feedback In Realistic Environments (FIRE) project. We present the FIRE-2 algorithm for coupling mechanical feedback, which can be applied to any hydrodynamics method (e.g. fixed-grid, moving-mesh, and mesh-less methods), and black hole as well as stellar feedback. This algorithm ensures manifest conservation of mass, energy, and momentum, and avoids imprinting `preferred directions' on the ejecta. We show that it is critical to incorporate both momentum and thermal energy of mechanical ejecta in a self-consistent manner, accounting for SNe cooling radii when they are not resolved. Using idealized simulations of single SN explosions, we show that the FIRE-2 algorithm, independent of resolution, reproduces converged solutions in both energy and momentum. In contrast, common `fully thermal' (energy-dump) or `fully kinetic' (particle-kicking) schemes in the literature depend strongly on resolution: when applied at mass resolution ≳100 M⊙, they diverge by orders of magnitude from the converged solution. In galaxy-formation simulations, this divergence leads to orders-of-magnitude differences in galaxy properties, unless those models are adjusted in a resolution-dependent way. We show that all models that individually time-resolve SNe converge to the FIRE-2 solution at sufficiently high resolution (<100 M⊙). However, in both idealized single-SN simulations and cosmological galaxy-formation simulations, the FIRE-2 algorithm converges much faster than other sub-grid models without re-tuning parameters.

  12. The natural emergence of the correlation between H2 and star formation rate surface densities in galaxy simulations

    NASA Astrophysics Data System (ADS)

    Lupi, Alessandro; Bovino, Stefano; Capelo, Pedro R.; Volonteri, Marta; Silk, Joseph

    2018-03-01

    In this study, we present a suite of high-resolution numerical simulations of an isolated galaxy to test a sub-grid framework to consistently follow the formation and dissociation of H2 with non-equilibrium chemistry. The latter is solved via the package KROME, coupled to the mesh-less hydrodynamic code GIZMO. We include the effect of star formation (SF), modelled with a physically motivated prescription independent of H2, supernova feedback and mass-losses from low-mass stars, extragalactic and local stellar radiation, and dust and H2 shielding, to investigate the emergence of the observed correlation between H2 and SF rate surface densities. We present two different sub-grid models and compare them with on-the-fly radiative transfer (RT) calculations, to assess the main differences and limits of the different approaches. We also discuss a sub-grid clumping factor model to enhance the H2 formation, consistent with our SF prescription, which is crucial, at the achieved resolution, to reproduce the correlation with H2. We find that both sub-grid models perform very well relative to the RT simulation, giving comparable results, with moderate differences, but at much lower computational cost. We also find that, while the Kennicutt-Schmidt relation for the total gas is not strongly affected by the different ingredients included in the simulations, the H2-based counterpart is much more sensitive, because of the crucial role played by the dissociating radiative flux and the gas shielding.

  13. Peri-Elastodynamic Simulations of Guided Ultrasonic Waves in Plate-Like Structure with Surface Mounted PZT.

    PubMed

    Patra, Subir; Ahmed, Hossain; Banerjee, Sourav

    2018-01-18

    A peridynamics-based elastodynamic computation tool named Peri-elastodynamics is proposed herein to simulate the three-dimensional (3D) Lamb wave modes in materials for the first time. Peri-elastodynamics is a nonlocal, meshless, scale-independent generalized technique to visualize the acoustic and ultrasonic waves in plate-like structures, micro-electro-mechanical systems (MEMS) and nanodevices for their respective characterization. In this article, the characteristics of the fundamental Lamb wave modes are simulated in a sample plate-like structure. Lamb wave modes are generated using a surface-mounted piezoelectric (PZT) transducer which is actuated from the top surface. The proposed generalized Peri-elastodynamics method is not only capable of simulating two-dimensional (2D) in-plane waves under plane-strain conditions, as formulated previously, but is also capable of accurately simulating the out-of-plane symmetric and antisymmetric Lamb wave modes in plate-like structures in 3D. For structural health monitoring (SHM) of plate-like structures and nondestructive evaluation (NDE) of MEMS devices, it is necessary to simulate the 3D wave-damage interaction scenarios and visualize the different wave features due to damage. Hence, in addition to simulating the guided ultrasonic wave modes in pristine material, Lamb waves were also simulated in a damaged plate. The accuracy of the proposed technique is verified by comparing the modes generated in the plate, and the mode shapes across the thickness of the plate, with theoretical wave analysis.

  14. Clumpy Disks as a Testbed for Feedback-regulated Galaxy Formation

    NASA Astrophysics Data System (ADS)

    Mayer, Lucio; Tamburello, Valentina; Lupi, Alessandro; Keller, Ben; Wadsley, James; Madau, Piero

    2016-10-01

    We study the dependence of fragmentation in massive gas-rich galaxy disks at z > 1 on stellar feedback schemes and hydrodynamical solvers, employing the GASOLINE2 SPH code and the Lagrangian mesh-less code GIZMO in finite mass mode. Non-cosmological galaxy disk runs with the standard delayed-cooling blastwave feedback are compared with runs adopting a new superbubble feedback, which produces winds by modeling the detailed physics of supernova-driven bubbles and leads to efficient self-regulation of star formation. We find that, with blastwave feedback, massive star-forming clumps form in comparable number and with very similar masses in GASOLINE2 and GIZMO. Typical clump masses are in the range 10⁷-10⁸ M⊙, lower than in most previous works, while giant clumps with masses above 10⁹ M⊙ are exceedingly rare. By contrast, superbubble feedback does not produce massive star-forming bound clumps as galaxies never undergo a phase of violent disk instability. In this scheme, only sporadic, unbound star-forming overdensities lasting a few tens of Myr can arise, triggered by non-linear perturbations from massive satellite companions. We conclude that there is severe tension between explaining massive star-forming clumps observed at z > 1 primarily as the result of disk fragmentation driven by gravitational instability and the prevailing view of feedback-regulated galaxy formation. The link between disk stability and star formation efficiency should thus be regarded as a key testing ground for galaxy formation theory.

  15. Distinguishing CDM dwarfs from SIDM dwarfs in baryonic simulations

    NASA Astrophysics Data System (ADS)

    Strickland, Emily; Fitts, Alex B.; Boylan-Kolchin, Michael

    2017-06-01

    Dwarf galaxies in the nearby Universe are the most dark-matter-dominated systems known. They are therefore natural probes of the nature of dark matter, which remains unknown. Our collaboration has performed several high-resolution cosmological zoom-in simulations of isolated dwarf galaxies. We simulate each galaxy in standard cold dark matter (ΛCDM) as well as self-interacting dark matter (SIDM, with a cross section of σ/m ~ 1 cm²/g), both with and without baryons, in order to identify distinguishing characteristics between the two. The simulations are run using GIZMO, a meshless-finite-mass hydrodynamical code, and are part of the Feedback in Realistic Environments (FIRE) project. By analyzing both the global properties and inner structure of the dwarfs in varying dark matter prescriptions, we provide a side-by-side comparison of isolated, dark-matter-dominated galaxies at the mass scale where differences in the two models of dark matter are thought to be the most obvious. We find that the boundary between classical dwarfs and ultra-faint dwarfs (at stellar masses of ~10⁵ solar masses) provides the clearest window for distinguishing between the two theories. At these low masses, our SIDM galaxies have a cored inner density profile, while their CDM counterparts have “cuspy” centers. The SIDM versions of each galaxy also have measurably lower stellar velocity dispersions than their CDM counterparts. Future observations of ultra-faint dwarfs with JWST and 30-m telescopes will be able to discern whether such alternate theories of dark matter are viable.

  16. Unraveling human complexity and disease with systems biology and personalized medicine

    PubMed Central

    Naylor, Stephen; Chen, Jake Y

    2010-01-01

    We are all perplexed that current medical practice often appears maladroit in curing our individual illnesses or disease. However, as is often the case, a lack of understanding, tools and technologies are the root cause of such situations. Human individuality is an often-quoted term but, in the context of human biology, it is poorly understood. This is compounded when there is a need to consider the variability of human populations. In the case of the former, it is possible to quantify human complexity as determined by the 35,000 genes of the human genome, the 1–10 million proteins (including antibodies) and the 2000–3000 metabolites of the human metabolome. Human variability is much more difficult to assess, since many of the variables, such as the definition of race, are not even clearly agreed on. In order to accommodate human complexity, variability and its influence on health and disease, it is necessary to undertake a systematic approach. In the past decade, the emergence of analytical platforms and bioinformatics tools has led to the development of systems biology. Such an approach offers enormous potential in defining key pathways and networks involved in optimal human health, as well as disease onset, progression and treatment. The tools and technologies now available in systems biology analyses offer exciting opportunities to exploit the emerging areas of personalized medicine. In this article, we discuss the current status of human complexity, and how systems biology and personalized medicine can impact at the individual and population level. PMID:20577569

  17. Habitat Complexity Metrics to Guide Restoration of Large Rivers

    NASA Astrophysics Data System (ADS)

    Jacobson, R. B.; McElroy, B. J.; Elliott, C.; DeLonay, A.

    2011-12-01

    Restoration strategies on large, channelized rivers typically strive to recover lost habitat complexity, based on the assumption that complexity and biophysical capacity are directly related. Although the definition of links between complexity and biotic responses can be tenuous, complexity metrics have appeal because of their potential utility in quantifying habitat quality, defining reference conditions and design criteria, and measuring restoration progress. Hydroacoustic instruments provide many ways to measure complexity on large rivers, yet substantive questions remain about variables and scale of complexity that are meaningful to biota, and how complexity can be measured and monitored cost-effectively. We explore these issues on the Missouri River, using the example of channel re-engineering projects that are intended to aid in recovery of the pallid sturgeon, an endangered benthic fish. We are refining our understanding of what habitat complexity means for adult fish by combining hydroacoustic habitat assessments with acoustic telemetry to map locations during reproductive migrations and spawning. These data indicate that migrating sturgeon select points with relatively low velocity but adjacent to areas of high velocity (that is, with high velocity gradients); the integration of points defines pathways which minimize energy expenditures during upstream migrations of 10's to 100's of km. Complexity metrics that efficiently quantify migration potential at the reach scale are therefore directly relevant to channel restoration strategies. We are also exploring complexity as it relates to larval sturgeon dispersal. Larvae may drift for as many as 17 days (100's of km at mean velocities) before using up their yolk sac, after which they "settle" into habitats where they initiate feeding. An assumption underlying channel re-engineering is that additional channel complexity, specifically increased shallow, slow water, is necessary for early feeding and refugia.
Development of complexity metrics is complicated by the fact that characteristics of channel morphology may increase complexity scores without necessarily increasing biophysical capacity for target species. For example, a cross section that samples depths and velocities across the thalweg (navigation channel) and into lentic habitat may score high on most measures of hydraulic or geomorphic complexity, but does not necessarily provide habitats beneficial to native species. Complexity measures need to be bounded by best estimates of native species' requirements. In the absence of specific information, creation of habitat complexity for the sake of complexity may lead to unintended consequences, for example, lentic habitats that increase a complexity score but support invasive species. An additional practical constraint on complexity measures is the need to develop metrics that can be deployed cost-effectively in an operational monitoring program. Design of a monitoring program requires informed choices of measurement variables, definition of reference sites, and design of sampling effort to capture spatial and temporal variability.

  18. Postural Complexity Differs Between Infant Born Full Term and Preterm During the Development of Early Behaviors

    PubMed Central

    Dusing, Stacey C; Izzo, Theresa A.; Thacker, Leroy R.; Galloway, James C

    2014-01-01

    Background and Aims Postural control differs between infants born preterm and full term at 1–3 weeks of age. It is unclear if differences persist or alter the development of early behaviors. The aim of this longitudinal study was to compare changes in postural control variability during development of head control and reaching in infants born preterm and full term. Methods Eighteen infants born preterm (mean gestational age 28.3±3.1 weeks) were included in this study and compared to existing data from 22 infants born full term. Postural variability was assessed longitudinally using root mean squared displacement and approximate entropy of the center of pressure displacement from birth to 6 months as measures of the magnitude of the variability and complexity of postural control. Behavioral coding was used to quantify development of head control and reaching. Results Group differences were identified in postural complexity during the development of head control and reaching. Infants born preterm used more repetitive and less adaptive postural control strategies than infants born full term. Both groups changed their postural complexity utilized during the development of head control and reaching. Discussion Early postural complexity was decreased in infants born preterm, compared to infants born full term. Commonly used clinical assessments did not identify these early differences in postural control. Altered postural control in infants born preterm influenced ongoing skill development in the first six months of life. PMID:24485170
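
    The two postural-variability measures named above can be sketched generically; the embedding length m and tolerance r below are conventional defaults, not the study's settings:

    ```python
    import math

    def rms_displacement(cop):
        """Root-mean-square displacement of a center-of-pressure series:
        the magnitude of postural variability about the mean position."""
        mu = sum(cop) / len(cop)
        return math.sqrt(sum((x - mu) ** 2 for x in cop) / len(cop))

    def approx_entropy(u, m=2, r=0.2):
        """Approximate entropy (Pincus): regularity of a series. Lower
        values indicate more repetitive, less adaptive dynamics."""
        def phi(k):
            n = len(u) - k + 1
            pats = [u[i:i + k] for i in range(n)]
            # fraction of template pairs within tolerance r (Chebyshev)
            sims = [sum(1 for q in pats
                        if max(abs(a - b) for a, b in zip(p, q)) <= r) / n
                    for p in pats]
            return sum(math.log(s) for s in sims) / n
        return phi(m) - phi(m + 1)
    ```

    A perfectly repetitive signal yields an approximate entropy near zero, which is the direction of the "more repetitive, less adaptive" strategies reported for the preterm group.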

  19. The Use of Race Variables in Genetic Studies of Complex Traits and the Goal of Reducing Health Disparities: A Transdisciplinary Perspective

    ERIC Educational Resources Information Center

    Shields, Alexandra E.; Fortun, Michael; Hammonds, Evelynn M.; King, Patricia A.; Lerman, Caryn; Rapp, Rayna; Sullivan, Patrick F.

    2005-01-01

    The use of racial variables in genetic studies has become a matter of intense public debate, with implications for research design and translation into practice. Using research on smoking as a springboard, the authors examine the history of racial categories, current research practices, and arguments for and against using race variables in genetic…

  20. Closed-form solution for Eshelby's elliptic inclusion in antiplane elasticity using complex variable

    NASA Astrophysics Data System (ADS)

    Chen, Y. Z.

    2013-12-01

    This paper provides a closed-form solution for Eshelby's elliptic inclusion in antiplane elasticity. In the formulation, the prescribed eigenstrains are not only uniform but may also take a linear form. After using the complex variable method and conformal mapping, the continuation condition for the traction and displacement along the interface in the physical plane can be reduced to a condition along the unit circle. The relevant complex potentials defined in the inclusion and the matrix can be separated from the continuation conditions of the traction and displacement along the interface. Expressions for the real strains and stresses in the inclusion arising from the assumed eigenstrains are presented. Results for the case of a linear distribution of eigenstrain are obtained for the first time in this paper.

  1. Complex character analysis of heart rate variability following brain asphyxia.

    PubMed

    Cai, Yuanyuan; Qiu, Yihong; Wei, Lan; Zhang, Wei; Hu, Sijun; Smith, Peter R; Crabtree, Vincent P; Tong, Shanbao; Thakor, Nitish V; Zhu, Yisheng

    2006-05-01

    In the present study, Renyi entropy and L-Z complexity were used to characterize the heart rate variability (HRV) of rats that suffered brain asphyxia and ischemia. Two groups of rats were studied: (a) rats (n=5) injected with the NAALADase inhibitor 2-PMPA, which has been proven neuroprotective in asphyxia injury, and (b) control subjects (n=5) without medication. Renyi entropy and L-Z complexity of the R-R intervals (RRI) at different experiment stages were investigated in the two groups. The results show that both measures indicate less injury and better recovery in the drug-injection group. The dynamic change of the 90 min RRI signal after the asphyxia was investigated. The sudden reduction of the two parameters shows their sensitivity to the asphyxia insult.
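
    The L-Z (Lempel-Ziv) complexity used above is typically computed on a binarized R-R interval series. A compact sketch of the standard LZ76 phrase parsing follows; the median-threshold binarization is a common convention, not necessarily the study's exact pipeline:

    ```python
    def binarize(rri):
        """Binarize an R-R interval series around its median value."""
        m = sorted(rri)[len(rri) // 2]
        return [1 if x > m else 0 for x in rri]

    def lz_complexity(seq):
        """LZ76 complexity: number of phrases parsed left to right, each
        phrase being the shortest substring not reproducible from the
        previously scanned text."""
        s = "".join(map(str, seq))
        phrases, i = 0, 0
        while i < len(s):
            j = i + 1
            # grow the candidate until it no longer occurs in the prefix
            while j <= len(s) and s[i:j] in s[:j - 1]:
                j += 1
            phrases += 1
            i = j
        return phrases
    ```

    For example, the classic sequence 0001101 parses into the four phrases 0, 001, 10, 1.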

  2. Tuberous sclerosis complex: Recent advances in manifestations and therapy.

    PubMed

    Wataya-Kaneda, Mari; Uemura, Motohide; Fujita, Kazutoshi; Hirata, Haruhiko; Osuga, Keigo; Kagitani-Shimono, Kuriko; Nonomura, Norio

    2017-09-01

    Tuberous sclerosis complex is an autosomal dominant inherited disorder characterized by generalized involvement and variable manifestations, with a birth incidence of 1:6000. In the past quarter-century, significant progress on tuberous sclerosis complex has been made. Two responsible genes, TSC1 and TSC2, which encode hamartin and tuberin, respectively, were discovered in the 1990s, and their functions were elucidated in the 2000s. The hamartin-tuberin complex is involved in the phosphoinositide 3-kinase-protein kinase B-mammalian target of rapamycin signal transduction pathway, and suppresses the activity of mammalian target of rapamycin complex 1, which is a center for various functions. Constitutive activation of mammalian target of rapamycin complex 1 causes the variable manifestations of tuberous sclerosis complex. Recently, genetic tests were launched to diagnose tuberous sclerosis complex, and mammalian target of rapamycin complex 1 inhibitors are being used to treat tuberous sclerosis complex patients. As a result of these advances, new diagnostic criteria have been established, and an indispensable new treatment framework, a "cross-sectional medical examination system" that involves many experts in tuberous sclerosis complex diagnosis and treatment, was also created. Simultaneously, the increasing frequency of genetic testing and advances in diagnostic technology have resulted in new views on symptoms. The number of tuberous sclerosis complex patients without neural symptoms is increasing, and for these patients, renal manifestations and pulmonary lymphangioleiomyomatosis have become important manifestations. New concepts of tuberous sclerosis complex-associated neuropsychiatric disorders or perivascular epithelioid cell tumors are being created. The present review contains a summary of recent advances, significant manifestations and therapy in tuberous sclerosis complex. © 2017 The Japanese Urological Association.

  3. The influence of complex and threatening environments in early life on brain size and behaviour.

    PubMed

    DePasquale, C; Neuberger, T; Hirrlinger, A M; Braithwaite, V A

    2016-01-27

    The ways in which challenging environments during development shape the brain and behaviour are increasingly being addressed. To date, studies typically consider only single variables, but the real world is more complex. Many factors simultaneously affect the brain and behaviour, and whether these work independently or interact remains untested. To address this, zebrafish (Danio rerio) were reared in a two-by-two design in housing that varied in structural complexity and/or exposure to a stressor. Fish experiencing both complexity (enrichment objects changed over time) and mild stress (daily net chasing) exhibited enhanced learning and were less anxious when tested as juveniles (between 77 and 90 days). Adults tested (aged 1 year) were also less anxious even though fish were kept in standard housing after three months of age (i.e. no chasing or enrichment). Volumetric measures of the brain using magnetic resonance imaging (MRI) showed that complexity alone generated fish with a larger brain, but this increase in size was not seen in fish that experienced both complexity and chasing, or chasing alone. The results highlight the importance of looking at multiple variables simultaneously, and reveal differential effects of complexity and stressful experiences during development of the brain and behaviour. © 2016 The Authors.

  4. The influence of complex and threatening environments in early life on brain size and behaviour

    PubMed Central

    Neuberger, T.; Hirrlinger, A. M.; Braithwaite, V. A.

    2016-01-01

    The ways in which challenging environments during development shape the brain and behaviour are increasingly being addressed. To date, studies typically consider only single variables, but the real world is more complex. Many factors simultaneously affect the brain and behaviour, and whether these work independently or interact remains untested. To address this, zebrafish (Danio rerio) were reared in a two-by-two design in housing that varied in structural complexity and/or exposure to a stressor. Fish experiencing both complexity (enrichment objects changed over time) and mild stress (daily net chasing) exhibited enhanced learning and were less anxious when tested as juveniles (between 77 and 90 days). Adults tested (aged 1 year) were also less anxious even though fish were kept in standard housing after three months of age (i.e. no chasing or enrichment). Volumetric measures of the brain using magnetic resonance imaging (MRI) showed that complexity alone generated fish with a larger brain, but this increase in size was not seen in fish that experienced both complexity and chasing, or chasing alone. The results highlight the importance of looking at multiple variables simultaneously, and reveal differential effects of complexity and stressful experiences during development of the brain and behaviour. PMID:26817780

  5. 79. VIEW OF VAL FIRING RANGE LOOKING SOUTHWEST SHOWING LAUNCHER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    79. VIEW OF VAL FIRING RANGE LOOKING SOUTHWEST SHOWING LAUNCHER BRIDGE, BARGES, SONAR BUOY RANGE AND MORRIS DAM IN BACKGROUND, June 10, 1948. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  6. Multivariate Analysis of Income Inequality: Data from 32 Nations.

    ERIC Educational Resources Information Center

    Stack, Steven

    To analyze income inequality in 32 nations, the research tested hypotheses based upon eight socioeconomic variables. The first seven variables, often tested in income research, were: political participation, industrial development, population growth, educational level, inflation rate, economic growth, and technological complexity. The eighth…

  7. 76. FIRST TEST SHOT OF THE VAL AT THE DEDICATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    76. FIRST TEST SHOT OF THE VAL AT THE DEDICATION CEREMONIES AS SEEN FROM THE OBSERVATION DECK ABOVE THE CONTROL STATION, May 7, 1948. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  8. Diversified models for portfolio selection based on uncertain semivariance

    NASA Astrophysics Data System (ADS)

    Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini

    2017-02-01

    Since financial markets are complex, future security returns are sometimes represented mainly by experts' estimations due to a lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given by experts' estimations and depicted as uncertain variables. Three properties of the semivariance of uncertain variables are verified. Based on the concept of semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve them: the 99-method computes the expected value and semivariance of uncertain variables, and the genetic algorithm seeks the best allocation plan for portfolio selection. Finally, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.
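
    The paper's 99-method is specific to uncertainty theory and is not reproduced here. As a rough, hypothetical sketch of the genetic-algorithm half, the following evolves long-only portfolio weights to maximize expected return minus a semivariance penalty over simulated return scenarios (the securities, scenario numbers, and GA parameters are all invented for illustration):

```python
import random

random.seed(0)

# 200 return scenarios for 4 hypothetical securities; in the paper the
# returns are uncertain variables from expert estimates, not samples.
SCENARIOS = [[random.gauss(0.05 + 0.02 * j, 0.10 + 0.05 * j) for j in range(4)]
             for _ in range(200)]

def normalize(w):
    w = [max(x, 1e-9) for x in w]          # long-only weights
    s = sum(w)
    return [x / s for x in w]

def fitness(w, lam=3.0):
    """Expected return minus lam * semivariance (penalizes downside only)."""
    rets = [sum(wi * ri for wi, ri in zip(w, row)) for row in SCENARIOS]
    mean = sum(rets) / len(rets)
    semivar = sum(min(r - mean, 0.0) ** 2 for r in rets) / len(rets)
    return mean - lam * semivar

def ga(pop_size=40, gens=60, n=4):
    pop = [normalize([random.random() for _ in range(n)]) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]        # elitism: keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)  # crossover: average two parents
            child = [(x + y) / 2 + random.gauss(0, 0.05) for x, y in zip(a, b)]
            children.append(normalize(child))
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print("weights:", [round(w, 3) for w in best])
```

    Elitism keeps the best half of each generation, so the best fitness never decreases; the actual hybrid algorithm replaces the sampled scenarios above with 99-method evaluations of the uncertain returns.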

  9. Genetic variability in cereal isolates of the Fusarium incarnatum-equiseti species complex

    USDA-ARS?s Scientific Manuscript database

    The F. incarnatum-equiseti species complex (FIESC) includes fungi associated with diseases of multiple agricultural crops. Although members of FIESC are considered moderately aggressive, they produce diverse mycotoxins, including trichothecenes. Because FIESC exhibits cryptic speciation, DNA-based p...

  10. Phylogenomic and biogeographic reconstruction of the Trichinella complex

    USDA-ARS?s Scientific Manuscript database

    Trichinellosis is a globally important food-borne parasitic disease of humans. It is caused by roundworms of the Trichinella complex. Extensive biodiversity is reflected in substantial ecological and genetic variability within and among taxa, and major controversy surrounds the systematics of this c...

  11. System Complexity Reduction via Feature Selection

    ERIC Educational Resources Information Center

    Deng, Houtao

    2011-01-01

    This dissertation transforms a set of system complexity reduction problems to feature selection problems. Three systems are considered: classification based on association rules, network structure learning, and time series classification. Furthermore, two variable importance measures are proposed to reduce the feature selection bias in tree…

  12. Discrete-Choice Modeling Of Non-Working Women’s Trip-Chaining Activity Based

    NASA Astrophysics Data System (ADS)

    Hayati, Amelia; Pradono; Purboyo, Heru; Maryati, Sri

    2018-05-01

    The urban development of technology and economics is changing the lifestyles of urban societies, and with it their travel demand. Urban women, especially in Bandung, West Java, have a high and increasing demand for daily travel: they have easy access to personal modes of transportation and the freedom to go anywhere to meet their personal and family needs. This also holds for non-working women (housewives) in Bandung. More than 50% of women's mobility takes place outside the home as trip-chaining, from leaving to returning home in one day, driven by their complex activities in meeting the needs of family and home care, while less than 60% of men's outdoor mobility is a simple trip chain or a single trip. Trip-chaining thus differs significantly between non-working women and working men, illustrating the mobility patterns of mothers and fathers in a family under an activity-based approach toward the same purpose, family welfare. This study examines how complex the trip-chaining of non-working urban women (housewives) is, using an activity-based approach to outdoor activities over one week. Socio-economic and household demographic variables serve as the basis for measuring the independent variables affecting family welfare, together with the type, time and duration of activities performed by unemployed housewives. The study examines the interrelationships between activity variables, especially activity and travel time, and household socio-economic variables that generate the complexity of women's daily travel. Discrete choice modelling, as developed by Ben-Akiva, Chandra Bhat and others, is used to describe the relationship between activity and socio-economic demographic variables, based on primary survey data for 466 unemployed housewives in Bandung, West Java.
The regression results, obtained with the seemingly unrelated regression (SUR) approach, showed interrelationships among all variables, including the complexity of housewives' trip-chaining based on their daily activities. The types of mandatory and discretionary activities, and the durations of the activities performed along the trip chains, are directed toward the welfare of all family members.

  13. Complexities in Subsetting Level 2 Data

    NASA Technical Reports Server (NTRS)

    Huwe, Paul; Wei, Jennifer; Meyer, David; Silberstein, David S.; Alfred, Jerome; Savtchenko, Andrey K.; Johnson, James E.; Albayrak, Arif; Hearty, Thomas

    2017-01-01

    Satellite Level 2 data present unique challenges for tools and services: nonlinear spatial geometry, inhomogeneous file data structures, inconsistent temporal variables, complex data variable dimensionality, and multiple file formats all complicate the creation of general tools for Level 2 data support. At the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC), we are implementing a general subsetting service that extracts Level 2 data within a user-specified spatio-temporal region of interest (ROI). In this presentation, we unravel some of the challenges faced in creating this service and the strategies we used to surmount them.
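
    The GES DISC implementation is not described in code here; as a toy illustration of the core operation such a service performs, a pixel-by-pixel spatio-temporal filter over swath geolocation arrays (hypothetical layout: parallel lists of latitude, longitude, time, and value) might look like:

```python
from datetime import datetime

def subset_swath(lats, lons, times, values, roi, t0, t1):
    """Keep pixels whose (lat, lon) falls inside the bounding box
    roi = (lat_min, lat_max, lon_min, lon_max) and whose timestamp lies
    within [t0, t1]. Level 2 swath geometry is irregular, so each pixel
    is tested individually rather than by slicing a regular grid."""
    lat_min, lat_max, lon_min, lon_max = roi
    return [(lat, lon, t, v)
            for lat, lon, t, v in zip(lats, lons, times, values)
            if lat_min <= lat <= lat_max
            and lon_min <= lon <= lon_max
            and t0 <= t <= t1]

# Four hypothetical pixels; the last lies outside the ROI.
lats = [10.0, 10.5, 11.2, 12.0]
lons = [20.0, 20.5, 21.0, 25.0]
times = [datetime(2017, 1, 1, h) for h in range(4)]
vals = [1.0, 2.0, 3.0, 4.0]
kept = subset_swath(lats, lons, times, vals, (10.0, 11.5, 19.5, 22.0),
                    datetime(2017, 1, 1, 0), datetime(2017, 1, 1, 2))
print(len(kept))  # → 3
```

    Real subsetters must additionally handle ROIs crossing the antimeridian, quality flags, and format-specific file structure, which this sketch ignores.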

  14. Improving the Prediction of Mortality and the Need for Life-Saving Interventions in Trauma Patients Using Standard Vital Signs With Heart-Rate Variability and Complexity

    DTIC Science & Technology

    2015-06-01

    Trauma 69:S10-S13, 2010. 2. Liu NT, Holcomb JB, Wade CE, Darrah MI, Salinas J: Utility of vital signs, heart-rate variability and complexity, and machine ... learning for identifying the need for life-saving interventions in trauma patients. Shock 42:108-114, 2014. 3. Pickering TG, Shimbo D, Hass D...Ann Emerg Med 45:68-76, 2005. 8. Liu NT, Holcomb JB, Wade CE, Batchinsky AI, Cancio LC, Darrah MI, Salinas J: Development and validation of a machine

  15. Wigner functions defined with Laplace transform kernels.

    PubMed

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels, the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits marginal properties similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America
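
    For orientation, the construction can be read as follows (notation mine, not necessarily the authors'): the traditional Wigner function correlates the signal with itself under a Fourier kernel, while the Laplace-kernel variant replaces that kernel with an exponential in a complex variable s:

```latex
W(x,p)     = \int f\!\left(x+\tfrac{y}{2}\right) f^{*}\!\left(x-\tfrac{y}{2}\right) e^{-\mathrm{i}py}\,\mathrm{d}y ,
\qquad
W_{L}(x,s) = \int f\!\left(x+\tfrac{y}{2}\right) f^{*}\!\left(x-\tfrac{y}{2}\right) e^{-sy}\,\mathrm{d}y ,
\qquad s \in \mathbb{C} .
```

    Setting s = ip recovers the Fourier case, which is why the marginals behave similarly; a complex s with a nonzero real part is what allows decaying (evanescent) fields to be represented.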

  16. The Effect of Visual Information on the Manual Approach and Landing

    NASA Technical Reports Server (NTRS)

    Wewerinke, P. H.

    1982-01-01

    The effect of visual information, in combination with basic display information, on approach performance was investigated. A pre-experimental model analysis was performed in terms of the optimal control model. The resulting aircraft approach performance predictions were compared with the results of a moving-base simulator program. The results illustrate that the model provides a meaningful description of the visual (scene) perception process involved in the complex (multi-variable, time-varying) manual approach task, with a useful predictive capability. The theoretical framework was shown to allow a straightforward investigation of the complex interaction of a variety of task variables.

  17. Effects of task and age on the magnitude and structure of force fluctuations: insights into underlying neuro-behavioral processes.

    PubMed

    Vieluf, Solveig; Temprado, Jean-Jacques; Berton, Eric; Jirsa, Viktor K; Sleimen-Malkoun, Rita

    2015-03-13

    The present study aimed at characterizing the effects of increasing (relative) force level and aging on isometric force control. To achieve this objective and to infer changes in the underlying control mechanisms, measures of information transmission, as well as the magnitude and time-frequency structure of behavioral variability, were applied to force time series. Older adults were found to be weaker, more variable, and less efficient than young participants. As a function of force level, efficiency followed an inverted-U shape in both groups, suggesting a similar organization of the force control system. The time-frequency structure of force output fluctuations was only significantly affected by task conditions. Specifically, a narrower spectral distribution with more long-range correlations and an inverted-U pattern of complexity changes were observed with increasing force level. Although not significant, older participants displayed on average less complex behavior at low and intermediate force levels. The changes in the force signal's regularity showed a strong dependence on time scales, which significantly interacted with age and condition. An inverted-U profile was only observed for the time scale relevant to the sensorimotor control process. However, in both groups the peak was not aligned with the optimum of efficiency. Our results support the view that behavioral variability, in terms of magnitude and structure, has a functional meaning and affords non-invasive markers of the adaptations of the sensorimotor control system to various constraints. The measures of efficiency and variability should be considered complementary since they convey specific information on the organization of control processes. The reported weak age effect on variability and complexity measures suggests that the behavioral expression of the loss-of-complexity hypothesis is not as straightforward as conventionally admitted.
However, group differences did not completely vanish, which suggests that age differences can be more or less apparent depending on task properties and whether difficulty is scaled in relative or absolute terms.
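
    One widely used regularity measure of the kind discussed above is sample entropy; the abstract does not specify the authors' exact estimator, so the following is a generic sketch (standard SampEn definition, toy signals):

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    template vectors matching for m points (within r * std) also match
    for m + 1 points. Lower values indicate a more regular signal."""
    n = len(x)
    mu = sum(x) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in x) / n)
    tol = r * sd

    def matches(mm):
        templates = [x[i:i + mm] for i in range(n - mm + 1)]
        return sum(
            max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol
            for i in range(len(templates))
            for j in range(i + 1, len(templates)))

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

random.seed(1)
regular = [math.sin(0.3 * i) for i in range(200)]   # predictable signal
noisy = [random.gauss(0, 1) for _ in range(200)]    # irregular signal
print(sample_entropy(regular), sample_entropy(noisy))
```

    A regular signal yields a markedly lower value than white noise; in the study's terms, changes in such a measure across force levels and age groups index the complexity of force fluctuations.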

  18. Variability of inter-observer agreement on feasibility of partial nephrectomy before and after neoadjuvant axitinib for locally advanced renal cell carcinoma (RCC): independent analysis from a phase II trial.

    PubMed

    Karam, Jose A; Devine, Catherine E; Fellman, Bryan M; Urbauer, Diana L; Abel, E Jason; Allaf, Mohamad E; Bex, Axel; Lane, Brian R; Thompson, R Houston; Wood, Christopher G

    2016-04-01

    To evaluate how many patients could have undergone partial nephrectomy (PN) rather than radical nephrectomy (RN) before and after neoadjuvant axitinib therapy, as assessed by five independent urological oncologists, and to study the variability of inter-observer agreement. Pre- and post-systemic treatment computed tomography scans from 22 patients with clear cell renal cell carcinoma in a phase II neoadjuvant axitinib trial were reviewed by five independent urological oncologists. R.E.N.A.L. nephrometry score and κ statistics were calculated. The median R.E.N.A.L. nephrometry score changed from 11 before treatment to 10 after treatment (P = 0.002). Five tumours with moderate complexity before axitinib treatment remained moderate complexity after treatment. Of 17 tumours with high complexity before axitinib treatment, three became moderate complexity after treatment. The overall κ statistic was 0.611. Moderate-complexity κ was 0.611 vs a high-complexity κ of 0.428. Before axitinib treatment the κ was 0.550 vs 0.609 after treatment. After treatment with axitinib, all five reviewers agreed that only five patients required RN (instead of eight before treatment) and that 10 patients could now undergo PN (instead of three before treatment). The odds of PN feasibility were 22.8-times higher after treatment with axitinib. There is considerable variability in inter-observer agreement on the feasibility of PN in patients treated with neoadjuvant targeted therapy. Although more patients were candidates for PN after neoadjuvant axitinib therapy, it remains difficult to identify these patients a priori. © 2015 The Authors BJU International © 2015 BJU International Published by John Wiley & Sons Ltd.

  19. Complexity and health professions education: a basic glossary.

    PubMed

    Mennin, Stewart

    2010-08-01

    The study of health professions education in the context of complexity science and complex adaptive systems involves concepts and terminology that are likely to be unfamiliar to many health professions educators. A list of selected key terms and definitions from the complexity science literature is provided to help readers navigate familiar territory from a different perspective. Terms defined include agent, attractor, bifurcation, chaos, co-evolution, collective variable, complex adaptive systems, complexity science, deterministic systems, dynamical system, edge of chaos, emergence, equilibrium, far from equilibrium, fuzzy boundaries, linear system, non-linear system, random, self-organization and self-similarity.

  20. Dinuclear lanthanide complexes based on amino alcoholate ligands: Structure, magnetic and fluorescent properties

    NASA Astrophysics Data System (ADS)

    Sun, Gui-Fang; Zhang, Cong-Ming; Guo, Jian-Ni; Yang, Meng; Li, Li-Cun

    2017-05-01

    Two binuclear lanthanide complexes [Ln2(hfac)6(HL)2] (LnIII = Dy (1), Tb (2); hfac = hexafluoroacetylacetonate, HL = (R)-2-amino-2-phenylethanol) have been obtained using an amino alcoholate ligand. In both complexes, the Ln(III) ions are bridged by two alkoxido groups from the HL ligands, resulting in binuclear structures. Variable-temperature magnetic susceptibility studies indicate a ferromagnetic interaction between the two Ln(III) ions. Frequency-dependent out-of-phase signals are observed for complex 1, suggesting SMM-type behavior. Complexes 1 and 2 display intense, characteristic luminescence.

  1. Analysis of the historical precipitation in the South East Iberian Peninsula at different spatio-temporal scale. Study of the meteorological drought

    NASA Astrophysics Data System (ADS)

    Fernández-Chacón, Francisca; Pulido-Velazquez, David; Jiménez-Sánchez, Jorge; Luque-Espinar, Juan Antonio

    2017-04-01

    Precipitation is a fundamental climate variable with pronounced spatial and temporal variability at global, regional, and sub-regional scales. Due to its orographic complexity and its latitude, the Iberian Peninsula (IP), located west of the Mediterranean Basin between the Atlantic Ocean and the Mediterranean Sea, has a complex climate. Strong north-south and east-west gradients exist over the peninsula as a consequence of different low-frequency atmospheric patterns, and the overlap of these patterns over the year largely determines the variability of climatic variables. A dry Mediterranean climate dominates the southeast of the Iberian Peninsula, where precipitation is an intermittent and discontinuous variable. In this research, information from the Spain02 v4 database was used to study the South East (SE) IP for the 1971-2010 period at a spatial resolution of 0.11° x 0.11°. We analysed precipitation at different time scales (daily, monthly, seasonal, annual, …) to study the spatial distribution and temporal trends. The high spatial, intra-annual and inter-annual climatic variability observed makes a climatic regionalization necessary. In addition, for the identified areas and subareas of homogeneous climate, we analysed the evolution of meteorological drought over the same period at different time scales, using the standardized precipitation index (SPI) at 12-, 24- and 48-month temporal scales. The climatic complexity of the area produces high variability in drought characteristics (duration, intensity and frequency) across the different climatic areas. This research has been supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds. We would also like to thank the Spain02 project for the data provided for this study.
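
    The operational SPI fits a gamma distribution to k-month precipitation totals and maps the fitted CDF through the standard normal quantile. As a simplified, illustrative stand-in (plain z-scores, hypothetical monthly data), the index can be sketched as:

```python
import statistics

def spi_like(precip, k=12):
    """Simplified SPI-style index: k-month moving totals standardized to
    zero mean / unit variance. (The real SPI replaces the z-score with a
    gamma-CDF-to-normal-quantile transform.)"""
    totals = [sum(precip[i - k + 1:i + 1]) for i in range(k - 1, len(precip))]
    mu = statistics.fmean(totals)
    sd = statistics.stdev(totals)
    return [(t - mu) / sd for t in totals]

# Three wet years, one dry year, one wet year (hypothetical monthly series).
monthly = [50.0] * 36 + [10.0] * 12 + [50.0] * 12
index = spi_like(monthly, k=12)
print(min(index))  # strongly negative during the dry year
```

    Under common SPI classifications, values below about -1 indicate moderate drought and below -2 extreme drought, which is how duration, intensity and frequency are read off the index series.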

  2. Dissolving variables in connectionist combinatory logic

    NASA Technical Reports Server (NTRS)

    Barnden, John; Srinivas, Kankanahalli

    1990-01-01

    A connectionist system which can represent and execute combinator expressions to elegantly solve the variable binding problem in connectionist networks is presented. This system is a graph reduction machine utilizing graph representations and traversal mechanisms similar to ones described in the BoltzCONS system of Touretzky (1986). It is shown that, as combinators eliminate variables by introducing special functions, these functions can be connectionistically implemented without reintroducing variable binding. This approach 'dissolves' an important part of the variable binding problem, in that a connectionist system still has to manipulate complex data structures, but those structures and their manipulations are rendered more uniform.
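
    The claim that combinators eliminate variables can be made concrete with a tiny symbolic S/K/I graph-reduction sketch; this is a generic illustration of combinator reduction, not the connectionist BoltzCONS-style machinery the paper describes:

```python
# Terms: strings are atoms ('S', 'K', 'I', or free variables such as 'x');
# 2-tuples (f, a) are applications of f to a.

def step(t):
    """Perform one leftmost (normal-order) reduction step, or return None
    if the term is already in normal form."""
    if isinstance(t, str):
        return None
    f, a = t
    if f == 'I':                                   # I x  ->  x
        return a
    if isinstance(f, tuple) and f[0] == 'K':       # K x y  ->  x
        return f[1]
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
        g, h = f[0][1], f[1]                       # S g h x  ->  (g x)(h x)
        return ((g, a), (h, a))
    r = step(f)
    if r is not None:
        return (r, a)
    r = step(a)
    return (f, r) if r is not None else None

def reduce_term(t, fuel=1000):
    while fuel:
        r = step(t)
        if r is None:
            return t
        t, fuel = r, fuel - 1
    return t

# S K K behaves as the identity, with no variable binding anywhere:
expr = ((('S', 'K'), 'K'), 'x')                    # ((S K) K) x
print(reduce_term(expr))  # → x
```

    Each rule rewrites a local subgraph without consulting any environment or binding table, which is what makes combinator graph reduction attractive for a connectionist implementation.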

  3. Not Noisy, Just Wrong: The Role of Suboptimal Inference in Behavioral Variability

    PubMed Central

    Beck, Jeffrey M.; Ma, Wei Ji; Pitkow, Xaq; Latham, Peter E.; Pouget, Alexandre

    2015-01-01

    Behavior varies from trial to trial even when the stimulus is maintained as constant as possible. In many models, this variability is attributed to noise in the brain. Here, we propose that there is another major source of variability: suboptimal inference. Importantly, we argue that in most tasks of interest, and particularly complex ones, suboptimal inference is likely to be the dominant component of behavioral variability. This perspective explains a variety of intriguing observations, including why variability appears to be larger on the sensory than on the motor side, and why our sensors are sometimes surprisingly unreliable. PMID:22500627

  4. Phytoplankton dynamics of a subtropical reservoir controlled by the complex interplay among hydrological, abiotic, and biotic variables.

    PubMed

    Kuo, Yi-Ming; Wu, Jiunn-Tzong

    2016-12-01

    This study was conducted to identify the key factors related to the spatiotemporal variations in phytoplankton abundance in a subtropical reservoir from 2006 to 2010 and to assist in developing strategies for water quality management. Dynamic factor analysis (DFA), a dimension-reduction technique, was used to identify interactions between explanatory variables (i.e., environmental variables) and the abundance (biovolume) of predominant phytoplankton classes. The optimal DFA model significantly described the dynamic changes in abundances of the predominant phytoplankton groups (dinoflagellates, diatoms, and green algae) at five monitoring sites. Water temperature, electrical conductivity, water level, nutrients (total phosphorus, NO3-N, and NH3-N), macro-zooplankton, and zooplankton were the key factors affecting the dynamics of these phytoplankton. Transformations of nutrients and reactions between water quality variables, as altered by hydrological conditions, may also control phytoplankton abundance dynamics and may underlie the common trends in the DFA model. The meandering shape of Shihmen Reservoir and its surrounding rivers causes a complex interplay between hydrological conditions and abiotic and biotic variables, so that phytoplankton abundance could not be estimated from a few variables alone. Additional water quality and hydrological variables should be monitored in the surrounding rivers, and monitoring plans should be executed a few days before and after reservoir operations and heavy storms; this would assist in developing site-specific preventive strategies to control phytoplankton abundance.

  5. Implications of the behavioral approach to hypnosis.

    PubMed

    Starker, S

    1975-07-01

    The findings of behaviorally oriented research regarding the importance of cognitive-motivational variables in hypnosis are examined and some clinical and theoretical implications are explored. Hypnosis seems usefully conceptualized as a complex configuration or gestalt of interacting variables on several different levels, for example, cognitive, motivational, social, physiologic.

  6. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    EPA Science Inventory

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
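
    One common ingredient of RF variable selection is permutation importance: shuffle one predictor and measure how much the model's error grows. The sketch below illustrates the idea with a hand-rolled two-variable linear model instead of an actual forest (toy data; in practice one would use a fitted RF and out-of-bag error):

```python
import random

random.seed(2)

# Toy data: y depends on x0 only; x1 is pure noise.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(300)]
y = [2.0 * a + random.gauss(0, 0.5) for a, _ in X]

def fit_ols(X, y):
    """Closed-form least squares for two predictors (no intercept),
    solving the 2x2 normal equations by hand."""
    s00 = sum(a * a for a, _ in X); s01 = sum(a * b for a, b in X)
    s11 = sum(b * b for _, b in X)
    t0 = sum(a * yi for (a, _), yi in zip(X, y))
    t1 = sum(b * yi for (_, b), yi in zip(X, y))
    det = s00 * s11 - s01 * s01
    return ((s11 * t0 - s01 * t1) / det, (s00 * t1 - s01 * t0) / det)

def mse(w, X, y):
    return sum((w[0] * a + w[1] * b - yi) ** 2
               for (a, b), yi in zip(X, y)) / len(y)

w = fit_ols(X, y)
base = mse(w, X, y)

def perm_importance(col):
    """Increase in MSE after shuffling one column: a feature matters
    if breaking its pairing with y hurts predictions."""
    shuffled = [row[col] for row in X]
    random.shuffle(shuffled)
    Xp = [[s if j == col else row[j] for j in range(2)]
          for row, s in zip(X, shuffled)]
    return mse(w, Xp, y) - base

i0, i1 = perm_importance(0), perm_importance(1)
print(i0, i1)  # the informative feature scores far higher
```

    Selection methods then keep features whose importance clearly exceeds that of known-noise (shadow) features; stability is assessed by repeating the shuffle over many runs.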

  7. 30. VAL LOOKING DOWN THE LAUNCHER SLAB STAIRS AT THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    30. VAL LOOKING DOWN THE LAUNCHER SLAB STAIRS AT THE PROJECTILE LOADING CAR AND LOADING PLATFORM ADJACENT TO THE PROJECTILE LOADING DECK AND LAUNCHER BRIDGE. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  8. 74. DETAIL VIEW OF INSIDE THE LAUNCHING BRIDGE LOOKING SOUTHWEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    74. DETAIL VIEW OF INSIDE THE LAUNCHING BRIDGE LOOKING SOUTHWEST SHOWING ADJUSTABLE STAIRS ON THE LEFT AND LAUNCHING TUBE ON THE RIGHT, Date unknown, circa 1948. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  9. Soil Temperature Variability in Complex Terrain Measured Using Fiber-Optic Distributed Temperature Sensing

    NASA Astrophysics Data System (ADS)

    Seyfried, M. S.; Link, T. E.

    2013-12-01

    Soil temperature (Ts) exerts critical environmental controls on hydrologic and biogeochemical processes. Rates of carbon cycling, mineral weathering, infiltration and snow melt are all influenced by Ts. Although broadly reflective of the climate, Ts is sensitive to local variations in cover (vegetative, litter, snow), topography (slope, aspect, position), and soil properties (texture, water content), resulting in a spatially and temporally complex distribution of Ts across the landscape. Understanding and quantifying the processes controlled by Ts requires an understanding of that distribution. Relatively few spatially distributed field Ts data exist, partly because traditional Ts data are point measurements. A relatively new technology, the fiber-optic distributed temperature sensing (FO-DTS) system, has the potential to provide such data but has not been rigorously evaluated in the context of remote, long-term field research. We installed an FO-DTS system in a small experimental watershed within the Reynolds Creek Experimental Watershed (RCEW) in the Owyhee Mountains of SW Idaho. The watershed is characterized by complex terrain and a seasonal snow cover. Our objectives are to: (i) evaluate the applicability of fiber-optic DTS in remote field environments and (ii) describe the spatial and temporal variability of soil temperature in complex terrain influenced by a variable snow cover. We installed fiber-optic cable at a depth of 10 cm in contrasting snow accumulation and topographic environments and monitored temperature along 750 m with DTS. We found that the DTS can provide accurate Ts data (+/- 0.4°C) that resolves Ts changes of about 0.03°C at a spatial scale of 1 m, with occasional calibration, under conditions with an ambient temperature range of 50°C. We note that there are site-specific limitations related to cable installation and destruction by local fauna. The FO-DTS provides unique insight into the spatial and temporal variability of Ts in a landscape.
We found strong seasonal trends in Ts variability controlled by snow cover and solar radiation as modified by topography. During periods of spatially continuous snow cover, Ts was practically homogeneous throughout the watershed. In the absence of snow cover, Ts was highly variable, with most of the variability attributable to different topographic units defined by slope and aspect. During transition periods, when snow melts out, Ts was highly variable both within the watershed and within topographic units. The importance of accounting for these relatively small-scale effects is underscored by the fact that the overall range of Ts in a study area 600 m long is similar to that of the much larger RCEW, which spans a 900 m elevation gradient.

  10. Why significant variables aren't automatically good predictors.

    PubMed

    Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa

    2015-11-10

    Thus far, genome-wide association studies (GWAS) have been disappointing: investigators have been unable to use the identified, statistically significant variants in complex diseases to make predictions useful for personalized medicine. Why are significant variables not leading to good prediction of outcomes? We point out that this problem is prevalent in simple as well as complex data, in the sciences as well as the social sciences. We offer a brief explanation and some statistical insights on why higher significance cannot automatically imply stronger predictivity, and illustrate through simulations and a real breast cancer example. We also demonstrate that highly predictive variables do not necessarily appear as highly significant, thus evading the researcher using significance-based methods. What makes variables good for prediction versus significance depends on different properties of the underlying distributions. If prediction is the goal, we must lay aside significance as the only selection standard. We suggest that progress in prediction requires efforts toward a new research agenda of searching for a novel criterion to retrieve highly predictive variables rather than highly significant variables. We offer an alternative approach, not designed for significance, the partition retention method, which was very effective at prediction on a long-studied breast cancer data set, reducing the classification error rate from 30% to 8%.
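
    The central point, that a variable can be highly significant yet nearly useless for prediction, is easy to reproduce in simulation; the following toy example (my construction, not the paper's) pairs a large sample with a tiny mean shift:

```python
import math
import random

random.seed(3)
n = 5000  # large samples make even tiny effects "significant"

# Two groups whose distributions differ by a small mean shift.
controls = [random.gauss(0.0, 1.0) for _ in range(n)]
cases = [random.gauss(0.1, 1.0) for _ in range(n)]

def two_sample_t(a, b):
    """Welch-style two-sample t statistic."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (mb - ma) / math.sqrt(va / len(a) + vb / len(b))

t = two_sample_t(controls, cases)

# Best single-threshold classifier: predict "case" above the midpoint.
thr = (sum(controls) / n + sum(cases) / n) / 2
acc = (sum(x <= thr for x in controls) + sum(x > thr for x in cases)) / (2 * n)
print(f"t = {t:.1f} (highly significant), accuracy = {acc:.3f} (near chance)")
```

    The t statistic grows with the square root of the sample size while the Bayes accuracy is fixed by the overlap of the two distributions, which is exactly why significance does not translate into predictivity.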

  11. Potential interactions among linguistic, autonomic, and motor factors in speech.

    PubMed

    Kleinow, Jennifer; Smith, Anne

    2006-05-01

    Though anecdotal reports link certain speech disorders to increases in autonomic arousal, few studies have described the relationship between arousal and speech processes. Additionally, it is unclear how increases in arousal may interact with other cognitive-linguistic processes to affect speech motor control. In this experiment we examine potential interactions between autonomic arousal, linguistic processing, and speech motor coordination in adults and children. Autonomic responses (heart rate, finger pulse volume, tonic skin conductance, and phasic skin conductance) were recorded simultaneously with upper and lower lip movements during speech. The lip aperture variability (LA variability index) across multiple repetitions of sentences that varied in length and syntactic complexity was calculated under low- and high-arousal conditions. High arousal conditions were elicited by performance of the Stroop color word task. Children had significantly higher lip aperture variability index values across all speaking tasks, indicating more variable speech motor coordination. Increases in syntactic complexity and utterance length were associated with increases in speech motor coordination variability in both speaker groups. There was a significant effect of Stroop task, which produced increases in autonomic arousal and increased speech motor variability in both adults and children. These results provide novel evidence that high arousal levels can influence speech motor control in both adults and children. (c) 2006 Wiley Periodicals, Inc.

  12. Analysis of stimulus-related activity in rat auditory cortex using complex spectral coefficients

    PubMed Central

    Krause, Bryan M.

    2013-01-01

    The neural mechanisms of sensory responses recorded from the scalp or cortical surface remain controversial. Evoked vs. induced response components (i.e., changes in mean vs. variance) are associated with bottom-up vs. top-down processing, but trial-by-trial response variability can confound this interpretation. Phase reset of ongoing oscillations has also been postulated to contribute to sensory responses. In this article, we present evidence that responses under passive listening conditions are dominated by variable evoked response components. We measured the mean, variance, and phase of complex time-frequency coefficients of epidurally recorded responses to acoustic stimuli in rats. During the stimulus, changes in mean, variance, and phase tended to co-occur. After the stimulus, there was a small, low-frequency offset response in the mean and modest, prolonged desynchronization in the alpha band. Simulations showed that trial-by-trial variability in the mean can account for most of the variance and phase changes observed during the stimulus. This variability was state dependent, with smallest variability during periods of greatest arousal. Our data suggest that cortical responses to auditory stimuli reflect variable inputs to the cortical network. These analyses suggest that caution should be exercised when interpreting variance and phase changes in terms of top-down cortical processing. PMID:23657279
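
    The mean/variance/phase decomposition described above can be illustrated on synthetic complex coefficients; the sketch below (generic, not the authors' pipeline) computes the evoked magnitude, induced variance, and a phase-locking value across trials:

```python
import cmath
import math
import random

def trial_stats(coeffs):
    """Across-trial statistics of complex time-frequency coefficients:
    |mean| ~ evoked component, variance about the mean ~ induced component,
    and |mean unit phasor| ~ phase locking (1 = perfectly phase-locked)."""
    n = len(coeffs)
    mean = sum(coeffs) / n
    var = sum(abs(c - mean) ** 2 for c in coeffs) / n
    plv = abs(sum(c / abs(c) for c in coeffs) / n)
    return abs(mean), var, plv

random.seed(4)
# Phase-locked trials: fixed phase, small amplitude jitter only.
locked = [cmath.rect(1 + 0.1 * random.random(), 0.5) for _ in range(500)]
# Scrambled trials: same amplitude, uniformly random phase.
scrambled = [cmath.rect(1.0, random.uniform(-math.pi, math.pi))
             for _ in range(500)]

print(trial_stats(locked)[2], trial_stats(scrambled)[2])
```

    As the abstract argues, trial-by-trial variability in the evoked part inflates both the variance and the apparent phase reset, so these three numbers should be interpreted jointly rather than each being read as a distinct mechanism.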

  13. Unsupervised Analysis of the Effects of a Wastewater Treatment Plant Effluent on the Fathead Minnow Ovarian Transcriptome

    EPA Science Inventory

    Wastewater treatment plant (WWTP) effluents contain complex mixtures of chemicals, potentially including endocrine active chemicals (EACs), pharmaceuticals, and other contaminants of emerging concern (CECs). Due to the complex and variable nature of effluents, biological monitori...

  14. A Cognitive Complexity Metric Applied to Cognitive Development

    ERIC Educational Resources Information Center

    Andrews, Glenda; Halford, Graeme S.

    2002-01-01

    Two experiments tested predictions from a theory in which processing load depends on relational complexity (RC), the number of variables related in a single decision. Tasks from six domains (transitivity, hierarchical classification, class inclusion, cardinality, relative-clause sentence comprehension, and hypothesis testing) were administered to…

  15. Comparison of the cattle leukocyte receptor complex with related livestock species

    USDA-ARS?s Scientific Manuscript database

    The natural killer (NK) cell receptor gene complexes are highly variable between species, and their repetitive nature makes genomic assembly and characterization problematic. As a result, most reference genome assemblies are heavily fragmented and/or misassembled over these regions. However, new lon...

  16. Towards an Evidence Framework for Design-Based Implementation Research

    ERIC Educational Resources Information Center

    Means, Barbara; Harris, Christopher J.

    2013-01-01

    Educational interventions typically are complex combinations of human actions, organizational supports, and instructional resources that play out differently in different contexts and with different kinds of students. The complexity and variability of outcomes undermines the notion that interventions either "work" or "don't…

  17. Optimal dimensionality reduction of complex dynamics: the chess game as diffusion on a free-energy landscape.

    PubMed

    Krivov, Sergei V

    2011-07-01

    Dimensionality reduction is ubiquitous in the analysis of complex dynamics. The conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space, rather than the dynamics itself. The constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the chess game--the archetype of complex dynamics. A variable that provides complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.
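    The free-energy landscape used in this record follows the standard construction F(x) = -ln P(x) (in units of kT) from the distribution of the descriptive variable. A minimal sketch on toy data (a single harmonic basin, not chess positions; all names are illustrative):

```python
import math
import random

def free_energy_profile(samples, nbins=20):
    """Estimate a free-energy profile F(x) = -ln P(x) (in units of kT)
    from samples of a one-dimensional descriptive variable."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / nbins or 1.0
    counts = [0] * nbins
    for x in samples:
        i = min(int((x - lo) / width), nbins - 1)
        counts[i] += 1
    total = len(samples)
    centers = [lo + (i + 0.5) * width for i in range(nbins)]
    # Keep only populated bins; empty bins have undefined (infinite) F.
    return [(c, -math.log(n / total)) for c, n in zip(centers, counts) if n > 0]

# Toy data: a variable fluctuating in one basin; the estimated profile
# should be roughly quadratic with its minimum near the mean.
random.seed(1)
samples = [random.gauss(0.0, 1.0) for _ in range(20000)]
profile = free_energy_profile(samples)
x_min, f_min = min(profile, key=lambda p: p[1])
```

    Dynamics on such a profile can then be modeled as a one-dimensional random walk, which is how the winning probability is predicted in the paper.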

  18. Variable complexity online sequential extreme learning machine, with applications to streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lima, Aranildo R.; Hsieh, William W.; Cannon, Alex J.

    2017-12-01

    In situations where new data arrive continually, online learning algorithms are computationally much less costly than batch learning ones in maintaining the model up-to-date. The extreme learning machine (ELM), a single hidden layer artificial neural network with random weights in the hidden layer, is solved by linear least squares, and has an online learning version, the online sequential ELM (OSELM). As more data become available during online learning, information on the longer time scale becomes available, so ideally the model complexity should be allowed to change, but the number of hidden nodes (HN) remains fixed in OSELM. A variable complexity VC-OSELM algorithm is proposed to dynamically add or remove HN in the OSELM, allowing the model complexity to vary automatically as online learning proceeds. The performance of VC-OSELM was compared with OSELM in daily streamflow predictions at two hydrological stations in British Columbia, Canada, with VC-OSELM significantly outperforming OSELM in mean absolute error, root mean squared error and Nash-Sutcliffe efficiency at both stations.
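    The core of the ELM described here (a random, fixed hidden layer whose output weights are found by linear least squares) can be sketched in a few lines. This is a minimal batch ELM, not the authors' VC-OSELM code; the ridge term, function names, and the toy target standing in for a streamflow series are all illustrative.

```python
import math
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def elm_fit(xs, ys, n_hidden=20, ridge=1e-6, seed=0):
    """Extreme learning machine: random fixed hidden layer, output
    weights found by (ridge-regularised) linear least squares."""
    rng = random.Random(seed)
    params = [(rng.uniform(-3, 3), rng.uniform(-1, 1)) for _ in range(n_hidden)]
    H = [[math.tanh(a * x + b) for a, b in params] for x in xs]
    # Normal equations: (H^T H + ridge*I) beta = H^T y
    HtH = [[sum(H[k][i] * H[k][j] for k in range(len(xs)))
            + (ridge if i == j else 0.0)
            for j in range(n_hidden)] for i in range(n_hidden)]
    Hty = [sum(H[k][i] * ys[k] for k in range(len(xs))) for i in range(n_hidden)]
    beta = solve(HtH, Hty)
    return lambda x: sum(bt * math.tanh(a * x + b)
                         for bt, (a, b) in zip(beta, params))

# Fit a smooth one-dimensional function as a stand-in for a streamflow series.
xs = [i / 50 * 2 - 1 for i in range(51)]
ys = [math.sin(3 * x) for x in xs]
model = elm_fit(xs, ys)
err = max(abs(model(x) - y) for x, y in zip(xs, ys))
```

    OSELM replaces the one-shot least-squares solve with a recursive update as chunks of data arrive, and VC-OSELM additionally lets n_hidden grow or shrink during that process.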

  19. Quantification for complex assessment: uncertainty estimation in final year project thesis assessment

    NASA Astrophysics Data System (ADS)

    Kim, Ho Sung

    2013-12-01

    A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final year project thesis assessment including peer assessment. A guide map can be generated by the method for finding expected uncertainties prior to the assessment implementation with a given set of variables. It employs a scale for visualisation of expertise levels, derivation of which is based on quantified clarities of mental images for levels of the examiner's expertise and the examinee's expertise achieved. To identify the relevant expertise areas that depend on the complexity in assessment format, a graphical continuum model was developed. The continuum model consists of assessment task, assessment standards and criterion for the transition towards the complex assessment owing to the relativity between implicitness and explicitness and is capable of identifying areas of expertise required for scale development.

  20. Optimal dimensionality reduction of complex dynamics: The chess game as diffusion on a free-energy landscape

    NASA Astrophysics Data System (ADS)

    Krivov, Sergei V.

    2011-07-01

    Dimensionality reduction is ubiquitous in the analysis of complex dynamics. The conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space, rather than the dynamics itself. The constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the chess game—the archetype of complex dynamics. A variable that provides complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.

  1. Explicit resolutions for the complex of several Fueter operators

    NASA Astrophysics Data System (ADS)

    Bureš, Jarolim; Damiano, Alberto; Sabadini, Irene

    2007-02-01

    An analogue of the Dolbeault complex is introduced for regular functions of several quaternionic variables and studied by means of two different methods. The first one comes from algebraic analysis (for a thorough treatment see the book [F. Colombo, I. Sabadini, F. Sommen, D.C. Struppa, Analysis of Dirac systems and computational algebra, Progress in Mathematical Physics, Vol. 39, Birkhäuser, Boston, 2004]), while the other one relies on the symmetry of the equations and the methods of representation theory (see [F. Colombo, V. Souček, D.C. Struppa, Invariant resolutions for several Fueter operators, J. Geom. Phys. 56 (2006) 1175-1191; R.J. Baston, Quaternionic Complexes, J. Geom. Phys. 8 (1992) 29-52]). The comparison of the two results allows one to describe the operators appearing in the complex in an explicit form. This description leads to a duality theorem which is the generalization of the classical Martineau-Harvey theorem and which is related to hyperfunctions of several quaternionic variables.
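    For orientation, the system the complex resolves is built from the Cauchy-Fueter operator, the quaternionic analogue of the ∂̄ operator; in standard (assumed) notation, with quaternionic variables q_ℓ having real coordinates x_0^ℓ, x_1^ℓ, x_2^ℓ, x_3^ℓ, one operator acts per variable:

```latex
\overline{\partial}_{q_\ell} f
  = \frac{\partial f}{\partial x_0^{\ell}}
  + i\,\frac{\partial f}{\partial x_1^{\ell}}
  + j\,\frac{\partial f}{\partial x_2^{\ell}}
  + k\,\frac{\partial f}{\partial x_3^{\ell}},
\qquad \ell = 1, \dots, n,
```

    and f is (left) regular precisely when \(\overline{\partial}_{q_\ell} f = 0\) for every ℓ; the resolutions in the paper make explicit the compatibility operators for this overdetermined system.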

  2. Investigating the Complexity of NGC 2992 with HETG

    NASA Astrophysics Data System (ADS)

    Canizares, Claude

    2009-09-01

    NGC 2992 is a nearby (z = 0.00771) Seyfert galaxy with a variable 1.5-2 classification. Over the past 30 years, the 2-10 keV continuum flux has varied by a factor of ~20. This was accompanied by complex variability in the multi-component Fe K line emission, which may indicate violent flaring activity in the innermost regions of the accretion disk. By observing NGC 2992 with the HETG, we will obtain the best constraint to date on the FWHM of the narrow, distant-matter Fe K line emission, along with precision measurement of its centroid energy, thereby enabling more accurate modeling of the variable broad component. We will also test models of the soft excess through measurement of narrow absorption lines attributable to a warm absorber and narrow emission lines arising from photoexcitation.

  3. Cumulative cultural learning: Development and diversity

    PubMed Central

    2017-01-01

    The complexity and variability of human culture is unmatched by any other species. Humans live in culturally constructed niches filled with artifacts, skills, beliefs, and practices that have been inherited, accumulated, and modified over generations. A causal account of the complexity of human culture must explain its distinguishing characteristics: It is cumulative and highly variable within and across populations. I propose that the psychological adaptations supporting cumulative cultural transmission are universal but are sufficiently flexible to support the acquisition of highly variable behavioral repertoires. This paper describes variation in the transmission practices (teaching) and acquisition strategies (imitation) that support cumulative cultural learning in childhood. Examining flexibility and variation in caregiver socialization and children’s learning extends our understanding of evolution in living systems by providing insight into the psychological foundations of cumulative cultural transmission—the cornerstone of human cultural diversity. PMID:28739945

  4. Cumulative cultural learning: Development and diversity.

    PubMed

    Legare, Cristine H

    2017-07-24

    The complexity and variability of human culture is unmatched by any other species. Humans live in culturally constructed niches filled with artifacts, skills, beliefs, and practices that have been inherited, accumulated, and modified over generations. A causal account of the complexity of human culture must explain its distinguishing characteristics: It is cumulative and highly variable within and across populations. I propose that the psychological adaptations supporting cumulative cultural transmission are universal but are sufficiently flexible to support the acquisition of highly variable behavioral repertoires. This paper describes variation in the transmission practices (teaching) and acquisition strategies (imitation) that support cumulative cultural learning in childhood. Examining flexibility and variation in caregiver socialization and children's learning extends our understanding of evolution in living systems by providing insight into the psychological foundations of cumulative cultural transmission-the cornerstone of human cultural diversity.

  5. Effects of additional data on Bayesian clustering.

    PubMed

    Yamazaki, Keisuke

    2017-10-01

    Hierarchical probabilistic models, such as mixture models, are used for cluster analysis. These models have two types of variables: observable and latent. In cluster analysis, the latent variable is estimated, and it is expected that additional information will improve the accuracy of the estimation of the latent variable. Many proposed learning methods are able to use additional data; these include semi-supervised learning and transfer learning. However, from a statistical point of view, a complex probabilistic model that encompasses both the initial and additional data might be less accurate due to having a higher-dimensional parameter. The present paper presents a theoretical analysis of the accuracy of such a model and clarifies which factor has the greatest effect on its accuracy, the advantages of obtaining additional data, and the disadvantages of increasing the complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.
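    The setting analysed here (estimating a latent cluster variable in a mixture model) can be made concrete with a minimal EM sketch for a two-component 1-D Gaussian mixture. This is a generic illustration, not the paper's model; all names and the toy data are illustrative.

```python
import math
import random

def em_gaussian_mixture(xs, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture: the latent variable
    (component membership) is estimated via posterior responsibilities."""
    mu = [min(xs), max(xs)]
    sigma = [1.0, 1.0]
    weights = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibilities = posterior over the latent variable.
        resp = []
        for x in xs:
            w = [weights[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2) for k in (0, 1)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate parameters from the soft assignments.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                     for r, x in zip(resp, xs)) / nk) or 1e-6
            weights[k] = nk / len(xs)
    return mu, sigma, weights

random.seed(2)
xs = ([random.gauss(0, 1) for _ in range(300)]
      + [random.gauss(5, 1) for _ in range(300)])
mu, sigma, weights = em_gaussian_mixture(xs)
```

    The paper's question is what happens to the accuracy of such latent-variable estimates when the model is enlarged to absorb additional data, trading extra information against a higher-dimensional parameter.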

  6. Investigating the Interactions among Genre, Task Complexity, and Proficiency in L2 Writing: A Comprehensive Text Analysis and Study of Learner Perceptions

    ERIC Educational Resources Information Center

    Yoon, Hyung-Jo

    2017-01-01

    In this study, I explored the interactions among genre, task complexity, and L2 proficiency in learners' writing task performance. Specifically, after identifying the lack of valid operationalizations of genre and task dimensions in L2 writing research, I examined how genre functions as a task complexity variable, and how learners' perceptions and…

  7. Effects of Topography-driven Micro-climatology on Evaporation

    NASA Astrophysics Data System (ADS)

    Adams, D. D.; Boll, J.; Wagenbrenner, N. S.

    2017-12-01

    The effects of spatial-temporal variation of climatic conditions on evaporation in micro-climates are not well defined. Current spatially-based remote sensing and modeling for evaporation is limited at high resolutions and over complex topographies. We investigated the effect of topography-driven micro-climatology on evaporation supported by field measurements and modeling. Fourteen anemometers and thermometers were installed in intersecting transects over the complex topography of the Cook Agronomy Farm, Pullman, WA. WindNinja was used to create 2-D vector maps based on recorded observations for wind. Spatial analysis of vector maps using ArcGIS was performed for analysis of wind patterns and variation. Based on field measurements, wind speed and direction show consequential variability based on hill-slope location in this complex topography. Wind speed and wind direction varied up to threefold and more than 45 degrees, respectively, for a given time interval. The use of existing wind models enables prediction of wind variability over the landscape and subsequently topography-driven evaporation patterns relative to wind. The magnitude of the spatial-temporal variability of wind therefore resulted in variable evaporation rates over the landscape. These variations may contribute to uneven crop development patterns observed during the late growth stages of the agricultural crops at the study location. Use of hill-slope location indexes and appropriate methods for estimating actual evaporation support development of methodologies to better define topography-driven heterogeneity in evaporation. The cumulative effects of spatially-variable climatic factors on evaporation are important to quantify the localized water balance and inform precision farming practices.

  8. Untangling the Influences of Voluntary Running, Environmental Complexity, Social Housing and Stress on Adult Hippocampal Neurogenesis

    PubMed Central

    Grégoire, Catherine-Alexandra; Bonenfant, David; Le Nguyen, Adalie; Aumont, Anne; Fernandes, Karl J. L.

    2014-01-01

    Environmental enrichment (EE) exerts powerful effects on brain physiology, and is widely used as an experimental and therapeutic tool. Typical EE paradigms are multifactorial, incorporating elements of physical exercise, environmental complexity, social interactions and stress, however the specific contributions of these variables have not been separable using conventional housing paradigms. Here, we evaluated the impacts of these individual variables on adult hippocampal neurogenesis by using a novel “Alternating EE” paradigm. For 4 weeks, adult male CD1 mice were alternated daily between two enriched environments; by comparing groups that differed in one of their two environments, the individual and combinatorial effects of EE variables could be resolved. The Alternating EE paradigm revealed that (1) voluntary running for 3 days/week was sufficient to increase both mitotic and post-mitotic stages of hippocampal neurogenesis, confirming the central importance of exercise; (2) a complex environment (comprised of both social interactions and rotated inanimate objects) had no effect on neurogenesis itself, but enhanced depolarization-induced c-Fos expression (attributable to social interactions) and buffered stress-induced plasma corticosterone levels (attributable to inanimate objects); and (3) neither social isolation, group housing, nor chronically increased levels of plasma corticosterone had a prolonged impact on neurogenesis. Mouse strain, handling and type of running apparatus were tested and excluded as potential confounding factors. These findings provide valuable insights into the relative effects of key EE variables on adult neurogenesis, and this “Alternating EE” paradigm represents a useful tool for exploring the contributions of individual EE variables to mechanisms of neural plasticity. PMID:24465980

  9. Understanding thermal circulations and near-surface turbulence processes in a small mountain valley

    NASA Astrophysics Data System (ADS)

    Pardyjak, E.; Dupuy, F.; Durand, P.; Gunawardena, N.; Thierry, H.; Roubin, P.

    2017-12-01

    The interaction of turbulence and thermal circulations in complex terrain can be significantly different from idealized flat terrain. In particular, near-surface horizontal spatial and temporal variability of winds and thermodynamic variables can be significant even over very small spatial scales. The KASCADE (KAtabatic winds and Stability over CAdarache for Dispersion of Effluents) 2017 campaign, conducted from January through March 2017, was designed to address these issues and to ultimately improve prediction of dispersion in complex terrain, particularly during stable atmospheric conditions. We have used a relatively large number of sensors to improve our understanding of the spatial and temporal development, evolution and breakdown of topographically driven flows. KASCADE 2017 consisted of continuous observations and fourteen Intensive Observation Periods (IOPs) conducted in the Cadarache Valley located in southeastern France. The Cadarache Valley is a relatively small valley (5 km x 1 km) with modest slopes and relatively small elevation differences between the valley floor and nearby hilltops (~100 m). During winter, winds in the valley are light and stably stratified at night leading to thermal circulations as well as complex near-surface atmospheric layering. In this presentation we present results quantifying spatial variability of thermodynamic and turbulence variables as a function of different large-scale forcing conditions (e.g., quiescent conditions, strong westerly flow, and Mistral flow). In addition, we attempt to characterize highly-regular nocturnal horizontal wind meandering and associated turbulence statistics.

  10. Human Mars Mission Design - The Ultimate Systems Challenge

    NASA Technical Reports Server (NTRS)

    Connolly, John F.; Joosten, B. Kent; Drake, Bret; Hoffman, Steve; Polsgrove, Tara; Rucker, Michelle; Andrews, Alida; Williams, Nehemiah

    2017-01-01

    A human mission to Mars will occur at some time in the coming decades. When it does, it will be the end result of a complex network of interconnected design choices, systems analyses, technical optimizations, and non-technical compromises. This mission will extend the technologies, engineering design, and systems analyses to new limits, and may very well be the most complex undertaking in human history. It can be illustrated as a large menu, or as a large decision tree. Whatever the visualization tool, there are numerous design decisions required to assemble a human Mars mission, and many of these interconnect with one another. This paper examines these many decisions and further details a number of choices that are highly interwoven throughout the mission design. The large quantity of variables and their interconnectedness results in a highly complex systems challenge, and the paper illustrates how a change in one variable results in ripples (sometimes unintended) throughout many other facets of the design. The paper concludes with a discussion of some mission design variables that can be addressed first, and those that have already been addressed as a result of ongoing National Aeronautics and Space Administration (NASA) developments, or as a result of decisions outside the technical arena. It advocates the need for a 'reference design' that can be used as a point of comparison, and to illustrate the system-wide impacts as design variables change.

  11. Genetic Divergence and Chemotype Diversity in the Fusarium Head Blight Pathogen Fusarium poae.

    PubMed

    Vanheule, Adriaan; De Boevre, Marthe; Moretti, Antonio; Scauflaire, Jonathan; Munaut, Françoise; De Saeger, Sarah; Bekaert, Boris; Haesaert, Geert; Waalwijk, Cees; van der Lee, Theo; Audenaert, Kris

    2017-08-23

    Fusarium head blight is a disease caused by a complex of Fusarium species. F. poae is omnipresent throughout Europe in spite of its low virulence. In this study, we assessed a geographically diverse collection of F. poae isolates for its genetic diversity using AFLP (Amplified Fragment Length Polymorphism). Furthermore, studying the mating type locus and chromosomal insertions, we identified hallmarks of both sexual recombination and clonal spread of successful genotypes in the population. Despite the large genetic variation found, all F. poae isolates possess the nivalenol chemotype based on Tri7 sequence analysis. Nevertheless, Tri gene clusters showed two layers of genetic variability. Firstly, the Tri1 locus was highly variable with mostly synonymous mutations and mutations in introns pointing to a strong purifying selection pressure. Secondly, in a subset of isolates, the main trichothecene gene cluster was invaded by a transposable element between Tri5 and Tri6. To investigate the impact of these variations on the phenotypic chemotype, mycotoxin production was assessed on artificial medium. Complex blends of type A and type B trichothecenes were produced but neither genetic variability in the Tri genes nor variability in the genome or geography accounted for the divergence in trichothecene production. In view of its complex chemotype, it will be of utmost interest to uncover the role of trichothecenes in virulence, spread and survival of F. poae.

  12. Multi-level emulation of complex climate model responses to boundary forcing data

    NASA Astrophysics Data System (ADS)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example, uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1 was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.

  13. Health behavior change models for HIV prevention and AIDS care: practical recommendations for a multi-level approach.

    PubMed

    Kaufman, Michelle R; Cornish, Flora; Zimmerman, Rick S; Johnson, Blair T

    2014-08-15

    Despite increasing recent emphasis on the social and structural determinants of HIV-related behavior, empirical research and interventions lag behind, partly because of the complexity of social-structural approaches. This article provides a comprehensive and practical review of the diverse literature on multi-level approaches to HIV-related behavior change in the interest of contributing to the ongoing shift to more holistic theory, research, and practice. It has the following specific aims: (1) to provide a comprehensive list of relevant variables/factors related to behavior change at all points on the individual-structural spectrum, (2) to map out and compare the characteristics of important recent multi-level models, (3) to reflect on the challenges of operating with such complex theoretical tools, and (4) to identify next steps and make actionable recommendations. Using a multi-level approach implies incorporating increasing numbers of variables and increasingly context-specific mechanisms, overall producing greater intricacies. We conclude with recommendations on how best to respond to this complexity, which include: using formative research and interdisciplinary collaboration to select the most appropriate levels and variables in a given context; measuring social and institutional variables at the appropriate level to ensure meaningful assessments of multiple levels are made; and conceptualizing intervention and research with reference to theoretical models and mechanisms to facilitate transferability, sustainability, and scalability.

  14. Assessment of the Suitability of High Resolution Numerical Weather Model Outputs for Hydrological Modelling in Mountainous Cold Regions

    NASA Astrophysics Data System (ADS)

    Rasouli, K.; Pomeroy, J. W.; Hayashi, M.; Fang, X.; Gutmann, E. D.; Li, Y.

    2017-12-01

    The hydrology of mountainous cold regions has a large spatial variability that is driven both by climate variability and near-surface process variability associated with complex terrain and patterns of vegetation, soils, and hydrogeology. There is a need to downscale large-scale atmospheric circulations towards the fine scales that cold regions hydrological processes operate at to assess their spatial variability in complex terrain and quantify uncertainties by comparison to field observations. In this research, three high resolution numerical weather prediction models, namely, the Intermediate Complexity Atmosphere Research (ICAR), Weather Research and Forecasting (WRF), and Global Environmental Multiscale (GEM) models are used to represent spatial and temporal patterns of atmospheric conditions appropriate for hydrological modelling. An area covering high mountains and foothills of the Canadian Rockies was selected to assess and compare high resolution ICAR (1 km × 1 km), WRF (4 km × 4 km), and GEM (2.5 km × 2.5 km) model outputs with station-based meteorological measurements. ICAR, with its very low computational cost, was run with different initial and boundary conditions and with finer spatial resolution, which allowed an assessment of modelling uncertainty and scaling that was difficult with WRF. Results show that ICAR, when compared with WRF and GEM, performs very well in precipitation and air temperature modelling in the Canadian Rockies, while all three models show a fair performance in simulating wind and humidity fields. Representation of local-scale atmospheric dynamics leading to realistic fields of temperature and precipitation by ICAR, WRF, and GEM makes these models suitable for high resolution cold regions hydrological predictions in complex terrain, which is a key factor in estimating water security in western Canada.

  15. Between-session intra-individual variability in sustained, selective, and integrational non-linguistic attention in aphasia.

    PubMed

    Villard, Sarah; Kiran, Swathi

    2015-01-01

    A number of studies have identified impairments in one or more types/aspects of attention processing in patients with aphasia (PWA) relative to healthy controls; person-to-person variability in performance on attention tasks within the PWA group has also been noted. Studies using non-linguistic stimuli have found evidence that attention is impaired in this population even in the absence of language processing demands. An underlying impairment in non-linguistic, or domain-general, attention processing could have implications for the ability of PWA to attend during therapy sessions, which in turn could impact long-term treatment outcomes. With this in mind, this study aimed to systematically examine the effect of task complexity on reaction time (RT) during a non-linguistic attention task, in both PWA and controls. Additional goals were to assess the effect of task complexity on between-session intra-individual variability (BS-IIV) in RT and to examine inter-individual differences in BS-IIV. Eighteen PWA and five age-matched neurologically healthy controls each completed a novel computerized non-linguistic attention task measuring five types of attention on each of four different non-consecutive days. A significant effect of task complexity on both RT and BS-IIV in RT was found for the PWA group, whereas the control group showed a significant effect of task complexity on RT but not on BS-IIV in RT. Finally, in addition to these group-level findings, it was noted that different patients exhibited different patterns of BS-IIV, indicating the existence of inter-individual variability in BS-IIV within the PWA group. Results may have implications for session-to-session fluctuations in attention during language testing and therapy for PWA. Copyright © 2014 Elsevier Ltd. All rights reserved.
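    The variability measure used here (between-session intra-individual variability in RT) amounts to the spread of one participant's session means. A minimal sketch with hypothetical reaction times, assuming the four-session design described in the abstract:

```python
import statistics

def between_session_iiv(sessions):
    """Between-session intra-individual variability: the standard
    deviation of one participant's mean reaction time across sessions."""
    session_means = [statistics.mean(rts) for rts in sessions]
    return statistics.stdev(session_means)

# Hypothetical RTs (ms) for one participant over four non-consecutive days.
sessions = [
    [512, 540, 498, 530],
    [601, 575, 590, 610],
    [488, 495, 510, 502],
    [560, 545, 570, 555],
]
bs_iiv = between_session_iiv(sessions)
```

    Computed per task-complexity condition and per participant, this statistic supports the group comparisons and the inter-individual differences the study reports.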

  16. Real topological entropy versus metric entropy for birational measure-preserving transformations

    NASA Astrophysics Data System (ADS)

    Abarenkova, N.; Anglès d'Auriac, J.-Ch.; Boukraa, S.; Maillard, J.-M.

    2000-10-01

    We consider a family of birational measure-preserving transformations of two complex variables, depending on one parameter, for which simple rational expressions for the dynamical zeta function have been conjectured, together with an equality between the topological entropy and the logarithm of the Arnold complexity (divided by the number of iterations). Similar results have been obtained for the adaptation of these two concepts to dynamical systems of real variables, leading to the introduction of a “real topological entropy” and a “real Arnold complexity”. We try to compare, here, the Kolmogorov-Sinai metric entropy and this real Arnold complexity, or real topological entropy, on this particular example of a one-parameter dependent birational transformation of two variables. More precisely, we analyze, using an infinite precision calculation, the Lyapunov characteristic exponents for various values of the parameter of the birational transformation, in order to compare these results with the ones for the real Arnold complexity. We find a quite surprising result: for this very birational example, and, in fact, for a large set of birational measure-preserving mappings generated by involutions, the Lyapunov characteristic exponents seem to be equal to zero or, at least, extremely small, for all the orbits we have considered, and for all values of the parameter. Birational measure-preserving transformations, generated by involutions, could thus allow one to better understand the difference between the topological description and the probabilistic description of discrete dynamical systems. Many birational measure-preserving transformations, generated by involutions, seem to provide examples of discrete dynamical systems which can be topologically chaotic while they are metrically almost quasi-periodic. Heuristically, this can be understood as a consequence of the fact that their orbits seem to form some kind of “transcendental foliation” of the two-dimensional space of variables.
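    The Lyapunov characteristic exponent used in this record is the long-run average of ln|f'(x)| along an orbit. The paper's birational maps are not reproduced here; as a generic numerical illustration, a sketch for the logistic map, where the exponent at r = 4 is known analytically to be ln 2:

```python
import math

def lyapunov_logistic(r, x0=0.2, n_transient=1000, n_iter=100000):
    """Lyapunov characteristic exponent of the logistic map x -> r x (1 - x),
    estimated as the long-run average of ln|f'(x)| along an orbit."""
    x = x0
    for _ in range(n_transient):        # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1 - 2 * x)))   # f'(x) = r (1 - 2x)
        x = r * x * (1 - x)
    return total / n_iter

lam = lyapunov_logistic(4.0)
```

    The same orbit-averaging scheme, applied to the paper's birational maps, is what yields the near-zero exponents reported in the abstract.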

  17. Layered intrusions of the Duluth Complex, Minnesota, USA

    USGS Publications Warehouse

    Miller, J.D.; Ripley, E.M.; ,

    1996-01-01

    The Duluth Complex and associated subvolcanic intrusions comprise a large (5000 km2) intrusive complex in northeastern Minnesota that was emplaced into comagmatic volcanics during the development of the 1.1 Ga Midcontinent rift in North America. In addition to anorthositic and felsic intrusions, the Duluth Complex is composed of many individual mafic layered intrusions of tholeiitic affinity. The cumulate stratigraphies and cryptic variations of six of the better exposed and better studied intrusions are described here to demonstrate the variability in their cumulus mineral paragenesis.

  18. Soil temperature variability in complex terrain measured using fiber-optic distributed temperature sensing

    USDA-ARS?s Scientific Manuscript database

    Soil temperature (Ts) exerts critical controls on hydrologic and biogeochemical processes but magnitude and nature of Ts variability in a landscape setting are rarely documented. Fiber optic distributed temperature sensing systems (FO-DTS) potentially measure Ts at high density over a large extent. ...

  19. 7. VARIABLE-ANGLE LAUNCHER DEDICATION PLAQUE SHOWING JAMES H. JENNISON (LEFT), ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VARIABLE-ANGLE LAUNCHER DEDICATION PLAQUE SHOWING JAMES H. JENNISON (LEFT), AND W.H. SAYLOR (RIGHT), AT THE DEDICATION CEREMONY, May 7, 1948. (Original photograph in possession of Dave Willis, San Diego, California.) - Variable Angle Launcher Complex, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  20. 54. VAL COUNTERWEIGHT CAR DURING CONSTRUCTION SHOWING CAR FRAME, WHEEL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    54. VAL COUNTERWEIGHT CAR DURING CONSTRUCTION SHOWING CAR FRAME, WHEEL ASSEMBLIES AND METAL REINFORCING, December 19, 1947. (Original photograph in possession of Dave Willis, San Diego, California.) - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  1. 22. VAL, VIEW OF PROJECTILE LOADING DECK LOOKING NORTHEAST TOWARD ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    22. VAL, VIEW OF PROJECTILE LOADING DECK LOOKING NORTHEAST TOWARD TOP OF CONCRETE 'A' FRAME STRUCTURE SHOWING DRIVE CABLES, DRIVE GEAR, BOTTOM OF CAMERA TOWER AND 'CROWS NEST' CONTROL ROOM. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  2. Modeling the Components of an Economy as a Complex Adaptive System

    DTIC Science & Technology

    principles of constrained optimization and fails to see economic variables as part of an interconnected network. While tools for forecasting economic...data sets such as the stock market. This research portrays the stock market as one component of a networked system of economic variables, with the

  3. Modeling Coast Redwood Variable Retention Management Regimes

    Treesearch

    John-Pascal Berrill; Kevin O'Hara

    2007-01-01

    Variable retention is a flexible silvicultural system that provides forest managers with an alternative to clearcutting. While much of the standing volume is removed in one harvesting operation, residual stems are retained to provide structural complexity and wildlife habitat functions, or to accrue volume before removal during subsequent stand entries. The residual...

  4. How Students Circumvent Problem-Solving Strategies that Require Greater Cognitive Complexity.

    ERIC Educational Resources Information Center

    Niaz, Mansoor

    1996-01-01

    Analyzes the great diversity in problem-solving strategies used by students in solving a chemistry problem and discusses the relationship between these variables and different cognitive variables. Concludes that students try to circumvent certain problem-solving strategies by adapting flexible and stylistic innovations that render the cognitive…

  5. No Small Feat! Taking Time for Change.

    ERIC Educational Resources Information Center

    Solomon, Pearl Gold

    This book provides practical information about the complexity of school change, with an emphasis on the role of time and its impact, along with other variables, on the change process. The other interacting variables in school change include vision, history, leadership and power, the use of support and pressure, capacity building, consensual…

  6. Factors Associated with Attrition in Weight Loss Programs

    ERIC Educational Resources Information Center

    Grave, Riccardo Dalle; Suppini, Alessandro; Calugi, Simona; Marchesini, Giulio

    2006-01-01

    Attrition in weight loss programs is a complex process, influenced by patients' pretreatment characteristics and treatment variables, but available data are contradictory. Only a few variables have been confirmed by more than one study as relevant risk factors, but recently new data of clinical utility emerged from "real world" large observational…

  7. Transcriptome sequencing of diverse peanut (arachis) wild species and the cultivated species reveals a wealth of untapped genetic variability

    USDA-ARS?s Scientific Manuscript database

    Next generation sequencing technologies and improved bioinformatics methods have provided opportunities to study sequence variability in complex polyploid transcriptomes. In this study, we used a diverse panel of twenty-two Arachis accessions representing seven Arachis hypogaea market classes, A-, B...

  8. Specifying and Refining a Complex Measurement Model.

    ERIC Educational Resources Information Center

    Levy, Roy; Mislevy, Robert J.

    This paper aims to describe a Bayesian approach to modeling and estimating cognitive models both in terms of statistical machinery and actual instrument development. Such a method taps the knowledge of experts to provide initial estimates for the probabilistic relationships among the variables in a multivariate latent variable model and refines…

  9. Circuit variability interacts with excitatory-inhibitory diversity of interneurons to regulate network encoding capacity.

    PubMed

    Tsai, Kuo-Ting; Hu, Chin-Kun; Li, Kuan-Wei; Hwang, Wen-Liang; Chou, Ya-Hui

    2018-05-23

    Local interneurons (LNs) in the Drosophila olfactory system exhibit neuronal diversity and variability, yet it is still unknown how these features impact information encoding capacity and reliability in a complex LN network. We employed two strategies to construct a diverse excitatory-inhibitory neural network beginning with a ring network structure and then introduced distinct types of inhibitory interneurons and circuit variability to the simulated network. The continuity of activity within the node ensemble (oscillation pattern) was used as a readout to describe the temporal dynamics of network activity. We found that inhibitory interneurons enhance the encoding capacity by protecting the network from extremely short activation periods when the network wiring complexity is very high. In addition, distinct types of interneurons have differential effects on encoding capacity and reliability. Circuit variability may enhance the encoding reliability, with or without compromising encoding capacity. Therefore, we have described how circuit variability of interneurons may interact with excitatory-inhibitory diversity to enhance the encoding capacity and distinguishability of neural networks. In this work, we evaluate the effects of different types and degrees of connection diversity on a ring model, which may simulate interneuron networks in the Drosophila olfactory system or other biological systems.

  10. Multi-region statistical shape model for cochlear implantation

    NASA Astrophysics Data System (ADS)

    Romera, Jordi; Kjer, H. Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel A.

    2016-03-01

    Statistical shape models are commonly used to analyze the variability between similar anatomical structures, and their use is established as a tool for analysis and segmentation of medical images. However, using a global model to capture the variability of complex structures is not enough to achieve the best results. The complexity of a proper global model increases even more when the amount of data available is limited to a small number of datasets. Typically, the anatomical variability between structures is associated with the variability of their physiological regions. In this paper, a complete pipeline is proposed for building a multi-region statistical shape model to study the entire variability from locally identified physiological regions of the inner ear. The proposed model, which is based on an extension of the Point Distribution Model (PDM), is built for a training set of 17 high-resolution images (24.5 μm voxels) of the inner ear. The model is evaluated according to its generalization ability and specificity. The results are compared with those of a global model built directly using the standard PDM approach. The evaluation results suggest that better accuracy can be achieved using a regional modeling of the inner ear.
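    The standard Point Distribution Model that the multi-region model extends is, at its core, PCA over aligned landmark vectors: a mean shape plus a small number of variation modes. A minimal sketch of a generic PCA-based PDM (not the authors' multi-region extension; the function names and the 95% variance cutoff are illustrative):

```python
import numpy as np

def build_pdm(shapes, var_keep=0.95):
    """Point Distribution Model: PCA over pre-aligned shape vectors.

    shapes: (n_shapes, d) array of flattened landmark coordinates.
    Returns the mean shape, the retained modes, and their variances."""
    X = np.asarray(shapes, float)
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S**2 / (len(X) - 1)                     # per-mode variance
    k = np.searchsorted(np.cumsum(var) / var.sum(), var_keep) + 1
    return mean, Vt[:k], var[:k]

def synthesize(mean, modes, b):
    """Generate a shape from mode coefficients b."""
    return mean + b @ modes
```

    Projecting a new shape onto the modes (`b = modes @ (shape - mean)`) gives its coordinates in the model; the multi-region variant of the paper builds such a model per physiological region and combines them.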

  11. The treatment of parental height as a biological factor in studies of birth weight and childhood growth

    PubMed Central

    Spencer, N; Logan, S

    2002-01-01

    Parental height is frequently treated as a biological variable in studies of birth weight and childhood growth. Elimination of social variables from multivariate models including parental height as a biological variable leads researchers to conclude that social factors have no independent effect on the outcome. This paper challenges the treatment of parental height as a biological variable, drawing on extensive evidence for the determination of adult height through a complex interaction of genetic and social factors. The paper firstly seeks to establish the importance of social factors in the determination of height. The methodological problems associated with treatment of parental height as a purely biological variable are then discussed, illustrated by data from published studies and by analysis of data from the 1958 National Childhood Development Study (NCDS). The paper concludes that a framework for studying pathways to pregnancy and childhood outcomes needs to take account of the complexity of the relation between genetic and social factors and be able to account for the effects of multiple risk factors acting cumulatively across time and across generations. Illustrations of these approaches are given using NCDS data. PMID:12193422

  12. Variable forgetting factor mechanisms for diffusion recursive least squares algorithm in sensor networks

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.

    2017-12-01

    In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses of the mean and mean-square performance of the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with a fixed forgetting factor when applied to scenarios of distributed parameter and spectrum estimation. The simulation results also demonstrate a good match with our analytical expressions.
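    The core idea, an RLS update whose forgetting factor is driven by the error signal, can be sketched for a single (non-diffusion) node. This is a generic error-driven heuristic (large error shortens the memory, small error lengthens it), not the paper's specific VFF rule or its diffusion combination step; `lam_min`, `lam_max`, and `gamma` are illustrative tuning constants.

```python
import numpy as np

def vff_rls(H, d, lam_min=0.90, lam_max=0.999, gamma=10.0):
    """RLS with an error-driven variable forgetting factor (heuristic).

    H: (n, p) regressor rows; d: (n,) desired signal.
    Returns the final parameter estimate w."""
    n, p = H.shape
    w = np.zeros(p)
    P = 1e3 * np.eye(p)                      # inverse correlation matrix
    for i in range(n):
        h = H[i]
        e = d[i] - h @ w                     # a priori error
        # large |e| -> lam near lam_min (fast tracking);
        # small |e| -> lam near lam_max (long memory)
        lam = lam_min + (lam_max - lam_min) * np.exp(-gamma * e * e)
        k = P @ h / (lam + h @ P @ h)        # gain vector
        w = w + k * e
        P = (P - np.outer(k, h @ P)) / lam
    return w
```

    In the diffusion setting of the paper, each node would additionally combine its estimate with those of its neighbors after every update.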

  13. Design and performance of optimal detectors for guided wave structural health monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dib, G.; Udpa, L.

    2016-01-01

    Ultrasonic guided wave measurements in a long-term structural health monitoring system are affected by measurement noise, environmental conditions, and transducer aging and malfunction. This results in measurement variability which affects detection performance, especially in complex structures where baseline data comparison is required. This paper derives the optimal detector structure, within the framework of detection theory, where a guided wave signal at the sensor is represented by a single feature value that can be compared with a threshold. Three different types of detectors are derived depending on the underlying structure's complexity: (i) simple structures where defect reflections can be identified without the need for baseline data; (ii) simple structures that require baseline data due to overlap of defect scatter with scatter from structural features; (iii) complex structures with dense structural features that require baseline data. The detectors are derived by modeling the effects of variabilities and uncertainties as random processes. Analytical solutions for the performance of the detectors, in terms of the probability of detection and false alarm, are derived. A finite element model is used to generate guided wave signals, and the performance results of a Monte Carlo simulation are compared with the theoretical performance. Initial results demonstrate that the problems of signal complexity and environmental variability can in fact be exploited to improve detection performance.
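    For the simplest case the abstract describes, a scalar feature compared with a threshold, the classical detection-theoretic performance expressions (known signal in white Gaussian noise) can be written down directly. A generic textbook sketch, not the paper's guided-wave-specific detectors:

```python
import math
from statistics import NormalDist

_nd = NormalDist()

def detection_probability(snr_db, pfa):
    """Pd of a matched-filter detector at a given false-alarm rate.

    Feature under H0 ~ N(0, 1); under H1 ~ N(d, 1), where the
    deflection coefficient d is set by the SNR. The threshold is
    chosen to meet the requested Pfa."""
    d = math.sqrt(10.0 ** (snr_db / 10.0))   # deflection coefficient
    tau = _nd.inv_cdf(1.0 - pfa)             # threshold in noise-sigma units
    return 1.0 - _nd.cdf(tau - d)
```

    Sweeping `pfa` at fixed SNR traces the ROC curve; the baseline-dependent detectors of the paper reduce to this form once measurement variability is folded into the feature's distribution under each hypothesis.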

  14. The theory and method of variable frequency directional seismic wave under the complex geologic conditions

    NASA Astrophysics Data System (ADS)

    Jiang, T.; Yue, Y.

    2017-12-01

    It is well known that mono-frequency directional seismic wave technology can concentrate seismic waves into a beam. However, little work has been done on the method and effectiveness of variable frequency directional seismic waves under complex geological conditions. We studied variable frequency directional wave theory in several respects. First, we studied the relation between the directional parameters and the direction of the main beam. Second, we analyzed the parameters that significantly affect the width of the main beam, such as vibrator spacing, wavelet dominant frequency, and number of vibrators. In addition, we studied the characteristics of variable frequency directional seismic waves in typical velocity models. To examine the propagation characteristics of directional seismic waves, we designed appropriate parameters according to the character of the directional parameters, so as to enhance the energy in the main beam direction. The directional seismic wave was further studied from the viewpoint of the power spectrum. The results indicate that the energy intensity in the main beam direction increased 2 to 6 times for a multi-ore-body velocity model, showing that variable frequency directional seismic technology provides an effective way to strengthen target signals under complex geological conditions. For a concave interface model, we introduced a complicated directional seismic technique which supports multiple main beams to obtain high-quality data. Finally, we applied the 9-element variable frequency directional seismic wave technology to raw data acquired in an oil-shale exploration area. The results show that the depth of exploration increased 4 times with the directional seismic wave method. Based on the above analysis, we conclude that variable frequency directional seismic wave technology can improve target signals under different geologic conditions and increase exploration depth at little cost. Because hydraulic vibrators are inconvenient in complicated surface areas, we suggest the combination of a high-frequency portable vibrator and the variable frequency directional seismic wave method as an alternative technology to increase the depth of exploration.

  15. Deficits of long-term memory in ecstasy users are related to cognitive complexity of the task.

    PubMed

    Brown, John; McKone, Elinor; Ward, Jeff

    2010-03-01

    Despite animal evidence that methylenedioxymethamphetamine (ecstasy) causes lasting damage in brain regions related to long-term memory, results regarding human memory performance have been variable. This variability may reflect the cognitive complexity of the memory tasks. However, previous studies have tested only a limited range of cognitive complexity. Furthermore, comparisons across different studies are made difficult by regional variations in ecstasy composition and patterns of use. The objective of this study is to evaluate ecstasy-related deficits in human verbal memory over a wide range of cognitive complexity using subjects drawn from a single geographical population. Ecstasy users were compared to non-drug using controls on verbal tasks with low cognitive complexity (stem completion), moderate cognitive complexity (stem-cued recall and word list learning) and high cognitive complexity (California Verbal Learning Test, Verbal Paired Associates and a novel Verbal Triplet Associates test). Where significant differences were found, both groups were also compared to cannabis users. More cognitively complex memory tasks were associated with clearer ecstasy-related deficits than low complexity tasks. In the most cognitively demanding task, ecstasy-related deficits remained even after multiple learning opportunities, whereas the performance of cannabis users approached that of non-drug using controls. Ecstasy users also had weaker deliberate strategy use than both non-drug and cannabis controls. Results were consistent with the proposal that ecstasy-related memory deficits are more reliable on tasks with greater cognitive complexity. This could arise either because such tasks require a greater contribution from the frontal lobe or because they require greater interaction between multiple brain regions.

  16. Heart Rate Dynamics after Combined Strength and Endurance Training in Middle-Aged Women: Heterogeneity of Responses

    PubMed Central

    Goldberger, Ary L.; Tulppo, Mikko P.; Laaksonen, David E.; Nyman, Kai; Keskitalo, Marko; Häkkinen, Arja; Häkkinen, Keijo

    2013-01-01

    The loss of complexity in physiological systems may be a dynamical biomarker of aging and disease. In this study the effects of combined strength and endurance training, compared with those of endurance training or strength training alone, on heart rate (HR) complexity and traditional HR variability indices were examined in middle-aged women. Ninety previously untrained female volunteers between the ages of 40 and 65 years completed a 21-week progressive training period of either strength training, endurance training, or their combination, or served as controls. Continuous HR time series were obtained during supine rest and submaximal steady-state exercise. The complexity of HR dynamics was assessed using multiscale entropy analysis. In addition, standard time and frequency domain measures were also computed. Endurance training led to increases in HR complexity and selected time and frequency domain measures of HR variability (P<0.01) when measured during exercise. Combined strength and endurance training or strength training alone did not produce significant changes in HR dynamics. Inter-subject heterogeneity of responses was particularly noticeable in the combined training group. At supine rest, no training-induced changes in HR parameters were observed in any of the groups. The present findings emphasize the potential utility of endurance training in increasing the complex variability of HR in middle-aged women. Further studies are needed to explore the combined endurance and strength training adaptations and possible gender and age related factors, as well as other mechanisms, that may mediate the effects of different training regimens on HR dynamics. PMID:24013586
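    Multiscale entropy analysis, the complexity measure used here, coarse-grains the series at increasing scales and computes sample entropy at each scale. A minimal sketch of the standard algorithm (tolerance fixed at 20% of the original series' standard deviation, as is conventional; not the authors' exact implementation):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn: negative log of the conditional probability that sequences
    matching for m points (Chebyshev distance <= r) also match for m+1."""
    x = np.asarray(x, float)
    n = len(x)

    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templ) - 1):        # exclude self-matches
            dist = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale`."""
    n = len(x) // scale
    return np.asarray(x[: n * scale], float).reshape(n, scale).mean(axis=1)

def multiscale_entropy(x, max_scale=5, m=2):
    r = 0.2 * np.std(x)                        # tolerance fixed from scale 1
    return [sample_entropy(coarse_grain(x, s), m, r)
            for s in range(1, max_scale + 1)]
```

    For uncorrelated noise, entropy falls with scale; for complex physiological signals it stays high or rises across scales, which is the distinction the multiscale profile is meant to capture.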

  17. Continuous-variable quantum homomorphic signature

    NASA Astrophysics Data System (ADS)

    Li, Ke; Shang, Tao; Liu, Jian-wei

    2017-10-01

    Quantum cryptography is believed to be unconditionally secure because its security is ensured by physical laws rather than computational complexity. According to their spectral characteristics, quantum information carriers can be classified into two categories, namely discrete variables and continuous variables. Continuous-variable quantum protocols have gained much attention for their ability to transmit more information at lower cost. To verify the identities of different data sources in a quantum network, we propose a continuous-variable quantum homomorphic signature scheme. It is based on continuous-variable entanglement swapping and provides additive and subtractive homomorphism. Security analysis shows the proposed scheme is secure against replay, forgery, and repudiation. Even under nonideal conditions, it supports effective verification within a certain verification threshold.

  18. A new fractional operator of variable order: Application in the description of anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Yang, Xiao-Jun; Machado, J. A. Tenreiro

    2017-09-01

    In this paper, a new fractional operator of variable order, based on a monotonically increasing function, is proposed in the sense of the Caputo type. Its properties in terms of the Laplace and Fourier transforms are analyzed, and the results for anomalous diffusion equations of variable order are discussed. The new formulation is efficient in modeling a class of concentrations in complex transport processes.
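    For orientation, the classical variable-order Caputo derivative, a standard form in the literature and the starting point that such operators generalize (it is not the paper's new operator, which further involves a monotonically increasing function), reads:

```latex
D^{\alpha(t)}_{0+} f(t) \;=\; \frac{1}{\Gamma\!\bigl(1-\alpha(t)\bigr)}
\int_{0}^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha(t)}}\, d\tau,
\qquad 0 < \alpha(t) < 1 .
```

    Setting \(\alpha(t)\) constant recovers the ordinary Caputo derivative, which is the consistency check any variable-order generalization is expected to satisfy.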

  19. Hydrothermal synthesis, crystal structure, luminescent and magnetic properties of a new mononuclear GdIII coordination complex

    NASA Astrophysics Data System (ADS)

    Coban, Mustafa Burak

    2018-06-01

    A new GdIII coordination complex, {[Gd(2-stp)2(H2O)6]·2(4,4'-bipy)·4(H2O)}, complex 1 (2-stp = 2-sulfoterephthalate anion and 4,4'-bipy = 4,4'-bipyridine), has been synthesized by a hydrothermal method and characterized by elemental analysis, solid-state UV-Vis and FT-IR spectroscopy, single-crystal X-ray diffraction, solid-state photoluminescence, and variable-temperature magnetic measurements. The crystal structure determination shows that the GdIII ions are eight-coordinate and adopt a distorted square-antiprismatic geometry. Molecules interacting through intra- and intermolecular (O-H⋯O, O-H⋯N) hydrogen bonds in complex 1 give rise to a 3D hydrogen-bonded structure, and the discrete lattice 4,4'-bipy molecules occupy the channels of the 3D structure. π-π stacking interactions also exist between 4,4'-bipy-4,4'-bipy and 4,4'-bipy-2-stp rings in the 3D structure. Additionally, the solid-state photoluminescence properties of complex 1 at room temperature have been investigated. Under the excitation of UV light (at 349 nm), complex 1 exhibited green emission (at 505 nm) of the GdIII ion in the visible region. Furthermore, variable-temperature magnetic susceptibility and isothermal magnetization as a function of external magnetic field studies reveal that complex 1 displays a possible antiferromagnetic interaction.

  20. Factors Associated with the Participation of Children with Complex Communication Needs

    ERIC Educational Resources Information Center

    Clarke, M. T.; Newton, C.; Griffiths, T.; Price, K.; Lysley, A.; Petrides, K. V.

    2011-01-01

    The aim of this study was to conduct a preliminary analysis of relations between child and environmental variables, including factors related to communication aid provision, and participation in informal everyday activities in a sample of children with complex communication needs. Ninety-seven caregivers of children provided with communication…

  1. Bovine coronavirus antibody titers at weaning negatively correlate with incidence of bovine respiratory disease in the feed yard

    USDA-ARS?s Scientific Manuscript database

    Bovine respiratory disease complex (BRDC) is a multifactorial disease caused by complex interactions among viral and bacterial pathogens, stressful management practices and host genetic variability. Although vaccines and antibiotic treatments are readily available to prevent and treat infection caus...

  2. Household Work Complexity, Intellectual Functioning, and Self-Esteem in Men and Women

    ERIC Educational Resources Information Center

    Caplan, Leslie J.; Schooler, Carmi

    2006-01-01

    Using data from a U.S. longitudinal investigation of psychological effects of occupational conditions (a project of the National Institute of Mental Health's unit on Socioenvironmental Studies), we examined the relationship between the complexity of household work and 2 psychological variables: intellectual flexibility and self-esteem.…

  3. Utility of computer simulations in landscape genetics

    Treesearch

    Bryan K. Epperson; Brad H. McRae; Kim Scribner; Samuel A. Cushman; Michael S. Rosenberg; Marie-Josee Fortin; Patrick M. A. James; Melanie Murphy; Stephanie Manel; Pierre Legendre; Mark R. T. Dale

    2010-01-01

    Population genetics theory is primarily based on mathematical models in which spatial complexity and temporal variability are largely ignored. In contrast, the field of landscape genetics expressly focuses on how population genetic processes are affected by complex spatial and temporal environmental heterogeneity. It is spatially explicit and relates patterns to...

  4. Complexity as a Reflection of the Dimensionality of a Task.

    ERIC Educational Resources Information Center

    Spilsbury, Georgina

    1992-01-01

    The hypothesis that a task that increases in complexity (increasing its correlation with a central measure of intelligence) does so by increasing its dimensionality by tapping individual differences or another variable was supported by findings from 46 adults aged 20-70 years performing a mental counting task. (SLD)

  5. When Time Makes a Difference: Addressing Ergodicity and Complexity in Education

    ERIC Educational Resources Information Center

    Koopmans, Matthijs

    2015-01-01

    The detection of complexity in behavioral outcomes often requires an estimation of their variability over a prolonged time spectrum to assess processes of stability and transformation. Conventional scholarship typically relies on time-independent measures, "snapshots", to analyze those outcomes, assuming that group means and their…

  6. Trispyrazolylborate Complexes: An Advanced Synthesis Experiment Using Paramagnetic NMR, Variable-Temperature NMR, and EPR Spectroscopies

    ERIC Educational Resources Information Center

    Abell, Timothy N.; McCarrick, Robert M.; Bretz, Stacey Lowery; Tierney, David L.

    2017-01-01

    A structured inquiry experiment for inorganic synthesis has been developed to introduce undergraduate students to advanced spectroscopic techniques including paramagnetic nuclear magnetic resonance and electron paramagnetic resonance. Students synthesize multiple complexes with unknown first row transition metals and identify the unknown metals by…

  7. Environmental Uncertainty and Communication Network Complexity: A Cross-System, Cross-Cultural Test.

    ERIC Educational Resources Information Center

    Danowski, James

    An infographic model is proposed to account for the operation of systems within their information environments. Infographics is a communication paradigm used to indicate the clustering of information processing variables in communication systems. Four propositions concerning environmental uncertainty and internal communication network complexity,…

  8. Teacher Enthusiasm: Reviewing and Redefining a Complex Construct

    ERIC Educational Resources Information Center

    Keller, Melanie M.; Hoy, Anita Woolfolk; Goetz, Thomas; Frenzel, Anne C.

    2016-01-01

    The last review on teacher enthusiasm was 45 years ago, and teacher enthusiasm remains a compelling yet complex variable in the educational context. Since Rosenshine's ("School Review" 78:499-514, 1970) review, the conceptualizations, definitions, methodology, and results have only become more scattered, and several related constructs…

  9. A Simple View of Linguistic Complexity

    ERIC Educational Resources Information Center

    Pallotti, Gabriele

    2015-01-01

    Although a growing number of second language acquisition (SLA) studies take linguistic complexity as a dependent variable, the term is still poorly defined and often used with different meanings, thus posing serious problems for research synthesis and knowledge accumulation. This article proposes a simple, coherent view of the construct, which is…

  10. Ruminant Rhombencephalitis-Associated Listeria monocytogenes Alleles Linked to a Multilocus Variable-Number Tandem-Repeat Analysis Complex

    PubMed Central

    Balandyté, Lina; Brodard, Isabelle; Frey, Joachim; Oevermann, Anna; Abril, Carlos

    2011-01-01

    Listeria monocytogenes is among the most important food-borne pathogens and is well adapted to persist in the environment. To gain insight into the genetic relatedness and potential virulence of L. monocytogenes strains causing central nervous system (CNS) infections, we used multilocus variable-number tandem-repeat analysis (MLVA) to subtype 183 L. monocytogenes isolates, most from ruminant rhombencephalitis and some from human patients, food, and the environment. Allelic-profile-based comparisons grouped L. monocytogenes strains mainly into three clonal complexes and linked single-locus variants (SLVs). Clonal complex A essentially consisted of isolates from human and ruminant brain samples. All but one rhombencephalitis isolate from cattle were located in clonal complex A. In contrast, food and environmental isolates mainly clustered into clonal complex C, and none was classified as clonal complex A. Isolates of the two main clonal complexes (A and C) obtained by MLVA were analyzed by PCR for the presence of 11 virulence-associated genes (prfA, actA, inlA, inlB, inlC, inlD, inlE, inlF, inlG, inlJ, and inlC2H). Virulence gene analysis revealed significant differences in the actA, inlF, inlG, and inlJ allelic profiles between clinical isolates (complex A) and nonclinical isolates (complex C). The association of particular alleles of actA, inlF, and newly described alleles of inlJ with isolates from CNS infections (particularly rhombencephalitis) suggests that these virulence genes participate in neurovirulence of L. monocytogenes. The overall absence of inlG in clinical complex A and its presence in complex C isolates suggests that the InlG protein is more relevant for the survival of L. monocytogenes in the environment. PMID:21984240

  11. Nature and Nurture of Human Pain

    PubMed Central

    2013-01-01

    Humans are very different when it comes to pain. Some get painful piercings and tattoos; others cannot stand even a flu shot. Interindividual variability is one of the main characteristics of human pain on every level, including the processing of nociceptive impulses at the periphery, modification of the pain signal in the central nervous system, perception of pain, and response to analgesic strategies. As for many other complex behaviors, the sources of this variability come from both nurture (environment) and nature (genes). Here, I will discuss how these factors contribute to human pain separately and via interplay, and how epigenetic mechanisms add to the complexity of their effects. PMID:24278778

  12. Complexity analyses show two distinct types of nonlinear dynamics in short heart period variability recordings

    PubMed Central

    Porta, Alberto; Bari, Vlasta; Marchi, Andrea; De Maria, Beatrice; Cysarz, Dirk; Van Leeuwen, Peter; Takahashi, Anielle C. M.; Catai, Aparecida M.; Gnecchi-Ruscone, Tomaso

    2015-01-01

    Two diverse complexity metrics quantifying time irreversibility and local prediction, in connection with a surrogate data approach, were utilized to detect nonlinear dynamics in short heart period (HP) variability series recorded in fetuses, as a function of the gestational period, and in healthy humans, as a function of the magnitude of the orthostatic challenge. The metrics indicated the presence of two distinct types of nonlinear HP dynamics characterized by diverse ranges of time scales. These findings stress the need to render more specific the analysis of nonlinear components of HP dynamics by accounting for different temporal scales. PMID:25806002

  13. The slug parasitic nematode Phasmarhabditis hermaphrodita associates with complex and variable bacterial assemblages that do not affect its virulence.

    PubMed

    Rae, Robbie G; Tourna, Maria; Wilson, Michael J

    2010-07-01

    Phasmarhabditis hermaphrodita is a nematode parasite of slugs that is commercially reared in monoxenic culture with the bacterium Moraxella osloensis and sold as a biological molluscicide. However, its bacterial associations when reared in vivo in slugs are unknown. We show that when reared in vivo in slugs, P. hermaphrodita does not retain M. osloensis and associates with complex and variable bacterial assemblages that do not influence its virulence. This is in marked contrast to the entomopathogenic nematodes that form highly specific mutualistic associations with Enterobacteriaceae that are specifically retained during in vivo growth. (c) 2010 Elsevier Inc. All rights reserved.

  14. Modeling and enhanced sampling of molecular systems with smooth and nonlinear data-driven collective variables

    NASA Astrophysics Data System (ADS)

    Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino

    2013-12-01

    Collective variables (CVs) are low-dimensional representations of the state of a complex system, which help us rationalize molecular conformations and sample free energy landscapes with molecular dynamics simulations. Given their importance, there is need for systematic methods that effectively identify CVs for complex systems. In recent years, nonlinear manifold learning has shown its ability to automatically characterize molecular collective behavior. Unfortunately, these methods fail to provide a differentiable function mapping high-dimensional configurations to their low-dimensional representation, as required in enhanced sampling methods. We introduce a methodology that, starting from an ensemble representative of molecular flexibility, builds smooth and nonlinear data-driven collective variables (SandCV) from the output of nonlinear manifold learning algorithms. We demonstrate the method with a standard benchmark molecule, alanine dipeptide, and show how it can be non-intrusively combined with off-the-shelf enhanced sampling methods, here the adaptive biasing force method. We illustrate how enhanced sampling simulations with SandCV can explore regions that were poorly sampled in the original molecular ensemble. We further explore the transferability of SandCV from a simpler system, alanine dipeptide in vacuum, to a more complex system, alanine dipeptide in explicit water.

  15. Modeling and enhanced sampling of molecular systems with smooth and nonlinear data-driven collective variables.

    PubMed

    Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino

    2013-12-07

    Collective variables (CVs) are low-dimensional representations of the state of a complex system, which help us rationalize molecular conformations and sample free energy landscapes with molecular dynamics simulations. Given their importance, there is need for systematic methods that effectively identify CVs for complex systems. In recent years, nonlinear manifold learning has shown its ability to automatically characterize molecular collective behavior. Unfortunately, these methods fail to provide a differentiable function mapping high-dimensional configurations to their low-dimensional representation, as required in enhanced sampling methods. We introduce a methodology that, starting from an ensemble representative of molecular flexibility, builds smooth and nonlinear data-driven collective variables (SandCV) from the output of nonlinear manifold learning algorithms. We demonstrate the method with a standard benchmark molecule, alanine dipeptide, and show how it can be non-intrusively combined with off-the-shelf enhanced sampling methods, here the adaptive biasing force method. We illustrate how enhanced sampling simulations with SandCV can explore regions that were poorly sampled in the original molecular ensemble. We further explore the transferability of SandCV from a simpler system, alanine dipeptide in vacuum, to a more complex system, alanine dipeptide in explicit water.

  16. Chemometrics Methods for Specificity, Authenticity and Traceability Analysis of Olive Oils: Principles, Classifications and Applications.

    PubMed

    Messai, Habib; Farman, Muhammad; Sarraj-Laabidi, Abir; Hammami-Semmar, Asma; Semmar, Nabil

    2016-11-17

Olive oils (OOs) show high chemical variability due to several genetic, environmental and anthropic factors. Genetic and environmental factors are responsible for natural compositions and polymorphic diversification resulting in different varietal patterns and phenotypes. Anthropic factors, however, account for the preparation of different blends, leading to normative, labelled or adulterated commercial products. Control of complex OO samples requires their (i) characterization by specific markers; (ii) authentication by fingerprint patterns; and (iii) monitoring by traceability analysis. These quality control and management aims require the use of several multivariate statistical tools: specificity highlighting requires ordination methods; authentication checking calls for classification and pattern recognition methods; traceability analysis implies the use of network-based approaches able to separate or extract mixed information and memorized signals from complex matrices. This chapter presents a review of different chemometrics methods applied to the control of OO variability based on measured metabolic and physical-chemical characteristics. The different chemometrics methods are illustrated by study cases on monovarietal and blended OOs originating from different countries. Chemometrics tools offer multiple ways for quantitative evaluation and qualitative control of the complex chemical variability of OO in relation to several intrinsic and extrinsic factors.

  17. Secondary task for full flight simulation incorporating tasks that commonly cause pilot error: Time estimation

    NASA Technical Reports Server (NTRS)

    Rosch, E.

    1975-01-01

The task of time estimation, an activity occasionally performed by pilots during actual flight, was investigated with the objective of providing human factors investigators with an unobtrusive and minimally loading additional task that is sensitive to differences in flying conditions and flight instrumentation associated with the main task of piloting an aircraft simulator. Previous research indicated that the duration and consistency of time estimates are associated with the cognitive, perceptual, and motor loads imposed by concurrent simple tasks. The relationships between the length and variability of time estimates and concurrent task variables under a more complex situation involving simulated flight were clarified. The wrap-around effect with respect to baseline duration, a consequence of mode switching at intermediate levels of concurrent task distraction, should contribute substantially to estimate variability and have a complex effect on the shape of the resulting distribution of estimates.

  18. Effects of complex aural stimuli on mental performance.

    PubMed

    Vij, Mohit; Aghazadeh, Fereydoun; Ray, Thomas G; Hatipkarasulu, Selen

    2003-06-01

The objective of this study is to investigate the effect of complex aural stimuli on mental performance. A series of experiments was designed to obtain data for two different analyses. The first analysis is a "Stimulus" versus "No-stimulus" comparison for each of the four dependent variables, i.e. quantitative ability, reasoning ability, spatial ability and memory of an individual, comparing the control treatment with the rest of the treatments. The second analysis is a multivariate analysis of variance for component-level main effects and interactions. The two component factors are tempo of the complex aural stimuli and sound volume level, each administered at three discrete levels for all four dependent variables. Ten experiments were conducted on eleven subjects. It was found that complex aural stimuli influence the quantitative and spatial aspects of the mind, while reasoning ability was unaffected by the stimuli. Although memory showed a trend toward being worse in the presence of complex aural stimuli, the effect was not statistically significant. Variation in tempo and sound volume level of an aural stimulus did not significantly affect the mental performance of an individual. The results of these experiments can be effectively used in designing work environments.

  19. Effects of organizational complexity and resources on construction site risk.

    PubMed

    Forteza, Francisco J; Carretero-Gómez, Jose M; Sesé, Albert

    2017-09-01

Our research is aimed at studying the relationship between risk level and organizational complexity and resources on construction sites. Our general hypothesis is that site complexity increases risk, whereas greater organizational resources decrease risk. A Structural Equation Model (SEM) approach was adopted to validate our theoretical model. To develop our study, 957 building sites in Spain were visited and assessed in 2003-2009. All needed data were obtained using a specific tool developed by the authors to assess site risk, structure and resources (Construction Sites Risk Assessment Tool, or CONSRAT). This tool operationalizes the variables to fit our model, specifically via a site risk index (SRI) and 10 organizational variables. Our random sample is composed largely of small building sites with generally high levels of risk, moderate complexity, and low resources on site. The model obtained adequate fit, and results showed empirical evidence that the factors of complexity and resources can be considered predictors of site risk level. Consequently, these results can help construction companies, managers, and regulators identify which organizational aspects should be improved to prevent risks on sites and, consequently, accidents. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.

  20. Dirac Operator in Several Variables and Combinatorial Identities

    NASA Astrophysics Data System (ADS)

    Damiano, Alberto; Souček, Vladimír

    2007-09-01

The Dolbeault sequence is a fundamental tool for many problems in the function theory of several complex variables. Much attention has been paid in recent decades to its analogue in the function theory of several Clifford variables. The first operator in this resolution is the Dirac operator in several variables. The complete description is known in dimension 4 (i.e., in the case of quaternionic variables, see [1, 6, 4]). Much less is known in higher dimensions. The case of three variables was described completely (see [18]). The full description of the complex for all dimensions is not known at present. Even the case of the stable range (i.e., when the number of variables is less than or equal to half the dimension) is still not fully understood. There are two different approaches to the stable range case, one based on classical algebraic geometry (the Hilbert syzygy theory, see [8]), the other on representation theory (differential invariants in certain parabolic geometries, see [14, 20]). Differential operators in these resolutions act on vector-valued functions. Such spaces of functions are quite complicated in general, and the first problem in the description of the resolution is to understand their dimensions. Both approaches mentioned above suggest an answer to this question, although the answers look quite different. The aim of the paper is to compare these two results and to show that they lead to complicated combinatorial identities.

  1. Using a Bayesian network to predict barrier island geomorphologic characteristics

    USGS Publications Warehouse

    Gutierrez, Ben; Plant, Nathaniel G.; Thieler, E. Robert; Turecek, Aaron

    2015-01-01

    Quantifying geomorphic variability of coastal environments is important for understanding and describing the vulnerability of coastal topography, infrastructure, and ecosystems to future storms and sea level rise. Here we use a Bayesian network (BN) to test the importance of multiple interactions between barrier island geomorphic variables. This approach models complex interactions and handles uncertainty, which is intrinsic to future sea level rise, storminess, or anthropogenic processes (e.g., beach nourishment and other forms of coastal management). The BN was developed and tested at Assateague Island, Maryland/Virginia, USA, a barrier island with sufficient geomorphic and temporal variability to evaluate our approach. We tested the ability to predict dune height, beach width, and beach height variables using inputs that included longer-term, larger-scale, or external variables (historical shoreline change rates, distances to inlets, barrier width, mean barrier elevation, and anthropogenic modification). Data sets from three different years spanning nearly a decade sampled substantial temporal variability and serve as a proxy for analysis of future conditions. We show that distinct geomorphic conditions are associated with different long-term shoreline change rates and that the most skillful predictions of dune height, beach width, and beach height depend on including multiple input variables simultaneously. The predictive relationships are robust to variations in the amount of input data and to variations in model complexity. The resulting model can be used to evaluate scenarios related to coastal management plans and/or future scenarios where shoreline change rates may differ from those observed historically.
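
    The core operation of such a Bayesian network is conditioning: given observed inputs, look up the conditional probability table (CPT), and marginalize any unobserved parents over their priors. A toy two-parent sketch with entirely hypothetical, made-up variable states and probabilities (the study's network and fitted values are far richer):

```python
# Hypothetical priors over discretized inputs (illustrative values only)
p_shore = {"eroding": 0.6, "accreting": 0.4}
p_width = {"narrow": 0.5, "wide": 0.5}

# Hypothetical CPT: P(dune height | shoreline change, barrier width)
cpt_dune = {
    ("eroding",   "narrow"): {"low": 0.7, "high": 0.3},
    ("eroding",   "wide"):   {"low": 0.5, "high": 0.5},
    ("accreting", "narrow"): {"low": 0.4, "high": 0.6},
    ("accreting", "wide"):   {"low": 0.2, "high": 0.8},
}

def predict_dune(shore=None, width=None):
    """Distribution over dune height given any subset of observed parents;
    unobserved parents are marginalized over their priors (enumeration)."""
    out = {"low": 0.0, "high": 0.0}
    for s, ps in p_shore.items():
        if shore is not None and s != shore:
            continue
        for w, pw in p_width.items():
            if width is not None and w != width:
                continue
            weight = ((1.0 if shore is not None else ps) *
                      (1.0 if width is not None else pw))
            for dune, pd in cpt_dune[(s, w)].items():
                out[dune] += weight * pd
    return out
```

    Because prediction conditions on all observed parents at once, this structure naturally captures the paper's point that the most skillful predictions depend on including multiple input variables simultaneously.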

  2. Sampling and modeling riparian forest structure and riparian microclimate

    Treesearch

    Bianca N.I. Eskelson; Paul D. Anderson; Hailemariam Temesgen

    2013-01-01

    Riparian areas are extremely variable and dynamic, and represent some of the most complex terrestrial ecosystems in the world. The high variability within and among riparian areas poses challenges in developing efficient sampling and modeling approaches that accurately quantify riparian forest structure and riparian microclimate. Data from eight stream reaches that are...

  3. Associations between Adolescents' Perceptions of Alcohol Norms and Alcohol Behaviors: Incorporating Within-School Variability

    ERIC Educational Resources Information Center

    François, Amir; Lindstrom Johnson, Sarah; Waasdorp, Tracy E.; Parker, Elizabeth M.; Bradshaw, Catherine P.

    2017-01-01

    Background: Social norm interventions have been implemented in schools to address concerns of alcohol use among high school students; however, research in this area has not incorporated measures of variability that may better reflect the complexity of social influences. Purpose: To examine the association between perceived alcohol norms, the…

  4. Exploiting temporal variability to understand tree recruitment response to climate change

    Treesearch

    Ines Ibanez; James S. Clark; Shannon LaDeau; Janneke Hill Ris Lambers

    2007-01-01

Predicting vegetation shifts under climate change is a challenging endeavor, given the complex interactions between biotic and abiotic variables that influence demographic rates. To determine how current trends and variation in climate change affect seedling establishment, we analyzed demographic responses to spatiotemporal variation in temperature and soil moisture in...

  5. 75. FIRST TEST SHOT OF THE VAL AT THE DEDICATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    75. FIRST TEST SHOT OF THE VAL AT THE DEDICATION CEREMONIES AS SEEN FROM A FIXED CAMERA STATION, May 7, 1948. (Original photograph in possession of Dave Willis, San Diego, California.) - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  6. The Impacts of Communication and Multiple Identifications on Intent to Leave: A Multimethodological Exploration.

    ERIC Educational Resources Information Center

    Scott, Craig R.; Connaughton, Stacey L.; Diaz-Saenz, Hector R.; Maguire, Katheryn; Ramirez, Ruben; Richardson, Brian; Shaw, Sandra Pride; Morgan, Dianne

    1999-01-01

    Contributes to scholarship on voluntary turnover by examining the impact of several communication variables and multiple targets of identification on intent to leave. Finds that supervisor/coworker relationships have the strongest association (among communication variables) with intent to leave. Finds a complex relationship between three different…

  7. Pedological memory in forest soil development

    Treesearch

    Jonathan D. Phillips; Daniel A. Marion

    2004-01-01

    Individual trees may have significant impacts on soil morphology. If these impacts are non-random such that some microsites are repeatedly preferentially affected by trees, complex local spatial variability of soils would result. A model of self-reinforcing pedologic influences of trees (SRPIT) is proposed to explain patterns of soil variability in the Ouachita...

  8. The physiological basis for regeneration response to variable retention harvest treatments in three pine species

    Treesearch

    Matthew D. Powers; Kurt S. Pregitzer; Brian J. Palik; Christopher R. Webster

    2011-01-01

Variable retention harvesting (VRH) is promoted for enhancing biodiversity and ecosystem processes in managed forests, but regeneration responses to the complex stand structures that result from VRH are poorly understood. We analyzed foliar stable carbon isotope ratios (δ¹³C), oxygen isotope ratios (δ¹⁸O...

  9. 83. DETAIL OF THE MUZZLE END OF THE LAUNCHER BRIDGE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    83. DETAIL OF THE MUZZLE END OF THE LAUNCHER BRIDGE ON TEMPORARY SUPPORTS LOOKING NORTHEAST SHOWING TWO LAUNCHING TUBES, Date unknown, circa 1950'S. (Original photograph in possession of Dave Willis, San Diego, California.) - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  10. 82. DETAIL OF THE MUZZLE END OF THE LAUNCHER BRIDGE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    82. DETAIL OF THE MUZZLE END OF THE LAUNCHER BRIDGE LOOKING NORTH SHOWING THE CONNECTING BRIDGE AND TWO LAUNCHING TUBES, Date unknown, circa 1952. (Original photograph in possession of Dave Willis, San Diego, California.) - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  11. 81. VIEW OF VAL LOOKING NORTH AS SEEN FROM THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    81. VIEW OF VAL LOOKING NORTH AS SEEN FROM THE RESERVOIR SHOWING TWO LAUNCHING TUBES ON THE LAUNCHER BRIDGE, Date unknown, circa 1952. (Original photograph in possession of Dave Willis, San Diego, California.) - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  12. 63. VIEW LOOKING DOWN VAL LAUNCHING SLAB SHOWING DRIVE GEARS, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    63. VIEW LOOKING DOWN VAL LAUNCHING SLAB SHOWING DRIVE GEARS, CABLES, LAUNCHER RAILS, PROJECTILE CAR AND SUPPORT CARRIAGE, April 8, 1948. (Original photograph in possession of Dave Willis, San Diego, California.) - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  13. Associations between Procrastination, Personality, Perfectionism, Self-Esteem and Locus of Control

    ERIC Educational Resources Information Center

    Boysan, Murat; Kiral, Erkan

    2017-01-01

    To date, many variables but particularly trait-like psychological constructs have been found to strongly contribute to procrastination but the complex relations among these variables collectively have received almost no attention. The purpose of the study was to provide a more profound understanding of the relations between procrastination,…

  14. Simulating tracer transport in variably saturated soils and shallow groundwater

    USDA-ARS?s Scientific Manuscript database

    The objective of this study was to develop a realistic model to simulate the complex processes of flow and tracer transport in variably saturated soils and to compare simulation results with the detailed monitoring observations. The USDA-ARS OPE3 field site was selected for the case study due to ava...

  15. Impact of gastrectomy procedural complexity on surgical outcomes and hospital comparisons.

    PubMed

    Mohanty, Sanjay; Paruch, Jennifer; Bilimoria, Karl Y; Cohen, Mark; Strong, Vivian E; Weber, Sharon M

    2015-08-01

    Most risk adjustment approaches adjust for patient comorbidities and the primary procedure. However, procedures done at the same time as the index case may increase operative risk and merit inclusion in adjustment models for fair hospital comparisons. Our objectives were to evaluate the impact of surgical complexity on postoperative outcomes and hospital comparisons in gastric cancer surgery. Patients who underwent gastric resection for cancer were identified from a large clinical dataset. Procedure complexity was characterized using secondary procedure CPT codes and work relative value units (RVUs). Regression models were developed to evaluate the association between complexity variables and outcomes. The impact of complexity adjustment on model performance and hospital comparisons was examined. Among 3,467 patients who underwent gastrectomy for adenocarcinoma, 2,171 operations were distal and 1,296 total. A secondary procedure was reported for 33% of distal gastrectomies and 59% of total gastrectomies. Six of 10 secondary procedures were associated with adverse outcomes. For example, patients who underwent a synchronous bowel resection had a higher risk of mortality (odds ratio [OR], 2.14; 95% CI, 1.07-4.29) and reoperation (OR, 2.09; 95% CI, 1.26-3.47). Model performance was slightly better for nearly all outcomes with complexity adjustment (mortality c-statistics: standard model, 0.853; secondary procedure model, 0.858; RVU model, 0.855). Hospital ranking did not change substantially after complexity adjustment. Surgical complexity variables are associated with adverse outcomes in gastrectomy, but complexity adjustment does not affect hospital rankings appreciably. Copyright © 2015 Elsevier Inc. All rights reserved.
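
    The reported odds ratios (e.g., OR 2.14; 95% CI, 1.07-4.29 for synchronous bowel resection) come from regression models; for a single binary complexity variable the unadjusted version reduces to a 2×2 table. A sketch of that calculation with the Woolf (log-based) confidence interval, using illustrative counts rather than the study's data:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and 95% CI from a 2x2 table:
        a = exposed & event,   b = exposed & no event
        c = unexposed & event, d = unexposed & no event
    The CI uses the Woolf (log) method: log OR +/- 1.96 * SE."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

    The study's ORs additionally adjust for patient comorbidities, so the crude table version above is only the conceptual starting point.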

  16. Extreme infrared variables from UKIDSS - II. An end-of-survey catalogue of eruptive YSOs and unusual stars

    NASA Astrophysics Data System (ADS)

    Lucas, P. W.; Smith, L. C.; Contreras Peña, C.; Froebrich, D.; Drew, J. E.; Kumar, M. S. N.; Borissova, J.; Minniti, D.; Kurtev, R.; Monguió, M.

    2017-12-01

We present a catalogue of 618 high-amplitude infrared variable stars (1 < ΔK < 5 mag) detected by the two widely separated epochs of 2.2 μm data in the UKIDSS Galactic plane survey, from searches covering ∼1470 deg^2. Most were discovered by a search of all fields at 30 < l < 230°. Sources include new dusty Mira variables, three new cataclysmic variable candidates, a blazar and a peculiar source that may be an interacting binary system. However, ∼60 per cent are young stellar objects (YSOs), based on spatial association with star-forming regions at distances ranging from 300 pc to over 10 kpc. This confirms our initial result in Contreras Peña et al. (Paper I) that YSOs dominate the high-amplitude infrared variable sky in the Galactic disc. It is also supported by recently published VISTA Variables in the Via Lactea (VVV) results at 295 < l < 350°. The spectral energy distributions of the YSOs indicate class I or flat-spectrum systems in most cases, as in the VVV sample. A large number of variable YSOs are associated with the Cygnus X complex and other groups are associated with the North America/Pelican nebula, the Gemini OB1 molecular cloud, the Rosette complex, the Cone nebula, the W51 star-forming region and the S86 and S236 H II regions. Most of the YSO variability is likely due to variable/episodic accretion on time-scales of years, albeit usually less extreme than classical FUors and EXors. Luminosities at the 2010 Wide-field Infrared Survey Explorer epoch range from ∼0.1 to 10^3 L⊙ but only rarely exceed 10^2.5 L⊙.

  17. Factors associated with medication hassles experienced by family caregivers of older adults.

    PubMed

    Travis, Shirley S; McAuley, William J; Dmochowski, Jacek; Bernard, Marie A; Kao, Hsueh-Fen S; Greene, Ruth

    2007-04-01

We wished to identify potential factors associated with medication administration hassles (daily irritants) among informal caregivers who provide long-term medication assistance to persons aged 55 or older. A sample of 156 informal caregivers was recruited from seven states and several types of settings. The dependent variable was scores on the Family Caregiver Medication Administration Hassles Scale (FCMAHS). Independent variables included in the analyses were medication complexity; caregiver's gender, ethnicity, relationship to recipient, length of time in caregiving, education, and employment outside the home; care recipient's physical capacity and mental capacity; and whether the caregiver and care recipient live together. After preliminary analysis to reduce the number of independent variables, the remaining variables were included in a linear model (GLM procedure). Possible interactions and residuals were considered. Whites and Hispanics experience greater medication administration hassles than other groups, and perceived hassle intensity increases with medication complexity. Medication administration hassle scores increase with increasing education levels up to a high school degree, after which they remain consistently high. Caregivers whose care recipients have moderate levels of cognitive functioning have higher medication administration hassle scores than those whose care recipients have very high or very low cognitive functioning. The preliminary set of significant variables can be used to identify caregivers who may be at risk of experiencing medication administration hassles, increased stress, and potentially harmful events for their care recipients. Family caregivers are accepting complex caregiving responsibility for family members while receiving little or no support or assistance with caregiving hassles associated with this duty. The FCMAHS offers the means to monitor how caregivers are handling the daily irritants involved with medication administration so that educational interventions can be provided before hassles lead to more serious stress and strain.

  18. Tracking Central Hypovolemia with ECG in Humans: Cautions for the Use of Heart Period Variability in Patient Monitoring

    DTIC Science & Technology

    2010-06-01

without influences that may confound the results (e.g., pain, anxiety, transport conditions, caregiver interventions). Second, rather than being...32. Ryan KL, Rickards CA, Muniz GW, Moralez G, Convertino VA: Interindividual variability in heart rate variability (HRV) and complexity measure...Raimondi G, Legramante JM, Macerata A: Revisiting the potential of time-domain indexes in short-term HRV analysis. Biomed Tech (Berl) 51:190-193, 2006

  19. Canonical correlation analysis of infant's size at birth and maternal factors: a study in rural northwest Bangladesh.

    PubMed

    Kabir, Alamgir; Merrill, Rebecca D; Shamim, Abu Ahmed; Klemn, Rolf D W; Labrique, Alain B; Christian, Parul; West, Keith P; Nasser, Mohammed

    2014-01-01

    This analysis was conducted to explore the association between 5 birth size measurements (weight, length and head, chest and mid-upper arm [MUAC] circumferences) as dependent variables and 10 maternal factors as independent variables using canonical correlation analysis (CCA). CCA considers simultaneously sets of dependent and independent variables and, thus, generates a substantially reduced type 1 error. Data were from women delivering a singleton live birth (n = 14,506) while participating in a double-masked, cluster-randomized, placebo-controlled maternal vitamin A or β-carotene supplementation trial in rural Bangladesh. The first canonical correlation was 0.42 (P<0.001), demonstrating a moderate positive correlation mainly between the 5 birth size measurements and 5 maternal factors (preterm delivery, early pregnancy MUAC, infant sex, age and parity). A significant interaction between infant sex and preterm delivery on birth size was also revealed from the score plot. Thirteen percent of birth size variability was explained by the composite score of the maternal factors (Redundancy, RY/X = 0.131). Given an ability to accommodate numerous relationships and reduce complexities of multiple comparisons, CCA identified the 5 maternal variables able to predict birth size in this rural Bangladesh setting. CCA may offer an efficient, practical and inclusive approach to assessing the association between two sets of variables, addressing the innate complexity of interactions.
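
    CCA finds, within each variable set, the linear combinations whose mutual correlation is maximal; the canonical correlations are the singular values of the whitened cross-covariance matrix. A minimal NumPy sketch of that computation (a generic textbook construction, not the authors' implementation, and assuming both sets are full rank):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between variable sets X (n x p) and
    Y (n x q): whiten each set, then take the singular values of
    the whitened cross-covariance."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1)
    Syy = Yc.T @ Yc / (n - 1)
    Sxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(S):
        # inverse matrix square root via eigendecomposition
        # (assumes S is symmetric positive definite)
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(K, compute_uv=False)  # canonical correlations
```

    With 5 birth size measurements and 10 maternal factors, the first of the min(5, 10) canonical correlations corresponds to the 0.42 value reported above.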

  20. Transfer of skill engendered by complex task training under conditions of variable priority.

    PubMed

    Boot, Walter R; Basak, Chandramallika; Erickson, Kirk I; Neider, Mark; Simons, Daniel J; Fabiani, Monica; Gratton, Gabriele; Voss, Michelle W; Prakash, Ruchika; Lee, HyunKyu; Low, Kathy A; Kramer, Arthur F

    2010-11-01

    We explored the theoretical underpinnings of a commonly used training strategy by examining issues of training and transfer of skill in the context of a complex video game (Space Fortress, Donchin, 1989). Participants trained using one of two training regimens: Full Emphasis Training (FET) or Variable Priority Training (VPT). Transfer of training was assessed with a large battery of cognitive and psychomotor tasks ranging from basic laboratory paradigms measuring reasoning, memory, and attention to complex real-world simulations. Consistent with previous studies, VPT accelerated learning and maximized task mastery. However, the hypothesis that VPT would result in broader transfer of training received limited support. Rather, transfer was most evident in tasks that were most similar to the Space Fortress game itself. Results are discussed in terms of potential limitations of the VPT approach. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. Complex Morphological Variability in Complex Evaporitic Systems: Thermal Spring Snails from the Chihuahuan Desert, Mexico

    NASA Astrophysics Data System (ADS)

    Tang, Carol M.; Roopnarine, Peter D.

    2003-11-01

    Thermal springs in evaporitic environments provide a unique biological laboratory in which to study natural selection and evolutionary diversification. These isolated systems may be an analogue for conditions in early Earth or Mars history. One modern example of such a system can be found in the Chihuahuan Desert of north-central Mexico. The Cuatro Cienegas basin hosts a series of thermal springs that form a complex of aquatic ecosystems under a range of environmental conditions. Using landmark-based morphometric techniques, we have quantified an unusually high level of morphological variability in the endemic gastropod Mexipyrgus from Cuatro Cienegas. The differentiation is seen both within and between hydrological systems. Our results suggest that this type of environmental system is capable of producing and maintaining a high level of morphological diversity on small spatial scales, and thus should be a target for future astrobiological research.

  2. Epistemic View of Quantum States and Communication Complexity of Quantum Channels

    NASA Astrophysics Data System (ADS)

    Montina, Alberto

    2012-09-01

The communication complexity of a quantum channel is the minimal amount of classical communication required for classically simulating a process of state preparation, transmission through the channel and subsequent measurement. It establishes a limit on the power of quantum communication in terms of classical resources. We show that classical simulations employing a finite amount of communication can be derived from a special class of hidden variable theories where quantum states represent statistical knowledge about the classical state and not an element of reality. This special class has attracted strong interest very recently. The communication cost of each derived simulation is given by the mutual information between the quantum state and the classical state of the parent hidden variable theory. Finally, we find that the communication complexity for single qubits is smaller than 1.28 bits. The previously known upper bound was 1.85 bits.
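The communication cost above is expressed as a mutual information. As a hedged illustration of that quantity only (a toy joint distribution, not the paper's hidden-variable construction):

```python
import math

def mutual_information_bits(joint):
    """I(X;Y) in bits from a joint pmf given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# A perfectly correlated bit carries 1 bit of mutual information:
print(mutual_information_bits({(0, 0): 0.5, (1, 1): 0.5}))  # -> 1.0
```

For independent variables the same function returns 0, the other extreme of the communication-cost scale.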

  3. Complexity analysis of fetal heart rate preceding intrauterine demise.

    PubMed

    Schnettler, William T; Goldberger, Ary L; Ralston, Steven J; Costa, Madalena

    2016-08-01

Visual non-stress test interpretation lacks the optimal specificity and observer-agreement of an ideal screening tool for intrauterine fetal demise (IUFD) syndrome prevention. Computational methods based on traditional heart rate variability have also been of limited value. Complexity analysis probes properties of the dynamics of physiologic signals that are otherwise not accessible and, therefore, might be useful in this context. Our objective was to explore the association between fetal heart rate (FHR) complexity analysis and subsequent IUFD. Our specific hypothesis is that the complexity of the fetal heart rate dynamics is lower in the IUFD group compared with controls. This case-control study utilized cases of IUFD at a single tertiary-care center among singleton pregnancies with at least 10 min of continuous electronic FHR monitoring on at least 2 weekly occasions in the 3 weeks immediately prior to fetal demise. Controls delivered a live singleton beyond 35 weeks' gestation and were matched to cases by gestational age, testing indication, and maternal age in a 3:1 ratio. FHR data were analyzed using the multiscale entropy (MSE) method to derive a complexity index. In addition, pNNx, a measure of short-term heart rate variability, which in adults is ascribable primarily to cardiac vagal tone modulation, was also computed. 211 IUFDs occurred during the 9-year period of review, but only 6 met inclusion criteria. The median gestational age at the time of IUFD was 35.5 weeks. Three controls were matched to each case for a total of 24 subjects, and 87 FHR tracings were included for analysis. The median gestational age at the first fetal heart rate tracing was similar between groups (median [1st-3rd quartiles] weeks: IUFD cases: 34.7 (34.4-36.2); controls: 35.3 (34.4-36.1); p=.94). The median complexity of the cases' tracings was significantly less than the controls' (12.44 [8.9-16.77] vs. 17.82 [15.21-22.17]; p<.0001). 
Furthermore, the cases' median complexity decreased as gestation advanced whereas the controls' median complexity increased over time. However, this difference was not statistically significant [-0.83 (-2.03 to 0.47) vs. 0.14 (-1.25 to 0.94); p=.62]. The degree of short-term variability of FHR tracings, as measured by the pNNx metric, was significantly lower (p<.005) for the controls (1.1 [0.8-1.3]) than the IUFD cases (1.3 [1.1-1.6]). FHR complexity analysis using multiscale entropy may add value to other measures in detecting and monitoring pregnancies at the highest risk for IUFD. The decrease in complexity and short-term variability seen in the IUFD cases may reflect perturbations in neuroautonomic control due to multiple maternal-fetal factors. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
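The pNNx statistic used in the study can be sketched in a few lines: the percentage of successive interbeat (NN) interval differences whose absolute value exceeds a threshold of x ms. This is a generic illustration with invented intervals, not the study's implementation:

```python
def pnnx(nn_intervals_ms, x_ms):
    """Percentage of successive NN-interval differences exceeding x_ms."""
    diffs = [abs(b - a) for a, b in zip(nn_intervals_ms, nn_intervals_ms[1:])]
    if not diffs:
        return 0.0
    exceed = sum(1 for d in diffs if d > x_ms)
    return 100.0 * exceed / len(diffs)

# Invented fetal-like beat-to-beat intervals (ms), threshold x = 20 ms:
rr = [430, 435, 460, 458, 430, 432, 470]
print(round(pnnx(rr, 20), 1))  # -> 50.0
```

Lower pNNx values indicate reduced short-term beat-to-beat variability, the quantity compared between cases and controls above.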

  4. Minimizing Interrater Variability in Staging Sleep by Use of Computer-Derived Features

    PubMed Central

    Younes, Magdy; Hanly, Patrick J.

    2016-01-01

Study Objectives: Inter-scorer variability in sleep staging of polysomnograms (PSGs) results primarily from difficulty in determining whether: (1) an electroencephalogram pattern of wakefulness spans > 15 sec in transitional epochs, (2) spindles or K complexes are present, and (3) duration of delta waves exceeds 6 sec in a 30-sec epoch. We hypothesized that providing digitally derived information about these variables to PSG scorers may reduce inter-scorer variability. Methods: Fifty-six PSGs were scored (five-stage) by two experienced technologists (first manual, M1). Months later, the technologists edited their own scoring (second manual, M2). PSGs were then scored with an automatic system and the same two technologists and an additional experienced technologist edited them, epoch-by-epoch (Edited-Auto). This resulted in seven manual scores for each PSG. The two M2 scores were then independently modified using digitally obtained values for sleep depth and delta duration and digitally identified spindles and K complexes. Results: Percent agreement between scorers in M2 was 78.9 ± 9.0% before modification and 96.5 ± 2.6% after. Errors of this approach were defined as a change in a manual score to a stage that was not assigned by any scorer during the seven manual scoring sessions. Total errors averaged 7.1 ± 3.7% and 6.9 ± 3.8% of epochs for scorers 1 and 2, respectively, and there was excellent agreement between the modified score and the initial manual score of each technologist. Conclusions: Providing digitally obtained information about sleep depth, delta duration, spindles and K complexes during manual scoring can greatly reduce interrater variability in sleep staging by eliminating the guesswork in scoring epochs with equivocal features. Citation: Younes M, Hanly PJ. Minimizing interrater variability in staging sleep by use of computer-derived features. J Clin Sleep Med 2016;12(10):1347–1356. PMID:27448418
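Epoch-by-epoch percent agreement between two scorers, the headline metric above, is straightforward to compute. A minimal sketch with invented stage sequences:

```python
def percent_agreement(scores_a, scores_b):
    """Percentage of epochs assigned the same stage by both scorers."""
    assert len(scores_a) == len(scores_b)
    same = sum(1 for a, b in zip(scores_a, scores_b) if a == b)
    return 100.0 * same / len(scores_a)

# Invented five-stage scores for eight 30-sec epochs from two scorers:
s1 = ["W", "N1", "N2", "N2", "N3", "REM", "N2", "W"]
s2 = ["W", "N2", "N2", "N2", "N3", "REM", "N1", "W"]
print(round(percent_agreement(s1, s2), 1))  # 6 of 8 epochs agree -> 75.0
```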

  5. Using a Bayesian network to clarify areas requiring research in a host-pathogen system.

    PubMed

    Bower, D S; Mengersen, K; Alford, R A; Schwarzkopf, L

    2017-12-01

    Bayesian network analyses can be used to interactively change the strength of effect of variables in a model to explore complex relationships in new ways. In doing so, they allow one to identify influential nodes that are not well studied empirically so that future research can be prioritized. We identified relationships in host and pathogen biology to examine disease-driven declines of amphibians associated with amphibian chytrid fungus (Batrachochytrium dendrobatidis). We constructed a Bayesian network consisting of behavioral, genetic, physiological, and environmental variables that influence disease and used them to predict host population trends. We varied the impacts of specific variables in the model to reveal factors with the most influence on host population trend. The behavior of the nodes (the way in which the variables probabilistically responded to changes in states of the parents, which are the nodes or variables that directly influenced them in the graphical model) was consistent with published results. The frog population had a 49% probability of decline when all states were set at their original values, and this probability increased when body temperatures were cold, the immune system was not suppressing infection, and the ambient environment was conducive to growth of B. dendrobatidis. These findings suggest the construction of our model reflected the complex relationships characteristic of host-pathogen interactions. Changes to climatic variables alone did not strongly influence the probability of population decline, which suggests that climate interacts with other factors such as the capacity of the frog immune system to suppress disease. Changes to the adaptive immune system and disease reservoirs had a large effect on the population trend, but there was little empirical information available for model construction. 
Our model inputs can be used as a base to examine other systems, and our results show that such analyses are useful tools for reviewing existing literature, identifying links poorly supported by evidence, and understanding complexities in emerging infectious-disease systems. © 2017 Society for Conservation Biology.
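The abstract's strategy of varying parent states of a node to probe its influence on the predicted trend can be sketched with a toy conditional probability table. The table, states, and probabilities below are invented for illustration and are not the authors' model:

```python
# Toy Bayesian-network fragment: P(decline | body_temperature, immune_response).
# All probabilities are invented; only the marginalization mechanics are real.
cpt = {
    ("cold", "not_suppressing"): 0.80,
    ("cold", "suppressing"):     0.45,
    ("warm", "not_suppressing"): 0.40,
    ("warm", "suppressing"):     0.15,
}

def p_decline(p_cold, p_suppressing):
    """Marginalize the CPT over independent parent-state distributions."""
    total = 0.0
    for temp, p_t in (("cold", p_cold), ("warm", 1 - p_cold)):
        for imm, p_i in (("suppressing", p_suppressing),
                         ("not_suppressing", 1 - p_suppressing)):
            total += cpt[(temp, imm)] * p_t * p_i
    return total

# Shifting a parent toward "cold, not suppressing" raises the decline risk:
print(round(p_decline(0.5, 0.5), 3))  # -> 0.45
print(round(p_decline(0.9, 0.1), 3))  # -> 0.726
```

Repeating this for each node, and ranking nodes by how much the output probability moves, is the sensitivity-style analysis the abstract describes.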

  6. Inferring Master Painters' Esthetic Biases from the Statistics of Portraits

    PubMed Central

    Aleem, Hassan; Correa-Herran, Ivan; Grzywacz, Norberto M.

    2017-01-01

The Processing Fluency Theory posits that the ease of sensory information processing in the brain facilitates esthetic pleasure. Accordingly, the theory would predict that master painters should display biases toward visual properties such as symmetry, balance, and moderate complexity. Have these biases been occurring and, if so, have painters been optimizing these properties (fluency variables)? Here, we address these questions with statistics of portrait paintings from the Early Renaissance period. To do this, we first developed different computational measures for each of the aforementioned fluency variables. Then, we measured their statistics in 153 portraits from 26 master painters, in 27 photographs of people in three controlled poses, and in 38 quickly snapped photographs of individual persons. A statistical comparison between Early Renaissance portraits and quickly snapped photographs revealed that painters showed a bias toward balance, symmetry, and moderate complexity. However, a comparison between portraits and controlled-pose photographs showed that painters did not optimize each of these properties. Instead, different painters presented biases toward different, narrow ranges of fluency variables. Further analysis suggested that the painters' individuality stemmed in part from having to resolve the tension between complexity vs. symmetry and balance. We additionally found that constraints on the use of different painting materials by distinct painters modulated these fluency variables systematically. In conclusion, the Processing Fluency Theory of Esthetic Pleasure would need expansion if we were to apply it to the history of visual art since it cannot explain the lack of optimization of each fluency variable. To expand the theory, we propose the existence of a Neuroesthetic Space, which encompasses the possible values that each of the fluency variables can reach in any given art period. 
We discuss the neural mechanisms of this Space and propose that it has a distributed representation in the human brain. We further propose that different artists reside in different, small sub-regions of the Space. This Neuroesthetic-Space hypothesis raises the question of how painters and their paintings evolve across art periods. PMID:28337133

  7. The extraction of simple relationships in growth factor-specific multiple-input and multiple-output systems in cell-fate decisions by backward elimination PLS regression.

    PubMed

    Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya

    2013-01-01

    Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple-inputs such as MAPKs and CREB regulate multiple-outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
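The backward-elimination step described above can be sketched in pure Python. Ordinary least squares stands in for PLS here to keep the example dependency-free, and the data and variable names are invented; only the elimination logic (repeatedly dropping the input whose removal hurts the fit least) mirrors the abstract:

```python
def ols_rss(X, y):
    """Fit y = X b by normal equations (Gaussian elimination); return RSS."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    c = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):  # forward elimination with partial pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for k in range(col, p):
                A[r][k] -= f * A[col][k]
            c[r] -= f * c[col]
    b = [0.0] * p
    for j in range(p - 1, -1, -1):  # back substitution
        b[j] = (c[j] - sum(A[j][k] * b[k] for k in range(j + 1, p))) / A[j][j]
    return sum((y[i] - sum(X[i][j] * b[j] for j in range(p))) ** 2 for i in range(n))

def backward_eliminate(X, y, names, keep):
    """Drop, one at a time, the variable whose removal increases RSS least."""
    cols = list(range(len(names)))
    while len(cols) > keep:
        best = min(cols, key=lambda drop: ols_rss(
            [[row[j] for j in cols if j != drop] for row in X], y))
        cols.remove(best)
    return [names[j] for j in cols]

# Invented data: y depends on x0 and x2 only, so x1 should be eliminated.
X = [[1, 5, 2], [2, 3, 1], [3, 8, 4], [4, 1, 3], [5, 7, 5], [6, 2, 4]]
y = [5, 4, 11, 10, 15, 14]
print(backward_eliminate(X, y, ["x0", "x1", "x2"], keep=2))  # -> ['x0', 'x2']
```

In the study the same greedy loop wraps a PLS fit and a cross-validated score rather than raw RSS, reducing 60 inputs to 5.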

  8. Controlled assembly of artificial protein-protein complexes via DNA duplex formation.

    PubMed

    Płoskoń, Eliza; Wagner, Sara C; Ellington, Andrew D; Jewett, Michael C; O'Reilly, Rachel; Booth, Paula J

    2015-03-18

    DNA-protein conjugates have found a wide range of applications. This study demonstrates the formation of defined, non-native protein-protein complexes via the site specific labeling of two proteins of interest with complementary strands of single-stranded DNA in vitro. This study demonstrates that the affinity of two DNA-protein conjugates for one another may be tuned by the use of variable lengths of DNA allowing reversible control of complex formation.
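The length-dependence of duplex affinity can be illustrated with the classical Wallace rule for short oligonucleotides (Tm ≈ 2(A+T) + 4(G+C) in °C). The rule is only a rough approximation valid for short sequences, and the sequences below are invented, not those used in the study:

```python
def wallace_tm_celsius(seq):
    """Wallace-rule melting-temperature estimate for a short oligonucleotide."""
    seq = seq.upper()
    return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

# Longer complementary strands melt at higher temperature, i.e. bind tighter:
for oligo in ("ATGC", "ATGCATGC", "ATGCATGCATGC"):
    print(oligo, wallace_tm_celsius(oligo))  # 12, 24, 36 degrees C
```

This is the intuition behind tuning conjugate affinity with variable DNA lengths: each added base pair raises the duplex stability.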

  9. Determinants of Hospital Casemix Complexity

    PubMed Central

    Becker, Edmund R.; Steinwald, Bruce

    1981-01-01

    Using the Commission on Professional and Hospital Activities' Resource Need Index as a measure of casemix complexity, this paper examines the relative contributions of teaching commitment and other hospital characteristics, hospital service and insurer distributions, and area characteristics to variations in casemix complexity. The empirical estimates indicate that all three types of independent variables have a substantial influence. These results are discussed in light of recent casemix research as well as current policy implications. PMID:6799430

  10. Force control compensation method with variable load stiffness and damping of the hydraulic drive unit force control system

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Ba, Kaixian; Yu, Bin; Cao, Yuan; Zhu, Qixin; Zhao, Hualong

    2016-05-01

Each joint of the hydraulic drive quadruped robot is driven by a hydraulic drive unit (HDU), and the contact between the robot's foot end and the ground is complex and variable, which inevitably increases the difficulty of force control. In recent years many scholars have investigated control methods such as disturbance-rejection control, parameter self-adaptive control, and impedance control to improve the force control performance of the HDU, but its robustness still needs improvement. Therefore, how to simulate the complex and variable load characteristics of the environment structure, and how to ensure that the HDU retains excellent force control performance under such load characteristics, are the key issues addressed in this paper. The force control system of the HDU is modeled mathematically by the mechanism modeling method, and theoretical models of a novel force control compensation method and of a load-characteristics simulation method under different environment structures are derived, considering the dynamic characteristics of the load stiffness and load damping. Then, the simulated effects of variable load stiffness and load damping under step and sinusoidal load forces are analyzed experimentally on the HDU force control performance test platform, which provides the foundation for the force control compensation experiments. In addition, optimized PID control parameters are designed so that the HDU has good force control performance at a suitable load stiffness and damping; the force control compensation method is then introduced, and the robustness of the force control system is comparatively analyzed by experiment under several constant load characteristics and under variable load characteristics. 
The results indicate that, when the load characteristics are known, the force control compensation method presented in this paper compensates effectively for load characteristic variation: it reduces the effect of that variation on force control performance and enhances the robustness of the force control system under constant PID parameters, so that complex online PID parameter tuning need not be adopted. This work provides a theoretical and experimental foundation for highly robust force control of quadruped robot joints.
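The sensitivity of a fixed-gain controller to load stiffness and damping, which motivates the compensation method above, can be illustrated with a toy simulation. This is not the paper's HDU model; the plant, gains, and load parameters are all invented:

```python
# Toy force-control loop: a PID law commands actuator velocity v, the
# actuator position x loads a spring-damper, and the contact force is
# F = k*x + c*v. The same gains behave differently as k changes.
def track_force(k, c, f_ref, steps=4000, dt=0.001, kp=0.4, ki=8.0, kd=0.0):
    x, integ, prev_err, f = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        err = f_ref - f
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        v = kp * err + ki * integ + kd * deriv  # velocity command
        x += v * dt
        f = k * x + c * v                       # load reaction force
    return f

# Fixed gains: tracking accuracy over the same horizon depends on the
# load stiffness k, the effect a load-characteristics compensator targets.
for k in (1.0, 5.0, 20.0):
    print(k, round(track_force(k, c=0.2, f_ref=10.0), 3))
```

A compensation scheme in the spirit of the abstract would rescale the gains using the identified k and c so that the closed-loop response stays the same across load characteristics.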

  11. VizieR Online Data Catalog: uvby photometry of 4 CP stars (Adelman, 1997)

    NASA Astrophysics Data System (ADS)

    Adelman, S. J.

    1996-07-01

Differential Stroemgren uvby photometric observations from the Four College Automated Photoelectric Telescope refine the rotational periods and define the shapes of the light curves of four magnetic Chemically Peculiar stars. HD 32633 (P=6.43000d) exhibits an in-phase variability with asymmetrically shaped light curves. 25 Sex (P=4.37900d) has a complex variability with the v, b, and y light variability crudely in phase, but quite different from that of u. HR 7224 (P=1.123095d) shows in-phase variability with two nearly equal secondary minima. HD 200311 (P=26.0042d), which had previously been thought to be a long-period variable, is found to be a modest photometric variable. (5 data files).
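Refining a rotational period and defining a light-curve shape both rest on phase-folding the photometry with a trial period. A minimal sketch using HD 32633's published period and invented observation times (not the catalog data):

```python
def phase_fold(times_jd, period_d, epoch_jd=0.0):
    """Return the rotational phase in [0, 1) for each observation time."""
    return [((t - epoch_jd) / period_d) % 1.0 for t in times_jd]

# Invented Julian dates folded with HD 32633's period of 6.43000 d:
times = [2450000.0, 2450001.6075, 2450003.215, 2450009.645]
print([round(p, 3) for p in phase_fold(times, 6.43000, epoch_jd=2450000.0)])
# -> [0.0, 0.25, 0.5, 0.5]
```

Plotting magnitude against the folded phase collapses observations from many rotations onto one light curve; the period that minimizes scatter in that curve is the refined period.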

  12. Adaptive Variability in Skilled Human Movements

    NASA Astrophysics Data System (ADS)

    Kudo, Kazutoshi; Ohtsuki, Tatsuyuki

Human movements are produced in variable external/internal environments. Because of this variability, the same motor command can result in quite different movement patterns. Therefore, to produce skilled movements, humans must coordinate the variability rather than try to exclude it. In addition, because human movements are produced in redundant and complex systems, a combination of variability should be observed at different anatomical/physiological levels. In this paper, we introduce our research on human movement variability, which shows remarkable coordination among components and between organism and environment. We also introduce nonlinear dynamical models that can describe a variety of movements as self-organization of a dynamical system, because the dynamical systems approach is a leading candidate for understanding the principles underlying the organization of varying systems with a huge number of degrees of freedom.

  13. Complexity in Soil Systems: What Does It Mean and How Should We Proceed?

    NASA Astrophysics Data System (ADS)

    Faybishenko, B.; Molz, F. J.; Brodie, E.; Hubbard, S. S.

    2015-12-01

    The complex soil systems approach is needed fundamentally for the development of integrated, interdisciplinary methods to measure and quantify the physical, chemical and biological processes taking place in soil, and to determine the role of fine-scale heterogeneities. This presentation is aimed at a review of the concepts and observations concerning complexity and complex systems theory, including terminology, emergent complexity and simplicity, self-organization and a general approach to the study of complex systems using the Weaver (1948) concept of "organized complexity." These concepts are used to provide understanding of complex soil systems, and to develop experimental and mathematical approaches to soil microbiological processes. The results of numerical simulations, observations and experiments are presented that indicate the presence of deterministic chaotic dynamics in soil microbial systems. So what are the implications for the scientists who wish to develop mathematical models in the area of organized complexity or to perform experiments to help clarify an aspect of an organized complex system? The modelers have to deal with coupled systems having at least three dependent variables, and they have to forgo making linear approximations to nonlinear phenomena. The analogous rule for experimentalists is that they need to perform experiments that involve measurement of at least three interacting entities (variables depending on time, space, and each other). These entities could be microbes in soil penetrated by roots. If a process being studied in a soil affects the soil properties, like biofilm formation, then this effect has to be measured and included. The mathematical implications of this viewpoint are examined, and results of numerical solutions to a system of equations demonstrating deterministic chaotic behavior are also discussed using time series and the 3D strange attractors.
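The modelers' rule above, at least three coupled dependent variables and no linearization, is classically illustrated by the Lorenz system. This is the standard textbook example of deterministic chaos and a strange attractor, not the soil-microbial model itself:

```python
import math

# Three coupled nonlinear ODEs (the Lorenz system), stepped with explicit
# Euler. Parameters are the classical chaotic values.
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def trajectory(state, dt=0.005, steps=4000):
    for _ in range(steps):
        state = lorenz_step(state, dt)
    return state

# Deterministic chaos: two starts differing by 1e-6 end up far apart,
# yet both stay on the bounded strange attractor.
a = trajectory((1.0, 1.0, 1.0))
b = trajectory((1.0, 1.0, 1.0 + 1e-6))
print(math.dist(a, b))
```

The bounded-but-divergent behavior printed here is exactly the signature (sensitive dependence on initial conditions) that the time-series and 3D strange-attractor analyses in the presentation look for.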

  14. Molecular Species Delimitation in the Racomitrium canescens Complex (Grimmiaceae) and Implications for DNA Barcoding of Species Complexes in Mosses

    PubMed Central

    Stech, Michael; Veldman, Sarina; Larraín, Juan; Muñoz, Jesús; Quandt, Dietmar; Hassel, Kristian; Kruijer, Hans

    2013-01-01

In bryophytes a morphological species concept is still most commonly employed, but delimitation of closely related species based on morphological characters is often difficult. Here we test morphological species circumscriptions in a species complex of the moss genus Racomitrium, the R. canescens complex, based on variable DNA sequence markers from the plastid (rps4-trnT-trnL region) and nuclear (nrITS) genomes. The extensive morphological variability within the complex has led to different opinions about the number of species and intraspecific taxa to be distinguished. Molecular phylogenetic reconstructions allowed us to clearly distinguish all eight currently recognised species of the complex plus a ninth species that was inferred to belong to the complex in earlier molecular analyses. The taxonomic significance of intraspecific sequence variation is discussed. The present molecular data do not support the division of the R. canescens complex into two groups of species (subsections or sections). Most morphological characters, although in part difficult to apply, are reliable for species identification in the R. canescens complex. However, misidentification of collections that were morphologically intermediate between species called into question the suitability of leaf shape as a diagnostic character. Four partitions of the molecular markers (rps4-trnT, trnT-trnL, ITS1, ITS2) that could potentially be used for molecular species identification (DNA barcoding) performed almost equally well concerning amplification and sequencing success. Of these, ITS1 provided the highest species discrimination capacity and should be considered as a DNA barcoding marker for mosses, especially in complexes of closely related species. Molecular species identification should be complemented by redefining morphological characters, to develop a set of easy-to-use molecular and non-molecular identification tools for improving biodiversity assessments and ecological research including mosses. 
PMID:23341927

  15. Adaptive control for a class of nonlinear complex dynamical systems with uncertain complex parameters and perturbations

    PubMed Central

    Liu, Jian; Liu, Kexin; Liu, Shutang

    2017-01-01

    In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we have developed a unified framework for designing the adaptive complex scalar controller to ensure this type of CVCSs asymptotically stable and for selecting complex update laws to estimate unknown complex parameters. In particular, combining Lyapunov functions dependent on complex-valued vectors and back-stepping technique, sufficient criteria on stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, numerical simulation is presented to validate our theoretical results. PMID:28467431

  17. Towards Cost-Effective Operational Monitoring Systems for Complex Waters: Analyzing Small-Scale Coastal Processes with Optical Transmissometry

    PubMed Central

    Gonçalves-Araujo, Rafael; Wiegmann, Sonja; Torrecilla, Elena; Bardaji, Raul; Röttgers, Rüdiger; Bracher, Astrid; Piera, Jaume

    2017-01-01

The detection and prediction of changes in coastal ecosystems require a better understanding of the complex physical, chemical and biological interactions, which requires that observations be performed continuously. For this reason, there is an increasing demand for small, simple and cost-effective in situ sensors to analyze complex coastal waters at a broad range of scales. In this context, this study seeks to explore the potential of beam attenuation spectra, c(λ), measured in situ with an advanced-technology optical transmissometer, for assessing temporal and spatial patterns in the complex estuarine waters of Alfacs Bay (NW Mediterranean) as a test site. In particular, the information contained in the spectral beam attenuation coefficient was assessed and linked with different biogeochemical variables. The attenuation at λ = 710 nm was used as a proxy for particle concentration, TSM, whereas a novel parameter was adopted as an optical indicator for chlorophyll a (Chl-a) concentration, based on the local maximum of c(λ) observed at the long-wavelength side of the red band Chl-a absorption peak. In addition, since coloured dissolved organic matter (CDOM) has an important influence on the beam attenuation spectral shape and complementary measurements of particle size distribution were available, the beam attenuation spectral slope was used to analyze the CDOM content. Results were successfully compared with optical and biogeochemical variables from laboratory analysis of collocated water samples, and statistically significant correlations were found between the attenuation proxies and the biogeochemical variables TSM, Chl-a and CDOM. This outcome demonstrated the potential of high-frequency beam attenuation measurements as a simple, continuous and cost-effective approach for rapid detection of changes and patterns in biogeochemical properties in complex coastal environments. PMID:28107539
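A spectral slope such as the one used above as a CDOM indicator is commonly estimated by fitting a power law in log-log space. A sketch with synthetic data; the model form c(λ) = c₀ (λ/λ₀)^(−γ) is a common convention in marine optics, not necessarily the authors' exact formulation:

```python
import math

def spectral_slope(wavelengths_nm, c_values, ref_nm=550.0):
    """Estimate gamma in c(lam) = c0*(lam/ref)**(-gamma) by a log-log fit."""
    xs = [math.log(w / ref_nm) for w in wavelengths_nm]
    ys = [math.log(c) for c in c_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic attenuation spectrum generated with gamma = 1.2, so the fit
# should recover that value:
wl = [412.0, 440.0, 488.0, 510.0, 555.0, 650.0, 710.0]
c = [0.8 * (w / 550.0) ** -1.2 for w in wl]
print(round(spectral_slope(wl, c), 3))  # -> 1.2
```

With real spectra the fitted slope shifts with particle size and CDOM content, which is why it serves as a compositional proxy.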

  18. Slice regular functions of several Clifford variables

    NASA Astrophysics Data System (ADS)

    Ghiloni, R.; Perotti, A.

    2012-11-01

We introduce a class of slice regular functions of several Clifford variables. Our approach to the definition of slice functions is based on the concept of stem functions of several variables and on the introduction on real Clifford algebras of a family of commuting complex structures. The class of slice regular functions includes, in particular, the family of (ordered) polynomials in several Clifford variables. We prove some basic properties of slice and slice regular functions and give examples to illustrate this function theory. In particular, we give integral representation formulas for slice regular functions and a Hartogs-type extension result.

  19. Tying Variability in Summertime North American Extreme Weather Regimes to the Boreal Summer Intraseasonal Oscillation

    NASA Astrophysics Data System (ADS)

    Jenney, A. M.; Randall, D. A.

    2017-12-01

    Tropical intraseasonal oscillations are known to be a source of extratropical variability. We show that subseasonal variability in observed North American epidemiologically significant regional extreme weather regimes is teleconnected to the boreal summer intraseasonal oscillation (BSISO)—a complex tropical weather system that is active during the northern summer and has a 30-50 day timescale. The dynamics of the teleconnection are examined. We also find that interannual variability of the tropical mean-state can modulate the teleconnection. Our results suggest that the BSISO may enable subseasonal to seasonal predictions of North American summertime weather extremes.

  20. Impulses towards a Multifunctional Transition in Rural Australia: Gaps in the Research Agenda

    ERIC Educational Resources Information Center

    Holmes, John

    2006-01-01

    The direction, complexity and pace of rural change in affluent, western societies can be conceptualized as a multifunctional transition, in which a variable mix of consumption and protection values has emerged, contesting the former dominance of production values, and leading to greater complexity and heterogeneity in rural occupance at all…

  1. Quantification for Complex Assessment: Uncertainty Estimation in Final Year Project Thesis Assessment

    ERIC Educational Resources Information Center

    Kim, Ho Sung

    2013-01-01

    A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final…

  2. Insight into genome variability in the Fusarium Incarnatum-equiseti species complex through comparative analysis of secondary metabolic biosynthetic gene clusters

    USDA-ARS?s Scientific Manuscript database

    The genus Fusarium comprises 22 species complexes that together include approximately 300 phylogenetically distinct species. A major focus in Fusarium literature is to understand the genetic basis of niche specialization, secondary metabolites (SM) production, and host interactions in closely relate...

  3. Re-Os isotopic systematics in chromitites from the Stillwater Complex, Montana, USA

    NASA Astrophysics Data System (ADS)

    Marcantonio, Franco; Zindler, Alan; Reisberg, Laurie; Mathez, E. A.

    1993-08-01

    New Re-Os isotopic data on chromitites of the Stillwater Complex demonstrate isotopic equilibrium between cumulate chromite and whole rock. Initial osmium isotopic ratios for the chromitites, chosen for their freshness, are consistent with derivation from a mantle-derived magma that suffered little or no interaction with the continental crust prior to crystallization. Molybdenite, separated from a sample of the G-chromitite, yields a Re-Os age of 2740 Ma, indistinguishable from the age of the intrusion. The presence of molybdenite documents rhenium, and probably osmium, mobilization by hydrothermal fluids that permeated the intrusion shortly after crystallization. Initial osmium isotopic variability observed in chromitites and other rocks from the Stillwater Complex could result from interaction with these fluids. In this context, there is no compelling reason to call on assimilation of crust by mantle-derived magma to explain the osmium or neodymium isotopic variability. Although osmium isotopic systematics have been affected by hydrothermal processes, Re-Os results demonstrate that more than 95 percent of the osmium, and by inference other PGEs in the Stillwater Complex, derive from the mantle.
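The molybdenite Re-Os age quoted above follows from the standard decay relation t = ln(¹⁸⁷Os/¹⁸⁷Re + 1)/λ, with λ(¹⁸⁷Re) = 1.666 × 10⁻¹¹ yr⁻¹ and essentially all ¹⁸⁷Os in molybdenite being radiogenic. A sketch of the model-age arithmetic; the Os/Re ratio below is back-calculated for illustration, not a measured value:

```python
import math

LAMBDA_RE187 = 1.666e-11  # 187Re decay constant, per year

def re_os_age_ma(os187_per_re187):
    """Molybdenite model age in Ma from the radiogenic 187Os/187Re ratio."""
    return math.log(os187_per_re187 + 1.0) / LAMBDA_RE187 / 1e6

# The ratio a ~2740 Ma molybdenite would have accumulated:
ratio = math.exp(LAMBDA_RE187 * 2740e6) - 1.0
print(round(re_os_age_ma(ratio)))  # -> 2740
```

The same decay equation, written with ¹⁸⁷Os/¹⁸⁸Os and ¹⁸⁷Re/¹⁸⁸Os ratios, underlies the isochron treatment used for the chromitite initial-Os calculations.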

  4. Theorems and application of local activity of CNN with five state variables and one port.

    PubMed

    Xiong, Gang; Dong, Xisong; Xie, Li; Yang, Thomas

    2012-01-01

    Coupled nonlinear dynamical systems have been widely studied recently; however, the dynamical properties of these systems are difficult to deal with. The local activity of the cellular neural network (CNN) has provided a powerful tool for studying the emergence of complex patterns in a homogeneous lattice composed of coupled cells. In this paper, the analytical criteria for local activity in a reaction-diffusion CNN with five state variables and one port are presented. These criteria consist of four theorems, including a series of inequalities involving the CNN parameters, and can be used for calculating the bifurcation diagram to determine or analyze the emergence of complex dynamic patterns, such as chaos. As a case study, a reaction-diffusion CNN of a hepatitis B virus (HBV) mutation-selection model is analyzed and simulated, and its bifurcation diagram is calculated. Using the diagram, numerical simulations of this CNN model provide reasonable explanations of complex mutant phenomena during therapy. It is therefore demonstrated that the local activity of CNN provides a practical tool for studying the complex dynamics of some coupled nonlinear systems.

  5. Water Impact Test and Simulation of a Composite Energy Absorbing Fuselage Section

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Jackson, Karen E.; Sparks, Chad; Sareen, Ashish

    2003-01-01

    In March 2002, a 25-ft/s vertical drop test of a composite fuselage section was conducted onto water. The purpose of the test was to obtain experimental data characterizing the structural response of the fuselage section during water impact for comparison with two previous drop tests that were performed onto a rigid surface and soft soil. For the drop test, the fuselage section was configured with ten 100-lb. lead masses, five per side, that were attached to seat rails mounted to the floor. The fuselage section was raised to a height of 10-ft. and dropped vertically into a 15-ft. diameter pool filled to a depth of 3.5-ft. with water. Approximately 70 channels of data were collected during the drop test at a 10-kHz sampling rate. The test data were used to validate crash simulations of the water impact that were developed using the nonlinear, explicit transient dynamic codes, MSC.Dytran and LS-DYNA. The fuselage structure was modeled using shell and solid elements with a Lagrangian mesh, and the water was modeled with both Eulerian and Lagrangian techniques. The fluid-structure interactions were executed using the fast general coupling in MSC.Dytran and the Arbitrary Lagrange-Euler (ALE) coupling in LS-DYNA. Additionally, the smooth particle hydrodynamics (SPH) meshless Lagrangian technique was used in LS-DYNA to represent the fluid. The simulation results were correlated with the test data to validate the modeling approach. Additional simulation studies were performed to determine how changes in mesh density, mesh uniformity, fluid viscosity, and failure strain influence the test-analysis correlation.

  6. A constrained-gradient method to control divergence errors in numerical MHD

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2016-10-01

    In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, `divergence-cleaning' schemes reduce the ∇·B errors; however, they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained-gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares-minimizing, globally divergence-free reconstruction of the fluid. Unlike `locally divergence-free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or `8-wave' cleaning can produce order-of-magnitude errors.
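    As a toy illustration of the quantity at stake (not the paper's CG scheme), the sketch below measures the dimensionless divergence error h·|∇·B|/|B| on a periodic grid. The field is the curl of a vector potential, so its true divergence is zero and any residual is purely numerical; the grid size and potential are illustrative choices.

```python
import numpy as np

# B = curl(0, 0, Az) with Az = sin(x)sin(y): analytically divergence-free.
n = 64
h = 2 * np.pi / n
x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")
Bx = np.sin(X) * np.cos(Y)    # Bx = dAz/dy
By = -np.cos(X) * np.sin(Y)   # By = -dAz/dx

def ddx(f):
    """Central difference in x with periodic wrap-around."""
    return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * h)

def ddy(f):
    """Central difference in y with periodic wrap-around."""
    return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * h)

divB = ddx(Bx) + ddy(By)
err = h * np.max(np.abs(divB)) / np.max(np.hypot(Bx, By))
print(f"max dimensionless divergence error: {err:.2e}")
```

For this smooth field the two central-difference terms cancel to round-off, which is the behavior divergence-preserving schemes try to retain for arbitrary fields.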

  7. CLUMPY DISKS AS A TESTBED FOR FEEDBACK-REGULATED GALAXY FORMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, Lucio; Tamburello, Valentina; Lupi, Alessandro

    2016-10-10

    We study the dependence of fragmentation in massive gas-rich galaxy disks at z > 1 on stellar feedback schemes and hydrodynamical solvers, employing the GASOLINE2 SPH code and the Lagrangian mesh-less code GIZMO in finite mass mode. Non-cosmological galaxy disk runs with the standard delayed-cooling blastwave feedback are compared with runs adopting a new superbubble feedback, which produces winds by modeling the detailed physics of supernova-driven bubbles and leads to efficient self-regulation of star formation. We find that, with blastwave feedback, massive star-forming clumps form in comparable number and with very similar masses in GASOLINE2 and GIZMO. Typical clump masses are in the range 10⁷-10⁸ M⊙, lower than in most previous works, while giant clumps with masses above 10⁹ M⊙ are exceedingly rare. By contrast, superbubble feedback does not produce massive star-forming bound clumps, as galaxies never undergo a phase of violent disk instability. In this scheme, only sporadic, unbound star-forming overdensities lasting a few tens of Myr can arise, triggered by non-linear perturbations from massive satellite companions. We conclude that there is severe tension between explaining massive star-forming clumps observed at z > 1 primarily as the result of disk fragmentation driven by gravitational instability and the prevailing view of feedback-regulated galaxy formation. The link between disk stability and star formation efficiency should thus be regarded as a key testing ground for galaxy formation theory.

  8. Convergence of the Critical Cooling Rate for Protoplanetary Disk Fragmentation Achieved: The Key Role of Numerical Dissipation of Angular Momentum

    NASA Astrophysics Data System (ADS)

    Deng, Hongping; Mayer, Lucio; Meru, Farzana

    2017-09-01

    We carry out simulations of gravitationally unstable disks using smoothed particle hydrodynamics (SPH) and the novel Lagrangian meshless finite mass (MFM) scheme in the GIZMO code. Our aim is to understand the cause of the nonconvergence of the cooling boundary for fragmentation reported in the literature. We run SPH simulations with two different artificial viscosity implementations and compare them with MFM, which does not employ any artificial viscosity. With MFM we demonstrate convergence of the critical cooling timescale for fragmentation at β_crit ≈ 3. Nonconvergence persists in SPH codes. We show how the nonconvergence problem is caused by artificial fragmentation triggered by excessive dissipation of angular momentum in domains with large velocity derivatives. With increased resolution, such domains become more prominent. Vorticity lags behind density, due to numerical viscous dissipation in these regions, promoting collapse with longer cooling times. This effect is shown to be dominant over the competing tendency of artificial viscosity to diminish with increasing resolution. When the initial conditions are first relaxed for several orbits, the flow is more regular, with lower shear and vorticity in nonaxisymmetric regions, aiding convergence. Yet MFM is the only method that converges exactly. Our findings are of general interest, as numerical dissipation via artificial viscosity or advection errors can also occur in grid-based codes. Indeed, values of β_crit significantly higher than our converged estimate have been reported in the literature for the FARGO code. Finally, we discuss implications for giant planet formation via disk instability.
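    The cooling criterion discussed above is commonly written t_cool = β/Ω (the Gammie-style prescription), with fragmentation expected when β falls below β_crit. A minimal numerical sketch, using made-up Keplerian disk values in code units and the converged β_crit ≈ 3 reported in the abstract:

```python
import numpy as np

G, M_star = 1.0, 1.0          # gravitational constant and stellar mass, code units
beta_crit = 3.0               # converged critical value reported with MFM

def fragments(beta, beta_crit=beta_crit):
    """True when cooling is fast enough (beta < beta_crit) to trigger fragmentation."""
    return beta < beta_crit

for beta in (1.0, 3.0, 10.0):
    r = 1.0                                   # illustrative radius
    omega = np.sqrt(G * M_star / r**3)        # Keplerian angular frequency
    t_cool = beta / omega                     # cooling time in units of 1/Omega
    print(f"beta={beta:4.1f}  t_cool={t_cool:5.2f}  fragments={fragments(beta)}")
```

The point of the paper is that numerical angular-momentum dissipation can make a simulation behave as if β were smaller than prescribed, producing artificial fragmentation above the true β_crit.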

  9. The Formation of a Milky Way-sized Disk Galaxy. I. A Comparison of Numerical Methods

    NASA Astrophysics Data System (ADS)

    Zhu, Qirong; Li, Yuexing

    2016-11-01

    The long-standing challenge of creating a Milky Way- (MW-) like disk galaxy from cosmological simulations has motivated significant developments in both numerical methods and physical models. We investigate these two fundamental aspects in a new comparison project using a set of cosmological hydrodynamic simulations of an MW-sized galaxy. In this study, we focus on the comparison of two particle-based hydrodynamics methods: an improved smoothed particle hydrodynamics (SPH) code Gadget, and a Lagrangian Meshless Finite-Mass (MFM) code Gizmo. All the simulations in this paper use the same initial conditions and physical models, which include star formation, “energy-driven” outflows, metal-dependent cooling, stellar evolution, and metal enrichment. We find that both numerical schemes produce a late-type galaxy with extended gaseous and stellar disks. However, notable differences are present in a wide range of galaxy properties and their evolution, including star-formation history, gas content, disk structure, and kinematics. Compared to Gizmo, the Gadget simulation produced a larger fraction of cold, dense gas at high redshift, which fuels rapid star formation and results in a 20% higher stellar mass and a 10% lower gas fraction at z = 0. The resulting gas disk is also smoother and more coherent in rotation, owing to damping of turbulent motion by the numerical viscosity in SPH, in contrast to the Gizmo simulation, which shows a more prominent spiral structure. Given its better convergence properties and lower computational cost, we argue that the MFM method is a promising alternative to SPH in cosmological hydrodynamic simulations.

  10. THE FORMATION OF A MILKY WAY-SIZED DISK GALAXY. I. A COMPARISON OF NUMERICAL METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Qirong; Li, Yuexing, E-mail: qxz125@psu.edu

    The long-standing challenge of creating a Milky Way- (MW-) like disk galaxy from cosmological simulations has motivated significant developments in both numerical methods and physical models. We investigate these two fundamental aspects in a new comparison project using a set of cosmological hydrodynamic simulations of an MW-sized galaxy. In this study, we focus on the comparison of two particle-based hydrodynamics methods: an improved smoothed particle hydrodynamics (SPH) code Gadget, and a Lagrangian Meshless Finite-Mass (MFM) code Gizmo. All the simulations in this paper use the same initial conditions and physical models, which include star formation, “energy-driven” outflows, metal-dependent cooling, stellar evolution, and metal enrichment. We find that both numerical schemes produce a late-type galaxy with extended gaseous and stellar disks. However, notable differences are present in a wide range of galaxy properties and their evolution, including star-formation history, gas content, disk structure, and kinematics. Compared to Gizmo, the Gadget simulation produced a larger fraction of cold, dense gas at high redshift, which fuels rapid star formation and results in a 20% higher stellar mass and a 10% lower gas fraction at z = 0. The resulting gas disk is also smoother and more coherent in rotation, owing to damping of turbulent motion by the numerical viscosity in SPH, in contrast to the Gizmo simulation, which shows a more prominent spiral structure. Given its better convergence properties and lower computational cost, we argue that the MFM method is a promising alternative to SPH in cosmological hydrodynamic simulations.

  11. Singular boundary method for global gravity field modelling

    NASA Astrophysics Data System (ADS)

    Cunderlik, Robert

    2014-05-01

    The singular boundary method (SBM) and the method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They were developed to avoid singular numerical integration as well as mesh generation in the traditional boundary element method (BEM). SBM was proposed to overcome a main drawback of MFS: its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce origin intensity factors that isolate the singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we apply SBM to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The achieved results are compared with the numerical solutions obtained by MFS or the direct BEM, indicating the efficiency of all three methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz components of the gravity disturbing tensor observed by the GOCE satellite mission. A determination of the origin intensity factors allows the disturbing potential and gravity disturbances to be evaluated directly on the Earth's surface, where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far-zone contributions is applied.
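    A minimal sketch of the plain MFS the abstract contrasts with SBM: the 2-D Laplace fundamental solution, source points on a fictitious circle outside the unit disk, and collocation on the real boundary. The geometry, number of points, and the test harmonic function u = x are all illustrative assumptions, not the study's setup.

```python
import numpy as np

n = 40
theta = 2 * np.pi * np.arange(n) / n
bnd = np.c_[np.cos(theta), np.sin(theta)]        # collocation points on |x| = 1
src = 2.0 * np.c_[np.cos(theta), np.sin(theta)]  # fictitious sources on |s| = 2

def G(x, s):
    """Fundamental solution of the 2-D Laplacian, -ln|x - s| / (2*pi)."""
    return -np.log(np.linalg.norm(x - s, axis=-1)) / (2 * np.pi)

A = G(bnd[:, None, :], src[None, :, :])          # n x n collocation matrix
g = bnd[:, 0]                                    # Dirichlet data of u(x, y) = x
coef = np.linalg.solve(A, g)                     # source strengths

p = np.array([0.3, 0.2])                         # interior evaluation point
u = G(p, src) @ coef
print(f"MFS u(0.3, 0.2) = {u:.6f}  (exact 0.3)")
```

Because the sources sit off the real boundary, the kernel is never singular here; SBM instead keeps sources on the boundary and uses the origin intensity factors to regularize the resulting singularities.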

  12. The Natural Neighbour Radial Point Interpolation Meshless Method Applied to the Non-Linear Analysis

    NASA Astrophysics Data System (ADS)

    Dinis, L. M. J. S.; Jorge, R. M. Natal; Belinha, J.

    2011-05-01

    In this work the Natural Neighbour Radial Point Interpolation Method (NNRPIM) is extended to the large-deformation analysis of elastic and elasto-plastic structures. The NNRPIM uses the natural neighbour concept to enforce the nodal connectivity and to create a node-dependent background mesh, used in the numerical integration of the NNRPIM interpolation functions. Unlike the FEM, where geometrical restrictions on elements are imposed for the convergence of the method, the NNRPIM has no such restrictions, which permits a random node distribution for the discretized problem. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed using radial point interpolators, with some differences that modify the method's performance. In the construction of the NNRPIM interpolation functions no polynomial basis is required, and the radial basis function (RBF) used is the multiquadric RBF. The NNRPIM interpolation functions possess the delta Kronecker property, which simplifies the imposition of natural and essential boundary conditions. One of the aims of this work is to validate the NNRPIM in large-deformation elasto-plastic analysis; accordingly, the non-linear solution algorithm used is the Newton-Raphson initial stiffness method, and the efficient "forward-Euler" procedure is used to return the stress state to the yield surface. Several non-linear examples, exhibiting elastic and elasto-plastic material properties, are studied to demonstrate the effectiveness of the method. The numerical results indicate that the NNRPIM handles large material distortion effectively and provides an accurate solution under large deformation.
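    A sketch of plain multiquadric RBF interpolation, the ingredient the NNRPIM builds on (without the natural-neighbour integration machinery). Because the coefficients are solved to match the nodal values exactly, the interpolant exhibits the delta Kronecker property the abstract mentions. The node layout, shape parameter c, and test function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.random((25, 2))                    # scattered nodes in the unit square
f = lambda p: p[:, 0] ** 2 + p[:, 1]           # hypothetical field to interpolate
c = 0.3                                        # multiquadric shape parameter

def mq(a, b):
    """Multiquadric kernel phi(r) = sqrt(r^2 + c^2) for all point pairs."""
    r2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.sqrt(r2 + c * c)

coef = np.linalg.solve(mq(nodes, nodes), f(nodes))

def interp(p):
    """Evaluate the RBF interpolant at points p (no polynomial basis needed)."""
    return mq(p, nodes) @ coef

# Kronecker-delta property: the interpolant reproduces nodal values exactly.
print(np.max(np.abs(interp(nodes) - f(nodes))))
```

This exact nodal reproduction is what lets essential boundary conditions be imposed directly on nodal unknowns, unlike moving-least-squares approximants.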

  13. The nonlinear effects of job complexity and autonomy on job satisfaction, turnover, and psychological well-being.

    PubMed

    Chung-Yan, Greg A

    2010-07-01

    This study examines the interactive relationship between job complexity and job autonomy on job satisfaction, turnover intentions, and psychological well-being. It was hypothesized that the positive or motivating effects of job complexity are only realized when workers are given enough autonomy to effectively meet the challenges of complex jobs. Results show that not only do job complexity and job autonomy interact, but that the relationships to the outcome variables are curvilinear in form. Job complexity is shown to be both a motivator and a stressor when job autonomy is low. However, the most beneficial effects of job complexity occur when it is matched by a high level of job autonomy. Theoretical and practical implications are discussed.

  14. Water Quality Variable Estimation using Partial Least Squares Regression and Multi-Scale Remote Sensing.

    NASA Astrophysics Data System (ADS)

    Peterson, K. T.; Wulamu, A.

    2017-12-01

    Water, essential to all living organisms, is one of the Earth's most precious resources. Remote sensing offers an ideal approach to monitoring water quality compared with traditional in-situ techniques, which are highly time- and resource-consuming. Using a multi-scale approach, data from handheld spectroscopy, UAS-based hyperspectral imaging, and satellite multispectral images were collected in coordination with in-situ water quality samples for two midwestern watersheds. The remote sensing data were modeled and correlated with the in-situ water quality variables, including chlorophyll content (Chl), turbidity, and total dissolved solids (TDS), using Normalized Difference Spectral Indices (NDSI) and Partial Least Squares Regression (PLSR). The results of the study supported the original hypothesis that correlating water quality variables with remotely sensed data benefits greatly from the use of more complex modeling and regression techniques such as PLSR. The PLSR analysis produced much higher R2 values for all variables when compared to NDSI. The combination of NDSI and PLSR analysis also identified key wavelengths that aligned with previous studies' findings. This research displays the advantages of, and the future for, complex modeling and machine learning techniques to improve water quality variable estimation from spectral data.
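    The NDSI half of that pipeline can be sketched as an exhaustive band-pair search: for every pair (i, j) compute NDSI = (b_i − b_j)/(b_i + b_j) and correlate it with an in-situ variable. The reflectances and chlorophyll values below are synthetic stand-ins (chlorophyll is constructed from bands 5 and 2 plus noise), not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_bands = 30, 8
refl = rng.uniform(0.05, 0.6, (n_samples, n_bands))   # synthetic reflectances
# Hypothetical in-situ chlorophyll driven by the (5, 2) band pair plus noise.
chl = 5 * (refl[:, 5] - refl[:, 2]) / (refl[:, 5] + refl[:, 2]) \
      + rng.normal(0, 0.05, n_samples)

best = (0.0, None)
for i in range(n_bands):
    for j in range(i + 1, n_bands):
        ndsi = (refl[:, i] - refl[:, j]) / (refl[:, i] + refl[:, j])
        r2 = np.corrcoef(ndsi, chl)[0, 1] ** 2        # R^2 against in-situ data
        if r2 > best[0]:
            best = (r2, (i, j))

print(f"best band pair {best[1]} with R^2 = {best[0]:.3f}")
```

PLSR improves on this by regressing the target on latent components built from all bands at once rather than one band pair at a time.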

  15. Experience with compound words influences their processing: An eye movement investigation with English compound words.

    PubMed

    Juhasz, Barbara J

    2016-11-14

    Recording eye movements provides information on the time-course of word recognition during reading. Juhasz and Rayner [Juhasz, B. J., & Rayner, K. (2003). Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading. Journal of Experimental Psychology: Learning, Memory and Cognition, 29, 1312-1318] examined the impact of five word recognition variables, including familiarity and age-of-acquisition (AoA), on fixation durations. All variables impacted fixation durations, but the time-course differed. However, the study focused on relatively short, morphologically simple words. Eye movements are also informative for examining the processing of morphologically complex words such as compound words. The present study further examined the time-course of lexical and semantic variables during morphological processing. A total of 120 English compound words that varied in familiarity, AoA, semantic transparency, lexeme meaning dominance, sensory experience rating (SER), and imageability were selected. The impact of these variables on fixation durations was examined when length, word frequency, and lexeme frequencies were controlled in a regression model. The most robust effects were found for familiarity and AoA, indicating that a reader's experience with compound words significantly impacts compound recognition. These results provide insight into semantic processing of morphologically complex words during reading.

  16. MO-G-BRF-01: BEST IN PHYSICS (JOINT IMAGING-THERAPY) - Sensitivity of PET-Based Texture Features to Respiratory Motion in Non-Small Cell Lung Cancer (NSCLC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, S; Aerts, H; Berbeco, R

    2014-06-15

    Purpose: PET-based texture features are used to quantify tumor heterogeneity because of their predictive power for treatment outcome. We investigated the sensitivity of texture features to tumor motion by comparing whole-body (3D) and respiratory-gated (4D) PET imaging. Methods: Twenty-six patients (34 lesions) received 3D and 4D [F-18]FDG-PET scans before chemo-radiotherapy. The acquired 4D data were retrospectively binned into five breathing phases to create the 4D image sequence. Four texture features (Coarseness, Contrast, Busyness, and Complexity) were computed within the physician-defined tumor volume. The relative difference (δ) in each measure between 3D and 4D PET imaging was calculated. The Wilcoxon signed-rank test (p<0.01) was used to determine whether δ was significantly different from zero. The coefficient of variation (CV) was used to determine the variability in the texture features between all 4D-PET phases. The Pearson correlation coefficient was used to investigate the impact of tumor size and motion amplitude on δ. Results: Significant differences (p<<0.01) between 3D and 4D imaging were found for Coarseness, Busyness, and Complexity. The difference for Contrast was not significant (p>0.24). 4D-PET increased Busyness (~20%) and Complexity (~20%), and decreased Coarseness (~10%) and Contrast (~5%) compared to 3D-PET. Nearly negligible variability (CV=3.9%) was found between the 4D phase bins for Coarseness and Complexity. Moderate variability was found for Contrast and Busyness (CV~10%). Poor correlation was found between tumor volume and δ for the texture features (R = −0.34 to 0.34). Motion amplitude had a moderate impact on δ for Contrast and Busyness (R = −0.64 to 0.54) and no impact for Coarseness and Complexity (R = −0.29 to 0.17). Conclusion: Substantial differences in texture features were found between 3D and 4D PET imaging. Moreover, the variability between phase bins for Coarseness and Complexity was negligible, suggesting that similar quantification can be obtained from all phases. Texture features, blurred by respiratory motion during 3D-PET acquisition, can be better resolved by 4D-PET imaging with any phase.
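    The two summary statistics used above, the relative difference δ between 3D and 4D feature values and the coefficient of variation across the five 4D phase bins, can be sketched as follows; the feature values are made-up numbers, not study data.

```python
import numpy as np

feature_3d = 12.0                                        # hypothetical 3D value
feature_4d_phases = np.array([14.1, 14.4, 14.0, 14.6, 14.3])  # 5 breathing phases

# Relative difference (percent) between the mean 4D value and the 3D value.
delta = (feature_4d_phases.mean() - feature_3d) / feature_3d * 100

# Coefficient of variation (percent) across the phase bins.
cv = feature_4d_phases.std(ddof=1) / feature_4d_phases.mean() * 100

print(f"delta = {delta:.1f}%  CV = {cv:.1f}%")
```

A small CV, as found here for Coarseness and Complexity, means any single gated phase would yield a similar feature value.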

  17. Cardiac interbeat interval dynamics from childhood to senescence: comparison of conventional and new measures based on fractals and chaos theory

    NASA Technical Reports Server (NTRS)

    Pikkujamsa, S. M.; Makikallio, T. H.; Sourander, L. B.; Raiha, I. J.; Puukka, P.; Skytta, J.; Peng, C. K.; Goldberger, A. L.; Huikuri, H. V.

    1999-01-01

    BACKGROUND: New measures of R-R interval variability based on fractal scaling and nonlinear dynamics ("chaos theory") may give new insights into heart rate dynamics. The aims of this study were to (1) systematically characterize and quantify the effects of aging from early childhood to advanced age on 24-hour heart rate dynamics in healthy subjects; (2) compare age-related changes in conventional time- and frequency-domain measures with changes in newly derived measures based on fractal scaling and complexity (chaos) theory; and (3) further test the hypothesis that there is loss of complexity and altered fractal scaling of heart rate dynamics with advanced age. METHODS AND RESULTS: The relationship between age and cardiac interbeat (R-R) interval dynamics from childhood to senescence was studied in 114 healthy subjects (age range, 1 to 82 years) by measurement of the slope β of the power-law regression line (log power vs. log frequency) of R-R interval variability (10⁻⁴ to 10⁻² Hz), approximate entropy (ApEn), short-term (α₁) and intermediate-term (α₂) fractal scaling exponents obtained by detrended fluctuation analysis, and traditional time- and frequency-domain measures from 24-hour ECG recordings. Compared with young adults (<40 years old, n=29), children (<15 years old, n=27) showed similar complexity (ApEn) and fractal correlation properties (α₁, α₂, β) of R-R interval dynamics despite lower spectral and time-domain measures. Progressive loss of complexity (decreased ApEn, r=−0.69, P<0.001) and alterations of long-term fractal-like heart rate behavior (increased α₂, r=0.63; decreased β, r=−0.60; P<0.001 for both) were observed thereafter from middle age (40 to 60 years, n=29) to old age (>60 years, n=29). CONCLUSIONS: Cardiac interbeat interval dynamics change markedly from childhood to old age in healthy subjects. 
    Children show complexity and fractal correlation properties of R-R interval time series comparable to those of young adults, despite lower overall heart rate variability. Healthy aging is associated with R-R interval dynamics showing higher regularity and altered fractal scaling, consistent with a loss of complex variability.
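    One of the complexity measures named above, approximate entropy (ApEn), can be implemented compactly. This is a reference sketch of the standard algorithm (template length m = 2, tolerance r = 0.2 × SD, self-matches included, as is conventional); the two test signals are synthetic, not ECG data.

```python
import numpy as np

def apen(x, m=2, r_frac=0.2):
    """Approximate entropy of a 1-D series (Pincus-style, self-matches included)."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def phi(m):
        # All length-m templates of the series as rows.
        emb = np.lib.stride_tricks.sliding_window_view(x, m)
        # Chebyshev distance between every pair of templates.
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(-1)
        c = (d <= r).mean(axis=1)      # fraction of templates matching each one
        return np.log(c).mean()
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(2)
regular = np.sin(np.linspace(0, 40 * np.pi, 1000))   # highly regular signal
noisy = rng.normal(size=1000)                        # irregular signal
print(f"ApEn(sine)  = {apen(regular):.3f}")
print(f"ApEn(noise) = {apen(noisy):.3f}")
```

A regular signal scores low and an irregular one high, which is the sense in which the study's "loss of complexity" with aging corresponds to decreasing ApEn.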

  18. A Marked Poisson Process Driven Latent Shape Model for 3D Segmentation of Reflectance Confocal Microscopy Image Stacks of Human Skin.

    PubMed

    Ghanta, Sindhu; Jordan, Michael I; Kose, Kivanc; Brooks, Dana H; Rajadhyaksha, Milind; Dy, Jennifer G

    2017-01-01

    Segmenting objects of interest from 3D data sets is a common problem encountered in biological data. Small field of view and intrinsic biological variability, combined with optically subtle changes of intensity, resolution, and low contrast in images, make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, the shapes of objects in tissue can be highly variable, and designing a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance, and unknown locations. The driving application that inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear, and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease, and cancer usually start. Detecting the DEJ is challenging because it is a 2D surface in a 3D volume with a strong structure but a highly variable number of irregularly spaced and variably shaped "peaks and valleys." In addition, RCM imaging resolution, contrast, and intensity vary with depth. Thus, a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. 
    Our novel strategy to model this structure combines a spatial Poisson process with shape priors and performs inference using Gibbs sampling. Experimental results show that the proposed unsupervised model is able to automatically detect the DEJ with physiologically relevant accuracy in the range of 10-20 μm.
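    The generative ingredient named above, a marked spatial Poisson process, can be sketched in a few lines: the point count is Poisson in the region's area, locations are uniform, and each point carries a mark (here a hypothetical peak height). The intensity and mark distribution below are illustrative assumptions, not the paper's priors.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 0.02                    # intensity: expected points per unit area (assumed)
w, h = 50.0, 40.0             # region size (assumed)

n = rng.poisson(lam * w * h)                       # Poisson-distributed point count
xy = rng.uniform([0, 0], [w, h], size=(n, 2))      # uniform locations in the region
marks = rng.gamma(shape=4.0, scale=2.0, size=n)    # one mark (e.g. peak height) each

print(f"sampled {n} marked points; mean mark = {marks.mean():.2f}")
```

In the paper's framework such sampled points seed latent shapes ("peaks and valleys"), and Gibbs sampling inverts this generative story to infer the DEJ surface from image data.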

  19. A Marked Poisson Process Driven Latent Shape Model for 3D Segmentation of Reflectance Confocal Microscopy Image Stacks of Human Skin

    PubMed Central

    Ghanta, Sindhu; Jordan, Michael I.; Kose, Kivanc; Brooks, Dana H.; Rajadhyaksha, Milind; Dy, Jennifer G.

    2016-01-01

    Segmenting objects of interest from 3D datasets is a common problem encountered in biological data. Small field of view and intrinsic biological variability, combined with optically subtle changes of intensity, resolution and low contrast in images, make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, shapes of objects in tissue can be highly variable, and designing a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance and unknown locations. The driving application which inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease and cancer usually start. Detecting the DEJ is challenging because it is a 2D surface in a 3D volume with a strong structure but a highly variable number of irregularly spaced and variably shaped “peaks and valleys”. In addition, RCM imaging resolution, contrast and intensity vary with depth. Thus a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. 
    Our novel strategy to model this structure combines a spatial Poisson process with shape priors and performs inference using Gibbs sampling. Experimental results show that the proposed unsupervised model is able to automatically detect the DEJ with physiologically relevant accuracy in the range of 10-20 µm. PMID:27723590

  20. Quantifying Landscape Spatial Pattern: What Is the State of the Art?

    Treesearch

    Eric J. Gustafson

    1998-01-01

    Landscape ecology is based on the premise that there are strong links between ecological pattern and ecological function and process. Ecological systems are spatially heterogeneous, exhibiting considerable complexity and variability in time and space. This variability is typically represented by categorical maps or by a collection of samples taken at specific spatial...
